repo_name | path | copies | size | content | license
---|---|---|---|---|---|
shikhardb/scikit-learn | examples/covariance/plot_mahalanobis_distances.py | 348 | 6232 | r"""
================================================================
Robust covariance estimation and Mahalanobis distances relevance
================================================================
An example to show covariance estimation with the Mahalanobis
distances on Gaussian distributed data.
For Gaussian distributed data, the distance of an observation
:math:`x_i` to the mode of the distribution can be computed using its
Mahalanobis distance: :math:`d_{(\mu,\Sigma)}(x_i)^2 = (x_i -
\mu)'\Sigma^{-1}(x_i - \mu)` where :math:`\mu` and :math:`\Sigma` are
the location and the covariance of the underlying Gaussian
distribution.
In practice, :math:`\mu` and :math:`\Sigma` are replaced by some
estimates. The usual maximum likelihood estimate of the covariance is very
sensitive to the presence of outliers in the data set, and therefore so are
the corresponding Mahalanobis distances. It is better to use a robust
estimator of covariance to guarantee that the estimation is
resistant to "erroneous" observations in the data set and that the
associated Mahalanobis distances accurately reflect the true
organisation of the observations.
The Minimum Covariance Determinant estimator is a robust,
high-breakdown point (i.e. it can be used to estimate the covariance
matrix of highly contaminated datasets, up to
:math:`\frac{n_\text{samples}-n_\text{features}-1}{2}` outliers)
estimator of covariance. The idea is to find
:math:`\frac{n_\text{samples}+n_\text{features}+1}{2}`
observations whose empirical covariance has the smallest determinant,
yielding a "pure" subset of observations from which to compute
standard estimates of location and covariance. The Minimum Covariance
Determinant estimator (MCD) was introduced by P. J. Rousseeuw in [1].
This example illustrates how the Mahalanobis distances are affected by
outlying data: observations drawn from a contaminating distribution
are not distinguishable from the observations coming from the real,
Gaussian distribution that one may want to work with. Using MCD-based
Mahalanobis distances, the two populations become
distinguishable. Associated applications include outlier detection,
observation ranking, and clustering.
For visualization purposes, the cube root of the Mahalanobis distances
is shown in the boxplot, as suggested by Wilson and Hilferty [2].
[1] P. J. Rousseeuw. Least median of squares regression. Journal of the
American Statistical Association, 79:871, 1984.
[2] Wilson, E. B., & Hilferty, M. M. (1931). The distribution of chi-square.
Proceedings of the National Academy of Sciences of the United States
of America, 17, 684-688.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.covariance import EmpiricalCovariance, MinCovDet
n_samples = 125
n_outliers = 25
n_features = 2
# generate data
gen_cov = np.eye(n_features)
gen_cov[0, 0] = 2.
X = np.dot(np.random.randn(n_samples, n_features), gen_cov)
# add some outliers
outliers_cov = np.eye(n_features)
outliers_cov[np.arange(1, n_features), np.arange(1, n_features)] = 7.
X[-n_outliers:] = np.dot(np.random.randn(n_outliers, n_features), outliers_cov)
# fit a Minimum Covariance Determinant (MCD) robust estimator to data
robust_cov = MinCovDet().fit(X)
# compare estimators learnt from the full data set with true parameters
emp_cov = EmpiricalCovariance().fit(X)
###############################################################################
# Display results
fig = plt.figure()
plt.subplots_adjust(hspace=-.1, wspace=.4, top=.95, bottom=.05)
# Show data set
subfig1 = plt.subplot(3, 1, 1)
inlier_plot = subfig1.scatter(X[:, 0], X[:, 1],
color='black', label='inliers')
outlier_plot = subfig1.scatter(X[:, 0][-n_outliers:], X[:, 1][-n_outliers:],
color='red', label='outliers')
subfig1.set_xlim(subfig1.get_xlim()[0], 11.)
subfig1.set_title("Mahalanobis distances of a contaminated data set:")
# Show contours of the distance functions
xx, yy = np.meshgrid(np.linspace(plt.xlim()[0], plt.xlim()[1], 100),
np.linspace(plt.ylim()[0], plt.ylim()[1], 100))
zz = np.c_[xx.ravel(), yy.ravel()]
mahal_emp_cov = emp_cov.mahalanobis(zz)
mahal_emp_cov = mahal_emp_cov.reshape(xx.shape)
emp_cov_contour = subfig1.contour(xx, yy, np.sqrt(mahal_emp_cov),
cmap=plt.cm.PuBu_r,
linestyles='dashed')
mahal_robust_cov = robust_cov.mahalanobis(zz)
mahal_robust_cov = mahal_robust_cov.reshape(xx.shape)
robust_contour = subfig1.contour(xx, yy, np.sqrt(mahal_robust_cov),
cmap=plt.cm.YlOrBr_r, linestyles='dotted')
subfig1.legend([emp_cov_contour.collections[1], robust_contour.collections[1],
inlier_plot, outlier_plot],
['MLE dist', 'robust dist', 'inliers', 'outliers'],
loc="upper right", borderaxespad=0)
plt.xticks(())
plt.yticks(())
# Plot the scores for each point
emp_mahal = emp_cov.mahalanobis(X - np.mean(X, 0)) ** (0.33)
subfig2 = plt.subplot(2, 2, 3)
subfig2.boxplot([emp_mahal[:-n_outliers], emp_mahal[-n_outliers:]], widths=.25)
subfig2.plot(1.26 * np.ones(n_samples - n_outliers),
emp_mahal[:-n_outliers], '+k', markeredgewidth=1)
subfig2.plot(2.26 * np.ones(n_outliers),
emp_mahal[-n_outliers:], '+k', markeredgewidth=1)
subfig2.axes.set_xticklabels(('inliers', 'outliers'), size=15)
subfig2.set_ylabel(r"$\sqrt[3]{\rm{(Mahal. dist.)}}$", size=16)
subfig2.set_title("1. from non-robust estimates\n(Maximum Likelihood)")
plt.yticks(())
robust_mahal = robust_cov.mahalanobis(X - robust_cov.location_) ** (0.33)
subfig3 = plt.subplot(2, 2, 4)
subfig3.boxplot([robust_mahal[:-n_outliers], robust_mahal[-n_outliers:]],
widths=.25)
subfig3.plot(1.26 * np.ones(n_samples - n_outliers),
robust_mahal[:-n_outliers], '+k', markeredgewidth=1)
subfig3.plot(2.26 * np.ones(n_outliers),
robust_mahal[-n_outliers:], '+k', markeredgewidth=1)
subfig3.axes.set_xticklabels(('inliers', 'outliers'), size=15)
subfig3.set_ylabel(r"$\sqrt[3]{\rm{(Mahal. dist.)}}$", size=16)
subfig3.set_title("2. from robust estimates\n(Minimum Covariance Determinant)")
plt.yticks(())
plt.show()
| bsd-3-clause |
kkozarev/mwacme | synchrotron_fitting/GS_kappa_function.py | 1 | 2634 | import Get_MW
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import numpy as np
N=10 #number of frequencies
#These values are starting positions for coronal CME radio observations
ParmIn=29*[0] # input array
ParmIn[0] =8e19 # Area, cm^2
ParmIn[1] =5e9 # Depth, cm
ParmIn[2] =3e6 # T_0, K
ParmIn[3] =0.05 # \eps (not used in this example)
ParmIn[4] =6.0 # \kappa (not used in this example)
ParmIn[5] =16 # number of integration nodes
ParmIn[6] =0.1 # E_min, MeV
ParmIn[7] =10.0 # E_max, MeV
ParmIn[8] =1.0 # E_break, MeV (not used in this example)
ParmIn[9] =4.0 # \delta_1
ParmIn[10]=6.0 # \delta_2 (not used in this example)
ParmIn[11]=1e8 # n_0 - thermal electron density, cm^{-3}
ParmIn[12]=1e6 # n_b - nonthermal electron density, cm^{-3}
ParmIn[13]=5.0 # B - magnetic field, G
ParmIn[14]=60.0 # theta - the viewing angle, degrees
ParmIn[15]=8.e7 # starting frequency to calculate spectrum, Hz
ParmIn[16]=0.005 # logarithmic step in frequency
ParmIn[17]=6 # Index of distribution over energy (KAP is chosen)
ParmIn[18]=N # Number of frequencies (specified above)
ParmIn[19]=3 # Index of distribution over pitch-angle (GLC is chosen)
ParmIn[20]=90.0 # loss-cone boundary, degrees
ParmIn[21]=0.0 # beam direction (degrees) in GAU and SGA (not used in this example)
ParmIn[22]=0.2 # \Delta\mu
ParmIn[23]=0.0 # a_4 in SGA (not used in this example)
ParmIn[25]=12.0 # f^C_cr
ParmIn[26]=12.0 # f^WH_cr
ParmIn[27]=1 # matching on
ParmIn[28]=1 # Q-optimization on
def init_frequency_grid(startfreq,endfreq,numfreq=N):
Params = ParmIn
Params[16]=np.log10(endfreq/startfreq)/numfreq
Params[15]=startfreq*1.e6
Params[18]=numfreq
s=Get_MW.GET_MW(Params) # calling the main function
f=s[0] # emission frequency (GHz)
fmhz=[i*1000. for i in f]
return fmhz
def gs_kappa_func(freqgrid, temp=ParmIn[2],dens=ParmIn[11],kappa=ParmIn[4],emax=ParmIn[7],numfreq=N):
Params = ParmIn
Params[2]=temp
Params[4]=kappa
Params[7]=emax
Params[11]=dens
Params[15]=freqgrid[0]/1.e6
Params[17]=6
if not numfreq:
numfreq=len(freqgrid)
Params[16]=np.log10(freqgrid[-1]/freqgrid[0])/numfreq
Params[18]=numfreq
s=Get_MW.GET_MW(Params) # calling the main function (note: Params aliases ParmIn, it is not a copy)
I_O=s[1] # observed (at the Earth) intensity, O-mode (sfu)
k_O=s[2] # exp(-tau), O-mode
#I_X=s[3] # observed (at the Earth) intensity, X-mode (sfu)
#k_X=s[4] # exp(-tau), X-mode
return I_O
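# A minimal usage sketch of the two helpers above. The frequency range and
# plasma parameters are illustrative values taken from the ParmIn defaults;
# running it requires the compiled Get_MW module to be importable.
def _gs_kappa_usage_sketch():
    freqs_mhz = init_frequency_grid(startfreq=80.0, endfreq=300.0, numfreq=N)
    intensity_o = gs_kappa_func(freqs_mhz, temp=3e6, dens=1e8,
                                kappa=6.0, emax=10.0, numfreq=N)
    plt.loglog(freqs_mhz, intensity_o)
    plt.xlabel('Frequency (MHz)')
    plt.ylabel('O-mode intensity (sfu)')
    plt.savefig('gs_kappa_spectrum.png')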
| gpl-2.0 |
YihaoLu/statsmodels | statsmodels/tools/grouputils.py | 25 | 22518 | # -*- coding: utf-8 -*-
"""Tools for working with groups
This provides several functions to work with groups and a Group class that
keeps track of the different representations and has methods to work more
easily with groups.
Author: Josef Perktold,
Author: Nathaniel Smith, recipe for sparse_dummies on scipy user mailing list
Created on Tue Nov 29 15:44:53 2011 : sparse_dummies
Created on Wed Nov 30 14:28:24 2011 : combine_indices
changes: add Group class
Notes
~~~~~
This reverses the class I used before, where the class was for the data and
the group was auxiliary. Here, it is only the group, no data is kept.
sparse_dummies needs checking for corner cases, e.g.
what if a category level has zero elements? This can happen with subset
selection even if the original groups were defined as arange.
Not all methods and options have been tried out yet after refactoring
need more efficient loop if groups are sorted -> see GroupSorted.group_iter
"""
from __future__ import print_function
from statsmodels.compat.python import lrange, lzip, range
import numpy as np
import pandas as pd
from statsmodels.compat.numpy import npc_unique
import statsmodels.tools.data as data_util
from pandas.core.index import Index, MultiIndex
def combine_indices(groups, prefix='', sep='.', return_labels=False):
"""use np.unique to get integer group indices for product, intersection
"""
if isinstance(groups, tuple):
groups = np.column_stack(groups)
else:
groups = np.asarray(groups)
dt = groups.dtype
is2d = (groups.ndim == 2) # need to store
if is2d:
ncols = groups.shape[1]
if not groups.flags.c_contiguous:
groups = np.array(groups, order='C')
groups_ = groups.view([('', groups.dtype)] * groups.shape[1])
else:
groups_ = groups
uni, uni_idx, uni_inv = npc_unique(groups_, return_index=True,
return_inverse=True)
if is2d:
uni = uni.view(dt).reshape(-1, ncols)
# avoiding a view would be
# for t in uni.dtype.fields.values():
# assert (t[0] == dt)
#
# uni.dtype = dt
# uni.shape = (uni.size//ncols, ncols)
if return_labels:
label = [(prefix+sep.join(['%s']*len(uni[0]))) % tuple(ii)
for ii in uni]
return uni_inv, uni_idx, uni, label
else:
return uni_inv, uni_idx, uni
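# Illustrative behaviour on a small 1d group array (a sketch; see the
# __main__ block at the bottom of this module for 2d examples):
#
# >>> gi, first_idx, levels = combine_indices(np.array([3, 3, 5, 3, 5]))
# >>> levels      # unique group values, in sorted order
# array([3, 5])
# >>> gi          # integer code of each observation's group
# array([0, 0, 1, 0, 1])
# >>> first_idx   # index of the first occurrence of each level
# array([0, 2])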
# written for and used in try_covariance_grouploop.py
def group_sums(x, group, use_bincount=True):
"""simple bincount version, again
group : array, integer
assumed to be consecutive integers
no dtype checking because I want to raise in that case
uses loop over columns of x
for comparison, simple python loop
"""
x = np.asarray(x)
if x.ndim == 1:
x = x[:, None]
elif x.ndim > 2 and use_bincount:
raise ValueError('not implemented yet')
if use_bincount:
# re-label groups or bincount takes too much memory
if np.max(group) > 2 * x.shape[0]:
group = pd.factorize(group)[0]
return np.array([np.bincount(group, weights=x[:, col])
for col in range(x.shape[1])])
else:
uniques = np.unique(group)
result = np.zeros([len(uniques)] + list(x.shape[1:]))
for ii, cat in enumerate(uniques):
result[ii] = x[group == cat].sum(0)
return result
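# The weighted-bincount trick used above, in isolation (a sketch): the sum of
# a column within each group is a single np.bincount call.
#
# >>> g = np.array([0, 0, 1, 2, 1])
# >>> col = np.array([1., 2., 3., 4., 5.])
# >>> np.bincount(g, weights=col)   # per-group sums: 1+2, 3+5, 4
# array([ 3.,  8.,  4.])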
def group_sums_dummy(x, group_dummy):
"""sum by groups given group dummy variable
group_dummy can be either ndarray or sparse matrix
"""
if data_util._is_using_ndarray_type(group_dummy, None):
return np.dot(x.T, group_dummy)
else: # check for sparse
return x.T * group_dummy
def dummy_sparse(groups):
"""create a sparse indicator from a group array with integer labels
Parameters
----------
groups: ndarray, int, 1d (nobs,)
an array of group indicators for each observation. Group levels are
assumed to be defined as consecutive integers, i.e. range(n_groups)
where n_groups is the number of group levels. A group level with no
observations for it will still produce a column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
"""
from scipy import sparse
indptr = np.arange(len(groups)+1)
data = np.ones(len(groups), dtype=np.int8)
indi = sparse.csr_matrix((data, groups, indptr))
return indi
class Group(object):
def __init__(self, group, name=''):
# self.group = np.asarray(group) # TODO: use checks in combine_indices
self.name = name
uni, uni_idx, uni_inv = combine_indices(group)
# TODO: rename these to something easier to remember
self.group_int, self.uni_idx, self.uni = uni, uni_idx, uni_inv
self.n_groups = len(self.uni)
# put this here so they can be overwritten before calling labels
self.separator = '.'
self.prefix = self.name
if self.prefix:
self.prefix = self.prefix + '='
# cache decorator
def counts(self):
return np.bincount(self.group_int)
# cache_decorator
def labels(self):
# is this only needed for product of groups (intersection)?
prefix = self.prefix
uni = self.uni
sep = self.separator
if uni.ndim > 1:
label = [(prefix+sep.join(['%s']*len(uni[0]))) % tuple(ii)
for ii in uni]
else:
label = [prefix + '%s' % ii for ii in uni]
return label
def dummy(self, drop_idx=None, sparse=False, dtype=int):
"""
drop_idx is only available if sparse=False
drop_idx is supposed to index into uni
"""
uni = self.uni
if drop_idx is not None:
idx = lrange(len(uni))
del idx[drop_idx]
uni = uni[idx]
group = self.group
if not sparse:
return (group[:, None] == uni[None, :]).astype(dtype)
else:
return dummy_sparse(self.group_int)
def interaction(self, other):
if isinstance(other, self.__class__):
other = other.group
return self.__class__((self, other))
def group_sums(self, x, use_bincount=True):
return group_sums(x, self.group_int, use_bincount=use_bincount)
def group_demean(self, x, use_bincount=True):
nobs = float(len(x))
means_g = group_sums(x / nobs, self.group_int,
use_bincount=use_bincount)
x_demeaned = x - means_g[self.group_int] # check reverse_index?
return x_demeaned, means_g
class GroupSorted(Group):
def __init__(self, group, name=''):
super(self.__class__, self).__init__(group, name=name)
idx = (np.nonzero(np.diff(group))[0]+1).tolist()
self.groupidx = lzip([0] + idx, idx + [len(group)])
def group_iter(self):
for low, upp in self.groupidx:
yield slice(low, upp)
def lag_indices(self, lag):
"""return the index array for lagged values
Warning: if k is larger then the number of observations for an
individual, then no values for that individual are returned.
TODO: for the unbalanced case, I should get the same truncation for
the array with lag=0. From the return of lag_idx we wouldn't know
which individual is missing.
TODO: do I want the full equivalent of lagmat in tsa?
maxlag or lag or lags.
not tested yet
"""
lag_idx = np.asarray(self.groupidx)[:, 1] - lag # asarray or already?
mask_ok = (lag <= lag_idx)
# still an observation that belongs to the same individual
return lag_idx[mask_ok]
def _is_hierarchical(x):
"""
Checks if the first item of an array-like object is also array-like
If so, we have a MultiIndex and returns True. Else returns False.
"""
item = x[0]
# is there a better way to do this?
if isinstance(item, (list, tuple, np.ndarray, pd.Series, pd.DataFrame)):
return True
else:
return False
def _make_hierarchical_index(index, names):
return MultiIndex.from_tuples(*[index], names=names)
def _make_generic_names(index):
n_names = len(index.names)
pad = str(len(str(n_names))) # number of digits
return [("group{0:0"+pad+"}").format(i) for i in range(n_names)]
class Grouping(object):
def __init__(self, index, names=None):
"""
index : index-like
Can be pandas MultiIndex or Index or array-like. If array-like
and is a MultipleIndex (more than one grouping variable),
groups are expected to be in each row. E.g., [('red', 1),
('red', 2), ('green', 1), ('green', 2)]
names : list or str, optional
The names to use for the groups. Should be a str if only
one grouping variable is used.
Notes
-----
If index is already a pandas Index then there is no copy.
"""
if isinstance(index, (Index, MultiIndex)):
if names is not None:
if hasattr(index, 'set_names'): # newer pandas
index.set_names(names, inplace=True)
else:
index.names = names
self.index = index
else: # array-like
if _is_hierarchical(index):
self.index = _make_hierarchical_index(index, names)
else:
self.index = Index(index, name=names)
if names is None:
names = _make_generic_names(self.index)
if hasattr(self.index, 'set_names'):
self.index.set_names(names, inplace=True)
else:
self.index.names = names
self.nobs = len(self.index)
self.nlevels = len(self.index.names)
self.slices = None
@property
def index_shape(self):
if hasattr(self.index, 'levshape'):
return self.index.levshape
else:
return self.index.shape
@property
def levels(self):
if hasattr(self.index, 'levels'):
return self.index.levels
else:
return pd.Categorical(self.index).levels
@property
def labels(self):
# this was index_int, but that's not a very good name...
if hasattr(self.index, 'labels'):
return self.index.labels
else: # pandas version issue here
# Compat code for the labels -> codes change in pandas 0.15
# FIXME: use .codes directly when we don't want to support
# pandas < 0.15
tmp = pd.Categorical(self.index)
try:
labl = tmp.codes
except AttributeError:
labl = tmp.labels # Old pandas
return labl[None]
@property
def group_names(self):
return self.index.names
def reindex(self, index=None, names=None):
"""
Resets the index in-place.
"""
# NOTE: this isn't of much use if the rest of the data doesn't change
# This needs to reset cache
if names is None:
names = self.group_names
self = Grouping(index, names)
def get_slices(self, level=0):
"""
Sets the slices attribute to be a list of indices of the sorted
groups for the first index level. I.e., self.slices[0] is the
index where each observation is in the first (sorted) group.
"""
# TODO: refactor this
groups = self.index.get_level_values(level).unique()
groups.sort()
if isinstance(self.index, MultiIndex):
self.slices = [self.index.get_loc_level(x, level=level)[0]
for x in groups]
else:
self.slices = [self.index.get_loc(x) for x in groups]
def count_categories(self, level=0):
"""
Sets the attribute counts to equal the bincount of the (integer-valued)
labels.
"""
# TODO: refactor this not to set an attribute. Why would we do this?
self.counts = np.bincount(self.labels[level])
def check_index(self, is_sorted=True, unique=True, index=None):
"""Sanity checks"""
if not index:
index = self.index
if is_sorted:
test = pd.DataFrame(lrange(len(index)), index=index)
test_sorted = test.sort()
if not test.index.equals(test_sorted.index):
raise Exception('Data is not sorted')
if unique:
if len(index) != len(index.unique()):
raise Exception('Duplicate index entries')
def sort(self, data, index=None):
"""Applies a (potentially hierarchical) sort operation on a numpy array
or pandas series/dataframe based on the grouping index or a
user-supplied index. Returns an object of the same type as the
original data as well as the matching (sorted) Pandas index.
"""
if index is None:
index = self.index
if data_util._is_using_ndarray_type(data, None):
if data.ndim == 1:
out = pd.Series(data, index=index, copy=True)
out = out.sort_index()
else:
out = pd.DataFrame(data, index=index)
out = out.sort(inplace=False) # copies
return np.array(out), out.index
elif data_util._is_using_pandas(data, None):
out = data
out = out.reindex(index) # copies?
out = out.sort_index()
return out, out.index
else:
msg = 'data must be a Numpy array or a Pandas Series/DataFrame'
raise ValueError(msg)
def transform_dataframe(self, dataframe, function, level=0, **kwargs):
"""Apply function to each column, by group
Assumes that the dataframe already has a proper index"""
if dataframe.shape[0] != self.nobs:
raise Exception('dataframe does not have the same shape as index')
out = dataframe.groupby(level=level).apply(function, **kwargs)
if 1 in out.shape:
return np.ravel(out)
else:
return np.array(out)
def transform_array(self, array, function, level=0, **kwargs):
"""Apply function to each column, by group
"""
if array.shape[0] != self.nobs:
raise Exception('array does not have the same shape as index')
dataframe = pd.DataFrame(array, index=self.index)
return self.transform_dataframe(dataframe, function, level=level,
**kwargs)
def transform_slices(self, array, function, level=0, **kwargs):
"""Apply function to each group. Similar to transform_array but does
not coerce array to a DataFrame and back and only works on a 1D or 2D
numpy array. function is called function(group, group_idx, **kwargs).
"""
array = np.asarray(array)
if array.shape[0] != self.nobs:
raise Exception('array does not have the same shape as index')
# always reset because level is given. need to refactor this.
self.get_slices(level=level)
processed = []
for s in self.slices:
if array.ndim == 2:
subset = array[s, :]
elif array.ndim == 1:
subset = array[s]
processed.append(function(subset, s, **kwargs))
processed = np.array(processed)
return processed.reshape(-1, processed.shape[-1])
# TODO: this isn't general needs to be a PanelGrouping object
def dummies_time(self):
self.dummy_sparse(level=1)
return self._dummies
def dummies_groups(self, level=0):
self.dummy_sparse(level=level)
return self._dummies
def dummy_sparse(self, level=0):
"""create a sparse indicator from a group array with integer labels
Parameters
----------
groups: ndarray, int, 1d (nobs,) an array of group indicators for each
observation. Group levels are assumed to be defined as consecutive
integers, i.e. range(n_groups) where n_groups is the number of
group levels. A group level with no observations for it will still
produce a column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
"""
from scipy import sparse
groups = self.labels[level]
indptr = np.arange(len(groups)+1)
data = np.ones(len(groups), dtype=np.int8)
self._dummies = sparse.csr_matrix((data, groups, indptr))
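# A minimal usage sketch of the Grouping class above, using the tuple index
# from the __init__ docstring; whether it runs as-is depends on the (older)
# pandas versions this module targets.
def _grouping_usage_sketch():
    idx = [('red', 1), ('red', 2), ('green', 1), ('green', 2)]
    grp = Grouping(idx, names=['color', 'trial'])
    print(grp.group_names, grp.nobs, grp.nlevels)
    grp.get_slices(level=0)
    print(grp.slices)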
if __name__ == '__main__':
# ---------- examples combine_indices
from numpy.testing import assert_equal
np.random.seed(985367)
groups = np.random.randint(0, 2, size=(10, 2))
uv, ux, u, label = combine_indices(groups, return_labels=True)
uv, ux, u, label = combine_indices(groups, prefix='g1,g2=', sep=',',
return_labels=True)
group0 = np.array(['sector0', 'sector1'])[groups[:, 0]]
group1 = np.array(['region0', 'region1'])[groups[:, 1]]
uv, ux, u, label = combine_indices((group0, group1),
prefix='sector,region=',
sep=',',
return_labels=True)
uv, ux, u, label = combine_indices((group0, group1), prefix='', sep='.',
return_labels=True)
group_joint = np.array(label)[uv]
group_joint_expected = np.array(['sector1.region0', 'sector0.region1',
'sector0.region0', 'sector0.region1',
'sector1.region1', 'sector0.region0',
'sector1.region0', 'sector1.region0',
'sector0.region1', 'sector0.region0'],
dtype='|S15')
assert_equal(group_joint, group_joint_expected)
"""
>>> uv
array([2, 1, 0, 0, 1, 0, 2, 0, 1, 0])
>>> label
['sector0.region0', 'sector1.region0', 'sector1.region1']
>>> np.array(label)[uv]
array(['sector1.region1', 'sector1.region0', 'sector0.region0',
'sector0.region0', 'sector1.region0', 'sector0.region0',
'sector1.region1', 'sector0.region0', 'sector1.region0',
'sector0.region0'],
dtype='|S15')
>>> np.column_stack((group0, group1))
array([['sector1', 'region1'],
['sector1', 'region0'],
['sector0', 'region0'],
['sector0', 'region0'],
['sector1', 'region0'],
['sector0', 'region0'],
['sector1', 'region1'],
['sector0', 'region0'],
['sector1', 'region0'],
['sector0', 'region0']],
dtype='|S7')
"""
# ------------- examples sparse_dummies
from scipy import sparse
g = np.array([0, 0, 1, 2, 1, 1, 2, 0])
u = lrange(3)
indptr = np.arange(len(g)+1)
data = np.ones(len(g), dtype=np.int8)
a = sparse.csr_matrix((data, g, indptr))
print(a.todense())
print(np.all(a.todense() == (g[:, None] == np.arange(3)).astype(int)))
x = np.arange(len(g)*3).reshape(len(g), 3, order='F')
print('group means')
print(x.T * a)
print(np.dot(x.T, g[:, None] == np.arange(3)))
print(np.array([np.bincount(g, weights=x[:, col]) for col in range(3)]))
for cat in u:
print(x[g == cat].sum(0))
for cat in u:
x[g == cat].sum(0)
cc = sparse.csr_matrix([[0, 1, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0],
[1, 0, 0, 0, 1, 0, 1, 0, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 0, 1, 0]])
# ------------- groupsums
print(group_sums(np.arange(len(g)*3*2).reshape(len(g), 3, 2), g,
use_bincount=False).T)
print(group_sums(np.arange(len(g)*3*2).reshape(len(g), 3, 2)[:, :, 0], g))
print(group_sums(np.arange(len(g)*3*2).reshape(len(g), 3, 2)[:, :, 1], g))
# ------------- examples class
x = np.arange(len(g)*3).reshape(len(g), 3, order='F')
mygroup = Group(g)
print(mygroup.group_int)
print(mygroup.group_sums(x))
print(mygroup.labels())
| bsd-3-clause |
PrashntS/scikit-learn | examples/calibration/plot_compare_calibration.py | 241 | 5008 | """
========================================
Comparison of Calibration of Classifiers
========================================
Well calibrated classifiers are probabilistic classifiers for which the output
of the predict_proba method can be directly interpreted as a confidence level.
For instance a well calibrated (binary) classifier should classify the samples
such that among the samples to which it gave a predict_proba value close to
0.8, approx. 80% actually belong to the positive class.
LogisticRegression returns well calibrated predictions as it directly
optimizes log-loss. In contrast, the other methods return biased probabilities,
with different biases per method:
* GaussianNaiveBayes tends to push probabilities to 0 or 1 (note the counts in
the histograms). This is mainly because it makes the assumption that features
are conditionally independent given the class, which is not the case in this
dataset which contains 2 redundant features.
* RandomForestClassifier shows the opposite behavior: the histograms show
peaks at approx. 0.2 and 0.9 probability, while probabilities close to 0 or 1
are very rare. An explanation for this is given by Niculescu-Mizil and Caruana
[1]: "Methods such as bagging and random forests that average predictions from
a base set of models can have difficulty making predictions near 0 and 1
because variance in the underlying base models will bias predictions that
should be near zero or one away from these values. Because predictions are
restricted to the interval [0,1], errors caused by variance tend to be one-
sided near zero and one. For example, if a model should predict p = 0 for a
case, the only way bagging can achieve this is if all bagged trees predict
zero. If we add noise to the trees that bagging is averaging over, this noise
will cause some trees to predict values larger than 0 for this case, thus
moving the average prediction of the bagged ensemble away from 0. We observe
this effect most strongly with random forests because the base-level trees
trained with random forests have relatively high variance due to feature
subseting." As a result, the calibration curve shows a characteristic sigmoid
shape, indicating that the classifier could trust its "intuition" more and
return probabilties closer to 0 or 1 typically.
* Support Vector Classification (SVC) shows an even more sigmoid curve as
the RandomForestClassifier, which is typical for maximum-margin methods
(compare Niculescu-Mizil and Caruana [1]), which focus on hard samples
that are close to the decision boundary (the support vectors).
.. topic:: References:
.. [1] Predicting Good Probabilities with Supervised Learning,
A. Niculescu-Mizil & R. Caruana, ICML 2005
"""
print(__doc__)
# Author: Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# License: BSD Style.
import numpy as np
np.random.seed(0)
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve
X, y = datasets.make_classification(n_samples=100000, n_features=20,
n_informative=2, n_redundant=2)
train_samples = 100 # Samples used for training the models
X_train = X[:train_samples]
X_test = X[train_samples:]
y_train = y[:train_samples]
y_test = y[train_samples:]
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = LinearSVC(C=1.0)
rfc = RandomForestClassifier(n_estimators=100)
###############################################################################
# Plot calibration plots
plt.figure(figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(svc, 'Support Vector Classification'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
plt.show()
| bsd-3-clause |
openconnectome/m2g | MR-OCP/MROCPdjango/computation/plotting/HBMPlot.py | 2 | 14895 | #!/usr/bin/env python
# Copyright 2014 Open Connectome Project (http://openconnecto.me)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Disa Mhembere, Johns Hopkins University
# Separated: 10/2/2012
# Plot all .np arrays in a common dir on the same axis & save
# 1 indexed
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import pylab as pl
import numpy as np
import os
import sys
from glob import glob
import argparse
import scipy
from scipy import interpolate
import inspect
import csv
# Issues: Done nothing with MAD
def lineno():
'''
Get current line number
'''
return str(inspect.getframeinfo(inspect.currentframe())[1])
def csvtodict(fn ='/home/disa/code/mrn_covariates_n120-v4.csv', char = 'class'):
if char == 'class':
col = 4
elif char == 'gender':
col = 2
reader = csv.reader(open(fn, 'rb'))
outdict = dict()
for row in reader:
outdict[row[0].strip()] = row[col].strip()
#print row[0] ,'TYPE' ,outdict[row[0]]
#import pdb; pdb.set_trace()
return outdict
def pickprintcolor(charDict, arrfn):
'''
charDict: dict
'''
if (charDict[(arrfn.split('/')[-1]).split('_')[0]] == '0'):
plot_color = 'grey'
elif (charDict[(arrfn.split('/')[-1]).split('_')[0]] == '1'):
plot_color = 'blue'
elif (charDict[(arrfn.split('/')[-1]).split('_')[0]] == '2'):
plot_color = 'green'
else:
print "[ERROR]: %s, no match on subject type" % lineno()
return plot_color
def plotInvDist(invDir, pngName, numBins =100):
subj_types = csvtodict(char = 'class') # load up subject types
# ClustCoeff Degree Eigen MAD numEdges.npy ScanStat Triangle
MADdir = "MAD"
ccDir = "ClustCoeff"
DegDir = "Degree"
EigDir = "Eigen/values"
SS1dir = "ScanStat1"
triDir = "Triangle"
invDirs = [triDir, ccDir, SS1dir, DegDir ]
if not os.path.exists(invDir):
print "%s does not exist" % invDir
sys.exit(1)
pl.figure(2)
fig_gl, axes = pl.subplots(nrows=3, ncols=2)
for idx, drcty in enumerate (invDirs):
for arrfn in glob(os.path.join(invDir, drcty,'*.npy')):
try:
arr = np.load(arrfn)
arr = np.log(arr[arr.nonzero()])
print "Processing %s..." % arrfn
except:
print "[ERROR]: Line %s: Invariant file not found %s" % (lineno(),arrfn)
pl.figure(1)
n, bins, patches = pl.hist(arr, bins=numBins , range=None, normed=False, weights=None, cumulative=False, \
bottom=None, histtype='stepfilled', align='mid', orientation='vertical', \
rwidth=None, log=False, color=None, label=None, hold=None)
n = np.append(n,0)
n = n/float(sum(n))
fig = pl.figure(2)
fig.subplots_adjust(hspace=.5)
ax = pl.subplot(3,2,idx+1)
#if idx == 0:
# plt.axis([0, 35, 0, 0.04])
# ax.set_yticks(scipy.arange(0,0.04,0.01))
#if idx == 1 or idx == 2:
# ax.set_yticks(scipy.arange(0,0.03,0.01))
#if idx == 3:
# ax.set_yticks(scipy.arange(0,0.04,0.01))
# Interpolation
f = interpolate.interp1d(bins, n, kind='cubic')
x = np.arange(bins[0],bins[-1],0.03) # vary linspc
interp = f(x)
ltz = interp < 0
interp[ltz] = 0
plot_color = pickprintcolor(subj_types, arrfn)
#pl.plot(x, interp, color = plot_color, linewidth=1)
pl.plot(interp, color = plot_color, linewidth=1)
if idx == 0:
pl.ylabel('Probability')
pl.xlabel('Log Number of Local Triangles')
if idx == 1:
#pl.ylabel('Probability') #**
pl.xlabel('Log Local Clustering Coefficient')
if idx == 2:
pl.ylabel('Probability')
pl.xlabel('Log Scan Statistic 1')
if idx == 3:
#pl.ylabel('Probability') #**
pl.xlabel('Log Degree')
''' Eigenvalues '''
ax = pl.subplot(3,2,5)
ax.set_yticks(scipy.arange(0,16,4))
for eigValInstance in glob(os.path.join(invDir, EigDir,"*.npy")):
try:
eigv = np.load(eigValInstance)
except:
print "Eigenvalue array"
n = len(eigv)
sa = (np.sort(eigv)[::-1])
plot_color = pickprintcolor(subj_types, eigValInstance)
pl.plot(range(1,n+1), sa/10000, color=plot_color)
pl.ylabel('Magnitude ($X 10^4$) ')
pl.xlabel('Eigenvalue Rank')
''' Edges '''
arrfn = os.path.join(invDir, 'Globals/numEdges.npy')
try:
arr = np.load(arrfn)
arr = np.log(arr[arr.nonzero()])
print "Processing %s..." % arrfn
except:
print "[ERROR]: Line %s: Invariant file not found %s" % (lineno(),arrfn)
pl.figure(1)
n, bins, patches = pl.hist(arr, bins=10 , range=None, normed=False, weights=None, cumulative=False, \
bottom=None, histtype='stepfilled', align='mid', orientation='vertical', \
rwidth=None, log=False, color=None, label=None, hold=None)
n = np.append(n,0)
fig = pl.figure(2)
ax = pl.subplot(3,2,6)
ax.set_xticks(scipy.arange(17.2,18.1,0.2))
f = interpolate.interp1d(bins, n, kind='cubic')
x = np.arange(bins[0],bins[-1],0.01) # vary linspc
interp = f(x)
ltz = interp < 0
interp[ltz] = 0
pl.plot(x, interp,color ='grey' ,linewidth=1)
pl.ylabel('Frequency')
pl.xlabel('Log Global Edge Number')
pl.savefig(pngName+'.pdf')
#################################################
##################################################
##################################################
def plotstdmean(invDir, pngName, numBins =100):
subj_types = csvtodict() # load up subject types
# ClustCoeff Degree Eigen MAD numEdges.npy ScanStat Triangle
MADdir = "MAD"
ccDir = "ClustCoeff"
DegDir = "Degree"
EigDir = "Eigen"
SS1dir = "ScanStat1"
triDir = "Triangle"
invDirs = [triDir, ccDir, SS1dir, DegDir ]
if not os.path.exists(invDir):
print "%s does not exist" % invDir
sys.exit(1)
pl.figure(2)
fig_gl, axes = pl.subplots(nrows=3, ncols=2)
fig_gl.tight_layout()
for idx, drcty in enumerate (invDirs):
mean_arr = []
stddev_arr = []
ones_mean = []
twos_mean = []
zeros_mean = []
ones_std = []
twos_std = []
zeros_std = []
for arrfn in glob(os.path.join(invDir, drcty,'*.npy')):
try:
arr = np.load(arrfn)
arr = np.log(arr[arr.nonzero()])
print "Processing %s..." % arrfn
except:
print "[ERROR]: Line %s: Invariant file not found %s" % (lineno(),arrfn)
pl.figure(1)
n, bins, patches = pl.hist(arr, bins=numBins , range=None, normed=False, weights=None, cumulative=False, \
bottom=None, histtype='stepfilled', align='mid', orientation='vertical', \
rwidth=None, log=False, color=None, label=None, hold=None)
n = np.append(n,0)
n = n/float(sum(n))
fig = pl.figure(2)
fig.subplots_adjust(hspace=.5)
nrows=5
ncols=4
ax = pl.subplot(nrows,ncols,idx+1)
if idx == 0:
plt.axis([0, 35, 0, 0.04])
ax.set_yticks(scipy.arange(0,0.04,0.01))
if idx == 1 or idx == 2:
ax.set_yticks(scipy.arange(0,0.03,0.01))
if idx == 3:
ax.set_yticks(scipy.arange(0,0.04,0.01))
# Interpolation
f = interpolate.interp1d(bins, n, kind='cubic')
x = np.arange(bins[0],bins[-1],0.03) # vary linspc
interp = f(x)
ltz = interp < 0
interp[ltz] = 0
import pdb; pdb.set_trace()
'''
pl.plot(x, interp, color = plot_color, linewidth=1)
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '0'):
zeros_mean.append(arr.mean())
zeros_std.append(arr.std())
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '1'):
ones_mean.append(arr.mean())
ones_std.append(arr.std())
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '2'):
twos_mean.append(arr.mean())
twos_std.append(arr.std())
'''
plot_color = pickprintcolor(subj_types, arrfn)
if idx == 0:
pl.ylabel('Probability')
pl.xlabel('Log Number of Local Triangles')
if idx == 1:
#pl.ylabel('Probability') #**
pl.xlabel('Log Local Clustering Coefficient')
if idx == 2:
pl.ylabel('Probability')
pl.xlabel('Log Scan Statistic 1')
if idx == 3:
#pl.ylabel('Probability') #**
pl.xlabel('Log Degree')
''' Eigenvalues '''
ax = pl.subplot(3,2,5)
ax.set_yticks(scipy.arange(0,16,4))
for eigValInstance in glob(os.path.join(invDir, EigDir,"*.npy")):
try:
eigv = np.load(eigValInstance)
except:
print "Eigenvalue array"
n = len(eigv)
sa = (np.sort(eigv)[::-1])
plot_color = pickprintcolor(subj_types, eigValInstance)
pl.plot(range(1,n+1), sa/10000, color=plot_color)
pl.ylabel('Magnitude ($X 10^4$) ')
pl.xlabel('eigenvalue rank')
''' Edges '''
arrfn = os.path.join(invDir, 'Globals/numEdges.npy')
try:
arr = np.load(arrfn)
arr = np.log(arr[arr.nonzero()])
print "Processing %s..." % arrfn
except:
print "[ERROR]: Line %s: Invariant file not found %s" % (lineno(),arrfn)
pl.figure(1)
n, bins, patches = pl.hist(arr, bins=10 , range=None, normed=False, weights=None, cumulative=False, \
bottom=None, histtype='stepfilled', align='mid', orientation='vertical', \
rwidth=None, log=False, color=None, label=None, hold=None)
n = np.append(n,0)
fig = pl.figure(2)
ax = pl.subplot(3,2,6)
ax.set_xticks(scipy.arange(17.2,18.1,0.2))
f = interpolate.interp1d(bins, n, kind='cubic')
x = np.arange(bins[0],bins[-1],0.01) # vary linspc
interp = f(x)
ltz = interp < 0
interp[ltz] = 0
pl.plot(x, interp,color ='grey' ,linewidth=1)
pl.ylabel('Frequency')
pl.xlabel('log global edge number')
pl.savefig(pngName+'.png')
##################################################
##################################################
##################################################
def OLDplotstdmean(invDir, pngName):
subj_types = csvtodict() # load up subject types
# ClustCoeff Degree Eigen MAD numEdges.npy ScanStat Triangle
ccDir = "ClustCoeff"
DegDir = "Degree"
EigDir = "Eigen"
SS1dir = "ScanStat1"
triDir = "Triangle"
invDirs = [triDir, ccDir, SS1dir, DegDir ]
#invDirs = []
if not os.path.exists(invDir):
print "%s does not exist" % invDir
sys.exit(1)
pl.figure(1)
nrows=4
ncols=2
fig_gl, axes = pl.subplots(nrows=nrows, ncols=ncols)
fig_gl.tight_layout()
for idx, drcty in enumerate (invDirs):
mean_arr = []
stddev_arr = []
ones_mean = []
twos_mean = []
zeros_mean = []
ones_std = []
twos_std = []
zeros_std = []
for arrfn in glob(os.path.join(invDir, drcty,'*.npy')):
try:
arr = np.load(arrfn)
mean_arr.append(arr.mean())
stddev_arr.append(arr.std())
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '0'):
zeros_mean.append(arr.mean())
zeros_std.append(arr.std())
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '1'):
ones_mean.append(arr.mean())
ones_std.append(arr.std())
if ( subj_types[arrfn.split('/')[-1].split('_')[0]] == '2'):
twos_mean.append(arr.mean())
twos_std.append(arr.std())
#mean_arr.append(np.log(arr.mean()))
#stddev_arr.append(np.log(arr.std()))
#arr = np.log(arr[arr.nonzero()])
print "Processing %s..." % arrfn
except:
print "[ERROR]: Line %s: Invariant file not found %s" % (lineno(),arrfn)
mean_arr = np.array(mean_arr)
stddev_arr = np.array(stddev_arr)
ax = pl.subplot(nrows,ncols,(idx*ncols)+1)
ax.set_yticks(scipy.arange(0,1,.25))
pl.gcf().subplots_adjust(bottom=0.07)
'''
if idx == 0:
plt.axis([0, 35, 0, 0.04])
ax.set_yticks(scipy.arange(0,0.04,0.01))
if idx == 1 or idx == 2:
ax.set_yticks(scipy.arange(0,0.03,0.01))
if idx == 3:
ax.set_yticks(scipy.arange(0,0.04,0.01))
'''
# Interpolation
#f = interpolate.interp1d(bins, n, kind='cubic')
#x = np.arange(bins[0],bins[-1],0.03) # vary linspc
#interp = f(x)
#ltz = interp < 0
#interp[ltz] = 0
#plot_color = pickprintcolor(subj_types, arrfn)
#pl.plot(x, interp, color = plot_color, linewidth=1)
#pl.plot(mean_arr/float(mean_arr.max()), color = "black", linewidth=1)
if (idx*ncols)+1 == 1:
pl.ylabel('')
pl.xlabel('Norm. Local Triangle Count Mean')
if (idx*ncols)+1 == 3:
#pl.ylabel('Probability') #**
pl.xlabel('Norm. Local Clustering Coefficient Mean')
if (idx*ncols)+1 == 5:
pl.ylabel('Normalized Magnitude Scale')
pl.xlabel('Norm. Scan Statistic 1 Mean')
if (idx*ncols)+1 == 7:
#pl.ylabel('Probability') #**
pl.xlabel('Norm. Local Degree Mean')
pl.plot(zeros_mean, color = 'grey' , linewidth=1)
pl.plot(ones_mean, color = 'blue', linewidth=1)
pl.plot(twos_mean, color = 'green', linewidth=1)
ax = pl.subplot(nrows,ncols,(idx*ncols)+2)
ax.set_yticks(scipy.arange(0,1,.25))
pl.gcf().subplots_adjust(bottom=0.07)
stddev_arr = np.array(stddev_arr)
#pl.plot(stddev_arr/float(stddev_arr.max()), color = "black", linewidth=1)
if (idx*ncols)+2 == 2:
pl.ylabel('')
pl.xlabel('Norm. Local Triangle Count Std Dev')
if (idx*ncols)+2 == 4:
#pl.ylabel('Probability') #**
pl.xlabel('Norm. Local Clustering Coefficient Std Dev')
if (idx*ncols)+2 == 6:
#pl.ylabel('Probability')
pl.xlabel('Norm. Scan Statistic 1 Std Dev')
if (idx*ncols)+2 == 8:
#pl.ylabel('Probability') #**
pl.xlabel('Norm. Local Degree Std Dev')
pl.plot(zeros_std, color = 'grey' , linewidth=1)
pl.plot(ones_std, color = 'blue', linewidth=1)
pl.plot(twos_std, color = 'green', linewidth=1)
pl.savefig(pngName+'.png')
def main():
parser = argparse.ArgumentParser(description='Plot distribution of invariant arrays of several graphs')
parser.add_argument('invDir', action='store',help='The full path of directory containing .npy invariant arrays')
parser.add_argument('pngName', action='store', help='Full path of directory of resulting png file')
parser.add_argument('numBins', type = int, action='store', help='Number of bins')
result = parser.parse_args()
plotInvDist(result.invDir, result.pngName, result.numBins)
#plotstdmean(result.invDir, result.pngName)
if __name__ == '__main__':
main()
#csvtodict(sys.argv[1]) | apache-2.0 |
sheadovas/tools | misc/plotter.py | 1 | 1780 | #!/usr/bin/python
# created by shead
import sys
import numpy as np
import matplotlib.pyplot as plt
import pylab
"""
USAGE
============
./plotter.py [log]
./plotter.py my_log.log
REQUIRED DEPENDENCIES
============
* Python2
* Matplotlib http://matplotlib.org/users/installing.html
FILE FORMAT
============
[iteration] [amount_of_cmp] [amount_of_swaps]
...
EXAMPLE FILE
============
10 1 2
20 30 121
"""
def load_data_from_file(filename, data_size, data_cmp, data_swp):
with open(filename, 'r') as f:
for line in f:
raw = line.split()
data_size.append(int(raw[0]))
data_cmp.append(int(raw[1]))
data_swp.append(int(raw[2]))
# func from docs
def autolabel(rects, ax):
# attach some text labels
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2.0, 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
def main(argv):
if len(argv) != 2:
print 'USAGE: plotter [path_to_log]'
sys.exit(1)
data_size = []
data_cmp = []
data_swp = []
load_data_from_file(argv[1], data_size, data_cmp, data_swp)
# plot
N = len(data_size)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, data_cmp, width, color='r')
rects2 = ax.bar(ind + width, data_swp, width, color='y')
# add some text for labels, title and axes ticks
ax.set_ylabel('Values')
title = argv[1].split('.')[0]
ax.set_title(title)
#ax.set_xticks(ind + width)
#ax.set_xticklabels(data_size)
ax.legend((rects1[0], rects2[0]), ('cmp', 'swp'))
#autolabel(rects1, ax)
#autolabel(rects2, ax)
fname = '%s.png' % (title)
pylab.savefig(fname, dpi=333)
print 'Saved to %s' % fname
if __name__ == "__main__":
main(sys.argv) | mit |
kzky/python-online-machine-learning-library | pa/passive_aggressive_2.py | 1 | 6472 | import numpy as np
import scipy as sp
import logging as logger
import time
import pylab as pl
from collections import defaultdict
from sklearn.metrics import confusion_matrix
class PassiveAggressiveII(object):
"""
Passive Aggressive-II algorithm: squared hinge loss PA.
References:
- http://jmlr.org/papers/volume7/crammer06a/crammer06a.pdf
This model is only applied to binary classification.
"""
def __init__(self, fname, delimiter = " ", C = 1, n_scan = 10):
"""
model initialization.
"""
logger.basicConfig(level=logger.DEBUG)
logger.info("init starts")
self.n_scan = 10
self.data = defaultdict()
self.model = defaultdict()
self.cache = defaultdict()
self._load(fname, delimiter)
self._init_model(C)
logger.info("init finished")
def _load(self, fname, delimiter = " "):
"""
Load data set specified with filename.
data format must be as follows (space-separated file as default),
l_1 x_11 x_12 x_13 ... x_1m
l_2 x_21 x_22 ... x_2m
...
l_n x_n1 x_n2 ... x_nm
l_i must be {1, -1} because of binary classifier.
Arguments:
- `fname`: file name.
- `delimiter`: delimiter of a file.
"""
logger.info("load data starts")
# load data
self.data["data"] = np.loadtxt(fname, delimiter = delimiter)
self.data["n_sample"] = self.data["data"].shape[0]
self.data["f_dim"] = self.data["data"].shape[1] - 1
# binalize
self._binalize(self.data["data"])
# normalize
self.normalize(self.data["data"][:, 1:])
logger.info("load data finished")
def _binalize(self, data):
"""
Binalize label of data.
Arguments:
- `data`: dataset.
"""
logger.info("init starts")
# binary check
labels = data[:, 0]
classes = np.unique(labels)
if classes.size != 2:
print "label must be a binary value."
exit(1)
# convert binary lables to {1, -1}
for i in xrange(labels.size):
if labels[i] == classes[0]:
labels[i] = 1
else:
labels[i] = -1
# set classes
self.data["classes"] = classes
logger.info("init finished")
def normalize(self, samples):
"""
nomalize sample, such that sqrt(x^2) = 1
Arguments:
- `samples`: dataset without labels.
"""
logger.info("normalize starts")
for i in xrange(0, self.data["n_sample"]):
samples[i, :] = self._normalize(samples[i, :])
logger.info("normalize finished")
def _normalize(self, sample):
norm = np.sqrt(sample.dot(sample))
sample = sample/norm
return sample
def _init_model(self, C):
"""
Initialize model.
"""
logger.info("init model starts")
self.model["w"] = np.ndarray(self.data["f_dim"] + 1) # model paremter
self.model["C"] = C # aggressive parameter
logger.info("init model finished")
def _learn(self, ):
"""
Learn internally.
"""
def _update(self, label, sample, margin):
"""
Update model parameter internally.
update rule is as follows,
w = w + y (1 - m)/(||x||_2^2 + C) * x
Arguments:
- `label`: label = {1, -1}
- `sample`: sample, or feature vector
"""
# add bias
sample = self._add_bias(sample)
norm = sample.dot(sample)
w = self.model["w"] + label * (1 - margin)/(norm + self.model["C"]) * sample
self.model["w"] = w
def _predict_value(self, sample):
"""
predict value of \w^T * x
Arguments:
- `sample`:
"""
return self.model["w"].dot(self._add_bias(sample))
def _add_bias(self, sample):
return np.hstack((sample, 1))
def learn(self, ):
"""
Learn.
"""
logger.info("learn starts")
data = self.data["data"]
# learn
for i in xrange(0, self.n_scan):
for i in xrange(0, self.data["n_sample"]):
sample = data[i, 1:]
label = data[i, 0]
pred_val = self._predict_value(sample)
margin = label * pred_val
if margin < 1:
self._update(label, sample, margin)
logger.info("learn finished")
def predict(self, sample):
"""
predict {1, -1} base on \w^T * x
Arguments:
- `sample`:
"""
pred_val = self._predict_value(sample)
self.cache["pred_val"] = pred_val
if pred_val >=0:
return 1
else:
return -1
def update(self, label, sample):
"""
update model.
Arguments:
- `label`: label = {1, -1}
- `sample`: sample, or feature vector
"""
margin = label * self.cache["pred_val"]
if margin < 1:
self._update(label, sample, margin)
@classmethod
def examplify(cls, fname, delimiter = " ", C = 1 , n_scan = 3):
"""
Example of how to use
"""
# learn
st = time.time()
model = PassiveAggressiveII(fname, delimiter, C , n_scan)
model.learn()
et = time.time()
print "learning time: %f[s]" % (et - st)
# predict (after learning)
data = np.loadtxt(fname, delimiter = " ")
model._binalize(data)
n_sample = data.shape[0]
y_label = data[:, 0]
y_pred = np.ndarray(n_sample)
for i in xrange(0, n_sample):
sample = data[i, 1:]
y_pred[i] = model.predict(sample)
# show result
cm = confusion_matrix(y_label, y_pred)
print cm
print "accurary: %d [%%]" % (np.sum(cm.diagonal()) * 100.0/np.sum(cm))
if __name__ == '__main__':
fname = "/home/kzk/datasets/uci_csv/liver.csv"
#fname = "/home/kzk/datasets/uci_csv/ad.csv"
print "dataset is", fname
PassiveAggressiveII.examplify(fname, delimiter = " ", C = 1, n_scan = 100)
| bsd-3-clause |
giorgiop/scikit-learn | doc/tutorial/text_analytics/solutions/exercise_01_language_train_model.py | 73 | 2264 | """Build a language detector model
The goal of this exercise is to train a linear classifier on text features
that represent sequences of up to 3 consecutive characters so as to be
recognize natural languages by using the frequencies of short character
sequences as 'fingerprints'.
"""
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: Simplified BSD
import sys
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Perceptron
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_files
from sklearn.model_selection import train_test_split
from sklearn import metrics
# The training data folder must be passed as first argument
languages_data_folder = sys.argv[1]
dataset = load_files(languages_data_folder)
# Split the dataset in training and test set:
docs_train, docs_test, y_train, y_test = train_test_split(
dataset.data, dataset.target, test_size=0.5)
# TASK: Build a vectorizer that splits strings into sequence of 1 to 3
# characters instead of word tokens
vectorizer = TfidfVectorizer(ngram_range=(1, 3), analyzer='char',
use_idf=False)
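# To see the character n-gram "fingerprints" this analyzer produces
# (a sketch; input text is lowercased by default):
#
# >>> vectorizer.build_analyzer()('And')
# ['a', 'n', 'd', 'an', 'nd', 'and']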
# TASK: Build a vectorizer / classifier pipeline using the previous analyzer
# the pipeline instance should stored in a variable named clf
clf = Pipeline([
('vec', vectorizer),
('clf', Perceptron()),
])
# TASK: Fit the pipeline on the training set
clf.fit(docs_train, y_train)
# TASK: Predict the outcome on the testing set in a variable named y_predicted
y_predicted = clf.predict(docs_test)
# Print the classification report
print(metrics.classification_report(y_test, y_predicted,
target_names=dataset.target_names))
# Plot the confusion matrix
cm = metrics.confusion_matrix(y_test, y_predicted)
print(cm)
#import matplotlib.pyplot as plt
#plt.matshow(cm, cmap=plt.cm.jet)
#plt.show()
# Predict the result on some short new sentences:
sentences = [
u'This is a language detection test.',
u'Ceci est un test de d\xe9tection de la langue.',
u'Dies ist ein Test, um die Sprache zu erkennen.',
]
predicted = clf.predict(sentences)
for s, p in zip(sentences, predicted):
print(u'The language of "%s" is "%s"' % (s, dataset.target_names[p]))
| bsd-3-clause |
wzbozon/statsmodels | statsmodels/sandbox/km_class.py | 31 | 11748 | #a class for the Kaplan-Meier estimator
from statsmodels.compat.python import range
import numpy as np
from math import sqrt
import matplotlib.pyplot as plt
class KAPLAN_MEIER(object):
def __init__(self, data, timesIn, groupIn, censoringIn):
raise RuntimeError('Newer version of Kaplan-Meier class available in survival2.py')
#store the inputs
self.data = data
self.timesIn = timesIn
self.groupIn = groupIn
self.censoringIn = censoringIn
def fit(self):
#split the data into groups based on the predicting variable
#get a set of all the groups
groups = list(set(self.data[:,self.groupIn]))
#create an empty list to store the data for different groups
groupList = []
#create an empty list for each group and add it to groups
for i in range(len(groups)):
groupList.append([])
#iterate through all the groups in groups
for i in range(len(groups)):
#iterate though the rows of dataArray
for j in range(len(self.data)):
#test if this row has the correct group
if self.data[j,self.groupIn] == groups[i]:
#add the row to groupList
groupList[i].append(self.data[j])
#create an empty list to store the times for each group
timeList = []
#iterate through all the groups
for i in range(len(groupList)):
#create an empty list
times = []
#iterate through all the rows of the group
for j in range(len(groupList[i])):
#get a list of all the times in the group
times.append(groupList[i][j][self.timesIn])
#get a sorted set of the times and store it in timeList
times = list(sorted(set(times)))
timeList.append(times)
#get a list of the number at risk and events at each time
#create an empty list to store the results in
timeCounts = []
#create an empty list to hold points for plotting
points = []
#create a list for points where censoring occurs
censoredPoints = []
#iterate trough each group
for i in range(len(groupList)):
#initialize a variable to estimate the survival function
survival = 1
#initialize a variable to estimate the variance of
#the survival function
varSum = 0
#initialize a counter for the number at risk
riskCounter = len(groupList[i])
#create a list for the counts for this group
counts = []
##create a list for points to plot
x = []
y = []
#iterate through the list of times
for j in range(len(timeList[i])):
if j != 0:
if j == 1:
#add an indicator to tell if the time
#starts a new group
groupInd = 1
#add (0,1) to the list of points
x.append(0)
y.append(1)
#add the point time to the right of that
x.append(timeList[i][j-1])
y.append(1)
#add the point below that at survival
x.append(timeList[i][j-1])
y.append(survival)
#add the survival to y
y.append(survival)
else:
groupInd = 0
#add survival twice to y
y.append(survival)
y.append(survival)
#add the time twice to x
x.append(timeList[i][j-1])
x.append(timeList[i][j-1])
#add each censored time, number of censorings and
#its survival to censoredPoints
censoredPoints.append([timeList[i][j-1],
censoringNum,survival,groupInd])
#add the count to the list
counts.append([timeList[i][j-1],riskCounter,
eventCounter,survival,
sqrt(((survival)**2)*varSum)])
#increment the number at risk
riskCounter += -1*(riskChange)
#initialize a counter for the change in the number at risk
riskChange = 0
#initialize a counter to zero
eventCounter = 0
#initialize a counter to tell when censoring occurs
censoringCounter = 0
censoringNum = 0
#iterate through the observations in each group
for k in range(len(groupList[i])):
#check of the observation has the given time
if (groupList[i][k][self.timesIn]) == (timeList[i][j]):
#increment the counter for the change in the number at risk
riskChange += 1
#check if this is an event or censoring
if groupList[i][k][self.censoringIn] == 1:
#add 1 to the counter
eventCounter += 1
else:
censoringNum += 1
#check if there are any events at this time
if eventCounter != censoringCounter:
censoringCounter = eventCounter
#calculate the estimate of the survival function
survival *= ((float(riskCounter) -
eventCounter)/(riskCounter))
try:
#calculate the estimate of the variance
varSum += (eventCounter)/((riskCounter)
*(float(riskCounter)-
eventCounter))
except ZeroDivisionError:
varSum = 0
#append the last row to counts
counts.append([timeList[i][len(timeList[i])-1],
riskCounter,eventCounter,survival,
sqrt(((survival)**2)*varSum)])
#add the last time twice to x
x.append(timeList[i][len(timeList[i])-1])
x.append(timeList[i][len(timeList[i])-1])
#add the last survival to y
y.append(survival)
#y.append(survival)
censoredPoints.append([timeList[i][len(timeList[i])-1],
censoringNum,survival,1])
#add the list for the group to a list for all the groups
timeCounts.append(np.array(counts))
points.append([x,y])
#returns a list of arrays, where each array has as its columns: the time,
#the number at risk, the number of events, the estimated value of the
#survival function at that time, and the estimated standard error at
#that time, in that order
self.results = timeCounts
self.points = points
self.censoredPoints = censoredPoints
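#the loop above implements the standard Kaplan-Meier product-limit
#estimator, S(t) = prod over t_i <= t of (1 - d_i/n_i), and varSum
#accumulates the terms of Greenwood's formula, so the reported standard
#error is sqrt(S(t)**2 * sum of d_i/(n_i*(n_i - d_i))), where d_i is the
#number of events and n_i the number at risk at time t_i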
def plot(self):
x = []
#iterate through the groups
for i in range(len(self.points)):
#plot x and y
plt.plot(np.array(self.points[i][0]),np.array(self.points[i][1]))
#create lists of all the x and y values
x += self.points[i][0]
for j in range(len(self.censoredPoints)):
#check if censoring is occurring
if (self.censoredPoints[j][1] != 0):
#if this is the first censored point
if (self.censoredPoints[j][3] == 1) and (j == 0):
#calculate a distance beyond 1 to place it
#so all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the censored points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((1+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this censored point starts a new group
elif ((self.censoredPoints[j][3] == 1) and
(self.censoredPoints[j-1][3] == 1)):
#calculate a distance beyond 1 to place it
#so all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the censored points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((1+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this is the last censored point
elif j == (len(self.censoredPoints) - 1):
#calculate a distance beyond the previous time
#so that all the points will fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j][0])))
#iterate through all the points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((self.censoredPoints[j-1][0]+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#if this is a point in the middle of the group
else:
#calculate a distance beyond the current time
#to place the point, so they all fit
dx = ((1./((self.censoredPoints[j][1])+1.))
*(float(self.censoredPoints[j+1][0])
- self.censoredPoints[j][0]))
#iterate through all the points at this time
for k in range(self.censoredPoints[j][1]):
#plot a vertical line for censoring
plt.vlines((self.censoredPoints[j][0]+((k+1)*dx)),
self.censoredPoints[j][2]-0.03,
self.censoredPoints[j][2]+0.03)
#set the size of the plot so it extends to the max x and above 1 for y
plt.xlim((0,np.max(x)))
plt.ylim((0,1.05))
#label the axes
plt.xlabel('time')
plt.ylabel('survival')
plt.show()
def show_results(self):
#start a string that will be a table of the results
resultsString = ''
#iterate through all the groups
for i in range(len(self.results)):
#label the group and header
resultsString += ('Group {0}\n\n'.format(i) +
'Time At Risk Events Survival Std. Err\n')
for j in self.results[i]:
#add the results to the string
resultsString += (
'{0:<9d}{1:<12d}{2:<11d}{3:<13.4f}{4:<6.4f}\n'.format(
int(j[0]),int(j[1]),int(j[2]),j[3],j[4]))
print(resultsString)
| bsd-3-clause |
timcera/hspfbintoolbox | tests/test_catalog.py | 1 | 115314 | # -*- coding: utf-8 -*-
"""
catalog
----------------------------------
Tests for `hspfbintoolbox` module.
"""
import csv
import shlex
import subprocess
import sys
from unittest import TestCase
from pandas.testing import assert_frame_equal
try:
from cStringIO import StringIO
except ImportError:
from io import StringIO
import pandas as pd
from hspfbintoolbox import hspfbintoolbox
interval2codemap = {"yearly": 5, "monthly": 4, "daily": 3, "bivl": 2}
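# these integer codes mirror the "TC" code column of the catalog listing
# used in the tests below (for example, 5 corresponds to "yearly")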
def capture(func, *args, **kwds):
sys.stdout = StringIO() # capture output
out = func(*args, **kwds)
out = sys.stdout.getvalue() # release output
try:
out = bytes(out, "utf-8")
except TypeError:
pass
return out
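# usage sketch (illustrative, not part of the original tests):
# capture(print, "hello") returns b"hello\n", because print writes into
# the temporarily substituted StringIO buffer whose contents are then
# returned as bytes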
def read_unicode_csv(
filename,
delimiter=",",
quotechar='"',
quoting=csv.QUOTE_MINIMAL,
lineterminator="\n",
encoding="utf-8",
):
# Python 3 version
if sys.version_info[0] >= 3:
# Open the file in text mode with given encoding
# Set newline arg to ''
# (see https://docs.python.org/3/library/csv.html)
# Next, get the csv reader, with unicode delimiter and quotechar
csv_reader = csv.reader(
filename,
delimiter=delimiter,
quotechar=quotechar,
quoting=quoting,
lineterminator=lineterminator,
)
# Now, iterate over the (already decoded) csv_reader generator
for row in csv_reader:
yield row
# Python 2 version
else:
# Next, get the csv reader, passing delimiter and quotechar as
# bytestrings rather than unicode
csv_reader = csv.reader(
filename,
delimiter=delimiter.encode(encoding),
quotechar=quotechar.encode(encoding),
quoting=quoting,
lineterminator=lineterminator,
)
# Iterate over the file and decode each string into unicode
for row in csv_reader:
yield [cell.decode(encoding) for cell in row]
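# usage sketch (illustrative): despite the parameter name, `filename` is
# expected to be an already-open file object or any iterable of lines, e.g.
#     with open("catalog.csv", newline="") as csvfile:
#         for row in read_unicode_csv(csvfile):
#             pass  # each row is a list of decoded strings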
class TestDescribe(TestCase):
def setUp(self):
self.catalog = b"""\
LUE , LC,GROUP ,VAR , TC,START ,END ,TC
IMPLND, 11,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 11,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 11,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 11,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 11,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 11,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 12,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 13,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 14,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 21,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 22,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 23,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 24,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 31,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 32,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 33,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 111,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 112,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 113,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 114,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 211,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 212,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 213,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 214,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 301,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 302,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 303,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 304,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 311,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 312,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 313,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 314,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 411,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 412,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 413,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 414,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 511,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 512,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 513,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 514,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 611,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 612,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 613,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 614,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 711,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 712,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 713,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 714,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 811,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 812,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 813,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 814,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 822,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 823,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 824,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 901,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 902,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 903,IWATER ,SURS , 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,IMPEV, 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,PET , 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,RETS , 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,SUPY , 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,SURO , 5,1951 ,2001 ,yearly
IMPLND, 904,IWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 11,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 12,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 13,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 14,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 15,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 21,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 22,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 23,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 24,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 25,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 31,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 32,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 33,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 35,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 111,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 112,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 113,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 114,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 115,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 211,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 212,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 213,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 214,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 215,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 301,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 302,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 303,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 304,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 305,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 311,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 312,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 313,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 314,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 315,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 411,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 412,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 413,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 414,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 415,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 511,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 512,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 513,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 514,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 515,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 611,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 612,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 613,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 614,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 615,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 711,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 712,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 713,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 714,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 715,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 811,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 812,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 813,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 814,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 815,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 822,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 823,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 824,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 825,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 901,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 902,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 903,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 904,PWATER ,UZS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,AGWET, 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,AGWI , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,AGWO , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,AGWS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,BASET, 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,CEPE , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,CEPS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,GWVS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,IFWI , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,IFWO , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,IFWS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,IGWI , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,INFIL, 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,LZET , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,LZI , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,LZS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,PERC , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,PERO , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,PERS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,PET , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,SUPY , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,SURO , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,SURS , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,TAET , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,UZET , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,UZI , 5,1951 ,2001 ,yearly
PERLND, 905,PWATER ,UZS , 5,1951 ,2001 ,yearly
"""
ndict = []
rd = read_unicode_csv(StringIO(self.catalog.decode()))
next(rd)
for row in rd:
if len(row) == 0:
continue
nrow = [i.strip() for i in row]
ndict.append(
(nrow[0], int(nrow[1]), nrow[2], nrow[3], interval2codemap[nrow[7]])
)
self.ncatalog = sorted(ndict)
def test_catalog_api(self):
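        # hspfbintoolbox.catalog() entries may carry extra trailing fields; keep
        # only the first five so they line up with the (operation, id, group,
        # variable, interval-code) tuples built in setUp.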
out = hspfbintoolbox.catalog("tests/6b_np1.hbn")
out = [i[:5] for i in out]
self.assertEqual(out, self.ncatalog)
def test_catalog_cli(self):
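        # Round-trip check: the CSV-formatted output of the command-line
        # "catalog" call should reproduce the embedded catalog text verbatim.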
args = "hspfbintoolbox catalog --tablefmt csv tests/6b_np1.hbn"
args = shlex.split(args)
out = subprocess.Popen(
args, stdout=subprocess.PIPE, stdin=subprocess.PIPE
).communicate()[0]
self.assertEqual(out, self.catalog)
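# A minimal standalone sketch (not part of the original test case): the same
# 8-column catalog text can be reduced to sorted tuples using only the standard
# library. The helper name and the choice to keep the raw interval string
# (rather than mapping it through interval2codemap) are assumptions here.
def parse_catalog_text(text):
    """Return sorted (operation, id, group, variable, interval) tuples."""
    import csv
    import io
    entries = []
    reader = csv.reader(io.StringIO(text))
    next(reader)  # skip the leading blank/header line of the catalog text
    for row in reader:
        if not row:
            continue
        fields = [field.strip() for field in row]
        entries.append((fields[0], int(fields[1]), fields[2], fields[3], fields[7]))
    return sorted(entries)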
| bsd-3-clause |
talbarda/kaggle_predict_house_prices | Build Model.py | 1 | 2629 | import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
import pandas as pd
import sklearn.linear_model as lm
from sklearn.model_selection import learning_curve
from sklearn.metrics import accuracy_score
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def get_model(estimator, parameters, X_train, y_train, scoring):
model = GridSearchCV(estimator, param_grid=parameters, scoring=scoring)
model.fit(X_train, y_train)
return model.best_estimator_
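# GridSearchCV refits the best parameter combination on all of X_train by
# default, so the returned best_estimator_ is ready to predict with.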
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring,
n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
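# The shaded bands drawn above span +/- one standard deviation of the
# cross-validated scores at each training-set size.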
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
# Encode only the string-valued columns as integer codes; numeric features and
# the SalePrice target keep their original scales.
for c in train.columns[train.dtypes == object]:
    train[c] = pd.Categorical(train[c].values).codes
X = train.drop(['SalePrice'], axis=1)
X = train[['OverallQual', 'GarageArea', 'GarageCars', 'TotalBsmtSF', 'TotRmsAbvGrd', 'FullBath', 'GrLivArea']]
y = train.SalePrice
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scoring = 'r2'  # SalePrice is continuous, so score with R^2 rather than accuracy
from sklearn.linear_model import RidgeCV
clf_ridge = RidgeCV()
clf_ridge.fit(X_train, y_train)
print(clf_ridge.score(X_test, y_test))  # R^2 on the held-out 30% split
print(clf_ridge)
plt = plot_learning_curve(clf_ridge, 'RidgeCV', X, y, cv=4, scoring=scoring)
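# Hedged usage sketch (not in the original script): the get_model() helper
# defined above is never called, so this shows one way it could tune a plain
# Ridge regressor on the same split. The alpha grid values are assumptions.
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
best_ridge = get_model(Ridge(), {'alpha': [0.1, 1.0, 10.0, 100.0]},
                       X_train, y_train, scoring)
print(r2_score(y_test, best_ridge.predict(X_test)))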
plt.show() | mit |
platinhom/ManualHom | Coding/Python/scipy-html-0.16.1/generated/scipy-stats-probplot-1.py | 1 | 1101 | from scipy import stats
import numpy as np  # needed for np.random.seed below
import matplotlib.pyplot as plt
nsample = 100
np.random.seed(7654321)
# A t distribution with small degrees of freedom:
ax1 = plt.subplot(221)
x = stats.t.rvs(3, size=nsample)
res = stats.probplot(x, plot=plt)
# A t distribution with larger degrees of freedom:
ax2 = plt.subplot(222)
x = stats.t.rvs(25, size=nsample)
res = stats.probplot(x, plot=plt)
# A mixture of two normal distributions with broadcasting:
ax3 = plt.subplot(223)
x = stats.norm.rvs(loc=[0,5], scale=[1,1.5],
                   size=(nsample//2, 2)).ravel()
res = stats.probplot(x, plot=plt)
# A standard normal distribution:
ax4 = plt.subplot(224)
x = stats.norm.rvs(loc=0, scale=1, size=nsample)
res = stats.probplot(x, plot=plt)
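# Each ``res`` above is ``((osm, osr), (slope, intercept, r))``; because ``plot``
# is given, probplot also draws the points and least-squares fit on those axes.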
# Produce a new figure with a loggamma distribution, using the ``dist`` and
# ``sparams`` keywords:
fig = plt.figure()
ax = fig.add_subplot(111)
x = stats.loggamma.rvs(c=2.5, size=500)
stats.probplot(x, dist=stats.loggamma, sparams=(2.5,), plot=ax)
ax.set_title("Probplot for loggamma dist with shape parameter 2.5")
# Show the results with Matplotlib:
plt.show()
| gpl-2.0 |
rbdavid/DNA_stacking_analysis | angles_binary.py | 1 | 9052 | #!/Library/Frameworks/Python.framework/Versions/2.7/bin/python
# USAGE:
# PREAMBLE:
import numpy as np
import MDAnalysis
import sys
import os
import matplotlib.pyplot as plt
traj_file ='%s' %(sys.argv[1])
# ----------------------------------------
# VARIABLE DECLARATION
base1 = 1
nbases = 15
#nbases = 3
#Nsteps = 150000 # check length of the energy file; if not 150000 lines, then need to alter Nsteps value so that angle values will match up
#Nsteps = 149996
#equilib_step = 37500 # we have chosen 75 ns to be the equilib time; 75ns = 37500 frames; if energy values do not match with angle values, then equilib_step needs to be altered as well...
#equilib_step = 37496
#production = Nsteps - equilib_step
# SUBROUTINES/DEFINITIONS:
arccosine = np.arccos
dotproduct = np.dot
pi = np.pi
ldtxt = np.loadtxt
zeros = np.zeros
# ----------------------------------------
# DICTIONARY DECLARATION
normals = {} # create the normals dictionary for future use
total_binaries = {} # create the total_binaries dictionary for future use
get_norm = normals.get
get_tb = total_binaries.get
# ----------------------------------------
# PLOTTING SUBROUTINES
def plotting(xdata, ydata, base):
plt.plot(xdata, ydata, 'rx')
plt.title('Stacking behavior of base %s over the trajectory' %(base))
plt.xlabel('Simulation time (ns)')
plt.ylabel('Stacking metric')
plt.xlim((0,300))
plt.grid( b=True, which='major', axis='both', color='k', linestyle='-')
plt.savefig('stacking_binary.%s.png' %(base))
plt.close()
def vdw_hist(data, base_a, base_b):
events, edges, patches = plt.hist(data, bins = 100, histtype = 'bar')
plt.title('Distribution of vdW Energies - Base Pair %s-%s' %(base_a, base_b))
plt.xlabel('vdW Energy ($kcal\ mol^{-1}$)')
plt.xlim((-8,0))
plt.ylabel('Frequency')
plt.savefig('energy.%s.%s.png' %(base_a, base_b))
nf = open('energy.%s.%s.dat' %(base_a, base_b), 'w')
for i in range(len(events)):
nf.write(' %10.1f %10.4f\n' %(events[i], edges[i]))
nf.close()
plt.close()
events = []
edges = []
patches = []
def angle_hist(data, base_a, base_b):
events, edges, patches = plt.hist(data, bins = 100, histtype = 'bar')
plt.title('Distribution of Angles btw Base Pair %s-%s' %(base_a, base_b))
plt.xlabel('Angle (Degrees)')
plt.ylabel('Frequency')
plt.savefig('angle.%s.%s.png' %(base_a, base_b))
nf = open('angle.%s.%s.dat' %(base_a, base_b), 'w')
for i in range(len(events)):
nf.write(' %10.1f %10.4f\n' %(events[i], edges[i]))
nf.close()
plt.close()
events = []
edges = []
patches = []
def energy_angle_hist(xdata, ydata, base_a, base_b):
counts, xedges, yedges, image = plt.hist2d(xdata, ydata, bins = 100)
cb1 = plt.colorbar()
cb1.set_label('Frequency')
plt.title('Distribution of Base Pair interactions - %s-%s' %(base_a, base_b))
plt.xlabel('Angle (Degrees)')
plt.ylabel('vdW Energy ($kcal\ mol^{-1}$)')
plt.ylim((-6,0.5))
plt.savefig('vdw_angle.%s.%s.png' %(base_a, base_b))
plt.close()
counts = []
xedges = []
yedges = []
image = []
# MAIN PROGRAM:
# ----------------------------------------
# ATOM SELECTION - load the trajectory and select the desired nucleotide atoms to be analyzed later on
u = MDAnalysis.Universe('../nucleic_ions.pdb', traj_file, delta=2.0) # load in trajectory file
Nsteps = len(u.trajectory)
equilib_step = 37500 # first 75 ns are not to be included in total stacking metric
production = Nsteps - equilib_step
nucleic = u.selectAtoms('resid 1:15') # atom selections for nucleic chain
a1 = nucleic.selectAtoms('resid 1') # residue 1 has different atom IDs for the base atoms
a1_base = a1.atoms[10:24] # atom selections
bases = [] # make a list of the 15 bases filled with atoms
bases.append(a1_base) # add base 1 into list
for residue in nucleic.residues[1:15]: # collect the other bases into list
residue_base = []
residue_base = residue.atoms[12:26]
bases.append(residue_base)
# ----------------------------------------
# DICTIONARY DEVELOPMENT - Develop the normals and total binary dictionary which contain the data for each base
while base1 <= nbases:
normals['normal.%s' %(base1)] = get_norm('normal.%s' %(base1), np.zeros((Nsteps, 3)))
total_binaries['base.%s' %(base1)] = get_tb('base.%s' %(base1), np.zeros(Nsteps))
base1 += 1
# ----------------------------------------
# SIMULATION TIME - calculate the array that contains the simulation time in ns units
time = np.zeros(Nsteps)
for i in range(Nsteps):
time[i] = i*0.002 # time units: ns
# ----------------------------------------
# NORMAL ANALYSIS for each base - loops through all bases and all timesteps of the trajectory; calculates the normal vector of the base atoms
base1 = 1
while (base1 <= nbases):
for ts in u.trajectory:
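# The third principal axis of the base's atom group is taken as the normal of
# the (roughly planar) base; the angle between two such normals later measures
# how parallel two base planes are.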
Princ_axes = []
Princ_axes = bases[base1 - 1].principalAxes()
normals['normal.%s' %(base1)][ts.frame - 1] = Princ_axes[2] # ts.frame index starts at 1; add normal to dictionary with index starting at 0
base1 += 1
# ----------------------------------------
# BASE PAIR ANALYSIS - loops through all base pairs (w/out duplicates) and performs the angle analysis as well as the binary analysis
base1 = 1 # reset the base index to start at 1
while (base1 <= nbases): # while loops to perform the base-pair analysis while avoiding performing the same analysis twice
base2 = base1 + 1
while (base2 <= nbases):
os.mkdir('base%s_base%s' %(base1, base2)) # makes and moves into a directory for the base pair
os.chdir('base%s_base%s' %(base1, base2))
energyfile = '../../nonbond_energy/base%s_base%s/base%s_base%s.energies.dat' %(base1, base2, base1, base2)
energies = ldtxt(energyfile) # load in the energy file to a numpy array
vdw_energies = energies[:,2]
binary = zeros(Nsteps)
nf = open('binary.%s.%s.dat' %(base1, base2), 'w') # write the base pair data to a file; make sure to be writing this in a base pair directory
# angle and binary analysis for base pair;
for i in range(Nsteps):
angle = 0.
angle = arccosine(dotproduct(normals['normal.%s' %(base1)][i], normals['normal.%s' %(base2)][i]))
angle = angle*(180./pi)
if angle > 90.:
angle = 180. - angle
if vdw_energies[i] <= -3.5 and angle <= 30.: # cutoff: -3.5 kcal mol^-1 and 30 degrees
binary[i] = 1. # assumed else binary[i] = 0.
nf.write(' %10.3f %10.5f %10.5f %10.1f\n' %(time[i], vdw_energies[i], angle, binary[i])) # check time values
total_binaries['base.%s' %(base1)][i] = total_binaries['base.%s' %(base1)][i] + binary[i]
total_binaries['base.%s' %(base2)][i] = total_binaries['base.%s' %(base2)][i] + binary[i]
nf.close()
angles = []
energies = []
vdw_energies = []
os.chdir('..')
base2 += 1
base1 += 1
# ----------------------------------------
# TOTAL BINARY METRIC ANALYSIS - writing to file and plotting
# print out (also plot) the total binary data to an indivual file for each individual base
base1 = 1 # reset the base index to start at 1
os.mkdir('total_binaries')
os.chdir('total_binaries')
while (base1 <= nbases):
os.mkdir('base%s' %(base1))
os.chdir('base%s' %(base1))
nf = open('binary.%s.dat' %(base1), 'w')
for i in range(Nsteps):
nf.write(' %10.3f %10.1f\n' %(time[i], total_binaries['base.%s' %(base1)][i])) # check time values
nf.close()
counts = 0
for i in range(equilib_step, Nsteps):
if total_binaries['base.%s' %(base1)][i] > 0.:
counts +=1
prob = 0.
prob = (float(counts)/production)*100.
nf = open('stacking.%s.dat' %(base1), 'w')
nf.write('counts: %10.1f out of %10.1f time steps \n Probability of stacking = %10.4f ' %(counts, production, prob))
nf.close()
plotting(time[:], total_binaries['base.%s' %(base1)][:], base1)
os.chdir('..')
base1 += 1
# ----------------------------------------
# BASE PAIR PLOTTING - making histogram plots for vdW energy distributions, angle distributions, and 2d hist of vdw vs angle distributions
# Also printint out a file that contains the count of timesteps where the base pair are stacked
os.chdir('..')
base1 = 1
while (base1 <= nbases): # while loops to perform the base-pair analysis while avoiding performing the same analysis twice
base2 = base1 + 1
while (base2 <= nbases):
os.chdir('base%s_base%s' %(base1, base2))
infile = 'binary.%s.%s.dat' %(base1, base2)
data = ldtxt(infile) # data[0] = time, data[1] = vdW energies, data[2] = angle, data[3] = base pair binary metric
vdw_hist(data[equilib_step:,1], base1, base2)
angle_hist(data[equilib_step:,2], base1, base2)
energy_angle_hist(data[equilib_step:,2], data[equilib_step:,1], base1, base2)
nf = open('stacking.%s.%s.dat' %(base1, base2), 'w')
bp_counts = sum(data[equilib_step:,3])
nf.write('counts for base pair %s-%s: %10.1f' %(base1, base2, bp_counts))
nf.close()
data = []
os.chdir('..')
base2 += 1
base1 += 1
# ----------------------------------------
# END
| mit |
runiq/modeling-clustering | find-correct-cluster-number/plot_clustering_metrics.py | 1 | 10092 | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Performs a clustering run with a number of clusters and a given mask,
and creates graphs of the corresponding DBI, pSF, SSR/SST, and RMSD
values.
These faciliate the choice of cluster numbers and improve the clustering
process by allowing to pick the number of clusters with the highest
information content.
"""
# TODO
# - Fix plot_tree()
# - Do some logging
# - remove clustering_run from plot_metrics() and plot_tree() as it
# basically represents world state. Use explicit metrics/nodes instead
# - Implement ylabel alignment as soon as PGF backend has its act together
import cStringIO as csio
from glob import glob, iglob
import os
import os.path as op
import sys
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.ticker as tic
import matplotlib.transforms as tfs
import clustering_run as cr
import newick as cn
def align_yaxis_labels(axes, sortfunc):
xpos = sortfunc(ax.yaxis.get_label().get_position()[0] for ax in axes)
for ax in axes:
trans = tfs.blended_transform_factory(tfs.IdentityTransform(), ax.transAxes)
ax.yaxis.set_label_coords(xpos, 0.5, transform=trans)
def plot_metrics(clustering_run, output_file, xmin=None, xmax=None,
use_tex=False, figsize=(12,8), square=False):
metrics = clustering_run.gather_metrics()
# The ±0.5 are so that all chosen points are well within the
# plots
if xmin is None:
xmin = min(metrics['n'])
if xmax is None:
xmax = max(metrics['n'])
xlim = (xmin-0.5, xmax+0.5)
fig = plt.figure(figsize=figsize)
if clustering_run.no_ssr_sst:
gridindex = 310
else:
if square:
gridindex = 220
else:
gridindex = 410
if use_tex:
rmsd_ylabel = r'Critical distance/\si{\angstrom}'
xlabel = r'$n_{\text{Clusters}}$'
else:
rmsd_ylabel = u'Critical distance/Å'
xlabel = r'Number of clusters'
ax1 = fig.add_subplot(gridindex+1, ylabel=rmsd_ylabel)
ax2 = fig.add_subplot(gridindex+2, ylabel='DBI', sharex=ax1)
ax3 = fig.add_subplot(gridindex+3, ylabel='pSF', sharex=ax1)
ax1.plot(metrics['n'], metrics['rmsd'], marker='.')
ax2.plot(metrics['n'], metrics['dbi'], marker='.')
ax3.plot(metrics['n'], metrics['psf'], marker='.')
if not clustering_run.no_ssr_sst:
ax4 = fig.add_subplot(gridindex+4,
ylabel='SSR/SST', xlim=xlim, sharex=ax1)
ax4.plot(metrics['n'], metrics['ssr_sst'], marker='.')
if square and not clustering_run.no_ssr_sst:
nonxaxes = fig.axes[:-2]
xaxes = fig.axes[-2:]
lefthandplots = fig.axes[0::2]
righthandplots = fig.axes[1::2]
# Put yticklabels of right-hand plots to the right
for ax in righthandplots:
ax.yaxis.tick_right()
ax.yaxis.set_label_position('right')
else:
nonxaxes = fig.axes[:-1]
xaxes = [fig.axes[-1]]
lefthandplots = fig.axes
# xaxes limits and tick locations are propagated across sharex plots
for ax in xaxes:
ax.set_xlabel(xlabel)
ax.xaxis.set_major_locator(tic.MultipleLocator(10))
ax.xaxis.set_minor_locator(tic.AutoMinorLocator(2))
for ax in nonxaxes:
plt.setp(ax.get_xticklabels(), visible=False)
# 5 yticklabels are enough for everybody
for ax in fig.axes:
ax.yaxis.set_major_locator(tic.MaxNLocator(nbins=5))
ax.yaxis.set_minor_locator(tic.MaxNLocator(nbins=5))
# Draw first to get proper ylabel coordinates
# fig.canvas.draw()
# align_yaxis_labels(lefthandplots, sortfunc=min)
# if square and not clustering_run.no_ssr_sst:
# align_yaxis_labels(righthandplots, sortfunc=max)
fig.savefig(output_file)
def plot_tree(clustering_run, node_info, steps, dist, output, graphical=None, no_length=False):
tree = cn.parse_clustermerging(clustering_run)
newick = tree.create_newick(node_info=node_info, no_length=no_length, steps=steps, dist=dist)
if output is sys.stdout:
fh = output
else:
fh = open(output, 'w')
fh.write(newick)
if fh is not sys.stdout:
fh.close()
fig = plt.figure()
ax1 = fig.add_subplot(111, ylabel='Cluster tree')
if graphical is not None:
cn.draw(csio.StringIO(newick), do_show=False, axes=ax1)
fig.savefig(graphical)
def parse_args():
import argparse as ap
parser = ap.ArgumentParser()
parser.add_argument('-c', '--cm-file', metavar='FILE',
default='./ClusterMerging.txt', dest='cm_fn',
help="File to parse (default: ./ClusterMerging.txt)")
parser.add_argument('-C', '--matplotlibrc', metavar='FILE', default=None,
help="Matplotlibrc file to use")
parser.add_argument('-p', '--prefix', default='c',
help="Prefix for clustering result files (default: \"c\")")
parser.add_argument('-N', '--no-ssr-sst', action='store_true', default=False,
help="Don't gather SSR_SST values (default: False)")
subs = parser.add_subparsers(dest='subcommand', help="Sub-command help")
c = subs.add_parser('cluster', help="Do clustering run to gather metrics")
c.add_argument('prmtop', help="prmtop file")
c.add_argument('-m', '--mask', metavar='MASKSTR', default='@CA,C,O,N',
help=("Mask string (default: '@CA,C,O,N')"))
c.add_argument('-P', '--ptraj-trajin-file', metavar='FILE',
default='ptraj_trajin', dest='ptraj_trajin_fn',
help=("Filename for ptraj trajin file (default: ptraj_trajin)"))
c.add_argument('-n', '--num-clusters', dest='n_clusters', type=int,
metavar='CLUSTERS', default=50,
help="Number of clusters to examine (default (also maximum): 50)")
c.add_argument('-s', '--start-num-clusters', dest='start_n_clusters',
type=int, metavar='CLUSTERS', default=2,
help="Number of clusters to start from (default: 2)")
c.add_argument('-l', '--logfile', metavar='FILE', default=None,
dest='log_fn',
help=("Logfile for ptraj run (default: Print to stdout)"))
c.add_argument('--use-cpptraj', action='store_true', default=False,
help="Use cpptraj instead of ptraj")
t = subs.add_parser('tree', help="Create Newick tree representation")
t.add_argument('-o', '--output', metavar='FILE', default=sys.stdout,
help="Output file for Newick tree (default: print to terminal)")
t.add_argument('-g', '--graphical', default=None,
help="Save tree as png (default: Don't)")
t.add_argument('-s', '--steps', type=int, default=None,
help="Number of steps to print (default: all)")
t.add_argument('-d', '--dist', type=float, default=None,
help="Minimum distance to print (default: all)")
t.add_argument('-i', '--node-info', choices=['num', 'dist', 'id'],
default='num', help="Node data to print")
t.add_argument('-l', '--no-length', default=False, action='store_true',
help="Don't print branch length information")
p = subs.add_parser('plot', help="Plot clustering metrics")
p.add_argument('-o', '--output', metavar='FILE',
default='clustering_metrics.png',
help="Filename for output file (default: show using matplotlib)")
p.add_argument('-n', '--num-clusters', dest='n_clusters', type=int,
metavar='CLUSTERS', default=50,
help="Number of clusters to examine (default (also maximum): 50)")
p.add_argument('-s', '--start-num-clusters', dest='start_n_clusters',
type=int, metavar='CLUSTERS', default=2,
help="Number of clusters to start from (default: 2)")
p.add_argument('-T', '--use-tex', default=False, action='store_true',
help="Use LaTeX output (default: use plaintext output)")
p.add_argument('-S', '--fig-size', nargs=2, type=float, metavar='X Y', default=[12, 8],
help=("Figure size in inches (default: 12x8)"))
p.add_argument('--square', default=False, action='store_true',
help="Plot in two columns")
return parser.parse_args()
def main():
args = parse_args()
if args.matplotlibrc is not None:
matplotlib.rc_file(args.matplotlibrc)
if args.subcommand == 'cluster':
if args.n_clusters < 1 or args.n_clusters > 50:
print "Error: Maximum cluster number must be between 1 and 50."
sys.exit(1)
cn_fns = None
clustering_run = cr.ClusteringRun(prmtop=args.prmtop,
start_n_clusters=args.start_n_clusters, n_clusters=args.n_clusters,
cm_fn=args.cm_fn, mask=args.mask,
ptraj_trajin_fn=args.ptraj_trajin_fn, cn_fns=cn_fns,
prefix=args.prefix, log_fn=args.log_fn,
no_ssr_sst=args.no_ssr_sst)
else:
if not op.exists(args.cm_fn):
print ("{cm_fn} doesn't exist. Please perform a clustering run "
"first.".format(cm_fn=args.cm_fn))
sys.exit(1)
# We assume that the number of clusters starts at 1
n_clusters = len(glob('{prefix}*.txt'.format(prefix=args.prefix)))
cn_fns = {i: '{prefix}{n}.txt'.format(prefix=args.prefix, n=i) for
i in xrange(1, n_clusters+1)}
# Only cm_fn and cn_fns are necessary for plotting the tree and
# metrics
clustering_run = cr.ClusteringRun(prmtop=None, cm_fn=args.cm_fn,
cn_fns=cn_fns, no_ssr_sst=args.no_ssr_sst)
if args.subcommand == 'plot':
plot_metrics(clustering_run, output_file=args.output,
xmin=args.start_n_clusters, xmax=args.n_clusters,
use_tex=args.use_tex, figsize=args.fig_size,
square=args.square)
elif args.subcommand == 'tree':
plot_tree(clustering_run=clustering_run, node_info=args.node_info,
steps=args.steps, dist=args.dist, no_length=args.no_length,
graphical=args.graphical, output=args.output)
if __name__ == '__main__':
main()
| bsd-2-clause |
f3r/scikit-learn | benchmarks/bench_plot_randomized_svd.py | 38 | 17557 | """
Benchmarks on the power iterations phase in randomized SVD.
We test on various synthetic and real datasets the effect of increasing
the number of power iterations in terms of quality of approximation
and running time. A number greater than 0 should help with noisy matrices,
which are characterized by a slow spectral decay.
We test several policies for normalizing the power iterations. Normalization
is crucial to avoid numerical issues.
The quality of the approximation is measured by the spectral norm discrepancy
between the original input matrix and the reconstructed one (by multiplying
the randomized_svd's outputs). The spectral norm is always equivalent to the
largest singular value of a matrix. (3) justifies this choice. However, one can
notice in these experiments that Frobenius and spectral norms behave
very similarly in a qualitative sense. Therefore, we suggest running these
benchmarks with `enable_spectral_norm = False`, as the Frobenius norm is MUCH faster to
compute.
The benchmarks follow.
(a) plot: time vs norm, varying number of power iterations
data: many datasets
goal: compare normalization policies and study how the number of power
iterations affect time and norm
(b) plot: n_iter vs norm, varying rank of data and number of components for
randomized_SVD
data: low-rank matrices on which we control the rank
goal: study whether the rank of the matrix and the number of components
extracted by randomized SVD affect "the optimal" number of power iterations
(c) plot: time vs norm, varying datasets
data: many datasets
goal: compare default configurations
We compare the following algorithms:
- randomized_svd(..., power_iteration_normalizer='none')
- randomized_svd(..., power_iteration_normalizer='LU')
- randomized_svd(..., power_iteration_normalizer='QR')
- randomized_svd(..., power_iteration_normalizer='auto')
- fbpca.pca() from https://github.com/facebook/fbpca (if installed)
Conclusion
----------
- n_iter=2 appears to be a good default value
- power_iteration_normalizer='none' is OK if n_iter is small, otherwise LU
gives similar errors to QR but is cheaper. That's what 'auto' implements.
References
----------
(1) Finding structure with randomness: Stochastic algorithms for constructing
approximate matrix decompositions
Halko, et al., 2009 http://arxiv.org/abs/arXiv:0909.4061
(2) A randomized algorithm for the decomposition of matrices
Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert
(3) An implementation of a randomized algorithm for principal component
analysis
A. Szlam et al. 2014
"""
# Author: Giorgio Patrini
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import gc
import pickle
from time import time
from collections import defaultdict
import os.path
from sklearn.utils import gen_batches
from sklearn.utils.validation import check_random_state
from sklearn.utils.extmath import randomized_svd
from sklearn.datasets.samples_generator import (make_low_rank_matrix,
make_sparse_uncorrelated)
from sklearn.datasets import (fetch_lfw_people,
fetch_mldata,
fetch_20newsgroups_vectorized,
fetch_olivetti_faces,
fetch_rcv1)
try:
import fbpca
fbpca_available = True
except ImportError:
fbpca_available = False
# If this is enabled, tests are much slower and will crash with the large data
enable_spectral_norm = False
# TODO: compute approximate spectral norms with the power method as in
# Estimating the largest eigenvalues by the power and Lanczos methods with
# a random start, Jacek Kuczynski and Henryk Wozniakowski, SIAM Journal on
# Matrix Analysis and Applications, 13 (4): 1094-1122, 1992.
# This approximation is a very fast estimate of the spectral norm, but depends
# on starting random vectors.
# Determine when to switch to batch computation for matrix norms,
# in case the reconstructed (dense) matrix is too large
MAX_MEMORY = np.int(2e9)
# The following datasets can be downloaded manually from:
# CIFAR 10: http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# SVHN: http://ufldl.stanford.edu/housenumbers/train_32x32.mat
CIFAR_FOLDER = "./cifar-10-batches-py/"
SVHN_FOLDER = "./SVHN/"
datasets = ['low rank matrix', 'lfw_people', 'olivetti_faces', '20newsgroups',
'MNIST original', 'CIFAR', 'a1a', 'SVHN', 'uncorrelated matrix']
big_sparse_datasets = ['big sparse matrix', 'rcv1']
def unpickle(file_name):
with open(file_name, 'rb') as fo:
return pickle.load(fo, encoding='latin1')["data"]
def handle_missing_dataset(file_folder):
if not os.path.isdir(file_folder):
print("%s file folder not found. Test skipped." % file_folder)
return 0
def get_data(dataset_name):
print("Getting dataset: %s" % dataset_name)
if dataset_name == 'lfw_people':
X = fetch_lfw_people().data
elif dataset_name == '20newsgroups':
X = fetch_20newsgroups_vectorized().data[:, :100000]
elif dataset_name == 'olivetti_faces':
X = fetch_olivetti_faces().data
elif dataset_name == 'rcv1':
X = fetch_rcv1().data
elif dataset_name == 'CIFAR':
if handle_missing_dataset(CIFAR_FOLDER) == 0:
return
X1 = [unpickle("%sdata_batch_%d" % (CIFAR_FOLDER, i + 1))
for i in range(5)]
X = np.vstack(X1)
del X1
elif dataset_name == 'SVHN':
if handle_missing_dataset(SVHN_FOLDER) == 0:
return
X1 = sp.io.loadmat("%strain_32x32.mat" % SVHN_FOLDER)['X']
X2 = [X1[:, :, :, i].reshape(32 * 32 * 3) for i in range(X1.shape[3])]
X = np.vstack(X2)
del X1
del X2
elif dataset_name == 'low rank matrix':
X = make_low_rank_matrix(n_samples=500, n_features=np.int(1e4),
effective_rank=100, tail_strength=.5,
random_state=random_state)
elif dataset_name == 'uncorrelated matrix':
X, _ = make_sparse_uncorrelated(n_samples=500, n_features=10000,
random_state=random_state)
elif dataset_name == 'big sparse matrix':
sparsity = np.int(1e6)
size = np.int(1e6)
small_size = np.int(1e4)
data = np.random.normal(0, 1, np.int(sparsity/10))
data = np.repeat(data, 10)
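# 1e5 distinct values are each repeated 10 times, giving ~1e6 stored entries;
# any duplicate (row, col) pairs are summed when the CSR matrix is built.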
row = np.random.uniform(0, small_size, sparsity)
col = np.random.uniform(0, small_size, sparsity)
X = sp.sparse.csr_matrix((data, (row, col)), shape=(size, small_size))
del data
del row
del col
else:
X = fetch_mldata(dataset_name).data
return X
def plot_time_vs_s(time, norm, point_labels, title):
plt.figure()
colors = ['g', 'b', 'y']
for i, l in enumerate(sorted(norm.keys())):
if l != "fbpca":
plt.plot(time[l], norm[l], label=l, marker='o', c=colors.pop())
else:
plt.plot(time[l], norm[l], label=l, marker='^', c='red')
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, -20),
textcoords='offset points', ha='right', va='bottom')
plt.legend(loc="upper right")
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("running time [s]")
def scatter_time_vs_s(time, norm, point_labels, title):
plt.figure()
size = 100
for i, l in enumerate(sorted(norm.keys())):
if l != "fbpca":
plt.scatter(time[l], norm[l], label=l, marker='o', c='b', s=size)
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, -80),
textcoords='offset points', ha='right',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
va='bottom', size=11, rotation=90)
else:
plt.scatter(time[l], norm[l], label=l, marker='^', c='red', s=size)
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, 30),
textcoords='offset points', ha='right',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
va='bottom', size=11, rotation=90)
plt.legend(loc="best")
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("running time [s]")
def plot_power_iter_vs_s(power_iter, s, title):
plt.figure()
for l in sorted(s.keys()):
plt.plot(power_iter, s[l], label=l, marker='o')
plt.legend(loc="lower right", prop={'size': 10})
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("n_iter")
def svd_timing(X, n_comps, n_iter, n_oversamples,
power_iteration_normalizer='auto', method=None):
"""
Measure time for decomposition
"""
print("... running SVD ...")
if method != 'fbpca':
gc.collect()
t0 = time()
U, mu, V = randomized_svd(X, n_comps, n_oversamples, n_iter,
power_iteration_normalizer,
random_state=random_state, transpose=False)
call_time = time() - t0
else:
gc.collect()
t0 = time()
# There is a different convention for l here
U, mu, V = fbpca.pca(X, n_comps, raw=True, n_iter=n_iter,
l=n_oversamples+n_comps)
call_time = time() - t0
return U, mu, V, call_time
def norm_diff(A, norm=2, msg=True):
"""
Compute the norm diff with the original matrix, when randomized
SVD is called with *params.
norm: 2 => spectral; 'fro' => Frobenius
"""
if msg:
print("... computing %s norm ..." % norm)
if norm == 2:
# s = sp.linalg.norm(A, ord=2) # slow
value = sp.sparse.linalg.svds(A, k=1, return_singular_vectors=False)
else:
if sp.sparse.issparse(A):
value = sp.sparse.linalg.norm(A, ord=norm)
else:
value = sp.linalg.norm(A, ord=norm)
return value
def scalable_frobenius_norm_discrepancy(X, U, s, V):
# if the input is not too big, just call scipy
if X.shape[0] * X.shape[1] < MAX_MEMORY:
A = X - U.dot(np.diag(s).dot(V))
return norm_diff(A, norm='fro')
print("... computing fro norm by batches...")
batch_size = 1000
Vhat = np.diag(s).dot(V)
cum_norm = .0
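# Accumulate the discrepancy batch by batch so the dense reconstruction
# U.dot(np.diag(s)).dot(V) never has to be materialized for the full matrix.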
for batch in gen_batches(X.shape[0], batch_size):
M = X[batch, :] - U[batch, :].dot(Vhat)
cum_norm += norm_diff(M, norm='fro', msg=False)
return np.sqrt(cum_norm)
def bench_a(X, dataset_name, power_iter, n_oversamples, n_comps):
all_time = defaultdict(list)
if enable_spectral_norm:
all_spectral = defaultdict(list)
X_spectral_norm = norm_diff(X, norm=2, msg=False)
all_frobenius = defaultdict(list)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
for pi in power_iter:
for pm in ['none', 'LU', 'QR']:
print("n_iter = %d on sklearn - %s" % (pi, pm))
U, s, V, time = svd_timing(X, n_comps, n_iter=pi,
power_iteration_normalizer=pm,
n_oversamples=n_oversamples)
label = "sklearn - %s" % pm
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if fbpca_available:
print("n_iter = %d on fbpca" % (pi))
U, s, V, time = svd_timing(X, n_comps, n_iter=pi,
power_iteration_normalizer=pm,
n_oversamples=n_oversamples,
method='fbpca')
label = "fbpca"
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if enable_spectral_norm:
title = "%s: spectral norm diff vs running time" % (dataset_name)
plot_time_vs_s(all_time, all_spectral, power_iter, title)
title = "%s: Frobenius norm diff vs running time" % (dataset_name)
plot_time_vs_s(all_time, all_frobenius, power_iter, title)
def bench_b(power_list):
n_samples, n_features = 1000, 10000
data_params = {'n_samples': n_samples, 'n_features': n_features,
'tail_strength': .7, 'random_state': random_state}
dataset_name = "low rank matrix %d x %d" % (n_samples, n_features)
ranks = [10, 50, 100]
if enable_spectral_norm:
all_spectral = defaultdict(list)
all_frobenius = defaultdict(list)
for rank in ranks:
X = make_low_rank_matrix(effective_rank=rank, **data_params)
if enable_spectral_norm:
X_spectral_norm = norm_diff(X, norm=2, msg=False)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
for n_comp in [np.int(rank/2), rank, rank*2]:
label = "rank=%d, n_comp=%d" % (rank, n_comp)
print(label)
for pi in power_list:
U, s, V, _ = svd_timing(X, n_comp, n_iter=pi, n_oversamples=2,
power_iteration_normalizer='LU')
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if enable_spectral_norm:
title = "%s: spectral norm diff vs n power iteration" % (dataset_name)
plot_power_iter_vs_s(power_list, all_spectral, title)
title = "%s: frobenius norm diff vs n power iteration" % (dataset_name)
plot_power_iter_vs_s(power_list, all_frobenius, title)
def bench_c(datasets, n_comps):
all_time = defaultdict(list)
if enable_spectral_norm:
all_spectral = defaultdict(list)
all_frobenius = defaultdict(list)
for dataset_name in datasets:
X = get_data(dataset_name)
if X is None:
continue
if enable_spectral_norm:
X_spectral_norm = norm_diff(X, norm=2, msg=False)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
n_comps = np.minimum(n_comps, np.min(X.shape))
label = "sklearn"
print("%s %d x %d - %s" %
(dataset_name, X.shape[0], X.shape[1], label))
U, s, V, time = svd_timing(X, n_comps, n_iter=2, n_oversamples=10,
method=label)
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if fbpca_available:
label = "fbpca"
print("%s %d x %d - %s" %
(dataset_name, X.shape[0], X.shape[1], label))
U, s, V, time = svd_timing(X, n_comps, n_iter=2, n_oversamples=2,
method=label)
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if len(all_time) == 0:
raise ValueError("No tests ran. Aborting.")
if enable_spectral_norm:
title = "normalized spectral norm diff vs running time"
scatter_time_vs_s(all_time, all_spectral, datasets, title)
title = "normalized Frobenius norm diff vs running time"
scatter_time_vs_s(all_time, all_frobenius, datasets, title)
if __name__ == '__main__':
random_state = check_random_state(1234)
power_iter = np.linspace(0, 6, 7, dtype=int)
n_comps = 50
for dataset_name in datasets:
X = get_data(dataset_name)
if X is None:
continue
print(" >>>>>> Benching sklearn and fbpca on %s %d x %d" %
(dataset_name, X.shape[0], X.shape[1]))
bench_a(X, dataset_name, power_iter, n_oversamples=2,
n_comps=np.minimum(n_comps, np.min(X.shape)))
print(" >>>>>> Benching on simulated low rank matrix with variable rank")
bench_b(power_iter)
print(" >>>>>> Benching sklearn and fbpca default configurations")
bench_c(datasets + big_sparse_datasets, n_comps)
plt.show()
| bsd-3-clause |
jakevdp/megaman | megaman/embedding/tests/test_embeddings.py | 4 | 1798 | """General tests for embeddings"""
# LICENSE: Simplified BSD https://github.com/mmp2/megaman/blob/master/LICENSE
from itertools import product
import numpy as np
from numpy.testing import assert_raises, assert_allclose
from megaman.embedding import (Isomap, LocallyLinearEmbedding,
LTSA, SpectralEmbedding)
from megaman.geometry.geometry import Geometry
EMBEDDINGS = [Isomap, LocallyLinearEmbedding, LTSA, SpectralEmbedding]
# # TODO: make estimator_checks pass!
# def test_estimator_checks():
# from sklearn.utils.estimator_checks import check_estimator
# for Embedding in EMBEDDINGS:
# yield check_estimator, Embedding
def test_embeddings_fit_vs_transform():
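# nose-style generator test: each yielded (check, Embedding, n_components)
# tuple is run as a separate test case.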
rand = np.random.RandomState(42)
X = rand.rand(100, 5)
geom = Geometry(adjacency_kwds = {'radius':1.0},
affinity_kwds = {'radius':1.0})
def check_embedding(Embedding, n_components):
model = Embedding(n_components=n_components,
geom=geom, random_state=rand)
embedding = model.fit_transform(X)
assert model.embedding_.shape == (X.shape[0], n_components)
assert_allclose(embedding, model.embedding_)
for Embedding in EMBEDDINGS:
for n_components in [1, 2, 3]:
yield check_embedding, Embedding, n_components
def test_embeddings_bad_arguments():
rand = np.random.RandomState(32)
X = rand.rand(100, 3)
def check_bad_args(Embedding):
# no radius set
embedding = Embedding()
assert_raises(ValueError, embedding.fit, X)
# unrecognized geometry
embedding = Embedding(radius=2, geom='blah')
assert_raises(ValueError, embedding.fit, X)
for Embedding in EMBEDDINGS:
yield check_bad_args, Embedding
| bsd-2-clause |
shangwuhencc/scikit-learn | examples/cluster/plot_kmeans_stability_low_dim_dense.py | 338 | 4324 | """
============================================================
Empirical evaluation of the impact of k-means initialization
============================================================
Evaluate the ability of k-means initialization strategies to make
the algorithm convergence robust as measured by the relative standard
deviation of the inertia of the clustering (i.e. the sum of distances
to the nearest cluster center).
The first plot shows the best inertia reached for each combination
of the model (``KMeans`` or ``MiniBatchKMeans``) and the init method
(``init="random"`` or ``init="kmeans++"``) for increasing values of the
``n_init`` parameter that controls the number of initializations.
The second plot demonstrate one single run of the ``MiniBatchKMeans``
estimator using a ``init="random"`` and ``n_init=1``. This run leads to
a bad convergence (local optimum) with estimated centers stuck
between ground truth clusters.
The dataset used for evaluation is a 2D grid of isotropic Gaussian
clusters widely spaced.
"""
print(__doc__)
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.utils import shuffle
from sklearn.utils import check_random_state
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import KMeans
random_state = np.random.RandomState(0)
# Number of runs (with randomly generated datasets) for each strategy so as
# to be able to compute an estimate of the standard deviation
n_runs = 5
# k-means models can do several random inits so as to be able to trade
# CPU time for convergence robustness
n_init_range = np.array([1, 5, 10, 15, 20])
# Datasets generation parameters
n_samples_per_center = 100
grid_size = 3
scale = 0.1
n_clusters = grid_size ** 2
def make_data(random_state, n_samples_per_center, grid_size, scale):
random_state = check_random_state(random_state)
centers = np.array([[i, j]
for i in range(grid_size)
for j in range(grid_size)])
n_clusters_true, n_features = centers.shape
noise = random_state.normal(
scale=scale, size=(n_samples_per_center, centers.shape[1]))
X = np.concatenate([c + noise for c in centers])
y = np.concatenate([[i] * n_samples_per_center
for i in range(n_clusters_true)])
return shuffle(X, y, random_state=random_state)
# Part 1: Quantitative evaluation of various init methods
fig = plt.figure()
plots = []
legends = []
cases = [
(KMeans, 'k-means++', {}),
(KMeans, 'random', {}),
(MiniBatchKMeans, 'k-means++', {'max_no_improvement': 3}),
(MiniBatchKMeans, 'random', {'max_no_improvement': 3, 'init_size': 500}),
]
for factory, init, params in cases:
print("Evaluation of %s with %s init" % (factory.__name__, init))
inertia = np.empty((len(n_init_range), n_runs))
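# inertia[i, j] holds the final inertia for the i-th n_init value on the
# j-th randomly generated dataset.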
for run_id in range(n_runs):
X, y = make_data(run_id, n_samples_per_center, grid_size, scale)
for i, n_init in enumerate(n_init_range):
km = factory(n_clusters=n_clusters, init=init, random_state=run_id,
n_init=n_init, **params).fit(X)
inertia[i, run_id] = km.inertia_
p = plt.errorbar(n_init_range, inertia.mean(axis=1), inertia.std(axis=1))
plots.append(p[0])
legends.append("%s with %s init" % (factory.__name__, init))
plt.xlabel('n_init')
plt.ylabel('inertia')
plt.legend(plots, legends)
plt.title("Mean inertia for various k-means init across %d runs" % n_runs)
# Part 2: Qualitative visual inspection of the convergence
X, y = make_data(random_state, n_samples_per_center, grid_size, scale)
km = MiniBatchKMeans(n_clusters=n_clusters, init='random', n_init=1,
random_state=random_state).fit(X)
fig = plt.figure()
for k in range(n_clusters):
my_members = km.labels_ == k
color = cm.spectral(float(k) / n_clusters, 1)
plt.plot(X[my_members, 0], X[my_members, 1], 'o', marker='.', c=color)
cluster_center = km.cluster_centers_[k]
plt.plot(cluster_center[0], cluster_center[1], 'o',
markerfacecolor=color, markeredgecolor='k', markersize=6)
plt.title("Example cluster allocation with a single random init\n"
"with MiniBatchKMeans")
plt.show()
| bsd-3-clause |
blaze/dask | dask/base.py | 1 | 37839 | from collections import OrderedDict
from collections.abc import Mapping, Iterator
from contextlib import contextmanager
from functools import partial
from hashlib import md5
from operator import getitem
import inspect
import pickle
import os
import threading
import uuid
from distutils.version import LooseVersion
from tlz import merge, groupby, curry, identity
from tlz.functoolz import Compose
from .compatibility import is_dataclass, dataclass_fields
from .context import thread_state
from .core import flatten, quote, get as simple_get, literal
from .hashing import hash_buffer_hex
from .utils import Dispatch, ensure_dict, apply
from . import config, local, threaded
__all__ = (
"DaskMethodsMixin",
"annotate",
"is_dask_collection",
"compute",
"persist",
"optimize",
"visualize",
"tokenize",
"normalize_token",
)
@contextmanager
def annotate(**annotations):
"""Context Manager for setting HighLevelGraph Layer annotations.
Annotations are metadata or soft constraints associated with
tasks that dask schedulers may choose to respect: They signal intent
without enforcing hard constraints. As such, they are
primarily designed for use with the distributed scheduler.
Almost any object can serve as an annotation, but small Python objects
are preferred, while large objects such as NumPy arrays are discouraged.
Callables supplied as an annotation should take a single *key* argument and
produce the appropriate annotation. Individual task keys in the annotated collection
are supplied to the callable.
Parameters
----------
**annotations : key-value pairs
Examples
--------
All tasks within array A should have priority 100 and be retried 3 times
on failure.
>>> import dask
>>> import dask.array as da
>>> with dask.annotate(priority=100, retries=3):
... A = da.ones((10000, 10000))
Prioritise tasks within Array A on flattened block ID.
>>> nblocks = (10, 10)
>>> with dask.annotate(priority=lambda k: k[1]*nblocks[1] + k[2]):
... A = da.ones((1000, 1000), chunks=(100, 100))
Annotations may be nested.
>>> with dask.annotate(priority=1):
... with dask.annotate(retries=3):
... A = da.ones((1000, 1000))
... B = A + 1
"""
prev_annotations = config.get("annotations", {})
new_annotations = {
**prev_annotations,
**{f"annotations.{k}": v for k, v in annotations.items()},
}
with config.set(new_annotations):
yield
def is_dask_collection(x):
"""Returns ``True`` if ``x`` is a dask collection"""
try:
return x.__dask_graph__() is not None
except (AttributeError, TypeError):
return False
class DaskMethodsMixin(object):
"""A mixin adding standard dask collection methods"""
__slots__ = ()
def visualize(self, filename="mydask", format=None, optimize_graph=False, **kwargs):
"""Render the computation of this object's task graph using graphviz.
Requires ``graphviz`` to be installed.
Parameters
----------
filename : str or None, optional
The name of the file to write to disk. If the provided `filename`
doesn't include an extension, '.png' will be used by default.
If `filename` is None, no file will be written, and we communicate
with dot using only pipes.
format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
Format in which to write output file. Default is 'png'.
optimize_graph : bool, optional
If True, the graph is optimized before rendering. Otherwise,
the graph is displayed as is. Default is False.
color: {None, 'order'}, optional
Options to color nodes. Provide ``cmap=`` keyword for additional
colormap
**kwargs
Additional keyword arguments to forward to ``to_graphviz``.
Examples
--------
>>> x.visualize(filename='dask.pdf') # doctest: +SKIP
>>> x.visualize(filename='dask.pdf', color='order') # doctest: +SKIP
Returns
-------
result : IPython.display.Image, IPython.display.SVG, or None
See dask.dot.dot_graph for more information.
See Also
--------
dask.base.visualize
dask.dot.dot_graph
Notes
-----
For more information on optimization see here:
https://docs.dask.org/en/latest/optimize.html
"""
return visualize(
self,
filename=filename,
format=format,
optimize_graph=optimize_graph,
**kwargs,
)
def persist(self, **kwargs):
"""Persist this dask collection into memory
This turns a lazy Dask collection into a Dask collection with the same
metadata, but now with the results fully computed or actively computing
in the background.
The action of this function differs significantly depending on the active
task scheduler. If the task scheduler supports asynchronous computing,
such as is the case of the dask.distributed scheduler, then persist
will return *immediately* and the return value's task graph will
contain Dask Future objects. However if the task scheduler only
supports blocking computation then the call to persist will *block*
and the return value's task graph will contain concrete Python results.
This function is particularly useful when using distributed systems,
because the results will be kept in distributed memory, rather than
returned to the local process as with compute.
Parameters
----------
scheduler : string, optional
Which scheduler to use like "threads", "synchronous" or "processes".
If not provided, the default is to check the global settings first,
and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation.
Otherwise the graph is run as is. This can be useful for debugging.
**kwargs
Extra keywords to forward to the scheduler function.
Returns
-------
New dask collections backed by in-memory data
See Also
--------
dask.base.persist
"""
(result,) = persist(self, traverse=False, **kwargs)
return result
def compute(self, **kwargs):
"""Compute this dask collection
This turns a lazy Dask collection into its in-memory equivalent.
For example a Dask array turns into a NumPy array and a Dask dataframe
turns into a Pandas dataframe. The entire dataset must fit into memory
before calling this operation.
Parameters
----------
scheduler : string, optional
Which scheduler to use like "threads", "synchronous" or "processes".
If not provided, the default is to check the global settings first,
and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation.
Otherwise the graph is run as is. This can be useful for debugging.
kwargs
Extra keywords to forward to the scheduler function.
See Also
--------
dask.base.compute
"""
(result,) = compute(self, traverse=False, **kwargs)
return result
def __await__(self):
try:
from distributed import wait, futures_of
except ImportError as e:
raise ImportError(
"Using async/await with dask requires the `distributed` package"
) from e
from tornado import gen
@gen.coroutine
def f():
if futures_of(self):
yield wait(self)
raise gen.Return(self)
return f().__await__()
def compute_as_if_collection(cls, dsk, keys, scheduler=None, get=None, **kwargs):
"""Compute a graph as if it were of type cls.
Allows for applying the same optimizations and default scheduler."""
schedule = get_scheduler(scheduler=scheduler, cls=cls, get=get)
dsk2 = optimization_function(cls)(ensure_dict(dsk), keys, **kwargs)
return schedule(dsk2, keys, **kwargs)
def dont_optimize(dsk, keys, **kwargs):
return dsk
def optimization_function(x):
return getattr(x, "__dask_optimize__", dont_optimize)
def collections_to_dsk(collections, optimize_graph=True, **kwargs):
"""
Convert many collections into a single dask graph, after optimization
"""
from .highlevelgraph import HighLevelGraph
optimizations = kwargs.pop("optimizations", None) or config.get("optimizations", [])
if optimize_graph:
groups = groupby(optimization_function, collections)
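# Group collections by their __dask_optimize__ function so each optimization
# pass runs once per collection type rather than once per object.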
_opt_list = []
for opt, val in groups.items():
dsk, keys = _extract_graph_and_keys(val)
groups[opt] = (dsk, keys)
_opt = opt(dsk, keys, **kwargs)
_opt_list.append(_opt)
for opt in optimizations:
_opt_list = []
group = {}
for k, (dsk, keys) in groups.items():
_opt = opt(dsk, keys, **kwargs)
group[k] = (_opt, keys)
_opt_list.append(_opt)
groups = group
# Merge all graphs
if any(isinstance(graph, HighLevelGraph) for graph in _opt_list):
dsk = HighLevelGraph.merge(*_opt_list)
else:
dsk = merge(*map(ensure_dict, _opt_list))
else:
dsk, _ = _extract_graph_and_keys(collections)
return dsk
def _extract_graph_and_keys(vals):
"""Given a list of dask vals, return a single graph and a list of keys such
that ``get(dsk, keys)`` is equivalent to ``[v.compute() for v in vals]``."""
from .highlevelgraph import HighLevelGraph
graphs, keys = [], []
for v in vals:
graphs.append(v.__dask_graph__())
keys.append(v.__dask_keys__())
if any(isinstance(graph, HighLevelGraph) for graph in graphs):
graph = HighLevelGraph.merge(*graphs)
else:
graph = merge(*map(ensure_dict, graphs))
return graph, keys
def unpack_collections(*args, **kwargs):
"""Extract collections in preparation for compute/persist/etc...
Intended use is to find all collections in a set of (possibly nested)
python objects, do something to them (compute, etc...), then repackage them
in equivalent python objects.
Parameters
----------
*args
Any number of objects. If it is a dask collection, it's extracted and
added to the list of collections returned. By default, python builtin
collections are also traversed to look for dask collections (for more
information see the ``traverse`` keyword).
traverse : bool, optional
If True (default), builtin python collections are traversed looking for
any dask collections they might contain.
Returns
-------
collections : list
A list of all dask collections contained in ``args``
repack : callable
A function to call on the transformed collections to repackage them as
they were in the original ``args``.
"""
traverse = kwargs.pop("traverse", True)
collections = []
repack_dsk = {}
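# repack_dsk is itself a small task graph: it rebuilds the original (possibly
# nested) arguments, with every dask collection replaced by a lookup into the
# list of computed results keyed by collections_token.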
collections_token = uuid.uuid4().hex
def _unpack(expr):
if is_dask_collection(expr):
tok = tokenize(expr)
if tok not in repack_dsk:
repack_dsk[tok] = (getitem, collections_token, len(collections))
collections.append(expr)
return tok
tok = uuid.uuid4().hex
if not traverse:
tsk = quote(expr)
else:
# Treat iterators like lists
typ = list if isinstance(expr, Iterator) else type(expr)
if typ in (list, tuple, set):
tsk = (typ, [_unpack(i) for i in expr])
elif typ in (dict, OrderedDict):
tsk = (typ, [[_unpack(k), _unpack(v)] for k, v in expr.items()])
elif is_dataclass(expr) and not isinstance(expr, type):
tsk = (
apply,
typ,
(),
(
dict,
[
[f.name, _unpack(getattr(expr, f.name))]
for f in dataclass_fields(expr)
],
),
)
else:
return expr
repack_dsk[tok] = tsk
return tok
out = uuid.uuid4().hex
repack_dsk[out] = (tuple, [_unpack(i) for i in args])
def repack(results):
dsk = repack_dsk.copy()
dsk[collections_token] = quote(results)
return simple_get(dsk, out)
return collections, repack
def optimize(*args, **kwargs):
"""Optimize several dask collections at once.
Returns equivalent dask collections that all share the same merged and
optimized underlying graph. This can be useful if converting multiple
collections to delayed objects, or to manually apply the optimizations at
strategic points.
Note that in most cases you shouldn't need to call this method directly.
Parameters
----------
*args : objects
Any number of objects. If a dask object, its graph is optimized and
merged with all those of all other dask objects before returning an
equivalent dask collection. Non-dask arguments are passed through
unchanged.
traverse : bool, optional
By default dask traverses builtin python collections looking for dask
objects passed to ``optimize``. For large collections this can be
expensive. If none of the arguments contain any dask objects, set
``traverse=False`` to avoid doing this traversal.
optimizations : list of callables, optional
Additional optimization passes to perform.
**kwargs
Extra keyword arguments to forward to the optimization passes.
Examples
--------
>>> import dask as d
>>> import dask.array as da
>>> a = da.arange(10, chunks=2).sum()
>>> b = da.arange(10, chunks=2).mean()
>>> a2, b2 = d.optimize(a, b)
>>> a2.compute() == a.compute()
True
>>> b2.compute() == b.compute()
True
"""
collections, repack = unpack_collections(*args, **kwargs)
if not collections:
return args
dsk = collections_to_dsk(collections, **kwargs)
postpersists = []
for a in collections:
r, s = a.__dask_postpersist__()
postpersists.append(r(dsk, *s))
return repack(postpersists)
def compute(*args, **kwargs):
"""Compute several dask collections at once.
Parameters
----------
args : object
Any number of objects. If it is a dask object, it's computed and the
result is returned. By default, python builtin collections are also
traversed to look for dask objects (for more information see the
``traverse`` keyword). Non-dask arguments are passed through unchanged.
traverse : bool, optional
By default dask traverses builtin python collections looking for dask
objects passed to ``compute``. For large collections this can be
expensive. If none of the arguments contain any dask objects, set
``traverse=False`` to avoid doing this traversal.
scheduler : string, optional
Which scheduler to use like "threads", "synchronous" or "processes".
If not provided, the default is to check the global settings first,
and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the optimizations for each collection are applied
before computation. Otherwise the graph is run as is. This can be
useful for debugging.
kwargs
Extra keywords to forward to the scheduler function.
Examples
--------
>>> import dask as d
>>> import dask.array as da
>>> a = da.arange(10, chunks=2).sum()
>>> b = da.arange(10, chunks=2).mean()
>>> d.compute(a, b)
(45, 4.5)
By default, dask objects inside python collections will also be computed:
>>> d.compute({'a': a, 'b': b, 'c': 1})
({'a': 45, 'b': 4.5, 'c': 1},)
"""
traverse = kwargs.pop("traverse", True)
optimize_graph = kwargs.pop("optimize_graph", True)
collections, repack = unpack_collections(*args, traverse=traverse)
if not collections:
return args
schedule = get_scheduler(
scheduler=kwargs.pop("scheduler", None),
collections=collections,
get=kwargs.pop("get", None),
)
dsk = collections_to_dsk(collections, optimize_graph, **kwargs)
keys, postcomputes = [], []
for x in collections:
keys.append(x.__dask_keys__())
postcomputes.append(x.__dask_postcompute__())
results = schedule(dsk, keys, **kwargs)
return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
def visualize(*args, **kwargs):
"""
Visualize several dask graphs at once.
Requires ``graphviz`` to be installed. All options that are not the dask
graph(s) should be passed as keyword arguments.
Parameters
----------
dsk : dict(s) or collection(s)
The dask graph(s) to visualize.
filename : str or None, optional
The name of the file to write to disk. If the provided `filename`
doesn't include an extension, '.png' will be used by default.
If `filename` is None, no file will be written, and we communicate
with dot using only pipes.
format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
Format in which to write output file. Default is 'png'.
optimize_graph : bool, optional
If True, the graph is optimized before rendering. Otherwise,
the graph is displayed as is. Default is False.
color : {None, 'order'}, optional
Options to color nodes. Provide ``cmap=`` keyword for additional
colormap
collapse_outputs : bool, optional
Whether to collapse output boxes, which often have empty labels.
Default is False.
verbose : bool, optional
Whether to label output and input boxes even if the data aren't chunked.
Beware: these labels can get very long. Default is False.
**kwargs
Additional keyword arguments to forward to ``to_graphviz``.
Examples
--------
>>> x.visualize(filename='dask.pdf') # doctest: +SKIP
>>> x.visualize(filename='dask.pdf', color='order') # doctest: +SKIP
Returns
-------
result : IPython.display.Image, IPython.display.SVG, or None
See dask.dot.dot_graph for more information.
See Also
--------
dask.dot.dot_graph
Notes
-----
For more information on optimization see here:
https://docs.dask.org/en/latest/optimize.html
"""
from dask.dot import dot_graph
filename = kwargs.pop("filename", "mydask")
optimize_graph = kwargs.pop("optimize_graph", False)
dsks = []
args3 = []
for arg in args:
if isinstance(arg, (list, tuple, set)):
for a in arg:
if isinstance(a, Mapping):
dsks.append(a)
if is_dask_collection(a):
args3.append(a)
else:
if isinstance(arg, Mapping):
dsks.append(arg)
if is_dask_collection(arg):
args3.append(arg)
dsk = dict(collections_to_dsk(args3, optimize_graph=optimize_graph))
for d in dsks:
dsk.update(d)
color = kwargs.get("color")
if color == "order":
from .order import order
import matplotlib.pyplot as plt
o = order(dsk)
try:
cmap = kwargs.pop("cmap")
except KeyError:
cmap = plt.cm.RdBu
if isinstance(cmap, str):
import matplotlib.pyplot as plt
cmap = getattr(plt.cm, cmap)
mx = max(o.values()) + 1
colors = {k: _colorize(cmap(v / mx, bytes=True)) for k, v in o.items()}
kwargs["function_attributes"] = {
k: {"color": v, "label": str(o[k])} for k, v in colors.items()
}
kwargs["data_attributes"] = {k: {"color": v} for k, v in colors.items()}
elif color:
raise NotImplementedError("Unknown value color=%s" % color)
return dot_graph(dsk, filename=filename, **kwargs)
def persist(*args, **kwargs):
"""Persist multiple Dask collections into memory
This turns lazy Dask collections into Dask collections with the same
metadata, but now with their results fully computed or actively computing
in the background.
For example a lazy dask.array built up from many lazy calls will now be a
dask.array of the same shape, dtype, chunks, etc., but now with all of
those previously lazy tasks either computed in memory as many small :class:`numpy.array`
(in the single-machine case) or asynchronously running in the
background on a cluster (in the distributed case).
This function operates differently if a ``dask.distributed.Client`` exists
and is connected to a distributed scheduler. In this case this function
will return as soon as the task graph has been submitted to the cluster,
but before the computations have completed. Computations will continue
asynchronously in the background. When using this function with the single
machine scheduler it blocks until the computations have finished.
When using Dask on a single machine you should ensure that the dataset fits
entirely within memory.
Examples
--------
>>> df = dd.read_csv('/path/to/*.csv') # doctest: +SKIP
>>> df = df[df.name == 'Alice'] # doctest: +SKIP
>>> df['in-debt'] = df.balance < 0 # doctest: +SKIP
>>> df = df.persist() # triggers computation # doctest: +SKIP
>>> df.value().min() # future computations are now fast # doctest: +SKIP
-10
>>> df.value().max() # doctest: +SKIP
100
>>> from dask import persist # use persist function on multiple collections
>>> a, b = persist(a, b) # doctest: +SKIP
Parameters
----------
*args: Dask collections
scheduler : string, optional
Which scheduler to use like "threads", "synchronous" or "processes".
If not provided, the default is to check the global settings first,
and then fall back to the collection defaults.
traverse : bool, optional
By default dask traverses builtin python collections looking for dask
objects passed to ``persist``. For large collections this can be
expensive. If none of the arguments contain any dask objects, set
``traverse=False`` to avoid doing this traversal.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation.
Otherwise the graph is run as is. This can be useful for debugging.
**kwargs
Extra keywords to forward to the scheduler function.
Returns
-------
New dask collections backed by in-memory data
"""
traverse = kwargs.pop("traverse", True)
optimize_graph = kwargs.pop("optimize_graph", True)
collections, repack = unpack_collections(*args, traverse=traverse)
if not collections:
return args
schedule = get_scheduler(
scheduler=kwargs.pop("scheduler", None), collections=collections
)
if inspect.ismethod(schedule):
try:
from distributed.client import default_client
except ImportError:
pass
else:
try:
client = default_client()
except ValueError:
pass
else:
if client.get == schedule:
results = client.persist(
collections, optimize_graph=optimize_graph, **kwargs
)
return repack(results)
dsk = collections_to_dsk(collections, optimize_graph, **kwargs)
keys, postpersists = [], []
for a in collections:
a_keys = list(flatten(a.__dask_keys__()))
rebuild, state = a.__dask_postpersist__()
keys.extend(a_keys)
postpersists.append((rebuild, a_keys, state))
results = schedule(dsk, keys, **kwargs)
d = dict(zip(keys, results))
results2 = [r({k: d[k] for k in ks}, *s) for r, ks, s in postpersists]
return repack(results2)
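def _persist_example_sketch():
    # Illustrative sketch only (not part of the original module): when none of
    # the arguments are dask collections, ``persist`` hands them back
    # unchanged, so it is safe to call on plain Python objects.
    assert persist(1, "a") == (1, "a")
    # With real collections (e.g. dask.bag or dask.array) the same call would
    # return equivalent collections backed by already-computed results.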
############
# Tokenize #
############
def tokenize(*args, **kwargs):
"""Deterministic token
>>> tokenize([1, 2, '3'])
'7d6a880cd9ec03506eee6973ff551339'
>>> tokenize('Hello') == tokenize('Hello')
True
"""
if kwargs:
args = args + (kwargs,)
return md5(str(tuple(map(normalize_token, args))).encode()).hexdigest()
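def _tokenize_example_sketch():
    # Illustrative sketch only (not part of the original module): ``tokenize``
    # is deterministic within a process and folds keyword arguments into the
    # token, so equal inputs yield equal tokens and different inputs differ.
    assert tokenize([1, 2, "3"]) == tokenize([1, 2, "3"])
    assert tokenize(1, x=2) == tokenize(1, x=2)
    assert tokenize(1) != tokenize(2)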
normalize_token = Dispatch()
normalize_token.register(
(int, float, str, bytes, type(None), type, slice, complex, type(Ellipsis)), identity
)
@normalize_token.register(dict)
def normalize_dict(d):
return normalize_token(sorted(d.items(), key=str))
@normalize_token.register(OrderedDict)
def normalize_ordered_dict(d):
return type(d).__name__, normalize_token(list(d.items()))
@normalize_token.register(set)
def normalize_set(s):
return normalize_token(sorted(s, key=str))
@normalize_token.register((tuple, list))
def normalize_seq(seq):
def func(seq):
try:
return list(map(normalize_token, seq))
except RecursionError:
return str(uuid.uuid4())
return type(seq).__name__, func(seq)
@normalize_token.register(literal)
def normalize_literal(lit):
return "literal", normalize_token(lit())
@normalize_token.register(range)
def normalize_range(r):
return list(map(normalize_token, [r.start, r.stop, r.step]))
@normalize_token.register(object)
def normalize_object(o):
method = getattr(o, "__dask_tokenize__", None)
if method is not None:
return method()
return normalize_function(o) if callable(o) else uuid.uuid4().hex
function_cache = {}
function_cache_lock = threading.Lock()
def normalize_function(func):
try:
return function_cache[func]
except KeyError:
result = _normalize_function(func)
if len(function_cache) >= 500: # clear half of cache if full
with function_cache_lock:
if len(function_cache) >= 500:
for k in list(function_cache)[::2]:
del function_cache[k]
function_cache[func] = result
return result
except TypeError: # not hashable
return _normalize_function(func)
def _normalize_function(func):
if isinstance(func, Compose):
first = getattr(func, "first", None)
funcs = reversed((first,) + func.funcs) if first else func.funcs
return tuple(normalize_function(f) for f in funcs)
elif isinstance(func, (partial, curry)):
args = tuple(normalize_token(i) for i in func.args)
if func.keywords:
kws = tuple(
(k, normalize_token(v)) for k, v in sorted(func.keywords.items())
)
else:
kws = None
return (normalize_function(func.func), args, kws)
else:
try:
result = pickle.dumps(func, protocol=0)
if b"__main__" not in result: # abort on dynamic functions
return result
except Exception:
pass
try:
import cloudpickle
return cloudpickle.dumps(func, protocol=0)
except Exception:
return str(func)
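def _normalize_function_example_sketch():
    # Illustrative sketch only (not part of the original module): two
    # ``functools.partial`` objects over the same function with identical
    # arguments normalize to equal tokens, which is what makes them usable in
    # deterministic task names.
    p1 = partial(sorted, reverse=True)
    p2 = partial(sorted, reverse=True)
    assert normalize_function(p1) == normalize_function(p2)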
@normalize_token.register_lazy("pandas")
def register_pandas():
import pandas as pd
# Intentionally not importing PANDAS_GT_0240 from dask.dataframe._compat
# to avoid ImportErrors from extra dependencies
PANDAS_GT_0240 = LooseVersion(pd.__version__) >= LooseVersion("0.24.0")
@normalize_token.register(pd.Index)
def normalize_index(ind):
if PANDAS_GT_0240:
values = ind.array
else:
values = ind.values
return [ind.name, normalize_token(values)]
@normalize_token.register(pd.MultiIndex)
def normalize_index(ind):
codes = ind.codes if PANDAS_GT_0240 else ind.levels
return (
[ind.name]
+ [normalize_token(x) for x in ind.levels]
+ [normalize_token(x) for x in codes]
)
@normalize_token.register(pd.Categorical)
def normalize_categorical(cat):
return [normalize_token(cat.codes), normalize_token(cat.dtype)]
if PANDAS_GT_0240:
@normalize_token.register(pd.arrays.PeriodArray)
@normalize_token.register(pd.arrays.DatetimeArray)
@normalize_token.register(pd.arrays.TimedeltaArray)
def normalize_period_array(arr):
return [normalize_token(arr.asi8), normalize_token(arr.dtype)]
@normalize_token.register(pd.arrays.IntervalArray)
def normalize_interval_array(arr):
return [
normalize_token(arr.left),
normalize_token(arr.right),
normalize_token(arr.closed),
]
@normalize_token.register(pd.Series)
def normalize_series(s):
return [
s.name,
s.dtype,
normalize_token(s._data.blocks[0].values),
normalize_token(s.index),
]
@normalize_token.register(pd.DataFrame)
def normalize_dataframe(df):
data = [block.values for block in df._data.blocks]
data.extend([df.columns, df.index])
return list(map(normalize_token, data))
@normalize_token.register(pd.api.extensions.ExtensionArray)
def normalize_extension_array(arr):
import numpy as np
return normalize_token(np.asarray(arr))
# Dtypes
@normalize_token.register(pd.api.types.CategoricalDtype)
def normalize_categorical_dtype(dtype):
return [normalize_token(dtype.categories), normalize_token(dtype.ordered)]
@normalize_token.register(pd.api.extensions.ExtensionDtype)
def normalize_period_dtype(dtype):
return normalize_token(dtype.name)
@normalize_token.register_lazy("numpy")
def register_numpy():
import numpy as np
@normalize_token.register(np.ndarray)
def normalize_array(x):
if not x.shape:
return (x.item(), x.dtype)
if hasattr(x, "mode") and getattr(x, "filename", None):
if hasattr(x.base, "ctypes"):
offset = (
x.ctypes.get_as_parameter().value
- x.base.ctypes.get_as_parameter().value
)
else:
offset = 0 # root memmap's have mmap object as base
if hasattr(
x, "offset"
): # offset numpy used while opening, and not the offset to the beginning of the file
offset += getattr(x, "offset")
return (
x.filename,
os.path.getmtime(x.filename),
x.dtype,
x.shape,
x.strides,
offset,
)
if x.dtype.hasobject:
try:
try:
# string fast-path
data = hash_buffer_hex(
"-".join(x.flat).encode(
encoding="utf-8", errors="surrogatepass"
)
)
except UnicodeDecodeError:
# bytes fast-path
data = hash_buffer_hex(b"-".join(x.flat))
except (TypeError, UnicodeDecodeError):
try:
data = hash_buffer_hex(pickle.dumps(x, pickle.HIGHEST_PROTOCOL))
except Exception:
# pickling not supported, use UUID4-based fallback
data = uuid.uuid4().hex
else:
try:
data = hash_buffer_hex(x.ravel(order="K").view("i1"))
except (BufferError, AttributeError, ValueError):
data = hash_buffer_hex(x.copy().ravel(order="K").view("i1"))
return (data, x.dtype, x.shape, x.strides)
@normalize_token.register(np.matrix)
def normalize_matrix(x):
return type(x).__name__, normalize_array(x.view(type=np.ndarray))
normalize_token.register(np.dtype, repr)
normalize_token.register(np.generic, repr)
@normalize_token.register(np.ufunc)
def normalize_ufunc(x):
try:
name = x.__name__
if getattr(np, name) is x:
return "np." + name
except AttributeError:
return normalize_function(x)
@normalize_token.register_lazy("scipy")
def register_scipy():
import scipy.sparse as sp
def normalize_sparse_matrix(x, attrs):
return (
type(x).__name__,
normalize_seq((normalize_token(getattr(x, key)) for key in attrs)),
)
for cls, attrs in [
(sp.dia_matrix, ("data", "offsets", "shape")),
(sp.bsr_matrix, ("data", "indices", "indptr", "blocksize", "shape")),
(sp.coo_matrix, ("data", "row", "col", "shape")),
(sp.csr_matrix, ("data", "indices", "indptr", "shape")),
(sp.csc_matrix, ("data", "indices", "indptr", "shape")),
(sp.lil_matrix, ("data", "rows", "shape")),
]:
normalize_token.register(cls, partial(normalize_sparse_matrix, attrs=attrs))
@normalize_token.register(sp.dok_matrix)
def normalize_dok_matrix(x):
return type(x).__name__, normalize_token(sorted(x.items()))
def _colorize(t):
"""Convert (r, g, b) triple to "#RRGGBB" string
For use with ``visualize(color=...)``
Examples
--------
>>> _colorize((255, 255, 255))
'#FFFFFF'
>>> _colorize((0, 32, 128))
'#002080'
"""
t = t[:3]
i = sum(v * 256 ** (len(t) - i - 1) for i, v in enumerate(t))
h = hex(int(i))[2:].upper()
h = "0" * (6 - len(h)) + h
return "#" + h
named_schedulers = {
"sync": local.get_sync,
"synchronous": local.get_sync,
"single-threaded": local.get_sync,
"threads": threaded.get,
"threading": threaded.get,
}
try:
from dask import multiprocessing as dask_multiprocessing
except ImportError:
pass
else:
named_schedulers.update(
{
"processes": dask_multiprocessing.get,
"multiprocessing": dask_multiprocessing.get,
}
)
get_err_msg = """
The get= keyword has been removed.
Please use the scheduler= keyword instead with the name of
the desired scheduler like 'threads' or 'processes'
x.compute(scheduler='single-threaded')
x.compute(scheduler='threads')
x.compute(scheduler='processes')
or with a function that takes the graph and keys
x.compute(scheduler=my_scheduler_function)
or with a Dask client
x.compute(scheduler=client)
""".strip()
def get_scheduler(get=None, scheduler=None, collections=None, cls=None):
"""Get scheduler function
There are various ways to specify the scheduler to use:
1. Passing in scheduler= parameters
2. Passing these into global configuration
3. Using defaults of a dask collection
This function centralizes the logic to determine the right scheduler to use
from those many options
"""
if get:
raise TypeError(get_err_msg)
if scheduler is not None:
if callable(scheduler):
return scheduler
elif "Client" in type(scheduler).__name__ and hasattr(scheduler, "get"):
return scheduler.get
elif scheduler.lower() in named_schedulers:
return named_schedulers[scheduler.lower()]
elif scheduler.lower() in ("dask.distributed", "distributed"):
from distributed.worker import get_client
return get_client().get
else:
raise ValueError(
"Expected one of [distributed, %s]"
% ", ".join(sorted(named_schedulers))
)
# else: # try to connect to remote scheduler with this name
# return get_client(scheduler).get
if config.get("scheduler", None):
return get_scheduler(scheduler=config.get("scheduler", None))
if config.get("get", None):
raise ValueError(get_err_msg)
if getattr(thread_state, "key", False):
from distributed.worker import get_worker
return get_worker().client.get
if cls is not None:
return cls.__dask_scheduler__
if collections:
collections = [c for c in collections if c is not None]
if collections:
get = collections[0].__dask_scheduler__
if not all(c.__dask_scheduler__ == get for c in collections):
raise ValueError(
"Compute called on multiple collections with "
"differing default schedulers. Please specify a "
"scheduler=` parameter explicitly in compute or "
"globally with `dask.config.set`."
)
return get
return None
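def _get_scheduler_example_sketch():
    # Illustrative sketch only (not part of the original module): the explicit
    # ``scheduler=`` paths described in the docstring above.  A callable is
    # returned unchanged, and named schedulers resolve through the
    # ``named_schedulers`` table defined earlier in this module.
    def my_scheduler(dsk, keys, **kwargs):
        return None
    assert get_scheduler(scheduler=my_scheduler) is my_scheduler
    assert get_scheduler(scheduler="threads") is named_schedulers["threads"]
    assert get_scheduler(scheduler="sync") is named_schedulers["sync"]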
def wait(x, timeout=None, return_when="ALL_COMPLETED"):
"""Wait until computation has finished
This is a compatibility alias for ``dask.distributed.wait``.
If it is applied onto Dask collections without Dask Futures or if Dask
distributed is not installed then it is a no-op
"""
try:
from distributed import wait
return wait(x, timeout=timeout, return_when=return_when)
except (ImportError, ValueError):
return x
| bsd-3-clause |
henridwyer/scikit-learn | examples/covariance/plot_covariance_estimation.py | 250 | 5070 | """
=======================================================================
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
=======================================================================
When working with covariance estimation, the usual approach is to use
a maximum likelihood estimator, such as the
:class:`sklearn.covariance.EmpiricalCovariance`. It is unbiased, i.e. it
converges to the true (population) covariance when given many
observations. However, it can also be beneficial to regularize it, in
order to reduce its variance; this, in turn, introduces some bias. This
example illustrates the simple regularization used in
:ref:`shrunk_covariance` estimators. In particular, it focuses on how to
set the amount of regularization, i.e. how to choose the bias-variance
trade-off.
Here we compare 3 approaches:
* Setting the parameter by cross-validating the likelihood on three folds
according to a grid of potential shrinkage parameters.
* A close formula proposed by Ledoit and Wolf to compute
the asymptotically optimal regularization parameter (minimizing a MSE
criterion), yielding the :class:`sklearn.covariance.LedoitWolf`
covariance estimate.
* An improvement of the Ledoit-Wolf shrinkage, the
:class:`sklearn.covariance.OAS`, proposed by Chen et al. Its
convergence is significantly better under the assumption that the data
are Gaussian, in particular for small samples.
To quantify estimation error, we plot the likelihood of unseen data for
different values of the shrinkage parameter. We also show the choices by
cross-validation, or with the LedoitWolf and OAS estimates.
Note that the maximum likelihood estimate corresponds to no shrinkage,
and thus performs poorly. The Ledoit-Wolf estimate performs really well,
as it is close to the optimal and is computationally not costly. In this
example, the OAS estimate is a bit further away. Interestingly, both
approaches outperform cross-validation, which is significantly more
computationally costly.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from sklearn.covariance import LedoitWolf, OAS, ShrunkCovariance, \
log_likelihood, empirical_covariance
from sklearn.grid_search import GridSearchCV
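def _shrunk_covariance_sketch(emp_cov, shrinkage=0.1):
    # Illustrative sketch only (not part of the original example): the simple
    # shrinkage rule this example tunes, assuming the usual target
    # mu * identity with mu = trace(emp_cov) / n_features:
    #     shrunk = (1 - shrinkage) * emp_cov + shrinkage * mu * I
    n_features_ = emp_cov.shape[0]
    mu = np.trace(emp_cov) / n_features_
    return (1. - shrinkage) * emp_cov + shrinkage * mu * np.identity(n_features_)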
###############################################################################
# Generate sample data
n_features, n_samples = 40, 20
np.random.seed(42)
base_X_train = np.random.normal(size=(n_samples, n_features))
base_X_test = np.random.normal(size=(n_samples, n_features))
# Color samples
coloring_matrix = np.random.normal(size=(n_features, n_features))
X_train = np.dot(base_X_train, coloring_matrix)
X_test = np.dot(base_X_test, coloring_matrix)
###############################################################################
# Compute the likelihood on test data
# spanning a range of possible shrinkage coefficient values
shrinkages = np.logspace(-2, 0, 30)
negative_logliks = [-ShrunkCovariance(shrinkage=s).fit(X_train).score(X_test)
for s in shrinkages]
# under the ground-truth model, which we would not have access to in real
# settings
real_cov = np.dot(coloring_matrix.T, coloring_matrix)
emp_cov = empirical_covariance(X_train)
loglik_real = -log_likelihood(emp_cov, linalg.inv(real_cov))
###############################################################################
# Compare different approaches to setting the parameter
# GridSearch for an optimal shrinkage coefficient
tuned_parameters = [{'shrinkage': shrinkages}]
cv = GridSearchCV(ShrunkCovariance(), tuned_parameters)
cv.fit(X_train)
# Ledoit-Wolf optimal shrinkage coefficient estimate
lw = LedoitWolf()
loglik_lw = lw.fit(X_train).score(X_test)
# OAS coefficient estimate
oa = OAS()
loglik_oa = oa.fit(X_train).score(X_test)
###############################################################################
# Plot results
fig = plt.figure()
plt.title("Regularized covariance: likelihood and shrinkage coefficient")
plt.xlabel('Regularization parameter: shrinkage coefficient')
plt.ylabel('Error: negative log-likelihood on test data')
# range shrinkage curve
plt.loglog(shrinkages, negative_logliks, label="Negative log-likelihood")
plt.plot(plt.xlim(), 2 * [loglik_real], '--r',
label="Real covariance likelihood")
# adjust view
lik_max = np.amax(negative_logliks)
lik_min = np.amin(negative_logliks)
ymin = lik_min - 6. * np.log((plt.ylim()[1] - plt.ylim()[0]))
ymax = lik_max + 10. * np.log(lik_max - lik_min)
xmin = shrinkages[0]
xmax = shrinkages[-1]
# LW likelihood
plt.vlines(lw.shrinkage_, ymin, -loglik_lw, color='magenta',
linewidth=3, label='Ledoit-Wolf estimate')
# OAS likelihood
plt.vlines(oa.shrinkage_, ymin, -loglik_oa, color='purple',
linewidth=3, label='OAS estimate')
# best CV estimator likelihood
plt.vlines(cv.best_estimator_.shrinkage, ymin,
-cv.best_estimator_.score(X_test), color='cyan',
linewidth=3, label='Cross-validation best estimate')
plt.ylim(ymin, ymax)
plt.xlim(xmin, xmax)
plt.legend()
plt.show()
| bsd-3-clause |
lucidfrontier45/scikit-learn | examples/covariance/plot_covariance_estimation.py | 2 | 4991 | """
=======================================================================
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
=======================================================================
The usual estimator for covariance is the maximum likelihood estimator,
:class:`sklearn.covariance.EmpiricalCovariance`. It is unbiased, i.e. it
converges to the true (population) covariance when given many
observations. However, it can also be beneficial to regularize it, in
order to reduce its variance; this, in turn, introduces some bias. This
example illustrates the simple regularization used in
:ref:`shrunk_covariance` estimators. In particular, it focuses on how to
set the amount of regularization, i.e. how to choose the bias-variance
trade-off.
Here we compare 3 approaches:
* Setting the parameter by cross-validating the likelihood on three folds
according to a grid of potential shrinkage parameters.
* A close formula proposed by Ledoit and Wolf to compute
the asymptotically optimal regularization parameter (minimizing a MSE
criterion), yielding the :class:`sklearn.covariance.LedoitWolf`
covariance estimate.
* An improvement of the Ledoit-Wolf shrinkage, the
:class:`sklearn.covariance.OAS`, proposed by Chen et al. Its
convergence is significantly better under the assumption that the data
are Gaussian, in particular for small samples.
To quantify estimation error, we plot the likelihood of unseen data for
different values of the shrinkage parameter. We also show the choices by
cross-validation, or with the LedoitWolf and OAS estimates.
Note that the maximum likelihood estimate corresponds to no shrinkage,
and thus performs poorly. The Ledoit-Wolf estimate performs really well,
as it is close to the optimal and is computationally not costly. In this
example, the OAS estimate is a bit further away. Interestingly, both
approaches outperform cross-validation, which is significantly more
computationally costly.
"""
print __doc__
import numpy as np
import pylab as pl
from scipy import linalg
from sklearn.covariance import LedoitWolf, OAS, ShrunkCovariance, \
log_likelihood, empirical_covariance
from sklearn.grid_search import GridSearchCV
###############################################################################
# Generate sample data
n_features, n_samples = 40, 20
np.random.seed(42)
base_X_train = np.random.normal(size=(n_samples, n_features))
base_X_test = np.random.normal(size=(n_samples, n_features))
# Color samples
coloring_matrix = np.random.normal(size=(n_features, n_features))
X_train = np.dot(base_X_train, coloring_matrix)
X_test = np.dot(base_X_test, coloring_matrix)
###############################################################################
# Compute the likelihood on test data
# spanning a range of possible shrinkage coefficient values
shrinkages = np.logspace(-2, 0, 30)
negative_logliks = [-ShrunkCovariance(shrinkage=s).fit(X_train).score(X_test)
for s in shrinkages]
# under the ground-truth model, which we would not have access to in real
# settings
real_cov = np.dot(coloring_matrix.T, coloring_matrix)
emp_cov = empirical_covariance(X_train)
loglik_real = -log_likelihood(emp_cov, linalg.inv(real_cov))
###############################################################################
# Compare different approaches to setting the parameter
# GridSearch for an optimal shrinkage coefficient
tuned_parameters = [{'shrinkage': shrinkages}]
cv = GridSearchCV(ShrunkCovariance(), tuned_parameters)
cv.fit(X_train)
# Ledoit-Wolf optimal shrinkage coefficient estimate
lw = LedoitWolf()
loglik_lw = lw.fit(X_train).score(X_test)
# OAS coefficient estimate
oa = OAS()
loglik_oa = oa.fit(X_train).score(X_test)
###############################################################################
# Plot results
fig = pl.figure()
pl.title("Regularized covariance: likelihood and shrinkage coefficient")
pl.xlabel('Regularization parameter: shrinkage coefficient')
pl.ylabel('Error: negative log-likelihood on test data')
# range shrinkage curve
pl.loglog(shrinkages, negative_logliks, label="Negative log-likelihood")
pl.plot(pl.xlim(), 2 * [loglik_real], '--r',
label="Real covariance likelihood")
# adjust view
lik_max = np.amax(negative_logliks)
lik_min = np.amin(negative_logliks)
ymin = lik_min - 6. * np.log((pl.ylim()[1] - pl.ylim()[0]))
ymax = lik_max + 10. * np.log(lik_max - lik_min)
xmin = shrinkages[0]
xmax = shrinkages[-1]
# LW likelihood
pl.vlines(lw.shrinkage_, ymin, -loglik_lw, color='magenta',
linewidth=3, label='Ledoit-Wolf estimate')
# OAS likelihood
pl.vlines(oa.shrinkage_, ymin, -loglik_oa, color='purple',
linewidth=3, label='OAS estimate')
# best CV estimator likelihood
pl.vlines(cv.best_estimator_.shrinkage, ymin,
-cv.best_estimator_.score(X_test), color='cyan',
linewidth=3, label='Cross-validation best estimate')
pl.ylim(ymin, ymax)
pl.xlim(xmin, xmax)
pl.legend()
pl.show()
| bsd-3-clause |
rmccoy7541/egillettii-rnaseq | scripts/snp_performance_analysis.py | 1 | 3682 | #! /bin/env python
import sys
from optparse import OptionParser
import copy
import matplotlib
matplotlib.use('Agg')
import pylab
import scipy.optimize
import numpy
from numpy import array
import dadi
import os
#call ms program from within dadi, using optimized parameters (converted to ms units)
core = "-n 1 0.922 -n 2 0.104 -ej 0.0330 2 1 -en 0.0330 1 1"
command = dadi.Misc.ms_command(100000, (12,12), core, 1, 2000)
ms_fs = dadi.Spectrum.from_ms_file(os.popen(command))
#modify the following line to adjust the sample size of SNPs used for inference
scaled_ms_fs = ms_fs.fixed_size_sample(2000)
scaled_ms_fs = scaled_ms_fs.fold()
#import demographic models
import gillettii_models
def runModel(outFile, nuW_start, nuC_start, T_start):
# Extract the spectrum from ms output
fs = scaled_ms_fs
ns = fs.sample_sizes
print 'sample sizes:', ns
# These are the grid point settings will use for extrapolation.
pts_l = [20,30,40]
# suggested that the smallest grid be slightly larger than the largest sample size. But this may take a long time.
# bottleneck_split model
func = gillettii_models.bottleneck_split
params = array([nuW_start, nuC_start, T_start])
upper_bound = [30, 10, 10]
lower_bound = [1e-5, 1e-10, 0]
# Make the extrapolating version of the demographic model function.
func_ex = dadi.Numerics.make_extrap_func(func)
# Calculate the model AFS
model = func_ex(params, ns, pts_l)
# Calculate likelihood of the data given the model AFS
# Likelihood of the data given the model AFS.
ll_model = dadi.Inference.ll_multinom(model, fs)
print 'Model log-likelihood:', ll_model, "\n"
# The optimal value of theta given the model.
theta = dadi.Inference.optimal_sfs_scaling(model, fs)
p0 = dadi.Misc.perturb_params(params, fold=1, lower_bound=lower_bound, upper_bound=upper_bound)
print 'perturbed parameters: ', p0, "\n"
popt = dadi.Inference.optimize_log_fmin(p0, fs, func_ex, pts_l, upper_bound=upper_bound, lower_bound=lower_bound, maxiter=None, verbose=len(params))
print 'Optimized parameters:', repr(popt), "\n"
#use the optimized parameters in a new model to try to get the parameters to converge
new_model = func_ex(popt, ns, pts_l)
ll_opt = dadi.Inference.ll_multinom(new_model, fs)
print 'Optimized log-likelihood:', ll_opt, "\n"
# Write the parameters and log-likelihood to the outFile
s = str(nuW_start) + '\t' + str(nuC_start) + '\t' + str(T_start) + '\t'
for i in range(0, len(popt)):
s += str(popt[i]) + '\t'
s += str(ll_opt) + '\n'
outFile.write(s)
#################
def mkOptionParser():
""" Defines options and returns parser """
usage = """%prog <outFN> <nuW_start> <nuC_start> <T_start>
%prog performs demographic inference on gillettii RNA-seq data. """
parser = OptionParser(usage)
return parser
def main():
""" see usage in mkOptionParser. """
parser = mkOptionParser()
options, args= parser.parse_args()
if len(args) != 4:
parser.error("Incorrect number of arguments")
outFN = args[0]
nuW_start = float(args[1])
nuC_start = float(args[2])
T_start = float(args[3])
if outFN == '-':
outFile = sys.stdout
else:
outFile = open(outFN, 'a')
runModel(outFile, nuW_start, nuC_start, T_start)
#run main
if __name__ == '__main__':
main()
| mit |
go-smart/glossia-quickstart | code/problem.py | 1 | 13906 | """This requires the CGAL mesher applied to a series of surfaces. See readme.txt for details.
"""
from __future__ import print_function
# Use FEniCS for Finite Element
import fenics as d
# Useful to import the derivative separately
from dolfin import dx
# Useful numerical libraries
import numpy as N
import matplotlib
matplotlib.use('SVG')
import matplotlib.pyplot as P
# General tools
import os
import subprocess
import shutil
# UFL
import ufl
# Set interactive plotting on
P.ion()
# Use a separate Python file to declare variables
import variables as v
import vtk_tools
input_mesh = "input"
class IREProblem:
"""class IREProblem()
This represents a Finite Element IRE problem using a similar algorithm to that of ULJ
"""
def __init__(self):
pass
def load(self):
# Convert mesh from MSH to Dolfin-XML
shutil.copyfile("input/%s.msh" % input_mesh, "%s.msh" % input_mesh)
destination_xml = "%s.xml" % input_mesh
subprocess.call(["dolfin-convert", "%s.msh" % input_mesh, destination_xml])
# Load mesh and boundaries
mesh = d.Mesh(destination_xml)
self.patches = d.MeshFunction("size_t", mesh, "%s_facet_region.xml" % input_mesh)
self.subdomains = d.MeshFunction("size_t", mesh, "%s_physical_region.xml" % input_mesh)
# Define differential over subdomains
self.dxs = d.dx[self.subdomains]
# Turn subdomains into a Numpy array
self.subdomains_array = N.asarray(self.subdomains.array(), dtype=N.int32)
# Create a map from subdomain indices to tissues
self.tissues_by_subdomain = {}
for i, t in v.tissues.items():
print(i, t)
for j in t["indices"]:
self.tissues_by_subdomain[j] = t
self.mesh = mesh
self.setup_fe()
self.prepare_increase_conductivity()
def load_patient_data(self):
indicators = {}
for subdomain in ("liver", "vessels", "tumour"):
values = N.empty((v.dim_height, v.dim_width, v.dim_depth), dtype='uintp')
for i in range(0, v.dim_depth):
slice = N.loadtxt(os.path.join(
v.patient_data_location,
"patient-%s.%d.txt" % (subdomain, i + 1))
)
values[:, :, i] = slice.astype('uintp')
indicators[subdomain] = values
self.indicators = indicators
def interpolate_to_patient_data(self, function, indicator):
values = N.empty((v.dim_height, v.dim_width, v.dim_depth), dtype='float')
it = N.nditer(values, flags=['multi_index'])
u = N.empty((1,))
x = N.empty((3,))
delta = (v.delta_height, v.delta_width, v.delta_depth)
offset = (v.offset_x, v.offset_y, v.offset_z)
while not it.finished:
if indicator[it.multi_index] != 1:
it.iternext()
continue
x[0] = it.multi_index[1] * delta[1] - offset[0]
x[1] = it.multi_index[0] * delta[0] - offset[1]
x[2] = it.multi_index[2] * delta[2] - offset[2]
function.eval(u, x)
values[...] = u[0]
it.iternext()
return values
def setup_fe(self):
# Define the relevant function spaces
V = d.FunctionSpace(self.mesh, "Lagrange", 1)
self.V = V
# DG0 is useful for defining piecewise constant functions
DV = d.FunctionSpace(self.mesh, "Discontinuous Lagrange", 0)
self.DV = DV
# Define test and trial functions for FE
self.z = d.TrialFunction(self.V)
self.w = d.TestFunction(self.V)
def per_tissue_constant(self, generator):
fefunction = d.Function(self.DV)
generated_values = dict((l, generator(l)) for l in N.unique(self.subdomains_array))
vector = N.vectorize(generated_values.get)
fefunction.vector()[:] = vector(self.subdomains_array)
return fefunction
def get_tumour_volume(self):
# Perhaps there is a prettier way, but integrate a unit function over the tumour tets
one = d.Function(self.V)
one.vector()[:] = 1
return sum(d.assemble(one * self.dxs(i)) for i in v.tissues["tumour"]["indices"])
def save_lesion(self):
final_filename = "results/%s-max_e%06d.vtu" % (input_mesh, self.max_e_count)
shutil.copyfile(final_filename, "../lesion_volume.vtu")
destination = "../lesion_surface.vtp"
vtk_tools.save_lesion(destination, final_filename, "max_E", (80, None))
print("Output file to %s?" % destination, os.path.exists(destination))
def solve(self):
# TODO: when FEniCS ported to Python3, this should be exist_ok
try:
os.makedirs('results')
except OSError:
pass
z, w = (self.z, self.w)
u0 = d.Constant(0.0)
# Define the linear and bilinear forms
L = u0 * w * dx
# Define useful functions
cond = d.Function(self.DV)
U = d.Function(self.V)
# Initialize the max_e vector, that will store the cumulative max e values
max_e = d.Function(self.V)
max_e.vector()[:] = 0.0
max_e.rename("max_E", "Maximum energy deposition by location")
max_e_file = d.File("results/%s-max_e.pvd" % input_mesh)
max_e_per_step = d.Function(self.V)
max_e_per_step_file = d.File("results/%s-max_e_per_step.pvd" % input_mesh)
self.es = {}
self.max_es = {}
fi = d.File("results/%s-cond.pvd" % input_mesh)
potential_file = d.File("results/%s-potential.pvd" % input_mesh)
# Loop through the voltages and electrode combinations
for i, (anode, cathode, voltage) in enumerate(v.electrode_triples):
print("Electrodes %d (%lf) -> %d (0)" % (anode, voltage, cathode))
cond = d.project(self.sigma_start, V=self.DV)
# Define the Dirichlet boundary conditions on the active needles
uV = d.Constant(voltage)
term1_bc = d.DirichletBC(self.V, uV, self.patches, v.needles[anode])
term2_bc = d.DirichletBC(self.V, u0, self.patches, v.needles[cathode])
e = d.Function(self.V)
e.vector()[:] = max_e.vector()
# Re-evaluate conductivity
self.increase_conductivity(cond, e)
for j in range(v.max_restarts):
# Update the bilinear form
a = d.inner(d.nabla_grad(z), cond * d.nabla_grad(w)) * dx
# Solve again
print(" [solving...")
d.solve(a == L, U, bcs=[term1_bc, term2_bc])
print(" ....solved]")
# Extract electric field norm
for k in range(len(U.vector())):
if N.isnan(U.vector()[k]):
U.vector()[k] = 1e5
e_new = d.project(d.sqrt(d.dot(d.grad(U), d.grad(U))), self.V)
# Take the max of the new field and the established electric field
e.vector()[:] = N.array([max(*X) for X in zip(e.vector(), e_new.vector())])
# Re-evaluate conductivity
fi << cond
self.increase_conductivity(cond, e)
potential_file << U
# Save the max e function to a VTU
max_e_per_step.vector()[:] = e.vector()[:]
max_e_per_step_file << max_e_per_step
# Store this electric field norm, for this triple, for later reference
self.es[i] = e
# Store the max of this electric field norm and that for all previous triples
max_e_array = N.array([max(*X) for X in zip(max_e.vector(), e.vector())])
max_e.vector()[:] = max_e_array
# Create a new max_e function for storage, or it will be overwritten by the next iteration
max_e_new = d.Function(self.V)
max_e_new.vector()[:] = max_e_array
# Store this max e function for the cumulative coverage curve calculation later
self.max_es[i] = max_e_new
# Save the max e function to a VTU
max_e_file << max_e
self.max_e_count = i
def prepare_increase_conductivity(self):
def sigma_function(l, i):
s = self.tissues_by_subdomain[l]["sigma"]
if isinstance(s, list):
return s[i]
else:
return s
def threshold_function(l, i):
s = self.tissues_by_subdomain[l]["sigma"]
if isinstance(s, list):
return self.tissues_by_subdomain[l][i]
else:
return 1 if i == "threshold reversible" else 0
self.sigma_start = self.per_tissue_constant(lambda l: sigma_function(l, 0))
self.sigma_end = self.per_tissue_constant(lambda l: sigma_function(l, 1))
self.threshold_reversible = self.per_tissue_constant(lambda l: threshold_function(l, "threshold reversible"))
self.threshold_irreversible = self.per_tissue_constant(lambda l: threshold_function(l, "threshold irreversible"))
self.k = (self.sigma_end - self.sigma_start) / (self.threshold_irreversible - self.threshold_reversible)
self.h = self.sigma_start - self.k * self.threshold_reversible
def increase_conductivity(self, cond, e):
# Set up the three way choice function
intermediate = e * self.k + self.h
not_less_than = ufl.conditional(ufl.gt(e, self.threshold_irreversible), self.sigma_end, intermediate)
cond_expression = ufl.conditional(ufl.lt(e, self.threshold_reversible), self.sigma_start, not_less_than)
# Project this onto the function space
cond_function = d.project(ufl.Max(cond_expression, cond), cond.function_space())
cond.assign(cond_function)
def plot_bitmap_result(self):
# Create a horizontal axis
cc_haxis = N.linspace(5000, 1e5, 200)
# Import the binary data indicating the location of structures
self.load_patient_data()
# Calculate the tumour volume; this is what we will compare against
tumour_volume = (self.indicators["tumour"] == 1).sum()
        # Initialize the output_arrays vector and rescale the x to V/cm
output_arrays = [cc_haxis / 100]
# Loop through the electrode triples
for i, triple in enumerate(v.electrode_triples):
# Project the max e values for this triple to DG0 - this forces an evaluation of the function at the mid-point of each tet, DG0's only DOF
e_dg = self.interpolate_to_patient_data(self.max_es[i], self.indicators["tumour"])
# Sum the tet volumes for tets with a midpoint value greater than x, looping over x as e-norm thresholds (also scale to tumour volume)
elim = N.vectorize(lambda x: (e_dg > x).sum() / tumour_volume)
output_arrays.append(elim(cc_haxis))
# Compile into a convenient array
        output = N.array(list(zip(*output_arrays)))
# Output cumulative coverage curves as CSV
N.savetxt('results/%s-coverage_curves_bitmap.csv' % input_mesh, output)
# Plot the coverage curves
for (anode, cathode, voltage), a in zip(v.electrode_triples, output_arrays[1:]):
P.plot(output_arrays[0], a, label="%d - %d" % (anode, cathode))
# Draw the plot
P.draw()
P.title(r"Bitmap-based")
P.xlabel(r"Threshold level of $|E|$ ($\mathrm{J}$)")
P.ylabel(r"Fraction of tumour beneath level")
# Show a legend for the plot
P.legend(loc=3)
# Display the plot
P.show(block=True)
def plot_result(self):
# Calculate preliminary relationships
dofmap = self.DV.dofmap()
cell_dofs = N.array([dofmap.cell_dofs(c)[0] for c in N.arange(self.mesh.num_cells()) if (self.subdomains[c] in v.tissues["tumour"]["indices"])])
volumes = N.array([d.Cell(self.mesh, c).volume() for c in N.arange(self.mesh.num_cells()) if (self.subdomains[c] in v.tissues["tumour"]["indices"])])
# Create a horizontal axis
cc_haxis = N.linspace(5000, 1e5, 200)
# Calculate the tumour volume; this is what we will compare against
tumour_volume = self.get_tumour_volume()
        # Initialize the output_arrays vector and rescale the x to V/cm
output_arrays = [cc_haxis / 100]
# Loop through the electrode pairs
for i, triple in enumerate(v.electrode_triples):
# Project the max e values for this triple to DG0 - this forces an evaluation of the function at the mid-point of each tet, DG0's only DOF
e_dg = d.project(self.max_es[i], self.DV)
# Calculate the "max e" contribution for each cell
contributor = N.vectorize(lambda c: e_dg.vector()[c])
contributions = contributor(cell_dofs)
# Sum the tet volumes for tets with a midpoint value greater than x, looping over x as e-norm thresholds (also scale to tumour volume)
elim = N.vectorize(lambda x: volumes[contributions > x].sum() / tumour_volume)
output_arrays.append(elim(cc_haxis))
# Compile into a convenient array
        output = N.array(list(zip(*output_arrays)))
# Output cumulative coverage curves as CSV
N.savetxt('results/%s-coverage_curves.csv' % input_mesh, output)
# Plot the coverage curves
for (anode, cathode, voltage), a in zip(v.electrode_triples, output_arrays[1:]):
P.plot(output_arrays[0], a, label="%d - %d" % (anode, cathode))
# Draw the plot
P.draw()
P.xlabel(r"Threshold level of $|E|$ ($\mathrm{J}$)")
P.ylabel(r"Fraction of tumour beneath level")
# Show a legend for the plot
P.legend(loc=3)
# Display the plot
P.savefig('%s-coverage_curves' % input_mesh)
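def _conductivity_ramp_sketch(e_values, sigma_start, sigma_end, thr_rev, thr_irrev):
    # Illustrative sketch only (not part of the original code): the three-way
    # conductivity law used in IREProblem.increase_conductivity, written with
    # plain numpy for arrays of electric-field norms and scalar tissue
    # parameters (all arguments here are hypothetical inputs).
    k = (sigma_end - sigma_start) / (thr_irrev - thr_rev)
    h = sigma_start - k * thr_rev
    intermediate = e_values * k + h
    return N.where(e_values < thr_rev, sigma_start,
                   N.where(e_values > thr_irrev, sigma_end, intermediate))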
| mit |
zhoulingjun/zipline | zipline/utils/security_list.py | 18 | 4472 | from datetime import datetime
from os import listdir
import os.path
import pandas as pd
import pytz
import zipline
from zipline.finance.trading import with_environment
DATE_FORMAT = "%Y%m%d"
zipline_dir = os.path.dirname(zipline.__file__)
SECURITY_LISTS_DIR = os.path.join(zipline_dir, 'resources', 'security_lists')
class SecurityList(object):
def __init__(self, data, current_date_func):
"""
data: a nested dictionary:
knowledge_date -> lookup_date ->
{add: [symbol list], 'delete': []}, delete: [symbol list]}
current_date_func: function taking no parameters, returning
current datetime
"""
self.data = data
self._cache = {}
self._knowledge_dates = self.make_knowledge_dates(self.data)
self.current_date = current_date_func
self.count = 0
self._current_set = set()
def make_knowledge_dates(self, data):
knowledge_dates = sorted(
[pd.Timestamp(k) for k in data.keys()])
return knowledge_dates
def __iter__(self):
return iter(self.restricted_list)
def __contains__(self, item):
return item in self.restricted_list
@property
def restricted_list(self):
cd = self.current_date()
for kd in self._knowledge_dates:
if cd < kd:
break
if kd in self._cache:
self._current_set = self._cache[kd]
continue
for effective_date, changes in iter(self.data[kd].items()):
self.update_current(
effective_date,
changes['add'],
self._current_set.add
)
self.update_current(
effective_date,
changes['delete'],
self._current_set.remove
)
self._cache[kd] = self._current_set
return self._current_set
@with_environment()
def update_current(self, effective_date, symbols, change_func, env=None):
for symbol in symbols:
asset = env.asset_finder.lookup_symbol(
symbol,
as_of_date=effective_date
)
# Pass if no Asset exists for the symbol
if asset is None:
continue
change_func(asset.sid)
class SecurityListSet(object):
# provide a cut point to substitute other security
# list implementations.
security_list_type = SecurityList
def __init__(self, current_date_func):
self.current_date_func = current_date_func
self._leveraged_etf = None
@property
def leveraged_etf_list(self):
if self._leveraged_etf is None:
self._leveraged_etf = self.security_list_type(
load_from_directory('leveraged_etf_list'),
self.current_date_func
)
return self._leveraged_etf
def load_from_directory(list_name):
"""
To resolve the symbol in the LEVERAGED_ETF list,
the date on which the symbol was in effect is needed.
Furthermore, to maintain a point in time record of our own maintenance
of the restricted list, we need a knowledge date. Thus, restricted lists
are dictionaries of datetime->symbol lists.
    New symbols should be entered as a new knowledge date entry.
This method assumes a directory structure of:
SECURITY_LISTS_DIR/listname/knowledge_date/lookup_date/add.txt
SECURITY_LISTS_DIR/listname/knowledge_date/lookup_date/delete.txt
The return value is a dictionary with:
knowledge_date -> lookup_date ->
{add: [symbol list], 'delete': [symbol list]}
"""
data = {}
dir_path = os.path.join(SECURITY_LISTS_DIR, list_name)
for kd_name in listdir(dir_path):
kd = datetime.strptime(kd_name, DATE_FORMAT).replace(
tzinfo=pytz.utc)
data[kd] = {}
kd_path = os.path.join(dir_path, kd_name)
for ld_name in listdir(kd_path):
ld = datetime.strptime(ld_name, DATE_FORMAT).replace(
tzinfo=pytz.utc)
data[kd][ld] = {}
ld_path = os.path.join(kd_path, ld_name)
for fname in listdir(ld_path):
fpath = os.path.join(ld_path, fname)
with open(fpath) as f:
symbols = f.read().splitlines()
data[kd][ld][fname] = symbols
return data
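def _load_from_directory_shape_sketch():
    # Illustrative sketch only (not part of the original module): the nested
    # mapping shape produced by load_from_directory for a hypothetical layout
    # leveraged_etf_list/20100401/20100401/add.txt containing the symbol "BZQ".
    kd = datetime(2010, 4, 1, tzinfo=pytz.utc)
    ld = datetime(2010, 4, 1, tzinfo=pytz.utc)
    return {kd: {ld: {'add.txt': ['BZQ']}}}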
| apache-2.0 |
eonum/medword | model_validation.py | 1 | 11972 | import numpy as np
import preprocess as pp
import os
from random import randint
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import csv
def validate_model(embedding, emb_model_dir, emb_model_fn):
print("Start validation. Loading model. \n")
# load config
config = embedding.config
# load model
embedding.load_model(emb_model_dir, emb_model_fn)
# directories and filenames
val_dir = config.config['val_data_dir']
doesntfit_fn = config.config['doesntfit_file']
doesntfit_src = os.path.join(val_dir, doesntfit_fn)
synonyms_fn = config.config['synonyms_file']
syn_file_src = os.path.join(val_dir, synonyms_fn)
# test with doesn't fit questions
test_doesntfit(embedding, doesntfit_src)
# test with synonyms
# TODO get better syn file (slow, contains many non-significant instances)
# test_synonyms(embedding, syn_file_src)
# test with human similarity TODO remove hardcoding
human_sim_file_src = 'data/validation_data/human_similarity.csv'
test_human_similarity(embedding, human_sim_file_src)
#### Doesn't Fit Validation ####
def doesntfit(embedding, word_list):
"""
    - compares each word-vector to the mean of all word-vectors of word_list using the vector dot-product
    - the vector with the lowest dot-product to the mean-vector is regarded as the one that doesn't fit
"""
used_words = [word for word in word_list if embedding.may_construct_word_vec(word)]
n_used_words = len(used_words)
n_words = len(word_list)
if n_used_words != n_words:
ignored_words = set(word_list) - set(used_words)
print("vectors for words %s are not present in the model, ignoring these words: ", ignored_words)
if not used_words:
print("cannot select a word from an empty list.")
    vectors = np.vstack([embedding.word_vec(word) for word in used_words])
mean = np.mean(vectors, axis=0)
dists = np.dot(vectors, mean)
return sorted(zip(dists, used_words))[0][1]
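def _doesntfit_numeric_sketch():
    # Illustrative sketch only (not part of the original file): the same
    # "lowest dot product with the mean" rule applied to hand-made 2-d vectors.
    # The third vector points away from the cluster, so it is selected.
    words = ["w1", "w2", "odd"]
    vectors = np.array([[1.0, 0.1], [0.9, 0.0], [-1.0, 0.0]])
    mean = np.mean(vectors, axis=0)
    dists = np.dot(vectors, mean)
    assert sorted(zip(dists, words))[0][1] == "odd"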
def test_doesntfit(embedding, file_src):
"""
- tests all doesntfit-questions (lines) of file
- a doesnt-fit question is of the format "word_1 word_2 ... word_N word_NotFitting"
where word_1 to word_n are members of a category but word_NotFitting isn't
eg. "Auto Motorrad Fahrrad Ampel"
"""
# load config
config = embedding.config
print("Validating 'doesntfit' with file", file_src)
num_lines = sum(1 for line in open(file_src))
num_questions = 0
num_right = 0
tokenizer = pp.get_tokenizer(config)
# get questions
with open(file_src) as f:
questions = f.read().splitlines()
tk_questions = [tokenizer.tokenize(q) for q in questions]
    # TODO: check if the tokenizer has split one word into multiple words and handle it.
    # So far no word in the doesnt_fit testfile should be split
# vocab used to speed checking if word is in vocabulary
# (also checked by embedding.may_construct_word_vec(word))
vocab = embedding.get_vocab()
# test each question
for question in tk_questions:
# check if all words exist in vocabulary
if all(((word in vocab) or (embedding.may_construct_word_vec(word))) for word in question):
num_questions += 1
if doesntfit(embedding, question) == question[-1]:
num_right += 1
# calculate result
correct_matches = np.round(num_right/np.float(num_questions)*100, 1) if num_questions>0 else 0.0
coverage = np.round(num_questions/np.float(num_lines)*100, 1) if num_lines>0 else 0.0
# log result
print("\n*** Doesn't fit ***")
print('Doesn\'t fit correct: {0}% ({1}/{2})'.format(str(correct_matches), str(num_right), str(num_questions)))
print('Doesn\'t fit coverage: {0}% ({1}/{2}) \n'.format(str(coverage), str(num_questions), str(num_lines)))
#### Synonyms Validation ####
def test_synonyms(embedding, file_src):
"""
- tests all synonym-questions (lines) of file
- a synonym-question is of the format "word_1 word_2"
where word_1 and word_2 are synonyms
eg. "Blutgerinnsel Thrombus"
- for word_1 check if it appears in the n closest words of word_2 using "model.cosine(word, n)"
and vice-versa
- for each synonym-pair TWO CHECKS are made therefore (non-symmetric problem)
"""
print("Validating 'synonyms' with file", file_src)
config = embedding.config
num_lines = sum(1 for line in open(file_src))
num_questions = 0
cos_sim_sum_synonyms = 0
tokenizer = pp.get_tokenizer(config)
# get questions which are still of lenght 2 after tokenization
    # TODO: improve for compound words (aaa-bbb) which are split by the tokenizer
tk_questions = []
with open(file_src, 'r') as f:
questions = f.read().splitlines()
for q in questions:
# synonyms = q.split(';')#tokenizer.tokenize(q)
# synonyms = [" ".join(tokenizer.tokenize(synonym)) for synonym in
# synonyms]
synonyms = tokenizer.tokenize(q)
if len(synonyms) == 2:
tk_questions.append(synonyms)
vocab = embedding.get_vocab()
# test each question
for tk_quest in tk_questions:
# check if all words exist in vocabulary
if all(((word in vocab) or embedding.may_construct_word_vec(word)) for word in tk_quest):
num_questions += 1
w1 = tk_quest[0]
w2 = tk_quest[1]
cos_sim_sum_synonyms += embedding.similarity(w1, w2)
# compute avg cosine similarity for random vectors to relate to avg_cosine_similarity of synonyms
vocab_size = len(vocab)
n_vals = 1000
similarity_sum_rand_vec = 0
vals1 = [randint(0, vocab_size -1) for i in range(n_vals)]
vals2 = [randint(0, vocab_size -1) for i in range(n_vals)]
for v1, v2 in zip(vals1, vals2):
similarity_sum_rand_vec += embedding.similarity(vocab[v1], vocab[v2])
avg_cosine_similarity_rand_vec = similarity_sum_rand_vec / np.float(n_vals)
# calculate result
avg_cosine_similarity_synonyms = (cos_sim_sum_synonyms / num_questions) if num_questions>0 else 0.0
coverage = np.round(num_questions/np.float(num_lines)*100, 1) if num_lines>0 else 0.0
# log result
print("\n*** Cosine-Similarity ***")
print("Synonyms avg-cos-similarity (SACS):", avg_cosine_similarity_synonyms, "\nRandom avg-cos-similarity (RACS):", avg_cosine_similarity_rand_vec,
"\nRatio SACS/RACS:", avg_cosine_similarity_synonyms/float(avg_cosine_similarity_rand_vec))
print("\n*** Word Coverage ***")
print("Synonyms: {0} pairs in input. {1} pairs after tokenization. {2} pairs could be constructed from model-vocabulary.".format(str(num_lines), str(len(tk_questions)), str(num_questions)))
print("Synonyms coverage: {0}% ({1}/{2})\n".format(str(coverage), str(2*num_questions), str(2*num_lines), ))
def get_human_rating_deviation(embedding, word1, word2, human_similarity):
# compute deviation of human similarity from cosine similarity
# cosine similarity
cosine_similarity = embedding.similarity(word1, word2)
return np.abs(cosine_similarity - human_similarity)
def test_human_similarity(embedding, file_src):
"""
Compare cosine similarity of 2 word-vectors against a similarity value
based on human ratings.
Each line in the file contains two words and the similarity value,
separated by ':'.
The datasets were obtained by asking human subjects to assign a similarity
or relatedness judgment to a number of German word pairs.
https://www.ukp.tu-darmstadt.de/data/semantic-relatedness/german-relatedness-datasets/
"""
config = embedding.config
tokenizer = pp.get_tokenizer(config)
vocab = embedding.get_vocab()
vocab_size = len(vocab)
# accumulate error and count test instances
summed_error = 0.0
n_test_instances = 0
n_skipped_instances = 0
summed_random_error = 0.0
# load file to lines
with open(file_src, 'r') as csvfile:
filereader = csv.reader(csvfile, delimiter=':',)
next(filereader)
# process line by line
for line in filereader:
n_test_instances += 1
# split lines to instances
word1 = tokenizer.tokenize(line[0])[0]
word2 = tokenizer.tokenize(line[1])[0]
human_similarity = np.float32(line[2])
# check if both words are in vocab
if (word1 in embedding.get_vocab()
and word2 in embedding.get_vocab()):
# add current deviation to error
deviation = get_human_rating_deviation(embedding, word1, word2,
human_similarity)
summed_error += deviation
# get a random error for comparison
rand_word1 = vocab[randint(0, vocab_size -1)]
rand_word2 = vocab[randint(0, vocab_size -1)]
random_dev = get_human_rating_deviation(embedding, rand_word1,
rand_word2,
human_similarity)
summed_random_error += random_dev
else:
n_skipped_instances += 1
# print results
print("\n*** Human-Similarity ***")
print("Number of instances: {0}, skipped: {1}"
.format(str(n_test_instances), str(n_skipped_instances)))
# check whether we found any valid test instance
n_processed_instances = n_test_instances - n_skipped_instances
if (n_processed_instances == 0):
print("Error: No instance could be computed with this model.")
else:
mean_error = summed_error / n_processed_instances
random_error = summed_random_error / n_processed_instances
print("random error: {0}, mean error: {1}"
.format(str(random_error), str(mean_error)))
#### Visualization ####
def visualize_words(embedding, word_list, n_nearest_neighbours):
# get indexes and words that you want to visualize
words_to_visualize = []
# word_indexes_to_visualize = []
# get all words and neighbors that you want to visualize
for word in word_list:
if not embedding.may_construct_word_vec(word):
continue
words_to_visualize.append(word)
# word_indexes_to_visualize.append(model.ix(word))
# get neighbours of word
neighbours = [n for (n, m) in embedding.most_similar_n(word, n_nearest_neighbours)]
words_to_visualize.extend(neighbours)
#word_indexes_to_visualize.extend(indexes)
# get vectors from indexes to visualize
if words_to_visualize == []:
print("No word found to show.")
return
emb_vectors = np.vstack([embedding.word_vec(word) for word in words_to_visualize])
# project down to 2D
pca = PCA(n_components=2)
emb_vec_2D = pca.fit_transform(emb_vectors)
n_inputs = len(word_list)
for i in range(n_inputs):
# group word and it's neighbours together (results in different color in plot)
lower = i*n_nearest_neighbours + i
upper = (i+1)*n_nearest_neighbours + (i+1)
# plot 2D
plt.scatter(emb_vec_2D[lower:upper, 0], emb_vec_2D[lower:upper, 1])
for label, x, y in zip(words_to_visualize, emb_vec_2D[:, 0], emb_vec_2D[:, 1]):
plt.annotate(label, xy=(x, y), xytext=(0, 0), textcoords='offset points')
# find nice axes for plot
lower_x = min(emb_vec_2D[:, 0])
upper_x = max(emb_vec_2D[:, 0])
lower_y = min(emb_vec_2D[:, 1])
upper_y = max(emb_vec_2D[:, 1])
# 10% of padding on all sides
pad_x = 0.1 * abs(upper_x - lower_x)
pad_y = 0.1 * abs(upper_y - lower_y)
plt.xlim([lower_x - pad_x, upper_x + pad_x])
plt.ylim([lower_y - pad_y, upper_y + pad_y])
plt.show()
| mit |
wzbozon/statsmodels | statsmodels/tools/print_version.py | 23 | 7951 | #!/usr/bin/env python
from __future__ import print_function
from statsmodels.compat.python import reduce
import sys
from os.path import dirname
def safe_version(module, attr='__version__'):
if not isinstance(attr, list):
attr = [attr]
try:
return reduce(getattr, [module] + attr)
except AttributeError:
return "Cannot detect version"
def _show_versions_only():
print("\nINSTALLED VERSIONS")
print("------------------")
print("Python: %d.%d.%d.%s.%s" % sys.version_info[:])
try:
import os
(sysname, nodename, release, version, machine) = os.uname()
print("OS: %s %s %s %s" % (sysname, release, version, machine))
print("byteorder: %s" % sys.byteorder)
print("LC_ALL: %s" % os.environ.get('LC_ALL', "None"))
print("LANG: %s" % os.environ.get('LANG', "None"))
except:
pass
try:
from statsmodels import version
has_sm = True
except ImportError:
has_sm = False
print('\nStatsmodels\n===========\n')
if has_sm:
print('Installed: %s' % safe_version(version, 'full_version'))
else:
print('Not installed')
print("\nRequired Dependencies\n=====================\n")
try:
import Cython
print("cython: %s" % safe_version(Cython))
except ImportError:
print("cython: Not installed")
try:
import numpy
print("numpy: %s" % safe_version(numpy, ['version', 'version']))
except ImportError:
print("numpy: Not installed")
try:
import scipy
print("scipy: %s" % safe_version(scipy, ['version', 'version']))
except ImportError:
print("scipy: Not installed")
try:
import pandas
print("pandas: %s" % safe_version(pandas, ['version', 'version']))
except ImportError:
print("pandas: Not installed")
try:
import dateutil
print(" dateutil: %s" % safe_version(dateutil))
except ImportError:
print(" dateutil: not installed")
try:
import patsy
print("patsy: %s" % safe_version(patsy))
except ImportError:
print("patsy: Not installed")
print("\nOptional Dependencies\n=====================\n")
try:
import matplotlib as mpl
print("matplotlib: %s" % safe_version(mpl))
except ImportError:
print("matplotlib: Not installed")
try:
from cvxopt import info
print("cvxopt: %s" % safe_version(info, 'version'))
except ImportError:
print("cvxopt: Not installed")
print("\nDeveloper Tools\n================\n")
try:
import IPython
print("IPython: %s" % safe_version(IPython))
except ImportError:
print("IPython: Not installed")
try:
import jinja2
print(" jinja2: %s" % safe_version(jinja2))
except ImportError:
print(" jinja2: Not installed")
try:
import sphinx
print("sphinx: %s" % safe_version(sphinx))
except ImportError:
print("sphinx: Not installed")
try:
import pygments
print(" pygments: %s" % safe_version(pygments))
except ImportError:
print(" pygments: Not installed")
try:
import nose
print("nose: %s" % safe_version(nose))
except ImportError:
print("nose: Not installed")
try:
import virtualenv
print("virtualenv: %s" % safe_version(virtualenv))
except ImportError:
print("virtualenv: Not installed")
print("\n")
def show_versions(show_dirs=True):
    if not show_dirs:
        _show_versions_only()
        return
print("\nINSTALLED VERSIONS")
print("------------------")
print("Python: %d.%d.%d.%s.%s" % sys.version_info[:])
try:
import os
(sysname, nodename, release, version, machine) = os.uname()
print("OS: %s %s %s %s" % (sysname, release, version, machine))
print("byteorder: %s" % sys.byteorder)
print("LC_ALL: %s" % os.environ.get('LC_ALL', "None"))
print("LANG: %s" % os.environ.get('LANG', "None"))
except:
pass
try:
import statsmodels
from statsmodels import version
has_sm = True
except ImportError:
has_sm = False
print('\nStatsmodels\n===========\n')
if has_sm:
print('Installed: %s (%s)' % (safe_version(version, 'full_version'),
dirname(statsmodels.__file__)))
else:
print('Not installed')
print("\nRequired Dependencies\n=====================\n")
try:
import Cython
print("cython: %s (%s)" % (safe_version(Cython),
dirname(Cython.__file__)))
except ImportError:
print("cython: Not installed")
try:
import numpy
print("numpy: %s (%s)" % (safe_version(numpy, ['version', 'version']),
dirname(numpy.__file__)))
except ImportError:
print("numpy: Not installed")
try:
import scipy
print("scipy: %s (%s)" % (safe_version(scipy, ['version', 'version']),
dirname(scipy.__file__)))
except ImportError:
print("scipy: Not installed")
try:
import pandas
print("pandas: %s (%s)" % (safe_version(pandas, ['version',
'version']),
dirname(pandas.__file__)))
except ImportError:
print("pandas: Not installed")
try:
import dateutil
print(" dateutil: %s (%s)" % (safe_version(dateutil),
dirname(dateutil.__file__)))
except ImportError:
print(" dateutil: not installed")
try:
import patsy
print("patsy: %s (%s)" % (safe_version(patsy),
dirname(patsy.__file__)))
except ImportError:
print("patsy: Not installed")
print("\nOptional Dependencies\n=====================\n")
try:
import matplotlib as mpl
print("matplotlib: %s (%s)" % (safe_version(mpl),
dirname(mpl.__file__)))
except ImportError:
print("matplotlib: Not installed")
try:
from cvxopt import info
print("cvxopt: %s (%s)" % (safe_version(info, 'version'),
dirname(info.__file__)))
except ImportError:
print("cvxopt: Not installed")
print("\nDeveloper Tools\n================\n")
try:
import IPython
print("IPython: %s (%s)" % (safe_version(IPython),
dirname(IPython.__file__)))
except ImportError:
print("IPython: Not installed")
try:
import jinja2
print(" jinja2: %s (%s)" % (safe_version(jinja2),
dirname(jinja2.__file__)))
except ImportError:
print(" jinja2: Not installed")
try:
import sphinx
print("sphinx: %s (%s)" % (safe_version(sphinx),
dirname(sphinx.__file__)))
except ImportError:
print("sphinx: Not installed")
try:
import pygments
print(" pygments: %s (%s)" % (safe_version(pygments),
dirname(pygments.__file__)))
except ImportError:
print(" pygments: Not installed")
try:
import nose
print("nose: %s (%s)" % (safe_version(nose), dirname(nose.__file__)))
except ImportError:
print("nose: Not installed")
try:
import virtualenv
print("virtualenv: %s (%s)" % (safe_version(virtualenv),
dirname(virtualenv.__file__)))
except ImportError:
print("virtualenv: Not installed")
print("\n")
if __name__ == "__main__":
show_versions()
| bsd-3-clause |
deepakantony/sms-tools | software/transformations_interface/harmonicTransformations_function.py | 20 | 5398 | # function call to the transformation functions of relevance for the hpsModel
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import get_window
import sys, os
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../models/'))
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../transformations/'))
import sineModel as SM
import harmonicModel as HM
import sineTransformations as ST
import harmonicTransformations as HT
import utilFunctions as UF
def analysis(inputFile='../../sounds/vignesh.wav', window='blackman', M=1201, N=2048, t=-90,
minSineDur=0.1, nH=100, minf0=130, maxf0=300, f0et=7, harmDevSlope=0.01):
"""
Analyze a sound with the harmonic model
inputFile: input sound file (monophonic with sampling rate of 44100)
window: analysis window type (rectangular, hanning, hamming, blackman, blackmanharris)
M: analysis window size
N: fft size (power of two, bigger or equal than M)
t: magnitude threshold of spectral peaks
minSineDur: minimum duration of sinusoidal tracks
nH: maximum number of harmonics
minf0: minimum fundamental frequency in sound
maxf0: maximum fundamental frequency in sound
f0et: maximum error accepted in f0 detection algorithm
harmDevSlope: allowed deviation of harmonic tracks, higher harmonics have higher allowed deviation
returns inputFile: input file name; fs: sampling rate of input file,
hfreq, hmag: harmonic frequencies and magnitudes
"""
# size of fft used in synthesis
Ns = 512
# hop size (has to be 1/4 of Ns)
H = 128
# read input sound
fs, x = UF.wavread(inputFile)
# compute analysis window
w = get_window(window, M)
# compute the harmonic model of the whole sound
hfreq, hmag, hphase = HM.harmonicModelAnal(x, fs, w, N, H, t, nH, minf0, maxf0, f0et, harmDevSlope, minSineDur)
# synthesize the sines without original phases
y = SM.sineModelSynth(hfreq, hmag, np.array([]), Ns, H, fs)
# output sound file (monophonic with sampling rate of 44100)
outputFile = 'output_sounds/' + os.path.basename(inputFile)[:-4] + '_harmonicModel.wav'
# write the sound resulting from the inverse stft
UF.wavwrite(y, fs, outputFile)
# create figure to show plots
plt.figure(figsize=(12, 9))
# frequency range to plot
maxplotfreq = 5000.0
# plot the input sound
plt.subplot(3,1,1)
plt.plot(np.arange(x.size)/float(fs), x)
plt.axis([0, x.size/float(fs), min(x), max(x)])
plt.ylabel('amplitude')
plt.xlabel('time (sec)')
plt.title('input sound: x')
if (hfreq.shape[1] > 0):
plt.subplot(3,1,2)
tracks = np.copy(hfreq)
numFrames = tracks.shape[0]
frmTime = H*np.arange(numFrames)/float(fs)
tracks[tracks<=0] = np.nan
plt.plot(frmTime, tracks)
plt.axis([0, x.size/float(fs), 0, maxplotfreq])
plt.title('frequencies of harmonic tracks')
# plot the output sound
plt.subplot(3,1,3)
plt.plot(np.arange(y.size)/float(fs), y)
plt.axis([0, y.size/float(fs), min(y), max(y)])
plt.ylabel('amplitude')
plt.xlabel('time (sec)')
plt.title('output sound: y')
plt.tight_layout()
plt.show(block=False)
return inputFile, fs, hfreq, hmag
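# Hedged usage sketch (not part of the original module): run the harmonic analysis
# with a shorter Hamming window; the default vignesh.wav path is assumed to exist
# relative to this file, and the parameter values are illustrative only.
def _example_analysis():
    return analysis(window='hamming', M=801, N=1024, t=-80, minSineDur=0.05)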
def transformation_synthesis(inputFile, fs, hfreq, hmag, freqScaling = np.array([0, 2.0, 1, .3]),
freqStretching = np.array([0, 1, 1, 1.5]), timbrePreservation = 1,
timeScaling = np.array([0, .0, .671, .671, 1.978, 1.978+1.0])):
"""
Transform the analysis values returned by the analysis function and synthesize the sound
inputFile: name of input file
fs: sampling rate of input file
hfreq, hmag: harmonic frequencies and magnitudes
freqScaling: frequency scaling factors, in time-value pairs
freqStretching: frequency stretching factors, in time-value pairs
timbrePreservation: 1 preserves original timbre, 0 it does not
timeScaling: time scaling factors, in time-value pairs
"""
# size of fft used in synthesis
Ns = 512
# hop size (has to be 1/4 of Ns)
H = 128
# frequency scaling of the harmonics
yhfreq, yhmag = HT.harmonicFreqScaling(hfreq, hmag, freqScaling, freqStretching, timbrePreservation, fs)
# time scale the sound
yhfreq, yhmag = ST.sineTimeScaling(yhfreq, yhmag, timeScaling)
# synthesis
y = SM.sineModelSynth(yhfreq, yhmag, np.array([]), Ns, H, fs)
# write output sound
outputFile = 'output_sounds/' + os.path.basename(inputFile)[:-4] + '_harmonicModelTransformation.wav'
UF.wavwrite(y, fs, outputFile)
# create figure to plot
plt.figure(figsize=(12, 6))
# frequency range to plot
maxplotfreq = 15000.0
# plot the transformed sinusoidal frequencies
plt.subplot(2,1,1)
if (yhfreq.shape[1] > 0):
tracks = np.copy(yhfreq)
tracks = tracks*np.less(tracks, maxplotfreq)
tracks[tracks<=0] = np.nan
numFrames = int(tracks[:,0].size)
frmTime = H*np.arange(numFrames)/float(fs)
plt.plot(frmTime, tracks)
plt.title('transformed harmonic tracks')
plt.autoscale(tight=True)
# plot the output sound
plt.subplot(2,1,2)
plt.plot(np.arange(y.size)/float(fs), y)
plt.axis([0, y.size/float(fs), min(y), max(y)])
plt.ylabel('amplitude')
plt.xlabel('time (sec)')
plt.title('output sound: y')
plt.tight_layout()
plt.show()
if __name__ == "__main__":
# analysis
inputFile, fs, hfreq, hmag = analysis()
# transformation and synthesis
transformation_synthesis (inputFile, fs, hfreq, hmag)
plt.show()
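# Hedged sketch (not in the original script): the scaling arrays are flat
# [time, value, time, value, ...] envelopes, so transposing the whole sound up one
# octave with no time change could look like the call below (values illustrative).
def _example_transpose_up_one_octave(inputFile, fs, hfreq, hmag):
    transformation_synthesis(inputFile, fs, hfreq, hmag,
                             freqScaling=np.array([0, 2.0, 1, 2.0]),
                             freqStretching=np.array([0, 1, 1, 1]),
                             timbrePreservation=1,
                             timeScaling=np.array([0, 0, 1, 1]))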
| agpl-3.0 |
LukeC92/iris | lib/iris/tests/integration/plot/test_netcdftime.py | 3 | 2243 | # (C) British Crown Copyright 2016 - 2017, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
"""
Test plot of time coord with non-gregorian calendar.
"""
from __future__ import (absolute_import, division, print_function)
from six.moves import (filter, input, map, range, zip) # noqa
# import iris tests first so that some things can be initialised before
# importing anything else
import iris.tests as tests
import netcdftime
import numpy as np
from iris.coords import AuxCoord
from cf_units import Unit
if tests.NC_TIME_AXIS_AVAILABLE:
from nc_time_axis import CalendarDateTime
# Run tests in no graphics mode if matplotlib is not available.
if tests.MPL_AVAILABLE:
import matplotlib.pyplot as plt
import iris.plot as iplt
@tests.skip_nc_time_axis
@tests.skip_plot
class Test(tests.GraphicsTest):
def test_360_day_calendar(self):
n = 360
calendar = '360_day'
time_unit = Unit('days since 1970-01-01 00:00', calendar=calendar)
time_coord = AuxCoord(np.arange(n), 'time', units=time_unit)
times = [time_unit.num2date(point) for point in time_coord.points]
times = [netcdftime.datetime(atime.year, atime.month, atime.day,
atime.hour, atime.minute, atime.second)
for atime in times]
expected_ydata = np.array([CalendarDateTime(time, calendar)
for time in times])
line1, = iplt.plot(time_coord)
result_ydata = line1.get_ydata()
self.assertArrayEqual(expected_ydata, result_ydata)
if __name__ == "__main__":
tests.main()
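# Hedged worked example (not part of the test): in the 360_day calendar every month
# has 30 days, so time_unit.num2date(35) above falls on 1970-02-06, whereas a
# standard Gregorian calendar would give 1970-02-05.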
| lgpl-3.0 |
hefen1/chromium | chrome/test/data/nacl/gdb_rsp.py | 99 | 2431 | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# This file is based on gdb_rsp.py file from NaCl repository.
import re
import socket
import time
def RspChecksum(data):
checksum = 0
for char in data:
checksum = (checksum + ord(char)) % 0x100
return checksum
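# Hedged worked example (not in the original): for the one-byte body "g",
# ord('g') == 103 == 0x67, so RspChecksum("g") returns 0x67 and the framed packet
# sent by RspSendOnly below would be "$g#67".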
class GdbRspConnection(object):
def __init__(self, addr):
self._socket = self._Connect(addr)
def _Connect(self, addr):
# We have to poll because we do not know when sel_ldr has
# successfully done bind() on the TCP port. This is inherently
# unreliable.
# TODO(mseaborn): Add a more reliable connection mechanism to
# sel_ldr's debug stub.
timeout_in_seconds = 10
poll_time_in_seconds = 0.1
for i in xrange(int(timeout_in_seconds / poll_time_in_seconds)):
# On Mac OS X, we have to create a new socket FD for each retry.
sock = socket.socket()
try:
sock.connect(addr)
except socket.error:
# Retry after a delay.
time.sleep(poll_time_in_seconds)
else:
return sock
raise Exception('Could not connect to sel_ldr\'s debug stub in %i seconds'
% timeout_in_seconds)
def _GetReply(self):
reply = ''
while True:
data = self._socket.recv(1024)
if len(data) == 0:
raise AssertionError('EOF on socket reached with '
'incomplete reply message: %r' % reply)
reply += data
if '#' in data:
break
match = re.match('\+\$([^#]*)#([0-9a-fA-F]{2})$', reply)
if match is None:
raise AssertionError('Unexpected reply message: %r' % reply)
reply_body = match.group(1)
checksum = match.group(2)
expected_checksum = '%02x' % RspChecksum(reply_body)
if checksum != expected_checksum:
raise AssertionError('Bad RSP checksum: %r != %r' %
(checksum, expected_checksum))
# Send acknowledgement.
self._socket.send('+')
return reply_body
# Send an rsp message, but don't wait for or expect a reply.
def RspSendOnly(self, data):
msg = '$%s#%02x' % (data, RspChecksum(data))
return self._socket.send(msg)
def RspRequest(self, data):
self.RspSendOnly(data)
return self._GetReply()
def RspInterrupt(self):
self._socket.send('\x03')
return self._GetReply()
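# Hedged usage sketch (host and port are hypothetical; sel_ldr's debug stub must
# already be listening on them):
def _example_query_stub(host='localhost', port=4014):
    connection = GdbRspConnection((host, port))
    # 'qSupported' is a standard GDB remote-serial-protocol query packet
    return connection.RspRequest('qSupported')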
| bsd-3-clause |
phdowling/scikit-learn | sklearn/utils/tests/test_linear_assignment.py | 421 | 1349 | # Author: Brian M. Clapper, G Varoquaux
# License: BSD
import numpy as np
# XXX we should be testing the public API here
from sklearn.utils.linear_assignment_ import _hungarian
def test_hungarian():
matrices = [
# Square
([[400, 150, 400],
[400, 450, 600],
[300, 225, 300]],
850 # expected cost
),
# Rectangular variant
([[400, 150, 400, 1],
[400, 450, 600, 2],
[300, 225, 300, 3]],
452 # expected cost
),
# Square
([[10, 10, 8],
[9, 8, 1],
[9, 7, 4]],
18
),
# Rectangular variant
([[10, 10, 8, 11],
[9, 8, 1, 1],
[9, 7, 4, 10]],
15
),
# n == 2, m == 0 matrix
([[], []],
0
),
]
for cost_matrix, expected_total in matrices:
cost_matrix = np.array(cost_matrix)
indexes = _hungarian(cost_matrix)
total_cost = 0
for r, c in indexes:
x = cost_matrix[r, c]
total_cost += x
assert expected_total == total_cost
indexes = _hungarian(cost_matrix.T)
total_cost = 0
for c, r in indexes:
x = cost_matrix[r, c]
total_cost += x
assert expected_total == total_cost
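# Hedged worked example (not part of the test): for the first square matrix above,
# the optimal assignment is row 0 -> col 1, row 1 -> col 0, row 2 -> col 2,
# i.e. 150 + 400 + 300 = 850, which matches the expected cost.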
| bsd-3-clause |
LeeYiFang/Carkinos | src/probes/views.py | 1 | 122212 | from django.shortcuts import render,render_to_response
from django.http import HttpResponse, Http404,JsonResponse
from django.views.decorators.http import require_GET
from .models import Dataset, CellLine, ProbeID, Sample, Platform, Clinical_Dataset,Clinical_sample,Gene
from django.template import RequestContext
from django.utils.html import mark_safe
import json
import pandas as pd
import numpy as np
from pathlib import Path
import sklearn
from sklearn.decomposition import PCA
from scipy import stats
import os
import matplotlib as mpl
mpl.use('agg')
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import LinearSegmentedColormap
import uuid
from rpy2.robjects.packages import importr
import rpy2.robjects as ro
r=ro.r
#lumi= importr('lumi')
from rpy2.robjects import pandas2ri
pandas2ri.activate()
import csv
#import logging
#logger = logging.getLogger("__name__")
show_row=4000 #more than how many rows will become download file mode
def generate_samples():
d=Dataset.objects.all()
cell_d_name=list(d.values_list('name',flat=True))
same_name=[]
cell_datasets=[] #[[dataset_name, alias, [sample objects], [[primary_site, [cell line]]]]]
for i in cell_d_name:
if i=="Sanger Cell Line Project":
alias='sanger'
same_name.append('sanger')
elif i=="NCI60":
alias='nci'
same_name.append('nci')
elif i=="GSE36133":
alias='ccle'
same_name.append('ccle')
else:
alias=i
same_name.append(i)
sample=Sample.objects.filter(dataset_id__name=i).order_by('cell_line_id__primary_site').select_related('cell_line_id')
cell_datasets.append([i,alias,list(sample),[]])
sites=list(sample.values_list('cell_line_id__primary_site',flat=True))
hists=list(sample.values_list('cell_line_id__name',flat=True))
dis_prim=list(sample.values_list('cell_line_id__primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
cell_datasets[-1][3].append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
d=Clinical_Dataset.objects.all()
d_name=list(d.values_list('name',flat=True))
datasets=[] #[[dataset_name,[[primary_site,[primary_histology]]],[[filter_type,[filter_choice]]]]
primarys=[] #[[primary_site,[primary_hist]]]
primh_filter=[] #[[filter_type,[filter_choice]]]
f_type=['age','gender','ethnic','grade','stage','stageT','stageN','stageM','metastatic']
for i in d_name:
same_name.append(i)
datasets.append([i,[],[]])
sample=Clinical_sample.objects.filter(dataset_id__name=i).order_by('primary_site')
sites=list(sample.values_list('primary_site',flat=True))
hists=list(sample.values_list('primary_hist',flat=True))
dis_prim=list(sample.values_list('primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
datasets[-1][1].append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
for f in f_type:
temp=list(set(sample.values_list(f,flat=True)))
datasets[-1][2].append([f,temp])
sample=Clinical_sample.objects.all().order_by('primary_site')
sites=list(sample.values_list('primary_site',flat=True))
hists=list(sample.values_list('primary_hist',flat=True))
dis_prim=list(sample.values_list('primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
primarys.append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
s=Clinical_sample.objects.all()
for f in f_type:
temp=list(set(s.values_list(f,flat=True)))
primh_filter.append([f,temp])
all_full_name=cell_d_name+d_name
return {
'all_full_name':mark_safe(json.dumps(all_full_name)), #full name of all datasets
'same_name':mark_safe(json.dumps(same_name)), #short name for all datasets
'cell_d_name':mark_safe(json.dumps(cell_d_name)), #cell line dataset name(full)
'cell_datasets':cell_datasets,
'd_name': mark_safe(json.dumps(d_name)), #clinical dataset name
'datasets': datasets,
'primarys': primarys,
'primh_filter':primh_filter,
}
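# Hedged illustration (dataset, site and cell-line names are made up): the nested
# structures returned above look roughly like
#   cell_datasets -> [['NCI60', 'nci', [<Sample>, ...], [['lung', ['A549', 'NCI-H23']]]], ...]
#   datasets      -> [['SomeClinicalSet', [['breast', ['ductal carcinoma']]],
#                      [['age', ['60', '70']], ['gender', ['M', 'F']], ...]], ...]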
def sample_microarray(request):
d=Clinical_Dataset.objects.all()
d_name=list(d.values_list('name',flat=True))
datasets=[] #[[dataset_name,[[primary_site,[primary_histology]]],[[filter_type,[filter_choice]]]]
primarys=[] #[[primary_site,[primary_hist]]]
primh_filter=[] #[[filter_type,[filter_choice]]]
f_type=['age','gender','ethnic','grade','stage','stageT','stageN','stageM','metastatic']
for i in d_name:
datasets.append([i,[],[]])
sample=Clinical_sample.objects.filter(dataset_id__name=i).order_by('primary_site')
sites=list(sample.values_list('primary_site',flat=True))
hists=list(sample.values_list('primary_hist',flat=True))
dis_prim=list(sample.values_list('primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
datasets[-1][1].append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
for f in f_type:
temp=list(set(sample.values_list(f,flat=True)))
datasets[-1][2].append([f,temp])
sample=Clinical_sample.objects.all().order_by('primary_site')
sites=list(sample.values_list('primary_site',flat=True))
hists=list(sample.values_list('primary_hist',flat=True))
dis_prim=list(sample.values_list('primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
primarys.append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
s=Clinical_sample.objects.all()
for f in f_type:
temp=list(set(s.values_list(f,flat=True)))
primh_filter.append([f,temp])
return render(request, 'sample_microarray.html', {
'd_name': mark_safe(json.dumps(d_name)),
'datasets': datasets,
'primarys': primarys,
'primh_filter':primh_filter,
})
def user_pca(request):
#load the ranking file and the probe table first, then open the files
pform=request.POST['data_platform']
uni=[] #to store valid probe offset for getting the correct data
uni_probe=[]
gene_flag=0
if(pform=="others"): #gene level
gene_flag=1
if(request.POST['ngs']=="ngs_u133a"):
pform="U133A"
else:
pform="PLUS2"
elif (pform=="U133A"):
quantile=list(np.load('ranking_u133a.npy'))
probe_path=Path('../').resolve().joinpath('src','Affy_U133A_probe_info.csv')
probe_list = pd.read_csv(probe_path.as_posix())
uni_probe=pd.unique(probe_list['PROBEID'])
else:
quantile=np.load('ranking_u133plus2.npy')
probe_path=Path('../').resolve().joinpath('src','Affy_U133plus2_probe_info.csv')
probe_list = pd.read_csv(probe_path.as_posix())
uni_probe=pd.unique(probe_list['PROBEID'])
propotion=0
table_propotion=0
show=request.POST['show_type'] #get the pca show type
nci_size=Sample.objects.filter(dataset_id__name__in=["NCI60"]).count()
gse_size=Sample.objects.filter(dataset_id__name__in=["GSE36133"]).count()
group_counter=1
user_out_group=[]
s_group_dict={} #store sample
offset_group_dict={} #store offset
cell_line_dict={}
#this part is for selecting cell lines based on dataset
#count how many groups there are
group_counter=1
while True:
temp_name='dataset_g'+str(group_counter)
if temp_name in request.POST:
group_counter=group_counter+1
else:
group_counter=group_counter-1
break
s_group_dict={} #store sample
group_name=[]
offset_group_dict={} #store offset
clinic=list(Clinical_Dataset.objects.all().values_list('name',flat=True))
clline=list(Dataset.objects.all().values_list('name',flat=True))
all_exist_dataset=[]
for i in range(1,group_counter+1):
dname='dataset_g'+str(i)
all_exist_dataset=all_exist_dataset+request.POST.getlist(dname)
all_exist_dataset=list(set(all_exist_dataset))
all_base=[0]
for i in range(0,len(all_exist_dataset)-1):
if all_exist_dataset[i] in clline:
all_base.append(all_base[i]+Sample.objects.filter(dataset_id__name__in=[all_exist_dataset[i]]).count())
else:
all_base.append(all_base[i]+Clinical_sample.objects.filter(dataset_id__name__in=[all_exist_dataset[i]]).count())
all_c=[]
for i in range(1,group_counter+1):
s_group_dict['g'+str(i)]=[]
offset_group_dict['g'+str(i)]=[]
cell_line_dict['g'+str(i)]=[]
dname='dataset_g'+str(i)
datasets=request.POST.getlist(dname)
group_name.append('g'+str(i))
for dn in datasets:
if dn=='Sanger Cell Line Project':
c='select_sanger_g'+str(i)
elif dn=='NCI60':
c='select_nci_g'+str(i)
elif dn=='GSE36133':
c='select_ccle_g'+str(i)
if dn in clline:
temp=list(set(request.POST.getlist(c)))
if 'd_sample' in show:
if all_c==[]:
all_c=all_c+temp
uni=temp
else:
uni=list(set(temp)-set(all_c))
all_c=all_c+uni
else:
uni=list(temp) #do not filter duplicate input only when select+centroid
s=Sample.objects.filter(cell_line_id__name__in=uni,dataset_id__name__in=[dn]).order_by('dataset_id'
).select_related('cell_line_id__name','cell_line_id__primary_site','cell_line_id__primary_hist','dataset_id','dataset_id__name')
cell_line_dict['g'+str(i)]=cell_line_dict['g'+str(i)]+list(s.values_list('cell_line_id__name',flat=True))
s_group_dict['g'+str(i)]=s_group_dict['g'+str(i)]+list(s)
offset_group_dict['g'+str(i)]=offset_group_dict['g'+str(i)]+list(np.add(list(s.values_list('offset',flat=True)),all_base[all_exist_dataset.index(dn)]))
else: #dealing with clinical sample datasets
com_hists=list(set(request.POST.getlist('primd_'+dn+'_g'+str(i)))) #can I get this by label to reduce number of queries?
com_hists=[w1 for segments in com_hists for w1 in segments.split('/')]
#print(com_hists)
prims=com_hists[0::2]
hists=com_hists[1::2]
temp=request.POST.getlist('filter_'+dn+'_g'+str(i))
age=[]
gender=[]
ethnic=[]
grade=[]
stage=[]
T=[]
N=[]
M=[]
metas=[]
for t in temp:
if 'stage/' in t:
stage.append(t[6:])
elif 'gender/' in t:
gender.append(t[7:])
elif 'ethnic/' in t:
ethnic.append(t[7:])
elif 'grade/' in t:
grade.append(t[6:])
elif 'stageT/' in t:
T.append(t[7:])
elif 'stageN/' in t:
N.append(t[7:])
elif 'stageM/' in t:
M.append(t[7:])
elif 'metastatic/' in t:
metas.append(t[11:])
'''
if t[11:]=='False':
metas.append(0)
else:
metas.append(1)
'''
else: #"age/"
age.append(t[4:])
#print(len(prims))
#print(len(hists))
for x in range(0,len(prims)):
s=Clinical_sample.objects.filter(dataset_id__name=dn,primary_site=prims[x],
primary_hist=hists[x],
age__in=age,
gender__in=gender,
ethnic__in=ethnic,
stage__in=stage,
grade__in=grade,
stageT__in=T,
stageN__in=N,
stageM__in=M,
metastatic__in=metas,
).select_related('dataset_id').order_by('id')
s_group_dict['g'+str(i)]=s_group_dict['g'+str(i)]+list(s)
cell_line_dict['g'+str(i)]=cell_line_dict['g'+str(i)]+list(s.values_list('name',flat=True))
offset_group_dict['g'+str(i)]=offset_group_dict['g'+str(i)]+list(np.add(list(s.values_list('offset',flat=True)),all_base[all_exist_dataset.index(dn)]))
#return render_to_response('welcome.html',locals())
all_sample=[]
all_cellline=[]
cell_object=[]
all_offset=[]
sample_counter={}
group_cell=[]
g_s_counter=[0]
for i in range(1,group_counter+1):
all_sample=all_sample+s_group_dict['g'+str(i)] #will not exist duplicate sample if d_sample
all_offset=all_offset+offset_group_dict['g'+str(i)]
all_cellline=all_cellline+cell_line_dict['g'+str(i)]
g_s_counter.append(g_s_counter[i-1]+len(s_group_dict['g'+str(i)]))
for i in all_sample:
sample_counter[i.name]=1
if str(type(i))=="<class 'probes.models.Sample'>":
##print("i am sample!!")
cell_object.append(i.cell_line_id)
else:
##print("i am clinical!!")
cell_object.append(i)
#read the user file
text=request.FILES.getlist('user_file')
user_counter=len(text)
if(gene_flag==1):
user_counter=1
ugroup=[]
for i in range(user_counter):
ugroup.append(request.POST['ugroup_name'+str(i+1)])
if(ugroup[-1]==''):
ugroup[-1]='User_Group'+str(i+1)
dgroup=[]
for i in range(1,group_counter+1):
dgroup.append(request.POST['group_name'+str(i)])
if(dgroup[-1]==''):
dgroup[-1]='Dataset_Group'+str(i)
user_dict={} #{user group number:user 2d array}
samples=0
nans=[] #to store the probe name that has nan
for x in range(1,user_counter+1):
#check the file format and content here first
filetype=str(text[x-1]).split('.')
if(filetype[-1]!="csv"):
error_reason='You have the wrong file type. Please upload .csv files'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
if(text[x-1].size>=80000000): #bytes
error_reason='The file size is too big. Please upload .csv file with size less than 80MB.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
temp_data = pd.read_csv(text[x-1])
col=list(temp_data.columns.values)
samples=samples+len(col)-1
if(samples==0):
error_reason='The file does not have any samples.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
if(gene_flag==0): #probe level check
check_probe=[str(x) for x in list(temp_data.iloc[:,0]) if not str(x).lower().startswith('affx')]
#print(len(check_probe))
if(len(check_probe)!=len(uni_probe)):
error_reason='The probe number does not match with the platform you selected.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
if(set(check_probe)!=set(uni_probe)):
error_reason='The probe number or probe name in your file does not match the platform you selected.'
#error_reason+='</br>The probes that are not in the platform: '+str(set(check_probe)-set(uni_probe))[1:-1]
#error_reason+='</br>The probes that are lacking: '+str(set(uni_probe)-set(check_probe))[1:-1]
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
col=list(temp_data.columns.values)
n=pd.isnull(temp_data).any(1).nonzero()[0]
nans += list(temp_data[col[0]][n])
user_dict[x]=temp_data
if 'd_sample' in show:
if((len(all_sample)+samples)<4):
error_reason='You should have at least 4 samples for PCA. The samples are not enough.<br />'\
'The total number of samples in your uploaded file is '+str(samples)+'.<br />'\
'The number of samples you selected is '+str(len(all_sample))+'.<br />'\
'Total is '+str(len(all_sample)+samples)+'.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
else:
s_count=0
sample_list=[]
a_sample=np.array(all_sample)
for i in range(1,group_counter+1):
dis_cellline=list(set(cell_object[g_s_counter[i-1]:g_s_counter[i]]))
a_cell_object=np.array(cell_object)
for c in dis_cellline:
temp1=np.where((a_cell_object==c))[0]
temp2=np.where((temp1>=g_s_counter[i-1])&(temp1<g_s_counter[i]))
total_offset=temp1[temp2]
selected_sample=a_sample[total_offset]
if list(selected_sample) in sample_list: #to prevent two different colors in different group
continue
else:
sample_list.append(list(selected_sample))
s_count=s_count+1
if(s_count>=4): #check this part
break
if(s_count>=4): #check this part
break
if((s_count+samples)<4):
error_reason='Since the display method is [centroid], you should have at least 4 dots for PCA. The total number is not enough.<br />'\
'The total number of dots in your uploaded file is '+str(samples)+'.<br />'\
'The number of centroid dots you selected is '+str(s_count)+'.<br />'\
'Total is '+str(s_count+samples)+'.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
new_name=[]
origin_name=[]
com_gene=[] #for ngs select same gene
nans=list(set(nans))
for x in range(1,user_counter+1):
#temp_data = pd.read_csv(text[x-1])
temp_data=user_dict[x]
col=list(temp_data.columns.values)
col[0]='probe'
temp_data.columns=col
temp_data.index = temp_data['probe']
temp_data.index.name = None
temp_data=temp_data.iloc[:, 1:]
#add "use_" to user's sample names
col_name=list(temp_data.columns.values) #have user's sample name list here
origin_name=origin_name+list(temp_data.columns.values)
col_name=[ "user_"+str(index)+"_"+s for index,s in enumerate(col_name)]
temp_data.columns=col_name
new_name=new_name+col_name
if(gene_flag==0):
try:
temp_data=temp_data.reindex(uni_probe)
except ValueError:
return HttpResponse('The file has probes with the same names, please let them be unique.')
#remove probe that has nan
temp_data=temp_data.drop(nans)
temp_data=temp_data.rank(method='dense')
#this is for quantile
for i in col_name:
for j in range(0,len(temp_data[i])):
#if(not(np.isnan(temp_data[i][j]))):
temp_data[i][j]=quantile[int(temp_data[i][j]-1)]
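# Hedged note (not in the original): the two loops above perform quantile
# normalisation by rank, e.g. if a user sample's values rank 1, 2, 3 they are
# replaced by quantile[0], quantile[1], quantile[2] from the reference
# distribution loaded from ranking_u133a.npy / ranking_u133plus2.npy.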
if x==1:
data=temp_data
else:
data=np.concatenate((data,temp_data), axis=1)
user_dict[x]=np.array(temp_data)
else:
temp_data=temp_data.drop(nans) #drop nan
temp_data=temp_data.loc[~(temp_data==0).all(axis=1)] #drop all rows with 0 here
temp_data=temp_data.groupby(temp_data.index).first() #drop the duplicate gene row
user_dict[x]=temp_data
#print(temp_data)
#delete nan, combine user data to the datasets,transpose matrix
for x in range(0,len(all_exist_dataset)):
if(gene_flag==0):
if all_exist_dataset[x] in clline:
pth=Path('../').resolve().joinpath('src',Dataset.objects.get(name=all_exist_dataset[x]).data_path)
else:
pth=Path('../').resolve().joinpath('src',Clinical_Dataset.objects.get(name=all_exist_dataset[x]).data_path)
else:
if all_exist_dataset[x] in clline:
pth=Path('../').resolve().joinpath('src','gene_'+Dataset.objects.get(name=all_exist_dataset[x]).data_path)
else:
pth=Path('../').resolve().joinpath('src','gene_'+Clinical_Dataset.objects.get(name=all_exist_dataset[x]).data_path)
if x==0:
val=np.load(pth.as_posix())
else:
val=np.hstack((val, np.load(pth.as_posix())))#combine together
#database dataset remove nan probes
if(gene_flag==0):
uni=[]
p_offset=list(ProbeID.objects.filter(platform__name__in=[pform],Probe_id__in=nans).values_list('offset',flat=True))
for n in range(0,len(uni_probe)):
if(n not in p_offset):
uni.append(n)
else:
#deal with ngs uploaded data here
probe_path=Path('../').resolve().joinpath('src','new_human_gene_info.txt')
#probe_list = pd.read_csv(probe_path.as_posix())
#notice duplicate
#get the match gene first, notice the size issue
info=pd.read_csv(probe_path.as_posix(),sep='\t')
col=list(info.columns.values)
col[0]='symbol'
info.columns=col
info.index = info['symbol']
info.index.name = None
info=info.iloc[:, 1:]
data=user_dict[1]
#data=data.groupby(data.index).first() #drop the duplicate gene row
com_gene=list(data.index)
temp_data=data
rloop=divmod(len(com_gene),990)
if(rloop[1]==0):
rloop=(rloop[0]-1,0)
gg=[]
for x in range(0,rloop[0]+1):
gg+=list(Gene.objects.filter(platform__name__in=[pform],symbol__in=com_gene[x*990:(x+1)*990]).order_by('offset'))
exist_gene=[]
uni=[]
for i in gg:
exist_gene.append(i.symbol)
uni.append(i.offset)
info=info.drop(exist_gene,errors='ignore')
new_data=temp_data.loc[data.index.isin(exist_gene)].reindex(exist_gene)
##print(exist_gene)
##print(new_data.index)
#search remain symbol's alias and symbol
search_alias=list(set(com_gene)-set(exist_gene))
for i in search_alias:
re_symbol=list(set(info.loc[info['alias'].isin([i])].index)) #find whether has alias first
if(len(re_symbol)!=0):
re_match=Gene.objects.filter(platform__name__in=[pform],symbol__in=re_symbol).order_by('offset') #check the symbol in database or not
repeat=len(re_match)
if(repeat!=0): #match gene symbol in database
##print(re_match)
for x in re_match:
to_copy=data.loc[i]
to_copy.name=x.symbol
new_data=new_data.append(to_copy)
uni.append(x.offset)
info=info.drop(x.symbol,errors='ignore')
user_dict[1]=np.array(new_data)
##print("length of new data:"+str(len(new_data)))
##print("data:")
##print(data)
##print("new_data:")
##print(new_data)
data=new_data
if 'd_sample' in show:
val=val[np.ix_(uni,all_offset)]
#print(len(val))
user_offset=len(val[0])
if(gene_flag==1):
#do the rank invariant here
#print("sample with ngs data do rank invariant here")
ref_path=Path('../').resolve().joinpath('src','cv_result.txt')
ref=pd.read_csv(ref_path.as_posix())
col=list(ref.columns.values)
col[0]='symbol'
ref.columns=col
ref.index = ref['symbol']
ref.index.name = None
ref=ref.iloc[:, 1:]
ref=ref.iloc[:5000,:] #rank invariant need 5000 genes
same_gene=list(ref.index.intersection(data.index))
#to lowess
rref=pandas2ri.py2ri(ref.loc[same_gene])
rngs=pandas2ri.py2ri(data.loc[same_gene].mean(axis=1))
rall=pandas2ri.py2ri(data)
ro.globalenv['x'] = rngs
ro.globalenv['y'] = rref
ro.globalenv['newx'] = rall
r('x<-as.vector(as.matrix(x))')
r('y<-as.vector(as.matrix(y))')
r('newx<-as.matrix(newx)')
try:
if(request.POST['data_type']=='raw'):
r('y.loess<-loess(2**y~x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,newx[,z])))')
elif(request.POST['data_type']=='log2'):
r('y.loess<-loess(2**y~2**x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,2**newx[,z])))')
else:
r('y.loess<-loess(2**y~10**x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,10**newx[,z])))')
except:
error_reason='Too few genes matched. Check your gene symbols again. We use NCBI standard gene symbols.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
#r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,newx[,z])))')
data=r('newx')
#print(data[:10])
#print(type(data))
val=np.hstack((np.array(val), np.array(data)))
val=val[~np.isnan(val).any(axis=1)]
val=np.transpose(val)
else:
#val=np.array(val)
val=val[np.ix_(uni)]
user_offset=len(val[0])
if(gene_flag==1):
#print("sample with ngs data do rank invariant here")
ref_path=Path('../').resolve().joinpath('src','cv_result.txt')
ref=pd.read_csv(ref_path.as_posix())
col=list(ref.columns.values)
col[0]='symbol'
ref.columns=col
ref.index = ref['symbol']
ref.index.name = None
ref=ref.iloc[:, 1:]
ref=ref.iloc[:5000,:] #rank invariant need 5000 genes
same_gene=list(ref.index.intersection(data.index))
#to lowess
rref=pandas2ri.py2ri(ref.loc[same_gene])
rngs=pandas2ri.py2ri(data.loc[same_gene].mean(axis=1))
rall=pandas2ri.py2ri(data)
ro.globalenv['x'] = rngs
ro.globalenv['y'] = rref
ro.globalenv['newx'] = rall
r('x<-as.vector(as.matrix(x))')
r('y<-as.vector(as.matrix(y))')
r('newx<-as.matrix(newx)')
try:
if(request.POST['data_type']=='raw'):
r('y.loess<-loess(2**y~x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,newx[,z])))')
elif(request.POST['data_type']=='log2'):
r('y.loess<-loess(2**y~2**x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,2**newx[,z])))')
else:
r('y.loess<-loess(2**y~10**x,span=0.3)')
r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,10**newx[,z])))')
except:
error_reason='Too few genes matched. Check your gene symbols again. We use NCBI standard gene symbols.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
#r('for(z in c(1:ncol(newx))) newx[,z]=log2(as.matrix(predict(y.loess,newx[,z])))')
data=r('newx')
#print(data[:10])
#print(type(data))
val=np.hstack((np.array(val), np.array(data)))
val=val[~np.isnan(val).any(axis=1)]
pca_index=[]
dis_offset=[]
#PREMISE:same dataset same cell line will have only one type of primary site and primary histology
name1=[]
name2=[]
name3=[]
name4=[]
name5=[]
X1=[]
Y1=[]
Z1=[]
X2=[]
Y2=[]
Z2=[]
X3=[]
Y3=[]
Z3=[]
X4=[]
Y4=[]
Z4=[]
X5=[]
Y5=[]
Z5=[]
n=4 #need to fix to the best one
if 'd_sample' in show:
#count the pca first
pca= PCA(n_components=n)
#combine user sample's offset to all_offset in another variable
Xval = pca.fit_transform(val[:,:]) #cannot get Xval with original offset any more
ratio_temp=pca.explained_variance_ratio_
propotion=sum(ratio_temp[1:n])
table_propotion=sum(ratio_temp[0:n])
user_new_offset=len(all_offset)
##print(Xval)
max=0
min=10000000000
out_group=[]
exist_cell={}#cell line object:counter
for g in range(1,group_counter+1):
output_cell={}
check={}
for s in range(g_s_counter[g-1],g_s_counter[g]):
if str(type(all_sample[s]))=="<class 'probes.models.Sample'>":
cell=all_sample[s].cell_line_id
else:
cell=all_sample[s]
try:
counter=exist_cell[cell]
exist_cell[cell]=counter+1
except KeyError:
exist_cell[cell]=1
try:
t=output_cell[cell]
except KeyError:
output_cell[cell]=[cell,[]]
check[all_sample[s].name]=[]
sample_counter[all_sample[s].name]=exist_cell[cell]
for i in range(0,len(all_sample)):
if i!=s:
try:
if(all_sample[s].name not in check[all_sample[i].name]):
distance=np.linalg.norm(Xval[i][n-3:n]-Xval[s][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[cell][1].append([all_cellline[s]+'('+str(exist_cell[cell])+')'
,all_sample[s].name,cell.primary_site,cell.primary_hist,
all_sample[s].dataset_id.name,all_cellline[i],all_sample[i].name,cell_object[i].primary_site
,cell_object[i].primary_hist,all_sample[i].dataset_id.name,distance])
check[all_sample[s].name].append(all_sample[i].name)
except KeyError:
distance=np.linalg.norm(Xval[i][n-3:n]-Xval[s][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[cell][1].append([all_cellline[s]+'('+str(exist_cell[cell])+')'
,all_sample[s].name,cell.primary_site,cell.primary_hist,
all_sample[s].dataset_id.name,all_cellline[i],all_sample[i].name,cell_object[i].primary_site
,cell_object[i].primary_hist,all_sample[i].dataset_id.name,distance])
check[all_sample[s].name].append(all_sample[i].name)
g_count=1
u_count=len(user_dict[g_count][0]) #sample number in first user file
for i in range(user_new_offset,user_new_offset+len(origin_name)): #remember to prevent empty file uploaded
distance=np.linalg.norm(Xval[i][n-3:n]-Xval[s][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[cell][1].append([all_cellline[s]+'('+str(exist_cell[cell])+')'
,all_sample[s].name,cell.primary_site,cell.primary_hist,
all_sample[s].dataset_id.name," ",origin_name[i-user_new_offset]," "," ","User Group"+str(g_count),distance])
if ((i-user_new_offset+1)==u_count):
g_count+=1
try:
u_count+=len(user_dict[g_count][0])
except KeyError:
u_count+=0
if(g==1):
name3.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X3.append(round(Xval[s][n-3],5))
Y3.append(round(Xval[s][n-2],5))
Z3.append(round(Xval[s][n-1],5))
elif(g==2):
name4.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X4.append(round(Xval[s][n-3],5))
Y4.append(round(Xval[s][n-2],5))
Z4.append(round(Xval[s][n-1],5))
elif(g==3):
name5.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X5.append(round(Xval[s][n-3],5))
Y5.append(round(Xval[s][n-2],5))
Z5.append(round(Xval[s][n-1],5))
dictlist=[]
for key, value in output_cell.items():
temp = [value]
dictlist+=temp
output_cell=list(dictlist)
out_group.append(["Dataset Group"+str(g),output_cell])
if g==group_counter:
output_cell={}
g_count=1
output_cell[g_count]=[" ",[]]
u_count=len(user_dict[g_count][0])
temp_count=u_count
temp_g=1
before=0
for i in range(user_new_offset,user_new_offset+len(origin_name)):
for x in range(0,len(all_sample)):
distance=np.linalg.norm(Xval[x][n-3:n]-Xval[i][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[g_count][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count),all_cellline[x]
,all_sample[x].name,cell_object[x].primary_site,cell_object[x].primary_hist,
all_sample[x].dataset_id.name,distance])
temp_g=1
temp_count=len(user_dict[temp_g][0])
for j in range(user_new_offset,user_new_offset+before):
if ((j-user_new_offset)==temp_count):
temp_g+=1
try:
temp_count+=len(user_dict[temp_g][0])
except KeyError:
temp_count+=0
distance=np.linalg.norm(Xval[j][n-3:n]-Xval[i][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[g_count][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count)
," ",origin_name[j-user_new_offset]," "," ","User Group"+str(temp_g),distance])
temp_g=g_count
temp_count=len(user_dict[g_count][0])
for j in range(i+1,user_new_offset+len(origin_name)):
if ((j-user_new_offset)==temp_count):
temp_g+=1
try:
temp_count+=len(user_dict[temp_g][0])
except KeyError:
temp_count+=0
distance=np.linalg.norm(Xval[j][n-3:n]-Xval[i][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[g_count][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count)
," ",origin_name[j-user_new_offset]," "," ","User Group"+str(temp_g),distance])
if g_count==1:
name1.append(origin_name[i-user_new_offset])
X1.append(round(Xval[i][n-3],5))
Y1.append(round(Xval[i][n-2],5))
Z1.append(round(Xval[i][n-1],5))
else:
name2.append(origin_name[i-user_new_offset])
X2.append(round(Xval[i][n-3],5))
Y2.append(round(Xval[i][n-2],5))
Z2.append(round(Xval[i][n-1],5))
if ((i-user_new_offset+1)==u_count):
dictlist=[]
for key, value in output_cell.items():
temp = [value]
dictlist+=temp
output_cell=list(dictlist)
user_out_group.append(["User Group"+str(g_count),output_cell])
g_count+=1
before=u_count+before
#print("I am here!!")
try:
u_count+=len(user_dict[g_count][0])
output_cell={}
output_cell[g_count]=[" ",[]]
except KeyError:
u_count+=0
#[g,[group_cell_1 object,[[outputs paired1,......,],[paired2],[paired3]]],[group_cell_2 object,[[pair1],[pair2]]]]
#for xx in origin_name:
#sample_counter[xx]=1
##print(out_group)
element_counter=0
for i in out_group:
for temp_list in i[1]:
element_counter=element_counter+len(temp_list[1])
for temp in temp_list[1]:
if(temp[5]!=" "):
temp[5]=temp[5]+'('+str(sample_counter[temp[6]])+')'
for i in user_out_group:
for temp_list in i[1]:
for temp in temp_list[1]:
if(temp[2]!=" "):
temp[2]=temp[2]+'('+str(sample_counter[temp[3]])+')'
return_html='user_pca.html'
else:
#This part is for centroid display
return_html='user_pca_center.html'
#This part selects cell lines based on the dataset and computes the centroid per dataset
#compute the centroid per cell line within each group
location_dict={} #{group number:[[cell object,dataset,new location]]}
combined=[]
sample_list=[]
pca_index=np.array(pca_index)
X_val=[]
val_a=np.array(val)
a_all_offset=np.array(all_offset)
a_sample=np.array(all_sample)
for i in range(1,group_counter+1):
dis_cellline=list(set(cell_object[g_s_counter[i-1]:g_s_counter[i]])) #cell object may have duplicate cell line since:NCI A + CCLE A===>[A,A]
location_dict['g'+str(i)]=[]
dataset_dict={}
a_cell_object=np.array(cell_object)
for c in dis_cellline: #dis_cellline may not have the same order as cell_object
temp1=np.where((a_cell_object==c))[0]
temp2=np.where((temp1>=g_s_counter[i-1])&(temp1<g_s_counter[i]))
total_offset=temp1[temp2]
selected_val=val_a[:,a_all_offset[total_offset]]
selected_val=np.transpose(selected_val)
new_loca=(np.mean(selected_val,axis=0,dtype=np.float64,keepdims=True)).tolist()[0]
selected_sample=a_sample[total_offset]
if list(selected_sample) in sample_list: #to prevent two different colors in different group
continue
else:
sample_list.append(list(selected_sample))
d_temp=[]
for s in selected_sample:
d_temp.append(s.dataset_id.name)
dataset_dict[c]="/".join(list(set(d_temp)))
X_val.append(new_loca)
location_dict['g'+str(i)].append([c,dataset_dict[c],len(X_val)-1]) #the last part is the index to get pca result from new_val
combined.append([c,dataset_dict[c],len(X_val)-1]) #all cell line, do not matter order
#run the pca
user_new_offset=len(X_val)
temp_val=np.transpose(val[:,user_offset:])
for x in range(0,len(temp_val)):
X_val.append(list(temp_val[x]))
if(len(X_val)<4):
error_reason='Since the display method is [centroid], you should have at least 4 dots for PCA. The total number is not enough.<br />'\
'The total number of dots in your uploaded file is '+str(len(temp_val))+'.<br />'\
'The number of centroid dots you selected is '+str(len(X_val)-len(temp_val))+'.<br />'\
'Total is '+str(len(X_val))+'.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
X_val=np.matrix(X_val)
pca= PCA(n_components=n)
new_val = pca.fit_transform(X_val[:,:]) #cannot get Xval with original offset any more
ratio_temp=pca.explained_variance_ratio_
propotion=sum(ratio_temp[1:n])
table_propotion=sum(ratio_temp[0:n])
#print(new_val)
out_group=[]
min=10000000000
max=0
element_counter=0
for g in range(1,group_counter+1):
output_cell=[]
exist_cell={}
for group_c in location_dict['g'+str(g)]: #a list of [c,dataset_dict[c],new_val index] in group one
cell=group_c[0]
key_string=cell.name+'/'+cell.primary_site+'/'+cell.primary_hist+'/'+group_c[1]
exist_cell[key_string]=[]
output_cell.append([cell,[]])
#count the distance
for temp_list in combined:
c=temp_list[0]
temp_string=c.name+'/'+c.primary_site+'/'+c.primary_hist+'/'+temp_list[1]
try:
if(key_string not in exist_cell[temp_string]):
distance=np.linalg.norm(np.array(new_val[group_c[2]][n-3:n])-np.array(new_val[temp_list[2]][n-3:n]))
if distance==0:
continue
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([cell.name,cell.primary_site,cell.primary_hist
,group_c[1],temp_list[0].name,temp_list[0].primary_site,temp_list[0].primary_hist,temp_list[1],distance])
element_counter=element_counter+1
except KeyError:
distance=np.linalg.norm(np.array(new_val[group_c[2]][n-3:n])-np.array(new_val[temp_list[2]][n-3:n]))
if distance==0:
continue
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([cell.name,cell.primary_site,cell.primary_hist
,group_c[1],temp_list[0].name,temp_list[0].primary_site,temp_list[0].primary_hist,temp_list[1],distance])
element_counter=element_counter+1
exist_cell[key_string].append(temp_string)
g_count=1
u_count=len(user_dict[g_count][0]) #sample number in first user file
for i in range(user_new_offset,user_new_offset+len(origin_name)):
distance=np.linalg.norm(np.array(new_val[group_c[2]][n-3:n])-np.array(new_val[i][n-3:n]))
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([cell.name,cell.primary_site,cell.primary_hist,group_c[1]
,origin_name[i-user_new_offset]," "," ","User Group"+str(g_count),distance])
element_counter=element_counter+1
if ((i-user_new_offset+1)==u_count):
g_count+=1
try:
u_count+=len(user_dict[g_count][0])
except KeyError:
u_count+=0
if(g==1):
name3.append(cell.name+'<br>'+group_c[1])
X3.append(round(new_val[group_c[2]][n-3],5))
Y3.append(round(new_val[group_c[2]][n-2],5))
Z3.append(round(new_val[group_c[2]][n-1],5))
elif(g==2):
name4.append(cell.name+'<br>'+group_c[1])
X4.append(round(new_val[group_c[2]][n-3],5))
Y4.append(round(new_val[group_c[2]][n-2],5))
Z4.append(round(new_val[group_c[2]][n-1],5))
elif(g==3):
name5.append(cell.name+'<br>'+group_c[1])
X5.append(round(new_val[group_c[2]][n-3],5))
Y5.append(round(new_val[group_c[2]][n-2],5))
Z5.append(round(new_val[group_c[2]][n-1],5))
out_group.append(["Dataset Group"+str(g),output_cell])
if g==group_counter:
output_cell=[]
g_count=1
output_cell.append([" ",[]])
u_count=len(user_dict[g_count][0])
temp_count=u_count
temp_g=1
before=0
for i in range(user_new_offset,user_new_offset+len(origin_name)):
for temp_list in combined:
c=temp_list[0]
distance=np.linalg.norm(np.array(new_val[i][n-3:n])-np.array(new_val[temp_list[2]][n-3:n]))
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count)
,c.name,c.primary_site,c.primary_hist,temp_list[1],distance])
temp_g=1
temp_count=len(user_dict[temp_g][0])
for j in range(user_new_offset,user_new_offset+before):
if ((j-user_new_offset)==temp_count):
temp_g+=1
try:
temp_count+=len(user_dict[temp_g][0])
except KeyError:
temp_count+=0
distance=np.linalg.norm(np.array(new_val[i][n-3:n])-np.array(new_val[j][n-3:n]))
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count)
,origin_name[j-user_new_offset]," "," ","User Group"+str(temp_g),distance])
temp_g=g_count
temp_count=len(user_dict[g_count][0])
for x in range(i+1,user_new_offset+len(origin_name)):
if ((x-user_new_offset)==temp_count):
temp_g+=1
try:
temp_count+=len(user_dict[temp_g][0])
except KeyError:
temp_count+=0
distance=np.linalg.norm(np.array(new_val[i][n-3:n])-np.array(new_val[x][n-3:n]))
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([origin_name[i-user_new_offset],"User Group"+str(g_count)
,origin_name[x-user_new_offset]," "," ","User Group"+str(temp_g),distance])
if g_count==1:
name1.append(origin_name[i-user_new_offset])
X1.append(round(new_val[i][n-3],5))
Y1.append(round(new_val[i][n-2],5))
Z1.append(round(new_val[i][n-1],5))
else:
name2.append(origin_name[i-user_new_offset])
X2.append(round(new_val[i][n-3],5))
Y2.append(round(new_val[i][n-2],5))
Z2.append(round(new_val[i][n-1],5))
if ((i-user_new_offset+1)==u_count):
user_out_group.append(["User Group"+str(g_count),output_cell])
g_count+=1
before=u_count+before
#print("I am here!!")
try:
u_count+=len(user_dict[g_count][0])
output_cell=[]
output_cell.append([" ",[]])
except KeyError:
u_count+=0
#print(element_counter)
#print(show_row)
if(element_counter>show_row):
big_flag=1
sid=str(uuid.uuid1())+".csv"
if(return_html=='user_pca.html'):
dataset_header=['Group Cell Line/Clinical Sample','Sample Name','Primary Site','Primary Histology'
,'Dataset','Paired Cell Line name/Clinical Sample','Sample Name','Primary Site','Primary Histology','Dataset','Distance']
user_header=['User Sample Name','Dataset','Paired Cell Line name/Clinical Sample','Sample Name','Primary Site','Primary Histology','Dataset','Distance']
else:
dataset_header=['Group Cell Line/Clinical Sample','Primary Site','Primary Histology'
,'Dataset','Paired Cell Line name/Clinical Sample','Primary Site','Primary Histology','Dataset','Distance']
user_header=['User Sample Name','Dataset','Paired Cell Line name/Clinical Sample','Primary Site','Primary Histology','Dataset','Distance']
P=Path('../').resolve().joinpath('src','static','csv',"dataset_"+sid)
userP=Path('../').resolve().joinpath('src','static','csv',"user_"+sid)
assP=Path('../').resolve().joinpath('src','assets','csv',"dataset_"+sid)
assuserP=Path('../').resolve().joinpath('src','assets','csv',"user_"+sid)
#print("start writing files")
with open(str(assP), "w", newline='') as f:
writer = csv.writer(f)
for index,output_cell in out_group:
writer.writerows([[dgroup[int(index[-1])-1]]])
writer.writerows([dataset_header])
for cell_line,b in output_cell:
writer.writerows(b)
#print("end writing first file")
'''
with open(str(assP), "w", newline='') as ff:
writer = csv.writer(ff)
for index,output_cell in out_group:
writer.writerows([[index]])
writer.writerows([dataset_header])
for cell_line,b in output_cell:
writer.writerows(b)
'''
#print("end writing 2 file")
with open(str(assuserP), "w", newline='') as ff:
writer = csv.writer(ff)
for index,output_cell in user_out_group:
writer.writerows([[ugroup[int(index[-1])-1]]])
writer.writerows([user_header])
for cell_line,b in output_cell:
writer.writerows(b)
#print("end writing 3 file")
'''
with open(str(userP), "w", newline='') as f:
writer = csv.writer(f)
for index,output_cell in user_out_group:
writer.writerows([[index]])
writer.writerows([user_header])
for cell_line,b in output_cell:
writer.writerows(b)
'''
#print("end writing 4 file")
data_file_name="dataset_"+sid
user_file_name="user_"+sid
else:
big_flag=0
data_file_name=0
user_file_name=0
return render_to_response(return_html,RequestContext(request,
{
'ugroup':ugroup,
'dgroup':dgroup,
'min':min,'max':max,
'big_flag':big_flag,
'out_group':out_group,'user_out_group':user_out_group,
'propotion':propotion,
'table_propotion':table_propotion,
'data_file_name':data_file_name,
'user_file_name':user_file_name,
'X1':X1,'name1':mark_safe(json.dumps(name1)),
'Y1':Y1,'name2':mark_safe(json.dumps(name2)),
'Z1':Z1,'name3':mark_safe(json.dumps(name3)),
'X2':X2,'name4':mark_safe(json.dumps(name4)),
'Y2':Y2,'name5':mark_safe(json.dumps(name5)),
'Z2':Z2,
'X3':X3,
'Y3':Y3,
'Z3':Z3,
'X4':X4,
'Y4':Y4,
'Z4':Z4,
'X5':X5,
'Y5':Y5,
'Z5':Z5,
}))
#notice that we need to return a user_pca_center.html, too!!
#return render_to_response('welcome.html',locals())
def express_profiling(request):
return render(request, 'express_profiling.html', generate_samples())
def welcome(request):
return render_to_response('welcome.html',locals())
def help(request):
example_name="CellExpress_Examples.pptx"
tutorial_name="CellExpress_Tutorial.pptx"
return render_to_response('help.html',locals())
def help_similar_assessment(request):
return render_to_response('help_similar_assessment.html',RequestContext(request))
def similar_assessment(request):
return render(request, 'similar_assessment.html', generate_samples())
def gene_signature(request):
return render(request, 'gene_signature.html', generate_samples())
def heatmap(request):
group1=[]
group2=[]
group_count=0
presult={} #{probe object:p value}
expression=[]
probe_out=[]
sample_out=[]
not_found=[]
quantile_flag=0
ratio_flag=0
indata=[]
#get probe from different platform
pform=request.POST.get('data_platform','U133A')
if(pform=="mix_quantile"):
pform="U133A"
quantile_flag=1
if(pform=="mix_ratio"):
pform="U133A"
ratio_flag=1
stop_end=601
return_page_flag=0
user_probe_flag=0
if(request.POST['user_type']=="all"):
all_probe=ProbeID.objects.filter(platform__name=pform).order_by('offset')
probe_offset=list(all_probe.values_list('offset',flat=True))
pro_number=float(request.POST['probe_number']) #significant 0.05 or 0.01
all_probe=list(all_probe)
elif(request.POST['user_type']=="genes"): #for all genes
return_page_flag=1
if(pform=="U133A"):
probe_path=Path('../').resolve().joinpath('src','uni_u133a.txt')
gene_list = pd.read_csv(probe_path.as_posix())
all_probe=list(gene_list['SYMBOL'])
else:
probe_path=Path('../').resolve().joinpath('src','uni_plus2.txt')
gene_list = pd.read_csv(probe_path.as_posix())
all_probe=list(gene_list['SYMBOL'])
pro_number=float(request.POST['probe_number_gene'])
probe_offset=[]
for i in range(0,len(all_probe)):
probe_offset.append(i)
else:
indata=request.POST['keyword']
indata = list(set(indata.split()))
if(request.POST['gtype']=="probeid"):
user_probe_flag=1
all_probe=ProbeID.objects.filter(platform__name=pform,Probe_id__in=indata).order_by('offset')
probe_offset=list(all_probe.values_list('offset',flat=True))
pro_number=float('+inf')
not_found=list(set(set(indata) - set(all_probe.values_list('Probe_id',flat=True))))
all_probe=list(all_probe)
else:
probe_offset=[]
return_page_flag=1
pro_number=float('+inf')
if(pform=="U133A"):
probe_path=Path('../').resolve().joinpath('src','uni_u133a.txt')
gene_list = pd.read_csv(probe_path.as_posix())
gene=list(gene_list['SYMBOL'])
else:
probe_path=Path('../').resolve().joinpath('src','uni_plus2.txt')
gene_list = pd.read_csv(probe_path.as_posix())
gene=list(gene_list['SYMBOL'])
probe_path=Path('../').resolve().joinpath('src','new_human_gene_info.txt')
info=pd.read_csv(probe_path.as_posix(),sep='\t')
col=list(info.columns.values)
col[0]='symbol'
info.columns=col
info.index = info['symbol']
info.index.name = None
info=info.iloc[:, 1:]
all_probe=[]
for i in indata:
try:
probe_offset.append(gene.index(i))
all_probe.append(i)
info=info.drop(i,errors='ignore')
except ValueError:
re_symbol=list(set(info.loc[info['alias'].isin([i])].index)) #find whether has alias first
if(len(re_symbol)!=0):
re_match=Gene.objects.filter(platform__name__in=[pform],symbol__in=re_symbol).order_by('offset') #check the symbol in database or not
repeat=len(re_match)
if(repeat!=0): #match gene symbol in database
##print(re_match)
for x in re_match:
info=info.drop(x.symbol,errors='ignore')
probe_offset.append(x.offset)
all_probe.append(i+"("+x.symbol+")")
else:
not_found.append(i)
else:
not_found.append(i)
#count the number of group
group_counter=1
check_set=[]
while True:
temp_name='dataset_g'+str(group_counter)
if temp_name in request.POST:
group_counter=group_counter+1
else:
group_counter=group_counter-1
break
#get binary data
s_group_dict={} #store sample
val=[] #store value get from binary data
group_name=[]
clinic=list(Clinical_Dataset.objects.all().values_list('name',flat=True))
clline=list(Dataset.objects.all().values_list('name',flat=True))
#print(clline)
opened_name=[]
opened_val=[]
for i in range(1,group_counter+1):
s_group_dict['g'+str(i)]=[]
dname='dataset_g'+str(i)
datasets=request.POST.getlist(dname)
temp_name='g'+str(i)
group_name.append(temp_name)
a_data=np.array([])
for dn in datasets:
if dn=='Sanger Cell Line Project':
c='select_sanger_g'+str(i)
elif dn=='NCI60':
c='select_nci_g'+str(i)
elif dn=='GSE36133':
c='select_ccle_g'+str(i)
if dn in clline:
ACELL=request.POST.getlist(c)
s=Sample.objects.filter(dataset_id__name__in=[dn],cell_line_id__name__in=ACELL).order_by('dataset_id').select_related('cell_line_id__name','dataset_id')
s_group_dict['g'+str(i)]=list(s)+s_group_dict['g'+str(i)]
goffset=list(s.values_list('offset',flat=True))
#print(goffset)
if dn not in opened_name: #check if the file is opened
#print("opend file!!")
opened_name.append(dn)
if(return_page_flag==1):
pth=Path('../').resolve().joinpath('src','gene_'+Dataset.objects.get(name=dn).data_path)
if(quantile_flag==1):
pth=Path('../').resolve().joinpath('src','mix_gene_'+Dataset.objects.get(name=dn).data_path)
elif(ratio_flag==1):
pth=Path('../').resolve().joinpath('src','mix_gene_'+Dataset.objects.get(name=dn).data_path)
gap=[Gene.objects.filter(platform__name=pform,symbol="GAPDH")[0].offset]
else:
pth=Path('../').resolve().joinpath('src',Dataset.objects.get(name=dn).data_path)
if(quantile_flag==1):
pth=Path('../').resolve().joinpath('src','mix_'+Dataset.objects.get(name=dn).data_path)
elif(ratio_flag==1):
pth=Path('../').resolve().joinpath('src','mix_'+Dataset.objects.get(name=dn).data_path)
gap=list(ProbeID.objects.filter(platform__name=pform).filter(Gene_symbol="GAPDH").order_by('id').values_list('offset',flat=True))
raw_val=np.load(pth.as_posix(),mmap_mode='r')
if(ratio_flag==1):
norm=raw_val[np.ix_(gap)]
raw_val=np.subtract(raw_val,np.mean(norm,axis=0, dtype=np.float64,keepdims=True))
opened_val.append(raw_val)
temp=raw_val[np.ix_(probe_offset,list(goffset))]
if (len(a_data)!=0 ) and (len(temp)!=0):
a_data=np.concatenate((a_data,temp),axis=1)
elif (len(temp)!=0):
a_data=raw_val[np.ix_(probe_offset,list(goffset))]
else:
temp=opened_val[opened_name.index(dn)][np.ix_(probe_offset,list(goffset))]
if (len(a_data)!=0 ) and (len(temp)!=0):
a_data=np.concatenate((a_data,temp),axis=1)
elif (len(temp)!=0):
a_data=opened_val[opened_name.index(dn)][np.ix_(probe_offset,list(goffset))]
elif dn in clinic:
#print("I am in clinical part")
com_hists=list(set(request.POST.getlist('primd_'+dn+'_g'+str(i)))) #can I get this by label to reduce number of queries?
com_hists=[w1 for segments in com_hists for w1 in segments.split('/')]
prims=com_hists[0::2]
hists=com_hists[1::2]
temp=request.POST.getlist('filter_'+dn+'_g'+str(i))
age=[]
gender=[]
ethnic=[]
grade=[]
stage=[]
T=[]
N=[]
M=[]
metas=[]
for t in temp:
if 'stage/' in t:
stage.append(t[6:])
elif 'gender/' in t:
gender.append(t[7:])
elif 'ethnic/' in t:
ethnic.append(t[7:])
elif 'grade/' in t:
grade.append(t[6:])
elif 'stageT/' in t:
T.append(t[7:])
elif 'stageN/' in t:
N.append(t[7:])
elif 'stageM/' in t:
M.append(t[7:])
elif 'metastatic/' in t:
metas.append(t[11:])
'''
if t[11:]=='False':
metas.append(0)
else:
metas.append(1)
'''
else: #"age/"
age.append(t[4:])
cgoffset=[]
for x in range(0,len(prims)):
s=Clinical_sample.objects.filter(dataset_id__name=dn,primary_site=prims[x],
primary_hist=hists[x],
age__in=age,
gender__in=gender,
ethnic__in=ethnic,
stage__in=stage,
grade__in=grade,
stageT__in=T,
stageN__in=N,
stageM__in=M,
metastatic__in=metas,
).select_related('dataset_id').order_by('id')
s_group_dict['g'+str(i)]=list(s)+s_group_dict['g'+str(i)]
cgoffset+=list(s.values_list('offset',flat=True))
if dn not in opened_name: #check if the file is opened
#print("opend file!!")
opened_name.append(dn)
if(return_page_flag==1):
pth=Path('../').resolve().joinpath('src','gene_'+Clinical_Dataset.objects.get(name=dn).data_path)
if(quantile_flag==1):
pth=Path('../').resolve().joinpath('src','mix_gene_'+Clinical_Dataset.objects.get(name=dn).data_path)
elif(ratio_flag==1):
pth=Path('../').resolve().joinpath('src','mix_gene_'+Clinical_Dataset.objects.get(name=dn).data_path)
gap=[Gene.objects.filter(platform__name=pform,symbol="GAPDH")[0].offset]
else:
pth=Path('../').resolve().joinpath('src',Clinical_Dataset.objects.get(name=dn).data_path)
if(quantile_flag==1):
pth=Path('../').resolve().joinpath('src','mix_'+Clinical_Dataset.objects.get(name=dn).data_path)
elif(ratio_flag==1):
pth=Path('../').resolve().joinpath('src','mix_'+Clinical_Dataset.objects.get(name=dn).data_path)
gap=list(ProbeID.objects.filter(platform__name=pform).filter(Gene_symbol="GAPDH").order_by('id').values_list('offset',flat=True))
raw_val=np.load(pth.as_posix(),mmap_mode='r')
if(ratio_flag==1):
norm=raw_val[np.ix_(gap)]
raw_val=np.subtract(raw_val,np.mean(norm,axis=0, dtype=np.float64,keepdims=True))
opened_val.append(raw_val)
temp=raw_val[np.ix_(probe_offset,list(cgoffset))]
#print(temp)
if (len(a_data)!=0 ) and (len(temp)!=0):
a_data=np.concatenate((a_data,temp),axis=1)
elif (len(temp)!=0):
a_data=raw_val[np.ix_(probe_offset,list(cgoffset))]
else:
temp=opened_val[opened_name.index(dn)][np.ix_(probe_offset,list(cgoffset))]
if (len(a_data)!=0 ) and (len(temp)!=0):
a_data=np.concatenate((a_data,temp),axis=1)
elif (len(temp)!=0):
a_data=opened_val[opened_name.index(dn)][np.ix_(probe_offset,list(cgoffset))]
val.append(a_data.tolist())
#print(len(val))
#print(len(val[0]))
##print(val)
#run the one way ANOVA test or ttest for every probe base on the platform selected
express={}
#logger.info('run ttest or anova')
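# With two groups, a per-probe Welch t-test (equal_var=False) is run between
# val[0] and val[1] and index [1] keeps the two-sided p-value; with more than
# two groups, a one-way ANOVA is run across the per-group expression lists.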
if group_counter<=2:
for i in range(0,len(all_probe)): #need to fix if try to run on laptop
presult[all_probe[i]]=stats.ttest_ind(list(val[0][i]),list(val[1][i]),equal_var=False,nan_policy='omit')[1]
express[all_probe[i]]=np.append(val[0][i],val[1][i]).tolist()
else:
for i in range(0,len(all_probe)): #need to fix if try to run on laptop
to_anova=[]
for n in range(0,group_counter):
#val[n]=sum(val[n],[])
to_anova.append(val[n][i])
presult[all_probe[i]]=stats.f_oneway(*to_anova)[1]
express[all_probe[i]]=sum(to_anova,[])
#print("test done")
#sort the dictionary with p-value and need to get the expression data again (top20)
#presult[all_probe[0]]=float('nan')
#presult[all_probe[11]]=float('nan')
#how to deal with all "nan"?
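# The next three lines round-trip presult through a DataFrame to map NaN
# p-values to +inf so they sort last; tempf['pvalue'].fillna(np.inf) would be
# an equivalent pandas alternative.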
tempf=pd.DataFrame(list(presult.items()), columns=['probe', 'pvalue'])
tempf=tempf.replace(to_replace=float('nan'),value=float('+inf'))
presult=dict(zip(tempf.probe, tempf.pvalue))
sortkey=sorted(presult,key=presult.get) #can optimize here
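# Hedged optimization sketch (commented out, not wired in): when only the top
# `stop_end` probes are needed (see the loop below), a partial selection avoids
# sorting every key:
# import heapq
# sortkey = heapq.nsmallest(stop_end, presult, key=presult.get)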
counter=1
cell_probe_val=[]
for w in sortkey:
#print(presult[w],":",w)
if (presult[w]<pro_number):
cell_probe_val.append([w,presult[w]])
print(cell_probe_val)
express_mean=np.mean(np.array(express[w]))
expression.append(list((np.array(express[w]))-express_mean))
if(return_page_flag==1):
probe_out.append(w)
else:
probe_out.append(w.Probe_id+"("+w.Gene_symbol+")")
counter+=1
else:
break
if counter>=stop_end:
break
n_counter=1
for n in group_name:
sample_counter=1
for s in s_group_dict[n]:
dataset_n=s.dataset_id.name
if dataset_n=="Sanger Cell Line Project":
sample_out.append(s.cell_line_id.name+"(SCLP)(group"+str(n_counter)+"-"+str(sample_counter)+")")
elif dataset_n in clline:
#print(s.cell_line_id.name+"("+s.dataset_id.name+")"+"(group"+str(n_counter)+"-"+str(sample_counter)+")")
sample_out.append(s.cell_line_id.name+"("+s.dataset_id.name+")"+"(group"+str(n_counter)+"-"+str(sample_counter)+")")
else: #what to output for clinical part?
#print(s.name+"("+s.dataset_id.name+")"+"(group"+str(n_counter)+"-"+str(sample_counter)+")")
sample_out.append(s.name+"("+s.dataset_id.name+")"+"(group"+str(n_counter)+"-"+str(sample_counter)+")")
sample_counter+=1
n_counter+=1
#logger.info('finish combine output samples')
sns.set(font="monospace")
test=pd.DataFrame(data=expression,index=probe_out,columns=sample_out)
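# Custom colormap for the heatmap: the channel breakpoints below run from
# green at the low end, through near-black at the midpoint, to red at the
# high end.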
cdict = {'red': ((0.0, 0.0, 0.0),
(0.5, 0.0, 0.1),
(1.0, 1.0, 1.0)),
'blue': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'green': ((0.0, 0.0, 1.0),
(0.5, 0.1, 0.0),
(1.0, 0.0, 0.0))
}
my_cmap = LinearSegmentedColormap('my_colormap',cdict,256)
#test.to_csv('heatmap_text.csv')
try:
#g = sns.clustermap(test,cmap=my_cmap)
if(len(probe_out)<=300):
g = sns.clustermap(test,cmap=my_cmap,xticklabels=list(test.columns),yticklabels=(test.index),figsize=(19.20,48.60))
else:
g = sns.clustermap(test,cmap=my_cmap,xticklabels=list(test.columns),yticklabels=(test.index),figsize=(30.00,100.00))
except Exception:
if((return_page_flag==1) and (probe_out==[])):
probe_out=indata #probe_out is the rows of heatmap
return render_to_response('noprobe.html',RequestContext(request,
{
'user_probe_flag':user_probe_flag,
'return_page_flag':return_page_flag,
'probe_out':probe_out,
'not_found':not_found
}))
'''
plt.setp(g.ax_heatmap.get_yticklabels(), rotation=0)
if counter>=stop_end:
plt.setp(g.ax_heatmap.yaxis.get_majorticklabels(), fontsize=4)
else:
plt.setp(g.ax_heatmap.yaxis.get_majorticklabels(), fontsize=7)
plt.setp(g.ax_heatmap.get_xticklabels(), rotation=270,ha='center')
'''
sid=str(uuid.uuid1())+".png"
#print(sid)
P=Path('../').resolve().joinpath('src','static','image',sid)
assP=Path('../').resolve().joinpath('src','assets','image',sid)
#g.savefig(str(P))
#plt.figure(figsize=(1920/my_dpi, 2160/my_dpi), dpi=100)
#plt.savefig(str(assP), dpi=my_dpi*10)
g.savefig(str(assP),bbox_inches='tight')
file_name=sid
return render_to_response('heatmap.html',RequestContext(request,
{
'user_probe_flag':user_probe_flag,
'return_page_flag':return_page_flag,
'cell_probe_val':cell_probe_val,
'file_name':file_name,
'pro_number':pro_number,
'not_found':not_found
}))
def pca(request):
propotion=0
table_propotion=0
pform=request.POST['data_platform'] #get the platform
show=request.POST['show_type'] #get the pca show type
group_counter=1
cell_line_dict={}
#count how many group
group_counter=1
while True:
temp_name='dataset_g'+str(group_counter)
if temp_name in request.POST:
group_counter=group_counter+1
else:
group_counter=group_counter-1
break
udgroup=[]
s_group_dict={} #store sample
group_name=[]
offset_group_dict={} #store offset
clinic=list(Clinical_Dataset.objects.all().values_list('name',flat=True))
clline=list(Dataset.objects.all().values_list('name',flat=True))
all_exist_dataset=[]
for i in range(1,group_counter+1):
udgroup.append(request.POST['group_name'+str(i)])
#print(udgroup)
if(udgroup[-1]==''):
udgroup[-1]='Group'+str(i)
dname='dataset_g'+str(i)
all_exist_dataset=all_exist_dataset+request.POST.getlist(dname)
all_exist_dataset=list(set(all_exist_dataset))
all_base=[0]
for i in range(0,len(all_exist_dataset)-1):
if all_exist_dataset[i] in clline:
all_base.append(all_base[i]+Sample.objects.filter(dataset_id__name__in=[all_exist_dataset[i]]).count())
else:
all_base.append(all_base[i]+Clinical_sample.objects.filter(dataset_id__name__in=[all_exist_dataset[i]]).count())
all_c=[]
for i in range(1,group_counter+1):
s_group_dict['g'+str(i)]=[]
offset_group_dict['g'+str(i)]=[]
cell_line_dict['g'+str(i)]=[]
dname='dataset_g'+str(i)
datasets=request.POST.getlist(dname)
group_name.append('g'+str(i))
goffset_nci=[]
goffset_gse=[]
for dn in datasets:
if dn=='Sanger Cell Line Project':
c='select_sanger_g'+str(i)
elif dn=='NCI60':
c='select_nci_g'+str(i)
elif dn=='GSE36133':
c='select_ccle_g'+str(i)
if dn in clline:
temp=list(set(request.POST.getlist(c)))
if 'd_sample' in show:
if all_c==[]:
all_c=all_c+temp
uni=temp
else:
uni=list(set(temp)-set(all_c))
all_c=all_c+uni
else:
uni=list(temp) #keep duplicates; cross-group de-duplication is only applied for the sample display type
s=Sample.objects.filter(cell_line_id__name__in=uni,dataset_id__name__in=[dn]).order_by('dataset_id'
).select_related('cell_line_id__name','cell_line_id__primary_site','cell_line_id__primary_hist','dataset_id','dataset_id__name')
cell_line_dict['g'+str(i)]=cell_line_dict['g'+str(i)]+list(s.values_list('cell_line_id__name',flat=True))
s_group_dict['g'+str(i)]=s_group_dict['g'+str(i)]+list(s)
offset_group_dict['g'+str(i)]=offset_group_dict['g'+str(i)]+list(np.add(list(s.values_list('offset',flat=True)),all_base[all_exist_dataset.index(dn)]))
else: #dealing with clinical sample datasets
com_hists=list(set(request.POST.getlist('primd_'+dn+'_g'+str(i)))) #can I get this by label to reduce number of queries?
com_hists=[w1 for segments in com_hists for w1 in segments.split('/')]
prims=com_hists[0::2]
hists=com_hists[1::2]
temp=request.POST.getlist('filter_'+dn+'_g'+str(i))
age=[]
gender=[]
ethnic=[]
grade=[]
stage=[]
T=[]
N=[]
M=[]
metas=[]
for t in temp:
if 'stage/' in t:
stage.append(t[6:])
elif 'gender/' in t:
gender.append(t[7:])
elif 'ethnic/' in t:
ethnic.append(t[7:])
elif 'grade/' in t:
grade.append(t[6:])
elif 'stageT/' in t:
T.append(t[7:])
elif 'stageN/' in t:
N.append(t[7:])
elif 'stageM/' in t:
M.append(t[7:])
elif 'metastatic/' in t:
metas.append(t[11:])
'''
if t[11:]=='False':
metas.append(0)
else:
metas.append(1)
'''
else: #"age/"
age.append(t[4:])
for x in range(0,len(prims)):
s=Clinical_sample.objects.filter(dataset_id__name=dn,primary_site=prims[x],
primary_hist=hists[x],
age__in=age,
gender__in=gender,
ethnic__in=ethnic,
stage__in=stage,
grade__in=grade,
stageT__in=T,
stageN__in=N,
stageM__in=M,
metastatic__in=metas,
).select_related('dataset_id').order_by('id')
s_group_dict['g'+str(i)]=s_group_dict['g'+str(i)]+list(s)
cell_line_dict['g'+str(i)]=cell_line_dict['g'+str(i)]+list(s.values_list('name',flat=True))
offset_group_dict['g'+str(i)]=offset_group_dict['g'+str(i)]+list(np.add(list(s.values_list('offset',flat=True)),all_base[all_exist_dataset.index(dn)]))
all_sample=[]
all_cellline=[]
cell_object=[]
all_offset=[]
sample_counter={}
group_cell=[]
g_s_counter=[0]
for i in range(1,group_counter+1):
all_sample=all_sample+s_group_dict['g'+str(i)] #will not exist duplicate sample if d_sample
all_offset=all_offset+offset_group_dict['g'+str(i)]
all_cellline=all_cellline+cell_line_dict['g'+str(i)]
g_s_counter.append(g_s_counter[i-1]+len(s_group_dict['g'+str(i)]))
if 'd_sample' in show:
if((len(all_sample))<4):
error_reason='PCA requires at least 4 samples, but not enough samples were selected.<br />'\
'The number of samples you selected is '+str(len(all_sample))+'.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
for i in all_sample:
sample_counter[i.name]=1
if str(type(i))=="<class 'probes.models.Sample'>":
##print("i am sample!!")
cell_object.append(i.cell_line_id)
else:
##print("i am clinical!!")
cell_object.append(i)
#delete nan, transpose matrix
##open file
for x in range(0,len(all_exist_dataset)):
if all_exist_dataset[x] in clline:
pth=Path('../').resolve().joinpath('src',Dataset.objects.get(name=all_exist_dataset[x]).data_path)
else:
pth=Path('../').resolve().joinpath('src',Clinical_Dataset.objects.get(name=all_exist_dataset[x]).data_path)
if x==0:
val=np.load(pth.as_posix())
else:
val=np.hstack((val, np.load(pth.as_posix())))#combine together
if 'd_sample' in show:
val=val[:,all_offset]
#val=val[~np.isnan(val).any(axis=1)]
val=np.transpose(val)
pca_index=[]
dis_offset=[]
#PREMISE: within a dataset, a given cell line has only one primary site and one primary histology
name1=[]
name2=[]
name3=[]
name4=[]
name5=[]
X1=[]
Y1=[]
Z1=[]
X2=[]
Y2=[]
Z2=[]
X3=[]
Y3=[]
Z3=[]
X4=[]
Y4=[]
Z4=[]
X5=[]
Y5=[]
Z5=[]
if(len(all_exist_dataset)==1):
n=3 #need to fix to the best one #need to fix proportion
else:
n=4
#logger.info('pca show')
if 'd_sample' in show:
#count the pca first
pca= PCA(n_components=n)
Xval = pca.fit_transform(val[:,:]) #cannot get Xval with original all_offset any more
ratio_temp=pca.explained_variance_ratio_
propotion=sum(ratio_temp[n-3:n])
table_propotion=sum(ratio_temp[0:n])
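# explained_variance_ratio_[i] is the fraction of total variance carried by
# component i; propotion sums the last three of the n retained components
# (what the 3-D plot shows), while table_propotion sums all n components.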
##print(Xval)
##print(all_cellline)
##print(all_sample)
max=0
min=10000000000
out_group=[]
exist_cell={}#cell line object:counter
for g in range(1,group_counter+1):
output_cell={}
check={}
for s in range(g_s_counter[g-1],g_s_counter[g]):
if str(type(all_sample[s]))=="<class 'probes.models.Sample'>":
cell=all_sample[s].cell_line_id
else:
cell=all_sample[s]
try:
counter=exist_cell[cell]
exist_cell[cell]=counter+1
except KeyError:
exist_cell[cell]=1
try:
t=output_cell[cell]
except KeyError:
output_cell[cell]=[cell,[]]
check[all_sample[s].name]=[]
sample_counter[all_sample[s].name]=exist_cell[cell]
for i in range(0,len(all_sample)):
if i!=s:
try:
if(all_sample[s].name not in check[all_sample[i].name]):
distance=np.linalg.norm(Xval[i][n-3:n]-Xval[s][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[cell][1].append([all_cellline[s]+'('+str(exist_cell[cell])+')'
,all_sample[s].name,all_sample[s].dataset_id.name,all_cellline[i],all_sample[i].name,all_sample[i].dataset_id.name,distance,cell_object[i]])
check[all_sample[s].name].append(all_sample[i].name)
except KeyError:
distance=np.linalg.norm(Xval[i][n-3:n]-Xval[s][n-3:n])
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[cell][1].append([all_cellline[s]+'('+str(exist_cell[cell])+')'
,all_sample[s].name,all_sample[s].dataset_id.name,all_cellline[i],all_sample[i].name,all_sample[i].dataset_id.name,distance,cell_object[i]])
check[all_sample[s].name].append(all_sample[i].name)
if(g==1):
name1.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X1.append(round(Xval[s][n-3],5))
Y1.append(round(Xval[s][n-2],5))
Z1.append(round(Xval[s][n-1],5))
elif(g==2):
name2.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X2.append(round(Xval[s][n-3],5))
Y2.append(round(Xval[s][n-2],5))
Z2.append(round(Xval[s][n-1],5))
elif(g==3):
name3.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X3.append(round(Xval[s][n-3],5))
Y3.append(round(Xval[s][n-2],5))
Z3.append(round(Xval[s][n-1],5))
elif(g==4):
name4.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X4.append(round(Xval[s][n-3],5))
Y4.append(round(Xval[s][n-2],5))
Z4.append(round(Xval[s][n-1],5))
elif(g==5):
name5.append(all_cellline[s]+'('+str(exist_cell[cell])+')'+'<br>'+all_sample[s].name)
X5.append(round(Xval[s][n-3],5))
Y5.append(round(Xval[s][n-2],5))
Z5.append(round(Xval[s][n-1],5))
dictlist=[]
for key, value in output_cell.items():
temp = [value]
dictlist+=temp
output_cell=list(dictlist)
out_group.append([g,output_cell])
element_counter=0
#[g,[[group_cell_line,[paired_cellline,......,]],[],[]]]
for i in out_group:
for temp_list in i[1]:
element_counter+=len(temp_list[1])
for temp in temp_list[1]:
##print(temp)
temp[3]=temp[3]+'('+str(sample_counter[temp[4]])+')'
return_html='pca.html'
else:
#This part is for centroid display
return_html='pca_center.html'
element_counter=0
#val=val[~np.isnan(val).any(axis=1)] #bottle neck???
#This part is for select cell line base on dataset,count centroid base on the dataset
#compute the centroid per cell line within each group
#logger.info('pca show centroid with selection')
location_dict={} #{group number:[[cell object,dataset,new location]]}
combined=[]
sample_list=[]
pca_index=np.array(pca_index)
X_val=[]
a_all_offset=np.array(all_offset)
for i in range(1,group_counter+1):
dis_cellline=list(set(cell_object[g_s_counter[i-1]:g_s_counter[i]])) #cell object may have duplicate cell line since:NCI A + CCLE A===>[A,A]
location_dict['g'+str(i)]=[]
dataset_dict={}
a_cell_object=np.array(cell_object)
for c in dis_cellline: #dis_cellline may not have the same order as cell_object
temp1=np.where((a_cell_object==c))[0]
temp2=np.where((temp1>=g_s_counter[i-1])&(temp1<g_s_counter[i]))
total_offset=temp1[temp2]
selected_val=val[:,a_all_offset[total_offset]]
selected_val=np.transpose(selected_val)
new_loca=(np.mean(selected_val,axis=0,dtype=np.float64,keepdims=True)).tolist()[0]
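# new_loca is the centroid (feature-wise mean) of every selected sample that
# maps to this cell line within the current group.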
a_sample=np.array(all_sample)
selected_sample=a_sample[total_offset]
if list(selected_sample) in sample_list: #to prevent two different colors in different group
continue
else:
sample_list.append(list(selected_sample))
##print(selected_sample)
d_temp=[]
for s in selected_sample:
d_temp.append(s.dataset_id.name)
dataset_dict[c]="/".join(list(set(d_temp)))
##print(dataset_dict[c])
X_val.append(new_loca)
location_dict['g'+str(i)].append([c,dataset_dict[c],len(X_val)-1]) #the last part is the index to get pca result from new_val
combined.append([c,dataset_dict[c],len(X_val)-1]) #all cell line, do not matter order
#run the pca
##print(len(X_val))
if((len(X_val))<4):
error_reason='Since the display method is [centroid], PCA requires at least 4 centroid dots, but not enough were selected.<br />'\
'The number of centroid dots you selected is '+str(len(X_val))+'.'
return render_to_response('pca_error.html',RequestContext(request,
{
'error_reason':mark_safe(json.dumps(error_reason)),
}))
X_val=np.matrix(X_val)
pca= PCA(n_components=n)
new_val = pca.fit_transform(X_val[:,:]) #cannot get Xval with original offset any more
ratio_temp=pca.explained_variance_ratio_
propotion=sum(ratio_temp[n-3:n])
table_propotion=sum(ratio_temp[0:n])
##print(new_val)
out_group=[]
min=10000000000
max=0
for g in range(1,group_counter+1):
output_cell=[]
exist_cell={}
for group_c in location_dict['g'+str(g)]: #a list of [c,dataset_dict[c],new_val index] in group one
cell=group_c[0]
key_string=cell.name+'/'+cell.primary_site+'/'+cell.primary_hist+'/'+group_c[1]
exist_cell[key_string]=[]
output_cell.append([cell,[]])
#count the distance
for temp_list in combined:
c=temp_list[0]
temp_string=c.name+'/'+c.primary_site+'/'+c.primary_hist+'/'+temp_list[1]
try:
if(key_string not in exist_cell[temp_string]):
distance=np.linalg.norm(np.array(new_val[group_c[2]][n-3:n])-np.array(new_val[temp_list[2]][n-3:n]))
if distance==0:
continue
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([cell,group_c[1],temp_list[0],temp_list[1],distance])
element_counter+=1
exist_cell[key_string].append(temp_string)
except KeyError:
distance=np.linalg.norm(np.array(new_val[group_c[2]][n-3:n])-np.array(new_val[temp_list[2]][n-3:n]))
if distance==0:
continue
if distance<min:
min=distance
if distance>max:
max=distance
output_cell[len(output_cell)-1][1].append([cell,group_c[1],temp_list[0],temp_list[1],distance])
element_counter+=1
exist_cell[key_string].append(temp_string)
if(g==1):
name1.append(cell.name+'<br>'+group_c[1])
X1.append(round(new_val[group_c[2]][n-3],5))
Y1.append(round(new_val[group_c[2]][n-2],5))
Z1.append(round(new_val[group_c[2]][n-1],5))
elif(g==2):
name2.append(cell.name+'<br>'+group_c[1])
X2.append(round(new_val[group_c[2]][n-3],5))
Y2.append(round(new_val[group_c[2]][n-2],5))
Z2.append(round(new_val[group_c[2]][n-1],5))
elif(g==3):
name3.append(cell.name+'<br>'+group_c[1])
X3.append(round(new_val[group_c[2]][n-3],5))
Y3.append(round(new_val[group_c[2]][n-2],5))
Z3.append(round(new_val[group_c[2]][n-1],5))
elif(g==4):
name4.append(cell.name+'<br>'+group_c[1])
X4.append(round(new_val[group_c[2]][n-3],5))
Y4.append(round(new_val[group_c[2]][n-2],5))
Z4.append(round(new_val[group_c[2]][n-1],5))
elif(g==5):
name5.append(cell.name+'<br>'+group_c[1])
X5.append(round(new_val[group_c[2]][n-3],5))
Y5.append(round(new_val[group_c[2]][n-2],5))
Z5.append(round(new_val[group_c[2]][n-1],5))
out_group.append([g,output_cell])
#logger.info('end pca')
if(element_counter>show_row):
big_flag=1
sid=str(uuid.uuid1())+".csv"
if(return_html=='pca.html'):
dataset_header=['Group Cell Line/Clinical Sample','Sample Name','Primary Site','Primary Histology'
,'Dataset','Paired Cell Line name/Clinical Sample','Sample Name','Primary Site','Primary Histology','Dataset','Distance']
else:
dataset_header=['Group Cell Line/Clinical Sample','Primary Site','Primary Histology'
,'Dataset','Paired Cell Line name/Clinical Sample','Primary Site','Primary Histology','Dataset','Distance']
P=Path('../').resolve().joinpath('src','static','csv',sid)
assP=Path('../').resolve().joinpath('src','assets','csv',sid)
with open(str(assP), "w", newline='') as f:
writer = csv.writer(f)
for index,output_cell in out_group:
writer.writerows([[udgroup[index-1]]])
writer.writerows([dataset_header])
for cell_line,b in output_cell:
temp_b=[]
if(return_html=='pca.html'):
for group_cell,sn,dset,cname,sname,setname,dis,cell_object in b:
temp_b.append([group_cell,sn,cell_line.primary_site,cell_line.primary_hist,dset,cname
,sname,cell_object.primary_site,cell_object.primary_hist,setname,dis])
else:
for group_cell,group_dataset,paired_cell,paired_dataset,dis in b:
temp_b.append([group_cell.name,group_cell.primary_site,group_cell.primary_hist,group_dataset
,paired_cell.name,paired_cell.primary_site,paired_cell.primary_hist,paired_dataset,dis])
writer.writerows(temp_b)
#print('write first file done')
'''
with open(str(assP), "w", newline='') as ff:
writer = csv.writer(ff)
for index,output_cell in out_group:
writer.writerows([[udgroup[index-1]]])
writer.writerows([dataset_header])
for cell_line,b in output_cell:
temp_b=[]
if(return_html=='pca.html'):
for group_cell,sn,dset,cname,sname,setname,dis,cell_object in b:
temp_b.append([group_cell,sn,cell_line.primary_site,cell_line.primary_hist,dset,cname
,sname,cell_object.primary_site,cell_object.primary_hist,setname,dis])
else:
for group_cell,group_dataset,paired_cell,paired_dataset,dis in b:
temp_b.append([group_cell.name,group_cell.primary_site,group_cell.primary_hist,group_dataset
,paired_cell.name,paired_cell.primary_site,paired_cell.primary_hist,paired_dataset,dis])
writer.writerows(temp_b)
'''
#print('write second file done')
data_file_name=sid
else:
big_flag=0
data_file_name=0
return render_to_response(return_html,RequestContext(request,
{
'udgroup':udgroup,
'min':min,'max':max,
'out_group':out_group,
'propotion':propotion,
'big_flag':big_flag,
'data_file_name':data_file_name,
'table_propotion':table_propotion,
'X1':X1,'name1':mark_safe(json.dumps(name1)),
'Y1':Y1,'name2':mark_safe(json.dumps(name2)),
'Z1':Z1,'name3':mark_safe(json.dumps(name3)),
'X2':X2,'name4':mark_safe(json.dumps(name4)),
'Y2':Y2,'name5':mark_safe(json.dumps(name5)),
'Z2':Z2,
'X3':X3,
'Y3':Y3,
'Z3':Z3,
'X4':X4,
'Y4':Y4,
'Z4':Z4,
'X5':X5,
'Y5':Y5,
'Z5':Z5,
}))
def cellline_microarray(request):
# Pre-fetch the cell line field for all samples.
# Reduces N queries to 1, where N = number of samples.
d=Dataset.objects.all()
d_name=list(d.values_list('name',flat=True))
datasets=[] #[[dataset_name,[[primary_site,[cell line]]]]
an=[]
for i in d_name:
if i=="Sanger Cell Line Project":
alias='sanger'
elif i=="NCI60":
alias='nci'
elif i=="GSE36133":
alias='gse'
else:
alias=i
an.append(alias)
sample=Sample.objects.filter(dataset_id__name=i).order_by('cell_line_id__primary_site').select_related('cell_line_id')
datasets.append([i,alias,list(sample),[]])
sites=list(sample.values_list('cell_line_id__primary_site',flat=True))
hists=list(sample.values_list('cell_line_id__name',flat=True))
dis_prim=list(sample.values_list('cell_line_id__primary_site',flat=True).distinct())
hists=list(hists)
id_counter=0
for p in range(0,len(dis_prim)):
temp=sites.count(dis_prim[p])
datasets[-1][3].append([dis_prim[p],list(set(hists[id_counter:id_counter+temp]))])
id_counter+=temp
return render(request, 'cellline_microarray.html', {
'an':mark_safe(json.dumps(an)),
'd_name':d_name,
'datasets':datasets,
})
def cell_lines(request):
#samples = Sample.objects.all().select_related('cell_line_id','dataset_id')
#lines=CellLine.objects.all().distinct()
#val_pairs = (
# (l, l.fcell_line_id.prefetch_related('dataset_id__name').values_list('dataset_id__name',flat=True).distinct())
# for l in lines
# )
#context['val_pairs']=val_pairs
cell_line_dict={}
context={}
nr_samples=[]
samples=Sample.objects.all().select_related('cell_line_id','dataset_id').order_by('id')
for ss in samples:
name=ss.cell_line_id.name
primary_site=ss.cell_line_id.primary_site
primary_hist=ss.cell_line_id.primary_hist
comb=name+"/"+primary_site+"/"+primary_hist
dataset=ss.dataset_id.name
try:
sets=cell_line_dict[comb]
if (dataset not in sets):
cell_line_dict[comb]=dataset+"/"+sets
except KeyError:
cell_line_dict[comb]=dataset
nr_samples.append(ss)
val_pairs = (
(ss,cell_line_dict[ss.cell_line_id.name+"/"+ss.cell_line_id.primary_site+"/"+ss.cell_line_id.primary_hist])
for ss in nr_samples
)
context['val_pairs']=val_pairs
return render_to_response('cell_line.html', RequestContext(request, context))
def clinical_search(request):
norm_name=[request.POST['normalize']] #get the normalize gene name
#f_type=['age','gender','ethnic','grade','stage','stageT','stageN','stageM','metastatic']
age=[]
gender=[]
ethnic=[]
grade=[]
stage=[]
T=[]
N=[]
M=[]
metas=[]
#get the probe/gene/id keywords
if 'keyword' in request.POST and request.POST['keyword'] != '':
words = request.POST['keyword']
words = list(set(words.split()))
else:
return HttpResponse("<p>where is your keyword?</p>")
plus2_rank=np.load('ranking_u133plus2.npy') #open only plus2 platform rank
sample_probe_val_pairs=[] #for output
if 'gtype' in request.POST and request.POST['gtype'] == 'probeid':
gene = ProbeID.objects.filter(platform__name__in=["PLUS2"]).filter(Probe_id__in=words).order_by('id')
probe=list(gene.values_list('offset',flat=True))
##print(gene)
elif 'gtype' in request.POST and request.POST['gtype'] == 'symbol':
gene = ProbeID.objects.filter(platform__name__in=["PLUS2"]).filter(Gene_symbol__in=words).order_by('id')
probe=list(gene.values_list('offset',flat=True))
else:
gene = ProbeID.objects.filter(platform__name__in=["PLUS2"]).filter(Entrez_id__in=words).order_by('id')
probe=list(gene.values_list('offset',flat=True))
if request.POST['clinical_method'] == 'prim_dataset':
if 'dataset' in request.POST and request.POST['dataset'] != '':
datas=request.POST.getlist('dataset')
else:
d=Clinical_Dataset.objects.all()
datas=d.values_list('name',flat=True)
com_hists=list(set(request.POST.getlist('primhist')))
com_hists=[w1 for segments in com_hists for w1 in segments.split('/')]
prims=com_hists[0::2]
hists=com_hists[1::2]
temp=request.POST.getlist('filter_primh')
for i in temp:
if 'stage/' in i:
stage.append(i[6:])
elif 'gender/' in i:
gender.append(i[7:])
elif 'ethnic/' in i:
ethnic.append(i[7:])
elif 'grade/' in i:
grade.append(i[6:])
elif 'stageT/' in i:
T.append(i[7:])
elif 'stageN/' in i:
N.append(i[7:])
elif 'stageM/' in i:
M.append(i[7:])
elif 'metastatic/' in i:
metas.append(i[11:])
'''
if i[11:]=='False':
metas.append(0)
else:
metas.append(1)
'''
else: #"age/"
age.append(i[4:])
for sets in datas:
samples=[]
offset=[]
if request.POST['clinical_method'] == 'prim_dataset':
com_hists=list(set(request.POST.getlist('primd_'+sets))) #can I get this by label to reduce number of queries?
com_hists=[w1 for segments in com_hists for w1 in segments.split('/')]
prims=com_hists[0::2]
hists=com_hists[1::2]
temp=request.POST.getlist('filter_'+sets)
age=[]
gender=[]
ethnic=[]
grade=[]
stage=[]
T=[]
N=[]
M=[]
metas=[]
for i in temp:
if 'stage/' in i:
stage.append(i[6:])
elif 'gender/' in i:
gender.append(i[7:])
elif 'ethnic/' in i:
ethnic.append(i[7:])
elif 'grade/' in i:
grade.append(i[6:])
elif 'stageT/' in i:
T.append(i[7:])
elif 'stageN/' in i:
N.append(i[7:])
elif 'stageM/' in i:
M.append(i[7:])
elif 'metastatic/' in i:
metas.append(i[11:])
'''
if i[11:]=='False':
metas.append(0)
else:
metas.append(1)
'''
else: #"age/"
age.append(i[4:])
for i in range(0,len(prims)):
#metas=[bool(x) for x in metas]
s=Clinical_sample.objects.filter(dataset_id__name=sets,primary_site=prims[i],
primary_hist=hists[i],
age__in=age,
gender__in=gender,
ethnic__in=ethnic,
stage__in=stage,
grade__in=grade,
stageT__in=T,
stageN__in=N,
stageM__in=M,
metastatic__in=metas
).select_related('dataset_id').order_by('id')
samples+=list(s)
offset+=list(s.values_list('offset',flat=True))
##print(s)
pth=Path('../').resolve().joinpath('src',Clinical_Dataset.objects.get(name=sets).data_path)
val=np.load(pth.as_posix(),mmap_mode='r')
norm_probe=ProbeID.objects.filter(platform__name__in=["PLUS2"]).filter(Gene_symbol__in=norm_name).order_by('id')
probe_offset=list(norm_probe.values_list('offset',flat=True))
temp=val[np.ix_(probe_offset,offset)]
norm=np.mean(temp,axis=0, dtype=np.float64,keepdims=True)
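# norm is, per selected sample, the mean expression of the normalization
# gene's probes (request.POST['normalize']); it is subtracted from the query
# probes below, which assumes the expression values are on a log scale.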
# Make a generator to generate all (cell, probe, val) pairs
if(len(gene)!=0 and len(samples)!=0):
raw_test=val[np.ix_(probe,offset)]
normalize=np.subtract(raw_test,norm)#dimension different!!!!
sample_probe_val_pairs += [
(c, p, raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==raw_test[probe_ix, cell_ix])[0],normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(gene)
for cell_ix, c in enumerate(samples)
]
return render(request, 'clinical_search.html', {
'sample_probe_val_pairs': sample_probe_val_pairs,
})
def data(request):
SANGER=[]
sanger_flag=0
NCI=[]
nci_flag=0
GSE=[]
gse_flag=0
cell=[]
ncicell=[]
CCcell=[]
ps_id='0'
pn_id='0'
if request.POST.get('cell_line_method','text') == 'text':
if request.POST['cellline'] =='':
return HttpResponse("<p>please make sure to enter cell line name in Step3.</p>" )
c = request.POST['cellline']
c = list(set(c.split()))
sanger_flag=1
samples=Sample.objects.filter(dataset_id__name__in=['Sanger Cell Line Project']).order_by('id')
cell=samples.select_related('cell_line_id','dataset_id').filter(cell_line_id__name__in=c).order_by('id')
offset=list(cell.values_list('offset',flat=True))
ps_id='1'
nci_flag=1
ncisamples=Sample.objects.filter(dataset_id__name__in=['NCI60']).select_related('cell_line_id','dataset_id').order_by('id')
ncicell=ncisamples.filter(cell_line_id__name__in=c).order_by('id')
ncioffset=list(ncicell.values_list('offset',flat=True))
pn_id='3'
gse_flag=1
CCsamples=Sample.objects.filter(dataset_id__name__in=['GSE36133']).select_related('cell_line_id','dataset_id').order_by('id')
CCcell=CCsamples.filter(cell_line_id__name__in=c).order_by('id')
CCoffset=list(CCcell.values_list('offset',flat=True))
pn_id='3'
else:
if 'dataset' in request.POST and request.POST['dataset'] != '':
datas=request.POST.getlist('dataset')
if 'Sanger Cell Line Project' in datas:
sanger_flag=1
SANGER=list(set(request.POST.getlist('select_sanger')))
samples=Sample.objects.filter(dataset_id__name__in=['Sanger Cell Line Project']).order_by('id')
cell=samples.select_related('cell_line_id','dataset_id').filter(cell_line_id__name__in=SANGER).order_by('id')
offset=list(cell.values_list('offset',flat=True))
ps_id=str(Platform.objects.filter(name__in=["U133A"])[0].id)
if 'NCI60' in datas:
nci_flag=1
NCI=list(set(request.POST.getlist('select_nci')))
ncisamples=Sample.objects.filter(dataset_id__name__in=['NCI60']).select_related('cell_line_id','dataset_id').order_by('id')
ncicell=ncisamples.filter(cell_line_id__name__in=NCI).order_by('id')
ncioffset=list(ncicell.values_list('offset',flat=True))
pn_id=str(Platform.objects.filter(name__in=["PLUS2"])[0].id)
if 'GSE36133' in datas:
gse_flag=1
GSE=list(set(request.POST.getlist('select_gse')))
CCsamples=Sample.objects.filter(dataset_id__name__in=['GSE36133']).select_related('cell_line_id','dataset_id').order_by('id')
CCcell=CCsamples.filter(cell_line_id__name__in=GSE).order_by('id')
CCoffset=list(CCcell.values_list('offset',flat=True))
pn_id=str(Platform.objects.filter(name__in=["PLUS2"])[0].id)
if len(SANGER)==0 and len(NCI)==0 and len(GSE)==0:
return HttpResponse("<p>please select primary sites.</p>" )
else:
return HttpResponse("<p>please check Step3 again.</p>" )
if 'keyword' in request.POST and request.POST['keyword'] != '':
words = request.POST['keyword']
words = list(set(words.split()))
else:
return HttpResponse("<p>where is your keyword?</p>")
#open files
sanger_val_pth=Path('../').resolve().joinpath('src','sanger_cell_line_proj.npy')
nci_val_pth=Path('../').resolve().joinpath('src','nci60.npy')
gse_val_pth=Path('../').resolve().joinpath('src','GSE36133.npy')
sanger_val=np.load(sanger_val_pth.as_posix(),mmap_mode='r')
nci_val=np.load(nci_val_pth.as_posix(),mmap_mode='r')
gse_val=np.load(gse_val_pth.as_posix(),mmap_mode='r')
u133a_rank=np.load('ranking_u133a.npy')
plus2_rank=np.load('ranking_u133plus2.npy')
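# ranking_u133a.npy / ranking_u133plus2.npy appear to hold the platform-wide
# expression values in sorted order; below, a value's rank is reported as
# (array length - index located via np.where), e.g. 22216-... for U133A and
# 54614-... for PLUS2. (Interpretation assumed from usage, not documented.)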
gene = []
ncigene = []
CCgene = []
context={}
norm_name=[request.POST['normalize']]
if sanger_flag==1:
#if request.POST['normalize']!='NTRK3-AS1':
sanger_g=ProbeID.objects.filter(platform__in=ps_id).filter(Gene_symbol__in=norm_name).order_by('id')
sanger_probe_offset=list(sanger_g.values_list('offset',flat=True))
temp=sanger_val[np.ix_(sanger_probe_offset,offset)]
norm=np.mean(temp,axis=0, dtype=np.float64,keepdims=True)
#else:
# norm=0.0
else:
norm=0.0 #if / should = 1
if nci_flag==1:
nci_g=ProbeID.objects.filter(platform__in=pn_id).filter(Gene_symbol__in=norm_name).order_by('id')
nci_probe_offset=list(nci_g.values_list('offset',flat=True))
temp=nci_val[np.ix_(nci_probe_offset,ncioffset)]
nci_norm=np.mean(temp,axis=0, dtype=np.float64,keepdims=True)
##print(nci_norm)
else:
nci_norm=0.0 #if / should = 1
if gse_flag==1:
CC_g=ProbeID.objects.filter(platform__in=pn_id).filter(Gene_symbol__in=norm_name).order_by('id')
CC_probe_offset=list(CC_g.values_list('offset',flat=True))
temp=gse_val[np.ix_(CC_probe_offset,CCoffset)]
CC_norm=np.mean(temp,axis=0, dtype=np.float64,keepdims=True)
##print(CC_norm)
else:
CC_norm=0.0 #if / should = 1
#dealing with probes
if 'gtype' in request.POST and request.POST['gtype'] == 'probeid':
gene = ProbeID.objects.filter(platform__in=ps_id).filter(Probe_id__in=words).order_by('id')
probe_offset=list(gene.values_list('offset',flat=True))
ncigene = ProbeID.objects.filter(platform__in=pn_id).filter(Probe_id__in=words).order_by('id')
nciprobe_offset=list(ncigene.values_list('offset',flat=True))
#nci60 and ccle use the same probe set (ncigene) and nciprobe_offset
# Make a generator to generate all (cell, probe, val) pairs
if(len(gene)!=0 and len(cell)!=0):
raw_test=sanger_val[np.ix_(probe_offset,offset)]
normalize=np.subtract(raw_test,norm)#dimension different!!!!
#normalize=np.around(normalize, decimals=1)
cell_probe_val_pairs = (
(c, p, raw_test[probe_ix, cell_ix],22216-np.where(u133a_rank==raw_test[probe_ix, cell_ix])[0],normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(gene)
for cell_ix, c in enumerate(cell)
)
else:
cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(ncicell)!=0):
nci_raw_test=nci_val[np.ix_(nciprobe_offset,ncioffset)]
nci_normalize=np.subtract(nci_raw_test,nci_norm)
nci_cell_probe_val_pairs = (
(c, p, nci_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==nci_raw_test[probe_ix, cell_ix])[0],nci_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(ncicell)
)
else:
nci_cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(CCcell)!=0):
CC_raw_test=gse_val[np.ix_(nciprobe_offset,CCoffset)]
CC_normalize=np.subtract(CC_raw_test,CC_norm)
CC_cell_probe_val_pairs = (
(c, p, CC_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==CC_raw_test[probe_ix, cell_ix])[0],CC_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(CCcell)
)
else:
CC_cell_probe_val_pairs =()
context['cell_probe_val_pairs']=cell_probe_val_pairs
context['nci_cell_probe_val_pairs']=nci_cell_probe_val_pairs
context['CC_cell_probe_val_pairs']=CC_cell_probe_val_pairs
return render_to_response('data.html', RequestContext(request,context))
elif 'gtype' in request.POST and request.POST['gtype'] == 'symbol':
gene = ProbeID.objects.filter(platform__in=ps_id).filter(Gene_symbol__in=words).order_by('id')
probe_offset=gene.values_list('offset',flat=True)
ncigene = ProbeID.objects.filter(platform__in=pn_id).filter(Gene_symbol__in=words).order_by('id')
nciprobe_offset=ncigene.values_list('offset',flat=True)
#nci60 and ccle use the same probe set (ncigene) and nciprobe_offset
# Make a generator to generate all (cell, probe, val) pairs
if(len(gene)!=0 and len(cell)!=0):
raw_test=sanger_val[np.ix_(probe_offset,offset)]
normalize=np.subtract(raw_test,norm)
cell_probe_val_pairs = (
(c, p, raw_test[probe_ix, cell_ix],22216-np.where(u133a_rank==raw_test[probe_ix, cell_ix])[0],normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(gene)
for cell_ix, c in enumerate(cell)
)
else:
cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(ncicell)!=0):
nci_raw_test=nci_val[np.ix_(nciprobe_offset,ncioffset)]
nci_normalize=np.subtract(nci_raw_test,nci_norm)
nci_cell_probe_val_pairs = (
(c, p, nci_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==nci_raw_test[probe_ix, cell_ix])[0],nci_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(ncicell)
)
else:
nci_cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(CCcell)!=0):
CC_raw_test=gse_val[np.ix_(nciprobe_offset,CCoffset)]
CC_normalize=np.subtract(CC_raw_test,CC_norm)
CC_cell_probe_val_pairs = (
(c, p, CC_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==CC_raw_test[probe_ix, cell_ix])[0],CC_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(CCcell)
)
else:
CC_cell_probe_val_pairs =()
context['cell_probe_val_pairs']=cell_probe_val_pairs
context['nci_cell_probe_val_pairs']=nci_cell_probe_val_pairs
context['CC_cell_probe_val_pairs']=CC_cell_probe_val_pairs
return render_to_response('data.html', RequestContext(request,context))
elif 'gtype' in request.POST and request.POST['gtype'] == 'entrez':
gene = ProbeID.objects.filter(platform__in=ps_id).filter(Entrez_id__in=words).order_by('id')
probe_offset=gene.values_list('offset',flat=True)
ncigene = ProbeID.objects.filter(platform__in=pn_id).filter(Entrez_id__in=words).order_by('id')
nciprobe_offset=ncigene.values_list('offset',flat=True)
#nci60 and ccle use the same probe set (ncigene) and nciprobe_offset
# Make a generator to generate all (cell, probe, val) pairs
if(len(gene)!=0 and len(cell)!=0):
raw_test=sanger_val[np.ix_(probe_offset,offset)]
normalize=np.subtract(raw_test,norm)
cell_probe_val_pairs = (
(c, p, raw_test[probe_ix, cell_ix],22216-np.where(u133a_rank==raw_test[probe_ix, cell_ix])[0],normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(gene)
for cell_ix, c in enumerate(cell)
)
else:
cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(ncicell)!=0):
nci_raw_test=nci_val[np.ix_(nciprobe_offset,ncioffset)]
nci_normalize=np.subtract(nci_raw_test,nci_norm)
nci_cell_probe_val_pairs = (
(c, p, nci_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==nci_raw_test[probe_ix, cell_ix])[0],nci_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(ncicell)
)
else:
nci_cell_probe_val_pairs =()
if(len(ncigene)!=0 and len(CCcell)!=0):
CC_raw_test=gse_val[np.ix_(nciprobe_offset,CCoffset)]
CC_normalize=np.subtract(CC_raw_test,CC_norm)
CC_cell_probe_val_pairs = (
(c, p, CC_raw_test[probe_ix, cell_ix],54614-np.where(plus2_rank==CC_raw_test[probe_ix, cell_ix])[0],CC_normalize[probe_ix, cell_ix])
for probe_ix, p in enumerate(ncigene)
for cell_ix, c in enumerate(CCcell)
)
else:
CC_cell_probe_val_pairs =()
context['cell_probe_val_pairs']=cell_probe_val_pairs
context['nci_cell_probe_val_pairs']=nci_cell_probe_val_pairs
context['CC_cell_probe_val_pairs']=CC_cell_probe_val_pairs
return render_to_response('data.html', RequestContext(request,context))
else:
return HttpResponse(
"<p>keyword type not match with your keyword input</p>"
)
| mit |
xulesc/algos | knn/scripts/exp3.py | 1 | 6922 | # -*- coding: utf-8 -*-
"""
Created on Mon Jun 23 12:12:31 2014
@author: anuj
"""
print(__doc__)
from time import time
import numpy as np
import pylab as pl
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from sklearn.preprocessing import normalize
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn import linear_model, datasets
klas_file = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv.klass'
data_file = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv.num'
klasses = np.genfromtxt(klas_file, skip_header=1)
n_data = np.genfromtxt(data_file, usecols=range(2, 11), skip_header=1,delimiter=',', missing_values='?')
np.random.seed(42)
data = normalize(n_data[~np.isnan(n_data).any(axis=1)], axis = 0)
n_samples, n_features = [len(data), len(data[0])]
n_digits = len(np.unique(klasses))
labels = klasses[~np.isnan(n_data).any(axis=1)]
sample_size = int(10.0 * n_samples / 100)
print("n_digits: %d, \t n_samples %d, \t n_features %d"
% (n_digits, n_samples, n_features))
print(79 * '_')
print('% 9s' % 'init'
' time inertia homo compl v-meas ARI AMI')
def bench_k_means(estimator, name, data):
t0 = time()
estimator.fit(data)
print('% 9s %.2fs %i %.3f %.3f %.3f %.3f %.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_)))
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
name="random", data=data)
# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
name="PCA-based",
data=data)
print(79 * '_')
###############################################################################
# Visualize the results on PCA-reduced data
#reduced_data = PCA(n_components=2).fit_transform(data)
#kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
#kmeans.fit(reduced_data)
#
## Step size of the mesh. Decrease to increase the quality of the VQ.
#h = .02 # point in the mesh [x_min, m_max]x[y_min, y_max].
#
## Plot the decision boundary. For that, we will assign a color to each
#x_min, x_max = reduced_data[:, 0].min() + 1, reduced_data[:, 0].max() - 1
#y_min, y_max = reduced_data[:, 1].min() + 1, reduced_data[:, 1].max() - 1
#xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
#
## Obtain labels for each point in mesh. Use last trained model.
#Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
#
## Put the result into a color plot
#Z = Z.reshape(xx.shape)
#pl.figure(1)
#pl.clf()
#pl.imshow(Z, interpolation='nearest',
# extent=(xx.min(), xx.max(), yy.min(), yy.max()),
# cmap=pl.cm.Paired,
# aspect='auto', origin='lower')
#
#pl.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
## Plot the centroids as a white X
#centroids = kmeans.cluster_centers_
#pl.scatter(centroids[:, 0], centroids[:, 1],
# marker='x', s=169, linewidths=3,
# color='w', zorder=10)
#pl.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
# 'Centroids are marked with white cross')
#pl.xlim(x_min, x_max)
#pl.ylim(y_min, y_max)
#pl.xticks(())
#pl.yticks(())
#pl.show()
###############################################################################
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size=0.20, random_state=42)
neigh = KNeighborsClassifier(n_neighbors=5)
neigh.fit(data_train, labels_train)
#print neigh.score(data_test, labels_test)
pred = neigh.predict(data_test)
cm = confusion_matrix(labels_test, pred)
print(cm)
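# Overall accuracy can be read directly off the confusion matrix
# (equivalent to neigh.score(data_test, labels_test)):
# print(cm.trace() / float(cm.sum()))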
pl.matshow(cm)
pl.title('Confusion matrix')
pl.colorbar()
pl.ylabel('True label')
pl.xlabel('Predicted label')
pl.show()
###############################################################################
klas_file = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv.klass'
data_file = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv.non_num'
data_file2 = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv.num'
klasses = np.genfromtxt(klas_file, skip_header=1)
n_data = np.genfromtxt(data_file, skip_header=1,delimiter=',',dtype='|S5')
n_data_num = np.genfromtxt(data_file2, usecols=range(2, 11), skip_header=1,delimiter=',', missing_values='?')
#n_data = n_data[~np.isnan(n_data).any(axis=1)]
exc = np.isnan(n_data_num).any(axis=1)
n_data_num_n = normalize(n_data_num[~exc], axis = 0)
labels = klasses[~exc]
n_data2 = n_data[~exc]
n_data2 = [x[:len(x) - 1] for x in n_data2]
n_data2 = np.transpose(n_data2)
le = preprocessing.LabelEncoder()
n_data3 = [le.fit(d).transform(d) for d in n_data2]
##############
#f_data = np.transpose(n_data3)
f_data = n_data_num_n
#for x in np.transpose(n_data_num_n):
# f_data.append(x)
#f_data = np.transpose(f_data)
##############
data_train, data_test, labels_train, labels_test = train_test_split(f_data, labels, test_size=0.20, random_state=42)
neigh = KNeighborsClassifier(n_neighbors=1)
#neigh = MultinomialNB()
print('%d:%d\n' %(sum(labels_train),len(labels_train)))
neigh.fit(data_train, labels_train)
#print neigh.score(data_test, labels_test)
pred = neigh.predict(data_test)
print('%d:%d:%d:%d\n' %(sum(labels_test),len(labels_test),sum(pred),len(pred)))
cm = confusion_matrix(labels_test, pred)
print(cm)
pl.matshow(cm)
pl.title('Confusion matrix')
pl.colorbar()
pl.ylabel('True label')
pl.xlabel('Predicted label')
pl.show()
##############
###############################################################################
f = '/home/anuj/workspace.custom/assignment/data/dataset_diabetes/diabetic_data.csv'
d1 = np.genfromtxt(f, delimiter = ',', names=True)
names = d1.dtype.names
d = np.genfromtxt(f, delimiter = ',', dtype='|S5', skip_header=1)
dc = np.transpose(d)
for x, name in zip(dc, names):
print '%s: %d' %(name, len(np.unique(x)))
##
| gpl-3.0 |
RenqinCai/python_dataset | LR/shuffle_v8.py | 2 | 13109 | ###new function:shuffling the reviewing time and debug the program
###R is set to a constant value
import simplejson as json
import datetime
import time
import numpy as np
import math
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
from dateutil.relativedelta import *
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.cross_validation import cross_val_score
def string_toDatetime(string):
return datetime.datetime.strptime(string, "%Y-%m-%d")
def string_toYear(string):
return datetime.datetime.strptime(string[0:4], "%Y").date()
def string_toYearMonth(string):
return datetime.datetime.strptime(string[0:7], "%Y-%m").date()
def monthDiff(timeDate1, timeDate2):
return (timeDate1.year-timeDate2.year)*12 + (timeDate1.month-timeDate2.month)
def yearDiff(timeDate1, timeDate2):
return (timeDate1.year-timeDate2)
def betweenTime(timeDate, downTime, upTime):
if ((monthDiff(timeDate, downTime) < 0)or(monthDiff(upTime, timeDate) < 0)):
return False
else:
return True
#####
###userInfo is a dict keyed by user id:
###userInfo {user:{"sinceTime":sinceTime, "reviewTime":reviewTime, "active":0 or 1, "friends":[]}}
###reviewTime is the first time the user reviewed the business.
###reviewTime and active differ from business to business.
###timeUserData {time:[user]}
##########
def loadUser():
userInfo = {}
timeUserData = {}
defaultReviewTime = string_toYearMonth('2015-01')
defaultActive = 0
userSet = set()
userSum = 0
userFile = "../../dataset/user.json"
with open(userFile) as f:
for line in f:
userJson = json.loads(line)
user=userJson["user"]
friend = userJson["friends"]
sinceTime = string_toYearMonth(userJson["sinceTime"])
userInfo.setdefault(user, {})
userInfo[user]["sinceTime"] = sinceTime
userInfo[user]["reviewTime"] = defaultReviewTime
userInfo[user]["active"] = defaultActive
userInfo[user]["friends"] = []
timeUserData.setdefault(sinceTime, [])
timeUserData[sinceTime].append(user)
if friend:
for f in friend:
userInfo[user]["friends"].append(f)
userSum += 1
userSet.add(user)
userList = list(userSet)
print "load Friend"
print "total userSum %d"%userSum
return (userInfo, timeUserData, userSum, userList)
####load reviewData as format:{business:{reviewTime:[user]}}
####store reviewSum for a business as format: {business:businessSum}
###store timeReviewerDict_allBiz {time:[user]}
def loadReview():
reviewData = {}
reviewSum = {}
timeReviewerDict_allBiz = {}
reviewFile = "../../dataset/review.json"
with open(reviewFile) as f:
for line in f:
reviewJson = json.loads(line)
business = reviewJson["business"]
user = reviewJson["user"]
reviewTime = string_toYearMonth(reviewJson["date"])
reviewData.setdefault(business, {})
reviewData[business].setdefault(reviewTime, [])
reviewData[business][reviewTime].append(user)
timeReviewerDict_allBiz.setdefault(reviewTime, [])
timeReviewerDict_allBiz[reviewTime].append(user)
reviewSum.setdefault(business, 0)
reviewSum[business] += 1
return (reviewData, reviewSum, timeReviewerDict_allBiz)
###keep only businesses with more than 50 reviews in the period
####reviewList contains these businesses
def filterReviewData(reviewData, reviewSum):
print "review process"
reviewSet = set()
for business in reviewSum.keys():
bNum = reviewSum[business]
if bNum > 50:
reviewSet.add(business)
reviewList = list(reviewSet)
# finalBusinessList = list(finalBusinessSet)
print "end process"
return (reviewList)
####selectBusiness is a list containing the randomly selected index positions
def randomBusiness(totalNum, randomNum):
business = [i for i in range(totalNum)]
selectBusiness = []
for i in range(randomNum):
k = np.random.randint(0, totalNum-i)+i
temp = business[i]
business[i] = business[k]
business[k] = temp
selectBusiness.append(business[i])
return selectBusiness
#####map the index positions in selectBusiness (a list) back to business ids
###and store the business ids in selectBusinessList.
def randomSelectBusiness(reviewList, selectBusinessNum):
businessSet = set(reviewList)
businessLen = len(businessSet)
if businessLen < selectBusinessNum:
selectBusinessList = reviewList
else:
selectBusiness = randomBusiness(businessLen, selectBusinessNum)
selectBusinessList = [reviewList[i] for i in selectBusiness]
return selectBusinessList
def increMonth(baseMonth):
return baseMonth+relativedelta(months=+1)
###cut part of the dict and sort the dict
def SortDict_Time(timeValDict, userInfo):
sortedTimeValDict = {}
timeList = sorted(timeValDict)
timeSet = set(timeList)
timeUserDict_oneBiz = {}##{time:[user]}
periodList = []
timeUserLenDict = {}
WList_oneBiz = [] ##w in the paper
tempWList_oneBiz = []
WSet_oneBiz = set()
monthRange = 18
if(monthRange > len(timeList)):
monthRange = len(timeList)
monthTime = timeList[0]
for i in range(monthRange):
periodList.append(monthTime)
if monthTime in timeSet:
sortedTimeValDict.setdefault(monthTime, [])
timeUserLenDict.setdefault(monthTime, 0)
reviewUserList = timeValDict[monthTime]
reviewUserSet = set(reviewUserList)
reviewUserSet = reviewUserSet.difference(WSet_oneBiz)
reviewUserList = list(reviewUserSet)
sortedTimeValDict[monthTime] = reviewUserList
timeUserLenDict[monthTime] = len(reviewUserList)
WSet_oneBiz = WSet_oneBiz.union(reviewUserSet)
monthTime = increMonth(monthTime)
WList_oneBiz = list(WSet_oneBiz)
tempWList_oneBiz = list(WSet_oneBiz)
for t in periodList:
for user in tempWList_oneBiz:
uSinceTime = userInfo[user]["sinceTime"]
if (monthDiff(uSinceTime, t)<=0):
timeUserDict_oneBiz.setdefault(t, [])
timeUserDict_oneBiz[t].append(user)
tempWList_oneBiz.remove(user)
return (sortedTimeValDict, periodList, WList_oneBiz, timeUserDict_oneBiz, timeUserLenDict)
###update the userInfo: "reviewTime", "active" for a business
def UpdateUserInfo_oneBiz(userInfo, timeReviewerDict_oneBiz, selectBusiness):
repeatReviewUserSet = set()
for t in timeReviewerDict_oneBiz.keys():
reviewUserList = timeReviewerDict_oneBiz[t]
reviewUserSet = set(reviewUserList)
for u in reviewUserSet:
preActive = userInfo[u]["active"]
if(preActive == 1):
continue
else:
userInfo[u]["active"] = 1
userInfo[u]["reviewTime"] = t
##timeReviewerDict_oneBiz {time:[reviewer]}
def ResetUserInfo_oneBiz(userInfo, timeReviewerDict_oneBiz):
defaultReviewTime = string_toYearMonth('2015-01')
defaultActive = 0
for t in timeReviewerDict_oneBiz.keys():
reviewUserSet = set(timeReviewerDict_oneBiz[t])
for u in reviewUserSet:
userInfo[u]["reviewTime"] = defaultReviewTime
userInfo[u]["active"] = defaultActive
def UpdateTimeReviewer_allBiz(reviewData, selectBusiness, timeReviewerDict_oneBiz):
for t in timeReviewerDict_oneBiz.keys():
reviewUserList = timeReviewerDict_oneBiz[t]
reviewData[selectBusiness][t] = reviewUserList
def ResetTimeReviewer_allBiz(reviewData, selectBusiness, timeReviewerDict_oneBiz):
for t in timeReviewerDict_oneBiz.keys():
reviewUserList = timeReviewerDict_oneBiz[t]
reviewData[selectBusiness][t] = reviewUserList
def compute_oneBiz(userInfo, selectBusiness, reviewData):
timereviewerDict_allBiz = dict(reviewData)
reviewDict_oneBiz = timereviewerDict_allBiz[selectBusiness]
(timeReviewerDict_oneBiz, periodList, WList_oneBiz, timeUserDict_oneBiz, timeUserLenDict) = SortDict_Time(reviewDict_oneBiz, userInfo)
###before permute
UpdateUserInfo_oneBiz(userInfo, timeReviewerDict_oneBiz, selectBusiness)
(LR_coef, LR_intercept) = LR_oneBiz(periodList, userInfo, timereviewerDict_allBiz)
ResetUserInfo_oneBiz(userInfo, timeReviewerDict_oneBiz)
###permuteTime
permute_timeReviewerDict_oneBiz = permuteTime(timeReviewerDict_oneBiz, timeUserDict_oneBiz, periodList, timeUserLenDict)
UpdateUserInfo_oneBiz(userInfo, permute_timeReviewerDict_oneBiz, selectBusiness)
UpdateTimeReviewer_allBiz(timereviewerDict_allBiz, selectBusiness, permute_timeReviewerDict_oneBiz)
(LR_coef2, LR_intercept2) = LR_oneBiz(periodList, userInfo, timereviewerDict_allBiz)
ResetUserInfo_oneBiz(userInfo, permute_timeReviewerDict_oneBiz)
ResetTimeReviewer_allBiz(timereviewerDict_allBiz, selectBusiness, timeReviewerDict_oneBiz)
return (LR_coef, LR_coef2)
def LR_oneBiz(periodList, userInfo, reviewData):
R = 10
Y = [0 for i in range(R+2)]
N = [0 for i in range(R+2)]
feature = []
output = []
activeZeroSum = 0
unactiveZeroSum = 0
positive = 0
negative = 0
totalReviewUserSet = set()
for t in periodList:
#print t
activeUserSet = set()
reviewUserSet = set()
raw_reviewUserSet = set()
###fix bugs: the reviewUserList_oneBiz does not change
for b in reviewData.keys():
if(reviewData[b].has_key(t)):
raw_reviewUserSet = raw_reviewUserSet.union(set(reviewData[b][t]))
reviewUserSet = raw_reviewUserSet
totalReviewUserSet=totalReviewUserSet.union(reviewUserSet)
for u in totalReviewUserSet:
uReviewTime = userInfo[u]["reviewTime"]
uActive = userInfo[u]["active"]
if(uActive == 1):
if (uReviewTime == t):
uActiveFriendSum = activeFriend_Sum(u, userInfo, t)
output.append(uActive)
positive += 1
if(uActiveFriendSum == 0):
activeZeroSum += 1
if uActiveFriendSum > R:
feature.append(R+1)
Y[R+1] += 1
else:
feature.append(uActiveFriendSum)
Y[uActiveFriendSum] += 1
activeUserSet.add(u)
else:
negative += 1
uActiveFriendSum = activeFriend_Sum(u, userInfo, t)
output.append(uActive)
if(uActiveFriendSum == 0):
unactiveZeroSum += 1
if uActiveFriendSum > R:
feature.append(R+1)
N[R+1] += 1
else:
feature.append(uActiveFriendSum)
N[uActiveFriendSum] += 1
totalReviewUserSet = totalReviewUserSet.difference(activeUserSet)
#print "positive %d negative %d"%(positive, negative)
(LR_coef, LR_intercept) = LR_result(feature, output)
return (LR_coef, LR_intercept)
def LR_result(x, y):
#print x
model = LogisticRegression()
x_feature = [[math.log(i+1)] for i in x]
model = model.fit(x_feature, y)
print model.score(x_feature, y)
return (model.coef_, model.intercept_)
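# Hedged sketch (not called anywhere): the cross_val_score import above could
# give a less optimistic estimate than scoring on the training data itself.
# def LR_cv_score(x, y, folds=5):
#     x_feature = [[math.log(i + 1)] for i in x]
#     return cross_val_score(LogisticRegression(), x_feature, y, cv=folds).mean()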
def activeFriend_Sum(user, userInfo, uReviewTime):
friendList = userInfo[user]["friends"]
friendSet = set(friendList)
activeFriendSum = 0
friendLen = len(friendSet)
for f in friendSet:
fActive = userInfo[f]["active"]
if (fActive == 0):
continue
fReviewTime = userInfo[f]["reviewTime"]
if(monthDiff(fReviewTime, uReviewTime)<0):
activeFriendSum += 1
#print "active%d"%activeFriendSum
return activeFriendSum
def compute_oneBiz_helper(args):
return compute_oneBiz(*args)
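# permuteTime (below) shuffles which reviewers are assigned to each month while
# keeping the per-month reviewer counts (timeUserLenDict) and drawing only from
# users whose accounts already existed by that month (timeUserDict_oneBiz).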
def permuteTime(timeReviewerDict_oneBiz, timeUserDict_oneBiz, periodList, timeUserLenDict):
permute_timeReviewerDict_oneBiz = {}
totalSinceUserSet = set()
for t in periodList:
##todo
selectReviewerSum = 0
if timeUserLenDict.has_key(t):
selectReviewerSum = timeUserLenDict[t]
sinceUserSet = set()
if timeUserDict_oneBiz.has_key(t):
sinceUserList = timeUserDict_oneBiz[t]
sinceUserSet = set(sinceUserList)
totalSinceUserSet = totalSinceUserSet.union(sinceUserSet)
selectUserList = randomSelectBusiness(list(totalSinceUserSet), selectReviewerSum)
selectUserSet = set(selectUserList)
permute_timeReviewerDict_oneBiz.setdefault(t, [])
permute_timeReviewerDict_oneBiz[t] = selectUserList
totalSinceUserSet = totalSinceUserSet.difference(selectUserSet)
return permute_timeReviewerDict_oneBiz
def mainFunction():
f1_result = open("coef1_result.txt", "w")
f2_result = open("coef2_result.txt", "w")
(userInfo, timeUserData, userSum, userList) = loadUser()
(reviewData, reviewSum, timeReviewUser) = loadReview()
(reviewList) = filterReviewData(reviewData, reviewSum)
selectBusinessNum = 1
selectBusinessList = randomSelectBusiness(reviewList, selectBusinessNum)
selectBusinessSet = set(selectBusinessList)
beginTime = datetime.datetime.now()
positiveCoef = 0
negativeCoef = 0
results=[]
pool_args = [(userInfo, i, reviewData) for i in selectBusinessSet]
pool = ThreadPool(8)
results = pool.map(compute_oneBiz_helper, pool_args)
# results = []
# for i in range(selectBusinessNum):
# selectBusiness = selectBusinessList[i]
# reviewData_allBiz = dict(reviewData)
# (LR_coef, LR_coef2) = compute_oneBiz(userInfo, selectBusiness, reviewData_allBiz)
# results.append((LR_coef, LR_coef2))
for (LR_coef, LR_coef2) in results:
f1_result.write("%s\n"%LR_coef)
f2_result.write("%s\n"%LR_coef2)
endTime = datetime.datetime.now()
timeIntervals = (endTime-beginTime).seconds
print "time interval %s"%timeIntervals
f1_result.write("time interval %s"%timeIntervals)
f2_result.write("time interval %s"%timeIntervals)
f1_result.close()
f2_result.close()
mainFunction()
| gpl-2.0 |
herilalaina/scikit-learn | sklearn/learning_curve.py | 27 | 15421 | """Utilities to evaluate models with respect to a variable
"""
# Author: Alexander Fabisch <afabisch@informatik.uni-bremen.de>
#
# License: BSD 3 clause
import warnings
import numpy as np
from .base import is_classifier, clone
from .cross_validation import check_cv
from .externals.joblib import Parallel, delayed
from .cross_validation import _safe_split, _score, _fit_and_score
from .metrics.scorer import check_scoring
from .utils import indexable
warnings.warn("This module was deprecated in version 0.18 in favor of the "
"model_selection module into which all the functions are moved."
" This module will be removed in 0.20",
DeprecationWarning)
__all__ = ['learning_curve', 'validation_curve']
def learning_curve(estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0,
error_score='raise'):
"""Learning curve.
.. deprecated:: 0.18
This module will be removed in 0.20.
Use :func:`sklearn.model_selection.learning_curve` instead.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
Note that for classification the number of samples usually has to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass,
:class:`sklearn.model_selection.StratifiedKFold` is used. In all
other cases, :class:`sklearn.model_selection.KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
error_score : 'raise' (default) or numeric
Value to assign to the score if an error occurs in estimator fitting.
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
Numbers of training examples that have been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<sphx_glr_auto_examples_model_selection_plot_learning_curve.py>`
"""
if exploit_incremental_learning and not hasattr(estimator, "partial_fit"):
raise ValueError("An estimator must support the partial_fit interface "
"to exploit incremental learning")
X, y = indexable(X, y)
# Make a list since we will be iterating multiple times over the folds
cv = list(check_cv(cv, X, y, classifier=is_classifier(estimator)))
scorer = check_scoring(estimator, scoring=scoring)
# HACK as long as boolean indices are allowed in cv generators
if cv[0][0].dtype == bool:
new_cv = []
for i in range(len(cv)):
new_cv.append((np.nonzero(cv[i][0])[0], np.nonzero(cv[i][1])[0]))
cv = new_cv
n_max_training_samples = len(cv[0][0])
# Because the lengths of folds can be significantly different, it is
# not guaranteed that we use all of the available training data when we
# use the first 'n_max_training_samples' samples.
train_sizes_abs = _translate_train_sizes(train_sizes,
n_max_training_samples)
n_unique_ticks = train_sizes_abs.shape[0]
if verbose > 0:
print("[learning_curve] Training set sizes: " + str(train_sizes_abs))
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
if exploit_incremental_learning:
classes = np.unique(y) if is_classifier(estimator) else None
out = parallel(delayed(_incremental_fit_estimator)(
clone(estimator), X, y, classes, train, test, train_sizes_abs,
scorer, verbose) for train, test in cv)
else:
out = parallel(delayed(_fit_and_score)(
clone(estimator), X, y, scorer, train[:n_train_samples], test,
verbose, parameters=None, fit_params=None, return_train_score=True,
error_score=error_score)
for train, test in cv for n_train_samples in train_sizes_abs)
out = np.array(out)[:, :2]
n_cv_folds = out.shape[0] // n_unique_ticks
out = out.reshape(n_cv_folds, n_unique_ticks, 2)
out = np.asarray(out).transpose((2, 1, 0))
return train_sizes_abs, out[0], out[1]
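# Illustrative usage sketch (not part of the original module): a minimal call to
# learning_curve on a small synthetic problem.  The estimator and dataset below
# are arbitrary demonstration choices, not anything mandated by this module.
def _example_learning_curve_usage():
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    X, y = make_classification(n_samples=120, n_features=10, random_state=0)
    sizes, train_scores, test_scores = learning_curve(
        LogisticRegression(), X, y,
        train_sizes=np.linspace(0.2, 1.0, 4), cv=3)
    # train_scores/test_scores have shape (n_ticks, n_cv_folds); average folds.
    return sizes, train_scores.mean(axis=1), test_scores.mean(axis=1)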
def _translate_train_sizes(train_sizes, n_max_training_samples):
"""Determine absolute sizes of training subsets and validate 'train_sizes'.
Examples:
_translate_train_sizes([0.5, 1.0], 10) -> [5, 10]
_translate_train_sizes([5, 10], 10) -> [5, 10]
Parameters
----------
train_sizes : array-like, shape (n_ticks,), dtype float or int
Numbers of training examples that will be used to generate the
learning curve. If the dtype is float, it is regarded as a
fraction of 'n_max_training_samples', i.e. it has to be within (0, 1].
n_max_training_samples : int
Maximum number of training samples (upper bound of 'train_sizes').
Returns
-------
train_sizes_abs : array, shape (n_unique_ticks,), dtype int
Numbers of training examples that will be used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
"""
train_sizes_abs = np.asarray(train_sizes)
n_ticks = train_sizes_abs.shape[0]
n_min_required_samples = np.min(train_sizes_abs)
n_max_required_samples = np.max(train_sizes_abs)
if np.issubdtype(train_sizes_abs.dtype, np.floating):
if n_min_required_samples <= 0.0 or n_max_required_samples > 1.0:
raise ValueError("train_sizes has been interpreted as fractions "
"of the maximum number of training samples and "
"must be within (0, 1], but is within [%f, %f]."
% (n_min_required_samples,
n_max_required_samples))
train_sizes_abs = (train_sizes_abs * n_max_training_samples).astype(
dtype=np.int, copy=False)
train_sizes_abs = np.clip(train_sizes_abs, 1,
n_max_training_samples)
else:
if (n_min_required_samples <= 0 or
n_max_required_samples > n_max_training_samples):
raise ValueError("train_sizes has been interpreted as absolute "
"numbers of training samples and must be within "
"(0, %d], but is within [%d, %d]."
% (n_max_training_samples,
n_min_required_samples,
n_max_required_samples))
train_sizes_abs = np.unique(train_sizes_abs)
if n_ticks > train_sizes_abs.shape[0]:
warnings.warn("Removed duplicate entries from 'train_sizes'. Number "
"of ticks will be less than the size of "
"'train_sizes' (%d instead of %d)."
% (train_sizes_abs.shape[0], n_ticks), RuntimeWarning)
return train_sizes_abs
def _incremental_fit_estimator(estimator, X, y, classes, train, test,
train_sizes, scorer, verbose):
"""Train estimator on training subsets incrementally and compute scores."""
train_scores, test_scores = [], []
partitions = zip(train_sizes, np.split(train, train_sizes)[:-1])
for n_train_samples, partial_train in partitions:
train_subset = train[:n_train_samples]
X_train, y_train = _safe_split(estimator, X, y, train_subset)
X_partial_train, y_partial_train = _safe_split(estimator, X, y,
partial_train)
X_test, y_test = _safe_split(estimator, X, y, test, train_subset)
if y_partial_train is None:
estimator.partial_fit(X_partial_train, classes=classes)
else:
estimator.partial_fit(X_partial_train, y_partial_train,
classes=classes)
train_scores.append(_score(estimator, X_train, y_train, scorer))
test_scores.append(_score(estimator, X_test, y_test, scorer))
return np.array((train_scores, test_scores)).T
def validation_curve(estimator, X, y, param_name, param_range, cv=None,
scoring=None, n_jobs=1, pre_dispatch="all", verbose=0):
"""Validation curve.
.. deprecated:: 0.18
This module will be removed in 0.20.
Use :func:`sklearn.model_selection.validation_curve` instead.
Determine training and test scores for varying parameter values.
Compute scores for an estimator with different values of a specified
parameter. This is similar to grid search with one parameter. However, this
will also compute training scores and is merely a utility for plotting the
results.
Read more in the :ref:`User Guide <validation_curve>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
param_name : string
Name of the parameter that will be varied.
param_range : array-like, shape (n_values,)
The values of the parameter that will be evaluated.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass,
:class:`sklearn.model_selection.StratifiedKFold` is used. In all
other cases, :class:`sklearn.model_selection.KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See
:ref:`examples/model_selection/plot_validation_curve.py
<sphx_glr_auto_examples_model_selection_plot_validation_curve.py>`
"""
X, y = indexable(X, y)
cv = check_cv(cv, X, y, classifier=is_classifier(estimator))
scorer = check_scoring(estimator, scoring=scoring)
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
out = parallel(delayed(_fit_and_score)(
clone(estimator), X, y, scorer, train, test, verbose,
parameters={param_name: v}, fit_params=None, return_train_score=True)
for train, test in cv for v in param_range)
out = np.asarray(out)[:, :2]
n_params = len(param_range)
n_cv_folds = out.shape[0] // n_params
out = out.reshape(n_cv_folds, n_params, 2).transpose((2, 1, 0))
return out[0], out[1]
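# Illustrative usage sketch (not part of the original module): sweeping one SVC
# hyper-parameter with validation_curve.  The parameter grid is an arbitrary
# demonstration choice.
def _example_validation_curve_usage():
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC
    X, y = make_classification(n_samples=100, random_state=0)
    param_range = np.logspace(-3, 2, 6)
    train_scores, test_scores = validation_curve(
        SVC(), X, y, param_name="gamma", param_range=param_range, cv=3)
    # Rows index the 6 gamma values, columns the 3 CV folds.
    return param_range, train_scores.mean(axis=1), test_scores.mean(axis=1)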
| bsd-3-clause |
aetilley/scikit-learn | sklearn/datasets/samples_generator.py | 45 | 56433 | """
Generate samples of synthetic data sets.
"""
# Authors: B. Thirion, G. Varoquaux, A. Gramfort, V. Michel, O. Grisel,
# G. Louppe, J. Nothman
# License: BSD 3 clause
import numbers
import warnings
import array
import numpy as np
from scipy import linalg
import scipy.sparse as sp
from ..preprocessing import MultiLabelBinarizer
from ..utils import check_array, check_random_state
from ..utils import shuffle as util_shuffle
from ..utils.fixes import astype
from ..utils.random import sample_without_replacement
from ..externals import six
map = six.moves.map
zip = six.moves.zip
def _generate_hypercube(samples, dimensions, rng):
"""Returns distinct binary samples of length dimensions
"""
if dimensions > 30:
return np.hstack([_generate_hypercube(samples, dimensions - 30, rng),
_generate_hypercube(samples, 30, rng)])
out = astype(sample_without_replacement(2 ** dimensions, samples,
random_state=rng),
dtype='>u4', copy=False)
out = np.unpackbits(out.view('>u1')).reshape((-1, 32))[:, -dimensions:]
return out
def make_classification(n_samples=100, n_features=20, n_informative=2,
n_redundant=2, n_repeated=0, n_classes=2,
n_clusters_per_class=2, weights=None, flip_y=0.01,
class_sep=1.0, hypercube=True, shift=0.0, scale=1.0,
shuffle=True, random_state=None):
"""Generate a random n-class classification problem.
This initially creates clusters of points normally distributed (std=1)
about vertices of a `2 * class_sep`-sided hypercube, and assigns an equal
number of clusters to each class. It introduces interdependence between
these features and adds various types of further noise to the data.
Prior to shuffling, `X` stacks a number of these primary "informative"
features, "redundant" linear combinations of these, "repeated" duplicates
of sampled features, and arbitrary noise for any remaining features.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=20)
The total number of features. These comprise `n_informative`
informative features, `n_redundant` redundant features, `n_repeated`
duplicated features and `n_features-n_informative-n_redundant-
n_repeated` useless features drawn at random.
n_informative : int, optional (default=2)
The number of informative features. Each class is composed of a number
of gaussian clusters each located around the vertices of a hypercube
in a subspace of dimension `n_informative`. For each cluster,
informative features are drawn independently from N(0, 1) and then
randomly linearly combined within each cluster in order to add
covariance. The clusters are then placed on the vertices of the
hypercube.
n_redundant : int, optional (default=2)
The number of redundant features. These features are generated as
random linear combinations of the informative features.
n_repeated : int, optional (default=0)
The number of duplicated features, drawn randomly from the informative
and the redundant features.
n_classes : int, optional (default=2)
The number of classes (or labels) of the classification problem.
n_clusters_per_class : int, optional (default=2)
The number of clusters per class.
weights : list of floats or None (default=None)
The proportions of samples assigned to each class. If None, then
classes are balanced. Note that if `len(weights) == n_classes - 1`,
then the last class weight is automatically inferred.
More than `n_samples` samples may be returned if the sum of `weights`
exceeds 1.
flip_y : float, optional (default=0.01)
The fraction of samples whose class are randomly exchanged.
class_sep : float, optional (default=1.0)
The factor multiplying the hypercube dimension.
hypercube : boolean, optional (default=True)
If True, the clusters are put on the vertices of a hypercube. If
False, the clusters are put on the vertices of a random polytope.
shift : float, array of shape [n_features] or None, optional (default=0.0)
Shift features by the specified value. If None, then features
are shifted by a random value drawn in [-class_sep, class_sep].
scale : float, array of shape [n_features] or None, optional (default=1.0)
Multiply features by the specified value. If None, then features
are scaled by a random value drawn in [1, 100]. Note that scaling
happens after shifting.
shuffle : boolean, optional (default=True)
Shuffle the samples and the features.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The generated samples.
y : array of shape [n_samples]
The integer labels for class membership of each sample.
Notes
-----
The algorithm is adapted from Guyon [1] and was designed to generate
the "Madelon" dataset.
References
----------
.. [1] I. Guyon, "Design of experiments for the NIPS 2003 variable
selection benchmark", 2003.
See also
--------
make_blobs: simplified variant
make_multilabel_classification: unrelated generator for multilabel tasks
"""
generator = check_random_state(random_state)
# Count features, clusters and samples
if n_informative + n_redundant + n_repeated > n_features:
raise ValueError("Number of informative, redundant and repeated "
"features must sum to less than the number of total"
" features")
if 2 ** n_informative < n_classes * n_clusters_per_class:
raise ValueError("n_classes * n_clusters_per_class must"
" be smaller or equal 2 ** n_informative")
if weights and len(weights) not in [n_classes, n_classes - 1]:
raise ValueError("Weights specified but incompatible with number "
"of classes.")
n_useless = n_features - n_informative - n_redundant - n_repeated
n_clusters = n_classes * n_clusters_per_class
if weights and len(weights) == (n_classes - 1):
weights.append(1.0 - sum(weights))
if weights is None:
weights = [1.0 / n_classes] * n_classes
weights[-1] = 1.0 - sum(weights[:-1])
# Distribute samples among clusters by weight
n_samples_per_cluster = []
for k in range(n_clusters):
n_samples_per_cluster.append(int(n_samples * weights[k % n_classes]
/ n_clusters_per_class))
for i in range(n_samples - sum(n_samples_per_cluster)):
n_samples_per_cluster[i % n_clusters] += 1
# Initialize X and y
X = np.zeros((n_samples, n_features))
y = np.zeros(n_samples, dtype=np.int)
# Build the polytope whose vertices become cluster centroids
centroids = _generate_hypercube(n_clusters, n_informative,
generator).astype(float)
centroids *= 2 * class_sep
centroids -= class_sep
if not hypercube:
centroids *= generator.rand(n_clusters, 1)
centroids *= generator.rand(1, n_informative)
# Initially draw informative features from the standard normal
X[:, :n_informative] = generator.randn(n_samples, n_informative)
# Create each cluster; a variant of make_blobs
stop = 0
for k, centroid in enumerate(centroids):
start, stop = stop, stop + n_samples_per_cluster[k]
y[start:stop] = k % n_classes # assign labels
X_k = X[start:stop, :n_informative] # slice a view of the cluster
A = 2 * generator.rand(n_informative, n_informative) - 1
X_k[...] = np.dot(X_k, A) # introduce random covariance
X_k += centroid # shift the cluster to a vertex
# Create redundant features
if n_redundant > 0:
B = 2 * generator.rand(n_informative, n_redundant) - 1
X[:, n_informative:n_informative + n_redundant] = \
np.dot(X[:, :n_informative], B)
# Repeat some features
if n_repeated > 0:
n = n_informative + n_redundant
indices = ((n - 1) * generator.rand(n_repeated) + 0.5).astype(np.intp)
X[:, n:n + n_repeated] = X[:, indices]
# Fill useless features
if n_useless > 0:
X[:, -n_useless:] = generator.randn(n_samples, n_useless)
# Randomly replace labels
if flip_y >= 0.0:
flip_mask = generator.rand(n_samples) < flip_y
y[flip_mask] = generator.randint(n_classes, size=flip_mask.sum())
# Randomly shift and scale
if shift is None:
shift = (2 * generator.rand(n_features) - 1) * class_sep
X += shift
if scale is None:
scale = 1 + 100 * generator.rand(n_features)
X *= scale
if shuffle:
# Randomly permute samples
X, y = util_shuffle(X, y, random_state=generator)
# Randomly permute features
indices = np.arange(n_features)
generator.shuffle(indices)
X[:, :] = X[:, indices]
return X, y
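# Illustrative usage sketch (not part of the original module): an arbitrary
# three-class problem drawn with make_classification.
def _example_make_classification():
    X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                               n_redundant=2, n_classes=3,
                               n_clusters_per_class=1, random_state=0)
    # 20 columns: 4 informative, 2 redundant, 14 useless noise features.
    return X.shape, np.bincount(y)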
def make_multilabel_classification(n_samples=100, n_features=20, n_classes=5,
n_labels=2, length=50, allow_unlabeled=True,
sparse=False, return_indicator=False,
return_distributions=False,
random_state=None):
"""Generate a random multilabel classification problem.
For each sample, the generative process is:
- pick the number of labels: n ~ Poisson(n_labels)
- n times, choose a class c: c ~ Multinomial(theta)
- pick the document length: k ~ Poisson(length)
- k times, choose a word: w ~ Multinomial(theta_c)
In the above process, rejection sampling is used to make sure that
n is never zero or more than `n_classes`, and that the document length
is never zero. Likewise, we reject classes which have already been chosen.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=20)
The total number of features.
n_classes : int, optional (default=5)
The number of classes of the classification problem.
n_labels : int, optional (default=2)
The average number of labels per instance. More precisely, the number
of labels per sample is drawn from a Poisson distribution with
``n_labels`` as its expected value, but samples are bounded (using
rejection sampling) by ``n_classes``, and must be nonzero if
``allow_unlabeled`` is False.
length : int, optional (default=50)
The sum of the features (number of words if documents) is drawn from
a Poisson distribution with this expected value.
allow_unlabeled : bool, optional (default=True)
If ``True``, some instances might not belong to any class.
sparse : bool, optional (default=False)
If ``True``, return a sparse feature matrix
return_indicator : bool, optional (default=False),
If ``True``, return ``Y`` in the binary indicator format, else
return a tuple of lists of labels.
return_distributions : bool, optional (default=False)
If ``True``, return the prior class probability and conditional
probabilities of features given classes, from which the data was
drawn.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array or sparse CSR matrix of shape [n_samples, n_features]
The generated samples.
Y : tuple of lists or array of shape [n_samples, n_classes]
The label sets.
p_c : array, shape [n_classes]
The probability of each class being drawn. Only returned if
``return_distributions=True``.
p_w_c : array, shape [n_features, n_classes]
The probability of each feature being drawn given each class.
Only returned if ``return_distributions=True``.
"""
generator = check_random_state(random_state)
p_c = generator.rand(n_classes)
p_c /= p_c.sum()
cumulative_p_c = np.cumsum(p_c)
p_w_c = generator.rand(n_features, n_classes)
p_w_c /= np.sum(p_w_c, axis=0)
def sample_example():
_, n_classes = p_w_c.shape
# pick a nonzero number of labels per document by rejection sampling
y_size = n_classes + 1
while (not allow_unlabeled and y_size == 0) or y_size > n_classes:
y_size = generator.poisson(n_labels)
# pick n classes
y = set()
while len(y) != y_size:
# pick a class with probability P(c)
c = np.searchsorted(cumulative_p_c,
generator.rand(y_size - len(y)))
y.update(c)
y = list(y)
# pick a non-zero document length by rejection sampling
n_words = 0
while n_words == 0:
n_words = generator.poisson(length)
# generate a document of length n_words
if len(y) == 0:
# if sample does not belong to any class, generate noise word
words = generator.randint(n_features, size=n_words)
return words, y
# sample words with replacement from selected classes
cumulative_p_w_sample = p_w_c.take(y, axis=1).sum(axis=1).cumsum()
cumulative_p_w_sample /= cumulative_p_w_sample[-1]
words = np.searchsorted(cumulative_p_w_sample, generator.rand(n_words))
return words, y
X_indices = array.array('i')
X_indptr = array.array('i', [0])
Y = []
for i in range(n_samples):
words, y = sample_example()
X_indices.extend(words)
X_indptr.append(len(X_indices))
Y.append(y)
X_data = np.ones(len(X_indices), dtype=np.float64)
X = sp.csr_matrix((X_data, X_indices, X_indptr),
shape=(n_samples, n_features))
X.sum_duplicates()
if not sparse:
X = X.toarray()
if return_indicator:
lb = MultiLabelBinarizer()
Y = lb.fit([range(n_classes)]).transform(Y)
else:
warnings.warn('Support for the sequence of sequences multilabel '
'representation is being deprecated and replaced with '
'a sparse indicator matrix. '
'return_indicator will default to True from version '
'0.17.',
DeprecationWarning)
if return_distributions:
return X, Y, p_c, p_w_c
return X, Y
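# Illustrative usage sketch (not part of the original module): a small multilabel
# problem returned as a binary indicator matrix (the format later versions use by
# default).
def _example_make_multilabel_classification():
    X, Y = make_multilabel_classification(n_samples=50, n_features=15,
                                          n_classes=4, n_labels=2,
                                          return_indicator=True,
                                          random_state=0)
    # Y is a (50, 4) 0/1 matrix; each row marks the labels drawn for that sample.
    return X.shape, Y.shape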
def make_hastie_10_2(n_samples=12000, random_state=None):
"""Generates data for binary classification used in
Hastie et al. 2009, Example 10.2.
The ten features are standard independent Gaussian and
the target ``y`` is defined by::
y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=12000)
The number of samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, 10]
The input samples.
y : array of shape [n_samples]
The output values.
References
----------
.. [1] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical
Learning Ed. 2", Springer, 2009.
See also
--------
make_gaussian_quantiles: a generalization of this dataset approach
"""
rs = check_random_state(random_state)
shape = (n_samples, 10)
X = rs.normal(size=shape).reshape(shape)
y = ((X ** 2.0).sum(axis=1) > 9.34).astype(np.float64)
y[y == 0.0] = -1.0
return X, y
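# Illustrative usage sketch (not part of the original module): ten standard-normal
# features with labels in {-1, +1}.
def _example_make_hastie_10_2():
    X, y = make_hastie_10_2(n_samples=500, random_state=0)
    return X.shape, np.unique(y)  # (500, 10) and array([-1., 1.])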
def make_regression(n_samples=100, n_features=100, n_informative=10,
n_targets=1, bias=0.0, effective_rank=None,
tail_strength=0.5, noise=0.0, shuffle=True, coef=False,
random_state=None):
"""Generate a random regression problem.
The input set can either be well conditioned (by default) or have a low
rank-fat tail singular profile. See :func:`make_low_rank_matrix` for
more details.
The output is generated by applying a (potentially biased) random linear
regression model with `n_informative` nonzero regressors to the previously
generated input and some gaussian centered noise with some adjustable
scale.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=100)
The number of features.
n_informative : int, optional (default=10)
The number of informative features, i.e., the number of features used
to build the linear model used to generate the output.
n_targets : int, optional (default=1)
The number of regression targets, i.e., the dimension of the y output
vector associated with a sample. By default, the output is a scalar.
bias : float, optional (default=0.0)
The bias term in the underlying linear model.
effective_rank : int or None, optional (default=None)
if not None:
The approximate number of singular vectors required to explain most
of the input data by linear combinations. Using this kind of
singular spectrum in the input allows the generator to reproduce
the correlations often observed in practice.
if None:
The input set is well conditioned, centered and gaussian with
unit variance.
tail_strength : float between 0.0 and 1.0, optional (default=0.5)
The relative importance of the fat noisy tail of the singular values
profile if `effective_rank` is not None.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise applied to the output.
shuffle : boolean, optional (default=True)
Shuffle the samples and the features.
coef : boolean, optional (default=False)
If True, the coefficients of the underlying linear model are returned.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The input samples.
y : array of shape [n_samples] or [n_samples, n_targets]
The output values.
coef : array of shape [n_features] or [n_features, n_targets], optional
The coefficient of the underlying linear model. It is returned only if
coef is True.
"""
n_informative = min(n_features, n_informative)
generator = check_random_state(random_state)
if effective_rank is None:
# Randomly generate a well conditioned input set
X = generator.randn(n_samples, n_features)
else:
# Randomly generate a low rank, fat tail input set
X = make_low_rank_matrix(n_samples=n_samples,
n_features=n_features,
effective_rank=effective_rank,
tail_strength=tail_strength,
random_state=generator)
# Generate a ground truth model with only n_informative features being non
# zeros (the other features are not correlated to y and should be ignored
# by a sparsifying regularizers such as L1 or elastic net)
ground_truth = np.zeros((n_features, n_targets))
ground_truth[:n_informative, :] = 100 * generator.rand(n_informative,
n_targets)
y = np.dot(X, ground_truth) + bias
# Add noise
if noise > 0.0:
y += generator.normal(scale=noise, size=y.shape)
# Randomly permute samples and features
if shuffle:
X, y = util_shuffle(X, y, random_state=generator)
indices = np.arange(n_features)
generator.shuffle(indices)
X[:, :] = X[:, indices]
ground_truth = ground_truth[indices]
y = np.squeeze(y)
if coef:
return X, y, np.squeeze(ground_truth)
else:
return X, y
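# Illustrative usage sketch (not part of the original module): requesting the true
# coefficient vector alongside the data (coef=True).
def _example_make_regression():
    X, y, coef = make_regression(n_samples=150, n_features=30, n_informative=5,
                                 noise=0.5, coef=True, random_state=0)
    # Only 5 of the 30 coefficients are non-zero by construction.
    return X.shape, y.shape, int(np.sum(coef != 0))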
def make_circles(n_samples=100, shuffle=True, noise=None, random_state=None,
factor=.8):
"""Make a large circle containing a smaller circle in 2d.
A simple toy dataset to visualize clustering and classification
algorithms.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The total number of points generated.
shuffle : bool, optional (default=True)
Whether to shuffle the samples.
noise : double or None (default=None)
Standard deviation of Gaussian noise added to the data.
factor : double < 1 (default=.8)
Scale factor between inner and outer circle.
Returns
-------
X : array of shape [n_samples, 2]
The generated samples.
y : array of shape [n_samples]
The integer labels (0 or 1) for class membership of each sample.
"""
if factor > 1 or factor < 0:
raise ValueError("'factor' has to be between 0 and 1.")
generator = check_random_state(random_state)
# so as not to have the first point = last point, we add one and then
# remove it.
linspace = np.linspace(0, 2 * np.pi, n_samples // 2 + 1)[:-1]
outer_circ_x = np.cos(linspace)
outer_circ_y = np.sin(linspace)
inner_circ_x = outer_circ_x * factor
inner_circ_y = outer_circ_y * factor
X = np.vstack((np.append(outer_circ_x, inner_circ_x),
np.append(outer_circ_y, inner_circ_y))).T
y = np.hstack([np.zeros(n_samples // 2, dtype=np.intp),
np.ones(n_samples // 2, dtype=np.intp)])
if shuffle:
X, y = util_shuffle(X, y, random_state=generator)
if noise is not None:
X += generator.normal(scale=noise, size=X.shape)
return X, y
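# Illustrative usage sketch (not part of the original module): two concentric
# circles, a classic non-linearly-separable toy problem.
def _example_make_circles():
    X, y = make_circles(n_samples=100, noise=0.05, factor=0.5, random_state=0)
    return X.shape, np.bincount(y)  # 50 points on each circle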
def make_moons(n_samples=100, shuffle=True, noise=None, random_state=None):
"""Make two interleaving half circles
A simple toy dataset to visualize clustering and classification
algorithms.
Parameters
----------
n_samples : int, optional (default=100)
The total number of points generated.
shuffle : bool, optional (default=True)
Whether to shuffle the samples.
noise : double or None (default=None)
Standard deviation of Gaussian noise added to the data.
Read more in the :ref:`User Guide <sample_generators>`.
Returns
-------
X : array of shape [n_samples, 2]
The generated samples.
y : array of shape [n_samples]
The integer labels (0 or 1) for class membership of each sample.
"""
n_samples_out = n_samples // 2
n_samples_in = n_samples - n_samples_out
generator = check_random_state(random_state)
outer_circ_x = np.cos(np.linspace(0, np.pi, n_samples_out))
outer_circ_y = np.sin(np.linspace(0, np.pi, n_samples_out))
inner_circ_x = 1 - np.cos(np.linspace(0, np.pi, n_samples_in))
inner_circ_y = 1 - np.sin(np.linspace(0, np.pi, n_samples_in)) - .5
X = np.vstack((np.append(outer_circ_x, inner_circ_x),
np.append(outer_circ_y, inner_circ_y))).T
y = np.hstack([np.zeros(n_samples_in, dtype=np.intp),
np.ones(n_samples_out, dtype=np.intp)])
if shuffle:
X, y = util_shuffle(X, y, random_state=generator)
if noise is not None:
X += generator.normal(scale=noise, size=X.shape)
return X, y
def make_blobs(n_samples=100, n_features=2, centers=3, cluster_std=1.0,
center_box=(-10.0, 10.0), shuffle=True, random_state=None):
"""Generate isotropic Gaussian blobs for clustering.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The total number of points equally divided among clusters.
n_features : int, optional (default=2)
The number of features for each sample.
centers : int or array of shape [n_centers, n_features], optional
(default=3)
The number of centers to generate, or the fixed center locations.
cluster_std : float or sequence of floats, optional (default=1.0)
The standard deviation of the clusters.
center_box : pair of floats (min, max), optional (default=(-10.0, 10.0))
The bounding box for each cluster center when centers are
generated at random.
shuffle : boolean, optional (default=True)
Shuffle the samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The generated samples.
y : array of shape [n_samples]
The integer labels for cluster membership of each sample.
Examples
--------
>>> from sklearn.datasets.samples_generator import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
See also
--------
make_classification: a more intricate variant
"""
generator = check_random_state(random_state)
if isinstance(centers, numbers.Integral):
centers = generator.uniform(center_box[0], center_box[1],
size=(centers, n_features))
else:
centers = check_array(centers)
n_features = centers.shape[1]
if isinstance(cluster_std, numbers.Real):
cluster_std = np.ones(len(centers)) * cluster_std
X = []
y = []
n_centers = centers.shape[0]
n_samples_per_center = [int(n_samples // n_centers)] * n_centers
for i in range(n_samples % n_centers):
n_samples_per_center[i] += 1
for i, (n, std) in enumerate(zip(n_samples_per_center, cluster_std)):
X.append(centers[i] + generator.normal(scale=std,
size=(n, n_features)))
y += [i] * n
X = np.concatenate(X)
y = np.array(y)
if shuffle:
indices = np.arange(n_samples)
generator.shuffle(indices)
X = X[indices]
y = y[indices]
return X, y
def make_friedman1(n_samples=100, n_features=10, noise=0.0, random_state=None):
"""Generate the "Friedman \#1" regression problem
This dataset is described in Friedman [1] and Breiman [2].
Inputs `X` are independent features uniformly distributed on the interval
[0, 1]. The output `y` is created according to the formula::
y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 \
+ 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1).
Out of the `n_features` features, only 5 are actually used to compute
`y`. The remaining features are independent of `y`.
The number of features has to be >= 5.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=10)
The number of features. Should be at least 5.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise applied to the output.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The input samples.
y : array of shape [n_samples]
The output values.
References
----------
.. [1] J. Friedman, "Multivariate adaptive regression splines", The Annals
of Statistics 19 (1), pages 1-67, 1991.
.. [2] L. Breiman, "Bagging predictors", Machine Learning 24,
pages 123-140, 1996.
"""
if n_features < 5:
raise ValueError("n_features must be at least five.")
generator = check_random_state(random_state)
X = generator.rand(n_samples, n_features)
y = 10 * np.sin(np.pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 \
+ 10 * X[:, 3] + 5 * X[:, 4] + noise * generator.randn(n_samples)
return X, y
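# Illustrative usage sketch (not part of the original module): only the first five
# columns influence y; the remaining columns are noise features.
def _example_make_friedman1():
    X, y = make_friedman1(n_samples=200, n_features=8, noise=1.0, random_state=0)
    return X.shape, y.shape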
def make_friedman2(n_samples=100, noise=0.0, random_state=None):
"""Generate the "Friedman \#2" regression problem
This dataset is described in Friedman [1] and Breiman [2].
Inputs `X` are 4 independent features uniformly distributed on the
intervals::
0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output `y` is created according to the formula::
y(X) = (X[:, 0] ** 2 + (X[:, 1] * X[:, 2] \
- 1 / (X[:, 1] * X[:, 3])) ** 2) ** 0.5 + noise * N(0, 1).
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise applied to the output.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, 4]
The input samples.
y : array of shape [n_samples]
The output values.
References
----------
.. [1] J. Friedman, "Multivariate adaptive regression splines", The Annals
of Statistics 19 (1), pages 1-67, 1991.
.. [2] L. Breiman, "Bagging predictors", Machine Learning 24,
pages 123-140, 1996.
"""
generator = check_random_state(random_state)
X = generator.rand(n_samples, 4)
X[:, 0] *= 100
X[:, 1] *= 520 * np.pi
X[:, 1] += 40 * np.pi
X[:, 3] *= 10
X[:, 3] += 1
y = (X[:, 0] ** 2
+ (X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) ** 2) ** 0.5 \
+ noise * generator.randn(n_samples)
return X, y
def make_friedman3(n_samples=100, noise=0.0, random_state=None):
"""Generate the "Friedman \#3" regression problem
This dataset is described in Friedman [1] and Breiman [2].
Inputs `X` are 4 independent features uniformly distributed on the
intervals::
0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output `y` is created according to the formula::
y(X) = arctan((X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) \
/ X[:, 0]) + noise * N(0, 1).
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise applied to the output.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, 4]
The input samples.
y : array of shape [n_samples]
The output values.
References
----------
.. [1] J. Friedman, "Multivariate adaptive regression splines", The Annals
of Statistics 19 (1), pages 1-67, 1991.
.. [2] L. Breiman, "Bagging predictors", Machine Learning 24,
pages 123-140, 1996.
"""
generator = check_random_state(random_state)
X = generator.rand(n_samples, 4)
X[:, 0] *= 100
X[:, 1] *= 520 * np.pi
X[:, 1] += 40 * np.pi
X[:, 3] *= 10
X[:, 3] += 1
y = np.arctan((X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) / X[:, 0]) \
+ noise * generator.randn(n_samples)
return X, y
def make_low_rank_matrix(n_samples=100, n_features=100, effective_rank=10,
tail_strength=0.5, random_state=None):
"""Generate a mostly low rank matrix with bell-shaped singular values
Most of the variance can be explained by a bell-shaped curve of width
effective_rank: the low rank part of the singular values profile is::
(1 - tail_strength) * exp(-1.0 * (i / effective_rank) ** 2)
The remaining singular values' tail is fat, decreasing as::
tail_strength * exp(-0.1 * i / effective_rank).
The low rank part of the profile can be considered the structured
signal part of the data while the tail can be considered the noisy
part of the data that cannot be summarized by a low number of linear
components (singular vectors).
This kind of singular profile is often seen in practice, for instance:
- gray level pictures of faces
- TF-IDF vectors of text documents crawled from the web
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=100)
The number of features.
effective_rank : int, optional (default=10)
The approximate number of singular vectors required to explain most of
the data by linear combinations.
tail_strength : float between 0.0 and 1.0, optional (default=0.5)
The relative importance of the fat noisy tail of the singular values
profile.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The matrix.
"""
generator = check_random_state(random_state)
n = min(n_samples, n_features)
# Random (ortho normal) vectors
u, _ = linalg.qr(generator.randn(n_samples, n), mode='economic')
v, _ = linalg.qr(generator.randn(n_features, n), mode='economic')
# Index of the singular values
singular_ind = np.arange(n, dtype=np.float64)
# Build the singular profile by assembling signal and noise components
low_rank = ((1 - tail_strength) *
np.exp(-1.0 * (singular_ind / effective_rank) ** 2))
tail = tail_strength * np.exp(-0.1 * singular_ind / effective_rank)
s = np.identity(n) * (low_rank + tail)
return np.dot(np.dot(u, s), v.T)
def make_sparse_coded_signal(n_samples, n_components, n_features,
n_nonzero_coefs, random_state=None):
"""Generate a signal as a sparse combination of dictionary elements.
Returns a matrix Y = DX, such as D is (n_features, n_components),
X is (n_components, n_samples) and each column of X has exactly
n_nonzero_coefs non-zero elements.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int
number of samples to generate
n_components: int,
number of components in the dictionary
n_features : int
number of features of the dataset to generate
n_nonzero_coefs : int
number of active (non-zero) coefficients in each sample
random_state: int or RandomState instance, optional (default=None)
seed used by the pseudo random number generator
Returns
-------
data: array of shape [n_features, n_samples]
The encoded signal (Y).
dictionary: array of shape [n_features, n_components]
The dictionary with normalized components (D).
code: array of shape [n_components, n_samples]
The sparse code such that each column of this matrix has exactly
n_nonzero_coefs non-zero items (X).
"""
generator = check_random_state(random_state)
# generate dictionary
D = generator.randn(n_features, n_components)
D /= np.sqrt(np.sum((D ** 2), axis=0))
# generate code
X = np.zeros((n_components, n_samples))
for i in range(n_samples):
idx = np.arange(n_components)
generator.shuffle(idx)
idx = idx[:n_nonzero_coefs]
X[idx, i] = generator.randn(n_nonzero_coefs)
# encode signal
Y = np.dot(D, X)
return map(np.squeeze, (Y, D, X))
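# Illustrative usage sketch (not part of the original module): the returned signal
# Y equals np.dot(D, code), with exactly n_nonzero_coefs active atoms per column.
def _example_make_sparse_coded_signal():
    Y, D, code = make_sparse_coded_signal(n_samples=5, n_components=12,
                                          n_features=8, n_nonzero_coefs=3,
                                          random_state=0)
    assert np.allclose(Y, np.dot(D, code))
    return Y.shape, D.shape, code.shape  # (8, 5), (8, 12), (12, 5)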
def make_sparse_uncorrelated(n_samples=100, n_features=10, random_state=None):
"""Generate a random regression problem with sparse uncorrelated design
This dataset is described in Celeux et al. [1] as::
X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
Only the first 4 features are informative. The remaining features are
useless.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of samples.
n_features : int, optional (default=10)
The number of features.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The input samples.
y : array of shape [n_samples]
The output values.
References
----------
.. [1] G. Celeux, M. El Anbari, J.-M. Marin, C. P. Robert,
"Regularization in regression: comparing Bayesian and frequentist
methods in a poorly informative situation", 2009.
"""
generator = check_random_state(random_state)
X = generator.normal(loc=0, scale=1, size=(n_samples, n_features))
y = generator.normal(loc=(X[:, 0] +
2 * X[:, 1] -
2 * X[:, 2] -
1.5 * X[:, 3]), scale=np.ones(n_samples))
return X, y
def make_spd_matrix(n_dim, random_state=None):
"""Generate a random symmetric, positive-definite matrix.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_dim : int
The matrix dimension.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_dim, n_dim]
The random symmetric, positive-definite matrix.
See also
--------
make_sparse_spd_matrix
"""
generator = check_random_state(random_state)
A = generator.rand(n_dim, n_dim)
U, s, V = linalg.svd(np.dot(A.T, A))
X = np.dot(np.dot(U, 1.0 + np.diag(generator.rand(n_dim))), V)
return X
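# Illustrative usage sketch (not part of the original module): the result is
# symmetric and positive-definite (all eigenvalues strictly positive).
def _example_make_spd_matrix():
    M = make_spd_matrix(n_dim=4, random_state=0)
    return np.allclose(M, M.T), bool(np.all(linalg.eigvalsh(M) > 0))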
def make_sparse_spd_matrix(dim=1, alpha=0.95, norm_diag=False,
smallest_coef=.1, largest_coef=.9,
random_state=None):
"""Generate a sparse symmetric definite positive matrix.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
dim: integer, optional (default=1)
The size of the random matrix to generate.
alpha: float between 0 and 1, optional (default=0.95)
The probability that a coefficient is non zero (see notes).
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
largest_coef : float between 0 and 1, optional (default=0.9)
The value of the largest coefficient.
smallest_coef : float between 0 and 1, optional (default=0.1)
The value of the smallest coefficient.
norm_diag : boolean, optional (default=False)
Whether to normalize the output matrix to make the leading diagonal
elements all 1
Returns
-------
prec : sparse matrix of shape (dim, dim)
The generated matrix.
Notes
-----
The sparsity is actually imposed on the cholesky factor of the matrix.
Thus alpha does not translate directly into the filling fraction of
the matrix itself.
See also
--------
make_spd_matrix
"""
random_state = check_random_state(random_state)
chol = -np.eye(dim)
aux = random_state.rand(dim, dim)
aux[aux < alpha] = 0
aux[aux > alpha] = (smallest_coef
+ (largest_coef - smallest_coef)
* random_state.rand(np.sum(aux > alpha)))
aux = np.tril(aux, k=-1)
# Permute the lines: we don't want to have asymmetries in the final
# SPD matrix
permutation = random_state.permutation(dim)
aux = aux[permutation].T[permutation]
chol += aux
prec = np.dot(chol.T, chol)
if norm_diag:
# Form the diagonal vector into a row matrix
d = np.diag(prec).reshape(1, prec.shape[0])
d = 1. / np.sqrt(d)
prec *= d
prec *= d.T
return prec
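# Illustrative usage sketch (not part of the original module): note that in this
# version the return value is built with np.dot and comes back as a dense ndarray,
# even though the docstring above describes it as a sparse matrix.
def _example_make_sparse_spd_matrix():
    prec = make_sparse_spd_matrix(dim=6, alpha=0.9, norm_diag=True,
                                  random_state=0)
    return prec.shape, np.allclose(prec, prec.T)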
def make_swiss_roll(n_samples=100, noise=0.0, random_state=None):
"""Generate a swiss roll dataset.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of sample points on the S curve.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, 3]
The points.
t : array of shape [n_samples]
The univariate position of the sample according to the main dimension
of the points in the manifold.
Notes
-----
The algorithm is from Marsland [1].
References
----------
.. [1] S. Marsland, "Machine Learning: An Algorithmic Perspective",
Chapter 10, 2009.
http://www-ist.massey.ac.nz/smarsland/Code/10/lle.py
"""
generator = check_random_state(random_state)
t = 1.5 * np.pi * (1 + 2 * generator.rand(1, n_samples))
x = t * np.cos(t)
y = 21 * generator.rand(1, n_samples)
z = t * np.sin(t)
X = np.concatenate((x, y, z))
X += noise * generator.randn(3, n_samples)
X = X.T
t = np.squeeze(t)
return X, t
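# Illustrative usage sketch (not part of the original module): X holds the 3-D
# points and t the position along the unrolled manifold (handy for colouring
# manifold-learning plots).
def _example_make_swiss_roll():
    X, t = make_swiss_roll(n_samples=300, noise=0.05, random_state=0)
    return X.shape, t.shape  # (300, 3) and (300,)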
def make_s_curve(n_samples=100, noise=0.0, random_state=None):
"""Generate an S curve dataset.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
n_samples : int, optional (default=100)
The number of sample points on the S curve.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, 3]
The points.
t : array of shape [n_samples]
The univariate position of the sample according to the main dimension
of the points in the manifold.
"""
generator = check_random_state(random_state)
t = 3 * np.pi * (generator.rand(1, n_samples) - 0.5)
x = np.sin(t)
y = 2.0 * generator.rand(1, n_samples)
z = np.sign(t) * (np.cos(t) - 1)
X = np.concatenate((x, y, z))
X += noise * generator.randn(3, n_samples)
X = X.T
t = np.squeeze(t)
return X, t
def make_gaussian_quantiles(mean=None, cov=1., n_samples=100,
n_features=2, n_classes=3,
shuffle=True, random_state=None):
"""Generate isotropic Gaussian and label samples by quantile
This classification dataset is constructed by taking a multi-dimensional
standard normal distribution and defining classes separated by nested
concentric multi-dimensional spheres such that roughly equal numbers of
samples are in each class (quantiles of the :math:`\chi^2` distribution).
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
mean : array of shape [n_features], optional (default=None)
The mean of the multi-dimensional normal distribution.
If None then use the origin (0, 0, ...).
cov : float, optional (default=1.)
The covariance matrix will be this value times the unit matrix. This
dataset only produces symmetric normal distributions.
n_samples : int, optional (default=100)
The total number of points equally divided among classes.
n_features : int, optional (default=2)
The number of features for each sample.
n_classes : int, optional (default=3)
The number of classes
shuffle : boolean, optional (default=True)
Shuffle the samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape [n_samples, n_features]
The generated samples.
y : array of shape [n_samples]
The integer labels for quantile membership of each sample.
Notes
-----
The dataset is from Zhu et al [1].
References
----------
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
"""
if n_samples < n_classes:
raise ValueError("n_samples must be at least n_classes")
generator = check_random_state(random_state)
if mean is None:
mean = np.zeros(n_features)
else:
mean = np.array(mean)
# Build multivariate normal distribution
X = generator.multivariate_normal(mean, cov * np.identity(n_features),
(n_samples,))
# Sort by distance from origin
idx = np.argsort(np.sum((X - mean[np.newaxis, :]) ** 2, axis=1))
X = X[idx, :]
# Label by quantile
step = n_samples // n_classes
y = np.hstack([np.repeat(np.arange(n_classes), step),
np.repeat(n_classes - 1, n_samples - step * n_classes)])
if shuffle:
X, y = util_shuffle(X, y, random_state=generator)
return X, y
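# Illustrative usage sketch (not part of the original module): classes are
# concentric shells with (near-)equal membership counts.
def _example_make_gaussian_quantiles():
    X, y = make_gaussian_quantiles(n_samples=90, n_features=2, n_classes=3,
                                   random_state=0)
    return np.bincount(y)  # array([30, 30, 30])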
def _shuffle(data, random_state=None):
generator = check_random_state(random_state)
n_rows, n_cols = data.shape
row_idx = generator.permutation(n_rows)
col_idx = generator.permutation(n_cols)
result = data[row_idx][:, col_idx]
return result, row_idx, col_idx
def make_biclusters(shape, n_clusters, noise=0.0, minval=10,
maxval=100, shuffle=True, random_state=None):
"""Generate an array with constant block diagonal structure for
biclustering.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
shape : iterable (n_rows, n_cols)
The shape of the result.
n_clusters : integer
The number of biclusters.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise.
minval : int, optional (default=10)
Minimum value of a bicluster.
maxval : int, optional (default=100)
Maximum value of a bicluster.
shuffle : boolean, optional (default=True)
Shuffle the samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape `shape`
The generated array.
rows : array of shape (n_clusters, X.shape[0],)
The indicators for cluster membership of each row.
cols : array of shape (n_clusters, X.shape[1],)
The indicators for cluster membership of each column.
References
----------
.. [1] Dhillon, I. S. (2001, August). Co-clustering documents and
words using bipartite spectral graph partitioning. In Proceedings
of the seventh ACM SIGKDD international conference on Knowledge
discovery and data mining (pp. 269-274). ACM.
See also
--------
make_checkerboard
"""
generator = check_random_state(random_state)
n_rows, n_cols = shape
consts = generator.uniform(minval, maxval, n_clusters)
# row and column clusters of approximately equal sizes
row_sizes = generator.multinomial(n_rows,
np.repeat(1.0 / n_clusters,
n_clusters))
col_sizes = generator.multinomial(n_cols,
np.repeat(1.0 / n_clusters,
n_clusters))
row_labels = np.hstack(list(np.repeat(val, rep) for val, rep in
zip(range(n_clusters), row_sizes)))
col_labels = np.hstack(list(np.repeat(val, rep) for val, rep in
zip(range(n_clusters), col_sizes)))
result = np.zeros(shape, dtype=np.float64)
for i in range(n_clusters):
selector = np.outer(row_labels == i, col_labels == i)
result[selector] += consts[i]
if noise > 0:
result += generator.normal(scale=noise, size=result.shape)
if shuffle:
result, row_idx, col_idx = _shuffle(result, random_state)
row_labels = row_labels[row_idx]
col_labels = col_labels[col_idx]
    rows = np.vstack([row_labels == c for c in range(n_clusters)])
    cols = np.vstack([col_labels == c for c in range(n_clusters)])
return result, rows, cols
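# Illustrative usage sketch -- not part of the original module: a
# 300 x 300 array with five constant biclusters plus Gaussian noise.
def _example_biclusters():
    data, rows, cols = make_biclusters(shape=(300, 300), n_clusters=5,
                                       noise=5, shuffle=False,
                                       random_state=0)
    # rows and cols are boolean indicator arrays of shape (5, 300).
    return data, rows, cols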
def make_checkerboard(shape, n_clusters, noise=0.0, minval=10,
maxval=100, shuffle=True, random_state=None):
"""Generate an array with block checkerboard structure for
biclustering.
Read more in the :ref:`User Guide <sample_generators>`.
Parameters
----------
shape : iterable (n_rows, n_cols)
The shape of the result.
n_clusters : integer or iterable (n_row_clusters, n_column_clusters)
The number of row and column clusters.
noise : float, optional (default=0.0)
The standard deviation of the gaussian noise.
minval : int, optional (default=10)
Minimum value of a bicluster.
maxval : int, optional (default=100)
Maximum value of a bicluster.
shuffle : boolean, optional (default=True)
Shuffle the samples.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
X : array of shape `shape`
The generated array.
rows : array of shape (n_clusters, X.shape[0],)
The indicators for cluster membership of each row.
cols : array of shape (n_clusters, X.shape[1],)
The indicators for cluster membership of each column.
References
----------
.. [1] Kluger, Y., Basri, R., Chang, J. T., & Gerstein, M. (2003).
Spectral biclustering of microarray data: coclustering genes
and conditions. Genome research, 13(4), 703-716.
See also
--------
make_biclusters
"""
generator = check_random_state(random_state)
if hasattr(n_clusters, "__len__"):
n_row_clusters, n_col_clusters = n_clusters
else:
n_row_clusters = n_col_clusters = n_clusters
# row and column clusters of approximately equal sizes
n_rows, n_cols = shape
row_sizes = generator.multinomial(n_rows,
np.repeat(1.0 / n_row_clusters,
n_row_clusters))
col_sizes = generator.multinomial(n_cols,
np.repeat(1.0 / n_col_clusters,
n_col_clusters))
row_labels = np.hstack(list(np.repeat(val, rep) for val, rep in
zip(range(n_row_clusters), row_sizes)))
col_labels = np.hstack(list(np.repeat(val, rep) for val, rep in
zip(range(n_col_clusters), col_sizes)))
result = np.zeros(shape, dtype=np.float64)
for i in range(n_row_clusters):
for j in range(n_col_clusters):
selector = np.outer(row_labels == i, col_labels == j)
result[selector] += generator.uniform(minval, maxval)
if noise > 0:
result += generator.normal(scale=noise, size=result.shape)
if shuffle:
result, row_idx, col_idx = _shuffle(result, random_state)
row_labels = row_labels[row_idx]
col_labels = col_labels[col_idx]
    rows = np.vstack([row_labels == label
                      for label in range(n_row_clusters)
                      for _ in range(n_col_clusters)])
    cols = np.vstack([col_labels == label
                      for _ in range(n_row_clusters)
                      for label in range(n_col_clusters)])
return result, rows, cols
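# Illustrative usage sketch -- not part of the original module: a
# checkerboard with 4 row clusters and 3 column clusters.
def _example_checkerboard():
    data, rows, cols = make_checkerboard(shape=(300, 300), n_clusters=(4, 3),
                                         noise=10, shuffle=False,
                                         random_state=0)
    # rows and cols each hold 4 * 3 = 12 boolean indicator rows, one per
    # (row cluster, column cluster) pair.
    return data, rows, cols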
| bsd-3-clause |
jmetzen/scikit-learn | examples/linear_model/plot_logistic.py | 312 | 1426 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Logit function
=========================================================
Shown in the plot is how logistic regression would, on this
synthetic dataset, classify values as either 0 or 1,
i.e. class one or two, using the logistic curve.
"""
print(__doc__)
# Code source: Gael Varoquaux
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
# this is our test set, it's just a straight line with some
# Gaussian noise
xmin, xmax = -5, 5
n_samples = 100
np.random.seed(0)
X = np.random.normal(size=n_samples)
y = (X > 0).astype(np.float)
X[X > 0] *= 4
X += .3 * np.random.normal(size=n_samples)
X = X[:, np.newaxis]
# run the classifier
clf = linear_model.LogisticRegression(C=1e5)
clf.fit(X, y)
# and plot the result
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.scatter(X.ravel(), y, color='black', zorder=20)
X_test = np.linspace(-5, 10, 300)
def model(x):
return 1 / (1 + np.exp(-x))
loss = model(X_test * clf.coef_ + clf.intercept_).ravel()
plt.plot(X_test, loss, color='blue', linewidth=3)
ols = linear_model.LinearRegression()
ols.fit(X, y)
plt.plot(X_test, ols.coef_ * X_test + ols.intercept_, linewidth=1)
plt.axhline(.5, color='.5')
plt.ylabel('y')
plt.xlabel('X')
plt.xticks(())
plt.yticks(())
plt.ylim(-.25, 1.25)
plt.xlim(-4, 10)
plt.show()
| bsd-3-clause |
Reagankm/KnockKnock | venv/lib/python3.4/site-packages/mpl_toolkits/mplot3d/art3d.py | 8 | 23462 | #!/usr/bin/python
# art3d.py, original mplot3d version by John Porter
# Parts rewritten by Reinier Heeres <reinier@heeres.eu>
# Minor additions by Ben Axelrod <baxelrod@coroware.com>
'''
Module containing 3D artist code and functions to convert 2D
artists into 3D versions which can be added to an Axes3D.
'''
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
from six.moves import zip
from matplotlib import lines, text as mtext, path as mpath, colors as mcolors
from matplotlib import artist
from matplotlib.collections import Collection, LineCollection, \
PolyCollection, PatchCollection, PathCollection
from matplotlib.cm import ScalarMappable
from matplotlib.patches import Patch
from matplotlib.colors import Normalize
from matplotlib.cbook import iterable
import warnings
import numpy as np
import math
from . import proj3d
def norm_angle(a):
"""Return angle between -180 and +180"""
a = (a + 360) % 360
if a > 180:
a = a - 360
return a
def norm_text_angle(a):
"""Return angle between -90 and +90"""
a = (a + 180) % 180
if a > 90:
a = a - 180
return a
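# Illustrative sketch -- not part of the original module: both helpers wrap
# angles into a symmetric range.
def _example_norm_angles():
    assert norm_angle(270) == -90
    assert norm_text_angle(135) == -45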
def get_dir_vector(zdir):
if zdir == 'x':
return np.array((1, 0, 0))
elif zdir == 'y':
return np.array((0, 1, 0))
elif zdir == 'z':
return np.array((0, 0, 1))
elif zdir is None:
return np.array((0, 0, 0))
elif iterable(zdir) and len(zdir) == 3:
return zdir
else:
raise ValueError("'x', 'y', 'z', None or vector of length 3 expected")
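# Illustrative sketch -- not part of the original module: axis names map to
# unit vectors, None maps to the zero vector, and length-3 iterables pass
# through unchanged.
def _example_dir_vector():
    assert (get_dir_vector('z') == np.array((0, 0, 1))).all()
    assert (get_dir_vector(None) == np.array((0, 0, 0))).all()
    return get_dir_vector((1, 1, 0))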
class Text3D(mtext.Text):
'''
Text object with 3D position and (in the future) direction.
'''
def __init__(self, x=0, y=0, z=0, text='', zdir='z', **kwargs):
'''
*x*, *y*, *z* Position of text
*text* Text string to display
*zdir* Direction of text
Keyword arguments are passed onto :func:`~matplotlib.text.Text`.
'''
mtext.Text.__init__(self, x, y, text, **kwargs)
self.set_3d_properties(z, zdir)
def set_3d_properties(self, z=0, zdir='z'):
x, y = self.get_position()
self._position3d = np.array((x, y, z))
self._dir_vec = get_dir_vector(zdir)
def draw(self, renderer):
proj = proj3d.proj_trans_points([self._position3d, \
self._position3d + self._dir_vec], renderer.M)
dx = proj[0][1] - proj[0][0]
dy = proj[1][1] - proj[1][0]
        if dx == 0. and dy == 0.:
# atan2 raises ValueError: math domain error on 0,0
angle = 0.
else:
angle = math.degrees(math.atan2(dy, dx))
self.set_position((proj[0][0], proj[1][0]))
self.set_rotation(norm_text_angle(angle))
mtext.Text.draw(self, renderer)
def text_2d_to_3d(obj, z=0, zdir='z'):
"""Convert a Text to a Text3D object."""
obj.__class__ = Text3D
obj.set_3d_properties(z, zdir)
class Line3D(lines.Line2D):
'''
3D line object.
'''
def __init__(self, xs, ys, zs, *args, **kwargs):
'''
Keyword arguments are passed onto :func:`~matplotlib.lines.Line2D`.
'''
lines.Line2D.__init__(self, [], [], *args, **kwargs)
self._verts3d = xs, ys, zs
def set_3d_properties(self, zs=0, zdir='z'):
xs = self.get_xdata()
ys = self.get_ydata()
try:
# If *zs* is a list or array, then this will fail and
# just proceed to juggle_axes().
zs = float(zs)
zs = [zs for x in xs]
except TypeError:
pass
self._verts3d = juggle_axes(xs, ys, zs, zdir)
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_data(xs, ys)
lines.Line2D.draw(self, renderer)
def line_2d_to_3d(line, zs=0, zdir='z'):
'''
Convert a 2D line to 3D.
'''
line.__class__ = Line3D
line.set_3d_properties(zs, zdir)
def path_to_3d_segment(path, zs=0, zdir='z'):
'''Convert a path to a 3D segment.'''
if not iterable(zs):
zs = np.ones(len(path)) * zs
seg = []
pathsegs = path.iter_segments(simplify=False, curves=False)
for (((x, y), code), z) in zip(pathsegs, zs):
seg.append((x, y, z))
seg3d = [juggle_axes(x, y, z, zdir) for (x, y, z) in seg]
return seg3d
def paths_to_3d_segments(paths, zs=0, zdir='z'):
'''
Convert paths from a collection object to 3D segments.
'''
if not iterable(zs):
zs = np.ones(len(paths)) * zs
segments = []
for path, pathz in zip(paths, zs):
segments.append(path_to_3d_segment(path, pathz, zdir))
return segments
class Line3DCollection(LineCollection):
'''
A collection of 3D lines.
'''
def __init__(self, segments, *args, **kwargs):
'''
Keyword arguments are passed onto :func:`~matplotlib.collections.LineCollection`.
'''
LineCollection.__init__(self, segments, *args, **kwargs)
def set_sort_zpos(self,val):
'''Set the position to use for z-sorting.'''
self._sort_zpos = val
def set_segments(self, segments):
'''
Set 3D segments
'''
self._segments3d = np.asanyarray(segments)
LineCollection.set_segments(self, [])
def do_3d_projection(self, renderer):
'''
Project the points according to renderer matrix.
'''
xyslist = [
proj3d.proj_trans_points(points, renderer.M) for points in
self._segments3d]
segments_2d = [list(zip(xs, ys)) for (xs, ys, zs) in xyslist]
LineCollection.set_segments(self, segments_2d)
# FIXME
minz = 1e9
for (xs, ys, zs) in xyslist:
minz = min(minz, min(zs))
return minz
def draw(self, renderer, project=False):
if project:
self.do_3d_projection(renderer)
LineCollection.draw(self, renderer)
def line_collection_2d_to_3d(col, zs=0, zdir='z'):
"""Convert a LineCollection to a Line3DCollection object."""
segments3d = paths_to_3d_segments(col.get_paths(), zs, zdir)
col.__class__ = Line3DCollection
col.set_segments(segments3d)
class Patch3D(Patch):
'''
3D patch object.
'''
def __init__(self, *args, **kwargs):
zs = kwargs.pop('zs', [])
zdir = kwargs.pop('zdir', 'z')
Patch.__init__(self, *args, **kwargs)
self.set_3d_properties(zs, zdir)
def set_3d_properties(self, verts, zs=0, zdir='z'):
if not iterable(zs):
zs = np.ones(len(verts)) * zs
self._segment3d = [juggle_axes(x, y, z, zdir) \
for ((x, y), z) in zip(verts, zs)]
self._facecolor3d = Patch.get_facecolor(self)
def get_path(self):
return self._path2d
def get_facecolor(self):
return self._facecolor2d
def do_3d_projection(self, renderer):
s = self._segment3d
xs, ys, zs = list(zip(*s))
vxs, vys,vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, renderer.M)
self._path2d = mpath.Path(list(zip(vxs, vys)))
# FIXME: coloring
self._facecolor2d = self._facecolor3d
return min(vzs)
def draw(self, renderer):
Patch.draw(self, renderer)
class PathPatch3D(Patch3D):
'''
3D PathPatch object.
'''
def __init__(self, path, **kwargs):
zs = kwargs.pop('zs', [])
zdir = kwargs.pop('zdir', 'z')
Patch.__init__(self, **kwargs)
self.set_3d_properties(path, zs, zdir)
def set_3d_properties(self, path, zs=0, zdir='z'):
Patch3D.set_3d_properties(self, path.vertices, zs=zs, zdir=zdir)
self._code3d = path.codes
def do_3d_projection(self, renderer):
s = self._segment3d
xs, ys, zs = list(zip(*s))
vxs, vys,vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, renderer.M)
self._path2d = mpath.Path(list(zip(vxs, vys)), self._code3d)
# FIXME: coloring
self._facecolor2d = self._facecolor3d
return min(vzs)
def get_patch_verts(patch):
"""Return a list of vertices for the path of a patch."""
trans = patch.get_patch_transform()
path = patch.get_path()
polygons = path.to_polygons(trans)
if len(polygons):
return polygons[0]
else:
return []
def patch_2d_to_3d(patch, z=0, zdir='z'):
"""Convert a Patch to a Patch3D object."""
verts = get_patch_verts(patch)
patch.__class__ = Patch3D
patch.set_3d_properties(verts, z, zdir)
def pathpatch_2d_to_3d(pathpatch, z=0, zdir='z'):
"""Convert a PathPatch to a PathPatch3D object."""
path = pathpatch.get_path()
trans = pathpatch.get_patch_transform()
mpath = trans.transform_path(path)
pathpatch.__class__ = PathPatch3D
pathpatch.set_3d_properties(mpath, z, zdir)
class Patch3DCollection(PatchCollection):
'''
A collection of 3D patches.
'''
def __init__(self, *args, **kwargs):
"""
Create a collection of flat 3D patches with its normal vector
pointed in *zdir* direction, and located at *zs* on the *zdir*
axis. 'zs' can be a scalar or an array-like of the same length as
the number of patches in the collection.
Constructor arguments are the same as for
:class:`~matplotlib.collections.PatchCollection`. In addition,
keywords *zs=0* and *zdir='z'* are available.
Also, the keyword argument "depthshade" is available to
indicate whether or not to shade the patches in order to
give the appearance of depth (default is *True*).
This is typically desired in scatter plots.
"""
zs = kwargs.pop('zs', 0)
zdir = kwargs.pop('zdir', 'z')
self._depthshade = kwargs.pop('depthshade', True)
PatchCollection.__init__(self, *args, **kwargs)
self.set_3d_properties(zs, zdir)
def set_sort_zpos(self,val):
'''Set the position to use for z-sorting.'''
self._sort_zpos = val
def set_3d_properties(self, zs, zdir):
# Force the collection to initialize the face and edgecolors
# just in case it is a scalarmappable with a colormap.
self.update_scalarmappable()
offsets = self.get_offsets()
if len(offsets) > 0:
xs, ys = list(zip(*offsets))
else:
xs = []
ys = []
self._offsets3d = juggle_axes(xs, ys, np.atleast_1d(zs), zdir)
self._facecolor3d = self.get_facecolor()
self._edgecolor3d = self.get_edgecolor()
def do_3d_projection(self, renderer):
xs, ys, zs = self._offsets3d
vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, renderer.M)
fcs = (zalpha(self._facecolor3d, vzs) if self._depthshade else
self._facecolor3d)
fcs = mcolors.colorConverter.to_rgba_array(fcs, self._alpha)
self.set_facecolors(fcs)
ecs = (zalpha(self._edgecolor3d, vzs) if self._depthshade else
self._edgecolor3d)
ecs = mcolors.colorConverter.to_rgba_array(ecs, self._alpha)
self.set_edgecolors(ecs)
PatchCollection.set_offsets(self, list(zip(vxs, vys)))
if vzs.size > 0 :
return min(vzs)
else :
return np.nan
class Path3DCollection(PathCollection):
'''
A collection of 3D paths.
'''
def __init__(self, *args, **kwargs):
"""
Create a collection of flat 3D paths with its normal vector
pointed in *zdir* direction, and located at *zs* on the *zdir*
axis. 'zs' can be a scalar or an array-like of the same length as
the number of paths in the collection.
Constructor arguments are the same as for
:class:`~matplotlib.collections.PathCollection`. In addition,
keywords *zs=0* and *zdir='z'* are available.
Also, the keyword argument "depthshade" is available to
indicate whether or not to shade the patches in order to
give the appearance of depth (default is *True*).
This is typically desired in scatter plots.
"""
zs = kwargs.pop('zs', 0)
zdir = kwargs.pop('zdir', 'z')
self._depthshade = kwargs.pop('depthshade', True)
PathCollection.__init__(self, *args, **kwargs)
self.set_3d_properties(zs, zdir)
def set_sort_zpos(self, val):
'''Set the position to use for z-sorting.'''
self._sort_zpos = val
def set_3d_properties(self, zs, zdir):
# Force the collection to initialize the face and edgecolors
# just in case it is a scalarmappable with a colormap.
self.update_scalarmappable()
offsets = self.get_offsets()
if len(offsets) > 0:
xs, ys = list(zip(*offsets))
else:
xs = []
ys = []
self._offsets3d = juggle_axes(xs, ys, np.atleast_1d(zs), zdir)
self._facecolor3d = self.get_facecolor()
self._edgecolor3d = self.get_edgecolor()
def do_3d_projection(self, renderer):
xs, ys, zs = self._offsets3d
vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, renderer.M)
fcs = (zalpha(self._facecolor3d, vzs) if self._depthshade else
self._facecolor3d)
fcs = mcolors.colorConverter.to_rgba_array(fcs, self._alpha)
self.set_facecolors(fcs)
ecs = (zalpha(self._edgecolor3d, vzs) if self._depthshade else
self._edgecolor3d)
ecs = mcolors.colorConverter.to_rgba_array(ecs, self._alpha)
self.set_edgecolors(ecs)
PathCollection.set_offsets(self, list(zip(vxs, vys)))
if vzs.size > 0 :
return min(vzs)
else :
return np.nan
def patch_collection_2d_to_3d(col, zs=0, zdir='z', depthshade=True):
"""
Convert a :class:`~matplotlib.collections.PatchCollection` into a
:class:`Patch3DCollection` object
(or a :class:`~matplotlib.collections.PathCollection` into a
:class:`Path3DCollection` object).
Keywords:
    *zs*         The location or locations to place the patches in the
collection along the *zdir* axis. Defaults to 0.
*zdir* The axis in which to place the patches. Default is "z".
*depthshade* Whether to shade the patches to give a sense of depth.
Defaults to *True*.
"""
if isinstance(col, PathCollection):
col.__class__ = Path3DCollection
elif isinstance(col, PatchCollection):
col.__class__ = Patch3DCollection
col._depthshade = depthshade
col.set_3d_properties(zs, zdir)
class Poly3DCollection(PolyCollection):
'''
A collection of 3D polygons.
'''
def __init__(self, verts, *args, **kwargs):
'''
Create a Poly3DCollection.
*verts* should contain 3D coordinates.
Keyword arguments:
zsort, see set_zsort for options.
Note that this class does a bit of magic with the _facecolors
and _edgecolors properties.
'''
self.set_zsort(kwargs.pop('zsort', True))
PolyCollection.__init__(self, verts, *args, **kwargs)
_zsort_functions = {
'average': np.average,
'min': np.min,
'max': np.max,
}
def set_zsort(self, zsort):
'''
Set z-sorting behaviour:
boolean: if True use default 'average'
string: 'average', 'min' or 'max'
'''
if zsort is True:
zsort = 'average'
if zsort is not False:
if zsort in self._zsort_functions:
zsortfunc = self._zsort_functions[zsort]
else:
return False
else:
zsortfunc = None
self._zsort = zsort
self._sort_zpos = None
self._zsortfunc = zsortfunc
def get_vector(self, segments3d):
"""Optimize points for projection"""
si = 0
ei = 0
segis = []
points = []
for p in segments3d:
points.extend(p)
ei = si+len(p)
segis.append((si, ei))
si = ei
if len(segments3d) > 0 :
xs, ys, zs = list(zip(*points))
else :
# We need this so that we can skip the bad unpacking from zip()
xs, ys, zs = [], [], []
ones = np.ones(len(xs))
self._vec = np.array([xs, ys, zs, ones])
self._segis = segis
def set_verts(self, verts, closed=True):
'''Set 3D vertices.'''
self.get_vector(verts)
# 2D verts will be updated at draw time
PolyCollection.set_verts(self, [], closed)
def set_3d_properties(self):
# Force the collection to initialize the face and edgecolors
# just in case it is a scalarmappable with a colormap.
self.update_scalarmappable()
self._sort_zpos = None
self.set_zsort(True)
self._facecolors3d = PolyCollection.get_facecolors(self)
self._edgecolors3d = PolyCollection.get_edgecolors(self)
self._alpha3d = PolyCollection.get_alpha(self)
def set_sort_zpos(self,val):
'''Set the position to use for z-sorting.'''
self._sort_zpos = val
def do_3d_projection(self, renderer):
'''
Perform the 3D projection for this object.
'''
# FIXME: This may no longer be needed?
if self._A is not None:
self.update_scalarmappable()
self._facecolors3d = self._facecolors
txs, tys, tzs = proj3d.proj_transform_vec(self._vec, renderer.M)
xyzlist = [(txs[si:ei], tys[si:ei], tzs[si:ei]) \
for si, ei in self._segis]
# This extra fuss is to re-order face / edge colors
cface = self._facecolors3d
cedge = self._edgecolors3d
if len(cface) != len(xyzlist):
cface = cface.repeat(len(xyzlist), axis=0)
if len(cedge) != len(xyzlist):
if len(cedge) == 0:
cedge = cface
cedge = cedge.repeat(len(xyzlist), axis=0)
# if required sort by depth (furthest drawn first)
if self._zsort:
z_segments_2d = [(self._zsortfunc(zs), list(zip(xs, ys)), fc, ec) for
(xs, ys, zs), fc, ec in zip(xyzlist, cface, cedge)]
z_segments_2d.sort(key=lambda x: x[0], reverse=True)
else:
raise ValueError("whoops")
segments_2d = [s for z, s, fc, ec in z_segments_2d]
PolyCollection.set_verts(self, segments_2d)
self._facecolors2d = [fc for z, s, fc, ec in z_segments_2d]
if len(self._edgecolors3d) == len(cface):
self._edgecolors2d = [ec for z, s, fc, ec in z_segments_2d]
else:
self._edgecolors2d = self._edgecolors3d
# Return zorder value
if self._sort_zpos is not None:
zvec = np.array([[0], [0], [self._sort_zpos], [1]])
ztrans = proj3d.proj_transform_vec(zvec, renderer.M)
return ztrans[2][0]
elif tzs.size > 0 :
# FIXME: Some results still don't look quite right.
# In particular, examine contourf3d_demo2.py
# with az = -54 and elev = -45.
return np.min(tzs)
else :
return np.nan
def set_facecolor(self, colors):
PolyCollection.set_facecolor(self, colors)
self._facecolors3d = PolyCollection.get_facecolor(self)
set_facecolors = set_facecolor
def set_edgecolor(self, colors):
PolyCollection.set_edgecolor(self, colors)
self._edgecolors3d = PolyCollection.get_edgecolor(self)
set_edgecolors = set_edgecolor
def set_alpha(self, alpha):
"""
        Set the alpha transparencies of the collection. *alpha* must be
a float or *None*.
ACCEPTS: float or None
"""
if alpha is not None:
try:
float(alpha)
except TypeError:
raise TypeError('alpha must be a float or None')
artist.Artist.set_alpha(self, alpha)
try:
self._facecolors = mcolors.colorConverter.to_rgba_array(
self._facecolors3d, self._alpha)
except (AttributeError, TypeError, IndexError):
pass
try:
self._edgecolors = mcolors.colorConverter.to_rgba_array(
self._edgecolors3d, self._alpha)
except (AttributeError, TypeError, IndexError):
pass
def get_facecolors(self):
return self._facecolors2d
get_facecolor = get_facecolors
def get_edgecolors(self):
return self._edgecolors2d
get_edgecolor = get_edgecolors
def draw(self, renderer):
return Collection.draw(self, renderer)
def poly_collection_2d_to_3d(col, zs=0, zdir='z'):
"""Convert a PolyCollection to a Poly3DCollection object."""
segments_3d = paths_to_3d_segments(col.get_paths(), zs, zdir)
col.__class__ = Poly3DCollection
col.set_verts(segments_3d)
col.set_3d_properties()
def juggle_axes(xs, ys, zs, zdir):
"""
Reorder coordinates so that 2D xs, ys can be plotted in the plane
orthogonal to zdir. zdir is normally x, y or z. However, if zdir
starts with a '-' it is interpreted as a compensation for rotate_axes.
"""
if zdir == 'x':
return zs, xs, ys
elif zdir == 'y':
return xs, zs, ys
elif zdir[0] == '-':
return rotate_axes(xs, ys, zs, zdir)
else:
return xs, ys, zs
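# Illustrative sketch -- not part of the original module: with zdir='x' the
# scalar zs becomes the first coordinate, so the 2D data ends up in the
# plane orthogonal to the x axis.
def _example_juggle_axes():
    xs, ys, zs = [1, 2], [3, 4], 0
    return juggle_axes(xs, ys, zs, 'x')  # -> (0, [1, 2], [3, 4])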
def rotate_axes(xs, ys, zs, zdir):
"""
Reorder coordinates so that the axes are rotated with zdir along
the original z axis. Prepending the axis with a '-' does the
inverse transform, so zdir can be x, -x, y, -y, z or -z
"""
if zdir == 'x':
return ys, zs, xs
elif zdir == '-x':
return zs, xs, ys
elif zdir == 'y':
return zs, xs, ys
elif zdir == '-y':
return ys, zs, xs
else:
return xs, ys, zs
def iscolor(c):
try:
if len(c) == 4 or len(c) == 3:
if iterable(c[0]):
return False
if hasattr(c[0], '__float__'):
return True
except:
return False
return False
def get_colors(c, num):
"""Stretch the color argument to provide the required number num"""
if type(c) == type("string"):
c = mcolors.colorConverter.to_rgba(c)
if iscolor(c):
return [c] * num
if len(c) == num:
return c
elif iscolor(c):
return [c] * num
elif len(c) == 0: #if edgecolor or facecolor is specified as 'none'
return [[0,0,0,0]] * num
elif iscolor(c[0]):
return [c[0]] * num
else:
raise ValueError('unknown color format %s' % c)
def zalpha(colors, zs):
"""Modify the alphas of the color list according to depth"""
# FIXME: This only works well if the points for *zs* are well-spaced
# in all three dimensions. Otherwise, at certain orientations,
# the min and max zs are very close together.
# Should really normalize against the viewing depth.
colors = get_colors(colors, len(zs))
if zs.size > 0 :
norm = Normalize(min(zs), max(zs))
sats = 1 - norm(zs) * 0.7
colors = [(c[0], c[1], c[2], c[3] * s) for c, s in zip(colors, sats)]
return colors
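# Illustrative sketch -- not part of the original module: points further
# from the viewer receive smaller alpha values.
def _example_zalpha():
    zs = np.array([0., 1., 2.])
    return zalpha('r', zs)  # three RGBA tuples with decreasing alpha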
| gpl-2.0 |
jblupus/PyLoyaltyProject | old/interactions/format_interactions.py | 1 | 5936 | from os import mkdir
from os.path import exists
import numpy as np
import pandas as pd
from old.project import CassandraUtils
from old.project import get_time
RTD_STS_KEY = 'retweetedStatus'
MT_STS_KEY = 'userMentionEntities'
PATH = '/home/joao/Dev/Data/Twitter/'
FRIENDS_PATH = '/home/joao/Dev/Data/Twitter/friendships/'
def form_tweet(tweet, date=None):
return {'id': tweet['id'],
'lang': tweet['lang'] if 'lang' in tweet else None,
'text': tweet['text'],
'date': date or tweet['createdAt'],
'user': {'id': tweet['user']['id']}}
def folding(tweets):
last_index = np.ceil(len(tweets) * 0.75).astype(int)
if last_index < 10:
return np.array(tweets)
np_array = np.array(tweets)
return np_array[0:last_index]
def append_frame(df, df_plus):
return df.append(df_plus)
def save_frames(file_name, _data):
if len(_data) > 0:
data_frame = mount_frame(data=_data)
df = pd.DataFrame()
df = append_frame(df, data_frame)
try:
df.to_csv(file_name, sep=',', encoding='utf-8')
except IOError:
df.to_csv('Saida/retweets/' + file_name.split('/')[len(file_name.split('/')) - 1])
def mount_frame(data):
data = np.array(data)
data_frame = pd.DataFrame()
if np.size(data) > 0:
data_frame['alter'] = data[:, 1]
data_frame['tweet'] = data[:, 0]
return data_frame
class FriendsDataToDataFrame:
# frame_path = '/home/joao/Dev/Data/Twitter/friends.data/data.frame/'
def __init__(self):
self.cass = CassandraUtils()
self.frame_path = '/home/joao/Dev/Shared/Saida/'
self.likes_path = self.frame_path + 'likes/'
self.mentions_path = self.frame_path + 'mentions/'
self.retweets_path = self.frame_path + 'retweets/'
print self.retweets_path
try:
if not exists(self.frame_path):
mkdir(self.frame_path)
if not exists(self.likes_path):
mkdir(self.likes_path)
if not exists(self.mentions_path):
mkdir(self.mentions_path)
if not exists(self.retweets_path):
mkdir(self.retweets_path)
except Exception as e:
raise e
def get_seeds(self):
# seeds = pd.DataFrame()
seeds = self.cass.find_seeds();
# print seeds
seeds = map(lambda x: x.user_id, seeds)
return seeds
#
# def check_friends_data(self, user_id):
# friends = self(user_id=user_id)
# data = filter(lambda friend_id: not exists(path=self.frame_path + str(friend_id) + '.csv'), friends['id'])
# return len(data) == 0
# def check_json_interactions(self, seeds=None):
# print 'Starting...', get_time(True)
# # seeds = load_seeds() if seeds is None else seeds
# seeds = self.get_seeds()
# for user_id in seeds:
# if not self.check_friends_data(user_id=user_id):
# return user_id
# print user_id
# print 'Stopping...', get_time(True)
# return None
def json_interactions(self, _type=None, _force=False, _clean=False, check=False, init_id=0, unique=False):
print 'Starting...', get_time(True)
if not unique:
seeds = self.get_seeds()
# if _clean:
# clean_data(self.frame_path)
# elif check:
# user_id = self.check_json_interactions(seeds=seeds)
# if user_id is not None:
# self.friends_data(user_id=user_id, _type=_type, force=True)
        #                 return None
seeds = filter(lambda s: s > init_id, seeds)
for user_id in seeds:
self.friends_data(user_id=user_id, _type=_type, force=_force)
else:
self.friends_data(user_id=init_id, _type=_type, force=_force)
print 'Stopping...', get_time(True)
def friends_data(self, user_id, _type, force=False):
friends = map(lambda x: x.friend_id, self.cass.find_friends(user_id=user_id))
for friend_id in friends:
if _type is None:
pass
elif _type == 1:
path = self.likes_path + str(friend_id) + '.csv'
if not exists(path=path) or force:
self.save_likes(friend_id=friend_id)
elif _type == 2:
path = self.mentions_path + str(friend_id) + '.csv'
if not exists(path=path) or force:
self.save_mentions(friend_id=friend_id)
elif _type == 3:
path = self.retweets_path + str(friend_id) + '.csv'
if not exists(path=path) or force:
self.save_retweets(friend_id=friend_id)
else:
pass
print user_id, friend_id, get_time()
def save_likes(self, friend_id):
likes = self.cass.find_likes(user_id=friend_id)
likes_data = map(lambda tt: [tt['id'], tt['user']['id']], likes)
save_frames(self.likes_path + str(friend_id) + '.csv', likes_data)
def save_retweets(self, friend_id, _tweets=None):
tweets = _tweets or self.cass.find_tweets(user_id=friend_id)
retweets = self.cass.find_retweets(tweets=tweets)
retweets_data = map(lambda tt: [tt[RTD_STS_KEY]['id'], tt[RTD_STS_KEY]['user']['id']], retweets)
save_frames(self.retweets_path + str(friend_id) + '.csv', retweets_data)
def save_mentions(self, friend_id, _tweets=None):
tweets = _tweets or self.cass.find_tweets(user_id=friend_id)
mentions = self.cass.find_mentions(tweets=tweets)
mentions_data = []
for tweet in mentions:
mentions_data.extend(map(lambda mention: [tweet['id'], mention['id']], tweet[MT_STS_KEY]))
save_frames(self.mentions_path + str(friend_id) + '.csv', mentions_data)
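# Illustrative usage sketch -- not part of the original module; it assumes a
# reachable Cassandra backend behind CassandraUtils, so it is left commented:
#     formatter = FriendsDataToDataFrame()
#     formatter.json_interactions(_type=3)  # export retweet frames for all seeds
#     formatter.json_interactions(_type=1, init_id=12345, unique=True)  # likes of one user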
| bsd-2-clause |
sandeepdsouza93/TensorFlow-15712 | tensorflow/tools/dist_test/python/census_widendeep.py | 3 | 11352 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Distributed training and evaluation of a wide and deep model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import os
from six.moves import urllib
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.estimators import run_config
# Define command-line flags
flags = tf.app.flags
flags.DEFINE_string("data_dir", "/tmp/census-data",
"Directory for storing the cesnsus data")
flags.DEFINE_string("model_dir", "/tmp/census_wide_and_deep_model",
"Directory for storing the model")
flags.DEFINE_string("output_dir", "", "Base output directory.")
flags.DEFINE_string("schedule", "local_run",
"Schedule to run for this experiment.")
flags.DEFINE_string("master_grpc_url", "",
"URL to master GRPC tensorflow server, e.g.,"
"grpc://127.0.0.1:2222")
flags.DEFINE_integer("num_parameter_servers", 0,
"Number of parameter servers")
flags.DEFINE_integer("worker_index", 0,
"Worker index (>=0)")
flags.DEFINE_integer("train_steps", 1000, "Number of training steps")
flags.DEFINE_integer("eval_steps", 1, "Number of evaluation steps")
FLAGS = flags.FLAGS
# Constants: Data download URLs
TRAIN_DATA_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
TEST_DATA_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"
# Define features for the model
def census_model_config():
"""Configuration for the census Wide & Deep model.
Returns:
columns: Column names to retrieve from the data source
label_column: Name of the label column
wide_columns: List of wide columns
deep_columns: List of deep columns
categorical_column_names: Names of the categorical columns
continuous_column_names: Names of the continuous columns
"""
# 1. Categorical base columns.
gender = tf.contrib.layers.sparse_column_with_keys(
column_name="gender", keys=["female", "male"])
race = tf.contrib.layers.sparse_column_with_keys(
column_name="race",
keys=["Amer-Indian-Eskimo",
"Asian-Pac-Islander",
"Black",
"Other",
"White"])
education = tf.contrib.layers.sparse_column_with_hash_bucket(
"education", hash_bucket_size=1000)
marital_status = tf.contrib.layers.sparse_column_with_hash_bucket(
"marital_status", hash_bucket_size=100)
relationship = tf.contrib.layers.sparse_column_with_hash_bucket(
"relationship", hash_bucket_size=100)
workclass = tf.contrib.layers.sparse_column_with_hash_bucket(
"workclass", hash_bucket_size=100)
occupation = tf.contrib.layers.sparse_column_with_hash_bucket(
"occupation", hash_bucket_size=1000)
native_country = tf.contrib.layers.sparse_column_with_hash_bucket(
"native_country", hash_bucket_size=1000)
# 2. Continuous base columns.
age = tf.contrib.layers.real_valued_column("age")
age_buckets = tf.contrib.layers.bucketized_column(
age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
education_num = tf.contrib.layers.real_valued_column("education_num")
capital_gain = tf.contrib.layers.real_valued_column("capital_gain")
capital_loss = tf.contrib.layers.real_valued_column("capital_loss")
hours_per_week = tf.contrib.layers.real_valued_column("hours_per_week")
wide_columns = [
gender, native_country, education, occupation, workclass,
marital_status, relationship, age_buckets,
tf.contrib.layers.crossed_column([education, occupation],
hash_bucket_size=int(1e4)),
tf.contrib.layers.crossed_column([native_country, occupation],
hash_bucket_size=int(1e4)),
tf.contrib.layers.crossed_column([age_buckets, race, occupation],
hash_bucket_size=int(1e6))]
deep_columns = [
tf.contrib.layers.embedding_column(workclass, dimension=8),
tf.contrib.layers.embedding_column(education, dimension=8),
tf.contrib.layers.embedding_column(marital_status, dimension=8),
tf.contrib.layers.embedding_column(gender, dimension=8),
tf.contrib.layers.embedding_column(relationship, dimension=8),
tf.contrib.layers.embedding_column(race, dimension=8),
tf.contrib.layers.embedding_column(native_country, dimension=8),
tf.contrib.layers.embedding_column(occupation, dimension=8),
age, education_num, capital_gain, capital_loss, hours_per_week]
# Define the column names for the data sets.
columns = ["age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week",
"native_country", "income_bracket"]
label_column = "label"
categorical_columns = ["workclass", "education", "marital_status",
"occupation", "relationship", "race", "gender",
"native_country"]
continuous_columns = ["age", "education_num", "capital_gain",
"capital_loss", "hours_per_week"]
return (columns, label_column, wide_columns, deep_columns,
categorical_columns, continuous_columns)
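# Illustrative sketch -- not part of the original script: the configuration
# tuple unpacks as follows (names on the left are arbitrary).
def _example_census_model_config():
  (columns, label_column, wide_columns, deep_columns,
   categorical_columns, continuous_columns) = census_model_config()
  return len(wide_columns), len(deep_columns)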
class CensusDataSource(object):
"""Source of census data."""
def __init__(self, data_dir, train_data_url, test_data_url,
columns, label_column,
categorical_columns, continuous_columns):
"""Constructor of CensusDataSource.
Args:
data_dir: Directory to save/load the data files
train_data_url: URL from which the training data can be downloaded
test_data_url: URL from which the test data can be downloaded
columns: Columns to retrieve from the data files (A list of strings)
label_column: Name of the label column
categorical_columns: Names of the categorical columns (A list of strings)
      continuous_columns: Names of the continuous columns (A list of strings)
"""
# Retrieve data from disk (if available) or download from the web.
train_file_path = os.path.join(data_dir, "adult.data")
if os.path.isfile(train_file_path):
print("Loading training data from file: %s" % train_file_path)
train_file = open(train_file_path)
    else:
      urllib.urlretrieve(train_data_url, train_file_path)
      train_file = open(train_file_path)
test_file_path = os.path.join(data_dir, "adult.test")
if os.path.isfile(test_file_path):
print("Loading test data from file: %s" % test_file_path)
test_file = open(test_file_path)
    else:
      urllib.urlretrieve(test_data_url, test_file_path)
      test_file = open(test_file_path)
# Read the training and testing data sets into Pandas DataFrame.
import pandas # pylint: disable=g-import-not-at-top
self._df_train = pandas.read_csv(train_file, names=columns,
skipinitialspace=True)
self._df_test = pandas.read_csv(test_file, names=columns,
skipinitialspace=True, skiprows=1)
# Remove the NaN values in the last rows of the tables
self._df_train = self._df_train[:-1]
self._df_test = self._df_test[:-1]
# Apply the threshold to get the labels.
income_thresh = lambda x: ">50K" in x
self._df_train[label_column] = (
self._df_train["income_bracket"].apply(income_thresh)).astype(int)
self._df_test[label_column] = (
self._df_test["income_bracket"].apply(income_thresh)).astype(int)
self.label_column = label_column
self.categorical_columns = categorical_columns
self.continuous_columns = continuous_columns
def input_train_fn(self):
return self._input_fn(self._df_train)
def input_test_fn(self):
return self._input_fn(self._df_test)
# TODO(cais): Turn into minibatch feeder
def _input_fn(self, df):
"""Input data function.
Creates a dictionary mapping from each continuous feature column name
(k) to the values of that column stored in a constant Tensor.
Args:
df: data feed
Returns:
feature columns and labels
"""
continuous_cols = {k: tf.constant(df[k].values)
for k in self.continuous_columns}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].values,
shape=[df[k].size, 1])
for k in self.categorical_columns}
# Merges the two dictionaries into one.
feature_cols = dict(continuous_cols.items() + categorical_cols.items())
# Converts the label column into a constant Tensor.
label = tf.constant(df[self.label_column].values)
# Returns the feature columns and the label.
return feature_cols, label
def _create_experiment_fn(output_dir): # pylint: disable=unused-argument
"""Experiment creation function."""
(columns, label_column, wide_columns, deep_columns, categorical_columns,
continuous_columns) = census_model_config()
census_data_source = CensusDataSource(FLAGS.data_dir,
TRAIN_DATA_URL, TEST_DATA_URL,
columns, label_column,
categorical_columns,
continuous_columns)
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
tf.contrib.learn.TaskType.PS: ["fake_ps"] *
FLAGS.num_parameter_servers
},
"task": {
"index": FLAGS.worker_index
}
})
config = run_config.RunConfig(master=FLAGS.master_grpc_url)
estimator = tf.contrib.learn.DNNLinearCombinedClassifier(
model_dir=FLAGS.model_dir,
linear_feature_columns=wide_columns,
dnn_feature_columns=deep_columns,
dnn_hidden_units=[5],
config=config)
return tf.contrib.learn.Experiment(
estimator=estimator,
train_input_fn=census_data_source.input_train_fn,
eval_input_fn=census_data_source.input_test_fn,
train_steps=FLAGS.train_steps,
eval_steps=FLAGS.eval_steps
)
def main(unused_argv):
print("Worker index: %d" % FLAGS.worker_index)
learn_runner.run(experiment_fn=_create_experiment_fn,
output_dir=FLAGS.output_dir,
schedule=FLAGS.schedule)
if __name__ == "__main__":
tf.app.run()
| apache-2.0 |
abhishekkrthakur/scikit-learn | benchmarks/bench_20newsgroups.py | 377 | 3555 | from __future__ import print_function, division
from time import time
import argparse
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.metrics import accuracy_score
from sklearn.utils.validation import check_array
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
ESTIMATORS = {
"dummy": DummyClassifier(),
"random_forest": RandomForestClassifier(n_estimators=100,
max_features="sqrt",
min_samples_split=10),
"extra_trees": ExtraTreesClassifier(n_estimators=100,
max_features="sqrt",
min_samples_split=10),
"logistic_regression": LogisticRegression(),
"naive_bayes": MultinomialNB(),
"adaboost": AdaBoostClassifier(n_estimators=10),
}
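# Illustrative usage sketch -- not part of the original benchmark: select
# estimators on the command line, e.g.
#   python bench_20newsgroups.py -e logistic_regression naive_bayes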
###############################################################################
# Data
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-e', '--estimators', nargs="+", required=True,
choices=ESTIMATORS)
args = vars(parser.parse_args())
data_train = fetch_20newsgroups_vectorized(subset="train")
data_test = fetch_20newsgroups_vectorized(subset="test")
X_train = check_array(data_train.data, dtype=np.float32,
accept_sparse="csc")
X_test = check_array(data_test.data, dtype=np.float32, accept_sparse="csr")
y_train = data_train.target
y_test = data_test.target
print("20 newsgroups")
print("=============")
print("X_train.shape = {0}".format(X_train.shape))
print("X_train.format = {0}".format(X_train.format))
print("X_train.dtype = {0}".format(X_train.dtype))
print("X_train density = {0}"
"".format(X_train.nnz / np.product(X_train.shape)))
print("y_train {0}".format(y_train.shape))
print("X_test {0}".format(X_test.shape))
print("X_test.format = {0}".format(X_test.format))
print("X_test.dtype = {0}".format(X_test.dtype))
print("y_test {0}".format(y_test.shape))
print()
print("Classifier Training")
print("===================")
accuracy, train_time, test_time = {}, {}, {}
for name in sorted(args["estimators"]):
clf = ESTIMATORS[name]
try:
clf.set_params(random_state=0)
except (TypeError, ValueError):
pass
print("Training %s ... " % name, end="")
t0 = time()
clf.fit(X_train, y_train)
train_time[name] = time() - t0
t0 = time()
y_pred = clf.predict(X_test)
test_time[name] = time() - t0
accuracy[name] = accuracy_score(y_test, y_pred)
print("done")
print()
print("Classification performance:")
print("===========================")
print()
print("%s %s %s %s" % ("Classifier ", "train-time", "test-time",
"Accuracy"))
print("-" * 44)
for name in sorted(accuracy, key=accuracy.get):
print("%s %s %s %s" % (name.ljust(16),
("%.4fs" % train_time[name]).center(10),
("%.4fs" % test_time[name]).center(10),
("%.4f" % accuracy[name]).center(10)))
print()
| bsd-3-clause |
r-mart/scikit-learn | examples/tree/plot_tree_regression.py | 206 | 1476 | """
===================================================================
Decision Tree Regression
===================================================================
A 1D regression with decision tree.
A :ref:`decision tree <tree>` is
used to fit a sine curve with additional noisy observations. As a result, it
learns local linear regressions approximating the sine curve.
We can see that if the maximum depth of the tree (controlled by the
`max_depth` parameter) is set too high, the decision tree learns overly fine
details of the training data and fits the noise, i.e. it overfits.
"""
print(__doc__)
# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, c="k", label="data")
plt.plot(X_test, y_1, c="g", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, c="r", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
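# Illustrative sketch -- not part of the original example: a new point can be
# scored with the fitted trees in the same way, e.g.
#   regr_2.predict(np.array([[2.5]]))  # single-element prediction array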
| bsd-3-clause |
drewokane/seaborn | seaborn/tests/test_utils.py | 11 | 11338 | """Tests for plotting utilities."""
import warnings
import tempfile
import shutil
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nose
import nose.tools as nt
from nose.tools import assert_equal, raises
import numpy.testing as npt
import pandas.util.testing as pdt
from distutils.version import LooseVersion
pandas_has_categoricals = LooseVersion(pd.__version__) >= "0.15"
from pandas.util.testing import network
try:
from bs4 import BeautifulSoup
except ImportError:
BeautifulSoup = None
from . import PlotTestCase
from .. import utils, rcmod
from ..utils import get_dataset_names, load_dataset
a_norm = np.random.randn(100)
def test_pmf_hist_basics():
"""Test the function to return barplot args for pmf hist."""
out = utils.pmf_hist(a_norm)
assert_equal(len(out), 3)
x, h, w = out
assert_equal(len(x), len(h))
# Test simple case
a = np.arange(10)
x, h, w = utils.pmf_hist(a, 10)
nose.tools.assert_true(np.all(h == h[0]))
def test_pmf_hist_widths():
"""Test histogram width is correct."""
x, h, w = utils.pmf_hist(a_norm)
assert_equal(x[1] - x[0], w)
def test_pmf_hist_normalization():
"""Test that output data behaves like a PMF."""
x, h, w = utils.pmf_hist(a_norm)
nose.tools.assert_almost_equal(sum(h), 1)
nose.tools.assert_less_equal(h.max(), 1)
def test_pmf_hist_bins():
"""Test bin specification."""
x, h, w = utils.pmf_hist(a_norm, 20)
assert_equal(len(x), 20)
def test_ci_to_errsize():
"""Test behavior of ci_to_errsize."""
cis = [[.5, .5],
[1.25, 1.5]]
heights = [1, 1.5]
actual_errsize = np.array([[.5, 1],
[.25, 0]])
test_errsize = utils.ci_to_errsize(cis, heights)
npt.assert_array_equal(actual_errsize, test_errsize)
def test_desaturate():
"""Test color desaturation."""
out1 = utils.desaturate("red", .5)
assert_equal(out1, (.75, .25, .25))
out2 = utils.desaturate("#00FF00", .5)
assert_equal(out2, (.25, .75, .25))
out3 = utils.desaturate((0, 0, 1), .5)
assert_equal(out3, (.25, .25, .75))
out4 = utils.desaturate("red", .5)
assert_equal(out4, (.75, .25, .25))
@raises(ValueError)
def test_desaturation_prop():
"""Test that pct outside of [0, 1] raises exception."""
utils.desaturate("blue", 50)
def test_saturate():
"""Test performance of saturation function."""
out = utils.saturate((.75, .25, .25))
assert_equal(out, (1, 0, 0))
def test_iqr():
"""Test the IQR function."""
a = np.arange(5)
iqr = utils.iqr(a)
assert_equal(iqr, 2)
class TestSpineUtils(PlotTestCase):
sides = ["left", "right", "bottom", "top"]
outer_sides = ["top", "right"]
inner_sides = ["left", "bottom"]
offset = 10
original_position = ("outward", 0)
offset_position = ("outward", offset)
def test_despine(self):
f, ax = plt.subplots()
for side in self.sides:
nt.assert_true(ax.spines[side].get_visible())
utils.despine()
for side in self.outer_sides:
nt.assert_true(~ax.spines[side].get_visible())
for side in self.inner_sides:
nt.assert_true(ax.spines[side].get_visible())
utils.despine(**dict(zip(self.sides, [True] * 4)))
for side in self.sides:
nt.assert_true(~ax.spines[side].get_visible())
def test_despine_specific_axes(self):
f, (ax1, ax2) = plt.subplots(2, 1)
utils.despine(ax=ax2)
for side in self.sides:
nt.assert_true(ax1.spines[side].get_visible())
for side in self.outer_sides:
nt.assert_true(~ax2.spines[side].get_visible())
for side in self.inner_sides:
nt.assert_true(ax2.spines[side].get_visible())
def test_despine_with_offset(self):
f, ax = plt.subplots()
for side in self.sides:
nt.assert_equal(ax.spines[side].get_position(),
self.original_position)
utils.despine(ax=ax, offset=self.offset)
for side in self.sides:
is_visible = ax.spines[side].get_visible()
new_position = ax.spines[side].get_position()
if is_visible:
nt.assert_equal(new_position, self.offset_position)
else:
nt.assert_equal(new_position, self.original_position)
def test_despine_with_offset_specific_axes(self):
f, (ax1, ax2) = plt.subplots(2, 1)
utils.despine(offset=self.offset, ax=ax2)
for side in self.sides:
nt.assert_equal(ax1.spines[side].get_position(),
self.original_position)
if ax2.spines[side].get_visible():
nt.assert_equal(ax2.spines[side].get_position(),
self.offset_position)
else:
nt.assert_equal(ax2.spines[side].get_position(),
self.original_position)
def test_despine_trim_spines(self):
f, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 2, 3])
ax.set_xlim(.75, 3.25)
utils.despine(trim=True)
for side in self.inner_sides:
bounds = ax.spines[side].get_bounds()
nt.assert_equal(bounds, (1, 3))
def test_despine_trim_inverted(self):
f, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 2, 3])
ax.set_ylim(.85, 3.15)
ax.invert_yaxis()
utils.despine(trim=True)
for side in self.inner_sides:
bounds = ax.spines[side].get_bounds()
nt.assert_equal(bounds, (1, 3))
def test_despine_trim_noticks(self):
f, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 2, 3])
ax.set_yticks([])
utils.despine(trim=True)
nt.assert_equal(ax.get_yticks().size, 0)
def test_offset_spines_warns(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", category=UserWarning)
f, ax = plt.subplots()
utils.offset_spines(offset=self.offset)
nt.assert_true('deprecated' in str(w[0].message))
nt.assert_true(issubclass(w[0].category, UserWarning))
def test_offset_spines(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", category=UserWarning)
f, ax = plt.subplots()
for side in self.sides:
nt.assert_equal(ax.spines[side].get_position(),
self.original_position)
utils.offset_spines(offset=self.offset)
for side in self.sides:
nt.assert_equal(ax.spines[side].get_position(),
self.offset_position)
def test_offset_spines_specific_axes(self):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", category=UserWarning)
f, (ax1, ax2) = plt.subplots(2, 1)
utils.offset_spines(offset=self.offset, ax=ax2)
for side in self.sides:
nt.assert_equal(ax1.spines[side].get_position(),
self.original_position)
nt.assert_equal(ax2.spines[side].get_position(),
self.offset_position)
def test_ticklabels_overlap():
rcmod.set()
f, ax = plt.subplots(figsize=(2, 2))
f.tight_layout() # This gets the Agg renderer working
assert not utils.axis_ticklabels_overlap(ax.get_xticklabels())
big_strings = "abcdefgh", "ijklmnop"
ax.set_xlim(-.5, 1.5)
ax.set_xticks([0, 1])
ax.set_xticklabels(big_strings)
assert utils.axis_ticklabels_overlap(ax.get_xticklabels())
x, y = utils.axes_ticklabels_overlap(ax)
assert x
assert not y
def test_categorical_order():
x = ["a", "c", "c", "b", "a", "d"]
y = [3, 2, 5, 1, 4]
order = ["a", "b", "c", "d"]
out = utils.categorical_order(x)
nt.assert_equal(out, ["a", "c", "b", "d"])
out = utils.categorical_order(x, order)
nt.assert_equal(out, order)
out = utils.categorical_order(x, ["b", "a"])
nt.assert_equal(out, ["b", "a"])
out = utils.categorical_order(np.array(x))
nt.assert_equal(out, ["a", "c", "b", "d"])
out = utils.categorical_order(pd.Series(x))
nt.assert_equal(out, ["a", "c", "b", "d"])
out = utils.categorical_order(y)
nt.assert_equal(out, [1, 2, 3, 4, 5])
out = utils.categorical_order(np.array(y))
nt.assert_equal(out, [1, 2, 3, 4, 5])
out = utils.categorical_order(pd.Series(y))
nt.assert_equal(out, [1, 2, 3, 4, 5])
if pandas_has_categoricals:
x = pd.Categorical(x, order)
out = utils.categorical_order(x)
nt.assert_equal(out, list(x.categories))
x = pd.Series(x)
out = utils.categorical_order(x)
nt.assert_equal(out, list(x.cat.categories))
out = utils.categorical_order(x, ["b", "a"])
nt.assert_equal(out, ["b", "a"])
x = ["a", np.nan, "c", "c", "b", "a", "d"]
out = utils.categorical_order(x)
nt.assert_equal(out, ["a", "c", "b", "d"])
if LooseVersion(pd.__version__) >= "0.15":
def check_load_dataset(name):
ds = load_dataset(name, cache=False)
assert(isinstance(ds, pd.DataFrame))
def check_load_cached_dataset(name):
# Test the cacheing using a temporary file.
# With Python 3.2+, we could use the tempfile.TemporaryDirectory()
# context manager instead of this try...finally statement
tmpdir = tempfile.mkdtemp()
try:
# download and cache
ds = load_dataset(name, cache=True, data_home=tmpdir)
# use cached version
ds2 = load_dataset(name, cache=True, data_home=tmpdir)
pdt.assert_frame_equal(ds, ds2)
finally:
shutil.rmtree(tmpdir)
@network(url="https://github.com/mwaskom/seaborn-data")
def test_get_dataset_names():
if not BeautifulSoup:
raise nose.SkipTest("No BeautifulSoup available for parsing html")
names = get_dataset_names()
assert(len(names) > 0)
assert(u"titanic" in names)
@network(url="https://github.com/mwaskom/seaborn-data")
def test_load_datasets():
if not BeautifulSoup:
raise nose.SkipTest("No BeautifulSoup available for parsing html")
# Heavy test to verify that we can load all available datasets
for name in get_dataset_names():
# unfortunately @network somehow obscures this generator so it
# does not get in effect, so we need to call explicitly
# yield check_load_dataset, name
check_load_dataset(name)
@network(url="https://github.com/mwaskom/seaborn-data")
def test_load_cached_datasets():
if not BeautifulSoup:
raise nose.SkipTest("No BeautifulSoup available for parsing html")
# Heavy test to verify that we can load all available datasets
for name in get_dataset_names():
# unfortunately @network somehow obscures this generator so it
# does not get in effect, so we need to call explicitly
# yield check_load_dataset, name
check_load_cached_dataset(name)
| bsd-3-clause |
janelia-idf/hybridizer | tests/adc_to_volume.py | 4 | 5112 | # -*- coding: utf-8 -*-
from __future__ import print_function, division
import matplotlib.pyplot as plot
import numpy
from numpy.polynomial.polynomial import polyfit, polyadd, Polynomial
import yaml
INCHES_PER_ML = 0.078
VOLTS_PER_ADC_UNIT = 0.0049
def load_numpy_data(path):
with open(path,'r') as fid:
header = fid.readline().rstrip().split(',')
dt = numpy.dtype({'names':header,'formats':['S25']*len(header)})
numpy_data = numpy.loadtxt(path,dtype=dt,delimiter=",",skiprows=1)
return numpy_data
# -----------------------------------------------------------------------------------------
if __name__ == '__main__':
# Load VA data
data_file = 'hall_effect_data_va.csv'
hall_effect_data_va = load_numpy_data(data_file)
distances_va = numpy.float64(hall_effect_data_va['distance'])
A1_VA = numpy.float64(hall_effect_data_va['A1'])
A9_VA = numpy.float64(hall_effect_data_va['A9'])
A4_VA = numpy.float64(hall_effect_data_va['A4'])
A12_VA = numpy.float64(hall_effect_data_va['A12'])
A2_VA = numpy.float64(hall_effect_data_va['A2'])
A10_VA = numpy.float64(hall_effect_data_va['A10'])
A5_VA = numpy.float64(hall_effect_data_va['A5'])
A13_VA = numpy.float64(hall_effect_data_va['A13'])
# Massage VA data
volumes_va = distances_va/INCHES_PER_ML
A1_VA = numpy.reshape(A1_VA,(-1,1))
A9_VA = numpy.reshape(A9_VA,(-1,1))
A4_VA = numpy.reshape(A4_VA,(-1,1))
A12_VA = numpy.reshape(A12_VA,(-1,1))
A2_VA = numpy.reshape(A2_VA,(-1,1))
A10_VA = numpy.reshape(A10_VA,(-1,1))
A5_VA = numpy.reshape(A5_VA,(-1,1))
A13_VA = numpy.reshape(A13_VA,(-1,1))
data_va = numpy.hstack((A1_VA,A9_VA,A4_VA,A12_VA,A2_VA,A10_VA,A5_VA,A13_VA))
data_va = data_va/VOLTS_PER_ADC_UNIT
# Load OA data
data_file = 'hall_effect_data_oa.csv'
hall_effect_data_oa = load_numpy_data(data_file)
distances_oa = numpy.float64(hall_effect_data_oa['distance'])
A9_OA = numpy.float64(hall_effect_data_oa['A9'])
A10_OA = numpy.float64(hall_effect_data_oa['A10'])
A11_OA = numpy.float64(hall_effect_data_oa['A11'])
A12_OA = numpy.float64(hall_effect_data_oa['A12'])
# Massage OA data
volumes_oa = distances_oa/INCHES_PER_ML
A9_OA = numpy.reshape(A9_OA,(-1,1))
A10_OA = numpy.reshape(A10_OA,(-1,1))
A11_OA = numpy.reshape(A11_OA,(-1,1))
A12_OA = numpy.reshape(A12_OA,(-1,1))
data_oa = numpy.hstack((A9_OA,A10_OA,A11_OA,A12_OA))
data_oa = data_oa/VOLTS_PER_ADC_UNIT
# Create figure
fig = plot.figure()
fig.suptitle('hall effect sensors',fontsize=14,fontweight='bold')
fig.subplots_adjust(top=0.85)
colors = ['b','g','r','c','m','y','k','b']
markers = ['o','o','o','o','o','o','o','^']
# Axis 1
ax1 = fig.add_subplot(121)
for column_index in range(0,data_va.shape[1]):
color = colors[column_index]
marker = markers[column_index]
ax1.plot(data_va[:,column_index],volumes_va,marker=marker,linestyle='--',color=color)
# for column_index in range(0,data_oa.shape[1]):
# color = colors[column_index]
# marker = markers[column_index]
# ax1.plot(data_oa[:,column_index],volumes_oa,marker=marker,linestyle='--',color=color)
ax1.set_xlabel('mean signals (ADC units)')
ax1.set_ylabel('volume (ml)')
ax1.grid(True)
# Axis 2
for column_index in range(0,data_va.shape[1]):
data_va[:,column_index] -= data_va[:,column_index].min()
MAX_VA = 120
data_va = data_va[numpy.all(data_va<MAX_VA,axis=1)]
length = data_va.shape[0]
volumes_va = volumes_va[-length:]
# for column_index in range(0,data_oa.shape[1]):
# data_oa[:,column_index] -= data_oa[:,column_index].max()
ax2 = fig.add_subplot(122)
for column_index in range(0,data_va.shape[1]):
color = colors[column_index]
marker = markers[column_index]
ax2.plot(data_va[:,column_index],volumes_va,marker=marker,linestyle='--',color=color)
# for column_index in range(0,data_oa.shape[1]):
# color = colors[column_index]
# marker = markers[column_index]
# ax2.plot(data_oa[:,column_index],volumes_oa,marker=marker,linestyle='--',color=color)
ax2.set_xlabel('offset mean signals (ADC units)')
ax2.set_ylabel('volume (ml)')
ax2.grid(True)
order = 3
sum_va = None
for column_index in range(0,data_va.shape[1]):
coefficients_va = polyfit(data_va[:,column_index],volumes_va,order)
if sum_va is None:
sum_va = coefficients_va
else:
sum_va = polyadd(sum_va,coefficients_va)
average_va = sum_va/data_va.shape[1]
with open('adc_to_volume_va.yaml', 'w') as f:
yaml.dump(average_va, f, default_flow_style=False)
round_digits = 8
average_va = [round(i,round_digits) for i in average_va]
poly = Polynomial(average_va)
ys_va = poly(data_va[:,-1])
ax2.plot(data_va[:,-1],ys_va,'r',linewidth=3)
ax2.text(5,7.5,r'$v = c_0 + c_1s + c_2s^2 + c_3s^3$',fontsize=20)
ax2.text(5,6.5,str(average_va),fontsize=18,color='r')
plot.show()
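    # Hedged usage sketch (not part of the original script): with the averaged,
    # rounded coefficients above, a single offset ADC reading could be converted
    # to a volume estimate (ml) via the same cubic, e.g.
    #
    #   adc_reading = 60.0                      # hypothetical value, ADC units
    #   volume_ml = Polynomial(average_va)(adc_reading)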
| bsd-3-clause |
hmendozap/master-arbeit-projects | autosk_dev_test/component/RegDeepNet.py | 1 | 19049 | import numpy as np
import scipy.sparse as sp
from HPOlibConfigSpace.configuration_space import ConfigurationSpace
from HPOlibConfigSpace.conditions import EqualsCondition, InCondition
from HPOlibConfigSpace.hyperparameters import UniformFloatHyperparameter, \
UniformIntegerHyperparameter, CategoricalHyperparameter, Constant
from autosklearn.pipeline.components.base import AutoSklearnRegressionAlgorithm
from autosklearn.pipeline.constants import *
class RegDeepNet(AutoSklearnRegressionAlgorithm):
def __init__(self, number_epochs, batch_size, num_layers,
dropout_output, learning_rate, solver,
lambda2, random_state=None,
**kwargs):
self.number_epochs = number_epochs
self.batch_size = batch_size
self.num_layers = ord(num_layers) - ord('a')
self.dropout_output = dropout_output
self.learning_rate = learning_rate
self.lambda2 = lambda2
self.solver = solver
        # The remaining options are taken from **kwargs, because the explicitly
        # assigned arguments above are the minimum parameters needed to run
        # the iterative net.
self.lr_policy = kwargs.get("lr_policy", "fixed")
self.momentum = kwargs.get("momentum", 0.99)
self.beta1 = 1 - kwargs.get("beta1", 0.1)
self.beta2 = 1 - kwargs.get("beta2", 0.01)
self.rho = kwargs.get("rho", 0.95)
self.gamma = kwargs.get("gamma", 0.01)
self.power = kwargs.get("power", 1.0)
self.epoch_step = kwargs.get("epoch_step", 1)
# Empty features and shape
self.n_features = None
self.input_shape = None
self.m_issparse = False
self.m_isbinary = False
self.m_ismultilabel = False
self.m_isregression = True
# TODO: Should one add a try-except here?
self.num_units_per_layer = []
self.dropout_per_layer = []
self.activation_per_layer = []
self.weight_init_layer = []
self.std_per_layer = []
self.leakiness_per_layer = []
self.tanh_alpha_per_layer = []
self.tanh_beta_per_layer = []
for i in range(1, self.num_layers):
self.num_units_per_layer.append(int(kwargs.get("num_units_layer_" + str(i), 128)))
self.dropout_per_layer.append(float(kwargs.get("dropout_layer_" + str(i), 0.5)))
self.activation_per_layer.append(kwargs.get("activation_layer_" + str(i), 'relu'))
self.weight_init_layer.append(kwargs.get("weight_init_" + str(i), 'he_normal'))
self.std_per_layer.append(float(kwargs.get("std_layer_" + str(i), 0.005)))
self.leakiness_per_layer.append(float(kwargs.get("leakiness_layer_" + str(i), 1. / 3.)))
self.tanh_alpha_per_layer.append(float(kwargs.get("tanh_alpha_layer_" + str(i), 2. / 3.)))
self.tanh_beta_per_layer.append(float(kwargs.get("tanh_beta_layer_" + str(i), 1.7159)))
self.estimator = None
self.random_state = random_state
def _prefit(self, X, y):
self.batch_size = int(self.batch_size)
self.n_features = X.shape[1]
self.input_shape = (self.batch_size, self.n_features)
assert len(self.num_units_per_layer) == self.num_layers - 1,\
"Number of created layers is different than actual layers"
assert len(self.dropout_per_layer) == self.num_layers - 1,\
"Number of created layers is different than actual layers"
self.num_output_units = 1 # Regression
# Normalize the output
self.mean_y = np.mean(y)
self.std_y = np.std(y)
y = (y - self.mean_y) / self.std_y
if len(y.shape) == 1:
y = y[:, np.newaxis]
self.m_issparse = sp.issparse(X)
return X, y
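    # Note (added): the target is standardised in _prefit (zero mean, unit
    # variance); predict() below undoes this via preds * self.std_y + self.mean_y.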
def fit(self, X, y):
Xf, yf = self._prefit(X, y)
from implementation import FeedForwardNet
self.estimator = FeedForwardNet.FeedForwardNet(batch_size=self.batch_size,
input_shape=self.input_shape,
num_layers=self.num_layers,
num_units_per_layer=self.num_units_per_layer,
dropout_per_layer=self.dropout_per_layer,
activation_per_layer=self.activation_per_layer,
weight_init_per_layer=self.weight_init_layer,
std_per_layer=self.std_per_layer,
leakiness_per_layer=self.leakiness_per_layer,
tanh_alpha_per_layer=self.tanh_alpha_per_layer,
tanh_beta_per_layer=self.tanh_beta_per_layer,
num_output_units=self.num_output_units,
dropout_output=self.dropout_output,
learning_rate=self.learning_rate,
lr_policy=self.lr_policy,
lambda2=self.lambda2,
momentum=self.momentum,
beta1=self.beta1,
beta2=self.beta2,
rho=self.rho,
solver=self.solver,
num_epochs=self.number_epochs,
gamma=self.gamma,
power=self.power,
epoch_step=self.epoch_step,
is_sparse=self.m_issparse,
is_binary=self.m_isbinary,
is_multilabel=self.m_ismultilabel,
is_regression=self.m_isregression,
random_state=self.random_state)
self.estimator.fit(Xf, yf)
return self
def predict(self, X):
if self.estimator is None:
raise NotImplementedError
preds = self.estimator.predict(X, self.m_issparse)
return preds * self.std_y + self.mean_y
def predict_proba(self, X):
if self.estimator is None:
raise NotImplementedError()
return self.estimator.predict_proba(X, self.m_issparse)
@staticmethod
def get_properties(dataset_properties=None):
return {'shortname': 'reg_feed_nn',
'name': 'Regression Feed Forward Neural Network',
'handles_regression': True,
'handles_classification': False,
'handles_multiclass': False,
'handles_multilabel': False,
'is_deterministic': True,
'input': (DENSE, SPARSE, UNSIGNED_DATA),
'output': (PREDICTIONS,)}
@staticmethod
def get_hyperparameter_search_space(dataset_properties=None):
max_num_layers = 7 # Maximum number of layers coded
# Hacky way to condition layers params based on the number of layers
        # 'c'=1, 'd'=2, 'e'=3, 'f'=4, 'g'=5, 'h'=6 + output_layer
layer_choices = [chr(i) for i in range(ord('c'), ord('b') + max_num_layers)]
batch_size = UniformIntegerHyperparameter("batch_size",
32, 4096,
log=True,
default=32)
number_epochs = UniformIntegerHyperparameter("number_epochs",
2, 80,
default=5)
num_layers = CategoricalHyperparameter("num_layers",
choices=layer_choices,
default='c')
lr = UniformFloatHyperparameter("learning_rate", 1e-6, 1.0,
log=True,
default=0.01)
l2 = UniformFloatHyperparameter("lambda2", 1e-7, 1e-2,
log=True,
default=1e-4)
dropout_output = UniformFloatHyperparameter("dropout_output",
0.0, 0.99,
default=0.5)
# Define basic hyperparameters and define the config space
# basic means that are independent from the number of layers
cs = ConfigurationSpace()
cs.add_hyperparameter(number_epochs)
cs.add_hyperparameter(batch_size)
cs.add_hyperparameter(num_layers)
cs.add_hyperparameter(lr)
cs.add_hyperparameter(l2)
cs.add_hyperparameter(dropout_output)
# Define parameters with different child parameters and conditions
solver_choices = ["adam", "adadelta", "adagrad",
"sgd", "momentum", "nesterov",
"smorm3s"]
solver = CategoricalHyperparameter(name="solver",
choices=solver_choices,
default="smorm3s")
beta1 = UniformFloatHyperparameter("beta1", 1e-4, 0.1,
log=True,
default=0.1)
beta2 = UniformFloatHyperparameter("beta2", 1e-4, 0.1,
log=True,
default=0.01)
rho = UniformFloatHyperparameter("rho", 0.05, 0.99,
log=True,
default=0.95)
momentum = UniformFloatHyperparameter("momentum", 0.3, 0.999,
default=0.9)
# TODO: Add policy based on this sklearn sgd
policy_choices = ['fixed', 'inv', 'exp', 'step']
lr_policy = CategoricalHyperparameter(name="lr_policy",
choices=policy_choices,
default='fixed')
gamma = UniformFloatHyperparameter(name="gamma",
lower=1e-3, upper=1e-1,
default=1e-2)
power = UniformFloatHyperparameter("power",
0.0, 1.0,
default=0.5)
epoch_step = UniformIntegerHyperparameter("epoch_step",
2, 20,
default=5)
cs.add_hyperparameter(solver)
cs.add_hyperparameter(beta1)
cs.add_hyperparameter(beta2)
cs.add_hyperparameter(momentum)
cs.add_hyperparameter(rho)
cs.add_hyperparameter(lr_policy)
cs.add_hyperparameter(gamma)
cs.add_hyperparameter(power)
cs.add_hyperparameter(epoch_step)
# Define parameters that are needed it for each layer
output_activation_choices = ['softmax', 'sigmoid', 'softplus', 'tanh']
activations_choices = ['sigmoid', 'tanh', 'scaledTanh', 'elu', 'relu', 'leaky', 'linear']
weight_choices = ['constant', 'normal', 'uniform',
'glorot_normal', 'glorot_uniform',
'he_normal', 'he_uniform',
'ortogonal', 'sparse']
# Iterate over parameters that are used in each layer
for i in range(1, max_num_layers):
layer_units = UniformIntegerHyperparameter("num_units_layer_" + str(i),
64, 4096,
log=True,
default=128)
cs.add_hyperparameter(layer_units)
layer_dropout = UniformFloatHyperparameter("dropout_layer_" + str(i),
0.0, 0.99,
default=0.5)
cs.add_hyperparameter(layer_dropout)
weight_initialization = CategoricalHyperparameter('weight_init_' + str(i),
choices=weight_choices,
default='he_normal')
cs.add_hyperparameter(weight_initialization)
layer_std = UniformFloatHyperparameter("std_layer_" + str(i),
1e-6, 0.1,
log=True,
default=0.005)
cs.add_hyperparameter(layer_std)
layer_activation = CategoricalHyperparameter("activation_layer_" + str(i),
choices=activations_choices,
default="relu")
cs.add_hyperparameter(layer_activation)
layer_leakiness = UniformFloatHyperparameter('leakiness_layer_' + str(i),
0.01, 0.99,
default=0.3)
cs.add_hyperparameter(layer_leakiness)
layer_tanh_alpha = UniformFloatHyperparameter('tanh_alpha_layer_' + str(i),
0.5, 1.0,
default=2. / 3.)
cs.add_hyperparameter(layer_tanh_alpha)
layer_tanh_beta = UniformFloatHyperparameter('tanh_beta_layer_' + str(i),
1.1, 3.0,
log=True,
default=1.7159)
cs.add_hyperparameter(layer_tanh_beta)
# TODO: Could be in a function in a new module
for i in range(2, max_num_layers):
# Condition layers parameter on layer choice
layer_unit_param = cs.get_hyperparameter("num_units_layer_" + str(i))
layer_cond = InCondition(child=layer_unit_param, parent=num_layers,
values=[l for l in layer_choices[i - 1:]])
cs.add_condition(layer_cond)
# Condition dropout parameter on layer choice
layer_dropout_param = cs.get_hyperparameter("dropout_layer_" + str(i))
layer_cond = InCondition(child=layer_dropout_param, parent=num_layers,
values=[l for l in layer_choices[i - 1:]])
cs.add_condition(layer_cond)
# Condition weight initialization on layer choice
layer_weight_param = cs.get_hyperparameter("weight_init_" + str(i))
layer_cond = InCondition(child=layer_weight_param, parent=num_layers,
values=[l for l in layer_choices[i - 1:]])
cs.add_condition(layer_cond)
# Condition std parameter on weight layer initialization choice
layer_std_param = cs.get_hyperparameter("std_layer_" + str(i))
weight_cond = EqualsCondition(child=layer_std_param,
parent=layer_weight_param,
value='normal')
cs.add_condition(weight_cond)
# Condition activation parameter on layer choice
layer_activation_param = cs.get_hyperparameter("activation_layer_" + str(i))
layer_cond = InCondition(child=layer_activation_param, parent=num_layers,
values=[l for l in layer_choices[i - 1:]])
cs.add_condition(layer_cond)
# Condition leakiness on activation choice
layer_leakiness_param = cs.get_hyperparameter("leakiness_layer_" + str(i))
activation_cond = EqualsCondition(child=layer_leakiness_param,
parent=layer_activation_param,
value='leaky')
cs.add_condition(activation_cond)
# Condition tanh on activation choice
layer_tanh_alpha_param = cs.get_hyperparameter("tanh_alpha_layer_" + str(i))
activation_cond = EqualsCondition(child=layer_tanh_alpha_param,
parent=layer_activation_param,
value='scaledTanh')
cs.add_condition(activation_cond)
layer_tanh_beta_param = cs.get_hyperparameter("tanh_beta_layer_" + str(i))
activation_cond = EqualsCondition(child=layer_tanh_beta_param,
parent=layer_activation_param,
value='scaledTanh')
cs.add_condition(activation_cond)
# Conditioning on solver
momentum_depends_on_solver = InCondition(momentum, solver,
values=["momentum", "nesterov"])
beta1_depends_on_solver = EqualsCondition(beta1, solver, "adam")
beta2_depends_on_solver = EqualsCondition(beta2, solver, "adam")
rho_depends_on_solver = EqualsCondition(rho, solver, "adadelta")
cs.add_condition(momentum_depends_on_solver)
cs.add_condition(beta1_depends_on_solver)
cs.add_condition(beta2_depends_on_solver)
cs.add_condition(rho_depends_on_solver)
# Conditioning on learning rate policy
lr_policy_depends_on_solver = InCondition(lr_policy, solver,
["adadelta", "adagrad", "sgd",
"momentum", "nesterov"])
gamma_depends_on_policy = InCondition(child=gamma, parent=lr_policy,
values=["inv", "exp", "step"])
power_depends_on_policy = EqualsCondition(power, lr_policy, "inv")
epoch_step_depends_on_policy = EqualsCondition(epoch_step, lr_policy, "step")
cs.add_condition(lr_policy_depends_on_solver)
cs.add_condition(gamma_depends_on_policy)
cs.add_condition(power_depends_on_policy)
cs.add_condition(epoch_step_depends_on_policy)
return cs
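# Hedged usage sketch (not part of the original module): assuming the
# ConfigSpace-style API used above, a configuration could be sampled and passed
# back into the component as keyword arguments, e.g.
#
#   cs = RegDeepNet.get_hyperparameter_search_space()
#   config = cs.sample_configuration()
#   model = RegDeepNet(random_state=1, **config.get_dictionary())
#   model.fit(X_train, y_train)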
| mit |
openai/baselines | baselines/gail/dataset/mujoco_dset.py | 1 | 4448 | '''
Data structure of the input .npz:
the data is saved in python dictionary format with keys: 'acs', 'ep_rets', 'rews', 'obs'
the values of each item is a list storing the expert trajectory sequentially
a transition can be: (data['obs'][t], data['acs'][t], data['obs'][t+1]) and get reward data['rews'][t]
'''
from baselines import logger
import numpy as np
class Dset(object):
def __init__(self, inputs, labels, randomize):
self.inputs = inputs
self.labels = labels
assert len(self.inputs) == len(self.labels)
self.randomize = randomize
self.num_pairs = len(inputs)
self.init_pointer()
def init_pointer(self):
self.pointer = 0
if self.randomize:
idx = np.arange(self.num_pairs)
np.random.shuffle(idx)
self.inputs = self.inputs[idx, :]
self.labels = self.labels[idx, :]
def get_next_batch(self, batch_size):
# if batch_size is negative -> return all
if batch_size < 0:
return self.inputs, self.labels
if self.pointer + batch_size >= self.num_pairs:
self.init_pointer()
end = self.pointer + batch_size
inputs = self.inputs[self.pointer:end, :]
labels = self.labels[self.pointer:end, :]
self.pointer = end
return inputs, labels
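# Hedged usage sketch (not part of the original module): a minimal round trip
# through Dset with toy arrays might look like
#
#   toy_obs = np.arange(10, dtype=np.float64).reshape(5, 2)
#   toy_acs = np.arange(5, dtype=np.float64).reshape(5, 1)
#   dset = Dset(toy_obs, toy_acs, randomize=False)
#   obs_batch, acs_batch = dset.get_next_batch(2)  # rows 0 and 1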
class Mujoco_Dset(object):
def __init__(self, expert_path, train_fraction=0.7, traj_limitation=-1, randomize=True):
traj_data = np.load(expert_path)
if traj_limitation < 0:
traj_limitation = len(traj_data['obs'])
obs = traj_data['obs'][:traj_limitation]
acs = traj_data['acs'][:traj_limitation]
# obs, acs: shape (N, L, ) + S where N = # episodes, L = episode length
# and S is the environment observation/action space.
# Flatten to (N * L, prod(S))
if len(obs.shape) > 2:
self.obs = np.reshape(obs, [-1, np.prod(obs.shape[2:])])
self.acs = np.reshape(acs, [-1, np.prod(acs.shape[2:])])
else:
self.obs = np.vstack(obs)
self.acs = np.vstack(acs)
self.rets = traj_data['ep_rets'][:traj_limitation]
self.avg_ret = sum(self.rets)/len(self.rets)
self.std_ret = np.std(np.array(self.rets))
if len(self.acs) > 2:
self.acs = np.squeeze(self.acs)
assert len(self.obs) == len(self.acs)
self.num_traj = min(traj_limitation, len(traj_data['obs']))
self.num_transition = len(self.obs)
self.randomize = randomize
self.dset = Dset(self.obs, self.acs, self.randomize)
# for behavior cloning
self.train_set = Dset(self.obs[:int(self.num_transition*train_fraction), :],
self.acs[:int(self.num_transition*train_fraction), :],
self.randomize)
self.val_set = Dset(self.obs[int(self.num_transition*train_fraction):, :],
self.acs[int(self.num_transition*train_fraction):, :],
self.randomize)
self.log_info()
def log_info(self):
logger.log("Total trajectories: %d" % self.num_traj)
logger.log("Total transitions: %d" % self.num_transition)
logger.log("Average returns: %f" % self.avg_ret)
logger.log("Std for returns: %f" % self.std_ret)
def get_next_batch(self, batch_size, split=None):
if split is None:
return self.dset.get_next_batch(batch_size)
elif split == 'train':
return self.train_set.get_next_batch(batch_size)
elif split == 'val':
return self.val_set.get_next_batch(batch_size)
else:
raise NotImplementedError
def plot(self):
import matplotlib.pyplot as plt
plt.hist(self.rets)
plt.savefig("histogram_rets.png")
plt.close()
def test(expert_path, traj_limitation, plot):
dset = Mujoco_Dset(expert_path, traj_limitation=traj_limitation)
if plot:
dset.plot()
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--expert_path", type=str, default="../data/deterministic.trpo.Hopper.0.00.npz")
parser.add_argument("--traj_limitation", type=int, default=None)
parser.add_argument("--plot", type=bool, default=False)
args = parser.parse_args()
test(args.expert_path, args.traj_limitation, args.plot)
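    # Hedged note (added): an expert file with the layout described in the module
    # docstring could be produced directly with numpy, e.g.
    #
    #   np.savez('expert.npz', obs=obs, acs=acs, rews=rews, ep_rets=ep_rets)
    #
    # where obs/acs/rews hold per-episode trajectories and ep_rets one return per
    # episode (the key names follow the docstring at the top of this file).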
| mit |
murali-munna/scikit-learn | sklearn/learning_curve.py | 110 | 13467 | """Utilities to evaluate models with respect to a variable
"""
# Author: Alexander Fabisch <afabisch@informatik.uni-bremen.de>
#
# License: BSD 3 clause
import warnings
import numpy as np
from .base import is_classifier, clone
from .cross_validation import check_cv
from .externals.joblib import Parallel, delayed
from .cross_validation import _safe_split, _score, _fit_and_score
from .metrics.scorer import check_scoring
from .utils import indexable
from .utils.fixes import astype
__all__ = ['learning_curve', 'validation_curve']
def learning_curve(estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0):
"""Learning curve.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
Note that for classification the number of samples usually have to
be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
Numbers of training examples that has been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<example_model_selection_plot_learning_curve.py>`
"""
if exploit_incremental_learning and not hasattr(estimator, "partial_fit"):
raise ValueError("An estimator must support the partial_fit interface "
"to exploit incremental learning")
X, y = indexable(X, y)
# Make a list since we will be iterating multiple times over the folds
cv = list(check_cv(cv, X, y, classifier=is_classifier(estimator)))
scorer = check_scoring(estimator, scoring=scoring)
# HACK as long as boolean indices are allowed in cv generators
if cv[0][0].dtype == bool:
new_cv = []
for i in range(len(cv)):
new_cv.append((np.nonzero(cv[i][0])[0], np.nonzero(cv[i][1])[0]))
cv = new_cv
n_max_training_samples = len(cv[0][0])
# Because the lengths of folds can be significantly different, it is
# not guaranteed that we use all of the available training data when we
# use the first 'n_max_training_samples' samples.
train_sizes_abs = _translate_train_sizes(train_sizes,
n_max_training_samples)
n_unique_ticks = train_sizes_abs.shape[0]
if verbose > 0:
print("[learning_curve] Training set sizes: " + str(train_sizes_abs))
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
if exploit_incremental_learning:
classes = np.unique(y) if is_classifier(estimator) else None
out = parallel(delayed(_incremental_fit_estimator)(
clone(estimator), X, y, classes, train, test, train_sizes_abs,
scorer, verbose) for train, test in cv)
else:
out = parallel(delayed(_fit_and_score)(
clone(estimator), X, y, scorer, train[:n_train_samples], test,
verbose, parameters=None, fit_params=None, return_train_score=True)
for train, test in cv for n_train_samples in train_sizes_abs)
out = np.array(out)[:, :2]
n_cv_folds = out.shape[0] // n_unique_ticks
out = out.reshape(n_cv_folds, n_unique_ticks, 2)
out = np.asarray(out).transpose((2, 1, 0))
return train_sizes_abs, out[0], out[1]
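# Hedged usage sketch (not part of scikit-learn): a typical call might be
#
#   train_sizes, train_scores, test_scores = learning_curve(
#       estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
#
# where each row of train_scores/test_scores corresponds to one training-set
# size and each column to one cross-validation fold.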
def _translate_train_sizes(train_sizes, n_max_training_samples):
"""Determine absolute sizes of training subsets and validate 'train_sizes'.
Examples:
_translate_train_sizes([0.5, 1.0], 10) -> [5, 10]
_translate_train_sizes([5, 10], 10) -> [5, 10]
Parameters
----------
train_sizes : array-like, shape (n_ticks,), dtype float or int
Numbers of training examples that will be used to generate the
learning curve. If the dtype is float, it is regarded as a
fraction of 'n_max_training_samples', i.e. it has to be within (0, 1].
n_max_training_samples : int
Maximum number of training samples (upper bound of 'train_sizes').
Returns
-------
train_sizes_abs : array, shape (n_unique_ticks,), dtype int
Numbers of training examples that will be used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
"""
train_sizes_abs = np.asarray(train_sizes)
n_ticks = train_sizes_abs.shape[0]
n_min_required_samples = np.min(train_sizes_abs)
n_max_required_samples = np.max(train_sizes_abs)
if np.issubdtype(train_sizes_abs.dtype, np.float):
if n_min_required_samples <= 0.0 or n_max_required_samples > 1.0:
raise ValueError("train_sizes has been interpreted as fractions "
"of the maximum number of training samples and "
"must be within (0, 1], but is within [%f, %f]."
% (n_min_required_samples,
n_max_required_samples))
train_sizes_abs = astype(train_sizes_abs * n_max_training_samples,
dtype=np.int, copy=False)
train_sizes_abs = np.clip(train_sizes_abs, 1,
n_max_training_samples)
else:
if (n_min_required_samples <= 0 or
n_max_required_samples > n_max_training_samples):
raise ValueError("train_sizes has been interpreted as absolute "
"numbers of training samples and must be within "
"(0, %d], but is within [%d, %d]."
% (n_max_training_samples,
n_min_required_samples,
n_max_required_samples))
train_sizes_abs = np.unique(train_sizes_abs)
if n_ticks > train_sizes_abs.shape[0]:
warnings.warn("Removed duplicate entries from 'train_sizes'. Number "
"of ticks will be less than than the size of "
"'train_sizes' %d instead of %d)."
% (train_sizes_abs.shape[0], n_ticks), RuntimeWarning)
return train_sizes_abs
def _incremental_fit_estimator(estimator, X, y, classes, train, test,
train_sizes, scorer, verbose):
"""Train estimator on training subsets incrementally and compute scores."""
train_scores, test_scores = [], []
partitions = zip(train_sizes, np.split(train, train_sizes)[:-1])
for n_train_samples, partial_train in partitions:
train_subset = train[:n_train_samples]
X_train, y_train = _safe_split(estimator, X, y, train_subset)
X_partial_train, y_partial_train = _safe_split(estimator, X, y,
partial_train)
X_test, y_test = _safe_split(estimator, X, y, test, train_subset)
if y_partial_train is None:
estimator.partial_fit(X_partial_train, classes=classes)
else:
estimator.partial_fit(X_partial_train, y_partial_train,
classes=classes)
train_scores.append(_score(estimator, X_train, y_train, scorer))
test_scores.append(_score(estimator, X_test, y_test, scorer))
return np.array((train_scores, test_scores)).T
def validation_curve(estimator, X, y, param_name, param_range, cv=None,
scoring=None, n_jobs=1, pre_dispatch="all", verbose=0):
"""Validation curve.
Determine training and test scores for varying parameter values.
Compute scores for an estimator with different values of a specified
parameter. This is similar to grid search with one parameter. However, this
will also compute training scores and is merely a utility for plotting the
results.
Read more in the :ref:`User Guide <validation_curve>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
param_name : string
Name of the parameter that will be varied.
param_range : array-like, shape (n_values,)
The values of the parameter that will be evaluated.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See
:ref:`examples/model_selection/plot_validation_curve.py
<example_model_selection_plot_validation_curve.py>`
"""
X, y = indexable(X, y)
cv = check_cv(cv, X, y, classifier=is_classifier(estimator))
scorer = check_scoring(estimator, scoring=scoring)
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
out = parallel(delayed(_fit_and_score)(
estimator, X, y, scorer, train, test, verbose,
parameters={param_name: v}, fit_params=None, return_train_score=True)
for train, test in cv for v in param_range)
out = np.asarray(out)[:, :2]
n_params = len(param_range)
n_cv_folds = out.shape[0] // n_params
out = out.reshape(n_cv_folds, n_params, 2).transpose((2, 1, 0))
return out[0], out[1]
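# Hedged usage sketch (not part of scikit-learn): a typical call might be
#
#   train_scores, test_scores = validation_curve(
#       estimator, X, y, param_name="alpha",
#       param_range=np.logspace(-6, 2, 9), cv=5)
#
# returning one row per parameter value and one column per fold.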
| bsd-3-clause |
datapythonista/pandas | pandas/tests/util/test_validate_args_and_kwargs.py | 8 | 2391 | import pytest
from pandas.util._validators import validate_args_and_kwargs
_fname = "func"
def test_invalid_total_length_max_length_one():
compat_args = ("foo",)
kwargs = {"foo": "FOO"}
args = ("FoO", "BaZ")
min_fname_arg_count = 0
max_length = len(compat_args) + min_fname_arg_count
actual_length = len(kwargs) + len(args) + min_fname_arg_count
msg = (
fr"{_fname}\(\) takes at most {max_length} "
fr"argument \({actual_length} given\)"
)
with pytest.raises(TypeError, match=msg):
validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args)
def test_invalid_total_length_max_length_multiple():
compat_args = ("foo", "bar", "baz")
kwargs = {"foo": "FOO", "bar": "BAR"}
args = ("FoO", "BaZ")
min_fname_arg_count = 2
max_length = len(compat_args) + min_fname_arg_count
actual_length = len(kwargs) + len(args) + min_fname_arg_count
msg = (
fr"{_fname}\(\) takes at most {max_length} "
fr"arguments \({actual_length} given\)"
)
with pytest.raises(TypeError, match=msg):
validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args)
@pytest.mark.parametrize("args,kwargs", [((), {"foo": -5, "bar": 2}), ((-5, 2), {})])
def test_missing_args_or_kwargs(args, kwargs):
bad_arg = "bar"
min_fname_arg_count = 2
compat_args = {"foo": -5, bad_arg: 1}
msg = (
fr"the '{bad_arg}' parameter is not supported "
fr"in the pandas implementation of {_fname}\(\)"
)
with pytest.raises(ValueError, match=msg):
validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args)
def test_duplicate_argument():
min_fname_arg_count = 2
compat_args = {"foo": None, "bar": None, "baz": None}
kwargs = {"foo": None, "bar": None}
args = (None,) # duplicate value for "foo"
msg = fr"{_fname}\(\) got multiple values for keyword argument 'foo'"
with pytest.raises(TypeError, match=msg):
validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args)
def test_validation():
# No exceptions should be raised.
compat_args = {"foo": 1, "bar": None, "baz": -2}
kwargs = {"baz": -2}
args = (1, None)
min_fname_arg_count = 2
validate_args_and_kwargs(_fname, args, kwargs, min_fname_arg_count, compat_args)
| bsd-3-clause |
NixaSoftware/CVis | venv/lib/python2.7/site-packages/pandas/tests/test_sorting.py | 4 | 17560 | import pytest
from itertools import product
from collections import defaultdict
import warnings
from datetime import datetime
import numpy as np
from numpy import nan
import pandas as pd
from pandas.core import common as com
from pandas import DataFrame, MultiIndex, merge, concat, Series, compat
from pandas.util import testing as tm
from pandas.util.testing import assert_frame_equal, assert_series_equal
from pandas.core.sorting import (is_int64_overflow_possible,
decons_group_index,
get_group_index,
nargsort,
lexsort_indexer,
safe_sort)
class TestSorting(object):
@pytest.mark.slow
def test_int64_overflow(self):
B = np.concatenate((np.arange(1000), np.arange(1000), np.arange(500)))
A = np.arange(2500)
df = DataFrame({'A': A,
'B': B,
'C': A,
'D': B,
'E': A,
'F': B,
'G': A,
'H': B,
'values': np.random.randn(2500)})
lg = df.groupby(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'])
rg = df.groupby(['H', 'G', 'F', 'E', 'D', 'C', 'B', 'A'])
left = lg.sum()['values']
right = rg.sum()['values']
exp_index, _ = left.index.sortlevel()
tm.assert_index_equal(left.index, exp_index)
exp_index, _ = right.index.sortlevel(0)
tm.assert_index_equal(right.index, exp_index)
tups = list(map(tuple, df[['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'
]].values))
tups = com._asarray_tuplesafe(tups)
expected = df.groupby(tups).sum()['values']
for k, v in compat.iteritems(expected):
assert left[k] == right[k[::-1]]
assert left[k] == v
assert len(left) == len(right)
def test_int64_overflow_moar(self):
# GH9096
values = range(55109)
data = pd.DataFrame.from_dict({'a': values,
'b': values,
'c': values,
'd': values})
grouped = data.groupby(['a', 'b', 'c', 'd'])
assert len(grouped) == len(values)
arr = np.random.randint(-1 << 12, 1 << 12, (1 << 15, 5))
i = np.random.choice(len(arr), len(arr) * 4)
        arr = np.vstack((arr, arr[i]))  # add some duplicate rows
i = np.random.permutation(len(arr))
arr = arr[i] # shuffle rows
df = DataFrame(arr, columns=list('abcde'))
df['jim'], df['joe'] = np.random.randn(2, len(df)) * 10
gr = df.groupby(list('abcde'))
# verify this is testing what it is supposed to test!
assert is_int64_overflow_possible(gr.grouper.shape)
        # manually compute groupings
jim, joe = defaultdict(list), defaultdict(list)
for key, a, b in zip(map(tuple, arr), df['jim'], df['joe']):
jim[key].append(a)
joe[key].append(b)
assert len(gr) == len(jim)
mi = MultiIndex.from_tuples(jim.keys(), names=list('abcde'))
def aggr(func):
f = lambda a: np.fromiter(map(func, a), dtype='f8')
arr = np.vstack((f(jim.values()), f(joe.values()))).T
res = DataFrame(arr, columns=['jim', 'joe'], index=mi)
return res.sort_index()
assert_frame_equal(gr.mean(), aggr(np.mean))
assert_frame_equal(gr.median(), aggr(np.median))
def test_lexsort_indexer(self):
keys = [[nan] * 5 + list(range(100)) + [nan] * 5]
# orders=True, na_position='last'
result = lexsort_indexer(keys, orders=True, na_position='last')
exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.intp))
# orders=True, na_position='first'
result = lexsort_indexer(keys, orders=True, na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.intp))
# orders=False, na_position='last'
result = lexsort_indexer(keys, orders=False, na_position='last')
exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.intp))
# orders=False, na_position='first'
result = lexsort_indexer(keys, orders=False, na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.intp))
def test_nargsort(self):
# np.argsort(items) places NaNs last
items = [nan] * 5 + list(range(100)) + [nan] * 5
# np.argsort(items2) may not place NaNs first
items2 = np.array(items, dtype='O')
try:
# GH 2785; due to a regression in NumPy1.6.2
np.argsort(np.array([[1, 2], [1, 3], [1, 2]], dtype='i'))
np.argsort(items2, kind='mergesort')
except TypeError:
pytest.skip('requested sort not available for type')
# mergesort is the most difficult to get right because we want it to be
# stable.
# According to numpy/core/tests/test_multiarray, """The number of
# sorted items must be greater than ~50 to check the actual algorithm
# because quick and merge sort fall over to insertion sort for small
# arrays."""
# mergesort, ascending=True, na_position='last'
result = nargsort(items, kind='mergesort', ascending=True,
na_position='last')
exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=True, na_position='first'
result = nargsort(items, kind='mergesort', ascending=True,
na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=False, na_position='last'
result = nargsort(items, kind='mergesort', ascending=False,
na_position='last')
exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=False, na_position='first'
result = nargsort(items, kind='mergesort', ascending=False,
na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=True, na_position='last'
result = nargsort(items2, kind='mergesort', ascending=True,
na_position='last')
exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=True, na_position='first'
result = nargsort(items2, kind='mergesort', ascending=True,
na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=False, na_position='last'
result = nargsort(items2, kind='mergesort', ascending=False,
na_position='last')
exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
# mergesort, ascending=False, na_position='first'
result = nargsort(items2, kind='mergesort', ascending=False,
na_position='first')
exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
tm.assert_numpy_array_equal(result, np.array(exp), check_dtype=False)
class TestMerge(object):
@pytest.mark.slow
def test_int64_overflow_issues(self):
# #2690, combinatorial explosion
df1 = DataFrame(np.random.randn(1000, 7),
columns=list('ABCDEF') + ['G1'])
df2 = DataFrame(np.random.randn(1000, 7),
columns=list('ABCDEF') + ['G2'])
# it works!
result = merge(df1, df2, how='outer')
assert len(result) == 2000
low, high, n = -1 << 10, 1 << 10, 1 << 20
left = DataFrame(np.random.randint(low, high, (n, 7)),
columns=list('ABCDEFG'))
left['left'] = left.sum(axis=1)
# one-2-one match
i = np.random.permutation(len(left))
right = left.iloc[i].copy()
right.columns = right.columns[:-1].tolist() + ['right']
right.index = np.arange(len(right))
right['right'] *= -1
out = merge(left, right, how='outer')
assert len(out) == len(left)
assert_series_equal(out['left'], - out['right'], check_names=False)
result = out.iloc[:, :-2].sum(axis=1)
assert_series_equal(out['left'], result, check_names=False)
assert result.name is None
out.sort_values(out.columns.tolist(), inplace=True)
out.index = np.arange(len(out))
for how in ['left', 'right', 'outer', 'inner']:
assert_frame_equal(out, merge(left, right, how=how, sort=True))
# check that left merge w/ sort=False maintains left frame order
out = merge(left, right, how='left', sort=False)
assert_frame_equal(left, out[left.columns.tolist()])
out = merge(right, left, how='left', sort=False)
assert_frame_equal(right, out[right.columns.tolist()])
# one-2-many/none match
n = 1 << 11
left = DataFrame(np.random.randint(low, high, (n, 7)).astype('int64'),
columns=list('ABCDEFG'))
# confirm that this is checking what it is supposed to check
shape = left.apply(Series.nunique).values
assert is_int64_overflow_possible(shape)
# add duplicates to left frame
left = concat([left, left], ignore_index=True)
right = DataFrame(np.random.randint(low, high, (n // 2, 7))
.astype('int64'),
columns=list('ABCDEFG'))
# add duplicates & overlap with left to the right frame
i = np.random.choice(len(left), n)
right = concat([right, right, left.iloc[i]], ignore_index=True)
left['left'] = np.random.randn(len(left))
right['right'] = np.random.randn(len(right))
# shuffle left & right frames
i = np.random.permutation(len(left))
left = left.iloc[i].copy()
left.index = np.arange(len(left))
i = np.random.permutation(len(right))
right = right.iloc[i].copy()
right.index = np.arange(len(right))
# manually compute outer merge
ldict, rdict = defaultdict(list), defaultdict(list)
for idx, row in left.set_index(list('ABCDEFG')).iterrows():
ldict[idx].append(row['left'])
for idx, row in right.set_index(list('ABCDEFG')).iterrows():
rdict[idx].append(row['right'])
vals = []
for k, lval in ldict.items():
rval = rdict.get(k, [np.nan])
for lv, rv in product(lval, rval):
vals.append(k + tuple([lv, rv]))
for k, rval in rdict.items():
if k not in ldict:
for rv in rval:
vals.append(k + tuple([np.nan, rv]))
def align(df):
df = df.sort_values(df.columns.tolist())
df.index = np.arange(len(df))
return df
def verify_order(df):
kcols = list('ABCDEFG')
assert_frame_equal(df[kcols].copy(),
df[kcols].sort_values(kcols, kind='mergesort'))
out = DataFrame(vals, columns=list('ABCDEFG') + ['left', 'right'])
out = align(out)
jmask = {'left': out['left'].notna(),
'right': out['right'].notna(),
'inner': out['left'].notna() & out['right'].notna(),
'outer': np.ones(len(out), dtype='bool')}
for how in 'left', 'right', 'outer', 'inner':
mask = jmask[how]
frame = align(out[mask].copy())
assert mask.all() ^ mask.any() or how == 'outer'
for sort in [False, True]:
res = merge(left, right, how=how, sort=sort)
if sort:
verify_order(res)
# as in GH9092 dtypes break with outer/right join
assert_frame_equal(frame, align(res),
check_dtype=how not in ('right', 'outer'))
def test_decons():
def testit(label_list, shape):
group_index = get_group_index(label_list, shape, sort=True, xnull=True)
label_list2 = decons_group_index(group_index, shape)
for a, b in zip(label_list, label_list2):
assert (np.array_equal(a, b))
shape = (4, 5, 6)
label_list = [np.tile([0, 1, 2, 3, 0, 1, 2, 3], 100), np.tile(
[0, 2, 4, 3, 0, 1, 2, 3], 100), np.tile(
[5, 1, 0, 2, 3, 0, 5, 4], 100)]
testit(label_list, shape)
shape = (10000, 10000)
label_list = [np.tile(np.arange(10000), 5), np.tile(np.arange(10000), 5)]
testit(label_list, shape)
class TestSafeSort(object):
def test_basic_sort(self):
values = [3, 1, 2, 0, 4]
result = safe_sort(values)
expected = np.array([0, 1, 2, 3, 4])
tm.assert_numpy_array_equal(result, expected)
values = list("baaacb")
result = safe_sort(values)
expected = np.array(list("aaabbc"), dtype='object')
tm.assert_numpy_array_equal(result, expected)
values = []
result = safe_sort(values)
expected = np.array([])
tm.assert_numpy_array_equal(result, expected)
def test_labels(self):
values = [3, 1, 2, 0, 4]
expected = np.array([0, 1, 2, 3, 4])
labels = [0, 1, 1, 2, 3, 0, -1, 4]
result, result_labels = safe_sort(values, labels)
expected_labels = np.array([3, 1, 1, 2, 0, 3, -1, 4], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
tm.assert_numpy_array_equal(result_labels, expected_labels)
# na_sentinel
labels = [0, 1, 1, 2, 3, 0, 99, 4]
result, result_labels = safe_sort(values, labels,
na_sentinel=99)
expected_labels = np.array([3, 1, 1, 2, 0, 3, 99, 4], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
tm.assert_numpy_array_equal(result_labels, expected_labels)
# out of bound indices
labels = [0, 101, 102, 2, 3, 0, 99, 4]
result, result_labels = safe_sort(values, labels)
expected_labels = np.array([3, -1, -1, 2, 0, 3, -1, 4], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
tm.assert_numpy_array_equal(result_labels, expected_labels)
labels = []
result, result_labels = safe_sort(values, labels)
expected_labels = np.array([], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
tm.assert_numpy_array_equal(result_labels, expected_labels)
def test_mixed_integer(self):
values = np.array(['b', 1, 0, 'a', 0, 'b'], dtype=object)
result = safe_sort(values)
expected = np.array([0, 0, 1, 'a', 'b', 'b'], dtype=object)
tm.assert_numpy_array_equal(result, expected)
values = np.array(['b', 1, 0, 'a'], dtype=object)
labels = [0, 1, 2, 3, 0, -1, 1]
result, result_labels = safe_sort(values, labels)
expected = np.array([0, 1, 'a', 'b'], dtype=object)
expected_labels = np.array([3, 1, 0, 2, 3, -1, 1], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
tm.assert_numpy_array_equal(result_labels, expected_labels)
def test_mixed_integer_from_list(self):
values = ['b', 1, 0, 'a', 0, 'b']
result = safe_sort(values)
expected = np.array([0, 0, 1, 'a', 'b', 'b'], dtype=object)
tm.assert_numpy_array_equal(result, expected)
def test_unsortable(self):
# GH 13714
arr = np.array([1, 2, datetime.now(), 0, 3], dtype=object)
if compat.PY2 and not pd._np_version_under1p10:
# RuntimeWarning: tp_compare didn't return -1 or -2 for exception
with warnings.catch_warnings():
pytest.raises(TypeError, safe_sort, arr)
else:
pytest.raises(TypeError, safe_sort, arr)
def test_exceptions(self):
with tm.assert_raises_regex(TypeError,
"Only list-like objects are allowed"):
safe_sort(values=1)
with tm.assert_raises_regex(TypeError,
"Only list-like objects or None"):
safe_sort(values=[0, 1, 2], labels=1)
with tm.assert_raises_regex(ValueError,
"values should be unique"):
safe_sort(values=[0, 1, 2, 1], labels=[0, 1])
| apache-2.0 |
JeanKossaifi/scikit-learn | sklearn/utils/tests/test_fixes.py | 281 | 1829 | # Authors: Gael Varoquaux <gael.varoquaux@normalesup.org>
# Justin Vincent
# Lars Buitinck
# License: BSD 3 clause
import numpy as np
from nose.tools import assert_equal
from nose.tools import assert_false
from nose.tools import assert_true
from numpy.testing import (assert_almost_equal,
assert_array_almost_equal)
from sklearn.utils.fixes import divide, expit
from sklearn.utils.fixes import astype
def test_expit():
# Check numerical stability of expit (logistic function).
# Simulate our previous Cython implementation, based on
#http://fa.bianp.net/blog/2013/numerical-optimizers-for-logistic-regression
assert_almost_equal(expit(1000.), 1. / (1. + np.exp(-1000.)), decimal=16)
assert_almost_equal(expit(-1000.), np.exp(-1000.) / (1. + np.exp(-1000.)),
decimal=16)
x = np.arange(10)
out = np.zeros_like(x, dtype=np.float32)
assert_array_almost_equal(expit(x), expit(x, out=out))
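    # Note (added): for the negative argument the expected value is written as
    # np.exp(-1000.) / (1. + np.exp(-1000.)) rather than the algebraically
    # identical 1. / (1. + np.exp(1000.)), because the latter would overflow in
    # the intermediate exp(1000.); the forms used above only ever take exp of a
    # large negative argument, which harmlessly underflows to zero.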
def test_divide():
assert_equal(divide(.6, 1), .600000000000)
def test_astype_copy_memory():
a_int32 = np.ones(3, np.int32)
# Check that dtype conversion works
b_float32 = astype(a_int32, dtype=np.float32, copy=False)
assert_equal(b_float32.dtype, np.float32)
# Changing dtype forces a copy even if copy=False
assert_false(np.may_share_memory(b_float32, a_int32))
# Check that copy can be skipped if requested dtype match
c_int32 = astype(a_int32, dtype=np.int32, copy=False)
assert_true(c_int32 is a_int32)
# Check that copy can be forced, and is the case by default:
d_int32 = astype(a_int32, dtype=np.int32, copy=True)
assert_false(np.may_share_memory(d_int32, a_int32))
e_int32 = astype(a_int32, dtype=np.int32)
assert_false(np.may_share_memory(e_int32, a_int32))
| bsd-3-clause |
rbooth200/DiscEvolution | DiscEvolution/internal_photo.py | 1 | 30463 | # internal_photo.py
#
# Author: A. Sellek
# Date: 12 - Aug - 2020
#
# Implementation of Photoevaporation Models
################################################################################
import numpy as np
import argparse
import json
import matplotlib.pyplot as plt
from DiscEvolution.constants import *
from DiscEvolution.star import PhotoStar
from scipy.signal import argrelmin
class NotHoleError(Exception):
"""Raised if finds an outer edge, not a hole"""
pass
class PhotoBase():
def __init__(self, disc, Regime=None, Type=None):
# Basic mass loss properties
self._regime = Regime # EUV or X-ray
self._type = Type # 'Primordial' or 'InnerHole'
self._Sigmadot = np.zeros_like(disc.R)
self.mdot_XE(disc.star)
# Evolutionary state flags
self._Hole = False # Has the hole started to open?
self._reset = False # Have we needed to reset a decoy hole?
self._empty = False # When no longer a valid hole radius or all below density threshold
self._Thin = False # Is the hole exposed (ie low column density to star)?
# Parameters of hole
self._R_hole = None
self._N_hole = None
# The column density threshold below which the inner disc is "Thin"
if self._regime=='X-ray':
self._N_crit = 1e22
elif self._regime=='EUV':
self._N_crit = 1e18
else:
self._N_crit = 0.0 # (if 0, can never switch)
# Outer radius
self._R_out = max(disc.R_edge)
def mdot_XE(self, star, Mdot=0):
# Generic wrapper for initiating X-ray or EUV mass loss
# Without prescription, mass loss is 0
self._Mdot = Mdot
self._Mdot_true = Mdot
def Sigma_dot(self, R, star):
if self._type=='Primordial':
self.Sigma_dot_Primordial(R, star)
elif self._type=='InnerHole':
self.Sigma_dot_InnerHole(R, star)
def Sigma_dot_Primordial(self, R, star, ret=False):
# Without prescription, mass loss is 0
if ret:
return np.zeros(len(R)+1)
else:
self._Sigmadot = np.zeros_like(R)
def Sigma_dot_InnerHole(self, R, star, ret=False):
# Without prescription, mass loss is 0
if ret:
return np.zeros(len(R)+1)
else:
self._Sigmadot = np.zeros_like(R)
def scaled_R(self, R, star):
# Prescriptions may rescale the radius variable
# Without prescription, radius is unscaled
return R
def R_inner(self, star):
# Innermost mass loss
return 0
def check_dt(self, disc, dt):
# Work out the timescale to clear cell
where_photoevap = (self.dSigmadt > 0)
t_w = np.full_like(disc.R,np.inf)
t_w[where_photoevap] = disc.Sigma_G[where_photoevap] / self.dSigmadt[where_photoevap]
# Return minimum value for cells inside outer edge
indisc = (disc.R < self._R_out) * where_photoevap # Prohibit hole outside of mass loss region.
try:
            imin = argrelmin(t_w[indisc])[0][0] # Find local minima in clearing time, neglecting the outer edge where it tails off. Take the first to avoid solutions due to noise in the dusty outskirts
except IndexError: # If no local minimum, try to find hole as wherever the min is.
imin = np.argmin(t_w[indisc])
# Check against timestep and report
if (dt > t_w[where_photoevap][imin]): # If an entire cell can deplete
#if not self._Hole:
# print("Alert - hole can open after this timestep at {:.2f} AU".format(disc.R[imin]))
# print("Outer radius is currently {:.2f} AU".format(self._R_out))
self._Hole = True # Set hole flag
return t_w[where_photoevap][imin]
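        # Note (added): t_w = Sigma_G / (dSigma/dt) above is the time needed to
        # clear each cell at the current mass-loss rate; if the first local
        # minimum of t_w inside the wind region is shorter than the timestep,
        # the hole flag is raised.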
def remove_mass(self, disc, dt, external_photo=None):
# Find disc "outer edge" so we can apply mass loss only inside
if external_photo:
self._R_out = external_photo._Rot # If external photoevaporation is present, only consider radii inside its influence
else:
self._R_out = disc.Rout(thresh=1e-10)
if disc.Rout()==0.0:
print("Disc everywhere below density threshold. Declare Empty.")
self._empty = True
# Check whether hole can open
if not self._Hole: #self._type=='Primordial':
self.check_dt(disc, dt)
# Determine mass loss
dSigma = np.minimum(self.dSigmadt * dt, disc.Sigma_G) # Limit mass loss to density of cell
dSigma *= (disc.R < self._R_out) # Only apply mass loss inside disc outer edge
# Apply, preserving the dust mass
if hasattr(disc, 'Sigma_D'):
Sigma_D = disc.Sigma_D # Save the dust density
disc._Sigma -= dSigma
if hasattr(disc, 'Sigma_D'):
dusty = Sigma_D.sum(0)>0
disc.dust_frac[:,dusty] = np.fmin(Sigma_D[:,dusty]/disc.Sigma[dusty],disc.dust_frac[:,dusty]/disc.dust_frac.sum(0)[dusty])
disc.dust_frac[:] /= np.maximum(disc.dust_frac.sum(0), 1.0) # Renormalise to 1 if it exceeds
# Calculate actual mass loss given limit
if dt>0:
dM = 2*np.pi * disc.R * dSigma
self._Mdot_true = np.trapz(dM,disc.R) / dt * AU**2 / Msun
def get_Rhole(self, disc, external_photo=None):
"""Deal with calls when there is no hole"""
if not self._Hole:
print("No hole for which to get radius. Ignoring command and returning nans.")
return np.nan, np.nan
"""Otherwise continue on to find hole
First find outer edge of disc - hole must be inside this"""
if external_photo:
self._R_out = external_photo._Rot # If external photoevaporation is present, only consider radii inside its influence
else:
self._R_out = disc.Rout(thresh=1e-10)
where_photoevap = (self.dSigmadt > 0)
indisc = (disc.R < self._R_out) * where_photoevap # Prohibit hole outside of mass loss region.
empty_indisc = (disc.Sigma_G <= 1e-10) * indisc # Consider empty if below 10^-10 g/cm^2
try:
if np.sum(empty_indisc) == 0: # If none in disc are empty
minima = argrelmin(disc.Sigma_G)
if len(minima[0]) > 0:
i_hole_out = minima[0][0] # Position of hole is minimum density
else: # No empty cells anymore - disc has cleared to outside
raise NotHoleError
else:
# First find the inner edge of the innermost hole
i_hole_in = np.nonzero(empty_indisc)[0][0]
# The hole cell is defined as the one inside the first non-empty cell outside the inner edge of the hole
outer_disc = ~empty_indisc * (disc.R>disc.R_edge[i_hole_in])
if np.sum(outer_disc) > 0:
i_hole_out = np.nonzero(outer_disc)[0][0] - 1
else: # No non-empty cells outside this - this is not a hole, but an outer edge.
raise NotHoleError
if i_hole_out == np.nonzero(indisc)[0][-1]: # This is not a hole, but the outermost photoevaporating cell
raise NotHoleError
"""If hole position drops by an order of magnitude, it is likely that the previous was really the clearing of low surface density material in the outer disc, so reset"""
if self._R_hole:
R_old = self._R_hole
if disc.R_edge[i_hole_out+1]/R_old<0.1:
self._reset = True
"""If everything worked, update hole properties"""
if not self._R_hole:
print("Hole opened at {:.2f} AU".format(disc.R_edge[i_hole_out+1]))
self._R_hole = disc.R_edge[i_hole_out+1]
self._N_hole = disc.column_density[i_hole_out]
# Test whether Thin
if (self._N_hole < self._N_crit):
self._Thin = True
except NotHoleError:
"""Potential hole isn't a hole but an outer edge"""
if self._type == 'Primordial':
self._Hole = False
self._reset = True
if self._R_hole:
print("No hole found")
print("Last known location {} AU".format(self._R_hole))
return 0, 0
elif self._type == 'InnerHole':
if not self._empty:
print("Transition Disc has cleared to outside")
self._empty = True
# Proceed as usual to report but without update
# Save state if tracking
return self._R_hole, self._N_hole
@property
def Mdot(self):
return self._Mdot
@property
def dSigmadt(self):
return self._Sigmadot
def __call__(self, disc, dt, external_photo=None):
# For inner hole discs, need to update the hole radius and then the mass-loss as the normalisation changes based on R, not just x~R-Rhole.
if self._type=='InnerHole':
self.get_Rhole(disc)
self.Sigma_dot(disc.R_edge, disc.star)
# Remove the mass
self.remove_mass(disc,dt, external_photo)
# Check for new holes
if self._Hole and not self._Thin: # If there is a hole but the inner disc is not already optically thin, update its properties
R_hole, N_hole = self.get_Rhole(disc, external_photo)
# Check if hole is now large enough that inner disc optically thin, switch internal photoevaporation to direct field if so
if self._Thin:
print("Column density to hole has fallen to N = {} < {} g cm^-2".format(N_hole,self._N_crit))
self._type = 'InnerHole'
# Run the mass loss rates to update the table
self.mdot_XE(disc.star)
self.Sigma_dot(disc.R_edge, disc.star)
# Report
print("At initiation of InnerHole Type, M_D = {} M_J, Mdot = {}, t_clear ~ {} yr".format(disc.Mtot()/Mjup, self._Mdot, disc.Mtot()/Msun/self._Mdot))
def ASCII_header(self):
return ("# InternalEvaporation, Type: {}, Mdot: {}"
"".format(self._type+self.__class__.__name__,self._Mdot))
def HDF5_attributes(self):
header = {}
header['Type'] = self._type+"/"+self._regime
header['Mdot'] = '{}'.format(self._Mdot)
return self.__class__.__name__, header
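# Hedged note (added): concrete prescriptions are expected to subclass PhotoBase
# and override mdot_XE() (total wind mass-loss rate, Msun/yr), Sigma_dot_Primordial
# / Sigma_dot_InnerHole (normalised surface-density loss profiles), scaled_R and
# R_inner, as XrayDiscOwen does below; hole finding, mass removal and the
# bookkeeping flags are inherited from this base class.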
#################################################################################
"""""""""
X-ray dominated photoevaporation
-Following prescription of Owen, Ercolano and Clarke (2012)
-Following prescription of Picogna, Ercolano, Owen and Weber (2019)
"""""""""
#################################################################################
"""Owen, Ercolano and Clarke (2012)"""
class XrayDiscOwen(PhotoBase):
def __init__(self, disc, Type='Primordial', R_hole=None):
super().__init__(disc, Regime='X-ray', Type=Type)
# Parameters for Primordial mass loss profile
self._a1 = 0.15138
self._b1 = -1.2182
self._c1 = 3.4046
self._d1 = -3.5717
self._e1 = -0.32762
self._f1 = 3.6064
self._g1 = -2.4918
# Parameters for Inner Hole mass loss profile
self._a2 = -0.438226
self._b2 = -0.10658387
self._c2 = 0.5699464
self._d2 = 0.010732277
self._e2 = -0.131809597
self._f2 = -1.32285709
# If initiating with an Inner Hole disc, need to update properties
if self._type == 'InnerHole':
self._Hole = True
self._R_hole = R_hole
#self.get_Rhole(disc)
# Run the mass loss rates to update the table
self.Sigma_dot(disc.R_edge, disc.star)
def mdot_XE(self, star, Mdot=None):
# In Msun/yr
if Mdot is not None:
self._Mdot = Mdot
elif self._type=='Primordial':
self._Mdot = 6.25e-9 * star.M**(-0.068) * (star.L_X / 1e30)**(1.14) # Equation B1
elif self._type=='InnerHole':
self._Mdot = 4.8e-9 * star.M**(-0.148) * (star.L_X / 1e30)**(1.14) # Equation B4
else:
raise NotImplementedError("Disc is of unrecognised type, and no mass-loss rate has been manually specified")
self._Mdot_true = self._Mdot
def scaled_R(self, R, star):
# Where R in AU
x = 0.85 * R / star.M # Equation B3
if self._Hole:
y = 0.95 * (R-self._R_hole) / star.M # Equation B6
else:
y = R
return x, y
def R_inner(self, star):
# Innermost mass loss
return 0.7 / 0.85 * star.M
def Sigma_dot_Primordial(self, R, star, ret=False):
# Equation B2
Sigmadot = np.zeros_like(R)
x, y = self.scaled_R(R,star)
where_photoevap = (x >= 0.7) * (x<=99) # No mass loss close to star, mass loss prescription becomes negative at log10(x)=1.996
logx = np.log(x[where_photoevap])
log10 = np.log(10)
log10x = logx/log10
# First term
exponent = self._a1 * log10x**6 + self._b1 * log10x**5 + self._c1 * log10x**4 + self._d1 * log10x**3 + self._e1 * log10x**2 + self._f1 * log10x + self._g1
t1 = 10**exponent
# Second term
terms = 6*self._a1*logx**5/log10**7 + 5*self._b1*logx**4/log10**6 + 4*self._c1*logx**3/log10**5 + 3*self._d1*logx**2/log10**4 + 2*self._e1*logx/log10**3 + self._f1/log10**2
t2 = terms/x[where_photoevap]**2
# Third term
t3 = np.exp(-(x[where_photoevap]/100)**10)
# Combine terms
Sigmadot[where_photoevap] = t1 * t2 * t3
# Work out total mass loss rate for normalisation
M_dot = 2*np.pi * R * Sigmadot
total = np.trapz(M_dot,R)
# Normalise, convert to cgs
Sigmadot = np.maximum(Sigmadot,0)
Sigmadot *= self.Mdot / total * Msun / AU**2 # in g cm^-2 / yr
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
def Sigma_dot_InnerHole(self, R, star, ret=False):
# Equation B5
Sigmadot = np.zeros_like(R)
x, y = self.scaled_R(R,star)
where_photoevap = (y >= 0.0) # No mass loss inside hole
use_y = y[where_photoevap]
# Exponent of second term
exp2 = -(use_y/57)**10
# Numerator
terms = self._a2*self._b2 * np.exp(self._b2*use_y+exp2) + self._c2*self._d2 * np.exp(self._d2*use_y+exp2) + self._e2*self._f2 * np.exp(self._f2*use_y+exp2)
# Divide by Denominator
Sigmadot[where_photoevap] = terms/R[where_photoevap]
# Work out total mass loss rate for normalisation
M_dot = 2*np.pi * R * Sigmadot
total = np.trapz(M_dot,R)
# Normalise, convert to cgs
Sigmadot = np.maximum(Sigmadot,0)
Sigmadot *= self.Mdot / total * Msun / AU**2 # in g cm^-2 / yr
# Mopping up in the gap
mop_up = (x >= 0.7) * (y < 0.0)
Sigmadot[mop_up] = np.inf
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
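# --- Illustrative aside (not part of the original module) ---------------------
# A quick sanity check of the Owen+12 rates coded in mdot_XE above, assuming a
# 1 Msun star with L_X = 1e30 erg/s (both values chosen purely for illustration):
# Equation B1 (Primordial): 6.25e-9 * 1**(-0.068) * (1e30/1e30)**1.14 = 6.25e-9 Msun/yr
# Equation B4 (InnerHole):  4.8e-9  * 1**(-0.148) * (1e30/1e30)**1.14 = 4.8e-9  Msun/yr
# i.e. once an inner hole opens, the wind rate drops by a factor ~0.77 at fixed L_X.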
"""Picogna, Ercolano, Owen and Weber (2019)"""
class XrayDiscPicogna(PhotoBase):
def __init__(self, disc, Type='Primordial', R_hole=None):
super().__init__(disc, Regime='X-ray', Type=Type)
# Parameters for Primordial mass loss profile
self._a1 = -0.5885
self._b1 = 4.3130
self._c1 = -12.1214
self._d1 = 16.3587
self._e1 = -11.4721
self._f1 = 5.7248
self._g1 = -2.8562
# Parameters for Inner Hole mass loss profile
self._a2 = 0.11843
self._b2 = 0.99695
self._c2 = 0.48835
# If initiating with an Inner Hole disc, need to update properties
if self._type == 'InnerHole':
self._Hole = True
self._R_hole = R_hole
#self.get_Rhole(disc)
# Run the mass loss rates to update the table
self.Sigma_dot(disc.R_edge, disc.star)
def mdot_XE(self, star, Mdot=None):
# In Msun/yr
if Mdot is not None:
self._Mdot = Mdot
elif self._type=='Primordial':
logMd = -2.7326 * np.exp((np.log(np.log(star.L_X)/np.log(10))-3.3307)**2/-2.9868e-3) - 7.2580 # Equation 5
self._Mdot = 10**logMd
elif self._type=='InnerHole':
logMd = -2.7326 * np.exp((np.log(np.log(star.L_X)/np.log(10))-3.3307)**2/-2.9868e-3) - 7.2580 # 1.12 * Equation 5
self._Mdot = 1.12 * (10**logMd)
else:
raise NotImplementedError("Disc is of unrecognised type, and no mass-loss rate has been manually specified")
self._Mdot_true = self._Mdot
def scaled_R(self, R, star):
# Where R in AU
# All are divided by stellar mass normalised to 0.7 Msun (value used by Picogna+19) to represent rescaling by gravitational radius
x = R / (star.M/0.7)
if self._Hole:
y = (R-self._R_hole) / (star.M/0.7) # Equation B6
else:
y = R / (star.M/0.7)
return x, y
def R_inner(self, star):
# Innermost mass loss
if self._type=='Primordial':
return 0 # Mass loss possible throughout
elif self._type=='InnerHole':
return self._R_hole # Mass loss profile applies outside hole
else:
return 0 # If unspecified, assume mass loss possible throughout
def Sigma_dot_Primordial(self, R, star, ret=False):
# Equation B2
Sigmadot = np.zeros_like(R)
x, y = self.scaled_R(R,star)
        where_photoevap = (x<=137) # Mass loss prescription becomes negative beyond x ~ 137
logx = np.log(x[where_photoevap])
log10 = np.log(10)
log10x = logx/log10
# First term
exponent = self._a1 * log10x**6 + self._b1 * log10x**5 + self._c1 * log10x**4 + self._d1 * log10x**3 + self._e1 * log10x**2 + self._f1 * log10x + self._g1
t1 = 10**exponent
# Second term
terms = 6*self._a1*log10x**5 + 5*self._b1*log10x**4 + 4*self._c1*log10x**3 + 3*self._d1*log10x**2 + 2*self._e1*log10x + self._f1
t2 = terms/(2*np.pi*x[where_photoevap]**2)
# Combine terms
Sigmadot[where_photoevap] = t1 * t2
# Work out total mass loss rate for normalisation
M_dot = 2*np.pi * R * Sigmadot
total = np.trapz(M_dot,R)
# Normalise, convert to cgs
Sigmadot = np.maximum(Sigmadot,0)
Sigmadot *= self.Mdot / total * Msun / AU**2 # in g cm^-2 / yr
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
def Sigma_dot_InnerHole(self, R, star, ret=False):
# Equation B5
Sigmadot = np.zeros_like(R)
x, y = self.scaled_R(R,star)
        where_photoevap = (y > 0.0) * (y < -self._c2/np.log(self._b2)) # No mass loss inside hole, becomes negative at y=-c/ln(b)
use_y = y[where_photoevap]
# Numerator
terms = self._a2 * np.power(self._b2,use_y) * np.power(use_y,self._c2-1) * (use_y * np.log(self._b2) + self._c2)
# Divide by Denominator
Sigmadot[where_photoevap] = terms/(2*np.pi*R[where_photoevap])
# Work out total mass loss rate for normalisation
M_dot = 2*np.pi * R * Sigmadot
total = np.trapz(M_dot,R)
# Normalise, convert to cgs
Sigmadot = np.maximum(Sigmadot,0)
Sigmadot *= self.Mdot / total * Msun / AU**2 # in g cm^-2 / yr
# Mopping up in the gap - assume usual primordial rates there.
        Sigmadot[(y<=0.0) * (x<=137)] = self.Sigma_dot_Primordial(R, star, ret=True)[(y<=0.0)*(x<=137)]/1.12 # divide by 1.12 so that it normalises to the correct mass-loss rate
mop_up = (x > 137) * (y < 0.0)
Sigmadot[mop_up] = np.inf # Avoid having discontinuous mass-loss by filling in the rest
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
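# --- Illustrative aside (not part of the original module) ---------------------
# Evaluating Equation 5 as coded in mdot_XE above for an assumed L_X = 1e30 erg/s:
# log10(Mdot) = -2.7326*exp((ln(log10(1e30)) - 3.3307)**2 / -2.9868e-3) - 7.2580
#             = -2.7326*exp((ln(30) - 3.3307)**2 / -2.9868e-3) - 7.2580 ~ -7.78
# so Mdot ~ 1.7e-8 Msun/yr for a Primordial disc, and 1.12x that for an InnerHole disc.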
#################################################################################
"""""""""
EUV dominated photoevaporation
-Following prescription given in Alexander and Armitage (2007)
and based on Font, McCarthy, Johnstone and Ballantyne (2004) for Primordial Discs
and based on Alexander, Clarke and Pringle (2006) for Inner Hole Discs
"""""""""
#################################################################################
class EUVDiscAlexander(PhotoBase):
def __init__(self, disc, Type='Primordial', R_hole=None):
super().__init__(disc, Regime='EUV', Type=Type)
# Parameters for mass loss profiles
self._cs = 10 # Sound speed in km s^-1
self._RG = disc.star.M / (self._cs*1e5 /Omega0/AU)**2 # Gravitational Radius in AU
self._mu = 1.35
self._aB = 2.6e-13 # Case B Recombination coeff. in cm^3 s^-1
self._C1 = 0.14
self._A = 0.3423
self._B = -0.3612
self._D = 0.2457
self._C2 = 0.235
self._a = 2.42
h = disc.H/disc.R
he = np.empty_like(disc.R_edge)
        he[1:-1] = 0.5*(h[1:] + h[:-1]) # average neighbouring cell values onto interior edges
he[0] = 1.5*h[0] - 0.5*h[1]
he[-1] = 1.5*h[-1] - 0.5*h[-2]
self._h = he
# If initiating with an Inner Hole disc, need to update properties
if self._type == 'InnerHole':
self._Hole = True
self._R_hole = R_hole
#self.get_Rhole(disc)
# Run the mass loss rates to update the table
self.Sigma_dot(disc.R_edge, disc.star)
def mdot_XE(self, star, Mdot=0):
# Store Mdot calculated from profile
self._Mdot = Mdot # In Msun/yr
self._Mdot_true = self._Mdot
def scaled_R(self, R, star):
if self._type=='Primordial':
return R / self._RG # Normalise to RG
elif self._type=='InnerHole':
return R / self.R_inner() # Normalise to inner edge
else:
return R # If unspecified, don't modify
def R_inner(self):
# Innermost mass loss
if self._type=='Primordial':
return 0.1 * self._RG # Mass loss profile is only positive for >0.1 RG
elif self._type=='InnerHole':
return self._R_hole # Mass loss profile applies outside hole
else:
return 0 # If unspecified, assume mass-loss possible throughout
def Sigma_dot_Primordial(self, R, star, ret=False):
Sigmadot = np.zeros_like(R)
x = self.scaled_R(R,star)
where_photoevap = (x >= 0.1) # No mass loss close to star
# Equation A3
nG = self._C1 * (3 * star.Phi / (4*np.pi * (self._RG*AU)**3 * self._aB))**(1/2) # cm^-3
# Equation A2
n0 = nG * (2 / (x**7.5 + x**12.5))**(1/5)
# Equation A4
u1 = self._cs*1e5*yr/Omega0 * self._A * np.exp(self._B * (x-0.1)) * (x-0.1)**self._D # cm yr^-1
# Combine terms (Equation A1)
Sigmadot[where_photoevap] = 2 * self._mu * m_H * (n0 * u1)[where_photoevap] # g cm^-2 /yr
Sigmadot = np.maximum(Sigmadot,0)
# Work out total mass loss rate
dMdot = 2*np.pi * R * Sigmadot
Mdot = np.trapz(dMdot,R) # g yr^-1 (AU/cm)^2
# Normalise, convert to cgs
        Mdot = Mdot * AU**2/Msun # Msun yr^-1
# Store result
self.mdot_XE(star, Mdot=Mdot)
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
def Sigma_dot_InnerHole(self, R, star, ret=False):
Sigmadot = np.zeros_like(R)
x = self.scaled_R(R,star)
where_photoevap = (x > 1) # No mass loss inside hole
# Combine terms (Equation A5)
Sigmadot[where_photoevap] = (2 * self._mu * m_H * self._C2 * self._cs*1e5*yr/Omega0 * (star.Phi / (4*np.pi * (self.R_inner()*AU)**3 * self._aB * self._h))**(1/2) * x**(-self._a))[where_photoevap] # g cm^-2 /yr
Sigmadot = np.maximum(Sigmadot,0)
# Work out total mass loss rate
dMdot = 2*np.pi * R * Sigmadot
Mdot = np.trapz(dMdot,R) # g yr^-1 (AU/cm)^2
# Normalise, convert to cgs
        Mdot = Mdot * AU**2/Msun # Msun yr^-1
# Store result
self.mdot_XE(star, Mdot=Mdot)
# Mopping up in the gap
mop_up = (R >= 0.1 * self._RG) * (x <= 1.0)
Sigmadot[mop_up] = np.inf
if ret:
# Return unaveraged values at cell edges
return Sigmadot
else:
# Store values as average of mass loss rate at cell edges
self._Sigmadot = (Sigmadot[1:] + Sigmadot[:-1]) / 2
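# --- Illustrative aside (not part of the original module) ---------------------
# The gravitational radius used above is R_G = G*M_star/c_s**2. For the assumed
# c_s = 10 km/s and a 1 Msun star, R_G ~ 1.33e26 cm^3 s^-2 / (1e6 cm/s)**2
# ~ 1.3e14 cm ~ 9 AU, so the Primordial EUV profile only removes mass outside
# ~0.9 AU (0.1 R_G), consistent with R_inner above.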
#################################################################################
"""""""""
Functions for running as main
Designed for plotting to test things out
"""""""""
#################################################################################
class DummyDisc(object):
def __init__(self, R, star, MD=10, RC=100):
self._M = MD * Mjup
self.Rc = RC
self.R_edge = R
self.R = 0.5*(self.R_edge[1:]+self.R_edge[:-1])
self._Sigma = self._M / (2 * np.pi * self.Rc * self.R * AU**2) * np.exp(-self.R/self.Rc)
self.star = star
def Rout(self, thresh=None):
return max(self.R_edge)
@property
def Sigma(self):
return self._Sigma
@property
def Sigma_G(self):
return self._Sigma
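# --- Illustrative aside (not part of the original script) ----------------------
# Sanity check of the DummyDisc normalisation: the total mass is
# int 2*pi*R*AU**2 * Sigma dR = (M/Rc) * int exp(-R/Rc) dR = M*(1 - exp(-Rout/Rc)),
# so the AU**2 in the denominator of Sigma is exactly what converts the g cm^-2
# surface density back to the requested disc mass M = MD*Mjup (up to the
# truncation at the outer grid edge).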
def main():
Sigma_dot_plot()
Test_Removal()
def Test_Removal():
"""Removes gas fom a power law disc in regular timesteps without viscous evolution etc"""
star1 = PhotoStar(LX=1e30, M=1.0, R=2.5, T_eff=4000)
R = np.linspace(0.1,200,2000)
disc1 = DummyDisc(R, star1, RC=10)
internal_photo = XrayDiscPicogna(disc1)
plt.figure()
for t in np.linspace(0,2e3,6):
internal_photo(disc1, 2e3)
plt.loglog(0.5*(R[1:]+R[:-1]), disc1.Sigma, label='{}'.format(t))
plt.xlabel("R / AU")
plt.ylabel("$\Sigma_G~/~\mathrm{g~cm^{-2}}$")
plt.legend(title='Time / yr')
plt.show()
def Sigma_dot_plot():
"""Plot a comparison of the mass loss rate prescriptions"""
from control_scripts import run_model
# Set up dummy model
parser = argparse.ArgumentParser()
parser.add_argument("--model", "-m", type=str, default=DefaultModel)
args = parser.parse_args()
model = json.load(open(args.model, 'r'))
plt.figure(figsize=(6,6))
starX = PhotoStar(LX=1e30, M=model['star']['mass'], R=model['star']['radius'], T_eff=model['star']['T_eff'])
starE = PhotoStar(Phi=1e42, M=model['star']['mass'], R=model['star']['radius'], T_eff=model['star']['T_eff'])
disc = run_model.setup_disc(model)
R = disc.R
# Calculate EUV rates
disc._star = starE
internal_photo_E = EUVDiscAlexander(disc)
Sigma_dot_E = internal_photo_E.dSigmadt
photoevaporating_E = (Sigma_dot_E>0)
t_w_E = disc.Sigma[photoevaporating_E] / Sigma_dot_E[photoevaporating_E]
print("Mdot maximum at R = {} AU".format(R[np.argmax(Sigma_dot_E)]))
print("Time minimum at R = {} AU".format(R[photoevaporating_E][np.argmin(t_w_E)]))
plt.loglog(R, Sigma_dot_E, label='EUV (AA07), $\Phi={}~\mathrm{{s^{{-1}}}}$'.format(1e42), linestyle='--')
# Calculate X-ray rates
disc._star = starX
internal_photo_X = XrayDiscOwen(disc)
Sigma_dot_X = internal_photo_X.dSigmadt
photoevaporating_X = (Sigma_dot_X>0)
t_w_X = disc.Sigma[photoevaporating_X] / Sigma_dot_X[photoevaporating_X]
print("Mdot maximum at R = {} AU".format(R[np.argmax(Sigma_dot_X)]))
print("Time minimum at R = {} AU".format(R[photoevaporating_X][np.argmin(t_w_X)]))
plt.loglog(R, Sigma_dot_X, label='X-ray (OEC12), $L_X={}~\mathrm{{erg~s^{{-1}}}}$'.format(1e30))
# Calculate X-ray rates
disc._star = starX
internal_photo_X2 = XrayDiscPicogna(disc)
Sigma_dot_X2 = internal_photo_X2.dSigmadt
photoevaporating_X2 = (Sigma_dot_X2>0)
t_w_X2 = disc.Sigma[photoevaporating_X2] / Sigma_dot_X2[photoevaporating_X2]
print("Mdot maximum at R = {} AU".format(R[np.argmax(Sigma_dot_X2)]))
print("Time minimum at R = {} AU".format(R[photoevaporating_X2][np.argmin(t_w_X2)]))
plt.loglog(R, Sigma_dot_X2, label='X-ray (PEOW19), $L_X={}~\mathrm{{erg~s^{{-1}}}}$'.format(1e30))
# Plot mass loss rates
plt.xlabel("R / AU")
plt.ylabel("$\dot{\Sigma}_{\\rm w}$ / g cm$^{-2}$ yr$^{-1}$")
plt.xlim([0.1,1000])
plt.ylim([1e-8,1e-2])
plt.legend()
plt.show()
# Plot depletion time
plt.figure(figsize=(6,6))
plt.loglog(R[photoevaporating_E], t_w_E, label='EUV (AA07), $\Phi={}~\mathrm{{s^{{-1}}}}$'.format(1e42), linestyle='--')
plt.loglog(R[photoevaporating_X], t_w_X, label='X-ray (OEC12), $L_X={}~\mathrm{{erg~s^{{-1}}}}$'.format(1e30))
plt.loglog(R[photoevaporating_X2], t_w_X2, label='X-ray (PEOW19), $L_X={}~\mathrm{{erg~s^{{-1}}}}$'.format(1e30))
plt.xlabel("R / AU")
plt.ylabel("$t_w / \mathrm{yr}$")
plt.xlim([0.1,1000])
plt.ylim([1e4,1e12])
plt.legend()
plt.show()
if __name__ == "__main__":
# Set extra things
DefaultModel = "../control_scripts/DiscConfig_default.json"
plt.rcParams['text.usetex'] = "True"
plt.rcParams['font.family'] = "serif"
main()
| gpl-3.0 |
spallavolu/scikit-learn | sklearn/linear_model/ridge.py | 60 | 44642 | """
Ridge regression
"""
# Author: Mathieu Blondel <mathieu@mblondel.org>
# Reuben Fletcher-Costin <reuben.fletchercostin@gmail.com>
# Fabian Pedregosa <fabian@fseoane.net>
# Michael Eickenberg <michael.eickenberg@nsup.org>
# License: BSD 3 clause
from abc import ABCMeta, abstractmethod
import warnings
import numpy as np
from scipy import linalg
from scipy import sparse
from scipy.sparse import linalg as sp_linalg
from .base import LinearClassifierMixin, LinearModel, _rescale_data
from .sag import sag_solver
from .sag_fast import get_max_squared_sum
from ..base import RegressorMixin
from ..utils.extmath import safe_sparse_dot
from ..utils import check_X_y
from ..utils import check_array
from ..utils import check_consistent_length
from ..utils import compute_sample_weight
from ..utils import column_or_1d
from ..preprocessing import LabelBinarizer
from ..grid_search import GridSearchCV
from ..externals import six
from ..metrics.scorer import check_scoring
def _solve_sparse_cg(X, y, alpha, max_iter=None, tol=1e-3, verbose=0):
n_samples, n_features = X.shape
X1 = sp_linalg.aslinearoperator(X)
coefs = np.empty((y.shape[1], n_features))
if n_features > n_samples:
def create_mv(curr_alpha):
def _mv(x):
return X1.matvec(X1.rmatvec(x)) + curr_alpha * x
return _mv
else:
def create_mv(curr_alpha):
def _mv(x):
return X1.rmatvec(X1.matvec(x)) + curr_alpha * x
return _mv
for i in range(y.shape[1]):
y_column = y[:, i]
mv = create_mv(alpha[i])
if n_features > n_samples:
# kernel ridge
# w = X.T * inv(X X^t + alpha*Id) y
C = sp_linalg.LinearOperator(
(n_samples, n_samples), matvec=mv, dtype=X.dtype)
coef, info = sp_linalg.cg(C, y_column, tol=tol)
coefs[i] = X1.rmatvec(coef)
else:
# linear ridge
# w = inv(X^t X + alpha*Id) * X.T y
y_column = X1.rmatvec(y_column)
C = sp_linalg.LinearOperator(
(n_features, n_features), matvec=mv, dtype=X.dtype)
coefs[i], info = sp_linalg.cg(C, y_column, maxiter=max_iter,
tol=tol)
if info < 0:
raise ValueError("Failed with error code %d" % info)
if max_iter is None and info > 0 and verbose:
warnings.warn("sparse_cg did not converge after %d iterations." %
info)
return coefs
def _solve_lsqr(X, y, alpha, max_iter=None, tol=1e-3):
n_samples, n_features = X.shape
coefs = np.empty((y.shape[1], n_features))
n_iter = np.empty(y.shape[1], dtype=np.int32)
# According to the lsqr documentation, alpha = damp^2.
sqrt_alpha = np.sqrt(alpha)
for i in range(y.shape[1]):
y_column = y[:, i]
info = sp_linalg.lsqr(X, y_column, damp=sqrt_alpha[i],
atol=tol, btol=tol, iter_lim=max_iter)
coefs[i] = info[0]
n_iter[i] = info[2]
return coefs, n_iter
def _solve_cholesky(X, y, alpha):
# w = inv(X^t X + alpha*Id) * X.T y
n_samples, n_features = X.shape
n_targets = y.shape[1]
A = safe_sparse_dot(X.T, X, dense_output=True)
Xy = safe_sparse_dot(X.T, y, dense_output=True)
one_alpha = np.array_equal(alpha, len(alpha) * [alpha[0]])
if one_alpha:
A.flat[::n_features + 1] += alpha[0]
return linalg.solve(A, Xy, sym_pos=True,
overwrite_a=True).T
else:
coefs = np.empty([n_targets, n_features])
for coef, target, current_alpha in zip(coefs, Xy.T, alpha):
A.flat[::n_features + 1] += current_alpha
coef[:] = linalg.solve(A, target, sym_pos=True,
overwrite_a=False).ravel()
A.flat[::n_features + 1] -= current_alpha
return coefs
def _solve_cholesky_kernel(K, y, alpha, sample_weight=None, copy=False):
# dual_coef = inv(X X^t + alpha*Id) y
n_samples = K.shape[0]
n_targets = y.shape[1]
if copy:
K = K.copy()
alpha = np.atleast_1d(alpha)
one_alpha = (alpha == alpha[0]).all()
has_sw = isinstance(sample_weight, np.ndarray) \
or sample_weight not in [1.0, None]
if has_sw:
# Unlike other solvers, we need to support sample_weight directly
# because K might be a pre-computed kernel.
sw = np.sqrt(np.atleast_1d(sample_weight))
y = y * sw[:, np.newaxis]
K *= np.outer(sw, sw)
if one_alpha:
# Only one penalty, we can solve multi-target problems in one time.
K.flat[::n_samples + 1] += alpha[0]
try:
# Note: we must use overwrite_a=False in order to be able to
# use the fall-back solution below in case a LinAlgError
# is raised
dual_coef = linalg.solve(K, y, sym_pos=True,
overwrite_a=False)
except np.linalg.LinAlgError:
warnings.warn("Singular matrix in solving dual problem. Using "
"least-squares solution instead.")
dual_coef = linalg.lstsq(K, y)[0]
# K is expensive to compute and store in memory so change it back in
# case it was user-given.
K.flat[::n_samples + 1] -= alpha[0]
if has_sw:
dual_coef *= sw[:, np.newaxis]
return dual_coef
else:
# One penalty per target. We need to solve each target separately.
dual_coefs = np.empty([n_targets, n_samples])
for dual_coef, target, current_alpha in zip(dual_coefs, y.T, alpha):
K.flat[::n_samples + 1] += current_alpha
dual_coef[:] = linalg.solve(K, target, sym_pos=True,
overwrite_a=False).ravel()
K.flat[::n_samples + 1] -= current_alpha
if has_sw:
dual_coefs *= sw[np.newaxis, :]
return dual_coefs.T
def _solve_svd(X, y, alpha):
U, s, Vt = linalg.svd(X, full_matrices=False)
idx = s > 1e-15 # same default value as scipy.linalg.pinv
s_nnz = s[idx][:, np.newaxis]
UTy = np.dot(U.T, y)
d = np.zeros((s.size, alpha.size))
d[idx] = s_nnz / (s_nnz ** 2 + alpha)
d_UT_y = d * UTy
return np.dot(Vt.T, d_UT_y).T
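# Illustrative note (not part of the original module): with the thin SVD
# X = U diag(s) V^T, the solution computed above is
#     w = V diag(s / (s**2 + alpha)) U^T y,
# i.e. each singular direction of the least-squares solution is shrunk by the
# filter factor s**2 / (s**2 + alpha); directions with s**2 << alpha are
# effectively suppressed.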
def ridge_regression(X, y, alpha, sample_weight=None, solver='auto',
max_iter=None, tol=1e-3, verbose=0, random_state=None,
return_n_iter=False):
"""Solve the ridge equation by the method of normal equations.
Read more in the :ref:`User Guide <ridge_regression>`.
Parameters
----------
X : {array-like, sparse matrix, LinearOperator},
shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
alpha : {float, array-like},
shape = [n_targets] if array-like
The l_2 penalty to be used. If an array is passed, penalties are
assumed to be specific to targets
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
For 'sparse_cg' and 'lsqr' solvers, the default value is determined
by scipy.sparse.linalg. For 'sag' solver, the default value is 1000.
sample_weight : float or numpy array of shape [n_samples]
Individual weights for each sample. If sample_weight is not None and
solver='auto', the solver will be set to 'cholesky'.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg'}
Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than
'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution via a Cholesky decomposition of
dot(X.T, X)
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
      scipy.sparse.linalg.lsqr. It is the fastest but may not be available
in old scipy versions. It also uses an iterative procedure.
- 'sag' uses a Stochastic Average Gradient descent. It also uses an
iterative procedure, and is often faster than other solvers when
both n_samples and n_features are large. Note that 'sag' fast
convergence is only guaranteed on features with approximately the
same scale. You can preprocess the data with a scaler from
sklearn.preprocessing.
    The last four solvers support both dense and sparse data.
tol : float
Precision of the solution.
verbose : int
Verbosity level. Setting verbose > 0 will display additional
information depending on the solver used.
random_state : int seed, RandomState instance, or None (default)
The seed of the pseudo random number generator to use when
shuffling the data. Used in 'sag' solver.
return_n_iter : boolean, default False
If True, the method also returns `n_iter`, the actual number of
        iterations performed by the solver.
Returns
-------
coef : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
n_iter : int, optional
        The actual number of iterations performed by the solver.
Only returned if `return_n_iter` is True.
Notes
-----
This function won't compute the intercept.
"""
# SAG needs X and y columns to be C-contiguous and np.float64
if solver == 'sag':
X = check_array(X, accept_sparse=['csr'],
dtype=np.float64, order='C')
y = check_array(y, dtype=np.float64, ensure_2d=False, order='F')
else:
X = check_array(X, accept_sparse=['csr', 'csc', 'coo'],
dtype=np.float64)
y = check_array(y, dtype='numeric', ensure_2d=False)
check_consistent_length(X, y)
n_samples, n_features = X.shape
if y.ndim > 2:
raise ValueError("Target y has the wrong shape %s" % str(y.shape))
ravel = False
if y.ndim == 1:
y = y.reshape(-1, 1)
ravel = True
n_samples_, n_targets = y.shape
if n_samples != n_samples_:
raise ValueError("Number of samples in X and y does not correspond:"
" %d != %d" % (n_samples, n_samples_))
has_sw = sample_weight is not None
if solver == 'auto':
# cholesky if it's a dense array and cg in any other case
if not sparse.issparse(X) or has_sw:
solver = 'cholesky'
else:
solver = 'sparse_cg'
elif solver == 'lsqr' and not hasattr(sp_linalg, 'lsqr'):
warnings.warn("""lsqr not available on this machine, falling back
to sparse_cg.""")
solver = 'sparse_cg'
if has_sw:
if np.atleast_1d(sample_weight).ndim > 1:
raise ValueError("Sample weights must be 1D array or scalar")
if solver != 'sag':
# SAG supports sample_weight directly. For other solvers,
# we implement sample_weight via a simple rescaling.
X, y = _rescale_data(X, y, sample_weight)
# There should be either 1 or n_targets penalties
alpha = np.asarray(alpha).ravel()
if alpha.size not in [1, n_targets]:
raise ValueError("Number of targets and number of penalties "
"do not correspond: %d != %d"
% (alpha.size, n_targets))
if alpha.size == 1 and n_targets > 1:
alpha = np.repeat(alpha, n_targets)
if solver not in ('sparse_cg', 'cholesky', 'svd', 'lsqr', 'sag'):
raise ValueError('Solver %s not understood' % solver)
n_iter = None
if solver == 'sparse_cg':
coef = _solve_sparse_cg(X, y, alpha, max_iter, tol, verbose)
elif solver == 'lsqr':
coef, n_iter = _solve_lsqr(X, y, alpha, max_iter, tol)
elif solver == 'cholesky':
if n_features > n_samples:
K = safe_sparse_dot(X, X.T, dense_output=True)
try:
dual_coef = _solve_cholesky_kernel(K, y, alpha)
coef = safe_sparse_dot(X.T, dual_coef, dense_output=True).T
except linalg.LinAlgError:
# use SVD solver if matrix is singular
solver = 'svd'
else:
try:
coef = _solve_cholesky(X, y, alpha)
except linalg.LinAlgError:
# use SVD solver if matrix is singular
solver = 'svd'
elif solver == 'sag':
# precompute max_squared_sum for all targets
max_squared_sum = get_max_squared_sum(X)
coef = np.empty((y.shape[1], n_features))
n_iter = np.empty(y.shape[1], dtype=np.int32)
for i, (alpha_i, target) in enumerate(zip(alpha, y.T)):
coef_, n_iter_, _ = sag_solver(
X, target.ravel(), sample_weight, 'squared', alpha_i,
max_iter, tol, verbose, random_state, False, max_squared_sum,
dict())
coef[i] = coef_
n_iter[i] = n_iter_
coef = np.asarray(coef)
if solver == 'svd':
if sparse.issparse(X):
raise TypeError('SVD solver does not support sparse'
' inputs currently')
coef = _solve_svd(X, y, alpha)
if ravel:
# When y was passed as a 1d-array, we flatten the coefficients.
coef = coef.ravel()
if return_n_iter:
return coef, n_iter
else:
return coef
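# --- Illustrative usage sketch (not part of the original module) --------------
def _ridge_regression_example():
    # A minimal, self-contained sketch: solve a tiny dense ridge problem with
    # ridge_regression() above and compare against the closed-form normal
    # equations w = inv(X.T X + alpha*I) X.T y. All data here are arbitrary
    # assumptions for demonstration; no intercept is fitted (see Notes above).
    rng = np.random.RandomState(0)
    X = rng.randn(20, 3)
    y = rng.randn(20)
    alpha = 1.0
    coef = ridge_regression(X, y, alpha=alpha, solver='cholesky')
    closed_form = np.linalg.solve(np.dot(X.T, X) + alpha * np.eye(3),
                                  np.dot(X.T, y))
    # The two solutions should agree to numerical precision.
    return coef, closed_form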
class _BaseRidge(six.with_metaclass(ABCMeta, LinearModel)):
@abstractmethod
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, solver="auto",
random_state=None):
self.alpha = alpha
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.max_iter = max_iter
self.tol = tol
self.solver = solver
self.random_state = random_state
def fit(self, X, y, sample_weight=None):
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], dtype=np.float64,
multi_output=True, y_numeric=True)
if ((sample_weight is not None) and
np.atleast_1d(sample_weight).ndim > 1):
raise ValueError("Sample weights must be 1D array or scalar")
X, y, X_mean, y_mean, X_std = self._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X,
sample_weight=sample_weight)
self.coef_, self.n_iter_ = ridge_regression(
X, y, alpha=self.alpha, sample_weight=sample_weight,
max_iter=self.max_iter, tol=self.tol, solver=self.solver,
random_state=self.random_state, return_n_iter=True)
self._set_intercept(X_mean, y_mean, X_std)
return self
class Ridge(_BaseRidge, RegressorMixin):
"""Linear least squares with l2 regularization.
This model solves a regression model where the loss function is
the linear least squares function and regularization is given by
the l2-norm. Also known as Ridge Regression or Tikhonov regularization.
This estimator has built-in support for multi-variate regression
(i.e., when y is a 2d-array of shape [n_samples, n_targets]).
Read more in the :ref:`User Guide <ridge_regression>`.
Parameters
----------
alpha : {float, array-like}, shape (n_targets)
Small positive values of alpha improve the conditioning of the problem
and reduce the variance of the estimates. Alpha corresponds to
``C^-1`` in other linear models such as LogisticRegression or
LinearSVC. If an array is passed, penalties are assumed to be specific
to the targets. Hence they must correspond in number.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
For 'sparse_cg' and 'lsqr' solvers, the default value is determined
by scipy.sparse.linalg. For 'sag' solver, the default value is 1000.
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag'}
Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than
'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution.
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
      scipy.sparse.linalg.lsqr. It is the fastest but may not be available
in old scipy versions. It also uses an iterative procedure.
- 'sag' uses a Stochastic Average Gradient descent. It also uses an
iterative procedure, and is often faster than other solvers when
both n_samples and n_features are large. Note that 'sag' fast
convergence is only guaranteed on features with approximately the
same scale. You can preprocess the data with a scaler from
sklearn.preprocessing.
    The last four solvers support both dense and sparse data.
tol : float
Precision of the solution.
random_state : int seed, RandomState instance, or None (default)
The seed of the pseudo random number generator to use when
shuffling the data. Used in 'sag' solver.
Attributes
----------
coef_ : array, shape (n_features,) or (n_targets, n_features)
Weight vector(s).
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.
n_iter_ : array or None, shape (n_targets,)
Actual number of iterations for each target. Available only for
sag and lsqr solvers. Other solvers will return None.
See also
--------
RidgeClassifier, RidgeCV, KernelRidge
Examples
--------
>>> from sklearn.linear_model import Ridge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
"""
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, solver="auto",
random_state=None):
super(Ridge, self).__init__(alpha=alpha, fit_intercept=fit_intercept,
normalize=normalize, copy_X=copy_X,
max_iter=max_iter, tol=tol, solver=solver,
random_state=random_state)
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or numpy array of shape [n_samples]
Individual weights for each sample
Returns
-------
self : returns an instance of self.
"""
return super(Ridge, self).fit(X, y, sample_weight=sample_weight)
class RidgeClassifier(LinearClassifierMixin, _BaseRidge):
"""Classifier using Ridge regression.
Read more in the :ref:`User Guide <ridge_regression>`.
Parameters
----------
alpha : float
Small positive values of alpha improve the conditioning of the problem
and reduce the variance of the estimates. Alpha corresponds to
``C^-1`` in other linear models such as LogisticRegression or
LinearSVC.
class_weight : dict or 'balanced', optional
Weights associated with classes in the form ``{class_label: weight}``.
If not given, all classes are supposed to have weight one.
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set to false, no
intercept will be used in calculations (e.g. data is expected to be
already centered).
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
The default value is determined by scipy.sparse.linalg.
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag'}
Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than
'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution.
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
      scipy.sparse.linalg.lsqr. It is the fastest but may not be available
in old scipy versions. It also uses an iterative procedure.
- 'sag' uses a Stochastic Average Gradient descent. It also uses an
iterative procedure, and is faster than other solvers when both
n_samples and n_features are large.
tol : float
Precision of the solution.
random_state : int seed, RandomState instance, or None (default)
The seed of the pseudo random number generator to use when
shuffling the data. Used in 'sag' solver.
Attributes
----------
coef_ : array, shape (n_features,) or (n_classes, n_features)
Weight vector(s).
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.
n_iter_ : array or None, shape (n_targets,)
Actual number of iterations for each target. Available only for
sag and lsqr solvers. Other solvers will return None.
See also
--------
Ridge, RidgeClassifierCV
Notes
-----
For multi-class classification, n_class classifiers are trained in
a one-versus-all approach. Concretely, this is implemented by taking
advantage of the multi-variate response support in Ridge.
"""
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, class_weight=None,
solver="auto", random_state=None):
super(RidgeClassifier, self).__init__(
alpha=alpha, fit_intercept=fit_intercept, normalize=normalize,
copy_X=copy_X, max_iter=max_iter, tol=tol, solver=solver,
random_state=random_state)
self.class_weight = class_weight
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples,n_features]
Training data
y : array-like, shape = [n_samples]
Target values
sample_weight : float or numpy array of shape (n_samples,)
Sample weight.
Returns
-------
self : returns an instance of self.
"""
self._label_binarizer = LabelBinarizer(pos_label=1, neg_label=-1)
Y = self._label_binarizer.fit_transform(y)
if not self._label_binarizer.y_type_.startswith('multilabel'):
y = column_or_1d(y, warn=True)
if self.class_weight:
if sample_weight is None:
sample_weight = 1.
# modify the sample weights with the corresponding class weight
sample_weight = (sample_weight *
compute_sample_weight(self.class_weight, y))
super(RidgeClassifier, self).fit(X, Y, sample_weight=sample_weight)
return self
@property
def classes_(self):
return self._label_binarizer.classes_
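# --- Illustrative usage sketch (not part of the original module) --------------
# RidgeClassifier casts classification as {-1, +1} regression via LabelBinarizer
# (see fit above) and predicts through LinearClassifierMixin. A hypothetical
# minimal example, with X_train/y_train/X_test standing in for user data:
#
#     clf = RidgeClassifier(alpha=1.0, class_weight='balanced')
#     clf.fit(X_train, y_train)
#     y_pred = clf.predict(X_test)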
class _RidgeGCV(LinearModel):
"""Ridge regression with built-in Generalized Cross-Validation
It allows efficient Leave-One-Out cross-validation.
This class is not intended to be used directly. Use RidgeCV instead.
Notes
-----
We want to solve (K + alpha*Id)c = y,
where K = X X^T is the kernel matrix.
Let G = (K + alpha*Id)^-1.
Dual solution: c = Gy
Primal solution: w = X^T c
Compute eigendecomposition K = Q V Q^T.
Then G = Q (V + alpha*Id)^-1 Q^T,
where (V + alpha*Id) is diagonal.
It is thus inexpensive to inverse for many alphas.
Let loov be the vector of prediction values for each example
when the model was fitted with all examples but this example.
loov = (KGY - diag(KG)Y) / diag(I-KG)
Let looe be the vector of prediction errors for each example
when the model was fitted with all examples but this example.
looe = y - loov = c / diag(G)
References
----------
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2007-025.pdf
http://www.mit.edu/~9.520/spring07/Classes/rlsslides.pdf
"""
def __init__(self, alphas=(0.1, 1.0, 10.0),
fit_intercept=True, normalize=False,
scoring=None, copy_X=True,
gcv_mode=None, store_cv_values=False):
self.alphas = np.asarray(alphas)
self.fit_intercept = fit_intercept
self.normalize = normalize
self.scoring = scoring
self.copy_X = copy_X
self.gcv_mode = gcv_mode
self.store_cv_values = store_cv_values
def _pre_compute(self, X, y):
# even if X is very sparse, K is usually very dense
K = safe_sparse_dot(X, X.T, dense_output=True)
v, Q = linalg.eigh(K)
QT_y = np.dot(Q.T, y)
return v, Q, QT_y
def _decomp_diag(self, v_prime, Q):
# compute diagonal of the matrix: dot(Q, dot(diag(v_prime), Q^T))
return (v_prime * Q ** 2).sum(axis=-1)
def _diag_dot(self, D, B):
# compute dot(diag(D), B)
if len(B.shape) > 1:
# handle case where B is > 1-d
D = D[(slice(None), ) + (np.newaxis, ) * (len(B.shape) - 1)]
return D * B
def _errors(self, alpha, y, v, Q, QT_y):
# don't construct matrix G, instead compute action on y & diagonal
w = 1.0 / (v + alpha)
c = np.dot(Q, self._diag_dot(w, QT_y))
G_diag = self._decomp_diag(w, Q)
# handle case where y is 2-d
if len(y.shape) != 1:
G_diag = G_diag[:, np.newaxis]
return (c / G_diag) ** 2, c
def _values(self, alpha, y, v, Q, QT_y):
# don't construct matrix G, instead compute action on y & diagonal
w = 1.0 / (v + alpha)
c = np.dot(Q, self._diag_dot(w, QT_y))
G_diag = self._decomp_diag(w, Q)
# handle case where y is 2-d
if len(y.shape) != 1:
G_diag = G_diag[:, np.newaxis]
return y - (c / G_diag), c
def _pre_compute_svd(self, X, y):
if sparse.issparse(X):
raise TypeError("SVD not supported for sparse matrices")
U, s, _ = linalg.svd(X, full_matrices=0)
v = s ** 2
UT_y = np.dot(U.T, y)
return v, U, UT_y
def _errors_svd(self, alpha, y, v, U, UT_y):
w = ((v + alpha) ** -1) - (alpha ** -1)
c = np.dot(U, self._diag_dot(w, UT_y)) + (alpha ** -1) * y
G_diag = self._decomp_diag(w, U) + (alpha ** -1)
if len(y.shape) != 1:
# handle case where y is 2-d
G_diag = G_diag[:, np.newaxis]
return (c / G_diag) ** 2, c
def _values_svd(self, alpha, y, v, U, UT_y):
w = ((v + alpha) ** -1) - (alpha ** -1)
c = np.dot(U, self._diag_dot(w, UT_y)) + (alpha ** -1) * y
G_diag = self._decomp_diag(w, U) + (alpha ** -1)
if len(y.shape) != 1:
# handle case when y is 2-d
G_diag = G_diag[:, np.newaxis]
return y - (c / G_diag), c
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or array-like of shape [n_samples]
Sample weight
Returns
-------
self : Returns self.
"""
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], dtype=np.float,
multi_output=True, y_numeric=True)
n_samples, n_features = X.shape
X, y, X_mean, y_mean, X_std = LinearModel._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X,
sample_weight=sample_weight)
gcv_mode = self.gcv_mode
with_sw = len(np.shape(sample_weight))
if gcv_mode is None or gcv_mode == 'auto':
if sparse.issparse(X) or n_features > n_samples or with_sw:
gcv_mode = 'eigen'
else:
gcv_mode = 'svd'
elif gcv_mode == "svd" and with_sw:
# FIXME non-uniform sample weights not yet supported
warnings.warn("non-uniform sample weights unsupported for svd, "
"forcing usage of eigen")
gcv_mode = 'eigen'
if gcv_mode == 'eigen':
_pre_compute = self._pre_compute
_errors = self._errors
_values = self._values
elif gcv_mode == 'svd':
# assert n_samples >= n_features
_pre_compute = self._pre_compute_svd
_errors = self._errors_svd
_values = self._values_svd
else:
raise ValueError('bad gcv_mode "%s"' % gcv_mode)
v, Q, QT_y = _pre_compute(X, y)
n_y = 1 if len(y.shape) == 1 else y.shape[1]
cv_values = np.zeros((n_samples * n_y, len(self.alphas)))
C = []
scorer = check_scoring(self, scoring=self.scoring, allow_none=True)
error = scorer is None
for i, alpha in enumerate(self.alphas):
weighted_alpha = (sample_weight * alpha
if sample_weight is not None
else alpha)
if error:
out, c = _errors(weighted_alpha, y, v, Q, QT_y)
else:
out, c = _values(weighted_alpha, y, v, Q, QT_y)
cv_values[:, i] = out.ravel()
C.append(c)
if error:
best = cv_values.mean(axis=0).argmin()
else:
# The scorer want an object that will make the predictions but
# they are already computed efficiently by _RidgeGCV. This
# identity_estimator will just return them
def identity_estimator():
pass
identity_estimator.decision_function = lambda y_predict: y_predict
identity_estimator.predict = lambda y_predict: y_predict
out = [scorer(identity_estimator, y.ravel(), cv_values[:, i])
for i in range(len(self.alphas))]
best = np.argmax(out)
self.alpha_ = self.alphas[best]
self.dual_coef_ = C[best]
self.coef_ = safe_sparse_dot(self.dual_coef_.T, X)
self._set_intercept(X_mean, y_mean, X_std)
if self.store_cv_values:
if len(y.shape) == 1:
cv_values_shape = n_samples, len(self.alphas)
else:
cv_values_shape = n_samples, n_y, len(self.alphas)
self.cv_values_ = cv_values.reshape(cv_values_shape)
return self
class _BaseRidgeCV(LinearModel):
def __init__(self, alphas=(0.1, 1.0, 10.0),
fit_intercept=True, normalize=False, scoring=None,
cv=None, gcv_mode=None,
store_cv_values=False):
self.alphas = alphas
self.fit_intercept = fit_intercept
self.normalize = normalize
self.scoring = scoring
self.cv = cv
self.gcv_mode = gcv_mode
self.store_cv_values = store_cv_values
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or array-like of shape [n_samples]
Sample weight
Returns
-------
self : Returns self.
"""
if self.cv is None:
estimator = _RidgeGCV(self.alphas,
fit_intercept=self.fit_intercept,
normalize=self.normalize,
scoring=self.scoring,
gcv_mode=self.gcv_mode,
store_cv_values=self.store_cv_values)
estimator.fit(X, y, sample_weight=sample_weight)
self.alpha_ = estimator.alpha_
if self.store_cv_values:
self.cv_values_ = estimator.cv_values_
else:
if self.store_cv_values:
raise ValueError("cv!=None and store_cv_values=True "
" are incompatible")
parameters = {'alpha': self.alphas}
fit_params = {'sample_weight': sample_weight}
gs = GridSearchCV(Ridge(fit_intercept=self.fit_intercept),
parameters, fit_params=fit_params, cv=self.cv)
gs.fit(X, y)
estimator = gs.best_estimator_
self.alpha_ = gs.best_estimator_.alpha
self.coef_ = estimator.coef_
self.intercept_ = estimator.intercept_
return self
class RidgeCV(_BaseRidgeCV, RegressorMixin):
"""Ridge regression with built-in cross-validation.
By default, it performs Generalized Cross-Validation, which is a form of
efficient Leave-One-Out cross-validation.
Read more in the :ref:`User Guide <ridge_regression>`.
Parameters
----------
alphas : numpy array of shape [n_alphas]
Array of alpha values to try.
Small positive values of alpha improve the conditioning of the
problem and reduce the variance of the estimates.
Alpha corresponds to ``C^-1`` in other linear models such as
LogisticRegression or LinearSVC.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
        :class:`StratifiedKFold` is used, else, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
    gcv_mode : {None, 'auto', 'svd', 'eigen'}, optional
Flag indicating which strategy to use when performing
Generalized Cross-Validation. Options are::
            'auto' : use svd if n_samples > n_features and X is dense,
                     otherwise use eigen
'svd' : force computation via singular value decomposition of X
(does not work for sparse matrices)
'eigen' : force computation via eigendecomposition of X^T X
The 'auto' mode is the default and is intended to pick the cheaper
option of the two depending upon the shape and format of the training
data.
store_cv_values : boolean, default=False
Flag indicating if the cross-validation values corresponding to
each alpha should be stored in the `cv_values_` attribute (see
below). This flag is only compatible with `cv=None` (i.e. using
Generalized Cross-Validation).
Attributes
----------
cv_values_ : array, shape = [n_samples, n_alphas] or \
shape = [n_samples, n_targets, n_alphas], optional
Cross-validation values for each alpha (if `store_cv_values=True` and \
`cv=None`). After `fit()` has been called, this attribute will \
contain the mean squared errors (by default) or the values of the \
`{loss,score}_func` function (if provided in the constructor).
coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.
alpha_ : float
Estimated regularization parameter.
See also
--------
Ridge: Ridge regression
RidgeClassifier: Ridge classifier
RidgeClassifierCV: Ridge classifier with built-in cross validation
"""
pass
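# --- Illustrative usage sketch (not part of the original module) --------------
# RidgeCV inherits fit() from _BaseRidgeCV above; with the default cv=None it
# takes the efficient _RidgeGCV leave-one-out path. A hypothetical example,
# with X_train/y_train standing in for user data:
#
#     reg = RidgeCV(alphas=(0.1, 1.0, 10.0), store_cv_values=True)
#     reg.fit(X_train, y_train)
#     reg.alpha_       # penalty selected by generalized cross-validation
#     reg.cv_values_   # per-sample squared LOO errors, one column per alpha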
class RidgeClassifierCV(LinearClassifierMixin, _BaseRidgeCV):
"""Ridge classifier with built-in cross-validation.
By default, it performs Generalized Cross-Validation, which is a form of
efficient Leave-One-Out cross-validation. Currently, only the n_features >
n_samples case is handled efficiently.
Read more in the :ref:`User Guide <ridge_regression>`.
Parameters
----------
alphas : numpy array of shape [n_alphas]
Array of alpha values to try.
Small positive values of alpha improve the conditioning of the
problem and reduce the variance of the estimates.
Alpha corresponds to ``C^-1`` in other linear models such as
LogisticRegression or LinearSVC.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the efficient Leave-One-Out cross-validation
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
class_weight : dict or 'balanced', optional
Weights associated with classes in the form ``{class_label: weight}``.
If not given, all classes are supposed to have weight one.
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
Attributes
----------
cv_values_ : array, shape = [n_samples, n_alphas] or \
shape = [n_samples, n_responses, n_alphas], optional
Cross-validation values for each alpha (if `store_cv_values=True` and
`cv=None`). After `fit()` has been called, this attribute will contain \
the mean squared errors (by default) or the values of the \
`{loss,score}_func` function (if provided in the constructor).
coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.
alpha_ : float
Estimated regularization parameter
See also
--------
Ridge: Ridge regression
RidgeClassifier: Ridge classifier
RidgeCV: Ridge regression with built-in cross validation
Notes
-----
For multi-class classification, n_class classifiers are trained in
a one-versus-all approach. Concretely, this is implemented by taking
advantage of the multi-variate response support in Ridge.
"""
def __init__(self, alphas=(0.1, 1.0, 10.0), fit_intercept=True,
normalize=False, scoring=None, cv=None, class_weight=None):
super(RidgeClassifierCV, self).__init__(
alphas=alphas, fit_intercept=fit_intercept, normalize=normalize,
scoring=scoring, cv=cv)
self.class_weight = class_weight
def fit(self, X, y, sample_weight=None):
"""Fit the ridge classifier.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape (n_samples,)
Target values.
sample_weight : float or numpy array of shape (n_samples,)
Sample weight.
Returns
-------
self : object
Returns self.
"""
self._label_binarizer = LabelBinarizer(pos_label=1, neg_label=-1)
Y = self._label_binarizer.fit_transform(y)
if not self._label_binarizer.y_type_.startswith('multilabel'):
y = column_or_1d(y, warn=True)
if self.class_weight:
if sample_weight is None:
sample_weight = 1.
# modify the sample weights with the corresponding class weight
sample_weight = (sample_weight *
compute_sample_weight(self.class_weight, y))
_BaseRidgeCV.fit(self, X, Y, sample_weight=sample_weight)
return self
@property
def classes_(self):
return self._label_binarizer.classes_
| bsd-3-clause |
crichardson17/starburst_atlas | Low_resolution_sims/DustFree_LowRes/Padova_cont/padova_cont_4/Optical1.py | 33 | 7366 | import csv
import matplotlib.pyplot as plt
from numpy import *
import scipy.interpolate
import math
from pylab import *
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
import matplotlib.patches as patches
from matplotlib.path import Path
import os
# ------------------------------------------------------------------------------------------------------
#inputs
for file in os.listdir('.'):
if file.endswith(".grd"):
inputfile = file
for file in os.listdir('.'):
if file.endswith(".txt"):
inputfile2 = file
# ------------------------------------------------------------------------------------------------------
#Patches data
#for the Kewley and Levesque data
verts = [
(1., 7.97712125471966000000), # left, bottom
(1., 9.57712125471966000000), # left, top
(2., 10.57712125471970000000), # right, top
(2., 8.97712125471966000000), # right, bottom
(0., 0.), # ignored
]
codes = [Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
path = Path(verts, codes)
# ------------------------
#for the Kewley 01 data
verts2 = [
(2.4, 9.243038049), # left, bottom
(2.4, 11.0211893), # left, top
(2.6, 11.0211893), # right, top
(2.6, 9.243038049), # right, bottom
(0, 0.), # ignored
]
path = Path(verts, codes)
path2 = Path(verts2, codes)
# -------------------------
#for the Moy et al data
verts3 = [
(1., 6.86712125471966000000), # left, bottom
(1., 10.18712125471970000000), # left, top
(3., 12.18712125471970000000), # right, top
(3., 8.86712125471966000000), # right, bottom
(0., 0.), # ignored
]
path = Path(verts, codes)
path3 = Path(verts3, codes)
# ------------------------------------------------------------------------------------------------------
#the routine to add patches for others peoples' data onto our plots.
def add_patches(ax):
patch3 = patches.PathPatch(path3, facecolor='yellow', lw=0)
patch2 = patches.PathPatch(path2, facecolor='green', lw=0)
patch = patches.PathPatch(path, facecolor='red', lw=0)
ax1.add_patch(patch3)
ax1.add_patch(patch2)
ax1.add_patch(patch)
# ------------------------------------------------------------------------------------------------------
#the subplot routine
def add_sub_plot(sub_num):
numplots = 16
plt.subplot(numplots/4.,4,sub_num)
rbf = scipy.interpolate.Rbf(x, y, z[:,sub_num-1], function='linear')
zi = rbf(xi, yi)
contour = plt.contour(xi,yi,zi, levels, colors='c', linestyles = 'dashed')
contour2 = plt.contour(xi,yi,zi, levels2, colors='k', linewidths=1.5)
plt.scatter(max_values[line[sub_num-1],2], max_values[line[sub_num-1],3], c ='k',marker = '*')
plt.annotate(headers[line[sub_num-1]], xy=(8,11), xytext=(6,8.5), fontsize = 10)
plt.annotate(max_values[line[sub_num-1],0], xy= (max_values[line[sub_num-1],2], max_values[line[sub_num-1],3]), xytext = (0, -10), textcoords = 'offset points', ha = 'right', va = 'bottom', fontsize=10)
if sub_num == numplots / 2.:
print "half the plots are complete"
#axis limits
yt_min = 8
yt_max = 23
xt_min = 0
xt_max = 12
plt.ylim(yt_min,yt_max)
plt.xlim(xt_min,xt_max)
plt.yticks(arange(yt_min+1,yt_max,1),fontsize=10)
plt.xticks(arange(xt_min+1,xt_max,1), fontsize = 10)
if sub_num in [2,3,4,6,7,8,10,11,12,14,15,16]:
plt.tick_params(labelleft = 'off')
else:
plt.tick_params(labelleft = 'on')
plt.ylabel('Log ($ \phi _{\mathrm{H}} $)')
if sub_num in [1,2,3,4,5,6,7,8,9,10,11,12]:
plt.tick_params(labelbottom = 'off')
else:
plt.tick_params(labelbottom = 'on')
plt.xlabel('Log($n _{\mathrm{H}} $)')
if sub_num == 1:
plt.yticks(arange(yt_min+1,yt_max+1,1),fontsize=10)
if sub_num == 13:
plt.yticks(arange(yt_min,yt_max,1),fontsize=10)
plt.xticks(arange(xt_min,xt_max,1), fontsize = 10)
if sub_num == 16 :
plt.xticks(arange(xt_min+1,xt_max+1,1), fontsize = 10)
# ---------------------------------------------------
#this is where the grid information (phi and hdens) is read in and saved to grid.
grid = [];
with open(inputfile, 'rb') as f:
csvReader = csv.reader(f,delimiter='\t')
for row in csvReader:
grid.append(row);
grid = asarray(grid)
#here is where the data for each line is read in and saved to dataEmissionlines
dataEmissionlines = [];
with open(inputfile2, 'rb') as f:
csvReader = csv.reader(f,delimiter='\t')
headers = csvReader.next()
for row in csvReader:
dataEmissionlines.append(row);
dataEmissionlines = asarray(dataEmissionlines)
print "import files complete"
# ---------------------------------------------------
#for grid
phi_values = grid[1:len(dataEmissionlines)+1,6]
hdens_values = grid[1:len(dataEmissionlines)+1,7]
#for lines
headers = headers[1:]
Emissionlines = dataEmissionlines[:, 1:]
concatenated_data = zeros((len(Emissionlines),len(Emissionlines[0])))
max_values = zeros((len(Emissionlines[0]),4))
#select the scaling factor
#for 1215
#incident = Emissionlines[1:,4]
#for 4860
incident = Emissionlines[:,57]
#take the ratio of incident and all the lines and put it all in an array concatenated_data
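#(each entry is log10( 4860 * F_line / F_Hbeta ), with column 57 -- the 4860
#reference selected above -- as the denominator; entries whose log ratio would
#be negative are left at 0)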
for i in range(len(Emissionlines)):
	for j in range(len(Emissionlines[0])):
		if math.log(4860.*(float(Emissionlines[i,j])/float(Emissionlines[i,57])), 10) > 0:
			concatenated_data[i,j] = math.log(4860.*(float(Emissionlines[i,j])/float(Emissionlines[i,57])), 10)
		else:
			concatenated_data[i,j] = 0
# for 1215
#for i in range(len(Emissionlines)):
# for j in range(len(Emissionlines[0])):
# if math.log(1215.*(float(Emissionlines[i,j])/float(Emissionlines[i,4])), 10) > 0:
# concatenated_data[i,j] = math.log(1215.*(float(Emissionlines[i,j])/float(Emissionlines[i,4])), 10)
# else:
# concatenated_data[i,j] == 0
#find the maxima to plot onto the contour plots
for j in range(len(concatenated_data[0])):
max_values[j,0] = max(concatenated_data[:,j])
max_values[j,1] = argmax(concatenated_data[:,j], axis = 0)
max_values[j,2] = hdens_values[max_values[j,1]]
max_values[j,3] = phi_values[max_values[j,1]]
#to round off the maxima
max_values[:,0] = [ '%.1f' % elem for elem in max_values[:,0] ]
print "data arranged"
# ---------------------------------------------------
#Creating the grid to interpolate with for contours.
gridarray = zeros((len(Emissionlines),2))
gridarray[:,0] = hdens_values
gridarray[:,1] = phi_values
x = gridarray[:,0]
y = gridarray[:,1]
#change desired lines here!
line = [36, #NE 3 3343A
38, #BA C
39, #3646
40, #3726
41, #3727
42, #3729
43, #3869
44, #3889
45, #3933
46, #4026
47, #4070
48, #4074
49, #4078
50, #4102
51, #4340
52] #4363
#create z array for this plot
z = concatenated_data[:,line[:]]
# ---------------------------------------------------
# Interpolate
print "starting interpolation"
xi, yi = linspace(x.min(), x.max(), 10), linspace(y.min(), y.max(), 10)
xi, yi = meshgrid(xi, yi)
# ---------------------------------------------------
print "interpolatation complete; now plotting"
#plot
plt.subplots_adjust(wspace=0, hspace=0) #remove space between plots
levels = arange(10**-1,10, .2)
levels2 = arange(10**-2,10**2, 1)
plt.suptitle("Optical Lines", fontsize=14)
# ---------------------------------------------------
for i in range(16):
	add_sub_plot(i+1)
ax1 = plt.subplot(4,4,1)
add_patches(ax1)
print "complete"
plt.savefig('optical_lines.pdf')
plt.clf()
| gpl-2.0 |
datapythonista/pandas | pandas/tests/io/test_compression.py | 3 | 8199 | import io
import os
from pathlib import Path
import subprocess
import sys
import textwrap
import time
import pytest
import pandas as pd
import pandas._testing as tm
import pandas.io.common as icom
@pytest.mark.parametrize(
"obj",
[
pd.DataFrame(
100 * [[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
columns=["X", "Y", "Z"],
),
pd.Series(100 * [0.123456, 0.234567, 0.567567], name="X"),
],
)
@pytest.mark.parametrize("method", ["to_pickle", "to_json", "to_csv"])
def test_compression_size(obj, method, compression_only):
with tm.ensure_clean() as path:
getattr(obj, method)(path, compression=compression_only)
compressed_size = os.path.getsize(path)
getattr(obj, method)(path, compression=None)
uncompressed_size = os.path.getsize(path)
assert uncompressed_size > compressed_size
@pytest.mark.parametrize(
"obj",
[
pd.DataFrame(
100 * [[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
columns=["X", "Y", "Z"],
),
pd.Series(100 * [0.123456, 0.234567, 0.567567], name="X"),
],
)
@pytest.mark.parametrize("method", ["to_csv", "to_json"])
def test_compression_size_fh(obj, method, compression_only):
with tm.ensure_clean() as path:
with icom.get_handle(path, "w", compression=compression_only) as handles:
getattr(obj, method)(handles.handle)
assert not handles.handle.closed
compressed_size = os.path.getsize(path)
with tm.ensure_clean() as path:
with icom.get_handle(path, "w", compression=None) as handles:
getattr(obj, method)(handles.handle)
assert not handles.handle.closed
uncompressed_size = os.path.getsize(path)
assert uncompressed_size > compressed_size
@pytest.mark.parametrize(
"write_method, write_kwargs, read_method",
[
("to_csv", {"index": False}, pd.read_csv),
("to_json", {}, pd.read_json),
("to_pickle", {}, pd.read_pickle),
],
)
def test_dataframe_compression_defaults_to_infer(
write_method, write_kwargs, read_method, compression_only
):
# GH22004
input = pd.DataFrame([[1.0, 0, -4], [3.4, 5, 2]], columns=["X", "Y", "Z"])
extension = icom._compression_to_extension[compression_only]
with tm.ensure_clean("compressed" + extension) as path:
getattr(input, write_method)(path, **write_kwargs)
output = read_method(path, compression=compression_only)
tm.assert_frame_equal(output, input)
@pytest.mark.parametrize(
"write_method,write_kwargs,read_method,read_kwargs",
[
("to_csv", {"index": False, "header": True}, pd.read_csv, {"squeeze": True}),
("to_json", {}, pd.read_json, {"typ": "series"}),
("to_pickle", {}, pd.read_pickle, {}),
],
)
def test_series_compression_defaults_to_infer(
write_method, write_kwargs, read_method, read_kwargs, compression_only
):
# GH22004
input = pd.Series([0, 5, -2, 10], name="X")
extension = icom._compression_to_extension[compression_only]
with tm.ensure_clean("compressed" + extension) as path:
getattr(input, write_method)(path, **write_kwargs)
output = read_method(path, compression=compression_only, **read_kwargs)
tm.assert_series_equal(output, input, check_names=False)
def test_compression_warning(compression_only):
# Assert that passing a file object to to_csv while explicitly specifying a
# compression protocol triggers a RuntimeWarning, as per GH21227.
df = pd.DataFrame(
100 * [[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
columns=["X", "Y", "Z"],
)
with tm.ensure_clean() as path:
with icom.get_handle(path, "w", compression=compression_only) as handles:
with tm.assert_produces_warning(RuntimeWarning):
df.to_csv(handles.handle, compression=compression_only)
def test_compression_binary(compression_only):
"""
Binary file handles support compression.
GH22555
"""
df = tm.makeDataFrame()
# with a file
with tm.ensure_clean() as path:
with open(path, mode="wb") as file:
df.to_csv(file, mode="wb", compression=compression_only)
file.seek(0) # file shouldn't be closed
tm.assert_frame_equal(
df, pd.read_csv(path, index_col=0, compression=compression_only)
)
# with BytesIO
file = io.BytesIO()
df.to_csv(file, mode="wb", compression=compression_only)
file.seek(0) # file shouldn't be closed
tm.assert_frame_equal(
df, pd.read_csv(file, index_col=0, compression=compression_only)
)
def test_gzip_reproducibility_file_name():
"""
Gzip should create reproducible archives with mtime.
Note: Archives created with different filenames will still be different!
GH 28103
"""
df = tm.makeDataFrame()
compression_options = {"method": "gzip", "mtime": 1}
# test for filename
with tm.ensure_clean() as path:
path = Path(path)
df.to_csv(path, compression=compression_options)
time.sleep(2)
output = path.read_bytes()
df.to_csv(path, compression=compression_options)
assert output == path.read_bytes()
def test_gzip_reproducibility_file_object():
"""
Gzip should create reproducible archives with mtime.
GH 28103
"""
df = tm.makeDataFrame()
compression_options = {"method": "gzip", "mtime": 1}
# test for file object
buffer = io.BytesIO()
df.to_csv(buffer, compression=compression_options, mode="wb")
output = buffer.getvalue()
time.sleep(2)
buffer = io.BytesIO()
df.to_csv(buffer, compression=compression_options, mode="wb")
assert output == buffer.getvalue()
def test_with_missing_lzma():
"""Tests if import pandas works when lzma is not present."""
# https://github.com/pandas-dev/pandas/issues/27575
code = textwrap.dedent(
"""\
import sys
sys.modules['lzma'] = None
import pandas
"""
)
subprocess.check_output([sys.executable, "-c", code], stderr=subprocess.PIPE)
def test_with_missing_lzma_runtime():
"""Tests if RuntimeError is hit when calling lzma without
having the module available.
"""
code = textwrap.dedent(
"""
import sys
import pytest
sys.modules['lzma'] = None
import pandas as pd
df = pd.DataFrame()
with pytest.raises(RuntimeError, match='lzma module'):
df.to_csv('foo.csv', compression='xz')
"""
)
subprocess.check_output([sys.executable, "-c", code], stderr=subprocess.PIPE)
@pytest.mark.parametrize(
"obj",
[
pd.DataFrame(
100 * [[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
columns=["X", "Y", "Z"],
),
pd.Series(100 * [0.123456, 0.234567, 0.567567], name="X"),
],
)
@pytest.mark.parametrize("method", ["to_pickle", "to_json", "to_csv"])
def test_gzip_compression_level(obj, method):
# GH33196
with tm.ensure_clean() as path:
getattr(obj, method)(path, compression="gzip")
compressed_size_default = os.path.getsize(path)
getattr(obj, method)(path, compression={"method": "gzip", "compresslevel": 1})
compressed_size_fast = os.path.getsize(path)
assert compressed_size_default < compressed_size_fast
@pytest.mark.parametrize(
"obj",
[
pd.DataFrame(
100 * [[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
columns=["X", "Y", "Z"],
),
pd.Series(100 * [0.123456, 0.234567, 0.567567], name="X"),
],
)
@pytest.mark.parametrize("method", ["to_pickle", "to_json", "to_csv"])
def test_bzip_compression_level(obj, method):
"""GH33196 bzip needs file size > 100k to show a size difference between
compression levels, so here we just check if the call works when
compression is passed as a dict.
"""
with tm.ensure_clean() as path:
getattr(obj, method)(path, compression={"method": "bz2", "compresslevel": 1})
| bsd-3-clause |
zooniverse/aggregation | experimental/penguins/newCluster.py | 2 | 11987 | #!/usr/bin/env python
__author__ = 'greg'
from sklearn.cluster import DBSCAN
from sklearn.cluster import AffinityPropagation
import numpy as np
import matplotlib.pyplot as plt
import csv
import sys
import os
import pymongo
import matplotlib.cbook as cbook
import cPickle as pickle
import shutil
import urllib
import math
def dist(c1,c2):
return math.sqrt((c1[0]-c2[0])**2 + (c1[1]-c2[1])**2)
def adaptiveDBSCAN(XYpts,user_ids):
if XYpts == []:
return []
pts_in_each_cluster = []
users_in_each_cluster = []
cluster_centers = []
#increase the epsilon until we don't have any nearby clusters corresponding to non-overlapping
#sets of users
X = np.array(XYpts)
for epsilon in [5,10,15,20,25,30]:
db = DBSCAN(eps=epsilon, min_samples=2).fit(X)
labels = db.labels_
pts_in_each_cluster = []
users_in_each_cluster = []
cluster_centers = []
for k in sorted(set(labels)):
if k == -1:
continue
class_member_mask = (labels == k)
pts_in_cluster = list(X[class_member_mask])
xSet,ySet = zip(*pts_in_cluster)
cluster_centers.append((np.mean(xSet),np.mean(ySet)))
pts_in_each_cluster.append(pts_in_cluster[:])
users_in_each_cluster.append([u for u,l in zip(user_ids,labels) if l == k])
#do we have any adjacent clusters with non-overlapping sets of users
#if so, we should merge them by increasing the epsilon value
cluster_compare = []
for cluster_index, (c1,users) in enumerate(zip(cluster_centers,users_in_each_cluster)):
        for c2,users2 in zip(cluster_centers[cluster_index+1:],users_in_each_cluster[cluster_index+1:]):
overlappingUsers = [u for u in users if u in users2]
cluster_compare.append((dist(c1,c2),overlappingUsers))
cluster_compare.sort(key = lambda x:x[0])
needToMerge = [] in [c[1] for c in cluster_compare[:10]]
if not(needToMerge):
break
print epsilon
print [c[1] for c in cluster_compare[:10]]
centers_to_return = []
#do we need to split any clusters?
for cluster_index in range(len(cluster_centers)):
print "splitting"
needToSplit = (sorted(users_in_each_cluster[cluster_index]) != sorted(list(set(users_in_each_cluster[cluster_index]))))
if needToSplit:
subcluster_centers = []
X = np.array(pts_in_each_cluster[cluster_index])
for epsilon in [30,25,20,15,10,5,1,0.1,0.01]:
db = DBSCAN(eps=epsilon, min_samples=2).fit(X)
labels = db.labels_
subcluster_centers = []
needToSplit = False
for k in sorted(set(labels)):
if k == -1:
continue
class_member_mask = (labels == k)
users_in_subcluster = [u for u,l in zip(users_in_each_cluster[cluster_index],labels) if l == k]
needToSplit = (sorted(users_in_subcluster) != sorted(list(set(users_in_subcluster))))
if needToSplit:
break
pts_in_cluster = list(X[class_member_mask])
xSet,ySet = zip(*pts_in_cluster)
subcluster_centers.append((np.mean(xSet),np.mean(ySet)))
if not(needToSplit):
break
assert not(needToSplit)
centers_to_return.extend(subcluster_centers)
#if needToSplit:
# print pts_in_each_cluster[cluster_index]
# print users_in_each_cluster[cluster_index]
#else:
else:
centers_to_return.append(cluster_centers[cluster_index])
return centers_to_return
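#Illustrative sketch of how adaptiveDBSCAN gets called -- the marker positions
#and user ids below are made up, purely for illustration:
# example_markers = [(10.0, 12.0), (11.0, 12.5), (10.5, 11.8), (250.0, 300.0)]
# example_users = ["ip_1", "ip_2", "ip_3", "ip_1"]
# example_centers = adaptiveDBSCAN(example_markers, example_users)
#each returned center is an (x,y) tuple averaged over one cluster of markings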
# def cluster(XYpts,user_ids):
# if XYpts == []:
# return []
#
# #find out which points are noise - don't care about the actual clusters
# needToSplit = False
# X = np.array(XYpts)
#
#
# #X = np.array([XYpts[i] for i in signal_pts])
# #user_ids = [user_ids[i] for i in signal_pts]
# oldCenters = None
#
# needToMerge = False
# needToSplit = False
#
# cluster_list = []
# usersInCluster = []
# centers = []
#
# for pref in [0,-100,-200,-400,-800,-1200,-2000,-2200,-2400,-2700,-3000,-3500,-4000,-5000,-6000,-10000]:
# #now run affinity propagation to find the actual clusters
# af = AffinityPropagation(preference=pref).fit(X)
# #cluster_centers_indices = af.cluster_centers_indices_
# labels = af.labels_
#
#
#
# unique_labels = set(labels)
#
# usersInCluster = []
# centers = []
# cluster_list = []
# for k in sorted(unique_labels):
# assert(k != -1)
# #print k
# usersInCluster.append([u for u,l in zip(user_ids,labels) if l == k])
# #print XYpts
# #print user_ids
#
# class_member_mask = (labels == k)
# pts_in_cluster = list(X[class_member_mask])
# xSet,ySet = zip(*pts_in_cluster)
# centers.append((np.mean(xSet),np.mean(ySet)))
# cluster_list.append(pts_in_cluster[:])
#
# compare = []
# for cluster_index, (c1,users) in enumerate(zip(centers,usersInCluster)):
# for cluster_index, (c2,users2) in enumerate(zip(centers[cluster_index+1:],usersInCluster[cluster_index+1:])):
# overlappingUsers = [u for u in users if u in users2]
# compare.append((dist(c1,c2),overlappingUsers))
#
# #needToSplit = False
# #for users in usersInCluster:
# # needToSplit = (sorted(users) != sorted(list(set(users))))
# # if needToSplit:
# # break
#
# compare.sort(key = lambda x:x[0])
#
# needToMerge = ([] in [c[1] for c in compare[:3]]) and (compare[-1][0] <= 200)
#
# #if needToSplit:
# # assert(oldCenters != None)
# # return oldCenters
# if not(needToMerge):
# break
#
# oldCenters = centers[:]
#
# if needToMerge:
# print compare[0:3]
# assert not(needToMerge)
#
# centers_to_return = []
# for cluster_index in range(len(cluster_list)):
# if len(list(set(usersInCluster[cluster_index]))) == 1:
# continue
# #split any individual cluster
# needToSplit = (sorted(usersInCluster[cluster_index]) != sorted(list(set(usersInCluster[cluster_index]))))
# if needToSplit:
# #print cluster_list[cluster_index]
# X = np.array(cluster_list[cluster_index])
# sub_center_list = []
# for pref in [-2400,-2200,-2000,-1200,-800,-400,-200,-100,-75,-50,-30,0]:
# af = AffinityPropagation(preference=pref).fit(X)
# #cluster_centers_indices = af.cluster_centers_indices_
# labels = af.labels_
# try:
# unique_labels = set(labels)
# except TypeError:
# print pref
# print X
# print usersInCluster[cluster_index]
# print labels
# raise
# #get the new "sub"clusters and check to see if we need to split even more
# for k in sorted(unique_labels):
# users = [u for u,l in zip(usersInCluster[cluster_index],labels) if l == k]
# needToSplit = (sorted(users) != sorted(list(set(users))))
#
# if needToSplit:
# break
#
# #add this new sub-cluster onto the list
# class_member_mask = (labels == k)
# pts_in_cluster = list(X[class_member_mask])
# xSet,ySet = zip(*pts_in_cluster)
# sub_center_list.append((np.mean(xSet),np.mean(ySet)))
#
# if not(needToSplit):
# break
#
# #if pref == 0:
# # print sub_center_list
# assert not(needToSplit)
# #print pref
# centers_to_return.extend([c for c in sub_center_list if len(c) > 1])
#
#
#
# else:
# centers_to_return.append(centers[cluster_index])
#
# assert not(needToSplit)
# return centers
client = pymongo.MongoClient()
db = client['penguin_2014-09-19']
collection = db["penguin_classifications"]
collection2 = db["penguin_subjects"]
images = {}
pts = {}
ids = {}
userCount = {}
errorCount = 0
total = 0
at_5 = {}
at_10 = {}
center_5 = {}
center_10 = {}
step_1 = 5
step_2 = 8
toSkip = ["APZ0002uw3","APZ0001v9f","APZ00010ww","APZ0000p99","APZ0002jc3","APZ00014t4","APZ0000v0n","APZ0000ifx","APZ0002pch","APZ0003kls","APZ0001iv3","APZ0003auc","APZ0002ezn"]
mainSubject = "APZ0003fgt" #APZ0001jre
toPlot = None
numClassifications = []
for r in collection.find():
subject_id = r["subjects"][0]["zooniverse_id"]
total += 1
if subject_id != "APZ0003kls":# in toSkip:
continue
if not(subject_id in pts):
pts[subject_id] = []
userCount[subject_id] = 0
ids[subject_id] = []
userCount[subject_id] += 1
animalsPresent = r["annotations"][0]["value"] == "yes"
#print animalsPresent
if animalsPresent:
c = 0
for marking_index in r["annotations"][1]["value"]:
try:
marking = r["annotations"][1]["value"][marking_index]
if True: # marking["value"] == "adult":
x = float(marking["x"])
y = float(marking["y"])
ip = r["user_ip"]
alreadyInList = False
try:
index = pts[subject_id].index((x,y))
if ids[subject_id][index] == ip:
alreadyInList = True
except ValueError:
pass
if not(alreadyInList):
pts[subject_id].append((x,y))
ids[subject_id].append(ip)
c += 1
except TypeError:
errorCount += 1
userCount[subject_id] += -1
break
except ValueError:
errorCount += 1
continue
numClassifications.append(c)
if userCount[subject_id] in [step_2]:
cluster_center = adaptiveDBSCAN(pts[subject_id],ids[subject_id])
mainSubject = subject_id
if cluster_center != []:
break
if userCount[subject_id] == step_1:
pass
#at_5[subject_id] = len(cluster_center)
else:
at_10[subject_id] = len(cluster_center)
# inBoth = [subject_id for subject_id in at_10 if (subject_id in at_5)]
# # print len(inBoth)
# x = [at_5[subject_id] for subject_id in inBoth]
# y = [at_10[subject_id] for subject_id in inBoth]
# print zip(inBoth,zip(x,y))
# plt.plot((0,100),(0,100),'--')
# # #print x
# # #print y
# plt.plot(x,y,'.')
# plt.show()
# print userCount
# print numClassifications
#
#
print mainSubject
r2 = collection2.find_one({"zooniverse_id":mainSubject})
url = r2["location"]["standard"]
if not(os.path.isfile("/home/greg/Databases/penguins/images/"+mainSubject+".JPG")):
urllib.urlretrieve (url, "/home/greg/Databases/penguins/images/"+mainSubject+".JPG")
image_file = cbook.get_sample_data("/home/greg/Databases/penguins/images/"+mainSubject+".JPG")
image = plt.imread(image_file)
fig, ax = plt.subplots()
im = ax.imshow(image)
#plt.show()
#
if cluster_center != []:
x,y = zip(*cluster_center)
plt.plot(x,y,'.',color='blue')
#
# x,y = zip(*center_5[mainSubject])
# plt.plot(x,y,'.',color='red')
# x,y = zip(*center_10[mainSubject])
# plt.plot(x,y,'.',color='green')
plt.show() | apache-2.0 |
shl198/Pipeline | Modules/PacBioEDA/PacBio_Productivity.py | 3 | 2900 | #!/usr/bin/env python
# Copyright (C) 2011 Genome Research Limited -- See full notice at end
# of module.
# Create a plot of ZMW productivity by x/y position on the
# SMRTcell. First parameter is input .bas.h5 file. Output png file is
# optional command line parameter, defaulting to productivity.png.
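# Example invocation (file names here are hypothetical):
#   PacBio_Productivity.py  my_movie.bas.h5  --output my_movie_productivity.png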
import sys
import optparse
import numpy as np
import h5py
from tt_log import logger
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
DEF_OUTPUT = 'productivity.png'
def main ():
logger.debug("%s starting" % sys.argv[0])
opt, args = getParms()
infile_name = args[0]
infile = h5py.File (infile_name, 'r')
colours = ('grey', 'red', 'green')
legends = ('non-seq', 'prod-0', 'prod-1')
top = h5py.Group (infile, '/')
ZMW = top["PulseData/BaseCalls/ZMW"]
ZMWMetrics = top["PulseData/BaseCalls/ZMWMetrics"]
holeStatus = ZMW["HoleStatus"]
holeXY = ZMW["HoleXY"]
holeProd = ZMWMetrics["Productivity"]
nonseqHoles = holeStatus[:]!=0 # ZMWs other than sequencing
prod0Holes = np.logical_and(holeProd[:]==0, np.logical_not(nonseqHoles))
prod1Holes = np.logical_and(holeProd[:]==1, np.logical_not(nonseqHoles))
holesByType = (nonseqHoles, prod0Holes, prod1Holes)
for which in xrange(len(holesByType)):
whichHoles = holesByType[which]
howMany = sum(whichHoles)
logger.debug("%5d %s" % (howMany, legends[which]));
if howMany > 0:
plt.scatter (holeXY[whichHoles,0], holeXY[whichHoles,1], \
s=1, c=colours[which], edgecolor='face', \
label="%5d %s" % (howMany, legends[which]))
plt.axis ('equal')
plt.legend (scatterpoints=3, prop={'size':8})
plt.savefig (opt.output)
infile.close()
logger.debug("complete")
def getParms (): # use default input sys.argv[1:]
parser = optparse.OptionParser(usage='%prog [options] <bas_file>')
parser.add_option ('--output', help='Output file name (def: %default)')
parser.set_defaults (output=DEF_OUTPUT)
opt, args = parser.parse_args()
return opt, args
if __name__ == "__main__":
main()
# Copyright (C) 2011 Genome Research Limited
#
# This library is free software. You can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
| mit |
zfrenchee/pandas | doc/sphinxext/ipython_sphinxext/ipython_directive.py | 1 | 37812 | # -*- coding: utf-8 -*-
"""
Sphinx directive to support embedded IPython code.
This directive allows pasting of entire interactive IPython sessions, prompts
and all, and their code will actually get re-executed at doc build time, with
all prompts renumbered sequentially. It also allows you to input code as a pure
python input by giving the argument python to the directive. The output looks
like an interactive ipython section.
To enable this directive, simply list it in your Sphinx ``conf.py`` file
(making sure the directory where you placed it is visible to sphinx, as is
needed for all Sphinx directives). For example, to enable syntax highlighting
and the IPython directive::
extensions = ['IPython.sphinxext.ipython_console_highlighting',
'IPython.sphinxext.ipython_directive']
The IPython directive outputs code-blocks with the language 'ipython'. So
if you do not have the syntax highlighting extension enabled as well, then
all rendered code-blocks will be uncolored. By default this directive assumes
that your prompts are unchanged IPython ones, but this can be customized.
The configurable options that can be placed in conf.py are:
ipython_savefig_dir:
The directory in which to save the figures. This is relative to the
Sphinx source directory. The default is `html_static_path`.
ipython_rgxin:
The compiled regular expression to denote the start of IPython input
lines. The default is re.compile('In \[(\d+)\]:\s?(.*)\s*'). You
shouldn't need to change this.
ipython_rgxout:
The compiled regular expression to denote the start of IPython output
lines. The default is re.compile('Out\[(\d+)\]:\s?(.*)\s*'). You
shouldn't need to change this.
ipython_promptin:
The string to represent the IPython input prompt in the generated ReST.
The default is 'In [%d]:'. This expects that the line numbers are used
in the prompt.
ipython_promptout:
    The string to represent the IPython output prompt in the generated ReST.
    The default is 'Out[%d]:'. This expects that the line numbers are used
in the prompt.
ipython_mplbackend:
The string which specifies if the embedded Sphinx shell should import
Matplotlib and set the backend. The value specifies a backend that is
passed to `matplotlib.use()` before any lines in `ipython_execlines` are
executed. If not specified in conf.py, then the default value of 'agg' is
used. To use the IPython directive without matplotlib as a dependency, set
the value to `None`. It may end up that matplotlib is still imported
if the user specifies so in `ipython_execlines` or makes use of the
@savefig pseudo decorator.
ipython_execlines:
A list of strings to be exec'd in the embedded Sphinx shell. Typical
usage is to make certain packages always available. Set this to an empty
list if you wish to have no imports always available. If specified in
conf.py as `None`, then it has the effect of making no imports available.
If omitted from conf.py altogether, then the default value of
['import numpy as np', 'import matplotlib.pyplot as plt'] is used.
ipython_holdcount
When the @suppress pseudo-decorator is used, the execution count can be
incremented or not. The default behavior is to hold the execution count,
corresponding to a value of `True`. Set this to `False` to increment
the execution count after each suppressed command.
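For instance, a project's ``conf.py`` might set a few of these options
explicitly (the values shown are simply the defaults, purely for illustration)::
    ipython_savefig_dir = None
    ipython_promptin = 'In [%d]:'
    ipython_promptout = 'Out[%d]:'
    ipython_holdcount = True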
As an example, to use the IPython directive when `matplotlib` is not available,
one sets the backend to `None`::
ipython_mplbackend = None
An example usage of the directive is:
.. code-block:: rst
.. ipython::
In [1]: x = 1
In [2]: y = x**2
In [3]: print(y)
See http://matplotlib.org/sampledoc/ipython_directive.html for additional
documentation.
ToDo
----
- Turn the ad-hoc test() function into a real test suite.
- Break up ipython-specific functionality from matplotlib stuff into better
separated code.
Authors
-------
- John D Hunter: original author.
- Fernando Perez: refactoring, documentation, cleanups, port to 0.11.
- Václav Šmilauer <eudoxos-AT-arcig.cz>: Prompt generalizations.
- Skipper Seabold, refactoring, cleanups, pure python addition
"""
from __future__ import print_function
from __future__ import unicode_literals
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# Stdlib
import os
import re
import sys
import tempfile
import ast
from pandas.compat import zip, range, map, lmap, u, text_type, cStringIO as StringIO
import warnings
# To keep compatibility with various python versions
try:
from hashlib import md5
except ImportError:
from md5 import md5
# Third-party
import sphinx
from docutils.parsers.rst import directives
from docutils import nodes
from sphinx.util.compat import Directive
# Our own
try:
from traitlets.config import Config
except ImportError:
from IPython import Config
from IPython import InteractiveShell
from IPython.core.profiledir import ProfileDir
from IPython.utils import io
from IPython.utils.py3compat import PY3
if PY3:
from io import StringIO
else:
from StringIO import StringIO
#-----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------
# for tokenizing blocks
COMMENT, INPUT, OUTPUT = range(3)
#-----------------------------------------------------------------------------
# Functions and class declarations
#-----------------------------------------------------------------------------
def block_parser(part, rgxin, rgxout, fmtin, fmtout):
"""
part is a string of ipython text, comprised of at most one
input, one output, comments, and blank lines. The block parser
parses the text into a list of::
blocks = [ (TOKEN0, data0), (TOKEN1, data1), ...]
where TOKEN is one of [COMMENT | INPUT | OUTPUT ] and
data is, depending on the type of token::
COMMENT : the comment string
INPUT: the (DECORATOR, INPUT_LINE, REST) where
DECORATOR: the input decorator (or None)
INPUT_LINE: the input as string (possibly multi-line)
REST : any stdout generated by the input line (not OUTPUT)
OUTPUT: the output string, possibly multi-line
"""
block = []
lines = part.split('\n')
N = len(lines)
i = 0
decorator = None
while 1:
if i==N:
# nothing left to parse -- the last line
break
line = lines[i]
i += 1
line_stripped = line.strip()
if line_stripped.startswith('#'):
block.append((COMMENT, line))
continue
if line_stripped.startswith('@'):
# we're assuming at most one decorator -- may need to
# rethink
decorator = line_stripped
continue
# does this look like an input line?
matchin = rgxin.match(line)
if matchin:
lineno, inputline = int(matchin.group(1)), matchin.group(2)
# the ....: continuation string
continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
Nc = len(continuation)
# input lines can continue on for more than one line, if
# we have a '\' line continuation char or a function call
# echo line 'print'. The input line can only be
# terminated by the end of the block or an output line, so
# we parse out the rest of the input line if it is
# multiline as well as any echo text
rest = []
while i<N:
# look ahead; if the next line is blank, or a comment, or
# an output line, we're done
nextline = lines[i]
matchout = rgxout.match(nextline)
#print "nextline=%s, continuation=%s, starts=%s"%(nextline, continuation, nextline.startswith(continuation))
if matchout or nextline.startswith('#'):
break
elif nextline.startswith(continuation):
nextline = nextline[Nc:]
if nextline and nextline[0] == ' ':
nextline = nextline[1:]
inputline += '\n' + nextline
else:
rest.append(nextline)
i+= 1
block.append((INPUT, (decorator, inputline, '\n'.join(rest))))
continue
# if it looks like an output line grab all the text to the end
# of the block
matchout = rgxout.match(line)
if matchout:
lineno, output = int(matchout.group(1)), matchout.group(2)
if i<N-1:
output = '\n'.join([output] + lines[i:])
block.append((OUTPUT, output))
break
return block
class DecodingStringIO(StringIO, object):
def __init__(self,buf='',encodings=('utf8',), *args, **kwds):
super(DecodingStringIO, self).__init__(buf, *args, **kwds)
self.set_encodings(encodings)
def set_encodings(self, encodings):
self.encodings = encodings
def write(self,data):
if isinstance(data, text_type):
return super(DecodingStringIO, self).write(data)
else:
for enc in self.encodings:
try:
data = data.decode(enc)
return super(DecodingStringIO, self).write(data)
except :
pass
# default to brute utf8 if no encoding succeeded
return super(DecodingStringIO, self).write(data.decode('utf8', 'replace'))
class EmbeddedSphinxShell(object):
"""An embedded IPython instance to run inside Sphinx"""
def __init__(self, exec_lines=None,state=None):
self.cout = DecodingStringIO(u'')
if exec_lines is None:
exec_lines = []
self.state = state
# Create config object for IPython
config = Config()
config.InteractiveShell.autocall = False
config.InteractiveShell.autoindent = False
config.InteractiveShell.colors = 'NoColor'
# create a profile so instance history isn't saved
tmp_profile_dir = tempfile.mkdtemp(prefix='profile_')
profname = 'auto_profile_sphinx_build'
pdir = os.path.join(tmp_profile_dir,profname)
profile = ProfileDir.create_profile_dir(pdir)
# Create and initialize global ipython, but don't start its mainloop.
# This will persist across different EmbededSphinxShell instances.
IP = InteractiveShell.instance(config=config, profile_dir=profile)
# io.stdout redirect must be done after instantiating InteractiveShell
io.stdout = self.cout
io.stderr = self.cout
# For debugging, so we can see normal output, use this:
#from IPython.utils.io import Tee
#io.stdout = Tee(self.cout, channel='stdout') # dbg
#io.stderr = Tee(self.cout, channel='stderr') # dbg
# Store a few parts of IPython we'll need.
self.IP = IP
self.user_ns = self.IP.user_ns
self.user_global_ns = self.IP.user_global_ns
self.input = ''
self.output = ''
self.is_verbatim = False
self.is_doctest = False
self.is_suppress = False
# Optionally, provide more detailed information to shell.
self.directive = None
# on the first call to the savefig decorator, we'll import
# pyplot as plt so we can make a call to the plt.gcf().savefig
self._pyplot_imported = False
# Prepopulate the namespace.
for line in exec_lines:
self.process_input_line(line, store_history=False)
def clear_cout(self):
self.cout.seek(0)
self.cout.truncate(0)
def process_input_line(self, line, store_history=True):
"""process the input, capturing stdout"""
stdout = sys.stdout
splitter = self.IP.input_splitter
try:
sys.stdout = self.cout
splitter.push(line)
more = splitter.push_accepts_more()
if not more:
try:
source_raw = splitter.source_raw_reset()[1]
except:
# recent ipython #4504
source_raw = splitter.raw_reset()
self.IP.run_cell(source_raw, store_history=store_history)
finally:
sys.stdout = stdout
def process_image(self, decorator):
"""
# build out an image directive like
# .. image:: somefile.png
# :width 4in
#
# from an input like
# savefig somefile.png width=4in
"""
savefig_dir = self.savefig_dir
source_dir = self.source_dir
saveargs = decorator.split(' ')
filename = saveargs[1]
# insert relative path to image file in source
outfile = os.path.relpath(os.path.join(savefig_dir,filename),
source_dir)
imagerows = ['.. image:: %s'%outfile]
for kwarg in saveargs[2:]:
arg, val = kwarg.split('=')
arg = arg.strip()
val = val.strip()
imagerows.append(' :%s: %s'%(arg, val))
image_file = os.path.basename(outfile) # only return file name
image_directive = '\n'.join(imagerows)
return image_file, image_directive
# Callbacks for each type of token
def process_input(self, data, input_prompt, lineno):
"""
Process data block for INPUT token.
"""
decorator, input, rest = data
image_file = None
image_directive = None
is_verbatim = decorator=='@verbatim' or self.is_verbatim
is_doctest = (decorator is not None and \
decorator.startswith('@doctest')) or self.is_doctest
is_suppress = decorator=='@suppress' or self.is_suppress
is_okexcept = decorator=='@okexcept' or self.is_okexcept
is_okwarning = decorator=='@okwarning' or self.is_okwarning
is_savefig = decorator is not None and \
decorator.startswith('@savefig')
# set the encodings to be used by DecodingStringIO
# to convert the execution output into unicode if
# needed. this attrib is set by IpythonDirective.run()
        # based on the specified block options, defaulting to ['utf8']
self.cout.set_encodings(self.output_encoding)
input_lines = input.split('\n')
if len(input_lines) > 1:
if input_lines[-1] != "":
input_lines.append('') # make sure there's a blank line
# so splitter buffer gets reset
continuation = ' %s:'%''.join(['.']*(len(str(lineno))+2))
if is_savefig:
image_file, image_directive = self.process_image(decorator)
ret = []
is_semicolon = False
# Hold the execution count, if requested to do so.
if is_suppress and self.hold_count:
store_history = False
else:
store_history = True
# Note: catch_warnings is not thread safe
with warnings.catch_warnings(record=True) as ws:
for i, line in enumerate(input_lines):
if line.endswith(';'):
is_semicolon = True
if i == 0:
# process the first input line
if is_verbatim:
self.process_input_line('')
self.IP.execution_count += 1 # increment it anyway
else:
# only submit the line in non-verbatim mode
self.process_input_line(line, store_history=store_history)
formatted_line = '%s %s'%(input_prompt, line)
else:
# process a continuation line
if not is_verbatim:
self.process_input_line(line, store_history=store_history)
formatted_line = '%s %s'%(continuation, line)
if not is_suppress:
ret.append(formatted_line)
if not is_suppress and len(rest.strip()) and is_verbatim:
# the "rest" is the standard output of the
# input, which needs to be added in
# verbatim mode
ret.append(rest)
self.cout.seek(0)
output = self.cout.read()
if not is_suppress and not is_semicolon:
ret.append(output)
elif is_semicolon: # get spacing right
ret.append('')
# context information
filename = self.state.document.current_source
lineno = self.state.document.current_line
# output any exceptions raised during execution to stdout
# unless :okexcept: has been specified.
if not is_okexcept and "Traceback" in output:
s = "\nException in %s at block ending on line %s\n" % (filename, lineno)
s += "Specify :okexcept: as an option in the ipython:: block to suppress this message\n"
sys.stdout.write('\n\n>>>' + ('-' * 73))
sys.stdout.write(s)
sys.stdout.write(output)
sys.stdout.write('<<<' + ('-' * 73) + '\n\n')
# output any warning raised during execution to stdout
# unless :okwarning: has been specified.
if not is_okwarning:
for w in ws:
s = "\nWarning in %s at block ending on line %s\n" % (filename, lineno)
s += "Specify :okwarning: as an option in the ipython:: block to suppress this message\n"
sys.stdout.write('\n\n>>>' + ('-' * 73))
sys.stdout.write(s)
sys.stdout.write('-' * 76 + '\n')
s=warnings.formatwarning(w.message, w.category,
w.filename, w.lineno, w.line)
sys.stdout.write(s)
sys.stdout.write('<<<' + ('-' * 73) + '\n')
self.cout.truncate(0)
return (ret, input_lines, output, is_doctest, decorator, image_file,
image_directive)
def process_output(self, data, output_prompt,
input_lines, output, is_doctest, decorator, image_file):
"""
Process data block for OUTPUT token.
"""
TAB = ' ' * 4
if is_doctest and output is not None:
found = output
found = found.strip()
submitted = data.strip()
if self.directive is None:
source = 'Unavailable'
content = 'Unavailable'
else:
source = self.directive.state.document.current_source
content = self.directive.content
# Add tabs and join into a single string.
content = '\n'.join(TAB + line for line in content)
# Make sure the output contains the output prompt.
ind = found.find(output_prompt)
if ind < 0:
e = ('output does not contain output prompt\n\n'
'Document source: {0}\n\n'
'Raw content: \n{1}\n\n'
'Input line(s):\n{TAB}{2}\n\n'
'Output line(s):\n{TAB}{3}\n\n')
e = e.format(source, content, '\n'.join(input_lines),
repr(found), TAB=TAB)
raise RuntimeError(e)
found = found[len(output_prompt):].strip()
# Handle the actual doctest comparison.
if decorator.strip() == '@doctest':
# Standard doctest
if found != submitted:
e = ('doctest failure\n\n'
'Document source: {0}\n\n'
'Raw content: \n{1}\n\n'
'On input line(s):\n{TAB}{2}\n\n'
'we found output:\n{TAB}{3}\n\n'
'instead of the expected:\n{TAB}{4}\n\n')
e = e.format(source, content, '\n'.join(input_lines),
repr(found), repr(submitted), TAB=TAB)
raise RuntimeError(e)
else:
self.custom_doctest(decorator, input_lines, found, submitted)
def process_comment(self, data):
"""Process data fPblock for COMMENT token."""
if not self.is_suppress:
return [data]
def save_image(self, image_file):
"""
Saves the image file to disk.
"""
self.ensure_pyplot()
command = ('plt.gcf().savefig("%s", bbox_inches="tight", '
'dpi=100)' % image_file)
#print 'SAVEFIG', command # dbg
self.process_input_line('bookmark ipy_thisdir', store_history=False)
self.process_input_line('cd -b ipy_savedir', store_history=False)
self.process_input_line(command, store_history=False)
self.process_input_line('cd -b ipy_thisdir', store_history=False)
self.process_input_line('bookmark -d ipy_thisdir', store_history=False)
self.clear_cout()
def process_block(self, block):
"""
process block from the block_parser and return a list of processed lines
"""
ret = []
output = None
input_lines = None
lineno = self.IP.execution_count
input_prompt = self.promptin % lineno
output_prompt = self.promptout % lineno
image_file = None
image_directive = None
for token, data in block:
if token == COMMENT:
out_data = self.process_comment(data)
elif token == INPUT:
(out_data, input_lines, output, is_doctest, decorator,
image_file, image_directive) = \
self.process_input(data, input_prompt, lineno)
elif token == OUTPUT:
out_data = \
self.process_output(data, output_prompt,
input_lines, output, is_doctest,
decorator, image_file)
if out_data:
ret.extend(out_data)
# save the image files
if image_file is not None:
self.save_image(image_file)
return ret, image_directive
def ensure_pyplot(self):
"""
Ensures that pyplot has been imported into the embedded IPython shell.
Also, makes sure to set the backend appropriately if not set already.
"""
# We are here if the @figure pseudo decorator was used. Thus, it's
# possible that we could be here even if python_mplbackend were set to
# `None`. That's also strange and perhaps worthy of raising an
# exception, but for now, we just set the backend to 'agg'.
if not self._pyplot_imported:
if 'matplotlib.backends' not in sys.modules:
# Then ipython_matplotlib was set to None but there was a
# call to the @figure decorator (and ipython_execlines did
# not set a backend).
#raise Exception("No backend was set, but @figure was used!")
import matplotlib
matplotlib.use('agg')
# Always import pyplot into embedded shell.
self.process_input_line('import matplotlib.pyplot as plt',
store_history=False)
self._pyplot_imported = True
def process_pure_python(self, content):
"""
content is a list of strings. it is unedited directive content
This runs it line by line in the InteractiveShell, prepends
prompts as needed capturing stderr and stdout, then returns
the content as a list as if it were ipython code
"""
output = []
savefig = False # keep up with this to clear figure
multiline = False # to handle line continuation
multiline_start = None
fmtin = self.promptin
ct = 0
for lineno, line in enumerate(content):
line_stripped = line.strip()
if not len(line):
output.append(line)
continue
# handle decorators
if line_stripped.startswith('@'):
output.extend([line])
if 'savefig' in line:
savefig = True # and need to clear figure
continue
# handle comments
if line_stripped.startswith('#'):
output.extend([line])
continue
# deal with lines checking for multiline
continuation = u' %s:'% ''.join(['.']*(len(str(ct))+2))
if not multiline:
modified = u"%s %s" % (fmtin % ct, line_stripped)
output.append(modified)
ct += 1
try:
ast.parse(line_stripped)
output.append(u'')
except Exception: # on a multiline
multiline = True
multiline_start = lineno
else: # still on a multiline
modified = u'%s %s' % (continuation, line)
output.append(modified)
# if the next line is indented, it should be part of multiline
if len(content) > lineno + 1:
nextline = content[lineno + 1]
if len(nextline) - len(nextline.lstrip()) > 3:
continue
try:
mod = ast.parse(
'\n'.join(content[multiline_start:lineno+1]))
if isinstance(mod.body[0], ast.FunctionDef):
# check to see if we have the whole function
for element in mod.body[0].body:
if isinstance(element, ast.Return):
multiline = False
else:
output.append(u'')
multiline = False
except Exception:
pass
if savefig: # clear figure if plotted
self.ensure_pyplot()
self.process_input_line('plt.clf()', store_history=False)
self.clear_cout()
savefig = False
return output
def custom_doctest(self, decorator, input_lines, found, submitted):
"""
Perform a specialized doctest.
"""
from .custom_doctests import doctests
args = decorator.split()
doctest_type = args[1]
if doctest_type in doctests:
doctests[doctest_type](self, args, input_lines, found, submitted)
else:
e = "Invalid option to @doctest: {0}".format(doctest_type)
raise Exception(e)
class IPythonDirective(Directive):
has_content = True
required_arguments = 0
optional_arguments = 4 # python, suppress, verbatim, doctest
    final_argument_whitespace = True
option_spec = { 'python': directives.unchanged,
'suppress' : directives.flag,
'verbatim' : directives.flag,
'doctest' : directives.flag,
'okexcept': directives.flag,
'okwarning': directives.flag,
'output_encoding': directives.unchanged_required
}
shell = None
seen_docs = set()
def get_config_options(self):
# contains sphinx configuration variables
config = self.state.document.settings.env.config
# get config variables to set figure output directory
confdir = self.state.document.settings.env.app.confdir
savefig_dir = config.ipython_savefig_dir
source_dir = os.path.dirname(self.state.document.current_source)
if savefig_dir is None:
savefig_dir = config.html_static_path
if isinstance(savefig_dir, list):
savefig_dir = savefig_dir[0] # safe to assume only one path?
savefig_dir = os.path.join(confdir, savefig_dir)
# get regex and prompt stuff
rgxin = config.ipython_rgxin
rgxout = config.ipython_rgxout
promptin = config.ipython_promptin
promptout = config.ipython_promptout
mplbackend = config.ipython_mplbackend
exec_lines = config.ipython_execlines
hold_count = config.ipython_holdcount
return (savefig_dir, source_dir, rgxin, rgxout,
promptin, promptout, mplbackend, exec_lines, hold_count)
def setup(self):
# Get configuration values.
(savefig_dir, source_dir, rgxin, rgxout, promptin, promptout,
mplbackend, exec_lines, hold_count) = self.get_config_options()
if self.shell is None:
# We will be here many times. However, when the
# EmbeddedSphinxShell is created, its interactive shell member
# is the same for each instance.
if mplbackend and 'matplotlib.backends' not in sys.modules:
import matplotlib
# Repeated calls to use() will not hurt us since `mplbackend`
# is the same each time.
matplotlib.use(mplbackend)
# Must be called after (potentially) importing matplotlib and
# setting its backend since exec_lines might import pylab.
self.shell = EmbeddedSphinxShell(exec_lines, self.state)
# Store IPython directive to enable better error messages
self.shell.directive = self
# reset the execution count if we haven't processed this doc
#NOTE: this may be borked if there are multiple seen_doc tmp files
#check time stamp?
if self.state.document.current_source not in self.seen_docs:
self.shell.IP.history_manager.reset()
self.shell.IP.execution_count = 1
try:
self.shell.IP.prompt_manager.width = 0
except AttributeError:
# GH14003: class promptManager has removed after IPython 5.x
pass
self.seen_docs.add(self.state.document.current_source)
# and attach to shell so we don't have to pass them around
self.shell.rgxin = rgxin
self.shell.rgxout = rgxout
self.shell.promptin = promptin
self.shell.promptout = promptout
self.shell.savefig_dir = savefig_dir
self.shell.source_dir = source_dir
self.shell.hold_count = hold_count
# setup bookmark for saving figures directory
self.shell.process_input_line('bookmark ipy_savedir %s'%savefig_dir,
store_history=False)
self.shell.clear_cout()
return rgxin, rgxout, promptin, promptout
def teardown(self):
# delete last bookmark
self.shell.process_input_line('bookmark -d ipy_savedir',
store_history=False)
self.shell.clear_cout()
def run(self):
debug = False
#TODO, any reason block_parser can't be a method of embeddable shell
# then we wouldn't have to carry these around
rgxin, rgxout, promptin, promptout = self.setup()
options = self.options
self.shell.is_suppress = 'suppress' in options
self.shell.is_doctest = 'doctest' in options
self.shell.is_verbatim = 'verbatim' in options
self.shell.is_okexcept = 'okexcept' in options
self.shell.is_okwarning = 'okwarning' in options
self.shell.output_encoding = [options.get('output_encoding', 'utf8')]
# handle pure python code
if 'python' in self.arguments:
content = self.content
self.content = self.shell.process_pure_python(content)
parts = '\n'.join(self.content).split('\n\n')
lines = ['.. code-block:: ipython', '']
figures = []
for part in parts:
block = block_parser(part, rgxin, rgxout, promptin, promptout)
if len(block):
rows, figure = self.shell.process_block(block)
for row in rows:
lines.extend([' %s'%line for line in row.split('\n')])
if figure is not None:
figures.append(figure)
for figure in figures:
lines.append('')
lines.extend(figure.split('\n'))
lines.append('')
if len(lines)>2:
if debug:
print('\n'.join(lines))
else:
# This has to do with input, not output. But if we comment
# these lines out, then no IPython code will appear in the
# final output.
self.state_machine.insert_input(
lines, self.state_machine.input_lines.source(0))
# cleanup
self.teardown()
return []
# Enable as a proper Sphinx directive
def setup(app):
setup.app = app
app.add_directive('ipython', IPythonDirective)
app.add_config_value('ipython_savefig_dir', None, 'env')
app.add_config_value('ipython_rgxin',
re.compile('In \[(\d+)\]:\s?(.*)\s*'), 'env')
app.add_config_value('ipython_rgxout',
re.compile('Out\[(\d+)\]:\s?(.*)\s*'), 'env')
app.add_config_value('ipython_promptin', 'In [%d]:', 'env')
app.add_config_value('ipython_promptout', 'Out[%d]:', 'env')
# We could just let matplotlib pick whatever is specified as the default
# backend in the matplotlibrc file, but this would cause issues if the
# backend didn't work in headless environments. For this reason, 'agg'
# is a good default backend choice.
app.add_config_value('ipython_mplbackend', 'agg', 'env')
# If the user sets this config value to `None`, then EmbeddedSphinxShell's
# __init__ method will treat it as [].
execlines = ['import numpy as np', 'import matplotlib.pyplot as plt']
app.add_config_value('ipython_execlines', execlines, 'env')
app.add_config_value('ipython_holdcount', True, 'env')
# Simple smoke test, needs to be converted to a proper automatic test.
def test():
examples = [
r"""
In [9]: pwd
Out[9]: '/home/jdhunter/py4science/book'
In [10]: cd bookdata/
/home/jdhunter/py4science/book/bookdata
In [2]: from pylab import *
In [2]: ion()
In [3]: im = imread('stinkbug.png')
@savefig mystinkbug.png width=4in
In [4]: imshow(im)
Out[4]: <matplotlib.image.AxesImage object at 0x39ea850>
""",
r"""
In [1]: x = 'hello world'
# string methods can be
# used to alter the string
@doctest
In [2]: x.upper()
Out[2]: 'HELLO WORLD'
@verbatim
In [3]: x.st<TAB>
x.startswith x.strip
""",
r"""
In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
.....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
In [131]: print url.split('&')
['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
In [60]: import urllib
""",
r"""\
In [133]: import numpy.random
@suppress
In [134]: numpy.random.seed(2358)
@doctest
In [135]: numpy.random.rand(10,2)
Out[135]:
array([[ 0.64524308, 0.59943846],
[ 0.47102322, 0.8715456 ],
[ 0.29370834, 0.74776844],
[ 0.99539577, 0.1313423 ],
[ 0.16250302, 0.21103583],
[ 0.81626524, 0.1312433 ],
[ 0.67338089, 0.72302393],
[ 0.7566368 , 0.07033696],
[ 0.22591016, 0.77731835],
[ 0.0072729 , 0.34273127]])
""",
r"""
In [106]: print x
jdh
In [109]: for i in range(10):
.....: print i
.....:
.....:
0
1
2
3
4
5
6
7
8
9
""",
r"""
In [144]: from pylab import *
In [145]: ion()
# use a semicolon to suppress the output
@savefig test_hist.png width=4in
In [151]: hist(np.random.randn(10000), 100);
@savefig test_plot.png width=4in
In [151]: plot(np.random.randn(10000), 'o');
""",
r"""
# use a semicolon to suppress the output
In [151]: plt.clf()
@savefig plot_simple.png width=4in
In [151]: plot([1,2,3])
@savefig hist_simple.png width=4in
In [151]: hist(np.random.randn(10000), 100);
""",
r"""
# update the current fig
In [151]: ylabel('number')
In [152]: title('normal distribution')
@savefig hist_with_text.png
In [153]: grid(True)
@doctest float
In [154]: 0.1 + 0.2
Out[154]: 0.3
@doctest float
In [155]: np.arange(16).reshape(4,4)
Out[155]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
In [1]: x = np.arange(16, dtype=float).reshape(4,4)
In [2]: x[0,0] = np.inf
In [3]: x[0,1] = np.nan
@doctest float
In [4]: x
Out[4]:
array([[ inf, nan, 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[ 12., 13., 14., 15.]])
""",
]
# skip local-file depending first example:
examples = examples[1:]
#ipython_directive.DEBUG = True # dbg
#options = dict(suppress=True) # dbg
options = dict()
for example in examples:
content = example.split('\n')
IPythonDirective('debug', arguments=None, options=options,
content=content, lineno=0,
content_offset=None, block_text=None,
state=None, state_machine=None,
)
# Run test suite as a script
if __name__=='__main__':
if not os.path.isdir('_static'):
os.mkdir('_static')
test()
print('All OK? Check figures in _static/')
| bsd-3-clause |
xuewei4d/scikit-learn | asv_benchmarks/benchmarks/decomposition.py | 12 | 2754 | from sklearn.decomposition import (PCA, DictionaryLearning,
MiniBatchDictionaryLearning)
from .common import Benchmark, Estimator, Transformer
from .datasets import _olivetti_faces_dataset, _mnist_dataset
from .utils import make_pca_scorers, make_dict_learning_scorers
class PCABenchmark(Transformer, Estimator, Benchmark):
"""
Benchmarks for PCA.
"""
param_names = ['svd_solver']
params = (['full', 'arpack', 'randomized'],)
def setup_cache(self):
super().setup_cache()
def make_data(self, params):
return _mnist_dataset()
def make_estimator(self, params):
svd_solver, = params
estimator = PCA(n_components=32,
svd_solver=svd_solver,
random_state=0)
return estimator
def make_scorers(self):
make_pca_scorers(self)
class DictionaryLearningBenchmark(Transformer, Estimator, Benchmark):
"""
Benchmarks for DictionaryLearning.
"""
param_names = ['fit_algorithm', 'n_jobs']
params = (['lars', 'cd'], Benchmark.n_jobs_vals)
def setup_cache(self):
super().setup_cache()
def make_data(self, params):
return _olivetti_faces_dataset()
def make_estimator(self, params):
fit_algorithm, n_jobs = params
estimator = DictionaryLearning(n_components=15,
fit_algorithm=fit_algorithm,
alpha=0.1,
max_iter=20,
tol=1e-16,
random_state=0,
n_jobs=n_jobs)
return estimator
def make_scorers(self):
make_dict_learning_scorers(self)
class MiniBatchDictionaryLearningBenchmark(Transformer, Estimator, Benchmark):
"""
Benchmarks for MiniBatchDictionaryLearning
"""
param_names = ['fit_algorithm', 'n_jobs']
params = (['lars', 'cd'], Benchmark.n_jobs_vals)
def setup_cache(self):
super().setup_cache()
def make_data(self, params):
return _olivetti_faces_dataset()
def make_estimator(self, params):
fit_algorithm, n_jobs = params
estimator = MiniBatchDictionaryLearning(n_components=15,
fit_algorithm=fit_algorithm,
alpha=0.1,
batch_size=3,
random_state=0,
n_jobs=n_jobs)
return estimator
def make_scorers(self):
make_dict_learning_scorers(self)
| bsd-3-clause |
pywikibot-catfiles/file-metadata | setupdeps.py | 2 | 16766 | # -*- coding: utf-8 -*-
"""
Various dependencies that are required for file-metadata which need some
special handling.
"""
from __future__ import (division, absolute_import, unicode_literals,
print_function)
import ctypes.util
import hashlib
import os
import subprocess
import sys
from distutils import sysconfig
from distutils.errors import DistutilsSetupError
try:
from urllib.request import urlopen
except ImportError: # Python 2
from urllib2 import urlopen
PROJECT_PATH = os.path.abspath(os.path.dirname(__file__))
def data_path():
name = os.path.join(PROJECT_PATH, 'file_metadata', 'datafiles')
if not os.path.exists(name):
os.makedirs(name)
return name
def which(cmd):
try:
from shutil import which
return which(cmd)
except ImportError: # For python 3.2 and lower
try:
output = subprocess.check_output(["which", cmd],
stderr=subprocess.STDOUT)
except (OSError, subprocess.CalledProcessError):
return None
else:
output = output.decode(sys.getfilesystemencoding())
return output.strip()
def setup_install(packages):
"""
Install packages using pip to the current folder. Useful to import
packages during setup itself.
"""
packages = list(packages)
if not packages:
return True
try:
subprocess.call([sys.executable, "-m", "pip", "install",
"-t", PROJECT_PATH] + packages)
return True
except subprocess.CalledProcessError:
return False
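# (Used below: install_help_msg() falls back to setup_install(['distro']) when
# the ``distro`` package cannot be imported.)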
def download(url, filename, overwrite=False, sha1=None):
"""
Download the given URL to the given filename. If the file exists,
it won't be downloaded unless asked to overwrite. Both, text data
like html, txt, etc. or binary data like images, audio, etc. are
acceptable.
:param url: A URL to download.
:param filename: The file to store the downloaded file to.
:param overwrite: Set to True if the file should be downloaded even if it
already exists.
:param sha1: The sha1 checksum to verify the file using.
"""
blocksize = 16 * 1024
_hash = hashlib.sha1()
if os.path.exists(filename) and not overwrite:
# Do a pass for the hash if it already exists
with open(filename, "rb") as downloaded_file:
while True:
block = downloaded_file.read(blocksize)
if not block:
break
_hash.update(block)
else:
# If it doesn't exist, or overwrite=True, find hash while downloading
response = urlopen(url)
with open(filename, 'wb') as out_file:
while True:
block = response.read(blocksize)
if not block:
break
out_file.write(block)
_hash.update(block)
return _hash.hexdigest() == sha1
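# A usage sketch (the URL, file name and checksum below are placeholders only):
#   ok = download('https://example.org/some_tool.tar.gz',
#                 os.path.join(data_path(), 'some_tool.tar.gz'),
#                 sha1='0000000000000000000000000000000000000000')
# `ok` is True only when the sha1 of the resulting file matches the one given.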
class CheckFailed(Exception):
"""
Exception thrown when a ``SetupPackage.check()`` fails.
"""
pass
class SetupPackage(object):
name = None
optional = False
pkg_names = {
"apt-get": None,
"yum": None,
"dnf": None,
"pacman": None,
"zypper": None,
"brew": None,
"port": None,
"windows_url": None
}
def check(self):
"""
Check whether the dependencies are met. Should raise a ``CheckFailed``
exception if the dependency was not found.
"""
pass
def get_install_requires(self):
"""
Return a list of Python packages that are required by the package.
pip / easy_install will attempt to download and install this
package if it is not installed.
"""
return []
def get_setup_requires(self):
"""
Return a list of Python packages that are required by the setup.py
itself. pip / easy_install will attempt to download and install this
package if it is not installed on top of the setup.py script.
"""
return []
def get_data_files(self):
"""
Perform required actions to add the data files into the directory
given by ``data_path()``.
"""
pass
def install_help_msg(self):
"""
The help message to show if the package is not installed. The help
message shown depends on whether some class variables are present.
"""
def _try_managers(*managers):
for manager in managers:
pkg_name = self.pkg_names.get(manager, None)
if pkg_name and which(manager) is not None:
pkg_note = None
if isinstance(pkg_name, (tuple, list)):
pkg_name, pkg_note = pkg_name
msg = ('Try installing {0} with `{1} install {2}`.'
.format(self.name, manager, pkg_name))
if pkg_note:
msg += ' Note: ' + pkg_note
return msg
message = ""
if sys.platform == "win32":
url = self.pkg_names.get("windows_url", None)
if url:
return ('Please check {0} for instructions to install {1}'
.format(url, self.name))
elif sys.platform == "darwin":
manager_message = _try_managers("brew", "port")
return manager_message or message
elif sys.platform.startswith("linux"):
try:
import distro
except ImportError:
setup_install(['distro'])
import distro
release = distro.id()
if release in ('debian', 'ubuntu', 'linuxmint', 'raspbian'):
manager_message = _try_managers('apt-get')
if manager_message:
return manager_message
elif release in ('centos', 'rhel', 'redhat', 'fedora',
'scientific', 'amazon', ):
manager_message = _try_managers('dnf', 'yum')
if manager_message:
return manager_message
elif release in ('sles', 'opensuse'):
manager_message = _try_managers('zypper')
if manager_message:
return manager_message
elif release in ('arch'):
manager_message = _try_managers('pacman')
if manager_message:
return manager_message
return message
class PkgConfig(SetupPackage):
"""
This is a class for communicating with pkg-config.
"""
name = "pkg-config"
pkg_names = {
"apt-get": 'pkg-config',
"yum": None,
"dnf": None,
"pacman": None,
"zypper": None,
"brew": 'pkg-config',
"port": None,
"windows_url": None
}
def __init__(self):
if sys.platform == 'win32':
self.has_pkgconfig = False
else:
self.pkg_config = os.environ.get('PKG_CONFIG', 'pkg-config')
self.set_pkgconfig_path()
try:
with open(os.devnull) as nul:
subprocess.check_call([self.pkg_config, "--help"],
stdout=nul, stderr=nul)
self.has_pkgconfig = True
except (subprocess.CalledProcessError, OSError):
self.has_pkgconfig = False
raise DistutilsSetupError("pkg-config is not installed. "
"Please install it to continue.\n" +
self.install_help_msg())
def set_pkgconfig_path(self):
pkgconfig_path = sysconfig.get_config_var('LIBDIR')
if pkgconfig_path is None:
return
pkgconfig_path = os.path.join(pkgconfig_path, 'pkgconfig')
if not os.path.isdir(pkgconfig_path):
return
os.environ['PKG_CONFIG_PATH'] = ':'.join(
[os.environ.get('PKG_CONFIG_PATH', ""), pkgconfig_path])
def get_version(self, package):
"""
Get the version of the package from pkg-config.
"""
if not self.has_pkgconfig:
return None
try:
output = subprocess.check_output(
[self.pkg_config, package, "--modversion"],
stderr=subprocess.STDOUT)
except subprocess.CalledProcessError:
return None
else:
output = output.decode(sys.getfilesystemencoding())
return output.strip()
# The PkgConfig class should be used through this singleton
pkg_config = PkgConfig()
class Distro(SetupPackage):
name = "distro"
def check(self):
return 'Will be installed with pip.'
def get_setup_requires(self):
try:
import distro # noqa (unused import)
return []
except ImportError:
return ['distro']
class SetupTools(SetupPackage):
name = 'setuptools'
def check(self):
return 'Will be installed with pip.'
def get_setup_requires(self):
try:
import setuptools # noqa (unused import)
return []
except ImportError:
return ['setuptools']
class PathLib(SetupPackage):
name = 'pathlib'
def check(self):
if sys.version_info < (3, 4):
return 'Backported pathlib2 will be installed with pip.'
else:
return 'Already installed in python 3.4+'
def get_install_requires(self):
if sys.version_info < (3, 4):
return ['pathlib2']
else:
return []
class AppDirs(SetupPackage):
name = 'appdirs'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['appdirs']
class LibMagic(SetupPackage):
name = 'libmagic'
pkg_names = {
"apt-get": 'libmagic-dev',
"yum": 'file',
"dnf": 'file',
"pacman": None,
"zypper": None,
"brew": 'libmagic',
"port": None,
"windows_url": None
}
def check(self):
file_path = which('file')
if file_path is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found "file" utility at {0}.'.format(file_path)
class PythonMagic(SetupPackage):
name = 'python-magic'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['python-magic']
class Six(SetupPackage):
name = 'six'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['six>=1.8.0']
class ExifTool(SetupPackage):
name = 'exiftool'
pkg_names = {
"apt-get": 'exiftool',
"yum": 'perl-Image-ExifTool',
"dnf": 'perl-Image-ExifTool',
"pacman": None,
"zypper": None,
"brew": 'exiftool',
"port": 'p5-image-exiftool',
"windows_url": 'http://www.sno.phy.queensu.ca/~phil/exiftool/'
}
def check(self):
exiftool_path = which('exiftool')
if exiftool_path is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found at {0}.'.format(exiftool_path)
class Pillow(SetupPackage):
name = 'pillow'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['pillow>=2.5.0']
class Numpy(SetupPackage):
name = 'numpy'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['numpy>=1.7.2']
class Dlib(SetupPackage):
name = 'dlib'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['dlib']
class ScikitImage(SetupPackage):
name = 'scikit-image'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
# For some reason some dependencies of scikit-image aren't installed
# by pip: https://github.com/scikit-image/scikit-image/issues/2155
return ['scipy', 'matplotlib', 'scikit-image>=0.12']
class MagickWand(SetupPackage):
name = 'magickwand'
pkg_names = {
"apt-get": 'libmagickwand-dev',
"yum": 'ImageMagick-devel',
"dnf": 'ImageMagick-devel',
"pacman": None,
"zypper": None,
"brew": 'imagemagick',
"port": 'imagemagick',
"windows_url": ("http://docs.wand-py.org/en/latest/guide/"
"install.html#install-imagemagick-on-windows")
}
def check(self):
# `wand` already checks for magickwand, but only when importing, not
# during installation. See https://github.com/dahlia/wand/issues/293
magick_wand = pkg_config.get_version("MagickWand")
if magick_wand is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found with pkg-config.'
class Wand(SetupPackage):
name = 'wand'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['wand']
class PyColorName(SetupPackage):
name = 'pycolorname'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['pycolorname']
class LibZBar(SetupPackage):
name = 'libzbar'
pkg_names = {
"apt-get": 'libzbar-dev',
"yum": 'zbar-devel',
"dnf": 'zbar-devel',
"pacman": None,
"zypper": None,
"brew": 'zbar',
"port": None,
"windows_url": None
}
def check(self):
libzbar = ctypes.util.find_library('zbar')
if libzbar is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found {0}.'.format(libzbar)
class ZBar(SetupPackage):
name = 'zbar'
def check(self):
return 'Will be installed with pip.'
def get_install_requires(self):
return ['zbar']
class JavaJRE(SetupPackage):
name = 'java'
pkg_names = {
"apt-get": 'default-jre',
"yum": 'java',
"dnf": 'java',
"pacman": None,
"zypper": None,
"brew": None,
"port": None,
"windows_url": "https://java.com/download/"
}
def check(self):
java_path = which('java')
if java_path is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found at {0}.'.format(java_path)
class ZXing(SetupPackage):
name = 'zxing'
def check(self):
return 'Will be downloaded from their maven repositories.'
@staticmethod
def download_jar(data_folder, path, name, ver, **kwargs):
data = {'name': name, 'ver': ver, 'path': path}
fname = os.path.join(data_folder, '{name}-{ver}.jar'.format(**data))
url = ('http://central.maven.org/maven2/{path}/{name}/{ver}/'
'{name}-{ver}.jar'.format(**data))
download(url, fname, **kwargs)
return fname
def get_data_files(self):
msg = 'Unable to download "{0}" correctly.'
if not self.download_jar(
data_path(), 'com/google/zxing', 'core', '3.2.1',
sha1='2287494d4f5f9f3a9a2bb6980e3f32053721b315'):
return msg.format('zxing-core')
if not self.download_jar(
data_path(), 'com/google/zxing', 'javase', '3.2.1',
sha1='78e98099b87b4737203af1fcfb514954c4f479d9'):
return msg.format('zxing-javase')
if not self.download_jar(
data_path(), 'com/beust', 'jcommander', '1.48',
sha1='bfcb96281ea3b59d626704f74bc6d625ff51cbce'):
return msg.format('jcommander')
return 'Successfully downloaded zxing-javase, zxing-core, jcommander.'
class FFProbe(SetupPackage):
name = 'ffprobe'
pkg_names = {
"apt-get": 'libav-tools',
"yum": ('ffmpeg', 'This requires the RPMFusion repo to be enabled.'),
"dnf": ('ffmpeg', 'This requires the RPMFusion repo to be enabled.'),
"pacman": None,
"zypper": None,
"brew": 'ffmpeg',
"port": None,
"windows_url": None
}
def check(self):
ffprobe_path = which('ffprobe') or which('avprobe')
if ffprobe_path is None:
raise CheckFailed('Needs to be installed manually.')
else:
return 'Found at {0}.'.format(ffprobe_path)
| mit |
seanandrews/diskpop | phot/priors.py | 1 | 1143 | #
#
#
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import sys
# effective temperature prior
# inputs
Sbar = 60.
eSbar = 1.
Tinput = 8700.
# load spectral type |-> temperature conversion file
dt = {'ST': np.str, 'STix': np.float64, 'Teff':np.float64, 'eTeff':np.float64}
a = pd.read_csv('data/adopted_spt-teff.txt', dtype=dt,
names=['ST','STix','Teff','eTeff'])
# discretized relationship
S_g = np.array(a['STix'])
T_g = np.array(a['Teff'])
eT_g = np.array(a['eTeff'])
# need to interpolate for appropriate integration
tint = interp1d(S_g, T_g)
eint = interp1d(S_g, eT_g)
S = np.linspace(np.min(S_g), np.max(S_g), num=10 * len(S_g))
T = tint(S)
eT = eint(S)
# calculate p(S)
p_S = np.exp(-0.5*((S-Sbar)/eSbar )**2) / (np.sqrt(2.*np.pi)*eSbar)
# now calculate p(T)
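# p(T) is obtained by marginalizing the spectral-type prior over S:
#   p(T) = integral of p(T|S) * p(S) dS,
# where p(T|S) is a Gaussian centered on the interpolated T(S) relation with width eT(S)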
p_T = np.zeros_like(T)
for i in np.arange(len(T)):
p_TS = np.exp(-0.5*((T[i]-tint(S))/eint(S))**2) / \
(np.sqrt(2.*np.pi)*eint(S))
p_T[i] = np.trapz(p_TS*p_S, S)
# create an interpolator for p_T
p_tint = interp1d(T, p_T)
prior_T = p_tint(Tinput)
print(prior_T)
| mit |
ndingwall/scikit-learn | sklearn/tests/test_build.py | 17 | 1175 | import os
import pytest
import textwrap
from sklearn import __version__
from sklearn.utils._openmp_helpers import _openmp_parallelism_enabled
def test_openmp_parallelism_enabled():
# Check that sklearn is built with OpenMP-based parallelism enabled.
# This test can be skipped by setting the environment variable
# ``SKLEARN_SKIP_OPENMP_TEST``.
if os.getenv("SKLEARN_SKIP_OPENMP_TEST"):
pytest.skip("test explicitly skipped (SKLEARN_SKIP_OPENMP_TEST)")
base_url = "dev" if __version__.endswith(".dev0") else "stable"
err_msg = textwrap.dedent(
"""
This test fails because scikit-learn has been built without OpenMP.
This is not recommended since some estimators will run in sequential
mode instead of leveraging thread-based parallelism.
You can find instructions to build scikit-learn with OpenMP at this
address:
https://scikit-learn.org/{}/developers/advanced_installation.html
You can skip this test by setting the environment variable
SKLEARN_SKIP_OPENMP_TEST to any value.
""").format(base_url)
assert _openmp_parallelism_enabled(), err_msg
| bsd-3-clause |
ClimbsRocks/scikit-learn | examples/cluster/plot_color_quantization.py | 61 | 3444 | # -*- coding: utf-8 -*-
"""
==================================
Color Quantization using K-Means
==================================
Performs a pixel-wise Vector Quantization (VQ) of an image of the summer palace
(China), reducing the number of colors required to show the image from 96,615
unique colors to 64, while preserving the overall appearance quality.
In this example, pixels are represented in a 3D-space and K-means is used to
find 64 color clusters. In the image processing literature, the codebook
obtained from K-means (the cluster centers) is called the color palette. Using
a single byte, up to 256 colors can be addressed, whereas an RGB encoding
requires 3 bytes per pixel. The GIF file format, for example, uses such a
palette.
For comparison, a quantized image using a random codebook (colors picked up
randomly) is also shown.
"""
# Authors: Robert Layton <robertlayton@gmail.com>
# Olivier Grisel <olivier.grisel@ensta.org>
# Mathieu Blondel <mathieu@mblondel.org>
#
# License: BSD 3 clause
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin
from sklearn.datasets import load_sample_image
from sklearn.utils import shuffle
from time import time
n_colors = 64
# Load the Summer Palace photo
china = load_sample_image("china.jpg")
# Convert to floats instead of the default 8 bits integer coding. Dividing by
# 255 is important so that plt.imshow works well on float data (need to
# be in the range [0-1])
china = np.array(china, dtype=np.float64) / 255
# Load Image and transform to a 2D numpy array.
w, h, d = original_shape = tuple(china.shape)
assert d == 3
image_array = np.reshape(china, (w * h, d))
print("Fitting model on a small sub-sample of the data")
t0 = time()
image_array_sample = shuffle(image_array, random_state=0)[:1000]
kmeans = KMeans(n_clusters=n_colors, random_state=0).fit(image_array_sample)
print("done in %0.3fs." % (time() - t0))
# Get labels for all points
print("Predicting color indices on the full image (k-means)")
t0 = time()
labels = kmeans.predict(image_array)
print("done in %0.3fs." % (time() - t0))
codebook_random = shuffle(image_array, random_state=0)[:n_colors + 1]
print("Predicting color indices on the full image (random)")
t0 = time()
labels_random = pairwise_distances_argmin(codebook_random,
image_array,
axis=0)
print("done in %0.3fs." % (time() - t0))
def recreate_image(codebook, labels, w, h):
"""Recreate the (compressed) image from the code book & labels"""
d = codebook.shape[1]
image = np.zeros((w, h, d))
label_idx = 0
for i in range(w):
for j in range(h):
image[i][j] = codebook[labels[label_idx]]
label_idx += 1
return image
# Display all results, alongside original image
plt.figure(1)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Original image (96,615 colors)')
plt.imshow(china)
plt.figure(2)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Quantized image (64 colors, K-Means)')
plt.imshow(recreate_image(kmeans.cluster_centers_, labels, w, h))
plt.figure(3)
plt.clf()
ax = plt.axes([0, 0, 1, 1])
plt.axis('off')
plt.title('Quantized image (64 colors, Random)')
plt.imshow(recreate_image(codebook_random, labels_random, w, h))
plt.show()
| bsd-3-clause |
cbertinato/pandas | pandas/_config/localization.py | 1 | 4655 | """
Helpers for configuring locale settings.
Name `localization` is chosen to avoid overlap with builtin `locale` module.
"""
from contextlib import contextmanager
import locale
import re
import subprocess
from pandas._config.config import options
@contextmanager
def set_locale(new_locale, lc_var=locale.LC_ALL):
"""
Context manager for temporarily setting a locale.
Parameters
----------
new_locale : str or tuple
A string of the form <language_country>.<encoding>. For example to set
the current locale to US English with a UTF8 encoding, you would pass
"en_US.UTF-8".
lc_var : int, default `locale.LC_ALL`
The category of the locale being set.
Notes
-----
This is useful when you want to run a particular block of code under a
particular locale, without globally setting the locale. This probably isn't
thread-safe.
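    Examples
    --------
    A minimal usage sketch; assumes the requested locale is installed on the system.
    >>> with set_locale('en_US.UTF-8'):  # doctest: +SKIP
    ...     pass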
"""
current_locale = locale.getlocale()
try:
locale.setlocale(lc_var, new_locale)
normalized_locale = locale.getlocale()
if all(x is not None for x in normalized_locale):
yield '.'.join(normalized_locale)
else:
yield new_locale
finally:
locale.setlocale(lc_var, current_locale)
def can_set_locale(lc, lc_var=locale.LC_ALL):
"""
Check to see if we can set a locale, and subsequently get the locale,
without raising an Exception.
Parameters
----------
lc : str
The locale to attempt to set.
lc_var : int, default `locale.LC_ALL`
The category of the locale being set.
Returns
-------
is_valid : bool
Whether the passed locale can be set
"""
try:
with set_locale(lc, lc_var=lc_var):
pass
except (ValueError, locale.Error):
        # horrible name for an Exception subclass
return False
else:
return True
def _valid_locales(locales, normalize):
"""
Return a list of normalized locales that do not throw an ``Exception``
when set.
Parameters
----------
locales : str
A string where each locale is separated by a newline.
normalize : bool
Whether to call ``locale.normalize`` on each locale.
Returns
-------
valid_locales : list
A list of valid locales.
"""
if normalize:
normalizer = lambda x: locale.normalize(x.strip())
else:
normalizer = lambda x: x.strip()
return list(filter(can_set_locale, map(normalizer, locales)))
def _default_locale_getter():
try:
raw_locales = subprocess.check_output(['locale -a'], shell=True)
except subprocess.CalledProcessError as e:
raise type(e)("{exception}, the 'locale -a' command cannot be found "
"on your system".format(exception=e))
return raw_locales
def get_locales(prefix=None, normalize=True,
locale_getter=_default_locale_getter):
"""
Get all the locales that are available on the system.
Parameters
----------
prefix : str
If not ``None`` then return only those locales with the prefix
provided. For example to get all English language locales (those that
start with ``"en"``), pass ``prefix="en"``.
normalize : bool
Call ``locale.normalize`` on the resulting list of available locales.
If ``True``, only locales that can be set without throwing an
``Exception`` are returned.
locale_getter : callable
The function to use to retrieve the current locales. This should return
a string with each locale separated by a newline character.
Returns
-------
locales : list of strings
A list of locale strings that can be set with ``locale.setlocale()``.
For example::
locale.setlocale(locale.LC_ALL, locale_string)
On error will return None (no locale available, e.g. Windows)
"""
try:
raw_locales = locale_getter()
except Exception:
return None
try:
# raw_locales is "\n" separated list of locales
# it may contain non-decodable parts, so split
# extract what we can and then rejoin.
raw_locales = raw_locales.split(b'\n')
out_locales = []
for x in raw_locales:
out_locales.append(str(
x, encoding=options.display.encoding))
except TypeError:
pass
if prefix is None:
return _valid_locales(out_locales, normalize)
pattern = re.compile('{prefix}.*'.format(prefix=prefix))
found = pattern.findall('\n'.join(out_locales))
return _valid_locales(found, normalize)
| bsd-3-clause |
Microsoft/hummingbird | tests/test_sklearn_decomposition.py | 1 | 5758 | """
Tests sklearn matrix decomposition converters
"""
import unittest
import warnings
import sys
from distutils.version import LooseVersion
import numpy as np
import torch
import sklearn
from sklearn.decomposition import FastICA, KernelPCA, PCA, TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits
import hummingbird.ml
class TestSklearnMatrixDecomposition(unittest.TestCase):
def _fit_model_pca(self, model, precompute=False):
data = load_digits()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2, random_state=42)
X_test = X_test.astype("float32")
if precompute:
# For precompute we use a linear kernel
model.fit(np.dot(X_train, X_train.T))
X_test = np.dot(X_test, X_train.T)
else:
model.fit(X_train)
torch_model = hummingbird.ml.convert(model, "torch")
self.assertTrue(torch_model is not None)
np.testing.assert_allclose(model.transform(X_test), torch_model.transform(X_test), rtol=1e-6, atol=2 * 1e-5)
# PCA n_components none
def test_pca_converter_none(self):
self._fit_model_pca(PCA(n_components=None))
    # PCA n_components two
def test_pca_converter_two(self):
self._fit_model_pca(PCA(n_components=2))
    # PCA n_components mle and whiten true
@unittest.skipIf(
LooseVersion(sklearn.__version__) < LooseVersion("0.23.2"),
reason="With Sklearn version < 0.23.2 returns ValueError: math domain error (https://github.com/scikit-learn/scikit-learn/issues/4441)",
)
def test_pca_converter_mle_whiten(self):
self._fit_model_pca(PCA(n_components="mle", whiten=True))
    # PCA n_components mle and solver full
@unittest.skipIf(
LooseVersion(sklearn.__version__) < LooseVersion("0.23.2"),
reason="With Sklearn version < 0.23.2 returns ValueError: math domain error (https://github.com/scikit-learn/scikit-learn/issues/4441)",
)
def test_pca_converter_mle_full(self):
self._fit_model_pca(PCA(n_components="mle", svd_solver="full"))
    # PCA n_components none and solver arpack
def test_pca_converter_none_arpack(self):
self._fit_model_pca(PCA(n_components=None, svd_solver="arpack"))
    # PCA n_components none and solver randomized
def test_pca_converter_none_randomized(self):
self._fit_model_pca(PCA(n_components=None, svd_solver="randomized"))
# KernelPCA linear kernel
def test_kernel_pca_converter_linear(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="linear"))
# KernelPCA linear kernel with inverse transform
def test_kernel_pca_converter_linear_fit_inverse_transform(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="linear", fit_inverse_transform=True))
# KernelPCA poly kernel
def test_kernel_pca_converter_poly(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="poly", degree=2))
# KernelPCA poly kernel coef0
def test_kernel_pca_converter_poly_coef0(self):
self._fit_model_pca(KernelPCA(n_components=10, kernel="poly", degree=3, coef0=10))
# KernelPCA poly kernel with inverse transform
def test_kernel_pca_converter_poly_fit_inverse_transform(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="poly", degree=3, fit_inverse_transform=True))
# KernelPCA poly kernel
def test_kernel_pca_converter_rbf(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="rbf"))
# KernelPCA sigmoid kernel
def test_kernel_pca_converter_sigmoid(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="sigmoid"))
# KernelPCA cosine kernel
def test_kernel_pca_converter_cosine(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="cosine"))
# KernelPCA precomputed kernel
def test_kernel_pca_converter_precomputed(self):
self._fit_model_pca(KernelPCA(n_components=5, kernel="precomputed"), precompute=True)
# TODO: Fails on macos-latest Python 3.8 due to a sklearn bug.
# FastICA converter with n_components none
# def test_fast_ica_converter_none(self):
# self._fit_model_pca(FastICA(n_components=None))
# FastICA converter with n_components 3
def test_fast_ica_converter_3(self):
self._fit_model_pca(FastICA(n_components=3))
# FastICA converter with n_components 3 whiten
def test_fast_ica_converter_3_whiten(self):
self._fit_model_pca(FastICA(n_components=3, whiten=True))
# FastICA converter with n_components 3 deflation algorithm
def test_fast_ica_converter_3_deflation(self):
self._fit_model_pca(FastICA(n_components=3, algorithm="deflation"))
# FastICA converter with n_components 3 fun exp
def test_fast_ica_converter_3_exp(self):
self._fit_model_pca(FastICA(n_components=3, fun="exp"))
# FastICA converter with n_components 3 fun cube
def test_fast_ica_converter_3_cube(self):
self._fit_model_pca(FastICA(n_components=3, fun="cube"))
# FastICA converter with n_components 3 fun custom
def test_fast_ica_converter_3_custom(self):
def my_g(x):
return x ** 3, (3 * x ** 2).mean(axis=-1)
self._fit_model_pca(FastICA(n_components=3, fun=my_g))
# TruncatedSVD converter with n_components 3
def test_truncated_svd_converter_3(self):
self._fit_model_pca(TruncatedSVD(n_components=3))
# TruncatedSVD converter with n_components 3 algorithm arpack
def test_truncated_svd_converter_3_arpack(self):
self._fit_model_pca(TruncatedSVD(n_components=3, algorithm="arpack"))
if __name__ == "__main__":
unittest.main()
| mit |
saullocastro/compmech | doc/pyplots/theory/fem/fsdt_donnell_kquad4.py | 3 | 1473 | from matplotlib.pyplot import *
from math import sqrt
m = 1/3.
xs = [+1, +1, -1, -1]
ys = [-1, +1, -1, +1]
figure(figsize=(4, 4))
ax = gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.set_aspect('equal')
ax.set_xlim(-1.4, +1.6)
ax.set_ylim(-1.4, +1.6)
ax.text(1.8, 0., r'$\xi$', transform=ax.transData, va='center')
ax.text(0., 1.8, r'$\eta$', rotation='horizontal', transform=ax.transData,
ha='center')
ax.text(+1.1, +1.1, '$n_1$\n' + r'$(+1, +1)$', ha='center', va='bottom',
fontsize=10)
ax.text(-1.1, +1.1, '$n_2$\n' + r'$(-1, +1)$', ha='center', va='bottom',
fontsize=10)
ax.text(-1.1, -1.1, '$n_3$\n' + r'$(-1, -1)$', ha='center', va='top' ,
fontsize=10)
ax.text(+1.1, -1.1, '$n_4$\n' + r'$(+1, -1)$', ha='center', va='top' ,
fontsize=10)
# radius
ax.annotate('$r_1$', xy=(-1, 0.5), xytext=(-0.5, 0.2),
arrowprops=dict(arrowstyle='->'), va='center', ha='center')
ax.annotate('$r_2$', xy=(+1, 0.5), xytext=(+0.5, 0.2),
arrowprops=dict(arrowstyle='->'), va='center', ha='center')
ax.set_xticks([])
ax.set_yticks([])
#ax.set_xticklabels(['-1', '+1'])
#ax.set_yticklabels(['-1', '+1'])
plot([1, -1, -1, 1, 1], [1, 1, -1, -1, 1], '-k')
plot(xs, ys, 'ok', mfc='k')
tight_layout()
savefig('test.png')
#show()
| bsd-3-clause |
lamastex/scalable-data-science | db/xtraResources/edXBigDataSeries2015/CS100-1x/Module 4: Text Analysis and Entity Resolution Lab Solutions.py | 2 | 73278 | # Databricks notebook source exported at Mon, 14 Mar 2016 03:33:29 UTC
# MAGIC %md
# MAGIC **SOURCE:** This is from the Community Edition of databricks and has been added to this databricks shard at [/#workspace/scalable-data-science/xtraResources/edXBigDataSeries2015/CS100-1x](/#workspace/scalable-data-science/xtraResources/edXBigDataSeries2015/CS100-1x) as extra resources for the project-focussed course [Scalable Data Science](http://www.math.canterbury.ac.nz/~r.sainudiin/courses/ScalableDataScience/) that is prepared by [Raazesh Sainudiin](https://nz.linkedin.com/in/raazesh-sainudiin-45955845) and [Sivanand Sivaram](https://www.linkedin.com/in/sivanand), and *supported by* [![](https://raw.githubusercontent.com/raazesh-sainudiin/scalable-data-science/master/images/databricks_logoTM_200px.png)](https://databricks.com/)
# MAGIC and
# MAGIC [![](https://raw.githubusercontent.com/raazesh-sainudiin/scalable-data-science/master/images/AWS_logoTM_200px.png)](https://www.awseducate.com/microsite/CommunitiesEngageHome).
# COMMAND ----------
# MAGIC %md
# MAGIC <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
# COMMAND ----------
# MAGIC %md
# MAGIC #![Spark Logo](http://spark-mooc.github.io/web-assets/images/ta_Spark-logo-small.png) + ![Python Logo](http://spark-mooc.github.io/web-assets/images/python-logo-master-v3-TM-flattened_small.png)
# MAGIC # **Text Analysis and Entity Resolution**
# MAGIC Entity resolution is a common, yet difficult problem in data cleaning and integration. This lab will demonstrate how we can use Apache Spark to apply powerful and scalable text analysis techniques and perform entity resolution across two datasets of commercial products.
# COMMAND ----------
# MAGIC %md
# MAGIC Entity Resolution, or "[Record linkage][wiki]", is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. Other terms with the same meaning include "entity disambiguation/linking", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration", and "conflation".
# MAGIC
# MAGIC Entity Resolution (ER) refers to the task of finding records in a dataset that refer to the same entity across different data sources (e.g., data files, books, websites, databases). ER is necessary when joining datasets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), as may be the case due to differences in record shape, storage location, and/or curator style or preference. A dataset that has undergone ER may be referred to as being cross-linked.
# MAGIC [wiki]: https://en.wikipedia.org/wiki/Record_linkage
# COMMAND ----------
labVersion = 'cs100.1x-lab3-1.0.4'
# COMMAND ----------
# MAGIC %md
# MAGIC #### Code
# MAGIC This assignment can be completed using basic Python, pySpark Transformations and actions, and the plotting library matplotlib. Other libraries are not allowed.
# MAGIC
# MAGIC #### Files
# MAGIC Data files for this assignment are from the [metric-learning](https://code.google.com/p/metric-learning/) project and can be found at:
# MAGIC `cs100/lab3`
# MAGIC
# MAGIC The directory contains the following files:
# MAGIC * **Google.csv**, the Google Products dataset
# MAGIC * **Amazon.csv**, the Amazon dataset
# MAGIC * **Google_small.csv**, 200 records sampled from the Google data
# MAGIC * **Amazon_small.csv**, 200 records sampled from the Amazon data
# MAGIC * **Amazon_Google_perfectMapping.csv**, the "gold standard" mapping
# MAGIC * **stopwords.txt**, a list of common English words
# MAGIC
# MAGIC Besides the complete data files, there are "sample" data files for each dataset - we will use these for **Part 1**. In addition, there is a "gold standard" file that contains all of the true mappings between entities in the two datasets. Every row in the gold standard file has a pair of record IDs (one Google, one Amazon) that belong to two records that describe the same thing in the real world. We will use the gold standard to evaluate our algorithms.
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 0: Preliminaries**
# MAGIC We read in each of the files and create an RDD consisting of lines.
# MAGIC For each of the data files ("Google.csv", "Amazon.csv", and the samples), we want to parse the IDs out of each record. The IDs are the first column of the file (they are URLs for Google, and alphanumeric strings for Amazon). Omitting the headers, we load these data files into pair RDDs where the *mapping ID* is the key, and the value is a string consisting of the name/title, description, and manufacturer from the record.
# MAGIC
# MAGIC The file format of an Amazon line is:
# MAGIC
# MAGIC `"id","title","description","manufacturer","price"`
# MAGIC
# MAGIC The file format of a Google line is:
# MAGIC
# MAGIC `"id","name","description","manufacturer","price"`
# COMMAND ----------
import re
DATAFILE_PATTERN = '^(.+),"(.+)",(.*),(.*),(.*)'
def removeQuotes(s):
""" Remove quotation marks from an input string
Args:
s (str): input string that might have the quote "" characters
Returns:
str: a string without the quote characters
"""
return ''.join(i for i in s if i!='"')
def parseDatafileLine(datafileLine):
""" Parse a line of the data file using the specified regular expression pattern
Args:
datafileLine (str): input string that is a line from the data file
Returns:
str: a string parsed using the given regular expression and without the quote characters
"""
match = re.search(DATAFILE_PATTERN, datafileLine)
if match is None:
print 'Invalid datafile line: %s' % datafileLine
return (datafileLine, -1)
elif match.group(1) == '"id"':
print 'Header datafile line: %s' % datafileLine
return (datafileLine, 0)
else:
product = '%s %s %s' % (match.group(2), match.group(3), match.group(4))
return ((removeQuotes(match.group(1)), product), 1)
# COMMAND ----------
display(dbutils.fs.ls('/databricks-datasets/cs100/lab3/data-001/'))
# COMMAND ----------
# MAGIC %md **WARNING:** If *test_helper*, required in the cell below, is not installed, follow the instructions [here](https://databricks-staging-cloudfront.staging.cloud.databricks.com/public/c65da9a2fa40e45a2028cddebe45b54c/8637560089690848/4187311313936645/6977722904629137/05f3c2ecc3.html).
# COMMAND ----------
import sys
import os
from test_helper import Test
baseDir = os.path.join('databricks-datasets')
inputPath = os.path.join('cs100', 'lab3', 'data-001')
GOOGLE_PATH = 'Google.csv'
GOOGLE_SMALL_PATH = 'Google_small.csv'
AMAZON_PATH = 'Amazon.csv'
AMAZON_SMALL_PATH = 'Amazon_small.csv'
GOLD_STANDARD_PATH = 'Amazon_Google_perfectMapping.csv'
STOPWORDS_PATH = 'stopwords.txt'
def parseData(filename):
""" Parse a data file
Args:
filename (str): input file name of the data file
Returns:
RDD: a RDD of parsed lines
"""
return (sc
.textFile(filename, 4, 0)
.map(parseDatafileLine))
def loadData(path):
""" Load a data file
Args:
path (str): input file name of the data file
Returns:
RDD: a RDD of parsed valid lines
"""
filename = os.path.join(baseDir, inputPath, path)
raw = parseData(filename).cache()
failed = (raw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in failed.take(10):
print '%s - Invalid datafile line: %s' % (path, line)
valid = (raw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print '%s - Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (path,
raw.count(),
valid.count(),
failed.count())
assert failed.count() == 0
assert raw.count() == (valid.count() + 1)
return valid
googleSmall = loadData(GOOGLE_SMALL_PATH)
google = loadData(GOOGLE_PATH)
amazonSmall = loadData(AMAZON_SMALL_PATH)
amazon = loadData(AMAZON_PATH)
# COMMAND ----------
# MAGIC %md
# MAGIC Let's examine the lines that were just loaded in the two subset (small) files - one from Google and one from Amazon
# COMMAND ----------
for line in googleSmall.take(3):
print 'google: %s: %s\n' % (line[0], line[1])
for line in amazonSmall.take(3):
print 'amazon: %s: %s\n' % (line[0], line[1])
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 1: ER as Text Similarity - Bags of Words**
# MAGIC
# MAGIC A simple approach to entity resolution is to treat all records as strings and compute their similarity with a string distance function. In this part, we will build some components for performing bag-of-words text-analysis, and then use them to compute record similarity.
# MAGIC [Bag-of-words][bag-of-words] is a conceptually simple yet powerful approach to text analysis.
# MAGIC
# MAGIC The idea is to treat strings, a.k.a. **documents**, as *unordered collections* of words, or **tokens**, i.e., as bags of words.
# MAGIC > **Note on terminology**: a "token" is the result of parsing the document down to the elements we consider "atomic" for the task at hand. Tokens can be things like words, numbers, acronyms, or other exotica like word-roots or fixed-length character strings.
# MAGIC > Bag of words techniques all apply to any sort of token, so when we say "bag-of-words" we really mean "bag-of-tokens," strictly speaking.
# MAGIC Tokens become the atomic unit of text comparison. If we want to compare two documents, we count how many tokens they share in common. If we want to search for documents with keyword queries (this is what Google does), then we turn the keywords into tokens and find documents that contain them. The power of this approach is that it makes string comparisons insensitive to small differences that probably do not affect meaning much, for example, punctuation and word order.
# MAGIC [bag-of-words]: https://en.wikipedia.org/wiki/Bag-of-words_model
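# COMMAND ----------
# An illustrative aside (not part of the graded exercises): comparing two tiny "documents"
# as unordered bags of tokens by counting the tokens they share. The variable names below
# are made up for this sketch only.
bagOfWordsA = ['quick', 'brown', 'fox']
bagOfWordsB = ['quick', 'red', 'fox']
commonTokens = set(bagOfWordsA) & set(bagOfWordsB)
print 'Tokens in common: %s' % sorted(commonTokens)  # gives ['fox', 'quick']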
# COMMAND ----------
# MAGIC %md
# MAGIC #### **1(a) Tokenize a String**
# MAGIC Implement the function `simpleTokenize(string)` that takes a string and returns a list of non-empty tokens in the string. `simpleTokenize` should split strings using the provided regular expression. Since we want to make token-matching case insensitive, make sure all tokens are turned lower-case. Give an interpretation, in natural language, of what the regular expression, `split_regex`, matches.
# MAGIC If you need help with Regular Expressions, try the site [regex101](https://regex101.com/) where you can interactively explore the results of applying different regular expressions to strings. *Note that \W includes the "_" character*. You should use [re.split()](https://docs.python.org/2/library/re.html#re.split) to perform the string split. Also, make sure you remove any empty tokens.
# COMMAND ----------
# ANSWER
quickbrownfox = 'A quick brown fox jumps over the lazy dog.'
split_regex = r'\W+'
def simpleTokenize(string):
""" A simple implementation of input string tokenization
Args:
string (str): input string
Returns:
list: a list of tokens
"""
return [t for t in re.split(split_regex, string.lower()) if len(t)]
print simpleTokenize(quickbrownfox) # Should give ['a', 'quick', 'brown', ... ]
# COMMAND ----------
# TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
['a','quick','brown','fox','jumps','over','the','lazy','dog'],
'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
'simpleTokenize should handle punctuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
'simpleTokenize should not remove duplicates')
# COMMAND ----------
# PRIVATE_TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
['a','quick','brown','fox','jumps','over','the','lazy','dog'],
'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
                  'simpleTokenize should handle punctuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
'simpleTokenize should not remove duplicates')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(1b) Removing stopwords**
# MAGIC *[Stopwords][stopwords]* are common (English) words that do not contribute much to the content or meaning of a document (e.g., "the", "a", "is", "to", etc.). Stopwords add noise to bag-of-words comparisons, so they are usually excluded.
# MAGIC Using the included file "stopwords.txt", implement `tokenize`, an improved tokenizer that does not emit stopwords.
# MAGIC [stopwords]: https://en.wikipedia.org/wiki/Stop_words
# COMMAND ----------
# ANSWER
stopfile = os.path.join(baseDir, inputPath, STOPWORDS_PATH)
stopwords = set(sc.textFile(stopfile).collect())
print 'These are the stopwords: %s' % stopwords
def tokenize(string):
""" An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
"""
return [t for t in simpleTokenize(string) if t not in stopwords]
print tokenize(quickbrownfox) # Should give ['quick', 'brown', ... ]
# COMMAND ----------
# TEST Removing stopwords (1b)
Test.assertEquals(tokenize("Why a the?"), [], 'tokenize should remove all stopwords')
Test.assertEquals(tokenize("Being at the_?"), ['the_'], 'tokenize should handle non-stopwords')
Test.assertEquals(tokenize(quickbrownfox), ['quick','brown','fox','jumps','lazy','dog'],
'tokenize should handle sample text')
# COMMAND ----------
# PRIVATE_TEST Removing stopwords (1b)
Test.assertEquals(tokenize("Why a the?"), [], 'tokenize should remove all stopwords')
Test.assertEquals(tokenize("Being at the_?"), ['the_'], 'tokenize should handle non-stopwords')
Test.assertEquals(tokenize(quickbrownfox), ['quick','brown','fox','jumps','lazy','dog'],
'tokenize should handle sample text')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(1c) Tokenizing the small datasets**
# MAGIC Now let's tokenize the two *small* datasets. For each ID in a dataset, `tokenize` the values, and then count the total number of tokens.
# MAGIC How many tokens, total, are there in the two datasets?
# COMMAND ----------
# ANSWER
amazonRecToToken = amazonSmall.map(lambda s: (s[0], tokenize(s[1])))
googleRecToToken = googleSmall.map(lambda s: (s[0], tokenize(s[1])))
def countTokens(vendorRDD):
""" Count and return the number of tokens
Args:
vendorRDD (RDD of (recordId, tokenizedValue)): Pair tuple of record ID to tokenized output
Returns:
count: count of all tokens
"""
recordCount = vendorRDD.map(lambda s: len(s[1]))
recordSum = recordCount.reduce(lambda a, b : a + b)
return recordSum
totalTokens = countTokens(amazonRecToToken) + countTokens(googleRecToToken)
print 'There are %s tokens in the combined datasets' % totalTokens
# COMMAND ----------
# TEST Tokenizing the small datasets (1c)
Test.assertEquals(totalTokens, 22520, 'incorrect totalTokens')
# COMMAND ----------
# PRIVATE_TEST Tokenizing the small datasets (1c)
Test.assertEquals(totalTokens, 22520, 'incorrect totalTokens')
Test.assertEquals(countTokens(amazonRecToToken), 16707, 'incorrect token count for Amazon records')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(1d) Amazon record with the most tokens**
# MAGIC Which Amazon record has the biggest number of tokens?
# MAGIC In other words, you want to sort the records and get the one with the largest count of tokens.
# COMMAND ----------
# ANSWER
def findBiggestRecord(vendorRDD):
""" Find and return the record with the largest number of tokens
Args:
vendorRDD (RDD of (recordId, tokens)): input Pair Tuple of record ID and tokens
Returns:
list: a list of 1 Pair Tuple of record ID and tokens
"""
return(vendorRDD.takeOrdered(1, lambda s: -1 * len(s[1])))
biggestRecordAmazon = findBiggestRecord(amazonRecToToken)
print 'The Amazon record with ID "%s" has the most tokens (%s)' % (biggestRecordAmazon[0][0],
len(biggestRecordAmazon[0][1]))
# COMMAND ----------
# TEST Amazon record with the most tokens (1d)
Test.assertEquals(biggestRecordAmazon[0][0], 'b000o24l3q', 'incorrect biggestRecordAmazon')
Test.assertEquals(len(biggestRecordAmazon[0][1]), 1547, 'incorrect len for biggestRecordAmazon')
# COMMAND ----------
# PRIVATE_TEST Amazon record with the most tokens (1d)
Test.assertEquals(biggestRecordAmazon[0][0], 'b000o24l3q', 'incorrect biggestRecordAmazon')
Test.assertEquals(len(biggestRecordAmazon[0][1]), 1547, 'incorrect len for biggestRecordAmazon')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 2: ER as Text Similarity - Weighted Bag-of-Words using TF-IDF**
# MAGIC Bag-of-words comparisons are not very good when all tokens are treated the same: some tokens are more important than others. Weights give us a way to specify which tokens to favor. With weights, when we compare documents, instead of counting common tokens, we sum up the weights of common tokens. A good heuristic for assigning weights is called "Term-Frequency/Inverse-Document-Frequency," or [TF-IDF][tfidf] for short.
# MAGIC
# MAGIC **TF**
# MAGIC
# MAGIC TF rewards tokens that appear many times in the same document. It is computed as the frequency of a token in a document, that is, if document *d* contains 100 tokens and token *t* appears in *d* 5 times, then the TF weight of *t* in *d* is *5/100 = 1/20*. The intuition for TF is that if a word occurs often in a document, then it is more important to the meaning of the document.
# MAGIC
# MAGIC **IDF**
# MAGIC
# MAGIC IDF rewards tokens that are rare overall in a dataset. The intuition is that it is more significant if two documents share a rare word than a common one. IDF weight for a token, *t*, in a set of documents, *U*, is computed as follows:
# MAGIC * Let *N* be the total number of documents in *U*
# MAGIC * Find *n(t)*, the number of documents in *U* that contain *t*
# MAGIC * Then *IDF(t) = N/n(t)*.
# MAGIC
# MAGIC Note that *n(t)/N* is the frequency of *t* in *U*, and *N/n(t)* is the inverse frequency.
# MAGIC
# MAGIC > **Note on terminology**: Sometimes token weights depend on the document the token belongs to, that is, the same token may have a different weight when it's found in different documents. We call these weights *local* weights. TF is an example of a local weight, because it depends on the length of the source. On the other hand, some token weights only depend on the token, and are the same everywhere that token is found. We call these weights *global*, and IDF is one such weight.
# MAGIC
# MAGIC **TF-IDF**
# MAGIC
# MAGIC Finally, to bring it all together, the total TF-IDF weight for a token in a document is the product of its TF and IDF weights.
# MAGIC [tfidf]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf
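# COMMAND ----------
# A small hand-worked sketch of the weights described above (purely illustrative; the numbers
# are invented). For a 4-token document in which the token 'foo' appears twice, TF('foo') is
# 2/4 = 0.5. If 'foo' occurs in 10 out of 100 documents, IDF('foo') is 100/10 = 10, so the
# TF-IDF weight of 'foo' in that document is 0.5 * 10 = 5.0.
exampleTF = 2 / 4.0
exampleIDF = 100 / 10.0
print 'Example TF-IDF weight: %s' % (exampleTF * exampleIDF)  # gives 5.0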
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2a) Implement a TF function**
# MAGIC
# MAGIC Implement `tf(tokens)` that takes a list of tokens and returns a Python [dictionary](https://docs.python.org/2/tutorial/datastructures.html#dictionaries) mapping tokens to TF weights.
# MAGIC
# MAGIC The steps your function should perform are:
# MAGIC * Create an empty Python dictionary
# MAGIC * For each of the tokens in the input `tokens` list, count 1 for each occurrence and add the token to the dictionary
# MAGIC * For each of the tokens in the dictionary, divide the token's count by the total number of tokens in the input `tokens` list
# COMMAND ----------
# ANSWER
def tf(tokens):
""" Compute TF
Args:
tokens (list of str): input list of tokens from tokenize
Returns:
dictionary: a dictionary of tokens to its TF values
"""
counts = {}
length = len(tokens)
for t in tokens:
counts.setdefault(t, 0.0)
counts[t] += 1
return { t: counts[t] / length for t in counts }
print tf(tokenize(quickbrownfox)) # Should give { 'quick': 0.1666 ... }
# COMMAND ----------
# TEST Implement a TF function (2a)
tf_test = tf(tokenize(quickbrownfox))
Test.assertEquals(tf_test, {'brown': 0.16666666666666666, 'lazy': 0.16666666666666666,
'jumps': 0.16666666666666666, 'fox': 0.16666666666666666,
'dog': 0.16666666666666666, 'quick': 0.16666666666666666},
'incorrect result for tf on sample text')
tf_test2 = tf(tokenize('one_ one_ two!'))
Test.assertEquals(tf_test2, {'one_': 0.6666666666666666, 'two': 0.3333333333333333},
'incorrect result for tf test')
# COMMAND ----------
# PRIVATE_TEST Implement a TF function (2a)
tf_test = tf(tokenize(quickbrownfox))
Test.assertEquals(tf_test, {'brown': 0.16666666666666666, 'lazy': 0.16666666666666666,
'jumps': 0.16666666666666666, 'fox': 0.16666666666666666,
'dog': 0.16666666666666666, 'quick': 0.16666666666666666},
'incorrect result for tf on sample text')
tf_test2 = tf(tokenize('one_ one_ two!'))
Test.assertEquals(tf_test2, {'one_': 0.6666666666666666, 'two': 0.3333333333333333},
'incorrect result for tf test')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2b) Create a corpus**
# MAGIC Create a pair RDD called `corpusRDD`, consisting of a combination of the two small datasets, `amazonRecToToken` and `googleRecToToken`. Each element of the `corpusRDD` should be a pair consisting of a key from one of the small datasets (ID or URL) and the associated value for that key from the small datasets.
# COMMAND ----------
# ANSWER
corpusRDD = amazonRecToToken.union(googleRecToToken)
# COMMAND ----------
# TEST Create a corpus (2b)
Test.assertEquals(corpusRDD.count(), 400, 'incorrect corpusRDD.count()')
# COMMAND ----------
# PRIVATE_TEST Create a corpus (2b)
Test.assertEquals(corpusRDD.count(), 400, 'incorrect corpusRDD.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2c) Implement an IDFs function**
# MAGIC Implement `idfs` that assigns an IDF weight to every unique token in an RDD called `corpus`. The function should return a pair RDD where the `key` is the unique token and the value is the IDF weight for the token.
# MAGIC
# MAGIC Recall that the IDF weight for a token, *t*, in a set of documents, *U*, is computed as follows:
# MAGIC * Let *N* be the total number of documents in *U*.
# MAGIC * Find *n(t)*, the number of documents in *U* that contain *t*.
# MAGIC * Then *IDF(t) = N/n(t)*.
# MAGIC
# MAGIC The steps your function should perform are:
# MAGIC * Calculate *N*. Think about how you can calculate *N* from the input RDD.
# MAGIC * Create an RDD (*not a pair RDD*) containing the unique tokens from each document in the input `corpus`. For each document, you should only include a token once, *even if it appears multiple times in that document.*
# MAGIC * For each of the unique tokens, count the number of documents it appears in, *n(t)*, and then compute the IDF for that token: *N/n(t)*
# MAGIC
# MAGIC Use your `idfs` to compute the IDF weights for all tokens in `corpusRDD` (the combined small datasets).
# MAGIC How many unique tokens are there?
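# COMMAND ----------
# A quick illustrative check of the IDF definition above (not part of the exercises): with
# N = 4 documents and a token that appears in 2 of them, IDF = 4 / 2 = 2.0. The tiny corpus
# below is invented for this sketch only.
exampleCorpus = [('d1', ['fox', 'dog']), ('d2', ['fox']), ('d3', ['cat']), ('d4', ['cat', 'dog'])]
nDocsWithFox = sum(1 for (docId, tokens) in exampleCorpus if 'fox' in tokens)
print 'IDF of "fox": %s' % (float(len(exampleCorpus)) / nDocsWithFox)  # gives 2.0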
# COMMAND ----------
# ANSWER
def idfs(corpus):
""" Compute IDF
Args:
corpus (RDD): input corpus
Returns:
RDD: a RDD of (token, IDF value)
"""
uniqueTokens = corpus.flatMap(lambda s: list(set(s[1])))
tokenCountPairTuple = uniqueTokens.map(lambda token: (token, 1))
tokenSumPairTuple = tokenCountPairTuple.reduceByKey(lambda a, b : a + b)
N = float(corpus.count())
return (tokenSumPairTuple.map(lambda s: (s[0], float(N/s[1]))))
idfsSmall = idfs(amazonRecToToken.union(googleRecToToken))
uniqueTokenCount = idfsSmall.count()
print 'There are %s unique tokens in the small datasets.' % uniqueTokenCount
# COMMAND ----------
# TEST Implement an IDFs function (2c)
Test.assertEquals(uniqueTokenCount, 4772, 'incorrect uniqueTokenCount')
tokenSmallestIdf = idfsSmall.takeOrdered(1, lambda s: s[1])[0]
Test.assertEquals(tokenSmallestIdf[0], 'software', 'incorrect smallest IDF token')
Test.assertTrue(abs(tokenSmallestIdf[1] - 4.25531914894) < 0.0000000001,
'incorrect smallest IDF value')
# COMMAND ----------
# PRIVATE_TEST Implement an IDFs function (2c)
Test.assertEquals(uniqueTokenCount, 4772, 'incorrect uniqueTokenCount')
tokenSmallestIdf = idfsSmall.takeOrdered(1, lambda s: s[1])[0]
Test.assertEquals(tokenSmallestIdf[0], 'software', 'incorrect smallest IDF token')
Test.assertTrue(abs(tokenSmallestIdf[1] - 4.25531914894) < 0.0000000001,
'incorrect smallest IDF value')
firstElevenTokens = set(idfsSmall.takeOrdered(11, lambda s: s[1]))
Test.assertEquals(len(firstElevenTokens - set([('software', 4.25531914893617),('new', 6.896551724137931),('features', 6.896551724137931),('use', 7.017543859649122),('complete', 7.2727272727272725),('easy', 7.6923076923076925),('create', 8.333333333333334),('system', 8.333333333333334),('cd', 8.333333333333334),('1', 8.51063829787234), ('windows', 8.51063829787234)])), 0, 'incorrect firstTenTokens')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2d) Tokens with the smallest IDF**
# MAGIC Print out the 11 tokens with the smallest IDF in the combined small dataset.
# COMMAND ----------
smallIDFTokens = idfsSmall.takeOrdered(11, lambda s: s[1])
print smallIDFTokens
# COMMAND ----------
# ANSWER
#*answer*: The 10 smallest IDFs are for: (1) software, (2) new, (3) features, (4) use, (5) complete, (6) easy, (7 tie) cd, (7 tie) system, (7 tie) create, (10 tie) windows, (10 tie) 1.
#These terms are not useful for entity resolution because they are generic terms for marketing, prices, and product categories.
# COMMAND ----------
# ANSWER
# Quiz question:
# For part (2d), do you think the terms are useful for entity resolution?
# ( ) Yes
# (*) No
#
# Why or why not?
# ( ) These terms are useful for entity resolution because they describe distinguishing tokens in product descriptions
# ( ) These terms are not useful for entity resolution because they are generic terms for marketing, prices, and product categories.
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2e) IDF Histogram**
# MAGIC Plot a histogram of IDF values. Be sure to use appropriate scaling and bucketing for the data.
# MAGIC First plot the histogram using `matplotlib`
# COMMAND ----------
import matplotlib.pyplot as plt
small_idf_values = idfsSmall.map(lambda s: s[1]).collect()
fig = plt.figure(figsize=(8,3))
plt.hist(small_idf_values, 50, log=True)
display(fig)
pass
# COMMAND ----------
from pyspark.sql import Row
# Create a DataFrame and visualize using display()
idfsToCountRow = idfsSmall.map(lambda (x, y): Row(token=x, value=y))
idfsToCountDF = sqlContext.createDataFrame(idfsToCountRow)
display(idfsToCountDF)
# COMMAND ----------
# ANSWER
# Quiz question:
# Using the plot in (2e), what conclusions can you draw from the distribution of weights?
#
# *ANSWER:* There is a long tail of rare words in the corpus (these have large IDF values).
# [explanation]
# There are gaps between IDF values because IDF is a function of a discrete variable, i.e., a document count.
# [explanation]
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(2f) Implement a TF-IDF function**
# MAGIC Use your `tf` function to implement a `tfidf(tokens, idfs)` function that takes a list of tokens from a document and a Python dictionary of IDF weights and returns a Python dictionary mapping individual tokens to total TF-IDF weights.
# MAGIC
# MAGIC The steps your function should perform are:
# MAGIC * Calculate the token frequencies (TF) for `tokens`
# MAGIC * Create a Python dictionary where each token maps to the token's frequency times the token's IDF weight
# MAGIC
# MAGIC Use your `tfidf` function to compute the weights of Amazon product record 'b000hkgj8k'. To do this, we need to extract the record for the token from the tokenized small Amazon dataset and we need to convert the IDFs for the small dataset into a Python dictionary. We can do the first part, by using a `filter()` transformation to extract the matching record and a `collect()` action to return the value to the driver.
# MAGIC
# MAGIC For the second part, we use the [`collectAsMap()` action](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.collectAsMap) to return the IDFs to the driver as a Python dictionary.
# COMMAND ----------
# ANSWER
def tfidf(tokens, idfs):
""" Compute TF-IDF
Args:
tokens (list of str): input list of tokens from tokenize
idfs (dictionary): record to IDF value
Returns:
dictionary: a dictionary of records to TF-IDF values
"""
tfs = tf(tokens)
return { t: tfs[t] * idfs[t] for t in tfs }
rec_b000hkgj8k = amazonRecToToken.filter(lambda x: x[0] == 'b000hkgj8k').collect()[0][1]
idfsSmallWeights = idfsSmall.collectAsMap()
rec_b000hkgj8k_weights = tfidf(rec_b000hkgj8k, idfsSmallWeights)
print 'Amazon record "b000hkgj8k" has tokens and weights:\n%s' % rec_b000hkgj8k_weights
# COMMAND ----------
# TEST Implement a TF-IDF function (2f)
Test.assertEquals(rec_b000hkgj8k_weights,
{'autocad': 33.33333333333333, 'autodesk': 8.333333333333332,
'courseware': 66.66666666666666, 'psg': 33.33333333333333,
'2007': 3.5087719298245617, 'customizing': 16.666666666666664,
'interface': 3.0303030303030303}, 'incorrect rec_b000hkgj8k_weights')
# COMMAND ----------
# PRIVATE_TEST Implement a TF-IDF function (2f)
Test.assertEquals(rec_b000hkgj8k_weights, {'autocad': 33.33333333333333, 'autodesk': 8.333333333333332, 'courseware': 66.66666666666666, 'psg': 33.33333333333333, '2007': 3.5087719298245617, 'customizing': 16.666666666666664, 'interface': 3.0303030303030303}, 'incorrect rec_b000hkgj8k_weights')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 3: ER as Text Similarity - Cosine Similarity**
# MAGIC Now we are ready to do text comparisons in a formal way. The metric of string distance we will use is called **[cosine similarity][cosine]**. We will treat each document as a vector in some high dimensional space. Then, to compare two documents we compute the cosine of the angle between their two document vectors. This is *much* easier than it sounds.
# MAGIC
# MAGIC The first question to answer is how do we represent documents as vectors? The answer is familiar: bag-of-words! We treat each unique token as a dimension, and treat token weights as magnitudes in their respective token dimensions. For example, suppose we use simple counts as weights, and we want to interpret the string "Hello, world! Goodbye, world!" as a vector. Then in the "hello" and "goodbye" dimensions the vector has value 1, in the "world" dimension it has value 2, and it is zero in all other dimensions.
# MAGIC
# MAGIC The next question is: given two vectors how do we find the cosine of the angle between them? Recall the formula for the dot product of two vectors:
# MAGIC \\[ a \cdot b = \| a \| \| b \| \cos \theta \\]
# MAGIC Here \\( a \cdot b = \sum a_i b_i \\) is the ordinary dot product of two vectors, and \\( \|a\| = \sqrt{ \sum a_i^2 } \\) is the norm of \\( a \\).
# MAGIC
# MAGIC We can rearrange terms and solve for the cosine to find it is simply the normalized dot product of the vectors. With our vector model, the dot product and norm computations are simple functions of the bag-of-words document representations, so we now have a formal way to compute similarity:
# MAGIC \\[ similarity = \cos \theta = \frac{a \cdot b}{\|a\| \|b\|} = \frac{\sum a_i b_i}{\sqrt{\sum a_i^2} \sqrt{\sum b_i^2}} \\]
# MAGIC
# MAGIC Setting aside the algebra, the geometric interpretation is more intuitive. The angle between two document vectors is small if they share many tokens in common, because they are pointing in roughly the same direction. For that case, the cosine of the angle will be large. Otherwise, if the angle is large (and they have few words in common), the cosine is small. Therefore, cosine similarity scales proportionally with our intuitive sense of similarity.
# MAGIC [cosine]: https://en.wikipedia.org/wiki/Cosine_similarity
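# MAGIC
# MAGIC A small hand-worked example of this formula follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): cosine similarity worked out by
# hand for two tiny made-up bag-of-words dictionaries, using plain Python. The token
# weights are hypothetical; the point is only the dot-product / norm arithmetic from the
# formula above.
import math
bagA = {'hello': 1, 'world': 2, 'goodbye': 1}
bagB = {'hello': 1, 'world': 1, 'spark': 3}
exampleDot = sum(bagA[t] * bagB[t] for t in bagA if t in bagB)   # 1*1 + 2*1 = 3
exampleNormA = math.sqrt(sum(v * v for v in bagA.values()))      # sqrt(6)
exampleNormB = math.sqrt(sum(v * v for v in bagB.values()))      # sqrt(11)
print 'cosine similarity = %.4f' % (exampleDot / (exampleNormA * exampleNormB))  # ~0.3693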
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(3a) Implement the components of a `cosineSimilarity` function**
# MAGIC Implement the components of a `cosineSimilarity` function.
# MAGIC Use the `tokenize` and `tfidf` functions, and the IDF weights from Part 2 for extracting tokens and assigning them weights.
# MAGIC The steps you should perform are:
# MAGIC * Define a function `dotprod` that takes two Python dictionaries and produces the dot product of them, where the dot product is defined as the sum of the product of values for tokens that appear in *both* dictionaries
# MAGIC * Define a function `norm` that returns the square root of the dot product of a dictionary and itself
# MAGIC * Define a function `cossim` that returns the dot product of two dictionaries divided by the norm of the first dictionary and then by the norm of the second dictionary
# COMMAND ----------
# ANSWER
import math
def dotprod(a, b):
return sum([a[t] * b[t] for t in a if t in b])
def norm(a):
return math.sqrt(dotprod(a, a))
def cossim(a, b):
return dotprod(a, b) / norm(a) / norm(b)
testVec1 = {'foo': 2, 'bar': 3, 'baz': 5 }
testVec2 = {'foo': 1, 'bar': 0, 'baz': 20 }
dp = dotprod(testVec1, testVec2)
nm = norm(testVec1)
print dp, nm
# COMMAND ----------
# TEST Implement the components of a cosineSimilarity function (3a)
Test.assertEquals(dp, 102, 'incorrect dp')
Test.assertTrue(abs(nm - 6.16441400297) < 0.0000001, 'incorrect nm')
# COMMAND ----------
# PRIVATE_TEST Implement the components of a cosineSimilarity function (3a)
Test.assertEquals(dp, 102, 'incorrect dp')
Test.assertTrue(abs(nm - 6.16441400297) < 0.0000001, 'incorrect nm')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(3b) Implement a `cosineSimilarity` function**
# MAGIC Implement a `cosineSimilarity(string1, string2, idfsDictionary)` function that takes two strings and a dictionary of IDF weights, and computes their cosine similarity in the context of some global IDF weights.
# MAGIC
# MAGIC The steps you should perform are:
# MAGIC * Apply your `tfidf` function to the tokenized first and second strings, using the dictionary of IDF weights
# MAGIC * Compute and return your `cossim` function applied to the results of the two `tfidf` functions
# COMMAND ----------
# ANSWER
def cosineSimilarity(string1, string2, idfsDictionary):
""" Compute cosine similarity between two strings
Args:
string1 (str): first string
string2 (str): second string
idfsDictionary (dictionary): a dictionary of IDF values
Returns:
cossim: cosine similarity value
"""
w1 = tfidf(tokenize(string1), idfsDictionary)
w2 = tfidf(tokenize(string2), idfsDictionary)
return cossim(w1, w2)
cossimAdobe = cosineSimilarity('Adobe Photoshop',
'Adobe Illustrator',
idfsSmallWeights)
print cossimAdobe
# COMMAND ----------
# TEST Implement a cosineSimilarity function (3b)
Test.assertTrue(abs(cossimAdobe - 0.0577243382163) < 0.0000001, 'incorrect cossimAdobe')
# COMMAND ----------
# PRIVATE_TEST Implement a cosineSimilarity function (3b)
Test.assertTrue(abs(cossimAdobe - 0.0577243382163) < 0.0000001, 'incorrect cossimAdobe')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(3c) Perform Entity Resolution**
# MAGIC Now we can finally do some entity resolution!
# MAGIC For *every* product record in the small Google dataset, use your `cosineSimilarity` function to compute its similarity to every record in the small Amazon dataset. Then, build a dictionary mapping `(Google URL, Amazon ID)` tuples to similarity scores between 0 and 1.
# MAGIC We'll do this computation two different ways: first without a broadcast variable, and then using a broadcast variable.
# MAGIC
# MAGIC The steps you should perform are:
# MAGIC * Create an RDD that is a combination of the small Google and small Amazon datasets that has as elements all pairs of elements (a, b), where a comes from the small Google dataset and b comes from the small Amazon dataset. The result will be an RDD of the form: `[ ((Google URL1, Google String1), (Amazon ID1, Amazon String1)), ((Google URL1, Google String1), (Amazon ID2, Amazon String2)), ((Google URL2, Google String2), (Amazon ID1, Amazon String1)), ... ]`
# MAGIC * Define a worker function that, given an element from the combination RDD, computes the cosine similarity for the two records in the element
# MAGIC * Apply the worker function to every element in the RDD
# MAGIC
# MAGIC Now, compute the similarity between Amazon record `b000o24l3q` and Google record `http://www.google.com/base/feeds/snippets/17242822440574356561`.
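# MAGIC
# MAGIC A toy illustration of what `cartesian()` produces follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): the shape of the pairs produced
# by cartesian(), on two tiny hand-made RDDs. It assumes the notebook's SparkContext
# `sc`; the records themselves are made up.
toyGoogle = sc.parallelize([('url1', 'adobe photoshop cs3'), ('url2', 'quicken deluxe')])
toyAmazon = sc.parallelize([('b0001', 'adobe photoshop cs3 upgrade')])
print sorted(toyGoogle.cartesian(toyAmazon).collect())
# [(('url1', 'adobe photoshop cs3'), ('b0001', 'adobe photoshop cs3 upgrade')),
#  (('url2', 'quicken deluxe'), ('b0001', 'adobe photoshop cs3 upgrade'))]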
# COMMAND ----------
# ANSWER
crossSmall = (googleSmall
.cartesian(amazonSmall)
.cache())
def computeSimilarity(record):
""" Compute similarity on a combination record
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
"""
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallWeights)
return (googleURL, amazonID, cs)
similarities = (crossSmall
.map(computeSimilarity)
.cache())
def similar(amazonID, googleURL):
""" Return similarity value
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
"""
return (similarities
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogle = similar('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogle
# COMMAND ----------
# TEST Perform Entity Resolution (3c)
Test.assertTrue(abs(similarityAmazonGoogle - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
# COMMAND ----------
# PRIVATE_TEST Perform Entity Resolution (3c)
Test.assertTrue(abs(similarityAmazonGoogle - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
similarityAnother = similar('b000o24l3q', 'http://www.google.com/base/feeds/snippets/18274317756231697680')
Test.assertTrue(abs(similarityAnother - 0.093899589276) < 0.0000001, 'incorrect another similarity test')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(3d) Perform Entity Resolution with Broadcast Variables**
# MAGIC The solution in (3c) works well for small datasets, but it requires Spark to (automatically) send the `idfsSmallWeights` variable to all the workers. If we didn't `cache()` similarities, then it might have to be recreated if we run `similar()` multiple times. This would cause Spark to send `idfsSmallWeights` every time.
# MAGIC
# MAGIC Instead, we can use a broadcast variable - we define the broadcast variable in the driver and then we can refer to it in each worker. Spark saves the broadcast variable at each worker, so it is only sent once.
# MAGIC
# MAGIC The steps you should perform are:
# MAGIC * Define a `computeSimilarityBroadcast` function that, given an element from the combination RDD, computes the cosine similarity for the two records in the element. This will be the same as the worker function `computeSimilarity` in (3c) except that it uses a broadcast variable.
# MAGIC * Apply the worker function to every element in the RDD
# MAGIC
# MAGIC Again, compute the similarity between Amazon record `b000o24l3q` and Google record `http://www.google.com/base/feeds/snippets/17242822440574356561`.
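# MAGIC
# MAGIC A minimal sketch of the broadcast-variable pattern follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise) of the broadcast-variable pattern
# on its own, separate from the lab's datasets. It assumes the notebook's SparkContext
# `sc`; the lookup table and RDD contents are made up.
exampleLookup = sc.broadcast({'a': 1.0, 'b': 2.0, 'c': 3.0})   # shipped to each worker once
exampleKeys = sc.parallelize(['a', 'b', 'b', 'c'])
# Workers read the shared dictionary through .value instead of capturing a local copy
exampleTotal = exampleKeys.map(lambda k: exampleLookup.value[k]).sum()
print 'example broadcast lookup total = %s' % exampleTotal      # 8.0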
# COMMAND ----------
# ANSWER
def computeSimilarityBroadcast(record):
""" Compute similarity on a combination record, using Broadcast variable
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
"""
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallBroadcast.value)
return (googleURL, amazonID, cs)
idfsSmallBroadcast = sc.broadcast(idfsSmallWeights)
similaritiesBroadcast = (crossSmall
.map(computeSimilarityBroadcast)
.cache())
def similarBroadcast(amazonID, googleURL):
""" Return similarity value, computed using Broadcast variable
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
"""
return (similaritiesBroadcast
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogleBroadcast = similarBroadcast('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogleBroadcast
# COMMAND ----------
# TEST Perform Entity Resolution with Broadcast Variables (3d)
from pyspark import Broadcast
Test.assertTrue(isinstance(idfsSmallBroadcast, Broadcast), 'incorrect idfsSmallBroadcast')
Test.assertEquals(len(idfsSmallBroadcast.value), 4772, 'incorrect idfsSmallBroadcast value')
Test.assertTrue(abs(similarityAmazonGoogleBroadcast - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
# COMMAND ----------
# PRIVATE_TEST Perform Entity Resolution with Broadcast Variables (3d)
from pyspark import Broadcast
Test.assertTrue(isinstance(idfsSmallBroadcast, Broadcast), 'incorrect idfsSmallBroadcast')
Test.assertEquals(len(idfsSmallBroadcast.value), 4772, 'incorrect idfsSmallBroadcast value')
Test.assertTrue(abs(similarityAmazonGoogleBroadcast - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
similarityAnotherBroadcast = similarBroadcast('b000o24l3q', 'http://www.google.com/base/feeds/snippets/18274317756231697680')
Test.assertTrue(abs(similarityAnotherBroadcast - 0.093899589276) < 0.0000001,
'incorrect another similarity test')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(3e) Perform a Gold Standard evaluation**
# MAGIC
# MAGIC First, we'll load the "gold standard" data and use it to answer several questions. We read and parse the Gold Standard data, where the format of each line is "Amazon Product ID","Google URL". The resulting RDD has elements of the form ("AmazonID GoogleURL", 'gold')
# COMMAND ----------
GOLDFILE_PATTERN = '^(.+),(.+)'
# Parse each line of a data file using the specified regular expression pattern
def parse_goldfile_line(goldfile_line):
""" Parse a line from the 'golden standard' data file
Args:
goldfile_line: a line of data
Returns:
        pair: ((key, 'gold'), 1) if the line parses, (line, 0) for the header line, or (line, -1) on failure
"""
match = re.search(GOLDFILE_PATTERN, goldfile_line)
if match is None:
print 'Invalid goldfile line: %s' % goldfile_line
return (goldfile_line, -1)
elif match.group(1) == '"idAmazon"':
print 'Header datafile line: %s' % goldfile_line
return (goldfile_line, 0)
else:
key = '%s %s' % (removeQuotes(match.group(1)), removeQuotes(match.group(2)))
return ((key, 'gold'), 1)
goldfile = os.path.join(baseDir, inputPath, GOLD_STANDARD_PATH)
gsRaw = (sc
.textFile(goldfile)
.map(parse_goldfile_line)
.cache())
gsFailed = (gsRaw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in gsFailed.take(10):
print 'Invalid goldfile line: %s' % line
goldStandard = (gsRaw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (gsRaw.count(),
goldStandard.count(),
gsFailed.count())
assert (gsFailed.count() == 0)
assert (gsRaw.count() == (goldStandard.count() + 1))
# COMMAND ----------
# MAGIC %md
# MAGIC #### Using the "gold standard" data we can answer the following questions:
# MAGIC
# MAGIC * How many true duplicate pairs are there in the small datasets?
# MAGIC * What is the average similarity score for true duplicates?
# MAGIC * What about for non-duplicates?
# MAGIC The steps you should perform are:
# MAGIC * Create a new `sims` RDD from the `similaritiesBroadcast` RDD, where each element consists of a pair of the form ("AmazonID GoogleURL", cosineSimilarityScore). An example entry from `sims` is: ('b000bi7uqs http://www.google.com/base/feeds/snippets/18403148885652932189', 0.40202896125621296)
# MAGIC * Combine the `sims` RDD with the `goldStandard` RDD by creating a new `trueDupsRDD` RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs that appear in both the `sims` RDD and the `goldStandard` RDD. Hint: you can do this using the join() transformation.
# MAGIC * Count the number of true duplicate pairs in the `trueDupsRDD` dataset
# MAGIC * Compute the average similarity score for true duplicates in the `trueDupsRDD` dataset. Remember to use `float` for the calculation
# MAGIC * Create a new `nonDupsRDD` RDD that has just the cosine similarity scores for those "AmazonID GoogleURL" pairs from the `similaritiesBroadcast` RDD that do **not** appear in both the `sims` RDD and the `goldStandard` RDD.
# MAGIC * Compute the average similarity score for non-duplicates in that dataset. Remember to use `float` for the calculation
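# MAGIC
# MAGIC A toy illustration of `join()` versus `leftOuterJoin()` follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): join() keeps only keys present
# in both RDDs, while leftOuterJoin() keeps every left key and pads the misses with None,
# which is how the non-duplicates are separated out below. Assumes the notebook's
# SparkContext `sc`; the IDs and scores are made up.
toySims = sc.parallelize([('id1 url1', 0.9), ('id2 url2', 0.1)])
toyGold = sc.parallelize([('id1 url1', 'gold')])
print toySims.join(toyGold).collect()               # [('id1 url1', (0.9, 'gold'))]
print sorted(toySims.leftOuterJoin(toyGold).collect())
# [('id1 url1', (0.9, 'gold')), ('id2 url2', (0.1, None))]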
# COMMAND ----------
# ANSWER
sims = similaritiesBroadcast.map(lambda x: ("%s %s" % (x[1], x[0]), x[2]))
trueDupsRDD = (sims
.join(goldStandard)
.map(lambda a: a[1][0]))
trueDupsCount = trueDupsRDD.count()
avgSimDups = float(trueDupsRDD.reduce(lambda a, b: a + b)) / float(trueDupsCount)
nonDupsRDD = (sims
.leftOuterJoin(goldStandard)
.filter(lambda x: (x[1][1] is None))
.map(lambda a: a[1][0]))
avgSimNon = float(nonDupsRDD.reduce(lambda a, b: a + b)) / float(sims.count() - trueDupsCount)
print 'There are %s true duplicates.' % trueDupsCount
print 'The average similarity of true duplicates is %s.' % avgSimDups
print 'And for non duplicates, it is %s.' % avgSimNon
# COMMAND ----------
# TEST Perform a Gold Standard evaluation (3e)
Test.assertEquals(trueDupsCount, 146, 'incorrect trueDupsCount')
Test.assertTrue(abs(avgSimDups - 0.264332573435) < 0.0000001, 'incorrect avgSimDups')
Test.assertTrue(abs(avgSimNon - 0.00123476304656) < 0.0000001, 'incorrect avgSimNon')
# COMMAND ----------
# PRIVATE_TEST Perform a Gold Standard evaluation (3e)
Test.assertEquals(trueDupsCount, 146, 'incorrect trueDupsCount')
Test.assertTrue(abs(avgSimDups - 0.264332573435) < 0.0000001, 'incorrect avgSimDups')
Test.assertTrue(abs(avgSimNon - 0.00123476304656) < 0.0000001, 'incorrect avgSimNon')
# COMMAND ----------
# ANSWER
# Quiz question:
# Based on the answers to the questions in part (3e), is cosine similarity doing a good job, qualitatively speaking, of identifying duplicates?
# (*) Yes
# ( ) No
# *answer*: Cosine similarity looks useful, because duplicates on average are over 200X more similar than non-duplicates. As long as variance isn't too high, that's a good signal.
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 4: Scalable ER**
# MAGIC In the previous parts, we built a text similarity function and used it for small scale entity resolution. Our implementation is limited by its quadratic run time complexity, and is not practical for even modestly sized datasets. In this part, we will implement a more scalable algorithm and use it to do entity resolution on the full dataset.
# MAGIC
# MAGIC #### Inverted Indices
# MAGIC To improve our ER algorithm from the earlier parts, we should begin by analyzing its running time. In particular, the algorithm above is quadratic in two ways. First, we did a lot of redundant computation of tokens and weights, since each record was reprocessed every time it was compared. Second, we made quadratically many token comparisons between records.
# MAGIC
# MAGIC The first source of quadratic overhead can be eliminated with precomputation and look-up tables, but the second source is a little more tricky. In the worst case, every token in every record in one dataset exists in every record in the other dataset, and therefore every token makes a non-zero contribution to the cosine similarity. In this case, token comparison is unavoidably quadratic.
# MAGIC
# MAGIC But in reality most records have nothing (or very little) in common. Moreover, it is typical for a record in one dataset to have at most one duplicate record in the other dataset (this is the case assuming each dataset has been de-duplicated against itself). In this case, the output is linear in the size of the input and we can hope to achieve linear running time.
# MAGIC
# MAGIC An [**inverted index**](https://en.wikipedia.org/wiki/Inverted_index) is a data structure that will allow us to avoid making quadratically many token comparisons. It maps each token in the dataset to the list of documents that contain the token. So, instead of comparing, record by record, each token to every other token to see if they match, we will use inverted indices to *look up* records that match on a particular token.
# MAGIC
# MAGIC > **Note on terminology**: In text search, a *forward* index maps documents in a dataset to the tokens they contain. An *inverted* index supports the inverse mapping.
# MAGIC
# MAGIC > **Note**: For this section, use the complete Google and Amazon datasets, not the samples
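# MAGIC
# MAGIC A small plain-Python illustration of a forward versus an inverted index follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): a forward index and the
# corresponding inverted index built with plain Python dictionaries (no Spark). The
# documents and tokens are made up; the real inverted indices for the full datasets are
# built in part (4d) below.
forwardIndex = {'doc1': ['apple', 'ipod'],
                'doc2': ['apple', 'macbook'],
                'doc3': ['zune']}
invertedIndex = {}
for doc, tokens in forwardIndex.items():
    for token in tokens:
        invertedIndex.setdefault(token, []).append(doc)
# Only documents sharing a token ever need to be compared:
print sorted(invertedIndex['apple'])                 # ['doc1', 'doc2']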
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4a) Tokenize the full dataset**
# MAGIC Tokenize each of the two full datasets for Google and Amazon.
# COMMAND ----------
# ANSWER
amazonFullRecToToken = amazon.map(lambda s: (s[0], tokenize(s[1])))
googleFullRecToToken = google.map(lambda s: (s[0], tokenize(s[1])))
print 'Amazon full dataset is %s products, Google full dataset is %s products' % (amazonFullRecToToken.count(),
googleFullRecToToken.count())
# COMMAND ----------
# TEST Tokenize the full dataset (4a)
Test.assertEquals(amazonFullRecToToken.count(), 1363, 'incorrect amazonFullRecToToken.count()')
Test.assertEquals(googleFullRecToToken.count(), 3226, 'incorrect googleFullRecToToken.count()')
# COMMAND ----------
# PRIVATE_TEST Tokenize the full dataset (4a)
Test.assertEquals(amazonFullRecToToken.count(), 1363, 'incorrect amazonFullRecToToken.count()')
Test.assertEquals(googleFullRecToToken.count(), 3226, 'incorrect googleFullRecToToken.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4b) Compute IDFs and TF-IDFs for the full datasets**
# MAGIC
# MAGIC We will reuse your code from above to compute IDF weights for the complete combined datasets.
# MAGIC The steps you should perform are:
# MAGIC * Create a new `fullCorpusRDD` that contains the tokens from the full Amazon and Google datasets.
# MAGIC * Apply your `idfs` function to the `fullCorpusRDD`
# MAGIC * Create a broadcast variable containing a dictionary of the IDF weights for the full dataset.
# MAGIC * For each of the Amazon and Google full datasets, create weight RDDs that map IDs/URLs to TF-IDF weighted token vectors.
# COMMAND ----------
# ANSWER
fullCorpusRDD = amazonFullRecToToken.union(googleFullRecToToken)
idfsFull = idfs(fullCorpusRDD)
idfsFullCount = idfsFull.count()
print 'There are %s unique tokens in the full datasets.' % idfsFullCount
# Recompute IDFs for full dataset
idfsFullWeights = idfsFull.collectAsMap()
idfsFullBroadcast = sc.broadcast(idfsFullWeights)
# Pre-compute TF-IDF weights. Build mappings from record ID weight vector.
amazonWeightsRDD = amazonFullRecToToken.map(lambda x: (x[0], tfidf(x[1], idfsFullBroadcast.value)))
googleWeightsRDD = googleFullRecToToken.map(lambda x: (x[0], tfidf(x[1], idfsFullBroadcast.value)))
print 'There are %s Amazon weights and %s Google weights.' % (amazonWeightsRDD.count(),
googleWeightsRDD.count())
# COMMAND ----------
# TEST Compute IDFs and TF-IDFs for the full datasets (4b)
Test.assertEquals(idfsFullCount, 17078, 'incorrect idfsFullCount')
Test.assertEquals(amazonWeightsRDD.count(), 1363, 'incorrect amazonWeightsRDD.count()')
Test.assertEquals(googleWeightsRDD.count(), 3226, 'incorrect googleWeightsRDD.count()')
# COMMAND ----------
# PRIVATE_TEST Compute IDFs and TF-IDFs for the full datasets (4b)
Test.assertEquals(idfsFullCount, 17078, 'incorrect idfsFullCount')
Test.assertEquals(amazonWeightsRDD.count(), 1363, 'incorrect amazonWeightsRDD.count()')
Test.assertEquals(googleWeightsRDD.count(), 3226, 'incorrect googleWeightsRDD.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4c) Compute Norms for the weights from the full datasets**
# MAGIC
# MAGIC We will reuse your code from above to compute norms of the IDF weights for the complete combined dataset.
# MAGIC The steps you should perform are:
# MAGIC * Create two collections, one for each of the full Amazon and Google datasets, where IDs/URLs map to the norm of the associated TF-IDF weighted token vectors.
# MAGIC * Convert each collection into a broadcast variable, containing a dictionary of the norm of IDF weights for the full dataset
# COMMAND ----------
# ANSWER
amazonNorms = amazonWeightsRDD.map(lambda x: (x[0], norm(x[1]))).collectAsMap()
amazonNormsBroadcast = sc.broadcast(amazonNorms)
googleNorms = googleWeightsRDD.map(lambda x: (x[0], norm(x[1]))).collectAsMap()
googleNormsBroadcast = sc.broadcast(googleNorms)
print 'There are %s Amazon norms and %s Google norms.' % (len(amazonNorms), len(googleNorms))
# COMMAND ----------
# TEST Compute Norms for the weights from the full datasets (4c)
Test.assertTrue(isinstance(amazonNormsBroadcast, Broadcast), 'incorrect amazonNormsBroadcast')
Test.assertEquals(len(amazonNormsBroadcast.value), 1363, 'incorrect amazonNormsBroadcast.value')
Test.assertTrue(isinstance(googleNormsBroadcast, Broadcast), 'incorrect googleNormsBroadcast')
Test.assertEquals(len(googleNormsBroadcast.value), 3226, 'incorrect googleNormsBroadcast.value')
# COMMAND ----------
# PRIVATE_TEST Compute Norms for the weights from the full datasets (4c)
Test.assertTrue(isinstance(amazonNormsBroadcast, Broadcast), 'incorrect amazonNormsBroadcast')
Test.assertEquals(len(amazonNormsBroadcast.value), 1363, 'incorrect amazonNormsBroadcast.value')
Test.assertTrue(isinstance(googleNormsBroadcast, Broadcast), 'incorrect googleNormsBroadcast')
Test.assertEquals(len(googleNormsBroadcast.value), 3226, 'incorrect googleNormsBroadcast.value')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4d) Create inverted indices from the full datasets**
# MAGIC
# MAGIC Build inverted indices of both data sources.
# MAGIC The steps you should perform are:
# MAGIC * Create an invert function that given a pair of (ID/URL, TF-IDF weighted token vector), returns a list of pairs of (token, ID/URL). Recall that the TF-IDF weighted token vector is a Python dictionary with keys that are tokens and values that are weights.
# MAGIC * Use your invert function to convert the full Amazon and Google TF-IDF weighted token vector datasets into two RDDs where each element is a pair of a token and an ID/URL that contains that token. These are inverted indices.
# COMMAND ----------
# ANSWER
def invert(record):
""" Invert (ID, tokens) to a list of (token, ID)
Args:
record: a pair, (ID, token vector)
Returns:
pairs: a list of pairs of token to ID
"""
value = record[0]
keys = record[1].keys()
pairs = []
for key in keys:
pairs.append((key, value))
return (pairs)
amazonInvPairsRDD = (amazonWeightsRDD
.flatMap(invert)
.cache())
googleInvPairsRDD = (googleWeightsRDD
.flatMap(invert)
.cache())
print 'There are %s Amazon inverted pairs and %s Google inverted pairs.' % (amazonInvPairsRDD.count(),
googleInvPairsRDD.count())
# COMMAND ----------
# TEST Create inverted indices from the full datasets (4d)
invertedPair = invert((1, {'foo': 2}))
Test.assertEquals(invertedPair[0][1], 1, 'incorrect invert result')
Test.assertEquals(amazonInvPairsRDD.count(), 111387, 'incorrect amazonInvPairsRDD.count()')
Test.assertEquals(googleInvPairsRDD.count(), 77678, 'incorrect googleInvPairsRDD.count()')
# COMMAND ----------
# PRIVATE_TEST Create inverted indices from the full datasets (4d)
invertedPair = invert((1, {'foo': 2}))
Test.assertEquals(invertedPair[0][1], 1, 'incorrect invert result')
Test.assertEquals(amazonInvPairsRDD.count(), 111387, 'incorrect amazonInvPairsRDD.count()')
Test.assertEquals(googleInvPairsRDD.count(), 77678, 'incorrect googleInvPairsRDD.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4e) Identify common tokens from the full dataset**
# MAGIC
# MAGIC We are now in position to efficiently perform ER on the full datasets. Implement the following algorithm to build an RDD that maps a pair of (ID, URL) to a list of tokens they share in common:
# MAGIC * Using the two inverted indices (RDDs where each element is a pair of a token and an ID or URL that contains that token), create a new RDD that contains only tokens that appear in both datasets. This will yield an RDD of pairs of the form (token, (ID, URL)).
# MAGIC * We need a mapping from (ID, URL) to token, so create a function that will swap the elements of the RDD you just created to create this new RDD consisting of ((ID, URL), token) pairs.
# MAGIC * Finally, create an RDD consisting of pairs mapping (ID, URL) to all the tokens the pair shares in common
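# MAGIC
# MAGIC A toy walk-through of this join / swap / groupByKey pipeline follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): the join -> swap -> groupByKey
# pipeline described above, run on two tiny hand-made inverted indices. Assumes the
# notebook's SparkContext `sc`; IDs, URLs and tokens are made up.
toyAmazonInv = sc.parallelize([('ipod', 'a1'), ('apple', 'a1'), ('apple', 'a2')])
toyGoogleInv = sc.parallelize([('apple', 'g1'), ('zune', 'g2')])
toyCommon = (toyAmazonInv
             .join(toyGoogleInv)                                  # (token, (ID, URL))
             .map(lambda rec: ((rec[1][0], rec[1][1]), rec[0]))   # ((ID, URL), token)
             .groupByKey()
             .map(lambda rec: (rec[0], sorted(rec[1]))))
print sorted(toyCommon.collect())
# [(('a1', 'g1'), ['apple']), (('a2', 'g1'), ['apple'])]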
# COMMAND ----------
# ANSWER
def swap(record):
""" Swap (token, (ID, URL)) to ((ID, URL), token)
Args:
record: a pair, (token, (ID, URL))
Returns:
pair: ((ID, URL), token)
"""
token = record[0]
keys = (record[1][0], record[1][1])
return (keys, token)
commonTokens = (amazonInvPairsRDD.join(googleInvPairsRDD)
.map(swap)
.groupByKey()
.map(lambda rec: (rec[0], list(rec[1])))
.cache())
print 'Found %d common tokens' % commonTokens.count()
# COMMAND ----------
# TEST Identify common tokens from the full dataset (4e)
Test.assertEquals(commonTokens.count(), 2441100, 'incorrect commonTokens.count()')
# COMMAND ----------
# PRIVATE_TEST Identify common tokens from the full dataset (4e)
Test.assertEquals(commonTokens.count(), 2441100, 'incorrect commonTokens.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(4f) Compute cosine similarities using the common tokens**
# MAGIC
# MAGIC Use the data structures from parts **(4a)** and **(4e)** to build a dictionary to map record pairs to cosine similarity scores.
# MAGIC The steps you should perform are:
# MAGIC * Create two broadcast dictionaries from the amazonWeights and googleWeights RDDs
# MAGIC * Create a `fastCosineSimilarity` function that takes in a record consisting of the pair ((Amazon ID, Google URL), token list) and computes the sum, over the tokens in the token list, of the product of the token's Amazon weight and its Google weight. The sum should then be divided by the norm for the Google URL and then by the norm for the Amazon ID. The function should return this value in a pair whose key is the (Amazon ID, Google URL). *Make sure you use the broadcast variables you created for both the weights and norms*
# MAGIC * Apply your `fastCosinesSimilarity` function to the common tokens from the full dataset
# COMMAND ----------
# ANSWER
amazonWeightsBroadcast = sc.broadcast(amazonWeightsRDD.collectAsMap())
googleWeightsBroadcast = sc.broadcast(googleWeightsRDD.collectAsMap())
def fastCosineSimilarity(record):
""" Compute Cosine Similarity using Broadcast variables
Args:
        record: a pair, ((Amazon ID, Google URL), token list)
Returns:
pair: ((ID, URL), cosine similarity value)
"""
amazonRec = record[0][0]
googleRec = record[0][1]
tokens = record[1]
s = sum([amazonWeightsBroadcast.value[amazonRec][t] * googleWeightsBroadcast.value[googleRec][t]
for t in tokens])
value = s / googleNormsBroadcast.value[googleRec] / amazonNormsBroadcast.value[amazonRec]
key = (amazonRec, googleRec)
return (key, value)
similaritiesFullRDD = (commonTokens
.map(fastCosineSimilarity)
.cache())
print similaritiesFullRDD.count()
# COMMAND ----------
# TEST Compute cosine similarities using the common tokens (4f)
similarityTest = similaritiesFullRDD.filter(lambda ((aID, gURL), cs): aID == 'b00005lzly' and gURL == 'http://www.google.com/base/feeds/snippets/13823221823254120257').collect()
Test.assertEquals(len(similarityTest), 1, 'incorrect len(similarityTest)')
Test.assertTrue(abs(similarityTest[0][1] - 4.286548414e-06) < 0.000000000001, 'incorrect similarityTest fastCosineSimilarity')
Test.assertEquals(similaritiesFullRDD.count(), 2441100, 'incorrect similaritiesFullRDD.count()')
# COMMAND ----------
# PRIVATE_TEST Compute cosine similarities using the common tokens (4f)
similarityTest = similaritiesFullRDD.filter(lambda ((aID, gURL), cs): aID == 'b00005lzly' and gURL == 'http://www.google.com/base/feeds/snippets/13823221823254120257').collect()
Test.assertEquals(len(similarityTest), 1, 'incorrect len(similarityTest)')
Test.assertTrue(abs(similarityTest[0][1] - 4.286548414e-06) < 0.000000000001, 'incorrect similarityTest fastCosineSimilarity')
Test.assertEquals(similaritiesFullRDD.count(), 2441100, 'incorrect similaritiesFullRDD.count()')
# COMMAND ----------
# MAGIC %md
# MAGIC #### **Part 5: Analysis**
# MAGIC
# MAGIC Now we have an authoritative list of record-pair similarities, but we need a way to use those similarities to decide if two records are duplicates or not. The simplest approach is to pick a **threshold**. Pairs whose similarity is above the threshold are declared duplicates, and pairs below the threshold are declared distinct.
# MAGIC
# MAGIC To decide where to set the threshold we need to understand what kind of errors result at different levels. If we set the threshold too low, we get more **false positives**, that is, record-pairs we say are duplicates that in reality are not. If we set the threshold too high, we get more **false negatives**, that is, record-pairs that really are duplicates but that we miss.
# MAGIC
# MAGIC ER algorithms are evaluated by the common metrics of information retrieval and search called **precision** and **recall**. Precision asks of all the record-pairs marked duplicates, what fraction are true duplicates? Recall asks of all the true duplicates in the data, what fraction did we successfully find? As with false positives and false negatives, there is a trade-off between precision and recall. A third metric, called **F-measure**, takes the harmonic mean of precision and recall to measure overall goodness in a single value:
# MAGIC \\[ Fmeasure = 2 \frac{precision * recall}{precision + recall} \\]
# MAGIC
# MAGIC > **Note**: In this part, we use the "gold standard" mapping from the included file to look up true duplicates, and the results of Part 4.
# MAGIC
# MAGIC > **Note**: In this part, you will not be writing any code. We've written all of the code for you. Run each cell and then answer the quiz questions on Studio.
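# MAGIC
# MAGIC A small hand-worked example of these metrics follows in the next cell.
# COMMAND ----------
# Illustrative sketch (not part of the graded exercise): precision, recall and F-measure
# computed by hand for made-up confusion counts at some threshold, to make the formulas
# above concrete. The counts are not the lab's actual numbers.
exampleTP, exampleFP, exampleFN = 90.0, 10.0, 60.0
examplePrecision = exampleTP / (exampleTP + exampleFP)            # 0.9
exampleRecall = exampleTP / (exampleTP + exampleFN)               # 0.6
exampleF = 2 * examplePrecision * exampleRecall / (examplePrecision + exampleRecall)
print 'precision = %s, recall = %s, F-measure = %s' % (examplePrecision, exampleRecall, exampleF)
# precision = 0.9, recall = 0.6, F-measure = 0.72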
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(5a) Counting True Positives, False Positives, and False Negatives**
# MAGIC
# MAGIC We need functions that count True Positives (true duplicates above the threshold), and False Positives and False Negatives:
# MAGIC * We start with creating the `simsFullRDD` from our `similaritiesFullRDD` that consists of a pair of ((Amazon ID, Google URL), simlarity score)
# MAGIC * From this RDD, we create an RDD consisting of only the similarity scores
# MAGIC * To look up the similarity scores for true duplicates, we perform a left outer join using the `goldStandard` RDD and `simsFullRDD`, and extract the similarity scores using a helper function that returns 0 for gold-standard pairs that are missing from `simsFullRDD`
# COMMAND ----------
# Create an RDD of ((Amazon ID, Google URL), similarity score)
simsFullRDD = similaritiesFullRDD.map(lambda x: ("%s %s" % (x[0][0], x[0][1]), x[1]))
assert (simsFullRDD.count() == 2441100)
# Create an RDD of just the similarity scores
simsFullValuesRDD = (simsFullRDD
.map(lambda x: x[1])
.cache())
assert (simsFullValuesRDD.count() == 2441100)
# Look up all similarity scores for true duplicates
# This helper function will return the similarity score for records that are in the gold standard and the simsFullRDD (True positives), and will return 0 for records that are in the gold standard but not in simsFullRDD (False Negatives).
def gs_value(record):
if (record[1][1] is None):
return 0
else:
return record[1][1]
# Join the gold standard and simsFullRDD, and then extract the similarity scores using the helper function
trueDupSimsRDD = (goldStandard
.leftOuterJoin(simsFullRDD)
.map(gs_value)
.cache())
print 'There are %s true duplicates.' % trueDupSimsRDD.count()
assert(trueDupSimsRDD.count() == 1300)
# COMMAND ----------
# MAGIC %md
# MAGIC The next step is to pick a threshold between 0 and 1 and count the True Positives (true duplicates whose similarity is above the threshold). However, we would like to explore many different thresholds.
# MAGIC
# MAGIC To do this, we divide the space of thresholds into 100 bins, and take the following actions:
# MAGIC * We use Spark Accumulators to implement our counting function. We define a custom accumulator type, `VectorAccumulatorParam`, along with functions to initialize the accumulator's vector to zero, and to add two vectors. Note that we have to use the += operator because you can only add to an accumulator.
# MAGIC * We create a helper function to create a list with one entry (bit) set to a value and all others set to 0.
# MAGIC * We create 101 bins for the 100 threshold values between 0 and 1.
# MAGIC * Now, for each similarity score, we can count possible false positives. We do this by incrementing the appropriate bin of the vector for each similarity score. Then we remove the true positives from the vector by using the gold standard data.
# MAGIC * We define functions for computing the false positives, false negatives, and true positives for a given threshold.
# COMMAND ----------
from pyspark.accumulators import AccumulatorParam
class VectorAccumulatorParam(AccumulatorParam):
# Initialize the VectorAccumulator to 0
def zero(self, value):
return [0] * len(value)
# Add two VectorAccumulator variables
def addInPlace(self, val1, val2):
for i in xrange(len(val1)):
val1[i] += val2[i]
return val1
# Return a list with entry x set to value and all other entries set to 0
def set_bit(x, value, length):
bits = []
for y in xrange(length):
if (x == y):
bits.append(value)
else:
bits.append(0)
return bits
# Pre-bin counts of false positives for different threshold ranges
BINS = 101
nthresholds = 100
def bin(similarity):
return int(similarity * nthresholds)
# fpCounts[i] = number of entries (possible false positives) where bin(similarity) == i
zeros = [0] * BINS
fpCounts = sc.accumulator(zeros, VectorAccumulatorParam())
def add_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, 1, BINS)
simsFullValuesRDD.foreach(add_element)
# Remove true positives from FP counts
def sub_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, -1, BINS)
trueDupSimsRDD.foreach(sub_element)
def falsepos(threshold):
fpList = fpCounts.value
return sum([fpList[b] for b in range(0, BINS) if float(b) / nthresholds >= threshold])
def falseneg(threshold):
return trueDupSimsRDD.filter(lambda x: x < threshold).count()
def truepos(threshold):
    # Note: relies on falsenegDict, which is computed in part (5c) before truepos() is first called
    return trueDupSimsRDD.count() - falsenegDict[threshold]
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(5b) Precision, Recall, and F-measures**
# MAGIC We define functions so that we can compute the [Precision](https://en.wikipedia.org/wiki/Precision_and_recall), [Recall](https://en.wikipedia.org/wiki/Precision_and_recall), and [F-measure](https://en.wikipedia.org/wiki/Precision_and_recall#F-measure) as a function of threshold value:
# MAGIC * Precision = true-positives / (true-positives + false-positives)
# MAGIC * Recall = true-positives / (true-positives + false-negatives)
# MAGIC * F-measure = 2 x Recall x Precision / (Recall + Precision)
# COMMAND ----------
# Precision = true-positives / (true-positives + false-positives)
# Recall = true-positives / (true-positives + false-negatives)
# F-measure = 2 x Recall x Precision / (Recall + Precision)
def precision(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falseposDict[threshold])
def recall(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falsenegDict[threshold])
def fmeasure(threshold):
r = recall(threshold)
p = precision(threshold)
return 2 * r * p / (r + p)
# COMMAND ----------
# MAGIC %md
# MAGIC #### **(5c) Line Plots**
# MAGIC We can make line plots of precision, recall, and F-measure as a function of threshold value, for thresholds between 0.0 and 1.0. You can change `nthresholds` (above in part **(5a)**) to change the threshold values to plot.
# COMMAND ----------
thresholds = [float(n) / nthresholds for n in range(0, nthresholds)]
falseposDict = dict([(t, falsepos(t)) for t in thresholds])
falsenegDict = dict([(t, falseneg(t)) for t in thresholds])
trueposDict = dict([(t, truepos(t)) for t in thresholds])
precisions = [precision(t) for t in thresholds]
recalls = [recall(t) for t in thresholds]
fmeasures = [fmeasure(t) for t in thresholds]
print precisions[0], fmeasures[0]
assert (abs(precisions[0] - 0.000532546802671) < 0.0000001)
assert (abs(fmeasures[0] - 0.00106452669505) < 0.0000001)
fig = plt.figure()
plt.plot(thresholds, precisions)
plt.plot(thresholds, recalls)
plt.plot(thresholds, fmeasures)
plt.legend(['Precision', 'Recall', 'F-measure'])
display(fig)
pass
# COMMAND ----------
# Create a DataFrame and visualize using display()
graph = [(t, precision(t), recall(t),fmeasure(t)) for t in thresholds]
graphRDD = sc.parallelize(graph)
graphRow = graphRDD.map(lambda (t, x, y, z): Row(threshold=t, precision=x, recall=y, fmeasure=z))
graphDF = sqlContext.createDataFrame(graphRow)
display(graphDF)
# COMMAND ----------
# MAGIC %md
# MAGIC #### Discussion
# MAGIC
# MAGIC State-of-the-art tools can get an F-measure of about 60% on this dataset. In this lab exercise, our best F-measure is closer to 40%. Look at some examples of errors (both False Positives and False Negatives) and think about what went wrong.
# MAGIC
# MAGIC #### There are several ways we might improve our simple classifier, including:
# MAGIC * Using additional attributes
# MAGIC * Performing better featurization of our textual data (e.g., stemming, n-grams, etc.)
# MAGIC * Using different similarity functions | unlicense |
hofschroeer/gnuradio | gr-filter/examples/resampler.py | 7 | 4489 | #!/usr/bin/env python
#
# Copyright 2009,2012,2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
from gnuradio import gr
from gnuradio import filter
from gnuradio import blocks
import sys
import numpy
try:
from gnuradio import analog
except ImportError:
sys.stderr.write("Error: Program requires gr-analog.\n")
sys.exit(1)
try:
from matplotlib import pyplot
except ImportError:
sys.stderr.write("Error: Program requires matplotlib (see: matplotlib.sourceforge.net).\n")
sys.exit(1)
class mytb(gr.top_block):
def __init__(self, fs_in, fs_out, fc, N=10000):
gr.top_block.__init__(self)
rerate = float(fs_out) / float(fs_in)
print("Resampling from %f to %f by %f " %(fs_in, fs_out, rerate))
# Creating our own taps
taps = filter.firdes.low_pass_2(32, 32, 0.25, 0.1, 80)
self.src = analog.sig_source_c(fs_in, analog.GR_SIN_WAVE, fc, 1)
#self.src = analog.noise_source_c(analog.GR_GAUSSIAN, 1)
self.head = blocks.head(gr.sizeof_gr_complex, N)
# A resampler with our taps
self.resamp_0 = filter.pfb.arb_resampler_ccf(rerate, taps,
flt_size=32)
# A resampler that just needs a resampling rate.
# Filter is created for us and designed to cover
# entire bandwidth of the input signal.
# An optional atten=XX rate can be used here to
# specify the out-of-band rejection (default=80).
self.resamp_1 = filter.pfb.arb_resampler_ccf(rerate)
self.snk_in = blocks.vector_sink_c()
self.snk_0 = blocks.vector_sink_c()
self.snk_1 = blocks.vector_sink_c()
self.connect(self.src, self.head, self.snk_in)
self.connect(self.head, self.resamp_0, self.snk_0)
self.connect(self.head, self.resamp_1, self.snk_1)
def main():
fs_in = 8000
fs_out = 20000
fc = 1000
N = 10000
tb = mytb(fs_in, fs_out, fc, N)
tb.run()
# Plot PSD of signals
nfftsize = 2048
fig1 = pyplot.figure(1, figsize=(10,10), facecolor="w")
sp1 = fig1.add_subplot(2,1,1)
sp1.psd(tb.snk_in.data(), NFFT=nfftsize,
            noverlap=nfftsize // 4, Fs = fs_in)
sp1.set_title(("Input Signal at f_s=%.2f kHz" % (fs_in / 1000.0)))
sp1.set_xlim([-fs_in / 2, fs_in / 2])
sp2 = fig1.add_subplot(2,1,2)
sp2.psd(tb.snk_0.data(), NFFT=nfftsize,
            noverlap=nfftsize // 4, Fs = fs_out,
label="With our filter")
sp2.psd(tb.snk_1.data(), NFFT=nfftsize,
            noverlap=nfftsize // 4, Fs = fs_out,
label="With auto-generated filter")
sp2.set_title(("Output Signals at f_s=%.2f kHz" % (fs_out / 1000.0)))
sp2.set_xlim([-fs_out / 2, fs_out / 2])
sp2.legend()
# Plot signals in time
Ts_in = 1.0 / fs_in
Ts_out = 1.0 / fs_out
t_in = numpy.arange(0, len(tb.snk_in.data())*Ts_in, Ts_in)
t_out = numpy.arange(0, len(tb.snk_0.data())*Ts_out, Ts_out)
fig2 = pyplot.figure(2, figsize=(10,10), facecolor="w")
sp21 = fig2.add_subplot(2,1,1)
sp21.plot(t_in, tb.snk_in.data())
sp21.set_title(("Input Signal at f_s=%.2f kHz" % (fs_in / 1000.0)))
sp21.set_xlim([t_in[100], t_in[200]])
sp22 = fig2.add_subplot(2,1,2)
sp22.plot(t_out, tb.snk_0.data(),
label="With our filter")
sp22.plot(t_out, tb.snk_1.data(),
label="With auto-generated filter")
sp22.set_title(("Output Signals at f_s=%.2f kHz" % (fs_out / 1000.0)))
r = float(fs_out) / float(fs_in)
    sp22.set_xlim([t_out[int(r * 100)], t_out[int(r * 200)]])
sp22.legend()
pyplot.show()
if __name__ == "__main__":
main()
| gpl-3.0 |
mgymrek/lobstr-code | scripts/lobSTR_capillary_comparator.py | 1 | 6675 | #!/usr/bin/env python
"""
Compare capillary vs. lobSTR calls
This script is part of lobSTR_validation_suite.sh and
is not meant to be called directly.
"""
import argparse
import numpy as np
import pandas as pd
import sys
from scipy.stats import pearsonr
def ConvertSample(x):
"""
Convert HGDP samples numbers to standard format
HGDPXXXXX
"""
num = x.split("_")[1]
zeros = 5-len(num)
return "HGDP"+"0"*zeros + num
def LoadCapillaryFromStru(capfile, convfile):
"""
Input:
capfile: filename for .stru file
convfile: filename giving illumina sample id->HGDP id
Output: data frame with capillary calls
Construct data frame with:
marker
sample
allele1.cap
allele2.cap
Use sample names converted to Illumina format
Ignore alleles that are -9,-9
"""
# Load conversions
conv = pd.read_csv(convfile, sep="\t")
    # Index columns explicitly; "sample" would otherwise collide with the DataFrame.sample method
    converter = dict(zip(conv["hgdp"], conv["sample"]))
# Load genotypes
markers = []
samples = []
allele1s = []
allele2s = []
f = open(capfile, "r")
marker_names = f.readline().strip().split()
line = f.readline()
while line != "":
items = line.strip().split()
ident = "HGDP_%s"%items[0]
pop_code = items[1]
pop_name = items[2]
geo = items[3]
geo2 = items[4]
alleles1 = items[5:]
line = f.readline() # get second allele for the individual
items = line.strip().split()
ident2 = "HGDP_%s"%items[0]
if ident != ident2:
sys.stderr.write("ERROR parsing .stru file for individual %s\n"%items[0])
sys.exit(1)
alleles2 = items[5:]
sample = converter.get(ConvertSample(ident), "NA")
if sample != "NA":
for i in range(len(alleles1)):
if str(alleles1[i]) != "-9" and str(alleles2[i]) != "-9":
markers.append(marker_names[i])
samples.append(sample)
allele1s.append(int(alleles1[i]))
allele2s.append(int(alleles2[i]))
line = f.readline()
return pd.DataFrame({"marker": markers, "sample": samples, \
"allele1.cap": allele1s, "allele2.cap": allele2s})
def GetAllele(x, allele_num):
if allele_num == 1:
al = x["allele1.cap"]
else: al = x["allele2.cap"]
raw_allele = ((al-x["effective_product_size"])/x["period"])*x["period"]
corr_allele = raw_allele - x["correction"]
return corr_allele
def GetDosage(a1, a2):
if str(a1) == "." or str(a2) == ".": return "NA"
else: return (float(a1)+float(a2))*0.5
if __name__ == "__main__":
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--lobSTR", help="Tab file with lobSTR calls created by lobSTR_vcf_to_tab.py", type=str, required=True)
parser.add_argument("--cap", help=".stru file with capillary calls", type=str, required=True)
parser.add_argument("--corrections", help="Tab file with Marshfield marker corrections", type=str, required=True)
parser.add_argument("--sample-conversions", help="Tab file with conversion between sample ids", type=str, required=True)
parser.add_argument("--output-stats", help="Output sample, call, and locus level stats to files with this prefix", type=str, required=False)
args = parser.parse_args()
LOBFILE = args.lobSTR
CAPFILE = args.cap
CORRFILE = args.corrections
CONVFILE = args.sample_conversions
# Load lobSTR calls and corrections
lob = pd.read_csv(LOBFILE, sep="\t")
corr = pd.read_csv(CORRFILE, sep="\t")
# Load capillary calls to data frame
cap = LoadCapillaryFromStru(CAPFILE, CONVFILE)
    # Merge datasets
res = pd.merge(lob, corr, on=["chrom", "start"])
res = pd.merge(res, cap, on=["marker", "sample"])
res["allele1.cap.corr"] = res.apply(lambda x: GetAllele(x, 1), 1)
res["allele2.cap.corr"] = res.apply(lambda x: GetAllele(x, 2), 1)
res["correct"] = res.apply(lambda x: str(x["allele1"])==str(x["allele1.cap.corr"]) and \
str(x["allele2"])==str(x["allele2.cap.corr"]), 1)
res["dosage_lob"] = res.apply(lambda x: GetDosage(x["allele1"], x["allele2"]), 1)
res["dosage_cap"] = res.apply(lambda x: GetDosage(x["allele1.cap.corr"], x["allele2.cap.corr"]), 1)
##### Stats #####
if args.output_stats:
# Call level stats
res.to_csv(args.output_stats+".calllevel.tab", index=False, sep="\t")
# Sample level stats
sample_level = res.groupby("sample", as_index=False).agg({"DP": np.mean,
"Q": np.mean,
"start": len,
"correct": np.mean})
sample_level.to_csv(args.output_stats+".samplelevel.tab", index=False, sep="\t")
# Locus level stats
res["length"] = res["end_x"]-res["start"]+1
locus_level = res.groupby(["chrom","start"], as_index=False).agg({"length": np.mean,
"DP": np.mean,
"Q": np.mean,
"GT": len,
"SB": np.mean,
"DISTENDS": np.mean,
"correct": np.mean})
locus_level.to_csv(args.output_stats+".locuslevel.tab", index=False, sep="\t")
##### Results #####
sys.stdout.write("########## Results ########\n")
# Get stats about calls
num_samples = len(set(res[res["allele1"].apply(str)!="."]["sample"]))
num_markers = len(set(res[res["allele1"].apply(str)!="."]["marker"]))
num_nocalls = res[res["allele1"].apply(str)=="."].shape[0]
sys.stdout.write("# Samples: %s\n"%num_samples)
sys.stdout.write("# Markers: %s\n"%num_markers)
sys.stdout.write("# No call rate: %s\n"%(num_nocalls*1.0/res.shape[0]))
sys.stdout.write("# Number of calls compared: %s\n"%res.shape[0])
# Accuracy
acc = np.mean(res[res["allele1"].apply(str)!="."]["correct"])
sys.stdout.write("# Accuracy: %s\n"%acc)
# R2
dl = map(float, list(res[res["allele1"].apply(str)!="."]["dosage_lob"]))
dc = map(float, list(res[res["allele1"].apply(str)!="."]["dosage_cap"]))
r2 = pearsonr(dl, dc)[0]**2
sys.stdout.write("# R2: %s\n"%r2)
| gpl-3.0 |
louisLouL/pair_trading | capstone_env/lib/python3.6/site-packages/pandas/tests/tools/test_numeric.py | 6 | 14437 | import pytest
import decimal
import numpy as np
import pandas as pd
from pandas import to_numeric, _np_version_under1p9
from pandas.util import testing as tm
from numpy import iinfo
class TestToNumeric(object):
def test_empty(self):
# see gh-16302
s = pd.Series([], dtype=object)
res = to_numeric(s)
expected = pd.Series([], dtype=np.int64)
tm.assert_series_equal(res, expected)
# Original issue example
res = to_numeric(s, errors='coerce', downcast='integer')
expected = pd.Series([], dtype=np.int8)
tm.assert_series_equal(res, expected)
def test_series(self):
s = pd.Series(['1', '-3.14', '7'])
res = to_numeric(s)
expected = pd.Series([1, -3.14, 7])
tm.assert_series_equal(res, expected)
s = pd.Series(['1', '-3.14', 7])
res = to_numeric(s)
tm.assert_series_equal(res, expected)
def test_series_numeric(self):
s = pd.Series([1, 3, 4, 5], index=list('ABCD'), name='XXX')
res = to_numeric(s)
tm.assert_series_equal(res, s)
s = pd.Series([1., 3., 4., 5.], index=list('ABCD'), name='XXX')
res = to_numeric(s)
tm.assert_series_equal(res, s)
# bool is regarded as numeric
s = pd.Series([True, False, True, True],
index=list('ABCD'), name='XXX')
res = to_numeric(s)
tm.assert_series_equal(res, s)
def test_error(self):
s = pd.Series([1, -3.14, 'apple'])
msg = 'Unable to parse string "apple" at position 2'
with tm.assert_raises_regex(ValueError, msg):
to_numeric(s, errors='raise')
res = to_numeric(s, errors='ignore')
expected = pd.Series([1, -3.14, 'apple'])
tm.assert_series_equal(res, expected)
res = to_numeric(s, errors='coerce')
expected = pd.Series([1, -3.14, np.nan])
tm.assert_series_equal(res, expected)
s = pd.Series(['orange', 1, -3.14, 'apple'])
msg = 'Unable to parse string "orange" at position 0'
with tm.assert_raises_regex(ValueError, msg):
to_numeric(s, errors='raise')
def test_error_seen_bool(self):
s = pd.Series([True, False, 'apple'])
msg = 'Unable to parse string "apple" at position 2'
with tm.assert_raises_regex(ValueError, msg):
to_numeric(s, errors='raise')
res = to_numeric(s, errors='ignore')
expected = pd.Series([True, False, 'apple'])
tm.assert_series_equal(res, expected)
# coerces to float
res = to_numeric(s, errors='coerce')
expected = pd.Series([1., 0., np.nan])
tm.assert_series_equal(res, expected)
def test_list(self):
s = ['1', '-3.14', '7']
res = to_numeric(s)
expected = np.array([1, -3.14, 7])
tm.assert_numpy_array_equal(res, expected)
def test_list_numeric(self):
s = [1, 3, 4, 5]
res = to_numeric(s)
tm.assert_numpy_array_equal(res, np.array(s, dtype=np.int64))
s = [1., 3., 4., 5.]
res = to_numeric(s)
tm.assert_numpy_array_equal(res, np.array(s))
# bool is regarded as numeric
s = [True, False, True, True]
res = to_numeric(s)
tm.assert_numpy_array_equal(res, np.array(s))
def test_numeric(self):
s = pd.Series([1, -3.14, 7], dtype='O')
res = to_numeric(s)
expected = pd.Series([1, -3.14, 7])
tm.assert_series_equal(res, expected)
s = pd.Series([1, -3.14, 7])
res = to_numeric(s)
tm.assert_series_equal(res, expected)
# GH 14827
df = pd.DataFrame(dict(
a=[1.2, decimal.Decimal(3.14), decimal.Decimal("infinity"), '0.1'],
b=[1.0, 2.0, 3.0, 4.0],
))
expected = pd.DataFrame(dict(
a=[1.2, 3.14, np.inf, 0.1],
b=[1.0, 2.0, 3.0, 4.0],
))
# Test to_numeric over one column
df_copy = df.copy()
df_copy['a'] = df_copy['a'].apply(to_numeric)
tm.assert_frame_equal(df_copy, expected)
# Test to_numeric over multiple columns
df_copy = df.copy()
df_copy[['a', 'b']] = df_copy[['a', 'b']].apply(to_numeric)
tm.assert_frame_equal(df_copy, expected)
def test_numeric_lists_and_arrays(self):
# Test to_numeric with embedded lists and arrays
df = pd.DataFrame(dict(
a=[[decimal.Decimal(3.14), 1.0], decimal.Decimal(1.6), 0.1]
))
df['a'] = df['a'].apply(to_numeric)
expected = pd.DataFrame(dict(
a=[[3.14, 1.0], 1.6, 0.1],
))
tm.assert_frame_equal(df, expected)
df = pd.DataFrame(dict(
a=[np.array([decimal.Decimal(3.14), 1.0]), 0.1]
))
df['a'] = df['a'].apply(to_numeric)
expected = pd.DataFrame(dict(
a=[[3.14, 1.0], 0.1],
))
tm.assert_frame_equal(df, expected)
def test_all_nan(self):
s = pd.Series(['a', 'b', 'c'])
res = to_numeric(s, errors='coerce')
expected = pd.Series([np.nan, np.nan, np.nan])
tm.assert_series_equal(res, expected)
def test_type_check(self):
# GH 11776
df = pd.DataFrame({'a': [1, -3.14, 7], 'b': ['4', '5', '6']})
with tm.assert_raises_regex(TypeError, "1-d array"):
to_numeric(df)
for errors in ['ignore', 'raise', 'coerce']:
with tm.assert_raises_regex(TypeError, "1-d array"):
to_numeric(df, errors=errors)
def test_scalar(self):
assert pd.to_numeric(1) == 1
assert pd.to_numeric(1.1) == 1.1
assert pd.to_numeric('1') == 1
assert pd.to_numeric('1.1') == 1.1
with pytest.raises(ValueError):
to_numeric('XX', errors='raise')
assert to_numeric('XX', errors='ignore') == 'XX'
assert np.isnan(to_numeric('XX', errors='coerce'))
def test_numeric_dtypes(self):
idx = pd.Index([1, 2, 3], name='xxx')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, idx)
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(idx, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, idx.values)
idx = pd.Index([1., np.nan, 3., np.nan], name='xxx')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, idx)
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(idx, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, idx.values)
def test_str(self):
idx = pd.Index(['1', '2', '3'], name='xxx')
exp = np.array([1, 2, 3], dtype='int64')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, pd.Index(exp, name='xxx'))
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(exp, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, exp)
idx = pd.Index(['1.5', '2.7', '3.4'], name='xxx')
exp = np.array([1.5, 2.7, 3.4])
res = pd.to_numeric(idx)
tm.assert_index_equal(res, pd.Index(exp, name='xxx'))
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(exp, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, exp)
def test_datetimelike(self):
for tz in [None, 'US/Eastern', 'Asia/Tokyo']:
idx = pd.date_range('20130101', periods=3, tz=tz, name='xxx')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, pd.Index(idx.asi8, name='xxx'))
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, idx.asi8)
def test_timedelta(self):
idx = pd.timedelta_range('1 days', periods=3, freq='D', name='xxx')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, pd.Index(idx.asi8, name='xxx'))
res = pd.to_numeric(pd.Series(idx, name='xxx'))
tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
res = pd.to_numeric(idx.values)
tm.assert_numpy_array_equal(res, idx.asi8)
def test_period(self):
idx = pd.period_range('2011-01', periods=3, freq='M', name='xxx')
res = pd.to_numeric(idx)
tm.assert_index_equal(res, pd.Index(idx.asi8, name='xxx'))
# ToDo: enable when we can support native PeriodDtype
# res = pd.to_numeric(pd.Series(idx, name='xxx'))
# tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
def test_non_hashable(self):
# Test for Bug #13324
s = pd.Series([[10.0, 2], 1.0, 'apple'])
res = pd.to_numeric(s, errors='coerce')
tm.assert_series_equal(res, pd.Series([np.nan, 1.0, np.nan]))
res = pd.to_numeric(s, errors='ignore')
tm.assert_series_equal(res, pd.Series([[10.0, 2], 1.0, 'apple']))
with tm.assert_raises_regex(TypeError, "Invalid object type"):
pd.to_numeric(s)
def test_downcast(self):
# see gh-13352
mixed_data = ['1', 2, 3]
int_data = [1, 2, 3]
date_data = np.array(['1970-01-02', '1970-01-03',
'1970-01-04'], dtype='datetime64[D]')
invalid_downcast = 'unsigned-integer'
msg = 'invalid downcasting method provided'
smallest_int_dtype = np.dtype(np.typecodes['Integer'][0])
smallest_uint_dtype = np.dtype(np.typecodes['UnsignedInteger'][0])
# support below np.float32 is rare and far between
float_32_char = np.dtype(np.float32).char
smallest_float_dtype = float_32_char
for data in (mixed_data, int_data, date_data):
with tm.assert_raises_regex(ValueError, msg):
pd.to_numeric(data, downcast=invalid_downcast)
expected = np.array([1, 2, 3], dtype=np.int64)
res = pd.to_numeric(data)
tm.assert_numpy_array_equal(res, expected)
res = pd.to_numeric(data, downcast=None)
tm.assert_numpy_array_equal(res, expected)
expected = np.array([1, 2, 3], dtype=smallest_int_dtype)
for signed_downcast in ('integer', 'signed'):
res = pd.to_numeric(data, downcast=signed_downcast)
tm.assert_numpy_array_equal(res, expected)
expected = np.array([1, 2, 3], dtype=smallest_uint_dtype)
res = pd.to_numeric(data, downcast='unsigned')
tm.assert_numpy_array_equal(res, expected)
expected = np.array([1, 2, 3], dtype=smallest_float_dtype)
res = pd.to_numeric(data, downcast='float')
tm.assert_numpy_array_equal(res, expected)
# if we can't successfully cast the given
# data to a numeric dtype, do not bother
# with the downcast parameter
data = ['foo', 2, 3]
expected = np.array(data, dtype=object)
res = pd.to_numeric(data, errors='ignore',
downcast='unsigned')
tm.assert_numpy_array_equal(res, expected)
# cannot cast to an unsigned integer because
# we have a negative number
data = ['-1', 2, 3]
expected = np.array([-1, 2, 3], dtype=np.int64)
res = pd.to_numeric(data, downcast='unsigned')
tm.assert_numpy_array_equal(res, expected)
# cannot cast to an integer (signed or unsigned)
# because we have a float number
data = (['1.1', 2, 3],
[10000.0, 20000, 3000, 40000.36, 50000, 50000.00])
expected = (np.array([1.1, 2, 3], dtype=np.float64),
np.array([10000.0, 20000, 3000,
40000.36, 50000, 50000.00], dtype=np.float64))
for _data, _expected in zip(data, expected):
for downcast in ('integer', 'signed', 'unsigned'):
res = pd.to_numeric(_data, downcast=downcast)
tm.assert_numpy_array_equal(res, _expected)
# the smallest integer dtype need not be np.(u)int8
data = ['256', 257, 258]
for downcast, expected_dtype in zip(
['integer', 'signed', 'unsigned'],
[np.int16, np.int16, np.uint16]):
expected = np.array([256, 257, 258], dtype=expected_dtype)
res = pd.to_numeric(data, downcast=downcast)
tm.assert_numpy_array_equal(res, expected)
def test_downcast_limits(self):
# Test the limits of each downcast. Bug: #14401.
# Check to make sure numpy is new enough to run this test.
if _np_version_under1p9:
pytest.skip("Numpy version is under 1.9")
i = 'integer'
u = 'unsigned'
dtype_downcast_min_max = [
('int8', i, [iinfo(np.int8).min, iinfo(np.int8).max]),
('int16', i, [iinfo(np.int16).min, iinfo(np.int16).max]),
('int32', i, [iinfo(np.int32).min, iinfo(np.int32).max]),
('int64', i, [iinfo(np.int64).min, iinfo(np.int64).max]),
('uint8', u, [iinfo(np.uint8).min, iinfo(np.uint8).max]),
('uint16', u, [iinfo(np.uint16).min, iinfo(np.uint16).max]),
('uint32', u, [iinfo(np.uint32).min, iinfo(np.uint32).max]),
('uint64', u, [iinfo(np.uint64).min, iinfo(np.uint64).max]),
('int16', i, [iinfo(np.int8).min, iinfo(np.int8).max + 1]),
('int32', i, [iinfo(np.int16).min, iinfo(np.int16).max + 1]),
('int64', i, [iinfo(np.int32).min, iinfo(np.int32).max + 1]),
('int16', i, [iinfo(np.int8).min - 1, iinfo(np.int16).max]),
('int32', i, [iinfo(np.int16).min - 1, iinfo(np.int32).max]),
('int64', i, [iinfo(np.int32).min - 1, iinfo(np.int64).max]),
('uint16', u, [iinfo(np.uint8).min, iinfo(np.uint8).max + 1]),
('uint32', u, [iinfo(np.uint16).min, iinfo(np.uint16).max + 1]),
('uint64', u, [iinfo(np.uint32).min, iinfo(np.uint32).max + 1])
]
for dtype, downcast, min_max in dtype_downcast_min_max:
series = pd.to_numeric(pd.Series(min_max), downcast=downcast)
assert series.dtype == dtype
| mit |
googleinterns/cabby | cabby/geo/visualize.py | 1 | 3273 | # coding=utf-8
# Copyright 2020 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Library to support geographical visualization.'''
import folium
import geopandas as gpd
import pandas as pd
import shapely.geometry as geom
from shapely.geometry import Polygon, Point, LineString
from typing import Tuple, Sequence, Optional, Dict, Text
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.getcwd() )))
from cabby.geo import util
from cabby.geo import walk
from cabby.geo import geo_item
def get_osm_map(entity: geo_item.GeoEntity) -> Sequence[folium.Map]:
'''Create the OSM maps.
Arguments:
    entity: the GeoEntity from which to create the OSM map.
  Returns:
    an OSM map (folium.Map) built from the entity's landmarks and route.
'''
mid_point = util.midpoint(
entity.geo_landmarks['end_point'].geometry,
entity.geo_landmarks['start_point'].geometry)
zoom_location = util.list_yx_from_point(mid_point)
# create a map
map_osm = folium.Map(location=zoom_location,
zoom_start=15, tiles='OpenStreetMap')
# draw the points
colors = [
'pink', 'black', 'white', 'yellow', 'red', 'green', 'blue', 'orange']
for landmark_type, landmark in entity.geo_landmarks.items():
if landmark.geometry is not None:
landmark_geom = util.list_yx_from_point(landmark.geometry)
folium.Marker(
landmark_geom,
popup=f'{landmark_type}: {landmark.main_tag}',
icon=folium.Icon(color=colors.pop(0))).add_to(map_osm)
lat_lng_list = []
for coord in entity.route.coords:
lat_lng_list.append([coord[1], coord[0]])
for index, coord_lat_lng in enumerate(lat_lng_list):
folium.Circle(location = coord_lat_lng,
radius = 5,
color='crimson',
).add_to(map_osm)
return map_osm
def get_maps_and_instructions(path: Text
) -> Sequence[Tuple[folium.Map, str]]:
'''Create the OSM maps and instructions.
Arguments:
    path: The path to the file from which the entities are loaded.
  Returns:
    A sequence of (OSM map, instruction) pairs, one per entity.
'''
map_osms_instructions = []
entities = walk.load_entities(path)
for entity in entities:
map_osm = get_osm_map(entity)
features_list = []
for feature_type, feature in entity.geo_features.items():
features_list.append(feature_type + ": " + str(feature))
landmark_list = []
for landmark_type, landmark in entity.geo_landmarks.items():
landmark_list.append(landmark_type + ": " + str(landmark.main_tag))
instruction = '; '.join(features_list) + '; '.join(landmark_list)
map_osms_instructions.append((map_osm, instruction))
return map_osms_instructions
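# A minimal usage sketch (hypothetical input path; folium maps can be saved as HTML files):
# for i, (map_osm, instruction) in enumerate(get_maps_and_instructions('path/to/entities_file')):
#   map_osm.save('map_{}.html'.format(i))
#   print(instruction)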
| apache-2.0 |
perryjohnson/biplaneblade | sandia_blade_lib/plot_MK.py | 1 | 6585 | """Plot the mass and stiffness data from Sandia and VABS.
First, data from mass and stiffness matrices for the Sandia blade are written
to the file 'sandia_blade/blade_props_from_VABS.csv'
Then, these data are plotted against published data from Griffith & Resor 2011.
Usage
-----
Open an IPython terminal and type:
|> %run plot_MK
Author: Perry Roth-Johnson
Last modified: April 22, 2014
"""
import lib.blade as bl
reload(bl)
import pandas as pd
import matplotlib.pyplot as plt
from numpy import average
from matplotlib import rc
rc('font', size=14.0)
def rel_diff(vabs_data, sandia_data):
"""Calculate the percent relative difference."""
return ((vabs_data-sandia_data)/average([vabs_data,sandia_data]))*100.0
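# For example, rel_diff(11.0, 9.0) returns 20.0: the two values differ by 20% of their average (10.0).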
def prep_rel_diff_plot(axis,ymin=-40,ymax=40):
"""Prepare a relative difference plot."""
axis2 = axis.twinx()
axis2.set_ylabel('difference from average [%]', color='m')
axis2.set_ylim([ymin,ymax])
for tl in axis2.get_yticklabels():
tl.set_color('m')
axis2.grid('on')
return axis2
# write all the mass and stiffness matrices from VABS to a csv file -----------
m = bl.MonoplaneBlade('Sandia blade SNL100-00', 'sandia_blade')
m.writecsv_mass_and_stiffness_props()
# plot VABS and Sandia datasets against one another ---------------------------
v=pd.DataFrame.from_csv('sandia_blade/blade_props_from_VABS.csv')
s=pd.DataFrame.from_csv('sandia_blade/blade_props_from_Sandia.csv')
plt.close('all')
# stiffness properties --------------------------------------------------------
f, axarr = plt.subplots(2,2, figsize=(10*1.5,6.5*1.5))
# ref for dual-axis plotting: http://matplotlib.org/examples/api/two_scales.html
# flapwise stiffness
twin_axis00 = prep_rel_diff_plot(axarr[0,0])
twin_axis00.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['K_55, EI_flap'],s['EI_flap']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr[0,0].plot(s['Blade Spanwise Coordinate'],s['EI_flap'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr[0,0].plot(v['Blade Spanwise Coordinate'],v['K_55, EI_flap'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr[0,0].set_xlabel('span [m]')
axarr[0,0].set_ylabel('flapwise stiffness [N*m^2]')
axarr[0,0].legend()
axarr[0,0].grid('on', axis='x')
# edgewise stiffness
twin_axis01 = prep_rel_diff_plot(axarr[0,1])
twin_axis01.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['K_66, EI_edge'],s['EI_edge']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr[0,1].plot(s['Blade Spanwise Coordinate'],s['EI_edge'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr[0,1].plot(v['Blade Spanwise Coordinate'],v['K_66, EI_edge'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr[0,1].set_xlabel('span [m]')
axarr[0,1].set_ylabel('edgewise stiffness [N*m^2]')
axarr[0,1].grid('on', axis='x')
axarr[0,1].legend()
# axial stiffness
twin_axis10 = prep_rel_diff_plot(axarr[1,0])
twin_axis10.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['K_11, EA_axial'],s['EA_axial']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr[1,0].plot(s['Blade Spanwise Coordinate'],s['EA_axial'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr[1,0].plot(v['Blade Spanwise Coordinate'],v['K_11, EA_axial'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr[1,0].set_xlabel('span [m]')
axarr[1,0].set_ylabel('axial stiffness [N]')
axarr[1,0].legend()
axarr[1,0].grid('on', axis='x')
# torsional stiffness
twin_axis11 = prep_rel_diff_plot(axarr[1,1])
twin_axis11.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['K_44, GJ_twist'],s['GJ_twist']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr[1,1].plot(s['Blade Spanwise Coordinate'],s['GJ_twist'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr[1,1].plot(v['Blade Spanwise Coordinate'],v['K_44, GJ_twist'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr[1,1].set_xlabel('span [m]')
axarr[1,1].set_ylabel('torsional stiffness [N*m^2]')
axarr[1,1].legend()
axarr[1,1].grid('on', axis='x')
plt.tight_layout()
plt.subplots_adjust(left=0.05, bottom=0.07, right=0.94, top=0.96, wspace=0.33, hspace=0.28)
plt.savefig('sandia_blade/Sandia_vs_VABS_stiffness_props.png')
plt.savefig('sandia_blade/Sandia_vs_VABS_stiffness_props.pdf')
# mass properties -------------------------------------------------------------
f2, axarr2 = plt.subplots(2,2, figsize=(10*1.5,6.5*1.5))
# mass density
twin_axis2_10 = prep_rel_diff_plot(axarr2[1,0])
twin_axis2_10.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['M_11, mu_mass'],s['mu_mass']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr2[1,0].plot(s['Blade Spanwise Coordinate'],s['mu_mass'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr2[1,0].plot(v['Blade Spanwise Coordinate'],v['M_11, mu_mass'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr2[1,0].set_xlabel('span [m]')
axarr2[1,0].set_ylabel('mass [kg/m]')
axarr2[1,0].legend()
axarr2[1,0].grid('on', axis='x')
# flapwise mass moment of inertia
twin_axis2_00 = prep_rel_diff_plot(axarr2[0,0])
twin_axis2_00.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['M_55, i22_flap'],s['i22_flap']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr2[0,0].plot(s['Blade Spanwise Coordinate'],s['i22_flap'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr2[0,0].plot(v['Blade Spanwise Coordinate'],v['M_55, i22_flap'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr2[0,0].set_xlabel('span [m]')
axarr2[0,0].set_ylabel('flapwise mass moment of inertia [kg*m]')
axarr2[0,0].legend()
axarr2[0,0].grid('on', axis='x')
# edgewise mass moment of inertia
twin_axis2_01 = prep_rel_diff_plot(axarr2[0,1])
twin_axis2_01.plot(
s['Blade Spanwise Coordinate'], rel_diff(v['M_66, i33_edge'],s['i33_edge']),
'm^:', mec='m', mfc='None', mew=1, label='difference')
axarr2[0,1].plot(s['Blade Spanwise Coordinate'],s['i33_edge'],'gx--',mfc='None',mew=1,label='Sandia (PreComp)')
axarr2[0,1].plot(v['Blade Spanwise Coordinate'],v['M_66, i33_edge'],'ko-',mfc='None',mew=1,label='UCLA (VABS)')
axarr2[0,1].set_xlabel('span [m]')
axarr2[0,1].set_ylabel('edgewise mass moment of inertia [kg*m]')
axarr2[0,1].legend()
axarr2[0,1].grid('on', axis='x')
plt.tight_layout()
plt.subplots_adjust(left=0.07, bottom=0.07, right=0.94, top=0.96, wspace=0.33, hspace=0.28)
plt.savefig('sandia_blade/Sandia_vs_VABS_mass_props.png')
plt.savefig('sandia_blade/Sandia_vs_VABS_mass_props.pdf')
plt.show()
| gpl-3.0 |
giorgiop/scikit-learn | examples/mixture/plot_gmm_sin.py | 103 | 6101 | """
=================================
Gaussian Mixture Model Sine Curve
=================================
This example demonstrates the behavior of Gaussian mixture models fit on data
that was not sampled from a mixture of Gaussian random variables. The dataset
is formed by 100 points loosely spaced following a noisy sine curve. There is
therefore no ground truth value for the number of Gaussian components.
The first model is a classical Gaussian Mixture Model with 10 components fit
with the Expectation-Maximization algorithm.
The second model is a Bayesian Gaussian Mixture Model with a Dirichlet process
prior fit with variational inference. The low value of the concentration prior
makes the model favor a lower number of active components. This models
"decides" to focus its modeling power on the big picture of the structure of
the dataset: groups of points with alternating directions modeled by
non-diagonal covariance matrices. Those alternating directions roughly capture
the alternating nature of the original sine signal.
The third model is also a Bayesian Gaussian mixture model with a Dirichlet
process prior but this time the value of the concentration prior is higher
giving the model more liberty to model the fine-grained structure of the data.
The result is a mixture with a larger number of active components that is
similar to the first model where we arbitrarily decided to fix the number of
components to 10.
Which model is the best is a matter of subjective judgement: do we want to
favor models that only capture the big picture to summarize and explain most of
the structure of the data while ignoring the details or do we prefer models
that closely follow the high density regions of the signal?
The last two panels show how we can sample from the last two models. The
resulting samples distributions do not look exactly like the original data
distribution. The difference primarily stems from the approximation error we
made by using a model that assumes that the data was generated by a finite
number of Gaussian components instead of a continuous noisy sine curve.
"""
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
print(__doc__)
color_iter = itertools.cycle(['navy', 'c', 'cornflowerblue', 'gold',
'darkorange'])
def plot_results(X, Y, means, covariances, index, title):
splot = plt.subplot(5, 1, 1 + index)
for i, (mean, covar, color) in enumerate(zip(
means, covariances, color_iter)):
v, w = linalg.eigh(covar)
v = 2. * np.sqrt(2.) * np.sqrt(v)
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y == i):
continue
plt.scatter(X[Y == i, 0], X[Y == i, 1], .8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180. * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180. + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xlim(-6., 4. * np.pi - 6.)
plt.ylim(-5., 5.)
plt.title(title)
plt.xticks(())
plt.yticks(())
def plot_samples(X, Y, n_components, index, title):
plt.subplot(5, 1, 4 + index)
for i, color in zip(range(n_components), color_iter):
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y == i):
continue
plt.scatter(X[Y == i, 0], X[Y == i, 1], .8, color=color)
plt.xlim(-6., 4. * np.pi - 6.)
plt.ylim(-5., 5.)
plt.title(title)
plt.xticks(())
plt.yticks(())
# Parameters
n_samples = 100
# Generate random sample following a sine curve
np.random.seed(0)
X = np.zeros((n_samples, 2))
step = 4. * np.pi / n_samples
for i in range(X.shape[0]):
x = i * step - 6.
X[i, 0] = x + np.random.normal(0, 0.1)
X[i, 1] = 3. * (np.sin(x) + np.random.normal(0, .2))
plt.figure(figsize=(10, 10))
plt.subplots_adjust(bottom=.04, top=0.95, hspace=.2, wspace=.05,
left=.03, right=.97)
# Fit a Gaussian mixture with EM using ten components
gmm = mixture.GaussianMixture(n_components=10, covariance_type='full',
max_iter=100).fit(X)
plot_results(X, gmm.predict(X), gmm.means_, gmm.covariances_, 0,
'Expectation-maximization')
dpgmm = mixture.BayesianGaussianMixture(
n_components=10, covariance_type='full', weight_concentration_prior=1e-2,
weight_concentration_prior_type='dirichlet_process',
mean_precision_prior=1e-2, covariance_prior=1e0 * np.eye(2),
init_params="random", max_iter=100, random_state=2).fit(X)
plot_results(X, dpgmm.predict(X), dpgmm.means_, dpgmm.covariances_, 1,
"Bayesian Gaussian mixture models with a Dirichlet process prior "
r"for $\gamma_0=0.01$.")
X_s, y_s = dpgmm.sample(n_samples=2000)
plot_samples(X_s, y_s, dpgmm.n_components, 0,
"Gaussian mixture with a Dirichlet process prior "
r"for $\gamma_0=0.01$ sampled with $2000$ samples.")
dpgmm = mixture.BayesianGaussianMixture(
n_components=10, covariance_type='full', weight_concentration_prior=1e+2,
weight_concentration_prior_type='dirichlet_process',
mean_precision_prior=1e-2, covariance_prior=1e0 * np.eye(2),
init_params="kmeans", max_iter=100, random_state=2).fit(X)
plot_results(X, dpgmm.predict(X), dpgmm.means_, dpgmm.covariances_, 2,
"Bayesian Gaussian mixture models with a Dirichlet process prior "
r"for $\gamma_0=100$")
X_s, y_s = dpgmm.sample(n_samples=2000)
plot_samples(X_s, y_s, dpgmm.n_components, 1,
"Gaussian mixture with a Dirichlet process prior "
r"for $\gamma_0=100$ sampled with $2000$ samples.")
plt.show()
| bsd-3-clause |
yavalvas/yav_com | build/matplotlib/examples/widgets/slider_demo.py | 13 | 1179 | import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider, Button, RadioButtons
fig, ax = plt.subplots()
plt.subplots_adjust(left=0.25, bottom=0.25)
t = np.arange(0.0, 1.0, 0.001)
a0 = 5
f0 = 3
s = a0*np.sin(2*np.pi*f0*t)
l, = plt.plot(t,s, lw=2, color='red')
plt.axis([0, 1, -10, 10])
axcolor = 'lightgoldenrodyellow'
axfreq = plt.axes([0.25, 0.1, 0.65, 0.03], axisbg=axcolor)
axamp = plt.axes([0.25, 0.15, 0.65, 0.03], axisbg=axcolor)
sfreq = Slider(axfreq, 'Freq', 0.1, 30.0, valinit=f0)
samp = Slider(axamp, 'Amp', 0.1, 10.0, valinit=a0)
def update(val):
amp = samp.val
freq = sfreq.val
l.set_ydata(amp*np.sin(2*np.pi*freq*t))
fig.canvas.draw_idle()
sfreq.on_changed(update)
samp.on_changed(update)
resetax = plt.axes([0.8, 0.025, 0.1, 0.04])
button = Button(resetax, 'Reset', color=axcolor, hovercolor='0.975')
def reset(event):
sfreq.reset()
samp.reset()
button.on_clicked(reset)
rax = plt.axes([0.025, 0.5, 0.15, 0.15], axisbg=axcolor)
radio = RadioButtons(rax, ('red', 'blue', 'green'), active=0)
def colorfunc(label):
l.set_color(label)
fig.canvas.draw_idle()
radio.on_clicked(colorfunc)
plt.show()
| mit |
yavalvas/yav_com | build/matplotlib/examples/user_interfaces/embedding_in_wx2.py | 9 | 2706 | #!/usr/bin/env python
"""
An example of how to use wx or wxagg in an application with the new
toolbar - comment out the setA_toolbar line for no toolbar
"""
# Used to guarantee to use at least Wx2.8
import wxversion
wxversion.ensureMinimal('2.8')
from numpy import arange, sin, pi
import matplotlib
# uncomment the following to use wx rather than wxagg
#matplotlib.use('WX')
#from matplotlib.backends.backend_wx import FigureCanvasWx as FigureCanvas
# comment out the following to use wx rather than wxagg
matplotlib.use('WXAgg')
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wx import NavigationToolbar2Wx
from matplotlib.figure import Figure
import wx
class CanvasFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self,None,-1,
'CanvasFrame',size=(550,350))
self.SetBackgroundColour(wx.NamedColour("WHITE"))
self.figure = Figure()
self.axes = self.figure.add_subplot(111)
t = arange(0.0,3.0,0.01)
s = sin(2*pi*t)
self.axes.plot(t,s)
self.canvas = FigureCanvas(self, -1, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.GROW)
self.SetSizer(self.sizer)
self.Fit()
self.add_toolbar() # comment this out for no toolbar
def add_toolbar(self):
self.toolbar = NavigationToolbar2Wx(self.canvas)
self.toolbar.Realize()
if wx.Platform == '__WXMAC__':
# Mac platform (OSX 10.3, MacPython) does not seem to cope with
# having a toolbar in a sizer. This work-around gets the buttons
# back, but at the expense of having the toolbar at the top
self.SetToolBar(self.toolbar)
else:
# On Windows platform, default window size is incorrect, so set
# toolbar width to figure width.
tw, th = self.toolbar.GetSizeTuple()
fw, fh = self.canvas.GetSizeTuple()
# By adding toolbar in sizer, we are able to put it at the bottom
# of the frame - so appearance is closer to GTK version.
# As noted above, doesn't work for Mac.
self.toolbar.SetSize(wx.Size(fw, th))
self.sizer.Add(self.toolbar, 0, wx.LEFT | wx.EXPAND)
# update the axes menu on the toolbar
self.toolbar.update()
def OnPaint(self, event):
self.canvas.draw()
class App(wx.App):
def OnInit(self):
'Create the main window and insert the custom frame'
frame = CanvasFrame()
frame.Show(True)
return True
app = App(0)
app.MainLoop()
| mit |
neale/CS-program | 434-MachineLearning/final_project/linearClassifier/sklearn/decomposition/tests/test_incremental_pca.py | 297 | 8265 | """Tests for Incremental PCA."""
import numpy as np
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn import datasets
from sklearn.decomposition import PCA, IncrementalPCA
iris = datasets.load_iris()
def test_incremental_pca():
# Incremental PCA on dense arrays.
X = iris.data
batch_size = X.shape[0] // 3
ipca = IncrementalPCA(n_components=2, batch_size=batch_size)
pca = PCA(n_components=2)
pca.fit_transform(X)
X_transformed = ipca.fit_transform(X)
np.testing.assert_equal(X_transformed.shape, (X.shape[0], 2))
assert_almost_equal(ipca.explained_variance_ratio_.sum(),
pca.explained_variance_ratio_.sum(), 1)
for n_components in [1, 2, X.shape[1]]:
ipca = IncrementalPCA(n_components, batch_size=batch_size)
ipca.fit(X)
cov = ipca.get_covariance()
precision = ipca.get_precision()
assert_array_almost_equal(np.dot(cov, precision),
np.eye(X.shape[1]))
def test_incremental_pca_check_projection():
# Test that the projection of data is correct.
rng = np.random.RandomState(1999)
n, p = 100, 3
X = rng.randn(n, p) * .1
X[:10] += np.array([3, 4, 5])
Xt = 0.1 * rng.randn(1, p) + np.array([3, 4, 5])
# Get the reconstruction of the generated data X
# Note that Xt has the same "components" as X, just separated
# This is what we want to ensure is recreated correctly
Yt = IncrementalPCA(n_components=2).fit(X).transform(Xt)
# Normalize
Yt /= np.sqrt((Yt ** 2).sum())
# Make sure that the first element of Yt is ~1, this means
# the reconstruction worked as expected
assert_almost_equal(np.abs(Yt[0][0]), 1., 1)
def test_incremental_pca_inverse():
# Test that the projection of data can be inverted.
rng = np.random.RandomState(1999)
n, p = 50, 3
X = rng.randn(n, p) # spherical data
X[:, 1] *= .00001 # make middle component relatively small
X += [5, 4, 3] # make a large mean
# same check that we can find the original data from the transformed
# signal (since the data is almost of rank n_components)
ipca = IncrementalPCA(n_components=2, batch_size=10).fit(X)
Y = ipca.transform(X)
Y_inverse = ipca.inverse_transform(Y)
assert_almost_equal(X, Y_inverse, decimal=3)
def test_incremental_pca_validation():
# Test that n_components is >=1 and <= n_features.
X = [[0, 1], [1, 0]]
for n_components in [-1, 0, .99, 3]:
assert_raises(ValueError, IncrementalPCA(n_components,
batch_size=10).fit, X)
def test_incremental_pca_set_params():
    # Test that changing n_components via set_params is validated on the next partial_fit call.
rng = np.random.RandomState(1999)
n_samples = 100
n_features = 20
X = rng.randn(n_samples, n_features)
X2 = rng.randn(n_samples, n_features)
X3 = rng.randn(n_samples, n_features)
ipca = IncrementalPCA(n_components=20)
ipca.fit(X)
# Decreasing number of components
ipca.set_params(n_components=10)
assert_raises(ValueError, ipca.partial_fit, X2)
# Increasing number of components
ipca.set_params(n_components=15)
assert_raises(ValueError, ipca.partial_fit, X3)
# Returning to original setting
ipca.set_params(n_components=20)
ipca.partial_fit(X)
def test_incremental_pca_num_features_change():
    # Test that changing the number of features will raise an error.
rng = np.random.RandomState(1999)
n_samples = 100
X = rng.randn(n_samples, 20)
X2 = rng.randn(n_samples, 50)
ipca = IncrementalPCA(n_components=None)
ipca.fit(X)
assert_raises(ValueError, ipca.partial_fit, X2)
def test_incremental_pca_batch_signs():
# Test that components_ sign is stable over batch sizes.
rng = np.random.RandomState(1999)
n_samples = 100
n_features = 3
X = rng.randn(n_samples, n_features)
all_components = []
batch_sizes = np.arange(10, 20)
for batch_size in batch_sizes:
ipca = IncrementalPCA(n_components=None, batch_size=batch_size).fit(X)
all_components.append(ipca.components_)
for i, j in zip(all_components[:-1], all_components[1:]):
assert_almost_equal(np.sign(i), np.sign(j), decimal=6)
def test_incremental_pca_batch_values():
# Test that components_ values are stable over batch sizes.
rng = np.random.RandomState(1999)
n_samples = 100
n_features = 3
X = rng.randn(n_samples, n_features)
all_components = []
batch_sizes = np.arange(20, 40, 3)
for batch_size in batch_sizes:
ipca = IncrementalPCA(n_components=None, batch_size=batch_size).fit(X)
all_components.append(ipca.components_)
for i, j in zip(all_components[:-1], all_components[1:]):
assert_almost_equal(i, j, decimal=1)
def test_incremental_pca_partial_fit():
# Test that fit and partial_fit get equivalent results.
rng = np.random.RandomState(1999)
n, p = 50, 3
X = rng.randn(n, p) # spherical data
X[:, 1] *= .00001 # make middle component relatively small
X += [5, 4, 3] # make a large mean
# same check that we can find the original data from the transformed
# signal (since the data is almost of rank n_components)
batch_size = 10
ipca = IncrementalPCA(n_components=2, batch_size=batch_size).fit(X)
pipca = IncrementalPCA(n_components=2, batch_size=batch_size)
# Add one to make sure endpoint is included
batch_itr = np.arange(0, n + 1, batch_size)
for i, j in zip(batch_itr[:-1], batch_itr[1:]):
pipca.partial_fit(X[i:j, :])
assert_almost_equal(ipca.components_, pipca.components_, decimal=3)
def test_incremental_pca_against_pca_iris():
# Test that IncrementalPCA and PCA are approximate (to a sign flip).
X = iris.data
Y_pca = PCA(n_components=2).fit_transform(X)
Y_ipca = IncrementalPCA(n_components=2, batch_size=25).fit_transform(X)
assert_almost_equal(np.abs(Y_pca), np.abs(Y_ipca), 1)
def test_incremental_pca_against_pca_random_data():
# Test that IncrementalPCA and PCA are approximate (to a sign flip).
rng = np.random.RandomState(1999)
n_samples = 100
n_features = 3
X = rng.randn(n_samples, n_features) + 5 * rng.rand(1, n_features)
Y_pca = PCA(n_components=3).fit_transform(X)
Y_ipca = IncrementalPCA(n_components=3, batch_size=25).fit_transform(X)
assert_almost_equal(np.abs(Y_pca), np.abs(Y_ipca), 1)
def test_explained_variances():
# Test that PCA and IncrementalPCA calculations match
X = datasets.make_low_rank_matrix(1000, 100, tail_strength=0.,
effective_rank=10, random_state=1999)
prec = 3
n_samples, n_features = X.shape
for nc in [None, 99]:
pca = PCA(n_components=nc).fit(X)
ipca = IncrementalPCA(n_components=nc, batch_size=100).fit(X)
assert_almost_equal(pca.explained_variance_, ipca.explained_variance_,
decimal=prec)
assert_almost_equal(pca.explained_variance_ratio_,
ipca.explained_variance_ratio_, decimal=prec)
assert_almost_equal(pca.noise_variance_, ipca.noise_variance_,
decimal=prec)
def test_whitening():
# Test that PCA and IncrementalPCA transforms match to sign flip.
X = datasets.make_low_rank_matrix(1000, 10, tail_strength=0.,
effective_rank=2, random_state=1999)
prec = 3
n_samples, n_features = X.shape
for nc in [None, 9]:
pca = PCA(whiten=True, n_components=nc).fit(X)
ipca = IncrementalPCA(whiten=True, n_components=nc,
batch_size=250).fit(X)
Xt_pca = pca.transform(X)
Xt_ipca = ipca.transform(X)
assert_almost_equal(np.abs(Xt_pca), np.abs(Xt_ipca), decimal=prec)
Xinv_ipca = ipca.inverse_transform(Xt_ipca)
Xinv_pca = pca.inverse_transform(Xt_pca)
assert_almost_equal(X, Xinv_ipca, decimal=prec)
assert_almost_equal(X, Xinv_pca, decimal=prec)
assert_almost_equal(Xinv_pca, Xinv_ipca, decimal=prec)
| unlicense |
lindsayad/sympy | sympy/interactive/tests/test_ipythonprinting.py | 11 | 6263 | """Tests that the IPython printing module is properly loaded. """
from sympy.core.compatibility import u
from sympy.interactive.session import init_ipython_session
from sympy.external import import_module
from sympy.utilities.pytest import raises
# run_cell was added in IPython 0.11
ipython = import_module("IPython", min_module_version="0.11")
# disable tests if ipython is not present
if not ipython:
disabled = True
def test_ipythonprinting():
# Initialize and setup IPython session
app = init_ipython_session()
app.run_cell("ip = get_ipython()")
app.run_cell("inst = ip.instance()")
app.run_cell("format = inst.display_formatter.format")
app.run_cell("from sympy import Symbol")
# Printing without printing extension
app.run_cell("a = format(Symbol('pi'))")
app.run_cell("a2 = format(Symbol('pi')**2)")
# Deal with API change starting at IPython 1.0
if int(ipython.__version__.split(".")[0]) < 1:
assert app.user_ns['a']['text/plain'] == "pi"
assert app.user_ns['a2']['text/plain'] == "pi**2"
else:
assert app.user_ns['a'][0]['text/plain'] == "pi"
assert app.user_ns['a2'][0]['text/plain'] == "pi**2"
# Load printing extension
app.run_cell("from sympy import init_printing")
app.run_cell("init_printing()")
# Printing with printing extension
app.run_cell("a = format(Symbol('pi'))")
app.run_cell("a2 = format(Symbol('pi')**2)")
# Deal with API change starting at IPython 1.0
if int(ipython.__version__.split(".")[0]) < 1:
assert app.user_ns['a']['text/plain'] in (u('\N{GREEK SMALL LETTER PI}'), 'pi')
assert app.user_ns['a2']['text/plain'] in (u(' 2\n\N{GREEK SMALL LETTER PI} '), ' 2\npi ')
else:
assert app.user_ns['a'][0]['text/plain'] in (u('\N{GREEK SMALL LETTER PI}'), 'pi')
assert app.user_ns['a2'][0]['text/plain'] in (u(' 2\n\N{GREEK SMALL LETTER PI} '), ' 2\npi ')
def test_print_builtin_option():
# Initialize and setup IPython session
app = init_ipython_session()
app.run_cell("ip = get_ipython()")
app.run_cell("inst = ip.instance()")
app.run_cell("format = inst.display_formatter.format")
app.run_cell("from sympy import Symbol")
app.run_cell("from sympy import init_printing")
app.run_cell("a = format({Symbol('pi'): 3.14, Symbol('n_i'): 3})")
# Deal with API change starting at IPython 1.0
if int(ipython.__version__.split(".")[0]) < 1:
text = app.user_ns['a']['text/plain']
raises(KeyError, lambda: app.user_ns['a']['text/latex'])
else:
text = app.user_ns['a'][0]['text/plain']
raises(KeyError, lambda: app.user_ns['a'][0]['text/latex'])
# Note : Unicode of Python2 is equivalent to str in Python3. In Python 3 we have one
# text type: str which holds Unicode data and two byte types bytes and bytearray.
# XXX: How can we make this ignore the terminal width? This test fails if
# the terminal is too narrow.
assert text in ("{pi: 3.14, n_i: 3}",
u('{n\N{LATIN SUBSCRIPT SMALL LETTER I}: 3, \N{GREEK SMALL LETTER PI}: 3.14}'),
"{n_i: 3, pi: 3.14}",
u('{\N{GREEK SMALL LETTER PI}: 3.14, n\N{LATIN SUBSCRIPT SMALL LETTER I}: 3}'))
# If we enable the default printing, then the dictionary's should render
# as a LaTeX version of the whole dict: ${\pi: 3.14, n_i: 3}$
app.run_cell("inst.display_formatter.formatters['text/latex'].enabled = True")
app.run_cell("init_printing(use_latex=True)")
app.run_cell("a = format({Symbol('pi'): 3.14, Symbol('n_i'): 3})")
# Deal with API change starting at IPython 1.0
if int(ipython.__version__.split(".")[0]) < 1:
text = app.user_ns['a']['text/plain']
latex = app.user_ns['a']['text/latex']
else:
text = app.user_ns['a'][0]['text/plain']
latex = app.user_ns['a'][0]['text/latex']
assert text in ("{pi: 3.14, n_i: 3}",
u('{n\N{LATIN SUBSCRIPT SMALL LETTER I}: 3, \N{GREEK SMALL LETTER PI}: 3.14}'),
"{n_i: 3, pi: 3.14}",
u('{\N{GREEK SMALL LETTER PI}: 3.14, n\N{LATIN SUBSCRIPT SMALL LETTER I}: 3}'))
assert latex == r'$$\left \{ n_{i} : 3, \quad \pi : 3.14\right \}$$'
app.run_cell("inst.display_formatter.formatters['text/latex'].enabled = True")
app.run_cell("init_printing(use_latex=True, print_builtin=False)")
app.run_cell("a = format({Symbol('pi'): 3.14, Symbol('n_i'): 3})")
# Deal with API change starting at IPython 1.0
if int(ipython.__version__.split(".")[0]) < 1:
text = app.user_ns['a']['text/plain']
raises(KeyError, lambda: app.user_ns['a']['text/latex'])
else:
text = app.user_ns['a'][0]['text/plain']
raises(KeyError, lambda: app.user_ns['a'][0]['text/latex'])
# Note : Unicode of Python2 is equivalent to str in Python3. In Python 3 we have one
# text type: str which holds Unicode data and two byte types bytes and bytearray.
# Python 3.3.3 + IPython 0.13.2 gives: '{n_i: 3, pi: 3.14}'
# Python 3.3.3 + IPython 1.1.0 gives: '{n_i: 3, pi: 3.14}'
# Python 2.7.5 + IPython 1.1.0 gives: '{pi: 3.14, n_i: 3}'
assert text in ("{pi: 3.14, n_i: 3}", "{n_i: 3, pi: 3.14}")
def test_matplotlib_bad_latex():
# Initialize and setup IPython session
app = init_ipython_session()
app.run_cell("import IPython")
app.run_cell("ip = get_ipython()")
app.run_cell("inst = ip.instance()")
app.run_cell("format = inst.display_formatter.format")
app.run_cell("from sympy import init_printing, Matrix")
app.run_cell("init_printing(use_latex='matplotlib')")
# The png formatter is not enabled by default in this context
app.run_cell("inst.display_formatter.formatters['image/png'].enabled = True")
# Make sure no warnings are raised by IPython
app.run_cell("import warnings")
app.run_cell("warnings.simplefilter('error', IPython.core.formatters.FormatterWarning)")
# This should not raise an exception
app.run_cell("a = format(Matrix([1, 2, 3]))")
# issue 9799
app.run_cell("from sympy import Piecewise, Symbol, Eq")
app.run_cell("x = Symbol('x'); pw = format(Piecewise((1, Eq(x, 0)), (0, True)))")
| bsd-3-clause |
iarroyof/distributionalSemanticStabilityThesis | mklObj.py | 2 | 55729 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
__author__ = 'Ignacio Arroyo-Fernandez'
from modshogun import *
from tools.load import LoadMatrix
from sklearn.metrics import r2_score
import random
from math import sqrt
import numpy
from os import getcwd
from sys import stderr
from pdb import set_trace as st
def open_configuration_file(fileName):
""" Loads the input data configuration file. Lines which start with '#' are ignored. No lines different from
configuration ones (even blank ones) at top are allowed. The amount of lines at top are exclusively either three or
five (see below for allowed contents).
The first line may be specifying the train data file in sparse market matrix format.
The second line may be specifying the test data file in sparse market matrix format.
The third line may be specifying the train labels file. An scalar by line must be associated as label of a vector
in training data.
The fourth line may be specifying the test labels file. An scalar by line must be associated as label of a vector
in test data.
The fifth line indicates options for the MKL object:
First character : Problem type : valid_options = {r: regression, b: binary, m: multiclass}
Second character: Machine mode : valid_options = {l: learning_mode, p: pattern_recognition_mode}
Any other characters and amount of they will be ignored or caught as errors.
For all configuration lines no other kind of content is allowed (e.g. comments in line ahead).
Training data (and its labels) is optional. Whenever no five configuration lines are detected in this file,
the first line will be considered as the test data file name, the second line as de test labels and third line as
the MKL options. An error exception will be raised otherwise (e.g. no three or no five configuration lines).
"""
with open(fileName) as f:
configuration_lines = f.read().splitlines()
problem_modes = {'r':'regression', 'b':'binary', 'm':'multiclass'}
machine_modes = {'l':'learning', 'p':'pattern_recognition'}
cls = 0 # Counted number of configuration lines from top.
ncls = 5 # Number of configuration lines allowed.
for line in configuration_lines:
if not line.startswith('#'):
cls += 1
else:
break
if cls == ncls:
mode = configuration_lines[4]
configuration = {}
if len(mode) == 2:
try:
configuration['problem_mode'] = problem_modes[mode[0]]
configuration['machine_mode'] = machine_modes[mode[1]]
except KeyError:
sys.stderr.write('\nERROR: Incorrect configuration file. Invalid machine mode. See help for mklObj.open_configuration_file().')
else:
sys.stderr.write('\nERROR: Incorrect configuration file. Invalid number of lines. See help for mklObj.open_configuration_file().')
exit()
Null = ncls # Null index
if configuration['machine_mode'] == 'learning': # According to availability of training files, indexes are setted.
trf = 0; tsf = 1; trlf = 2 # training_file, test_file, training_labels_file, test_labels_file, mode
tslf = 3; mf = Null
configuration_lines[ncls] = None
del(configuration_lines[ncls+1:]) # All from the first '#' onwards is ignored.
elif configuration['machine_mode'] == 'pattern_recognition':
trf = 0; tsf = 1; trlf = Null # training_file, test_file, test_labels_file, mode, model_file
tslf = 2; mf = 3
configuration_lines[ncls] = None
del(configuration_lines[ncls+1:])
configuration['training_file'] = configuration_lines[trf]
configuration['test_file'] = configuration_lines[tsf]
configuration['training_labels_file'] = configuration_lines[trlf]
configuration['test_labels_file'] = configuration_lines[tslf]
configuration['model_file'] = configuration_lines[mf]
return configuration
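# A hypothetical configuration file for 'learning' mode could look as follows (file names are
# illustrative only; the fifth line 'rl' selects a regression problem in learning mode, and any
# lines starting with '#' after the configuration lines are ignored):
#
#     train_data.mtx
#     test_data.mtx
#     train_labels.txt
#     test_labels.txt
#     rl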
# Loading toy multiclass data from files
def load_multiclassToy(dataRoute, fileTrain, fileLabels):
""" :returns: [RealFeatures(training_data), RealFeatures(test_data), MulticlassLabels(train_labels),
MulticlassLabels(test_labels)]. It is a set of Shogun training objects for raising a 10-class classification
problem. This function is a modified version from http://www.shogun-toolbox.org/static/notebook/current/MKL.html
Pay attention to input parameters because their documentations is valid for acquiring data for any multiclass
problem with Shogun.
:param dataRoute: The allocation directory of plain text file containing the train and test data.
:param fileTrain: The name of the text file containing the train and test data. Each row of the file contains a
sample vector and each column is a dimension of such a sample vector.
:param fileLabels: The name of the text file containing the train and test labels. Each row must to correspond to
each sample in fileTrain. It must be at the same directory specified by dataRoute.
"""
lm = LoadMatrix()
dataSet = lm.load_numbers(dataRoute + fileTrain)
labels = lm.load_labels(dataRoute + fileLabels)
return (RealFeatures(dataSet.T[0:3 * len(dataSet.T) / 4].T), # Return the training set, 3/4 * dataSet
RealFeatures(dataSet.T[(3 * len(dataSet.T) / 4):].T), # Return the test set, 1/4 * dataSet
MulticlassLabels(labels[0:3 * len(labels) / 4]), # Return corresponding train and test labels
MulticlassLabels(labels[(3 * len(labels) / 4):]))
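# Usage sketch (hypothetical directory and file names; both files must live under dataRoute):
# feats_tr, feats_ts, labels_tr, labels_ts = load_multiclassToy('./data/', 'toy_data.txt', 'toy_labels.txt')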
# 2D Toy data generator
def generate_binToy(file_data = None, file_labels = None):
""":return: [RealFeatures(train_data),RealFeatures(train_data),BinaryLabels(train_labels),BinaryLabels(test_labels)]
This method generates random 2D training and test data for binary classification. The labels are {-1, 1} vectors.
"""
num = 30
num_components = 4
means = numpy.zeros((num_components, 2))
means[0] = [-1, 1]
means[1] = [2, -1.5]
means[2] = [-1, -3]
means[3] = [2, 1]
covs = numpy.array([[1.0, 0.0], [0.0, 1.0]])
gmm = GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs, i) for i in range(num_components)]
gmm.set_coef(numpy.array([1.0, 0.0, 0.0, 0.0]))
xntr = numpy.array([gmm.sample() for i in xrange(num)]).T
xnte = numpy.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(numpy.array([0.0, 1.0, 0.0, 0.0]))
xntr1 = numpy.array([gmm.sample() for i in xrange(num)]).T
xnte1 = numpy.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(numpy.array([0.0, 0.0, 1.0, 0.0]))
xptr = numpy.array([gmm.sample() for i in xrange(num)]).T
xpte = numpy.array([gmm.sample() for i in xrange(5000)]).T
gmm.set_coef(numpy.array([0.0, 0.0, 0.0, 1.0]))
xptr1 = numpy.array([gmm.sample() for i in xrange(num)]).T
xpte1 = numpy.array([gmm.sample() for i in xrange(5000)]).T
if not file_data:
return (RealFeatures(numpy.concatenate((xntr, xntr1, xptr, xptr1), axis=1)), # Train Data
RealFeatures(numpy.concatenate((xnte, xnte1, xpte, xpte1), axis=1)), # Test Data
BinaryLabels(numpy.concatenate((-numpy.ones(2 * num), numpy.ones(2 * num)))), # Train Labels
BinaryLabels(numpy.concatenate((-numpy.ones(10000), numpy.ones(10000))))) # Test Labels
else:
data_set = numpy.concatenate((numpy.concatenate((xntr, xntr1, xptr, xptr1), axis=1),
numpy.concatenate((xnte, xnte1, xpte, xpte1), axis=1)), axis = 1).T
labels = numpy.concatenate((numpy.concatenate((-numpy.ones(2 * num), numpy.ones(2 * num))),
numpy.concatenate((-numpy.ones(10000), numpy.ones(10000)))), axis = 1).astype(int)
indexes = range(len(data_set))
numpy.random.shuffle(indexes)
fd = open(file_data, 'w')
fl = open(file_labels, 'w')
for i in indexes:
fd.write('%f %f\n' % (data_set[i][0],data_set[i][1]))
fl.write(str(labels[i])+'\n')
fd.close()
fl.close()
#numpy.savetxt(file_data, data_set, fmt='%f')
#numpy.savetxt(file_labels, labels, fmt='%d')
def load_binData(tr_ts_portion = None, fileTrain = None, fileLabels = None, dataRoute = None):
if not dataRoute:
dataRoute = getcwd()+'/'
assert fileTrain and fileLabels # One (or both) of the input files are not given.
assert (tr_ts_portion > 0.0 and tr_ts_portion <= 1.0) # The proportion of dividing the data set into train and test is in (0, 1]
lm = LoadMatrix()
dataSet = lm.load_numbers(dataRoute + fileTrain)
labels = lm.load_labels(dataRoute + fileLabels)
return (RealFeatures(dataSet.T[0:tr_ts_portion * len(dataSet.T)].T), # Return the training set, 3/4 * dataSet
RealFeatures(dataSet.T[tr_ts_portion * len(dataSet.T):].T), # Return the test set, 1/4 * dataSet
BinaryLabels(labels[0:tr_ts_portion * len(labels)]), # Return corresponding train and test labels
BinaryLabels(labels[tr_ts_portion * len(labels):]))
def load_regression_data(fileTrain = None, fileTest = None, fileLabelsTr = None, fileLabelsTs = None, sparse=False):
""" This method loads data from sparse mtx file format ('CSR' preferably. See Python sci.sparse matrix
format, also referred to as matrix market read and write methods). Label files should contain a column of
these labels, e.g. see the contents of a three labels file:
1.23
-102.45
2.2998438943
Loading uniquely test labels is allowed (training labels are optional). In pattern_recognition mode no
training labels are required. None is returned out for corresponding Shogun label object. Feature list
returned:
[features_tr, features_ts, labels_tr, labels_ts]
Returned data is float type (dtype='float64'). This is the minimum data length allowed by Shogun given the
sparse distance functions does not allow other ones, e.g. short (float32).
"""
assert fileTrain and fileTest and fileLabelsTs # Necessary test labels as well as test and train data sets specification.
from scipy.io import mmread
lm = LoadMatrix()
if sparse:
sci_data_tr = mmread(fileTrain).asformat('csr').astype('float64').T
features_tr = SparseRealFeatures(sci_data_tr) # Reformated as CSR and 'float64' type for
sci_data_ts = mmread(fileTest).asformat('csr').astype('float64').T # compatibility with SparseRealFeatures
features_ts = SparseRealFeatures(sci_data_ts)
else:
features_tr = RealFeatures(lm.load_numbers(fileTrain).astype('float64'))
features_ts = RealFeatures(lm.load_numbers(fileTest).astype('float64'))
labels_ts = RegressionLabels(lm.load_labels(fileLabelsTs))
if fileTrain and fileLabelsTr: # sci_data_x: Any sparse data type in the file.
labels_tr = RegressionLabels(lm.load_labels(fileLabelsTr))
else:
labels_tr = None
return features_tr, features_ts, labels_tr, labels_ts
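# Usage sketch (hypothetical file names; data files in matrix market format, one label per line):
# feats_tr, feats_ts, labels_tr, labels_ts = load_regression_data(
#     fileTrain='train.mtx', fileTest='test.mtx',
#     fileLabelsTr='train_labels.txt', fileLabelsTs='test_labels.txt', sparse=True)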
# Exception handling:
class customException(Exception):
""" This exception prevents training inconsistencies. It could be edited for accepting a complete
dictionary of exceptions if desired.
"""
def __init__(self, message):
self.parameter = message
def __str__(self):
return repr(self.parameter)
# Basis kernel parameter generation:
def sigmaGen(self, hyperDistribution, size, rango, parameters):
""" :return: list of float
This module generates the pseudorandom vector of widths for basis Gaussian kernels according to a distribution, i.e.
hyperDistribution =
{'linear',
'quadratic',
'loggauss'*,
'gaussian'*,
'triangular', # parameters[0] is the median of the distribution. parameters[1] has not effect.
'pareto',
'beta'*,
'gamma',
'weibull'}.
Names marked with * require parameters, e.g. for 'gaussian', parameters = [mean, width]. The input 'size' is the
amount of segments the distribution domain will be discretized out. The 'rango' input are the minimum and maximum
values of the obtained distributed values. The 'parameters' of these weight vector distributions are set to common
values of each distribution by default, but they can be modified.
:param hyperDistribution: string
:param size: It is the number of basis kernels for the MKL object.
:param rango: It is the range to which the basis kernel parameters will pertain. For some basis kernels families
this input parameter has not effect.
:param parameters: It is a list of parameters of the distribution of the random weights, e.g. for a gaussian
distribution with mean zero and variance 1, parameters = [0, 1]. For some basis kernel families this input parameter
has not effect: {'linear', 'quadratic', 'triangular', 'pareto', 'gamma', 'weilbull', }
.. seealso: fit_kernel() function documentation.
"""
    # Validating the inputs
    assert (isinstance(size, int) and size > 0)
    assert (rango[0] < rango[1] and len(rango) == 2)
    # .. todo: Revise the linspaces of the other distributions. They must be as consistent as the
    # .. todo: Gaussian one. Use '==' instead of 'is' when verifying equality between strings (PEP 8 recommendation).
sig = []
if hyperDistribution == 'linear':
line = numpy.linspace(rango[0], rango[1], size*2)
sig = random.sample(line, size)
return sig
elif hyperDistribution == 'quadratic':
sig = numpy.square(random.sample(numpy.linspace(int(sqrt(rango[0])), int(sqrt(rango[1]))), size))
return sig
elif hyperDistribution == 'gaussian':
assert parameters[1] > 0 # The width is greater than zero?
i = 0
while i < size:
numero = random.gauss(parameters[0], parameters[1])
if rango[0] <= numero <= rango[1]: # Validate the initial point of
sig.append(numero) # 'range'. If not met, loop does
i += 1 # not end, but resets
# If met, the number is appended
return sig # to 'sig' width list.
elif hyperDistribution == 'triangular':
assert rango[0] <= parameters[0] <= rango[1] # The median is in the range?
sig = numpy.random.triangular(rango[0], parameters[0], rango[1], size)
return sig
elif hyperDistribution == 'beta':
assert (parameters[0] >= 0 and parameters[1] >= 0) # Alpha and Beta parameters are non-negative?
sig = numpy.random.beta(parameters[0], parameters[1], size) * (rango[1] - rango[0]) + rango[0]
return sig
elif hyperDistribution == 'pareto':
return numpy.random.pareto(5, size=size) * (rango[1] - rango[0]) + rango[0]
elif hyperDistribution == 'gamma':
return numpy.random.gamma(shape=1, size=size) * (rango[1] - rango[0]) + rango[0]
elif hyperDistribution == 'weibull':
return numpy.random.weibull(2, size=size) * (rango[1] - rango[0]) + rango[0]
elif hyperDistribution == 'loggauss':
assert parameters[1] > 0 # The width is greater than zero?
i = 0
while i < size:
numero = random.lognormvariate(parameters[0], parameters[1])
if numero > rango[0] and numero < rango[1]:
sig.append(numero)
i += 1
return sig
else:
print 'The entered hyperparameter distribution is not allowed: '+hyperDistribution
#pdb.set_trace()
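# Example call (a sketch; sigmaGen() does not use its 'self' argument, so None can be passed when
# calling it at module level). The line below would draw 5 Gaussian kernel widths inside [1, 50]
# from a Gaussian hyper-distribution with mean 25 and standard deviation 10:
# widths = sigmaGen(None, 'gaussian', 5, [1, 50], [25.0, 10.0])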
# Combining kernels
def genKer(self, featsL, featsR, basisFam, widths=[5.0, 4.0, 3.0, 2.0, 1.0], sparse = False):
""":return: Shogun CombinedKernel object.
    This module generates a list of basis kernels. These kernels are tuned according to the vector ''widths''. The input
    parameters ''featsL'' and ''featsR'' are Shogun feature objects. In the case of a learnt RKHS, both objects
    should be derived from the training SLM vectors by means of the Shogun constructor RealFeatures(). This module also
    appends the basis kernels to a Shogun CombinedKernel object.
    The kernels to be appended are collected in the ''combKer'' object (see code), which is returned. We have analyzed
    some basis families available in Shogun, so the possible string values of 'basisFam' are:
basisFam = ['gaussian',
'inverseQuadratic',
'polynomial',
'power',
'rationalQuadratic',
'spherical',
'tstudent',
'wave',
'wavelet',
'cauchy',
'exponential']
"""
allowed_sparse = ['gaussian', 'polynomial'] # Change this assertion list and function if different kernels are needed.
assert not (featsL.get_feature_class() == featsR.get_feature_class() == 'C_SPARSE') or basisFam in allowed_sparse # Sparse type is not compatible with specified kernel or feature types are different.
kernels = []
if basisFam == 'gaussian':
for w in widths:
k=GaussianKernel()
#k.init(featsL, featsR)
#st()
kernels.append(k)
kernels[-1].set_width(w)
kernels[-1].init(featsL, featsR)
#st()
    elif basisFam == 'inverseQuadratic': # For this kernel (and others below) it is necessary to fit the
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2) # distance matrix at this moment k = 2 is for l_2 norm
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(InverseMultiQuadricKernel(0, w, dst))
elif basisFam == 'polynomial':
for w in widths:
kernels.append(PolyKernel(0, w, False))
    elif basisFam == 'power': # At least for images, the norm used does not make a difference in performance
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(PowerKernel(0, w, dst))
    elif basisFam == 'rationalQuadratic': # At least for images, using the 3-norm makes a difference
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2) # in performance
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(RationalQuadraticKernel(0, w, dst))
    elif basisFam == 'spherical': # At least for images, the norm used does not make a difference in performance
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(SphericalKernel(0, w, dst))
    elif basisFam == 'tstudent': # At least for images, the norm used does not make a difference in performance
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(TStudentKernel(0, w, dst))
    elif basisFam == 'wave': # At least for images, the norm used does not make a difference in performance
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(WaveKernel(0, w, dst))
    elif basisFam == 'wavelet' and not sparse: # At least for images, performance is very low with this kernel.
for w in widths: # It remains pending, for now, analysing its parameters.
kernels.append(WaveletKernel(0, w, 0))
elif basisFam == 'cauchy':
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(CauchyKernel(0, w, dst))
elif basisFam == 'exponential': # For this kernel it is necessary specifying features at the constructor
if not sparse:
dst = MinkowskiMetric(l=featsL, r=featsR, k=2)
else:
dst = SparseEuclideanDistance(l=featsL, r=featsR)
for w in widths:
kernels.append(ExponentialKernel(featsL, featsR, w, dst, 0))
elif basisFam == 'anova' and not sparse: # This kernel presents a warning in training:
"""RuntimeWarning: [WARN] In file /home/iarroyof/shogun/src/shogun/classifier/mkl/MKLMulticlass.cpp line
198: CMKLMulticlass::evaluatefinishcriterion(...): deltanew<=0.Switching back to weight norsm
difference as criterion.
"""
for w in widths:
kernels.append(ANOVAKernel(0, w))
else:
raise NameError('Unknown Kernel family name!!!')
combKer = CombinedKernel()
#features_tr = CombinedFeatures()
for k in kernels:
combKer.append_kernel(k)
#features_tr.append_feature_obj(featsL)
#combKer.init(features_tr, features_tr)
#combKer.init(featsL,featsR)
return combKer#, features_tr
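# Example call (a sketch; 'feats_train' stands for an existing Shogun RealFeatures object and
# genKer() does not use its 'self' argument):
# widths = sigmaGen(None, 'linear', 3, [1, 50], [None, None])
# combined_kernel = genKer(None, feats_train, feats_train, basisFam='gaussian', widths=widths)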
# Defining the compounding kernel object
class mklObj(object):
"""Default self definition of the Multiple Kernel Learning object. This object uses previously defined methods for
generating a linear combination of basis kernels that can be constituted from different families. See at
fit_kernel() function documentation for details. This function trains the kernel weights. The object has other
member functions offering utilities. See the next instantiation and using example:
import mklObj as mk
    kernel = mk.mklObj(weightRegNorm = 2,
                       mklC = 2, # This is the C parameter of the underlying SVM.
                       SVMepsilon = 1e-5,
                       threads = 2,
                       MKLepsilon = 0.001,
                       problem = 'multiclass',
                       verbose = False) # IMPORTANT: Don't use this feature (True) if you are working in pipe mode.
# The object will print undesired outputs to the stdout.
    The above values are the defaults, so if they are suitable for you it is possible to instantiate the object by simply
    stating: kernel = mk.mklObj(). It is even possible to modify a subset of the input parameters (keeping the others at
    their defaults): kernel = mk.mklObj(weightRegNorm = 1, mklC = 10, SVMepsilon = 1e-2). See the documentation of each
    setter below for the parameters that can be set without new instantiations.
    Now, once the main parameters have been set, fit the kernel:
kernel.fit_kernel(featsTr = feats_train,
targetsTr = labelsTr,
featsTs = feats_test,
targetsTs = labelsTs,
kernelFamily = 'gaussian',
                      randomRange = [50, 200], # For homogeneous poly kernels these two parameter
                      randomParams = [50, 20], # sets have no effect. No basis kernel parameters
                      hyper = 'linear', # Also with no effect when the kernel family is polynomial
                      pKers = 3) # and some other powering forms.
Once the MKL object has been fitted, you can get what you need from it. See getters documentation listed below.
"""
def __init__(self, weightRegNorm=2.0, mklC=2.0, SVMepsilon=0.01, model_file = None,
threads=4, MKLepsilon=0.01, problem='regression', verbose=False, mode = 'learning', sparse = False):
"""Object initialization. This procedure is regardless of the input data, basis kernels and corresponding
hyperparameters (kernel fitting).
"""
mkl_problem_object = {'regression':(MKLRegression, [mklC, mklC]),
'binary': (MKLClassification, [mklC, mklC]),
'multiclass': (MKLMulticlass, mklC)}
self.mode = mode
self.sparse = sparse
assert not model_file and mode != 'pattern_recognition' or (
model_file and mode == 'pattern_recognition')# Model file or pattern_recognition mode must be specified.
self.__problem = problem
self.verbose = verbose # inner training process verbose flag
self.Matrx = False # Kind of returned learned kernel object. See getter documentation of these
self.expansion = False # object configuration parameters for details. Only modifiable by setter.
self.__testerr = 0
if mode == 'learning':
try:
self.mkl = mkl_problem_object[problem][0]()
self.mklC = mkl_problem_object[problem][1]
except KeyError:
sys.stderr.write('Error: Given problem type is not valid.')
exit()
#################<<<<<<<<<<<>>>>>>>>>>
self.mkl.set_C_mkl(5.0) # This is the regularization parameter for the MKL weights regularizer (NOT the SVM C)
self.weightRegNorm = weightRegNorm # Setting the basis' weight vector norm
self.SVMepsilon = SVMepsilon # setting the transducer stop (convergence) criterion
self.MKLepsilon = MKLepsilon # setting the MKL stop criterion. The value suggested by
# Shogun examples is 0.001. See setter docs for details
elif mode == 'pattern_recognition':
[self.mkl, self.mkl_model] = self.load_mkl_model(file_name = model_file, model_type = problem)
self.sigmas = self.mkl_model['widths']
self.threads = threads # setting number of training threads. Verify functionality!!
def fit_pretrained(self, featsTr, featsTs):
""" This method sets up a MKL machine by using parameters from self.mkl_model preloaded dictionary which
contains preptrained model paremeters, e.g. weights and widths.
"""
self.ker = genKer(self, featsTr, featsTs, sparse = self.sparse,
basisFam = self.family_translation[self.mkl_model['family']], widths = self.sigmas)
self.ker.set_subkernel_weights(self.mkl_model['weights']) # Setting up pretrained weights to the
self.ker.init(featsTr, featsTs) # new kernel
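# A minimal usage sketch for the 'pattern_recognition' workflow (kept as comments on
# purpose; the file name and the feats_/labels_ variables below are hypothetical):
#   recognizer = mklObj(model_file='trained_mkl.model', mode='pattern_recognition',
#                       problem='regression')
#   recognizer.crashed = False               # not set automatically outside fit_kernel()
#   recognizer.fit_pretrained(feats_train, feats_test)
#   recognizer.pattern_recognition(labels_test)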
# Self Function for kernel generation
def fit_kernel(self, featsTr, targetsTr, featsTs, targetsTs, randomRange=[1, 50], randomParams=[1, 1],
hyper='linear', kernelFamily='gaussian', pKers=3):
""" :return: CombinedKernel Shogun object.
This method is used for training the desired compound kernel. See the documentation of the 'mklObj'
object for a usage example. 'featsTr' and 'featsTs' are the training and test data, respectively.
'targetsTr' and 'targetsTs' are the training and test labels, respectively. They must all be Shogun
'RealFeatures' and 'MulticlassLabels' objects, respectively.
The 'randomRange' parameter defines the range of numbers from which the basis kernel parameters will be
drawn, e.g. Gaussian random widths between 1 and 50 (the default). The 'randomParams' input parameter
states the parameters of the pseudorandom distribution of the basis kernel parameters to be drawn, e.g.
Gaussian-pseudorandom-generated weights with std. deviation equal to 1 and mean equal to 1 (the default).
The 'hyper' input parameter defines the distribution of the pseudorandom-generated weights. See
documentation of the sigmaGen() method of the 'mklObj' object to see a list of possible basis kernel
parameter distributions. The 'kernelFamily' input parameter is the basis kernel family to be appended to
the desired compound kernel if you select, e.g., the default 'gaussian' family, all elements of the
learned linear combination will be gaussians (each differently weighted and parametrized). See
documentation of the genKer() method of the 'mklObj' object to see a list of allowed basis kernel
families. The 'pKers' input parameter defines the size of the learned kernel linear combination, i.e.
how many basis kernels will be weighted during training and, therefore, how many coefficients the
Fourier series of the data will have (the default is 3).
.. note:: In the cases of kernelFamily = {'polynomial' or 'power' or 'tstudent' or 'anova'}, the input
parameters {'randomRange', 'randomParams', 'hyper'} have no effect, because these kernel families do not
require basis kernel parameters.
:param featsTr: RealFeatures Shogun object conflating the training data.
:param targetsTr: MulticlassLabels Shogun object conflating the training labels.
:param featsTs: RealFeatures Shogun object conflating the test data.
:param targetsTs: MulticlassLabels Shogun object conflating the test labels.
:param randomRange: The range from which the basis kernel parameters will be drawn. For some basis
kernel families this input parameter has no effect.
:param randomParams: A list of parameters of the distribution of the random weights, e.g. for a
gaussian distribution with mean zero and variance 1, parameters = [0, 1]. For some basis kernel
families this input parameter has no effect.
:param hyper: string which specifies the name of the basis kernel parameter distribution. See
documentation for sigmaGen() function for viewing allowed strings (names).
:param kernelFamily: string which specifies the name of the basis kernel family. See documentation for
genKer() function for viewing allowed strings (names).
:param pKers: This is the number of basis kernels for the MKL object (linear combination).
"""
# Inner variable copying:
self._featsTr = featsTr
self._targetsTr = targetsTr
self._hyper = hyper
self._pkers = pKers
self.basisFamily = kernelFamily
if self.verbose: # Printing the training progress
print '\nNacho, multiple <' + kernelFamily + '> Kernels have been initialized...'
print "\nInput main parameters: "
print "\nHyperarameter distribution: ", self._hyper, "\nLinear combination size: ", pKers, \
'\nWeight regularization norm: ', self.weightRegNorm, \
'Weight regularization parameter: ',self.mklC
if self.__problem == 'multiclass':
print "Classes: ", targetsTr.get_num_classes()
elif self.__problem == 'binary':
print "Classes: Binary"
elif self.__problem == 'regression':
print 'Regression problem'
# Generating the list of subkernels and creating the compound kernel. For monomial-nonhomogeneous (polynomial)
# kernels the hyperparameters are only the degrees of the monomials, given as a sequence. MKL finds the
# coefficient (weight) for each monomial in order to build a compound polynomial.
if kernelFamily == 'polynomial' or kernelFamily == 'power' or \
kernelFamily == 'tstudent' or kernelFamily == 'anova':
self.sigmas = range(1, pKers+1)
self.ker = genKer(self, self._featsTr, self._featsTr, basisFam=kernelFamily, widths=self.sigmas, sparse = self.sparse)
else:
# We call any basis kernel parameter a 'sigma', regardless of whether the kernel is Gaussian or not. So
# let's generate the widths:
self.sigmas = sorted(sigmaGen(self, hyperDistribution=hyper, size=pKers,
rango=randomRange, parameters=randomParams))
try:
z = self.sigmas.index(0)
self.sigmas[z] = 0.1
except ValueError:
pass
try: # Verifying that the number of kernels is at least 2
if pKers <= 1 or len(self.sigmas) < 2:
raise customException('Senseless MKLClassification use!!!')
except customException, (instance):
print 'Caught: ' + instance.parameter
print "-----------------------------------------------------"
print """The multikernel learning object is meaningless for less than 2 basis
kernels, i.e. pKers <= 1, so 'mklObj' couldn't be instantiated."""
print "-----------------------------------------------------"
self.ker = genKer(self, self._featsTr, self._featsTr, basisFam=kernelFamily, widths=self.sigmas, sparse = self.sparse)
if self.verbose:
print 'Widths: ', self.sigmas
# Initializing the compound kernel
# combf_tr = CombinedFeatures()
# combf_tr.append_feature_obj(self._featsTr)
# self.ker.init(combf_tr, combf_tr)
try: # Verifying that the number of kernels is still at least 2 after training
if self.ker.get_num_subkernels() < 2:
raise customException(
'Multikernel coefficients were less than 2 after training. Revise object settings!!!')
except customException, (instance):
print 'Caught: ' + instance.parameter
# Verbose for learning surveying
if self.verbose:
print '\nKernel fitted...'
# Initializing the transducer for multiclassification
features_tr = CombinedFeatures()
features_ts = CombinedFeatures()
for k in self.sigmas:
features_tr.append_feature_obj(self._featsTr)
features_ts.append_feature_obj(featsTs)
self.ker.init(features_tr, features_tr)
self.mkl.set_kernel(self.ker)
self.mkl.set_labels(self._targetsTr)
# Train to return the learnt kernel
if self.verbose:
print '\nLearning the machine coefficients...'
# ------------------ The most time consuming code segment --------------------------
self.crashed = False
try:
self.mkl.train()
except SystemError:
self.crashed = True
self.mkl_model = self.keep_mkl_model(self.mkl, self.ker, self.sigmas) # Let's keep the trained model
if self.verbose: # for future use.
print 'Kernel trained... Weights: ', self.weights
# Evaluate the learnt kernel. Here it is assumed 'ker' has been learnt, so we only need to initialize it again, but
# with the test set object. Then, set the initialized kernel on the mkl object in order to 'apply'.
self.ker.init(features_tr, features_ts) # Now with test examples. The inner product between training
#st()
def pattern_recognition(self, targetsTs):
self.mkl.set_kernel(self.ker) # and test examples generates the corresponding Gram Matrix.
if not self.crashed:
out = self.mkl.apply() # Applying the obtained Gram Matrix
else:
out = RegressionLabels(-1.0*numpy.ones(targetsTs.get_num_labels()))
self.estimated_out = list(out.get_labels())
# ----------------------------------------------------------------------------------
if self.__problem == 'binary': # If the problem is either binary or multiclass, different
evalua = ErrorRateMeasure() # performance measures are computed.
self.__testerr = 100 - evalua.evaluate(out, targetsTs) * 100
elif self.__problem == 'multiclass':
evalua = MulticlassAccuracy()
self.__testerr = evalua.evaluate(out, targetsTs) * 100
elif self.__problem == 'regression': # Determination Coefficient was selected for measuring performance
#evalua = MeanSquaredError()
#self.__testerr = evalua.evaluate(out, targetsTs)
self.__testerr = r2_score(self.estimated_out, list(targetsTs.get_labels()))
# Verbose for learning surveying
if self.verbose:
print 'Kernel evaluation ready. The precision was: ', self.__testerr, '%'
def keep_mkl_model(self, mkl, kernel, widths, file_name = None):
""" Python reimplementated function for saving a pretrained MKL machine.
This method saves a trained MKL machine to the file 'file_name'. If not 'file_name' is given, a
dictionary 'mkl_machine' containing parameters of the given trained MKL object is returned.
All subkernels of the passed CombinedKernel are assumed to be of the same family, so only the
first kernel is used to verify whether the passed 'kernel' is a Gaussian mixture. If it is, we insert
the 'widths' into the model dictionary 'mkl_machine'. An error is returned otherwise.
"""
mkl_machine = {}
support=[]
mkl_machine['num_support_vectors'] = mkl.get_num_support_vectors()
mkl_machine['bias']=mkl.get_bias()
for i in xrange(mkl_machine['num_support_vectors']):
support.append((mkl.get_alpha(i), mkl.get_support_vector(i)))
mkl_machine['support'] = support
mkl_machine['weights'] = list(kernel.get_subkernel_weights())
mkl_machine['family'] = kernel.get_first_kernel().get_name()
mkl_machine['widths'] = widths
if file_name:
f = open(file_name,'w')
f.write(str(mkl_machine)+'\n')
f.close()
else:
return mkl_machine
def load_mkl_model(self, file_name, model_type = 'regression'):
""" This method receives a file name (if it is not in pwd, full path must be given) and a model type to
be loaded {'regression', 'binary', 'multiclass'}. The loaded file must contain a t least a dictionary at
its top. This dictionary must contain a key called 'model' whose value must be a dictionary, from which
model parameters will be read. For example:
{'key_0':value, 'key_1':value,..., 'model':{'family':'PolyKernel', 'bias':1.001,...}, key_n:value}
The method returns the MKL machine tuned to the parameters stored in the given file, together with the
model dictionary, which holds the learned weights of a CombinedKernel (as a numpy array), the widths
corresponding to those kernel weights, and the kernel family. Be careful with the kernel family you are
loading, because the stored 'widths' are not necessarily widths; for the PolyKernel family, for example,
they are degrees. The CombinedKernel must be instantiated outside this method, and the returned
weights and widths loaded into it.
"""
with open(file_name, 'r') as pointer:
mkl_machine = eval(pointer.read())['learned_model']
if model_type == 'regression':
mkl = MKLRegression() # A new two-class MKL object
elif model_type == 'binary':
mkl = MKLClassification()
elif model_type == 'multiclass':
mkl = MKLMulticlass()
else:
sys.stderr.write('ERROR: Unknown problem type in model loading.')
exit()
mkl.set_bias(mkl_machine['bias'])
mkl.create_new_model(mkl_machine['num_support_vectors']) # Initialize the inner SVM
for i in xrange(mkl_machine['num_support_vectors']):
mkl.set_alpha(i, mkl_machine['support'][i][0])
mkl.set_support_vector(i, mkl_machine['support'][i][1])
mkl_machine['weights'] = numpy.array(mkl_machine['weights'])
return mkl, mkl_machine
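# A hedged persistence sketch (comments only; names are hypothetical). Note that
# load_mkl_model() expects the stored dictionary under a 'learned_model' key, whereas
# keep_mkl_model(file_name=...) writes the raw parameter dictionary, so the wrapping
# is done by hand here before loading as in the fit_pretrained() sketch above:
#   params = trained.keep_mkl_model(trained.mkl, trained.ker, trained.sigmas)
#   with open('mkl_regression.model', 'w') as f:
#       f.write(str({'learned_model': params}) + '\n')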
# Getters (properties):
@property
def family_translation(self):
"""
"""
self.__family_translation = {'PolyKernel':'polynomial', 'GaussianKernel':'gaussian',
'ExponentialKernel':'exponential'}
return self.__family_translation
@property
def mkl_model(self):
""" This property stores the MKL model parameters learned by the self-object. These parameters can be
stored into a file for future configuration of a non-trained MKL new MKL object. Also probably passed
onwards for showing results.
"""
return self.__mkl_model
@property
def estimated_out(self):
""" This property is the mkl result after applying.
"""
return self.__estimated_out
@property
def compoundKernel(self):
"""This method is used for getting the kernel object, i.e. the learned MKL object, which can be unwrapped
into its matrix form instead of getting a Shogun object. Use the input parameters Matrix = True,
expansion = False for getting the compound matrix of reals. For instance:
mklObj.Matrix = True
mklObj.expansion = False
kernelMatrix = mklObj.compoundKernel
Use Matrix = True, expansion = True for getting the expanded linear combination of matrices and weights
separately, for instance:
mklObj.Matrix = True
mklObj.expansion = True
basis, weights = mklObj.compoundKernel
Use Matrix = False, expansion = False for getting the learned kernel Shogun object, for instance:
mklObj.Matrix = False
mklObj.expansion = False
kernelObj = mklObj.compoundKernel
.. warning:: Be careful with this latter variant of the method because of the large amount of needed
physical memory.
"""
if self.Matrx:
kernels = []
size = self.ker.get_num_subkernels()
for k in xrange(0, size):
kernels.append(self.ker.get_kernel(k).get_kernel_matrix())
ws = self.weights
if self.expansion:
return kernels, ws # Returning the full expansion of the learned kernel.
else:
return sum(w * k for w, k in zip(ws, kernels)) # Returning the matrix linear combination, in other words,
else: # a projector matrix representation.
return self.ker # If matrix representation is not required, only the Shogun kernel
# object is returned.
@property
def sigmas(self):
"""This method is used for getting the current set of basis kernel parameters, i.e. widths, in the case
of the gaussian basis kernel.
:rtype : list of float
"""
return self.__sigmas
@property
def verbose(self):
"""This is the verbose flag, which is used for monitoring the object training procedure.
IMPORTANT: Don't use this feature (True) if you are working in pipe mode. The object will print undesired
outputs to the stdout.
:rtype : bool
"""
return self._verbose
@property
def Matrx(self):
"""This is a boolean property of the object. Its aim is getting and, mainly, setting the kind of object
we want to obtain as learned kernel, i.e. a Kernel Shogun object or a Kernel Matrix whose entries are
reals. The latter could require large amounts of physical memory. See the mklObj.compoundKernel property
documentation in this object for using details.
:rtype :bool
"""
return self.__Matrx
@property
def expansion(self):
"""This is a boolean property. Its aim is getting and, mainly, setting the mklObj object to return the
complete expansion of the learned kernel, i.e. a list of basis kernel matrices as well as their
corresponding coefficients. This configuration may require large amounts of physical memory. See the
mklObj.compoundKernel property documentation in this object for using details.
:rtype :bool
.. seealso:: the code and examples and documentation about :@property:`compoundKernel`
"""
return self.__expansion
@property
def weightRegNorm(self):
""" The value of this property is the basis' weight vector norm, e.g. :math:`||\beta||_p`, to be used as
regularizer. It controls the smoothing among basis kernel weights of the learned multiple kernel combination. On
one hand, If p=1 (the l_1 norm) the weight values B_i will be disproportionally between them, i.e. a few of them
will be >> 0,some other simply > 0 and many of them will be zero or very near to zero (the vector B will be
sparse). On the other hand, if p = 2 the weights B_i linearly distributed, i.e. their distribution shows an
uniform tilt in such a way the differences between pairs of them are not significant, but rather proportional to
the tilt of the distribution.
To our knowledge, such a tilt is certainly not explicitly taken into account as regularization hyperparameter,
although the parameter C \in [0, 1] is directly associated to it as scalar factor. Thus specifically for
C \in [0, 1], it operates the vector B by forcing to it to certain orientation which describes a tilt
m \in (0, 1)U(1, \infty) (with minima in the extremes of these subsets and maxima in their medians). Given that
C \n [0, 1], the scaling effect behaves such that linearly depresses low values of B_i, whilst highlights their
high values. The effect of C \in (1, \infty) is still not clearly studied, however it will be a bit different
than the above, but keeping its scalar effect.
Overall, as p tends to be >> 1 (or even p --> \\infty) the B_i values tend to be ever more uniformly
distributed. More specific and complex regularization operators are explained in .. seealso:: Schölkopf, B., & Smola, A. J.
(2002). Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT press.
:rtype : vector of float
"""
return self.__weightRegNorm
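# A small commented illustration of the norms discussed above (an informal sketch, not
# used by the class): for a candidate weight vector beta the p-norm is
#   numpy.sum(numpy.abs(beta) ** p) ** (1.0 / p)
# For example, beta = numpy.array([0.9, 0.1, 0.0]) has l1 norm 1.0 and l2 norm ~0.905,
# while the flatter beta = numpy.array([1.0/3, 1.0/3, 1.0/3]) also has l1 norm 1.0 but a
# smaller l2 norm (~0.577); this is why p = 2 favours evenly spread weights, whereas
# p = 1 tolerates sparse ones.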
# function getters
@property
def weights(self):
"""This method is used for getting the learned weights of the MKL object. We first get the kernel weights into
a list object, before returning it. This is because 'get_subkernel_weights()' causes error while printing to an
output file by means of returning a nonlist object.
:rtype : list of float
"""
self.__weights = list(self.ker.get_subkernel_weights())
return self.__weights
@property
def SVMepsilon(self):
"""This method is used for getting the SVM convergence criterion (the minimum allowed error commited by
the transducer in training).
:rtype : float
.. seealso:: See at page 22 of Sonnemburg et.al., (2006) Large Scale Multiple Kernel Learning.
.. seealso:: @SVMepsilon.setter
"""
return self.__SVMepsilon
@property
def MKLepsilon(self):
"""This method is used for getting the MKL convergence criterion (the minimum allowed error committed by
the MKL object in test).
:rtype : float
.. seealso:: See at page 22 of Sonnemburg et.al., (2006) Large Scale Multiple Kernel Learning.
.. seealso:: @MKLepsilon.setter
"""
return self.__MKLepsilon
@property
def mklC(self):
"""This method is used for setting regularization parameters. 'mklC' is a real value in multiclass problems,
while in binary problems it must be a list of two elements. These must be different when the two classes are
imbalanced, but must be equal for balanced densities in binary classification problems. For multiclass
problems, imbalanced densities are not considered.
:rtype : float
.. seealso:: See at page 4 of Bagchi, (2014) SVM Classifiers Based On Imperfect Training Data.
.. seealso:: @weightRegNorm property documentation for more details about C as regularization parameter.
"""
return self.__mklC
@property
def threads(self):
""" This property is used for getting and setting the number of threads in which the training procedure will be
will be segmented into a single machine processor core.
:rtype : int
.. seealso:: @threads.setter documentation.
"""
return self.__threads
# Readonly properties:
@property
def problem(self):
"""This method is used for getting the kind of problem the mklObj object will be trained for. If binary == True,
the you want to train the object for a two-class classification problem. Otherwise if binary == False, you want
to train the object for multiclass classification problems. This property can't be modified once the object has
been instantiated.
:rtype : bool
"""
return self.__problem
@property
def testerr(self):
"""This method is used for getting the test accuracy after training the MKL object. 'testerr' is a readonly
object property.
:rtype : float
"""
return self.__testerr
@property
def sparse(self):
"""This method is used for getting the sparse/dense mode of the MKL object.
:rtype : bool
"""
return self.__sparse
@property
def crashed(self):
"""This method is used for getting the sparse/dense mode of the MKL object.
:rtype : float
"""
return self.__crashed
# mklObj (decorated) Setters: Binary configuration of the classifier cant be changed. It is needed to instantiate
# a new mklObj object.
@crashed.setter
def crashed(self, value):
assert isinstance(value, bool) # The crashed flag must be a boolean
self.__crashed = value
@mkl_model.setter
def mkl_model(self, value):
assert isinstance(value, dict) # The model must be stored as a dictionary
self.__mkl_model = value
@estimated_out.setter
def estimated_out(self, value):
self.__estimated_out = value
@sparse.setter
def sparse(self, value):
self.__sparse = value
@Matrx.setter
def Matrx(self, value):
"""
:type value: bool
.. seealso:: @Matrx property documentation.
"""
assert isinstance(value, bool)
self.__Matrx = value
@expansion.setter
def expansion(self, value):
"""
.. seealso:: @expansion property documentation
:type value: bool
"""
assert isinstance(value, bool)
self.__expansion = value
@sigmas.setter
def sigmas(self, value):
""" This method is used for setting desired basis kernel parameters for the MKL object. 'value' is a list of
real values of 'pKers' length. In 'learning' mode, be careful to avoid mismatching between the number of basis kernels of the
current compound kernel and the one you have in mind. A mismatch error could arise. In 'pattern_recognition'
mode, this quantity is taken from the learned model, which is stored at disk.
@type value: list of float
.. seealso:: @sigmas property documentation
"""
try:
if self.mode == 'learning':
if len(value) == self._pkers:
self.__sigmas = value
else:
raise customException('Size of basis kernel parameter list mismatches the size of the combined\
kernel. You can use len(CMKLobj.sigmas) to revise the mismatching.')
elif self.mode == 'pattern_recognition':
self.__sigmas = value
except customException, (instance):
print "Caught: " + instance.parameter
@verbose.setter
def verbose(self, value):
"""This method sets to True of False the verbose flag, which is used in turn for monitoring the object training
procedure.
@type value: bool
"""
assert isinstance(value, bool)
self._verbose = value
@weightRegNorm.setter
def weightRegNorm(self, value):
"""This method is used for changing the norm of the weight regularizer of the MKL object. Typically this
changing is useful for retrain the model with other regularizer.
@type value: float
..seealso:: @weightRegNorm property documentation.
"""
assert (isinstance(value, float) and value >= 0.0)
self.mkl.set_mkl_norm(value)
self.__weightRegNorm = value
@SVMepsilon.setter
def SVMepsilon(self, value):
"""This method is used for setting the SVM convergence criterion (the minimum allowed error commited by
the transducer in training). In other words, the low level of the learning process. The current basis
kernel combination is tested as the SVM kernel. Regardless of each basis' weights.
@type value: float
.. seealso:: Page 22 of Sonnemburg et.al., (2006) Large Scale Multiple Kernel Learning.
"""
assert (isinstance(value, float) and value >= 0.0)
self.mkl.set_epsilon(value)
self.__SVMepsilon = value
@MKLepsilon.setter
def MKLepsilon(self, value):
"""This method is used for setting the MKL convergence criterion (the minimum allowed error committed by
the MKL object in test). In other words, the high level of the learning process. The current basis
kernel combination is tested as the SVM kernel. The basis' weights are tuned until 'MKLeps' is reached.
@type value: float
.. seealso:: Page 22 of Sonnemburg et.al., (2006) Large Scale Multiple Kernel Learning.
"""
assert (isinstance(value, float) and value >= 0.0)
self.mkl.set_mkl_epsilon(value)
self.__MKLepsilon = value
@mklC.setter
def mklC(self, value):
"""This method is used for setting regularization parameters. These are different when the two classes
are imbalanced and Equal for balanced densities in binary classification problems. For multiclass
problems imbalanced densities are not considered, so uniquely the first argument is caught by the method.
If one or both arguments are misplaced the default values are one both them.
@type value: float (for multiclass problems), [float, float] for binary and regression problems.
.. seealso:: Page 4 of Bagchi,(2014) SVM Classifiers Based On Imperfect Training Data.
"""
if self.__problem == 'binary' or self.__problem == 'regression':
assert len(value) == 2
assert (isinstance(value, (list, float)) and value[0] > 0.0 and value[1] > 0.0)
self.mkl.set_C(value[0], value[1])
elif self.__problem == 'multiclass':
assert (isinstance(value, float) and value > 0.0)
self.mkl.set_C(value)
self.__mklC = value
@threads.setter
def threads(self, value):
"""This method is used for changing the number of threads we want to be running with a single machine core.
These threads are not different parallel processes running in different machine cores.
"""
assert (isinstance(value, int) and value > 0)
self.mkl.parallel.set_num_threads(value) # setting number of training threads
self.__threads = value
| gpl-2.0 |
JamesClough/networkx | examples/drawing/giant_component.py | 15 | 2287 | #!/usr/bin/env python
"""
This example illustrates the sudden appearance of a
giant connected component in a binomial random graph.
Requires pygraphviz and matplotlib to draw.
"""
# Copyright (C) 2006-2016
# Aric Hagberg <hagberg@lanl.gov>
# Dan Schult <dschult@colgate.edu>
# Pieter Swart <swart@lanl.gov>
# All rights reserved.
# BSD license.
try:
import matplotlib.pyplot as plt
except:
raise
import networkx as nx
import math
try:
import pygraphviz
from networkx.drawing.nx_agraph import graphviz_layout
layout = graphviz_layout
except ImportError:
try:
import pydotplus
from networkx.drawing.nx_pydot import graphviz_layout
layout = graphviz_layout
except ImportError:
print("PyGraphviz and PyDotPlus not found;\n"
"drawing with spring layout;\n"
"will be slow.")
layout = nx.spring_layout
n=150 # 150 nodes
# p value at which giant component (of size log(n) nodes) is expected
p_giant=1.0/(n-1)
# p value at which graph is expected to become completely connected
p_conn=math.log(n)/float(n)
# the following range of p values should be close to the threshold
pvals=[0.003, 0.006, 0.008, 0.015]
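# Quick sanity check of the thresholds above (not part of the original example): for
# n = 150, p_giant = 1/149 ~= 0.0067 and p_conn = ln(150)/150 ~= 0.033, so the chosen
# pvals straddle the giant-component threshold and stay below full connectivity.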
region=220 # for pylab 2x2 subplot layout
plt.subplots_adjust(left=0,right=1,bottom=0,top=0.95,wspace=0.01,hspace=0.01)
for p in pvals:
G=nx.binomial_graph(n,p)
pos=layout(G)
region+=1
plt.subplot(region)
plt.title("p = %6.3f"%(p))
nx.draw(G,pos,
with_labels=False,
node_size=10
)
# identify largest connected component
Gcc=sorted(nx.connected_component_subgraphs(G), key = len, reverse=True)
G0=Gcc[0]
nx.draw_networkx_edges(G0,pos,
with_labels=False,
edge_color='r',
width=6.0
)
# show other connected components
for Gi in Gcc[1:]:
if len(Gi)>1:
nx.draw_networkx_edges(Gi,pos,
with_labels=False,
edge_color='r',
alpha=0.3,
width=5.0
)
plt.savefig("giant_component.png")
plt.show() # display
| bsd-3-clause |
0x0all/scikit-learn | sklearn/manifold/tests/test_t_sne.py | 10 | 9541 | import sys
from sklearn.externals.six.moves import cStringIO as StringIO
import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils import check_random_state
from sklearn.manifold.t_sne import _joint_probabilities
from sklearn.manifold.t_sne import _kl_divergence
from sklearn.manifold.t_sne import _gradient_descent
from sklearn.manifold.t_sne import trustworthiness
from sklearn.manifold.t_sne import TSNE
from sklearn.manifold._utils import _binary_search_perplexity
from scipy.optimize import check_grad
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
def test_gradient_descent_stops():
"""Test stopping conditions of gradient descent."""
class ObjectiveSmallGradient:
def __init__(self):
self.it = -1
def __call__(self, _):
self.it += 1
return (10 - self.it) / 10.0, np.array([1e-5])
def flat_function(_):
return 0.0, np.ones(1)
# Gradient norm
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
_, error, it = _gradient_descent(
ObjectiveSmallGradient(), np.zeros(1), 0, n_iter=100,
n_iter_without_progress=100, momentum=0.0, learning_rate=0.0,
min_gain=0.0, min_grad_norm=1e-5, min_error_diff=0.0, verbose=2)
finally:
out = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = old_stdout
assert_equal(error, 1.0)
assert_equal(it, 0)
assert("gradient norm" in out)
# Error difference
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
_, error, it = _gradient_descent(
ObjectiveSmallGradient(), np.zeros(1), 0, n_iter=100,
n_iter_without_progress=100, momentum=0.0, learning_rate=0.0,
min_gain=0.0, min_grad_norm=0.0, min_error_diff=0.2, verbose=2)
finally:
out = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = old_stdout
assert_equal(error, 0.9)
assert_equal(it, 1)
assert("error difference" in out)
# Maximum number of iterations without improvement
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
_, error, it = _gradient_descent(
flat_function, np.zeros(1), 0, n_iter=100,
n_iter_without_progress=10, momentum=0.0, learning_rate=0.0,
min_gain=0.0, min_grad_norm=0.0, min_error_diff=-1.0, verbose=2)
finally:
out = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = old_stdout
assert_equal(error, 0.0)
assert_equal(it, 11)
assert("did not make any progress" in out)
# Maximum number of iterations
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
_, error, it = _gradient_descent(
ObjectiveSmallGradient(), np.zeros(1), 0, n_iter=11,
n_iter_without_progress=100, momentum=0.0, learning_rate=0.0,
min_gain=0.0, min_grad_norm=0.0, min_error_diff=0.0, verbose=2)
finally:
out = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = old_stdout
assert_equal(error, 0.0)
assert_equal(it, 10)
assert("Iteration 10" in out)
def test_binary_search():
"""Test if the binary search finds Gaussians with desired perplexity."""
random_state = check_random_state(0)
distances = random_state.randn(50, 2)
distances = distances.dot(distances.T)
np.fill_diagonal(distances, 0.0)
desired_perplexity = 25.0
P = _binary_search_perplexity(distances, desired_perplexity, verbose=0)
P = np.maximum(P, np.finfo(np.double).eps)
mean_perplexity = np.mean([np.exp(-np.sum(P[i] * np.log(P[i])))
for i in range(P.shape[0])])
assert_almost_equal(mean_perplexity, desired_perplexity, decimal=3)
def test_gradient():
"""Test gradient of Kullback-Leibler divergence."""
random_state = check_random_state(0)
n_samples = 50
n_features = 2
n_components = 2
alpha = 1.0
distances = random_state.randn(n_samples, n_features)
distances = distances.dot(distances.T)
np.fill_diagonal(distances, 0.0)
X_embedded = random_state.randn(n_samples, n_components)
P = _joint_probabilities(distances, desired_perplexity=25.0,
verbose=0)
fun = lambda params: _kl_divergence(params, P, alpha, n_samples,
n_components)[0]
grad = lambda params: _kl_divergence(params, P, alpha, n_samples,
n_components)[1]
assert_almost_equal(check_grad(fun, grad, X_embedded.ravel()), 0.0,
decimal=5)
def test_trustworthiness():
"""Test trustworthiness score."""
random_state = check_random_state(0)
# Affine transformation
X = random_state.randn(100, 2)
assert_equal(trustworthiness(X, 5.0 + X / 10.0), 1.0)
# Randomly shuffled
X = np.arange(100).reshape(-1, 1)
X_embedded = X.copy()
random_state.shuffle(X_embedded)
assert_less(trustworthiness(X, X_embedded), 0.6)
# Completely different
X = np.arange(5).reshape(-1, 1)
X_embedded = np.array([[0], [2], [4], [1], [3]])
assert_almost_equal(trustworthiness(X, X_embedded, n_neighbors=1), 0.2)
def test_preserve_trustworthiness_approximately():
"""Nearest neighbors should be preserved approximately."""
random_state = check_random_state(0)
X = random_state.randn(100, 2)
for init in ('random', 'pca'):
tsne = TSNE(n_components=2, perplexity=10, learning_rate=100.0,
init=init, random_state=0)
X_embedded = tsne.fit_transform(X)
assert_almost_equal(trustworthiness(X, X_embedded, n_neighbors=1), 1.0,
decimal=1)
def test_fit_csr_matrix():
"""X can be a sparse matrix."""
random_state = check_random_state(0)
X = random_state.randn(100, 2)
X[(np.random.randint(0, 100, 50), np.random.randint(0, 2, 50))] = 0.0
X_csr = sp.csr_matrix(X)
tsne = TSNE(n_components=2, perplexity=10, learning_rate=100.0,
random_state=0)
X_embedded = tsne.fit_transform(X_csr)
assert_almost_equal(trustworthiness(X_csr, X_embedded, n_neighbors=1), 1.0,
decimal=1)
def test_preserve_trustworthiness_approximately_with_precomputed_distances():
"""Nearest neighbors should be preserved approximately."""
random_state = check_random_state(0)
X = random_state.randn(100, 2)
D = squareform(pdist(X), "sqeuclidean")
tsne = TSNE(n_components=2, perplexity=10, learning_rate=100.0,
metric="precomputed", random_state=0)
X_embedded = tsne.fit_transform(D)
assert_almost_equal(trustworthiness(D, X_embedded, n_neighbors=1,
precomputed=True), 1.0, decimal=1)
def test_early_exaggeration_too_small():
"""Early exaggeration factor must be >= 1."""
tsne = TSNE(early_exaggeration=0.99)
assert_raises_regexp(ValueError, "early_exaggeration .*",
tsne.fit_transform, np.array([[0.0]]))
def test_too_few_iterations():
"""Number of gradient descent iterations must be at least 200."""
tsne = TSNE(n_iter=199)
assert_raises_regexp(ValueError, "n_iter .*", tsne.fit_transform,
np.array([[0.0]]))
def test_non_square_precomputed_distances():
"""Precomputed distance matrices must be square matrices."""
tsne = TSNE(metric="precomputed")
assert_raises_regexp(ValueError, ".* square distance matrix",
tsne.fit_transform, np.array([[0.0], [1.0]]))
def test_init_not_available():
"""'init' must be 'pca' or 'random'."""
assert_raises_regexp(ValueError, "'init' must be either 'pca' or 'random'",
TSNE, init="not available")
def test_distance_not_available():
"""'metric' must be valid."""
tsne = TSNE(metric="not available")
assert_raises_regexp(ValueError, "Unknown metric not available.*",
tsne.fit_transform, np.array([[0.0], [1.0]]))
def test_pca_initialization_not_compatible_with_precomputed_kernel():
"""Precomputed distance matrices must be square matrices."""
tsne = TSNE(metric="precomputed", init="pca")
assert_raises_regexp(ValueError, "The parameter init=\"pca\" cannot be "
"used with metric=\"precomputed\".",
tsne.fit_transform, np.array([[0.0], [1.0]]))
def test_verbose():
random_state = check_random_state(0)
tsne = TSNE(verbose=2)
X = random_state.randn(5, 2)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
tsne.fit_transform(X)
finally:
out = sys.stdout.getvalue()
sys.stdout.close()
sys.stdout = old_stdout
assert("[t-SNE]" in out)
assert("Computing pairwise distances" in out)
assert("Computed conditional probabilities" in out)
assert("Mean sigma" in out)
assert("Finished" in out)
assert("early exaggeration" in out)
assert("Finished" in out)
def test_chebyshev_metric():
"""t-SNE should allow metrics that cannot be squared (issue #3526)."""
random_state = check_random_state(0)
tsne = TSNE(verbose=2, metric="chebyshev")
X = random_state.randn(5, 2)
tsne.fit_transform(X)
| bsd-3-clause |
turian/batchtrain | hyperparameters.py | 1 | 5497 | from locals import *
from collections import OrderedDict
import itertools
import sklearn.linear_model
import sklearn.svm
import sklearn.ensemble
import sklearn.neighbors
import sklearn.semi_supervised
import sklearn.naive_bayes
# Code from http://rosettacode.org/wiki/Power_set#Python
def list_powerset2(lst):
return reduce(lambda result, x: result + [subset + [x] for subset in result],
lst, [[]])
def powerset(s):
return frozenset(map(frozenset, list_powerset2(list(s))))
def all_hyperparameters(odict):
hyperparams = list(itertools.product(*odict.values()))
for h in hyperparams:
yield dict(zip(odict.keys(), h))
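# A hedged usage sketch (hypothetical 'grid' variable) showing what all_hyperparameters()
# yields for a small parameter grid:
#   grid = OrderedDict([("alpha", [0.1, 1.0]), ("fit_intercept", [True, False])])
#   list(all_hyperparameters(grid))
#   -> [{'alpha': 0.1, 'fit_intercept': True}, {'alpha': 0.1, 'fit_intercept': False},
#       {'alpha': 1.0, 'fit_intercept': True}, {'alpha': 1.0, 'fit_intercept': False}]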
MODEL_HYPERPARAMETERS = {
"MultinomialNB": OrderedDict({
"alpha": [0.01, 0.032, 0.1, 0.32, 1.0, 10.]
}),
"SGDClassifier": OrderedDict({
"loss": ['hinge', 'log', 'modified_huber'],
"penalty": ['l2', 'l1', 'elasticnet'],
"alpha": [0.001, 0.0001, 0.00001, 0.000001],
"rho": [0.15, 0.30, 0.55, 0.85, 0.95],
# "l1_ratio": [0.05, 0.15, 0.45],
"fit_intercept": [True],
"n_iter": [1, 5, 25, 100],
"shuffle": [True, False],
# "epsilon": [
"learning_rate": ["constant", "optimal", "invscaling"],
"eta0": [0.001, 0.01, 0.1],
"power_t": [0.05, 0.1, 0.25, 0.5, 1.],
"warm_start": [True, False],
}),
"BayesianRidge": OrderedDict({
"n_iter": [100, 300, 1000],
"tol": [1e-2, 1e-3, 1e-4],
"alpha_1": [1e-5, 1e-6, 1e-7],
"alpha_2": [1e-5, 1e-6, 1e-7],
"lambda_1": [1e-5, 1e-6, 1e-7],
"lambda_2": [1e-5, 1e-6, 1e-7],
"normalize": [True, False],
}),
"Perceptron": OrderedDict({
"penalty": ["l2", "l1", "elasticnet"],
"alpha": [1e-2, 1e-3, 1e-4, 1e-5, 1e-6],
"n_iter": [1, 5, 25],
"shuffle": [True, False],
"eta0": [0.1, 1., 10.],
"warm_start": [True, False],
}),
"SVC": OrderedDict({
"C": [0.1, 1, 10, 100],
"kernel": ["rbf", "sigmoid", "linear", "poly"],
"degree": [1,2,3,4,5],
"gamma": [1e-3, 1e-5, 0.],
"probability": [False, True],
"cache_size": [CACHESIZE],
"shrinking": [False, True],
}),
"SVR": OrderedDict({
"C": [0.1, 1, 10, 100],
"epsilon": [0.001, 0.01, 0.1, 1.0],
"kernel": ["rbf", "sigmoid", "linear", "poly"],
"degree": [1,2,3,4,5],
"gamma": [1e-3, 1e-5, 0.],
"cache_size": [CACHESIZE],
"shrinking": [False, True],
}),
"GradientBoostingClassifier": OrderedDict({
'loss': ['deviance'],
#'learn_rate': [1., 0.1, 0.01],
'learn_rate': [1., 0.1],
#'n_estimators': [10, 32, 100, 320],
'n_estimators': [10, 32, 100],
'max_depth': [1, 3, None],
'min_samples_split': [1, 3],
'min_samples_leaf': [1, 3],
#'subsample': [0.032, 0.1, 0.32, 1],
'subsample': [0.1, 0.32, 1],
# 'alpha': [0.5, 0.9],
}),
"GradientBoostingRegressor": OrderedDict({
'loss': ['ls', 'lad', 'huber', 'quantile'],
'learn_rate': [1., 0.1, 0.01],
'n_estimators': [10, 32, 100, 320],
'max_depth': [1, 3, None],
'min_samples_split': [1, 3],
'min_samples_leaf': [1, 3],
'subsample': [0.032, 0.1, 0.32, 1],
}),
"RandomForestClassifier": OrderedDict({
'n_estimators': [10, 32, 100, 320],
'criterion': ['gini', 'entropy'],
'max_depth': [1, 3, None],
'min_samples_split': [1, 3],
'min_samples_leaf': [1, 3],
'min_density': [0.032, 0.1, 0.32],
'max_features': ["sqrt", "log2", None],
# 'bootstrap': [True, False],
'bootstrap': [True],
'oob_score': [True, False],
# 'verbose': [True],
}),
"RandomForestRegressor": OrderedDict({
'n_estimators': [10, 32, 100, 320],
'max_depth': [1, 3, None],
'min_samples_split': [1, 3],
'min_samples_leaf': [1, 3],
'min_density': [0.032, 0.1, 0.32],
'max_features': ["sqrt", "log2", None],
# 'bootstrap': [True, False],
'bootstrap': [True],
'oob_score': [True, False],
# 'verbose': [True],
}),
"KNeighborsClassifier": OrderedDict({
'n_neighbors': [3, 5, 7],
'weights': ['uniform', 'distance'],
'algorithm': ['ball_tree', 'kd_tree', 'brute'],
'leaf_size': [10, 30, 100],
'p': [1, 2],
}),
"LabelSpreading": OrderedDict({
'kernel': ['knn', 'rbf'],
'gamma': [10, 20, 100, 200],
'n_neighbors': [3, 5, 7, 9],
'alpha': [0, 0.02, 0.2, 1.0],
'max_iters': [3, 10, 30, 100],
'tol': [1e-5, 1e-3, 1e-1, 1.],
})
}
MODEL_NAME_TO_CLASS = {
"MultinomialNB": sklearn.naive_bayes.MultinomialNB,
"SGDClassifier": sklearn.linear_model.SGDClassifier,
"BayesianRidge": sklearn.linear_model.BayesianRidge,
"Perceptron": sklearn.linear_model.Perceptron,
"SVC": sklearn.svm.SVC,
"SVR": sklearn.svm.SVR,
"GradientBoostingClassifier": sklearn.ensemble.GradientBoostingClassifier,
"GradientBoostingRegressor": sklearn.ensemble.GradientBoostingRegressor,
"RandomForestClassifier": sklearn.ensemble.RandomForestClassifier,
"RandomForestRegressor": sklearn.ensemble.RandomForestRegressor,
"KNeighborsClassifier": sklearn.neighbors.KNeighborsClassifier,
"LabelSpreading": sklearn.semi_supervised.LabelSpreading,
}
| bsd-3-clause |
rcfduarte/nmsat | projects/examples/scripts/single_neuron_dcinput.py | 1 | 7658 | __author__ = 'duarte'
from modules.parameters import ParameterSet, ParameterSpace, extract_nestvalid_dict
from modules.input_architect import EncodingLayer
from modules.net_architect import Network
from modules.io import set_storage_locations
from modules.signals import iterate_obj_list
from modules.analysis import single_neuron_dcresponse
import cPickle as pickle
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as pl
import nest
# ######################################################################################################################
# Experiment options
# ======================================================================================================================
plot = True
display = True
save = True
# ######################################################################################################################
# Extract parameters from file and build global ParameterSet
# ======================================================================================================================
params_file = '../parameters/single_neuron_fI.py'
parameter_set = ParameterSpace(params_file)[0]
parameter_set = parameter_set.clean(termination='pars')
if not isinstance(parameter_set, ParameterSet):
if isinstance(parameter_set, basestring) or isinstance(parameter_set, dict):
parameter_set = ParameterSet(parameter_set)
else:
raise TypeError("parameter_set must be ParameterSet, string with full path to parameter file or dictionary")
# ######################################################################################################################
# Setup extra variables and parameters
# ======================================================================================================================
if plot:
import modules.visualization as vis
vis.set_global_rcParams(parameter_set.kernel_pars['mpl_path'])
paths = set_storage_locations(parameter_set, save)
np.random.seed(parameter_set.kernel_pars['np_seed'])
results = dict()
# ######################################################################################################################
# Set kernel and simulation parameters
# ======================================================================================================================
print('\nRunning ParameterSet {0}'.format(parameter_set.label))
nest.ResetKernel()
nest.set_verbosity('M_WARNING')
nest.SetKernelStatus(extract_nestvalid_dict(parameter_set.kernel_pars.as_dict(), param_type='kernel'))
# ######################################################################################################################
# Build network
# ======================================================================================================================
net = Network(parameter_set.net_pars)
# ######################################################################################################################
# Randomize initial variable values
# ======================================================================================================================
for idx, n in enumerate(list(iterate_obj_list(net.populations))):
if hasattr(parameter_set.net_pars, "randomize_neuron_pars"):
randomize = parameter_set.net_pars.randomize_neuron_pars[idx]
for k, v in randomize.items():
n.randomize_initial_states(k, randomization_function=v[0], **v[1])
# ######################################################################################################################
# Build and connect input
# ======================================================================================================================
enc_layer = EncodingLayer(parameter_set.encoding_pars)
enc_layer.connect(parameter_set.encoding_pars, net)
# ######################################################################################################################
# Set-up Analysis
# ======================================================================================================================
net.connect_devices()
# ######################################################################################################################
# Simulate
# ======================================================================================================================
if parameter_set.kernel_pars.transient_t:
net.simulate(parameter_set.kernel_pars.transient_t)
net.flush_records()
net.simulate(parameter_set.kernel_pars.sim_time + nest.GetKernelStatus()['resolution'])
# ######################################################################################################################
# Extract and store data
# ======================================================================================================================
net.extract_population_activity(t_start=parameter_set.kernel_pars.transient_t + nest.GetKernelStatus()['resolution'],
t_stop=parameter_set.kernel_pars.sim_time + parameter_set.kernel_pars.transient_t)
net.extract_network_activity()
net.flush_records()
# ######################################################################################################################
# Analyse / plot data
# ======================================================================================================================
analysis_interval = [parameter_set.kernel_pars.transient_t + nest.GetKernelStatus()['resolution'],
parameter_set.kernel_pars.sim_time + parameter_set.kernel_pars.transient_t]
for idd, nam in enumerate(net.population_names):
results.update({nam: {}})
results[nam] = single_neuron_dcresponse(net.populations[idd],
parameter_set, start=analysis_interval[0],
stop=analysis_interval[1], plot=plot,
display=display, save=paths['figures'] + paths['label'])
idx = np.min(np.where(results[nam]['output_rate']))
print("Rate range for neuron {0} = [{1}, {2}] Hz".format(
str(nam), str(np.min(results[nam]['output_rate'][results[nam]['output_rate'] > 0.])),
str(np.max(results[nam]['output_rate'][results[nam]['output_rate'] > 0.]))))
results[nam].update({'min_rate': np.min(results[nam]['output_rate'][results[nam]['output_rate'] > 0.]),
'max_rate': np.max(results[nam]['output_rate'][results[nam]['output_rate'] > 0.])})
print("Rheobase Current for neuron {0} in [{1}, {2}]".format(
str(nam), str(results[nam]['input_amplitudes'][idx - 1]), str(results[nam]['input_amplitudes'][idx])))
x = np.array(results[nam]['input_amplitudes'])
y = np.array(results[nam]['output_rate'])
iddxs = np.where(y)
slope, intercept, r_value, p_value, std_err = stats.linregress(x[iddxs], y[iddxs])
print("fI Slope for neuron {0} = {1} Hz/nA [linreg method]".format(nam, str(slope * 1000.)))
results[nam].update({'fI_slope': slope * 1000., 'I_rh': [results[nam]['input_amplitudes'][idx - 1],
results[nam]['input_amplitudes'][idx]]})
# ######################################################################################################################
# Save data
# ======================================================================================================================
if save:
with open(paths['results'] + 'Results_' + parameter_set.label, 'w') as f:
pickle.dump(results, f)
parameter_set.save(paths['parameters'] + 'Parameters_' + parameter_set.label)
| gpl-2.0 |
kgullikson88/General | Feiden.py | 2 | 4640 | from __future__ import division, print_function
import os
import os.path
import pickle
import numpy as np
from pkg_resources import resource_filename
from scipy.interpolate import LinearNDInterpolator as interpnd
try:
import pandas as pd
except ImportError:
pd = None
from isochrones.isochrone import Isochrone
DATADIR = os.getenv('ISOCHRONES',
os.path.expanduser(os.path.join('~', '.isochrones')))
if not os.path.exists(DATADIR):
os.mkdir(DATADIR)
MASTERFILE = '{}/Feiden.h5'.format(DATADIR)
TRI_FILE = '{}/Feiden.tri'.format(DATADIR)
MAXAGES = np.load(resource_filename('isochrones', 'data/dartmouth_maxages.npz'))
MAXAGE = interpnd(MAXAGES['points'], MAXAGES['maxages'])
# def _download_h5():
# """
# Downloads HDF5 file containing Dartmouth grids from Zenodo.
# """
# #url = 'http://zenodo.org/record/12800/files/dartmouth.h5'
# url = 'http://zenodo.org/record/15843/files/dartmouth.h5'
# from six.moves import urllib
# print('Downloading Dartmouth stellar model data (should happen only once)...')
# if os.path.exists(MASTERFILE):
# os.remove(MASTERFILE)
# urllib.request.urlretrieve(url,MASTERFILE)
#def _download_tri():
# """
# Downloads pre-computed triangulation for Dartmouth grids from Zenodo.
# """
# #url = 'http://zenodo.org/record/12800/files/dartmouth.tri'
# #url = 'http://zenodo.org/record/15843/files/dartmouth.tri'
# url = 'http://zenodo.org/record/17627/files/dartmouth.tri'
# from six.moves import urllib
# print('Downloading Dartmouth isochrone pre-computed triangulation (should happen only once...)')
# if os.path.exists(TRI_FILE):
# os.remove(TRI_FILE)
# urllib.request.urlretrieve(url,TRI_FILE)
#if not os.path.exists(MASTERFILE):
# _download_h5()
#if not os.path.exists(TRI_FILE):
# _download_tri()
#Check to see if you have the right dataframe and tri file
#import hashlib
#DF_SHASUM = '0515e83521f03cfe3ab8bafcb9c8187a90fd50c7'
#TRI_SHASUM = 'e05a06c799abae3d526ac83ceeea5e6df691a16d'
#if hashlib.sha1(open(MASTERFILE, 'rb').read()).hexdigest() != DF_SHASUM:
# raise ImportError('You have a wrong/corrupted/outdated Dartmouth DataFrame!' +
# ' Delete {} and try re-importing to download afresh.'.format(MASTERFILE))
#if hashlib.sha1(open(TRI_FILE, 'rb').read()).hexdigest() != TRI_SHASUM:
# raise ImportError('You have a wrong/corrupted/outdated Dartmouth triangulation!' +
# ' Delete {} and try re-importing to download afresh.'.format(TRI_FILE))
#
if pd is not None:
MASTERDF = pd.read_hdf(MASTERFILE, 'df').dropna() #temporary hack
else:
MASTERDF = None
class Feiden_Isochrone(Isochrone):
"""Dotter (2008) Stellar Models, at solar a/Fe and He abundances.
:param bands: (optional)
List of desired photometric bands. Must be a subset of
``['U','B','V','R','I','J','H','K','g','r','i','z','Kepler','D51',
'W1','W2','W3']``, which is the default. W4 is not included
because it does not have a well-measured A(lambda)/A(V).
"""
def __init__(self, bands=None, **kwargs):
df = MASTERDF
log_ages = np.log10(df['Age'])
minage = log_ages.min()
maxage = log_ages.max()
# make copies that claim to have different metallicities. This is a lie, but makes things work.
lowmet = df.copy()
lowmet['feh'] = -0.1
highmet = df.copy()
highmet['feh'] = 0.1
df = pd.concat((df, lowmet, highmet))
mags = {}
if bands is not None:
for band in bands:
try:
if band in ['g', 'r', 'i', 'z']:
mags[band] = df['sdss_{}'.format(band)]
else:
mags[band] = df[band]
except:
if band == 'kep' or band == 'Kepler':
mags[band] = df['Kp']
elif band == 'K':
mags['K'] = df['Ks']
else:
raise
tri = None
try:
f = open(TRI_FILE, 'rb')
tri = pickle.load(f)
except:
f = open(TRI_FILE, 'rb')
tri = pickle.load(f, encoding='latin-1')
finally:
f.close()
Isochrone.__init__(self, m_ini=df['Msun'], age=np.log10(df['Age']),
feh=df['feh'], m_act=df['Msun'], logL=df['logL'],
Teff=10 ** df['logT'], logg=df['logg'], mags=mags,
tri=tri, minage=minage, maxage=maxage, **kwargs)
| gpl-3.0 |
cmcantalupo/geopm | integration/experiment/power_sweep/gen_plot_power_limit.py | 1 | 5090 | #!/usr/bin/env python
#
# Copyright (c) 2015 - 2021, Intel Corporation
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# * Neither the name of Intel Corporation nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
'''
Shows balancer chosen power limits on each socket over time.
'''
import pandas
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import sys
import os
import argparse
from experiment import common_args
from experiment import plotting
def plot_lines(traces, label, analysis_dir):
if not os.path.exists(analysis_dir):
os.mkdir(analysis_dir)
fig, axs = plt.subplots(2)
fig.set_size_inches((20, 10))
num_traces = len(traces)
colormap = cm.jet
colors = [colormap(i) for i in np.linspace(0, 1, num_traces*2)]
idx = 0
for path in traces:
node_name = path.split('-')[-1]
df = pandas.read_csv(path, delimiter='|', comment='#')
time = df['TIME']
pl0 = df['MSR::PKG_POWER_LIMIT:PL1_POWER_LIMIT-package-0']
pl1 = df['MSR::PKG_POWER_LIMIT:PL1_POWER_LIMIT-package-1']
rt0 = df['EPOCH_RUNTIME-package-0'] - df['EPOCH_RUNTIME_NETWORK-package-0']
rt1 = df['EPOCH_RUNTIME-package-1'] - df['EPOCH_RUNTIME_NETWORK-package-1']
plot_tgt = False
try:
tgt = df['POLICY_MAX_EPOCH_RUNTIME']
plot_tgt = True
except:
sys.stdout.write('POLICY_MAX_EPOCH_RUNTIME missing from trace {}; data will be omitted from plot.\n'.format(path))
color0 = colors[idx]
color1 = colors[idx + 1]
idx += 2
axs[0].plot(time, pl0, color=color0)
axs[0].plot(time, pl1, color=color1)
axs[1].plot(time, rt0, label='pkg-0-{}'.format(node_name), color=color0)
axs[1].plot(time, rt1, label='pkg-1-{}'.format(node_name), color=color1)
axs[0].set_title('Per socket power limits')
axs[0].set_ylabel('Power (w)')
axs[1].set_title('Per socket runtimes and target')
axs[1].set_xlabel('Time (s)')
axs[1].set_ylabel('Epoch duration (s)')
if plot_tgt:
# draw target once on top of other lines
axs[1].plot(time, tgt, label='target')
fig.legend(loc='lower right')
agent = ' '.join(traces[0].split('_')[1:3]).title()
fig.suptitle('{} - {}'.format(label, agent), fontsize=20)
dirname = os.path.dirname(traces[0])
if len(traces) == 1:
plot_name = traces[0].split('.')[0] # gadget_power_governor_330_0.trace-epb001
plot_name += '_' + traces[0].split('-')[1]
else:
plot_name = '_'.join(traces[0].split('_')[0:3]) # gadget_power_governor
outfile = os.path.join(analysis_dir, plot_name + '_power_and_runtime.png')
sys.stdout.write('Writing {}...\n'.format(outfile))
fig.savefig(outfile)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
common_args.add_output_dir(parser)
common_args.add_label(parser)
common_args.add_analysis_dir(parser)
# Positional arg for gathering all traces into a list
# Works for files listed explicitly, or with a glob pattern e.g. *trace*
parser.add_argument('tracepath', metavar='TRACE_PATH', nargs='+',
action='store',
help='path or glob pattern for trace files to analyze')
args, _ = parser.parse_known_args()
# see if paths are valid
for path in args.tracepath:
lp = os.path.join(args.output_dir, path)
if not (os.path.isfile(lp) and os.path.getsize(lp) > 0):
sys.stderr.write('<geopm> Error: No trace data found in {}\n'.format(lp))
sys.exit(1)
plot_lines(args.tracepath, args.label, args.analysis_dir)
| bsd-3-clause |
eramirem/astroML | book_figures/chapter8/fig_huber_loss.py | 3 | 2933 | """
Huber Loss Function
-------------------
Figure 8.8
An example of fitting a simple linear model to data which includes outliers
(data is from table 1 of Hogg et al 2010). A comparison of linear regression
using the squared-loss function (equivalent to ordinary least-squares
regression) and the Huber loss function, with c = 1 (i.e., beyond 1 standard
deviation, the loss becomes linear).
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from __future__ import print_function, division
import numpy as np
from matplotlib import pyplot as plt
from scipy import optimize
from astroML.datasets import fetch_hogg2010test
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Get data: this includes outliers
data = fetch_hogg2010test()
x = data['x']
y = data['y']
dy = data['sigma_y']
# Define the standard squared-loss function
def squared_loss(m, b, x, y, dy):
y_fit = m * x + b
return np.sum(((y - y_fit) / dy) ** 2, -1)
# Define the log-likelihood via the Huber loss function
def huber_loss(m, b, x, y, dy, c=2):
y_fit = m * x + b
t = abs((y - y_fit) / dy)
flag = t > c
return np.sum((~flag) * (0.5 * t ** 2) - (flag) * c * (0.5 * c - t), -1)
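# For |t| <= c the Huber loss is quadratic (0.5 * t ** 2); beyond c it grows
# only linearly, as c * (|t| - 0.5 * c), which limits the influence of
# outliers compared to the pure squared loss above.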
f_squared = lambda beta: squared_loss(beta[0], beta[1], x=x, y=y, dy=dy)
f_huber = lambda beta: huber_loss(beta[0], beta[1], x=x, y=y, dy=dy, c=1)
#------------------------------------------------------------
# compute the maximum likelihood fits using the squared and Huber losses
beta0 = (2, 30)
beta_squared = optimize.fmin(f_squared, beta0)
beta_huber = optimize.fmin(f_huber, beta0)
print(beta_squared)
print(beta_huber)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
x_fit = np.linspace(0, 350, 10)
ax.plot(x_fit, beta_squared[0] * x_fit + beta_squared[1], '--k',
label="squared loss:\n $y=%.2fx + %.1f$" % tuple(beta_squared))
ax.plot(x_fit, beta_huber[0] * x_fit + beta_huber[1], '-k',
label="Huber loss:\n $y=%.2fx + %.1f$" % tuple(beta_huber))
ax.legend(loc=4)
ax.errorbar(x, y, dy, fmt='.k', lw=1, ecolor='gray')
ax.set_xlim(0, 350)
ax.set_ylim(100, 700)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.show()
| bsd-2-clause |
cbertinato/pandas | pandas/tests/io/test_packers.py | 1 | 33090 | import datetime
from distutils.version import LooseVersion
import glob
from io import BytesIO
import os
from warnings import catch_warnings
import numpy as np
import pytest
from pandas._libs.tslib import iNaT
from pandas.errors import PerformanceWarning
import pandas
from pandas import (
Categorical, DataFrame, Index, Interval, MultiIndex, NaT, Period, Series,
Timestamp, bdate_range, date_range, period_range)
import pandas.util.testing as tm
from pandas.util.testing import (
assert_categorical_equal, assert_frame_equal, assert_index_equal,
assert_series_equal, ensure_clean)
from pandas.io.packers import read_msgpack, to_msgpack
nan = np.nan
try:
import blosc # NOQA
except ImportError:
_BLOSC_INSTALLED = False
else:
_BLOSC_INSTALLED = True
try:
import zlib # NOQA
except ImportError:
_ZLIB_INSTALLED = False
else:
_ZLIB_INSTALLED = True
@pytest.fixture(scope='module')
def current_packers_data():
# our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_msgpack_data)
return create_msgpack_data()
@pytest.fixture(scope='module')
def all_packers_data():
    # all of our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_data)
return create_data()
def check_arbitrary(a, b):
if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
assert(len(a) == len(b))
for a_, b_ in zip(a, b):
check_arbitrary(a_, b_)
elif isinstance(a, DataFrame):
assert_frame_equal(a, b)
elif isinstance(a, Series):
assert_series_equal(a, b)
elif isinstance(a, Index):
assert_index_equal(a, b)
elif isinstance(a, Categorical):
        # Temporary workaround:
        # Categorical.categories is changed from str to bytes in PY3
        # maybe the same issue as GH 13591
if b.categories.inferred_type == 'string':
pass
else:
tm.assert_categorical_equal(a, b)
elif a is NaT:
assert b is NaT
elif isinstance(a, Timestamp):
assert a == b
assert a.freq == b.freq
else:
assert(a == b)
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestPackers:
def setup_method(self, method):
self.path = '__%s__.msg' % tm.rands(10)
def teardown_method(self, method):
pass
def encode_decode(self, x, compress=None, **kwargs):
with ensure_clean(self.path) as p:
to_msgpack(p, x, compress=compress, **kwargs)
return read_msgpack(p, **kwargs)
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestAPI(TestPackers):
def test_string_io(self):
df = DataFrame(np.random.randn(10, 2))
s = df.to_msgpack(None)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(BytesIO(s))
tm.assert_frame_equal(result, df)
s = to_msgpack(None, df)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
with ensure_clean(self.path) as p:
s = df.to_msgpack()
with open(p, 'wb') as fh:
fh.write(s)
result = read_msgpack(p)
tm.assert_frame_equal(result, df)
def test_path_pathlib(self):
df = tm.makeDataFrame()
result = tm.round_trip_pathlib(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
def test_path_localpath(self):
df = tm.makeDataFrame()
result = tm.round_trip_localpath(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
def test_iterator_with_string_io(self):
dfs = [DataFrame(np.random.randn(10, 2)) for i in range(5)]
s = to_msgpack(None, *dfs)
for i, result in enumerate(read_msgpack(s, iterator=True)):
tm.assert_frame_equal(result, dfs[i])
def test_invalid_arg(self):
# GH10369
class A:
def __init__(self):
self.read = 0
msg = "Invalid file path or buffer object type: <class '{}'>"
with pytest.raises(ValueError, match=msg.format('NoneType')):
read_msgpack(path_or_buf=None)
with pytest.raises(ValueError, match=msg.format('dict')):
read_msgpack(path_or_buf={})
with pytest.raises(ValueError, match=msg.format(r'.*\.A')):
read_msgpack(path_or_buf=A())
class TestNumpy(TestPackers):
def test_numpy_scalar_float(self):
x = np.float32(np.random.rand())
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_scalar_complex(self):
x = np.complex64(np.random.rand() + 1j * np.random.rand())
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_scalar_float(self):
x = np.random.rand()
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_scalar_bool(self):
x = np.bool_(1)
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
x = np.bool_(0)
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_scalar_complex(self):
x = np.random.rand() + 1j * np.random.rand()
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_numpy_float(self):
x = [np.float32(np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_numpy_float_complex(self):
if not hasattr(np, 'complex128'):
pytest.skip('numpy can not handle complex128')
x = [np.float32(np.random.rand()) for i in range(5)] + \
[np.complex128(np.random.rand() + 1j * np.random.rand())
for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_float(self):
x = [np.random.rand() for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_float_complex(self):
x = [np.random.rand() for i in range(5)] + \
[(np.random.rand() + 1j * np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_dict_float(self):
x = {'foo': 1.0, 'bar': 2.0}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_complex(self):
x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="complex value")
def test_dict_numpy_float(self):
x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_numpy_complex(self):
x = {'foo': np.complex128(1.0 + 1.0j),
'bar': np.complex128(2.0 + 2.0j)}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="numpy complex128")
def test_numpy_array_float(self):
# run multiple times
for n in range(10):
x = np.random.rand(10)
for dtype in ['float32', 'float64']:
x = x.astype(dtype)
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_array_complex(self):
x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
x_rec = self.encode_decode(x)
assert (all(map(lambda x, y: x == y, x, x_rec)) and
x.dtype == x_rec.dtype)
def test_list_mixed(self):
x = [1.0, np.float32(3.5), np.complex128(4.25), 'foo', np.bool_(1)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
class TestBasic(TestPackers):
def test_timestamp(self):
for i in [Timestamp(
'20130101'), Timestamp('20130101', tz='US/Eastern'),
Timestamp('201301010501')]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_nat(self):
nat_rec = self.encode_decode(NaT)
assert NaT is nat_rec
def test_datetimes(self):
for i in [datetime.datetime(2013, 1, 1),
datetime.datetime(2013, 1, 1, 5, 1),
datetime.date(2013, 1, 1),
np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_timedeltas(self):
for i in [datetime.timedelta(days=1),
datetime.timedelta(days=1, seconds=10),
np.timedelta64(1000000)]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_periods(self):
# 13463
for i in [Period('2010-09', 'M'), Period('2014-Q1', 'Q')]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_intervals(self):
# 19967
for i in [Interval(0, 1), Interval(0, 1, 'left'),
Interval(10, 25., 'right')]:
i_rec = self.encode_decode(i)
assert i == i_rec
class TestIndex(TestPackers):
def setup_method(self, method):
super().setup_method(method)
self.d = {
'string': tm.makeStringIndex(100),
'date': tm.makeDateIndex(100),
'int': tm.makeIntIndex(100),
'rng': tm.makeRangeIndex(100),
'float': tm.makeFloatIndex(100),
'empty': Index([]),
'tuple': Index(zip(['foo', 'bar', 'baz'], [1, 2, 3])),
'period': Index(period_range('2012-1-1', freq='M', periods=3)),
'date2': Index(date_range('2013-01-1', periods=10)),
'bdate': Index(bdate_range('2013-01-02', periods=10)),
'cat': tm.makeCategoricalIndex(100),
'interval': tm.makeIntervalIndex(100),
'timedelta': tm.makeTimedeltaIndex(100, 'H')
}
self.mi = {
'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'),
('foo', 'two'),
('qux', 'one'), ('qux', 'two')],
names=['first', 'second']),
}
def test_basic_index(self):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with no freq (GH5506)
i = Index([Timestamp('20130101'), Timestamp('20130103')])
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with timezone
i = Index([Timestamp('20130101 9:00:00'), Timestamp(
'20130103 11:00:00')]).tz_localize('US/Eastern')
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_multi_index(self):
for s, i in self.mi.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_unicode(self):
i = tm.makeUnicodeIndex(100)
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
    def test_categorical_index(self):
# GH15487
df = DataFrame(np.random.randn(10, 2))
df = df.astype({0: 'category'}).set_index(0)
result = self.encode_decode(df)
tm.assert_frame_equal(result, df)
class TestSeries(TestPackers):
def setup_method(self, method):
super().setup_method(method)
self.d = {}
s = tm.makeStringSeries()
s.name = 'string'
self.d['string'] = s
s = tm.makeObjectSeries()
s.name = 'object'
self.d['object'] = s
s = Series(iNaT, dtype='M8[ns]', index=range(5))
self.d['date'] = s
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 2 +
[Timestamp('20130603', tz='CET')] * 3,
'G': [Timestamp('20130102', tz='US/Eastern')] * 5,
'H': Categorical([1, 2, 3, 4, 5]),
'I': Categorical([1, 2, 3, 4, 5], ordered=True),
'J': (np.bool_(1), 2, 3, 4, 5),
}
self.d['float'] = Series(data['A'])
self.d['int'] = Series(data['B'])
self.d['mixed'] = Series(data['E'])
self.d['dt_tz_mixed'] = Series(data['F'])
self.d['dt_tz'] = Series(data['G'])
self.d['cat_ordered'] = Series(data['H'])
self.d['cat_unordered'] = Series(data['I'])
self.d['numpy_bool_mixed'] = Series(data['J'])
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_series_equal(i, i_rec)
class TestCategorical(TestPackers):
def setup_method(self, method):
super().setup_method(method)
self.d = {}
self.d['plain_str'] = Categorical(['a', 'b', 'c', 'd', 'e'])
self.d['plain_str_ordered'] = Categorical(['a', 'b', 'c', 'd', 'e'],
ordered=True)
self.d['plain_int'] = Categorical([5, 6, 7, 8])
self.d['plain_int_ordered'] = Categorical([5, 6, 7, 8], ordered=True)
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_categorical_equal(i, i_rec)
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestNDFrame(TestPackers):
def setup_method(self, method):
super().setup_method(method)
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 5,
'G': [Timestamp('20130603', tz='CET')] * 5,
'H': Categorical(['a', 'b', 'c', 'd', 'e']),
'I': Categorical(['a', 'b', 'c', 'd', 'e'], ordered=True),
}
self.frame = {
'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)),
'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)),
'mixed': DataFrame(data)}
def test_basic_frame(self):
for s, i in self.frame.items():
i_rec = self.encode_decode(i)
assert_frame_equal(i, i_rec)
def test_multi(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
packed_items = tuple([self.frame['float'], self.frame['float'].A,
self.frame['float'].B, None])
l_rec = self.encode_decode(packed_items)
check_arbitrary(packed_items, l_rec)
# this is an oddity in that packed lists will be returned as tuples
packed_items = [self.frame['float'], self.frame['float'].A,
self.frame['float'].B, None]
l_rec = self.encode_decode(packed_items)
assert isinstance(l_rec, tuple)
check_arbitrary(packed_items, l_rec)
def test_iterator(self):
packed_items = [self.frame['float'], self.frame['float'].A,
self.frame['float'].B, None]
with ensure_clean(self.path) as path:
to_msgpack(path, *packed_items)
for i, packed in enumerate(read_msgpack(path, iterator=True)):
check_arbitrary(packed, packed_items[i])
def tests_datetimeindex_freq_issue(self):
# GH 5947
# inferring freq on the datetimeindex
df = DataFrame([1, 2, 3], index=date_range('1/1/2013', '1/3/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
df = DataFrame([1, 2], index=date_range('1/1/2013', '1/2/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
def test_dataframe_duplicate_column_names(self):
# GH 9618
expected_1 = DataFrame(columns=['a', 'a'])
expected_2 = DataFrame(columns=[1] * 100)
expected_2.loc[0] = np.random.randn(100)
expected_3 = DataFrame(columns=[1, 1])
expected_3.loc[0] = ['abc', np.nan]
result_1 = self.encode_decode(expected_1)
result_2 = self.encode_decode(expected_2)
result_3 = self.encode_decode(expected_3)
assert_frame_equal(result_1, expected_1)
assert_frame_equal(result_2, expected_2)
assert_frame_equal(result_3, expected_3)
@pytest.mark.filterwarnings("ignore:Sparse:FutureWarning")
@pytest.mark.filterwarnings("ignore:Series.to_sparse:FutureWarning")
@pytest.mark.filterwarnings("ignore:DataFrame.to_sparse:FutureWarning")
class TestSparse(TestPackers):
def _check_roundtrip(self, obj, comparator, **kwargs):
        # currently these are not implemented
# i_rec = self.encode_decode(obj)
# comparator(obj, i_rec, **kwargs)
msg = r"msgpack sparse (series|frame) is not implemented"
with pytest.raises(NotImplementedError, match=msg):
self.encode_decode(obj)
def test_sparse_series(self):
s = tm.makeStringSeries()
s[3:5] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_series_equal,
check_series_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_series_equal,
check_series_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_series_equal,
check_series_type=True)
def test_sparse_frame(self):
s = tm.makeDataFrame()
s.loc[3:5, 1:3] = np.nan
s.loc[8:10, -2] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_frame_equal,
check_frame_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_frame_equal,
check_frame_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_frame_equal,
check_frame_type=True)
class TestCompression(TestPackers):
"""See https://github.com/pandas-dev/pandas/pull/9783
"""
def setup_method(self, method):
try:
from sqlalchemy import create_engine
self._create_sql_engine = create_engine
except ImportError:
self._SQLALCHEMY_INSTALLED = False
else:
self._SQLALCHEMY_INSTALLED = True
super().setup_method(method)
data = {
'A': np.arange(1000, dtype=np.float64),
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
}
self.frame = {
'float': DataFrame({k: data[k] for k in ['A', 'A']}),
'int': DataFrame({k: data[k] for k in ['B', 'B']}),
'mixed': DataFrame(data),
}
def test_plain(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
def _test_compression(self, compress):
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames
for block in value._data.blocks:
assert block.values.flags.writeable
def test_compression_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression('zlib')
def test_compression_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression('blosc')
def _test_compression_warns_when_decompress_caches(
self, monkeypatch, compress):
not_garbage = []
control = [] # copied data
compress_module = globals()[compress]
real_decompress = compress_module.decompress
def decompress(ob):
"""mock decompress function that delegates to the real
decompress but caches the result and a copy of the result.
"""
res = real_decompress(ob)
not_garbage.append(res) # hold a reference to this bytes object
control.append(bytearray(res)) # copy the data here to check later
return res
# types mapped to values to add in place.
rhs = {
np.dtype('float64'): 1.0,
np.dtype('int32'): 1,
np.dtype('object'): 'a',
np.dtype('datetime64[ns]'): np.timedelta64(1, 'ns'),
np.dtype('timedelta64[ns]'): np.timedelta64(1, 'ns'),
}
with monkeypatch.context() as m, \
tm.assert_produces_warning(PerformanceWarning) as ws:
m.setattr(compress_module, 'decompress', decompress)
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames even though
# we needed to copy the data
for block in value._data.blocks:
assert block.values.flags.writeable
# mutate the data in some way
block.values[0] += rhs[block.dtype]
for w in ws:
# check the messages from our warnings
assert str(w.message) == ('copying data after decompressing; '
'this may mean that decompress is '
'caching its result')
for buf, control_buf in zip(not_garbage, control):
# make sure none of our mutations above affected the
# original buffers
assert buf == control_buf
def test_compression_warns_when_decompress_caches_zlib(self, monkeypatch):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression_warns_when_decompress_caches(
monkeypatch, 'zlib')
def test_compression_warns_when_decompress_caches_blosc(self, monkeypatch):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression_warns_when_decompress_caches(
monkeypatch, 'blosc')
def _test_small_strings_no_warn(self, compress):
empty = np.array([], dtype='uint8')
with tm.assert_produces_warning(None):
empty_unpacked = self.encode_decode(empty, compress=compress)
tm.assert_numpy_array_equal(empty_unpacked, empty)
assert empty_unpacked.flags.writeable
char = np.array([ord(b'a')], dtype='uint8')
with tm.assert_produces_warning(None):
char_unpacked = self.encode_decode(char, compress=compress)
tm.assert_numpy_array_equal(char_unpacked, char)
assert char_unpacked.flags.writeable
# if this test fails I am sorry because the interpreter is now in a
# bad state where b'a' points to 98 == ord(b'b').
char_unpacked[0] = ord(b'b')
        # we compare the ord of bytes b'a' with unicode 'a' because they should
        # always be the same (unless we were able to mutate the shared
        # character singleton, in which case ord(b'a') == ord(b'b')).
assert ord(b'a') == ord('a')
tm.assert_numpy_array_equal(
char_unpacked,
np.array([ord(b'b')], dtype='uint8'),
)
def test_small_strings_no_warn_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_small_strings_no_warn('zlib')
def test_small_strings_no_warn_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_small_strings_no_warn('blosc')
def test_readonly_axis_blosc(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='blosc')
assert 1. in self.encode_decode(df2['A'], compress='blosc')
def test_readonly_axis_zlib(self):
# GH11880
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='zlib')
assert 1. in self.encode_decode(df2['A'], compress='zlib')
def test_readonly_axis_blosc_to_sql(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='blosc')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
def test_readonly_axis_zlib_to_sql(self):
# GH11880
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='zlib')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
class TestEncoding(TestPackers):
def setup_method(self, method):
super().setup_method(method)
data = {
'A': ['\u2019'] * 1000,
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
'G': [400] * 1000
}
self.frame = {
'float': DataFrame({k: data[k] for k in ['A', 'A']}),
'int': DataFrame({k: data[k] for k in ['B', 'B']}),
'mixed': DataFrame(data),
}
self.utf_encodings = ['utf8', 'utf16', 'utf32']
def test_utf(self):
# GH10581
for encoding in self.utf_encodings:
for frame in self.frame.values():
result = self.encode_decode(frame, encoding=encoding)
assert_frame_equal(result, frame)
def test_default_encoding(self):
for frame in self.frame.values():
result = frame.to_msgpack()
expected = frame.to_msgpack(encoding='utf8')
assert result == expected
result = self.encode_decode(frame)
assert_frame_equal(result, frame)
files = glob.glob(os.path.join(os.path.dirname(__file__), "data",
"legacy_msgpack", "*", "*.msgpack"))
@pytest.fixture(params=files)
def legacy_packer(request, datapath):
return datapath(request.param)
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
@pytest.mark.filterwarnings("ignore:Sparse:FutureWarning")
class TestMsgpack:
"""
How to add msgpack tests:
1. Install pandas version intended to output the msgpack.
2. Execute "generate_legacy_storage_files.py" to create the msgpack.
$ python generate_legacy_storage_files.py <output_dir> msgpack
    3. Move the created msgpack file to "data/legacy_msgpack/<version>" directory.
"""
minimum_structure = {'series': ['float', 'int', 'mixed',
'ts', 'mi', 'dup'],
'frame': ['float', 'int', 'mixed', 'mi'],
'panel': ['float'],
'index': ['int', 'date', 'period'],
'mi': ['reg2']}
def check_min_structure(self, data, version):
for typ, v in self.minimum_structure.items():
if typ == "panel":
# FIXME: kludge; get this key out of the legacy file
continue
assert typ in data, '"{0}" not found in unpacked data'.format(typ)
for kind in v:
msg = '"{0}" not found in data["{1}"]'.format(kind, typ)
assert kind in data[typ], msg
def compare(self, current_data, all_data, vf, version):
# GH12277 encoding default used to be latin-1, now utf-8
if LooseVersion(version) < LooseVersion('0.18.0'):
data = read_msgpack(vf, encoding='latin-1')
else:
data = read_msgpack(vf)
if "panel" in data:
# FIXME: kludge; get the key out of the stored file
del data["panel"]
self.check_min_structure(data, version)
for typ, dv in data.items():
assert typ in all_data, ('unpacked data contains '
'extra key "{0}"'
.format(typ))
for dt, result in dv.items():
assert dt in current_data[typ], ('data["{0}"] contains extra '
'key "{1}"'.format(typ, dt))
try:
expected = current_data[typ][dt]
except KeyError:
continue
# use a specific comparator
# if available
comp_method = "compare_{typ}_{dt}".format(typ=typ, dt=dt)
comparator = getattr(self, comp_method, None)
if comparator is not None:
comparator(result, expected, typ, version)
else:
check_arbitrary(result, expected)
return data
def compare_series_dt_tz(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < LooseVersion('0.17.0'):
expected = expected.astype(object)
tm.assert_series_equal(result, expected)
else:
tm.assert_series_equal(result, expected)
def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < LooseVersion('0.17.0'):
expected = expected.astype(object)
tm.assert_frame_equal(result, expected)
else:
tm.assert_frame_equal(result, expected)
def test_msgpacks_legacy(self, current_packers_data, all_packers_data,
legacy_packer, datapath):
version = os.path.basename(os.path.dirname(legacy_packer))
# GH12142 0.17 files packed in P2 can't be read in P3
if (version.startswith('0.17.') and
legacy_packer.split('.')[-4][-1] == '2'):
msg = "Files packed in Py2 can't be read in Py3 ({})"
pytest.skip(msg.format(version))
try:
with catch_warnings(record=True):
self.compare(current_packers_data, all_packers_data,
legacy_packer, version)
except ImportError:
# blosc not installed
pass
def test_msgpack_period_freq(self):
# https://github.com/pandas-dev/pandas/issues/24135
s = Series(np.random.rand(5), index=date_range('20130101', periods=5))
r = read_msgpack(s.to_msgpack())
repr(r)
| bsd-3-clause |
vipmunot/Data-Analysis-using-Python | Exploratory Data Visualization/Multiple plots-216.py | 1 | 2691 | ## 1. Recap ##
import pandas as pd
import matplotlib.pyplot as plt
unrate = pd.read_csv('unrate.csv')
unrate['DATE'] = pd.to_datetime(unrate['DATE'])
plt.plot(unrate['DATE'].head(12),unrate['VALUE'].head(12))
plt.xticks(rotation=90)
plt.xlabel('Month')
plt.ylabel('Unemployment Rate')
plt.title('Monthly Unemployment Trends, 1948')
## 2. Matplotlib Classes ##
import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
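# add_subplot(nrows, ncols, index): a 2-row, 1-column grid here, with index
# counting from 1, left to right and top to bottom.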
plt.show()
## 4. Adding Data ##
fig = plt.figure()
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
ax1.plot(unrate['DATE'].head(12),unrate['VALUE'].head(12))
ax2.plot(unrate['DATE'].iloc[12:24],unrate['VALUE'].iloc[12:24])
plt.show()
## 5. Formatting And Spacing ##
fig = plt.figure(figsize=(12,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
ax1.plot(unrate[0:12]['DATE'], unrate[0:12]['VALUE'])
ax1.set_title('Monthly Unemployment Rate, 1948')
ax2.plot(unrate[12:24]['DATE'], unrate[12:24]['VALUE'])
ax2.set_title('Monthly Unemployment Rate, 1949')
plt.show()
## 6. Comparing Across More Years ##
fig = plt.figure(figsize=(12,12))
x = [0,12,24,36,48]
y = [12,24,36,48,60]
for i in range(5):
ax = fig.add_subplot(5,1,(i+1))
ax.plot(unrate[x[i]:y[i]]['DATE'],unrate[x[i]:y[i]]['VALUE'])
plt.show()
## 7. Overlaying Line Charts ##
unrate['MONTH'] = unrate['DATE'].dt.month
fig = plt.figure(figsize=(6,3))
plt.plot(unrate[0:12]['MONTH'], unrate[0:12]['VALUE'],c='red')
plt.plot(unrate[12:24]['MONTH'], unrate[12:24]['VALUE'],c='blue')
plt.show()
## 8. Adding More Lines ##
fig = plt.figure(figsize=(10,6))
x = [0,12,24,36,48]
y = [12,24,36,48,60]
color = ['red','blue','green','orange','black']
for i in range(5):
plt.plot(unrate[x[i]:y[i]]['MONTH'],unrate[x[i]:y[i]]['VALUE'],c = color[i])
plt.show()
## 9. Adding A Legend ##
fig = plt.figure(figsize=(10,6))
colors = ['red', 'blue', 'green', 'orange', 'black']
for i in range(5):
start_index = i*12
end_index = (i+1)*12
label = str(1948 + i)
subset = unrate[start_index:end_index]
plt.plot(subset['MONTH'], subset['VALUE'], c=colors[i],label=label)
plt.legend(loc='upper left')
plt.show()
## 10. Final Tweaks ##
fig = plt.figure(figsize=(10,6))
colors = ['red', 'blue', 'green', 'orange', 'black']
for i in range(5):
start_index = i*12
end_index = (i+1)*12
subset = unrate[start_index:end_index]
label = str(1948 + i)
plt.plot(subset['MONTH'], subset['VALUE'], c=colors[i], label=label)
plt.legend(loc='upper left')
plt.title("Monthly Unemployment Trends, 1948-1952")
plt.xlabel('Month, Integer')
plt.ylabel('Unemployment Rate, Percent')
plt.show() | mit |
NLeSC/cptm | cptm/utils/topics.py | 1 | 1882 | import pandas as pd
def get_top_topic_words(topics, opinions, t, top=10):
"""Return dataframe containing top topics and opinions.
Parameters
t : str - index of topic number
top : int - the number of words to store in the dataframe
Returns Pandas DataFrame
The DataFrame contains top topic words, weights of topic words and for
    each perspective, opinion words and weights of opinion words.
"""
t = str(t)
topic = topics[t].copy()
topic.sort(ascending=False)
topic = topic[0:top]
df_t = pd.DataFrame(topic)
df_t.reset_index(level=0, inplace=True)
df_t.columns = ['topic', 'weights_topic']
dfs = [df_t]
for p, o in opinions.iteritems():
opinion = o[t].copy()
opinion.sort(ascending=False)
opinion = opinion[0:top]
df_o = pd.DataFrame(opinion)
df_o.reset_index(level=0, inplace=True)
df_o.columns = ['{}'.format(p),
'weights_{}'.format(p)]
dfs.append(df_o)
return pd.concat(dfs, axis=1)
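# Hypothetical usage sketch (the names below are illustrative, not defined in
# this module): with `topics` a DataFrame of per-topic word weights and
# `opinions` a dict-like mapping perspective names to opinion-word DataFrames:
#     df = get_top_topic_words(topics, opinions, t=0, top=10)
#     print(topic_str(df, single_line=True, weights=True))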
def topic_str(df, single_line=False, weights=False, opinions=True):
if opinions:
opinion_labels = [l for l in df.columns if not l.startswith('weights')]
else:
opinion_labels = [l for l in df.columns if l.startswith('topic')]
if not single_line:
if not weights:
return str(df[opinion_labels])
else:
return str(df)
else:
lines = []
if not weights:
for l in opinion_labels:
lines.append(u'{}:\t'.format(l)+' '.join(df[l]))
else:
for l in opinion_labels:
zipped = zip(df[l], df['weights_{}'.format(l)])
line = [u'{}*{:.4f}'.format(wo, we) for wo, we in zipped]
lines.append(' '.join([u'{}:\t'.format(l)]+line))
return u'\n'.join(lines)
| apache-2.0 |
sfepy/sfepy | examples/linear_elasticity/dispersion_analysis.py | 2 | 35004 | #!/usr/bin/env python
"""
Dispersion analysis of a heterogeneous finite scale periodic cell.
The periodic cell mesh has to contain two subdomains Y1 (with the cell ids 1),
Y2 (with the cell ids 2), so that different material properties can be defined
in each of the subdomains (see ``--pars`` option). The command line parameters
can be given in any consistent unit set, for example the basic SI units. The
``--unit-multipliers`` option can be used to rescale the input units to ones
more suitable to the simulation, for example to prevent having different
matrix blocks with large differences of matrix entries magnitudes. The results
are then in the rescaled units.
Usage Examples
--------------
Default material parameters, a square periodic cell with a spherical inclusion,
logs also standard pressure dilatation and shear waves, no eigenvectors::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/special/circle_in_square.mesh --log-std-waves --eigs-only
As above, with custom eigenvalue solver parameters, and different number of
eigenvalues, mesh size and units used in the calculation::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/special/circle_in_square.mesh --solver-conf="kind='eig.scipy', method='eigsh', tol=1e-10, maxiter=1000, which='LM', sigma=0" --log-std-waves -n 5 --range=0,640,101 --mode=omega --unit-multipliers=1e-6,1e-2,1e-3 --mesh-size=1e-2 --eigs-only
Default material parameters, a square periodic cell with a square inclusion,
and a very small mesh to allow comparing the omega and kappa modes (full matrix
solver required!)::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/square_2m.mesh --solver-conf="kind='eig.scipy', method='eigh'" --log-std-waves -n 10 --range=0,640,101 --mesh-size=1e-2 --mode=omega --eigs-only --no-legends --unit-multipliers=1e-6,1e-2,1e-3 -o output/omega
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/square_2m.mesh --solver-conf="kind='eig.qevp', method='companion', mode='inverted', solver={kind='eig.scipy', method='eig'}" --log-std-waves -n 500 --range=0,4000000,1001 --mesh-size=1e-2 --mode=kappa --eigs-only --no-legends --unit-multipliers=1e-6,1e-2,1e-3 -o output/kappa
View/compare the resulting logs::
python script/plot_logs.py output/omega/frequencies.txt --no-legends -g 1 -o mode-omega.png
python script/plot_logs.py output/kappa/wave-numbers.txt --no-legends -o mode-kappa.png
python script/plot_logs.py output/kappa/wave-numbers.txt --no-legends --swap-axes -o mode-kappa-t.png
In contrast to the heterogeneous square periodic cell, a homogeneous
square periodic cell (the region Y2 is empty)::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/square_1m.mesh --solver-conf="kind='eig.scipy', method='eigh'" --log-std-waves -n 10 --range=0,640,101 --mesh-size=1e-2 --mode=omega --eigs-only --no-legends --unit-multipliers=1e-6,1e-2,1e-3 -o output/omega-h
python script/plot_logs.py output/omega-h/frequencies.txt --no-legends -g 1 -o mode-omega-h.png
Use the Brillouin stepper::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/special/circle_in_square.mesh --log-std-waves -n=60 --eigs-only --no-legends --stepper=brillouin
python script/plot_logs.py output/frequencies.txt -g 0 --rc="'font.size':14, 'lines.linewidth' : 3, 'lines.markersize' : 4" -o brillouin-stepper-kappas.png
python script/plot_logs.py output/frequencies.txt -g 1 --no-legends --rc="'font.size':14, 'lines.linewidth' : 3, 'lines.markersize' : 4" -o brillouin-stepper-omegas.png
Additional arguments can be passed to the problem configuration's
:func:`define()` function using the ``--define-kwargs`` option. In this file,
only the mesh vertex separation parameter `mesh_eps` can be used::
python examples/linear_elasticity/dispersion_analysis.py meshes/2d/special/circle_in_square.mesh --log-std-waves --eigs-only --define-kwargs="mesh_eps=1e-10" --save-regions
"""
from __future__ import absolute_import
import os
import sys
sys.path.append('.')
import gc
from copy import copy
from argparse import ArgumentParser, RawDescriptionHelpFormatter
import numpy as nm
import matplotlib.pyplot as plt
from sfepy.base.base import import_file, output, Struct
from sfepy.base.conf import dict_from_string, ProblemConf
from sfepy.base.ioutils import ensure_path, remove_files_patterns, save_options
from sfepy.base.log import Log
from sfepy.discrete.fem import MeshIO
from sfepy.mechanics.matcoefs import stiffness_from_youngpoisson as stiffness
import sfepy.mechanics.matcoefs as mc
from sfepy.mechanics.units import apply_unit_multipliers, apply_units_to_pars
import sfepy.discrete.fem.periodic as per
from sfepy.discrete.fem.meshio import convert_complex_output
from sfepy.homogenization.utils import define_box_regions
from sfepy.discrete import Problem
from sfepy.mechanics.tensors import get_von_mises_stress
from sfepy.solvers import Solver
from sfepy.solvers.ts import get_print_info, TimeStepper
from sfepy.linalg.utils import output_array_stats, max_diff_csr
pars_kinds = {
'young1' : 'stress',
'poisson1' : 'one',
'density1' : 'density',
'young2' : 'stress',
'poisson2' : 'one',
'density2' : 'density',
}
def define(filename_mesh, pars, approx_order, refinement_level, solver_conf,
plane='strain', post_process=False, mesh_eps=1e-8):
io = MeshIO.any_from_filename(filename_mesh)
bbox = io.read_bounding_box()
dim = bbox.shape[1]
options = {
'absolute_mesh_path' : True,
'refinement_level' : refinement_level,
'allow_empty_regions' : True,
'post_process_hook' : 'compute_von_mises' if post_process else None,
}
fields = {
'displacement': ('complex', dim, 'Omega', approx_order),
}
materials = {
'm' : ({
'D' : {'Y1' : stiffness(dim,
young=pars.young1,
poisson=pars.poisson1,
plane=plane),
'Y2' : stiffness(dim,
young=pars.young2,
poisson=pars.poisson2,
plane=plane)},
'density' : {'Y1' : pars.density1, 'Y2' : pars.density2},
},),
'wave' : 'get_wdir',
}
variables = {
'u' : ('unknown field', 'displacement', 0),
'v' : ('test field', 'displacement', 'u'),
}
regions = {
'Omega' : 'all',
'Y1': 'cells of group 1',
'Y2': 'cells of group 2',
}
regions.update(define_box_regions(dim,
bbox[0], bbox[1], mesh_eps))
ebcs = {
}
if dim == 3:
epbcs = {
'periodic_x' : (['Left', 'Right'], {'u.all' : 'u.all'},
'match_x_plane'),
'periodic_y' : (['Near', 'Far'], {'u.all' : 'u.all'},
'match_y_plane'),
'periodic_z' : (['Top', 'Bottom'], {'u.all' : 'u.all'},
'match_z_plane'),
}
else:
epbcs = {
'periodic_x' : (['Left', 'Right'], {'u.all' : 'u.all'},
'match_y_line'),
'periodic_y' : (['Bottom', 'Top'], {'u.all' : 'u.all'},
'match_x_line'),
}
per.set_accuracy(mesh_eps)
functions = {
'match_x_plane' : (per.match_x_plane,),
'match_y_plane' : (per.match_y_plane,),
'match_z_plane' : (per.match_z_plane,),
'match_x_line' : (per.match_x_line,),
'match_y_line' : (per.match_y_line,),
'get_wdir' : (get_wdir,),
}
integrals = {
'i' : 2 * approx_order,
}
equations = {
'K' : 'dw_lin_elastic.i.Omega(m.D, v, u)',
'S' : 'dw_elastic_wave.i.Omega(m.D, wave.vec, v, u)',
'R' : """1j * dw_elastic_wave_cauchy.i.Omega(m.D, wave.vec, u, v)
- 1j * dw_elastic_wave_cauchy.i.Omega(m.D, wave.vec, v, u)""",
'M' : 'dw_dot.i.Omega(m.density, v, u)',
}
solver_0 = solver_conf.copy()
solver_0['name'] = 'eig'
return locals()
def get_wdir(ts, coors, mode=None,
equations=None, term=None, problem=None, wdir=None, **kwargs):
if mode == 'special':
return {'vec' : wdir}
def set_wave_dir(pb, wdir):
materials = pb.get_materials()
wave_mat = materials['wave']
wave_mat.set_extra_args(wdir=wdir)
def save_materials(output_dir, pb, options):
stiffness = pb.evaluate('ev_integrate_mat.2.Omega(m.D, u)',
mode='el_avg', copy_materials=False, verbose=False)
young, poisson = mc.youngpoisson_from_stiffness(stiffness,
plane=options.plane)
density = pb.evaluate('ev_integrate_mat.2.Omega(m.density, u)',
mode='el_avg', copy_materials=False, verbose=False)
out = {}
out['young'] = Struct(name='young', mode='cell',
data=young[..., None, None])
out['poisson'] = Struct(name='poisson', mode='cell',
data=poisson[..., None, None])
out['density'] = Struct(name='density', mode='cell', data=density)
materials_filename = os.path.join(output_dir, 'materials.vtk')
pb.save_state(materials_filename, out=out)
def get_std_wave_fun(pb, options):
stiffness = pb.evaluate('ev_integrate_mat.2.Omega(m.D, u)',
mode='el_avg', copy_materials=False, verbose=False)
young, poisson = mc.youngpoisson_from_stiffness(stiffness,
plane=options.plane)
density = pb.evaluate('ev_integrate_mat.2.Omega(m.density, u)',
mode='el_avg', copy_materials=False, verbose=False)
lam, mu = mc.lame_from_youngpoisson(young, poisson,
plane=options.plane)
alam = nm.average(lam)
amu = nm.average(mu)
adensity = nm.average(density)
cp = nm.sqrt((alam + 2.0 * amu) / adensity)
cs = nm.sqrt(amu / adensity)
output('average p-wave speed:', cp)
output('average shear wave speed:', cs)
log_names = [r'$\omega_p$', r'$\omega_s$']
log_plot_kwargs = [{'ls' : '--', 'color' : 'k'},
{'ls' : '--', 'color' : 'gray'}]
if options.mode == 'omega':
fun = lambda wmag, wdir: (cp * wmag, cs * wmag)
else:
fun = lambda wmag, wdir: (wmag / cp, wmag / cs)
return fun, log_names, log_plot_kwargs
def get_stepper(rng, pb, options):
if options.stepper == 'linear':
stepper = TimeStepper(rng[0], rng[1], dt=None, n_step=rng[2])
return stepper
bbox = pb.domain.mesh.get_bounding_box()
bzone = 2.0 * nm.pi / (bbox[1] - bbox[0])
num = rng[2] // 3
class BrillouinStepper(Struct):
"""
Step over 1. Brillouin zone in xy plane.
"""
def __init__(self, t0, t1, dt=None, n_step=None, step=None, **kwargs):
Struct.__init__(self, t0=t0, t1=t1, dt=dt, n_step=n_step, step=step)
self.n_digit, self.format, self.suffix = get_print_info(self.n_step)
def __iter__(self):
ts = TimeStepper(0, bzone[0], dt=None, n_step=num)
for ii, val in ts:
yield ii, val, nm.array([1.0, 0.0])
if ii == (num-2): break
ts = TimeStepper(0, bzone[1], dt=None, n_step=num)
for ii, k1 in ts:
wdir = nm.array([bzone[0], k1])
val = nm.linalg.norm(wdir)
wdir = wdir / val
yield num + ii, val, wdir
if ii == (num-2): break
wdir = nm.array([bzone[0], bzone[1]])
val = nm.linalg.norm(wdir)
wdir = wdir / val
ts = TimeStepper(0, 1, dt=None, n_step=num)
for ii, _ in ts:
yield 2 * num + ii, val * (1.0 - float(ii)/(num-1)), wdir
stepper = BrillouinStepper(0, 1, n_step=rng[2])
return stepper
def compute_von_mises(out, pb, state, extend=False, wmag=None, wdir=None):
"""
Calculate the von Mises stress.
"""
stress = pb.evaluate('ev_cauchy_stress.i.Omega(m.D, u)', mode='el_avg')
vms = get_von_mises_stress(stress.squeeze())
vms.shape = (vms.shape[0], 1, 1, 1)
out['von_mises_stress'] = Struct(name='output_data', mode='cell',
data=vms)
return out
def save_eigenvectors(filename, svecs, wmag, wdir, pb):
if svecs is None: return
variables = pb.get_variables()
# Make full eigenvectors (add DOFs fixed by boundary conditions).
vecs = nm.empty((variables.di.ptr[-1], svecs.shape[1]),
dtype=svecs.dtype)
for ii in range(svecs.shape[1]):
vecs[:, ii] = variables.make_full_vec(svecs[:, ii])
# Save the eigenvectors.
out = {}
state = pb.create_state()
pp_name = pb.conf.options.get('post_process_hook')
pp = getattr(pb.conf.funmod, pp_name if pp_name is not None else '',
lambda out, *args, **kwargs: out)
for ii in range(svecs.shape[1]):
state.set_full(vecs[:, ii])
aux = state.create_output_dict()
aux2 = {}
pp(aux2, pb, state, wmag=wmag, wdir=wdir)
aux.update(convert_complex_output(aux2))
out.update({key + '%03d' % ii : aux[key] for key in aux})
pb.save_state(filename, out=out)
def assemble_matrices(define, mod, pars, set_wave_dir, options, wdir=None):
"""
Assemble the blocks of dispersion eigenvalue problem matrices.
"""
define_dict = define(filename_mesh=options.mesh_filename,
pars=pars,
approx_order=options.order,
refinement_level=options.refine,
solver_conf=options.solver_conf,
plane=options.plane,
post_process=options.post_process,
**options.define_kwargs)
conf = ProblemConf.from_dict(define_dict, mod)
pb = Problem.from_conf(conf)
pb.dispersion_options = options
pb.set_output_dir(options.output_dir)
dim = pb.domain.shape.dim
# Set the normalized wave vector direction to the material(s).
if wdir is None:
wdir = nm.asarray(options.wave_dir[:dim], dtype=nm.float64)
wdir = wdir / nm.linalg.norm(wdir)
set_wave_dir(pb, wdir)
bbox = pb.domain.mesh.get_bounding_box()
size = (bbox[1] - bbox[0]).max()
scaling0 = apply_unit_multipliers([1.0], ['length'],
options.unit_multipliers)[0]
scaling = scaling0
if options.mesh_size is not None:
scaling *= options.mesh_size / size
output('scaling factor of periodic cell mesh coordinates:', scaling)
output('new mesh size with applied unit multipliers:', scaling * size)
pb.domain.mesh.coors[:] *= scaling
pb.set_mesh_coors(pb.domain.mesh.coors, update_fields=True)
bzone = 2.0 * nm.pi / (scaling * size)
output('1. Brillouin zone size:', bzone * scaling0)
output('1. Brillouin zone size with applied unit multipliers:', bzone)
pb.time_update()
pb.update_materials()
# Assemble the matrices.
mtxs = {}
for key, eq in pb.equations.iteritems():
mtxs[key] = mtx = pb.mtx_a.copy()
mtx = eq.evaluate(mode='weak', dw_mode='matrix', asm_obj=mtx)
mtx.eliminate_zeros()
output_array_stats(mtx.data, 'nonzeros in %s' % key)
output('symmetry checks:')
output('%s - %s^T:' % (key, key), max_diff_csr(mtx, mtx.T))
output('%s - %s^H:' % (key, key), max_diff_csr(mtx, mtx.H))
return pb, wdir, bzone, mtxs
def setup_n_eigs(options, pb, mtxs):
"""
Setup the numbers of eigenvalues based on options and numbers of DOFs.
"""
solver_n_eigs = n_eigs = options.n_eigs
n_dof = mtxs['K'].shape[0]
if options.mode == 'omega':
if options.n_eigs > n_dof:
n_eigs = n_dof
solver_n_eigs = None
else:
if options.n_eigs > 2 * n_dof:
n_eigs = 2 * n_dof
solver_n_eigs = None
return solver_n_eigs, n_eigs
def build_evp_matrices(mtxs, val, mode, pb):
"""
Build the matrices of the dispersion eigenvalue problem.
"""
if mode == 'omega':
mtx_a = mtxs['K'] + val**2 * mtxs['S'] + val * mtxs['R']
output('A - A^H:', max_diff_csr(mtx_a, mtx_a.H))
evp_mtxs = (mtx_a, mtxs['M'])
else:
evp_mtxs = (mtxs['S'], mtxs['R'], mtxs['K'] - val**2 * mtxs['M'])
return evp_mtxs
def process_evp_results(eigs, svecs, val, wdir, bzone, pb, mtxs, options,
std_wave_fun=None):
"""
Transform eigenvalues to either omegas or kappas, depending on `mode`.
Transform eigenvectors, if available, depending on `mode`.
Return also the values to log.
"""
if options.mode == 'omega':
omegas = nm.sqrt(eigs)
output('eigs, omegas:')
for ii, om in enumerate(omegas):
output('{:>3}. {: .10e}, {:.10e}'.format(ii, eigs[ii], om))
if options.stepper == 'linear':
out = tuple(eigs) + tuple(omegas)
else:
out = tuple(val * wdir) + tuple(omegas)
if std_wave_fun is not None:
out = out + std_wave_fun(val, wdir)
return omegas, svecs, out
else:
kappas = eigs.copy()
rks = kappas.copy()
# Mask modes far from 1. Brillouin zone.
max_kappa = 1.2 * bzone
kappas[kappas.real > max_kappa] = nm.nan
# Mask non-physical modes.
kappas[kappas.real < 0] = nm.nan
kappas[nm.abs(kappas.imag) > 1e-10] = nm.nan
out = tuple(kappas.real)
output('raw kappas, masked real part:',)
for ii, kr in enumerate(kappas.real):
output('{:>3}. {: 23.5e}, {:.10e}'.format(ii, rks[ii], kr))
if svecs is not None:
n_dof = mtxs['K'].shape[0]
# Select only vectors corresponding to physical modes.
ii = nm.isfinite(kappas.real)
svecs = svecs[:n_dof, ii]
if std_wave_fun is not None:
out = out + tuple(ii if ii <= max_kappa else nm.nan
for ii in std_wave_fun(val, wdir))
return kappas, svecs, out
helps = {
'pars' :
'material parameters in Y1, Y2 subdomains in basic units.'
' The default parameters are:'
' young1, poisson1, density1, young2, poisson2, density2'
' [default: %(default)s]',
'conf' :
'if given, an alternative problem description file with apply_units() and'
' define() functions [default: %(default)s]',
'define_kwargs' : 'additional keyword arguments passed to define()',
'mesh_size' :
'desired mesh size (max. of bounding box dimensions) in basic units'
' - the input periodic cell mesh is rescaled to this size'
' [default: %(default)s]',
'unit_multipliers' :
'basic unit multipliers (time, length, mass) [default: %(default)s]',
'plane' :
'for 2D problems, plane strain or stress hypothesis selection'
' [default: %(default)s]',
'wave_dir' : 'the wave vector direction (will be normalized)'
' [default: %(default)s]',
'mode' : 'solution mode: omega = solve a generalized EVP for omega,'
' kappa = solve a quadratic generalized EVP for kappa'
' [default: %(default)s]',
'stepper' : 'the range stepper. For "brillouin", only the number'
' of items from --range is used'
' [default: %(default)s]',
'range' : 'the wave vector magnitude / frequency range'
' (like numpy.linspace) depending on the mode option'
' [default: %(default)s]',
'order' : 'displacement field approximation order [default: %(default)s]',
'refine' : 'number of uniform mesh refinements [default: %(default)s]',
'n_eigs' : 'the number of eigenvalues to compute [default: %(default)s]',
'eigs_only' : 'compute only eigenvalues, not eigenvectors',
'post_process' : 'post-process eigenvectors',
'solver_conf' : 'eigenvalue problem solver configuration options'
' [default: %(default)s]',
'save_regions' : 'save defined regions into'
' <output_directory>/regions.vtk',
'save_materials' : 'save material parameters into'
' <output_directory>/materials.vtk',
'log_std_waves' : 'log also standard pressure dilatation and shear waves',
'no_legends' :
'do not show legends in the log plots',
'no_show' :
'do not show the log figure',
'silent' : 'do not print messages to screen',
'clear' :
'clear old solution files from output directory',
'output_dir' :
'output directory [default: %(default)s]',
'mesh_filename' :
'input periodic cell mesh file name [default: %(default)s]',
}
def main():
# Aluminium and epoxy.
default_pars = '70e9,0.35,2.799e3,3.8e9,0.27,1.142e3'
default_solver_conf = ("kind='eig.scipy',method='eigsh',tol=1.0e-5,"
"maxiter=1000,which='LM',sigma=0.0")
parser = ArgumentParser(description=__doc__,
formatter_class=RawDescriptionHelpFormatter)
parser.add_argument('--pars', metavar='name1=value1,name2=value2,...'
' or value1,value2,...',
action='store', dest='pars',
default=default_pars, help=helps['pars'])
parser.add_argument('--conf', metavar='filename',
action='store', dest='conf',
default=None, help=helps['conf'])
parser.add_argument('--define-kwargs', metavar='dict-like',
action='store', dest='define_kwargs',
default=None, help=helps['define_kwargs'])
parser.add_argument('--mesh-size', type=float, metavar='float',
action='store', dest='mesh_size',
default=None, help=helps['mesh_size'])
parser.add_argument('--unit-multipliers',
metavar='c_time,c_length,c_mass',
action='store', dest='unit_multipliers',
default='1.0,1.0,1.0', help=helps['unit_multipliers'])
parser.add_argument('--plane', action='store', dest='plane',
choices=['strain', 'stress'],
default='strain', help=helps['plane'])
parser.add_argument('--wave-dir', metavar='float,float[,float]',
action='store', dest='wave_dir',
default='1.0,0.0,0.0', help=helps['wave_dir'])
parser.add_argument('--mode', action='store', dest='mode',
choices=['omega', 'kappa'],
default='omega', help=helps['mode'])
parser.add_argument('--stepper', action='store', dest='stepper',
choices=['linear', 'brillouin'],
default='linear', help=helps['stepper'])
parser.add_argument('--range', metavar='start,stop,count',
action='store', dest='range',
default='0,6.4,33', help=helps['range'])
parser.add_argument('--order', metavar='int', type=int,
action='store', dest='order',
default=1, help=helps['order'])
parser.add_argument('--refine', metavar='int', type=int,
action='store', dest='refine',
default=0, help=helps['refine'])
parser.add_argument('-n', '--n-eigs', metavar='int', type=int,
action='store', dest='n_eigs',
default=6, help=helps['n_eigs'])
group = parser.add_mutually_exclusive_group()
group.add_argument('--eigs-only',
action='store_true', dest='eigs_only',
default=False, help=helps['eigs_only'])
group.add_argument('--post-process',
action='store_true', dest='post_process',
default=False, help=helps['post_process'])
parser.add_argument('--solver-conf', metavar='dict-like',
action='store', dest='solver_conf',
default=default_solver_conf, help=helps['solver_conf'])
parser.add_argument('--save-regions',
action='store_true', dest='save_regions',
default=False, help=helps['save_regions'])
parser.add_argument('--save-materials',
action='store_true', dest='save_materials',
default=False, help=helps['save_materials'])
parser.add_argument('--log-std-waves',
action='store_true', dest='log_std_waves',
default=False, help=helps['log_std_waves'])
parser.add_argument('--no-legends',
action='store_false', dest='show_legends',
default=True, help=helps['no_legends'])
parser.add_argument('--no-show',
action='store_false', dest='show',
default=True, help=helps['no_show'])
parser.add_argument('--silent',
action='store_true', dest='silent',
default=False, help=helps['silent'])
parser.add_argument('-c', '--clear',
action='store_true', dest='clear',
default=False, help=helps['clear'])
parser.add_argument('-o', '--output-dir', metavar='path',
action='store', dest='output_dir',
default='output', help=helps['output_dir'])
parser.add_argument('mesh_filename', default='',
help=helps['mesh_filename'])
options = parser.parse_args()
output_dir = options.output_dir
output.set_output(filename=os.path.join(output_dir,'output_log.txt'),
combined=options.silent == False)
if options.conf is not None:
mod = import_file(options.conf)
else:
mod = sys.modules[__name__]
pars_kinds = mod.pars_kinds
define = mod.define
set_wave_dir = mod.set_wave_dir
setup_n_eigs = mod.setup_n_eigs
build_evp_matrices = mod.build_evp_matrices
save_materials = mod.save_materials
get_std_wave_fun = mod.get_std_wave_fun
get_stepper = mod.get_stepper
process_evp_results = mod.process_evp_results
save_eigenvectors = mod.save_eigenvectors
try:
options.pars = dict_from_string(options.pars)
    except Exception:
aux = [float(ii) for ii in options.pars.split(',')]
options.pars = {key : aux[ii]
for ii, key in enumerate(pars_kinds.keys())}
options.unit_multipliers = [float(ii)
for ii in options.unit_multipliers.split(',')]
options.wave_dir = [float(ii)
for ii in options.wave_dir.split(',')]
aux = options.range.split(',')
options.range = [float(aux[0]), float(aux[1]), int(aux[2])]
options.solver_conf = dict_from_string(options.solver_conf)
options.define_kwargs = dict_from_string(options.define_kwargs)
if options.clear:
remove_files_patterns(output_dir,
['*.h5', '*.vtk', '*.txt'],
ignores=['output_log.txt'],
verbose=True)
filename = os.path.join(output_dir, 'options.txt')
ensure_path(filename)
save_options(filename, [('options', vars(options))],
quote_command_line=True)
pars = apply_units_to_pars(options.pars, pars_kinds,
options.unit_multipliers)
output('material parameter names and kinds:')
output(pars_kinds)
output('material parameters with applied unit multipliers:')
output(pars)
pars = Struct(**pars)
if options.mode == 'omega':
rng = copy(options.range)
rng[:2] = apply_unit_multipliers(options.range[:2],
['wave_number', 'wave_number'],
options.unit_multipliers)
output('wave number range with applied unit multipliers:', rng)
else:
if options.stepper == 'brillouin':
raise ValueError('Cannot use "brillouin" stepper in kappa mode!')
rng = copy(options.range)
rng[:2] = apply_unit_multipliers(options.range[:2],
['frequency', 'frequency'],
options.unit_multipliers)
output('frequency range with applied unit multipliers:', rng)
pb, wdir, bzone, mtxs = assemble_matrices(define, mod, pars, set_wave_dir,
options)
dim = pb.domain.shape.dim
if dim != 2:
options.plane = 'strain'
if options.save_regions:
pb.save_regions_as_groups(os.path.join(output_dir, 'regions'))
if options.save_materials:
save_materials(output_dir, pb, options)
conf = pb.solver_confs['eig']
eig_solver = Solver.any_from_conf(conf)
n_eigs, options.n_eigs = setup_n_eigs(options, pb, mtxs)
get_color = lambda ii: plt.cm.viridis((float(ii)
/ (max(options.n_eigs, 2) - 1)))
plot_kwargs = [{'color' : get_color(ii), 'ls' : '', 'marker' : 'o'}
for ii in range(options.n_eigs)]
get_color_dim = lambda ii: plt.cm.viridis((float(ii) / (max(dim, 2) -1)))
plot_kwargs_dim = [{'color' : get_color_dim(ii), 'ls' : '', 'marker' : 'o'}
for ii in range(dim)]
log_names = []
log_plot_kwargs = []
if options.log_std_waves:
std_wave_fun, log_names, log_plot_kwargs = get_std_wave_fun(
pb, options)
else:
std_wave_fun = None
stepper = get_stepper(rng, pb, options)
if options.mode == 'omega':
eigenshapes_filename = os.path.join(output_dir,
'frequency-eigenshapes-%s.vtk'
% stepper.suffix)
if options.stepper == 'linear':
log = Log([[r'$\lambda_{%d}$' % ii for ii in range(options.n_eigs)],
[r'$\omega_{%d}$'
% ii for ii in range(options.n_eigs)] + log_names],
plot_kwargs=[plot_kwargs, plot_kwargs + log_plot_kwargs],
formats=[['{:.12e}'] * options.n_eigs,
['{:.12e}'] * (options.n_eigs + len(log_names))],
yscales=['linear', 'linear'],
xlabels=[r'$\kappa$', r'$\kappa$'],
ylabels=[r'eigenvalues $\lambda_i$',
r'frequencies $\omega_i$'],
show_legends=options.show_legends,
is_plot=options.show,
log_filename=os.path.join(output_dir, 'frequencies.txt'),
aggregate=1000, sleep=0.1)
else:
log = Log([[r'$\kappa_{%d}$'% ii for ii in range(dim)],
[r'$\omega_{%d}$'
% ii for ii in range(options.n_eigs)] + log_names],
plot_kwargs=[plot_kwargs_dim,
plot_kwargs + log_plot_kwargs],
formats=[['{:.12e}'] * dim,
['{:.12e}'] * (options.n_eigs + len(log_names))],
yscales=['linear', 'linear'],
xlabels=[r'', r''],
ylabels=[r'wave vector $\kappa$',
r'frequencies $\omega_i$'],
show_legends=options.show_legends,
is_plot=options.show,
log_filename=os.path.join(output_dir, 'frequencies.txt'),
aggregate=1000, sleep=0.1)
for aux in stepper:
if options.stepper == 'linear':
iv, wmag = aux
else:
iv, wmag, wdir = aux
output('step %d: wave vector %s' % (iv, wmag * wdir))
if options.stepper == 'brillouin':
pb, _, bzone, mtxs = assemble_matrices(
define, mod, pars, set_wave_dir, options, wdir=wdir)
evp_mtxs = build_evp_matrices(mtxs, wmag, options.mode, pb)
if options.eigs_only:
eigs = eig_solver(*evp_mtxs, n_eigs=n_eigs,
eigenvectors=False)
svecs = None
else:
eigs, svecs = eig_solver(*evp_mtxs, n_eigs=n_eigs,
eigenvectors=True)
omegas, svecs, out = process_evp_results(
eigs, svecs, wmag, wdir, bzone, pb, mtxs, options,
std_wave_fun=std_wave_fun
)
if options.stepper == 'linear':
log(*out, x=[wmag, wmag])
else:
log(*out, x=[iv, iv])
save_eigenvectors(eigenshapes_filename % iv, svecs, wmag, wdir, pb)
gc.collect()
log(save_figure=os.path.join(output_dir, 'frequencies.png'))
log(finished=True)
else:
eigenshapes_filename = os.path.join(output_dir,
'wave-number-eigenshapes-%s.vtk'
% stepper.suffix)
log = Log([[r'$\kappa_{%d}$' % ii for ii in range(options.n_eigs)]
+ log_names],
plot_kwargs=[plot_kwargs + log_plot_kwargs],
formats=[['{:.12e}'] * (options.n_eigs + len(log_names))],
yscales=['linear'],
xlabels=[r'$\omega$'],
ylabels=[r'wave numbers $\kappa_i$'],
show_legends=options.show_legends,
is_plot=options.show,
log_filename=os.path.join(output_dir, 'wave-numbers.txt'),
aggregate=1000, sleep=0.1)
for io, omega in stepper:
output('step %d: frequency %s' % (io, omega))
evp_mtxs = build_evp_matrices(mtxs, omega, options.mode, pb)
if options.eigs_only:
eigs = eig_solver(*evp_mtxs, n_eigs=n_eigs,
eigenvectors=False)
svecs = None
else:
eigs, svecs = eig_solver(*evp_mtxs, n_eigs=n_eigs,
eigenvectors=True)
kappas, svecs, out = process_evp_results(
eigs, svecs, omega, wdir, bzone, pb, mtxs, options,
std_wave_fun=std_wave_fun
)
log(*out, x=[omega])
save_eigenvectors(eigenshapes_filename % io, svecs, kappas, wdir,
pb)
gc.collect()
log(save_figure=os.path.join(output_dir, 'wave-numbers.png'))
log(finished=True)
if __name__ == '__main__':
main()
| bsd-3-clause |
tosolveit/scikit-learn | sklearn/datasets/lfw.py | 141 | 19372 | """Loader for the Labeled Faces in the Wild (LFW) dataset
This dataset is a collection of JPEG pictures of famous people collected
over the internet, all details are available on the official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. The typical task is called
Face Verification: given a pair of two pictures, a binary classifier
must predict whether the two images are from the same person.
An alternative task, Face Recognition or Face Identification is:
given the picture of the face of an unknown person, identify the name
of the person by referring to a gallery of previously seen pictures of
identified persons.
Both Face Verification and Face Recognition are tasks that are typically
performed on the output of a model trained to perform Face Detection. The
most popular model for Face Detection is called Viola-Jones and is
implemented in the OpenCV library. The LFW faces were extracted by this face
detector from various online websites.
"""
# Copyright (c) 2011 Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
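# (Illustrative note, not part of the original module: the two tasks described
# in the docstring above map onto the two public loaders defined below --
# ``fetch_lfw_people`` for Face Recognition / Identification and
# ``fetch_lfw_pairs`` for Face Verification.)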
from os import listdir, makedirs, remove
from os.path import join, exists, isdir
from sklearn.utils import deprecated
import logging
import numpy as np
try:
import urllib.request as urllib # for backwards compatibility
except ImportError:
import urllib
from .base import get_data_home, Bunch
from ..externals.joblib import Memory
from ..externals.six import b
logger = logging.getLogger(__name__)
BASE_URL = "http://vis-www.cs.umass.edu/lfw/"
ARCHIVE_NAME = "lfw.tgz"
FUNNELED_ARCHIVE_NAME = "lfw-funneled.tgz"
TARGET_FILENAMES = [
'pairsDevTrain.txt',
'pairsDevTest.txt',
'pairs.txt',
]
def scale_face(face):
"""Scale back to 0-1 range in case of normalization for plotting"""
scaled = face - face.min()
scaled /= scaled.max()
return scaled
#
# Common private utilities for data fetching from the original LFW website
# local disk caching, and image decoding.
#
def check_fetch_lfw(data_home=None, funneled=True, download_if_missing=True):
"""Helper function to download any missing LFW data"""
data_home = get_data_home(data_home=data_home)
lfw_home = join(data_home, "lfw_home")
if funneled:
archive_path = join(lfw_home, FUNNELED_ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw_funneled")
archive_url = BASE_URL + FUNNELED_ARCHIVE_NAME
else:
archive_path = join(lfw_home, ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw")
archive_url = BASE_URL + ARCHIVE_NAME
if not exists(lfw_home):
makedirs(lfw_home)
for target_filename in TARGET_FILENAMES:
target_filepath = join(lfw_home, target_filename)
if not exists(target_filepath):
if download_if_missing:
url = BASE_URL + target_filename
logger.warning("Downloading LFW metadata: %s", url)
urllib.urlretrieve(url, target_filepath)
else:
raise IOError("%s is missing" % target_filepath)
if not exists(data_folder_path):
if not exists(archive_path):
if download_if_missing:
logger.warning("Downloading LFW data (~200MB): %s", archive_url)
urllib.urlretrieve(archive_url, archive_path)
else:
raise IOError("%s is missing" % target_filepath)
import tarfile
logger.info("Decompressing the data archive to %s", data_folder_path)
tarfile.open(archive_path, "r:gz").extractall(path=lfw_home)
remove(archive_path)
return lfw_home, data_folder_path
def _load_imgs(file_paths, slice_, color, resize):
"""Internally used to load images"""
# Try to import imread and imresize from PIL. We do this here to prevent
# the whole sklearn.datasets module from depending on PIL.
try:
try:
from scipy.misc import imread
except ImportError:
from scipy.misc.pilutil import imread
from scipy.misc import imresize
except ImportError:
raise ImportError("The Python Imaging Library (PIL)"
" is required to load data from jpeg files")
# compute the portion of the images to load to respect the slice_ parameter
# given by the caller
default_slice = (slice(0, 250), slice(0, 250))
if slice_ is None:
slice_ = default_slice
else:
slice_ = tuple(s or ds for s, ds in zip(slice_, default_slice))
h_slice, w_slice = slice_
h = (h_slice.stop - h_slice.start) // (h_slice.step or 1)
w = (w_slice.stop - w_slice.start) // (w_slice.step or 1)
if resize is not None:
resize = float(resize)
h = int(resize * h)
w = int(resize * w)
# allocate some contiguous memory to host the decoded image slices
n_faces = len(file_paths)
if not color:
faces = np.zeros((n_faces, h, w), dtype=np.float32)
else:
faces = np.zeros((n_faces, h, w, 3), dtype=np.float32)
# iterate over the collected file path to load the jpeg files as numpy
# arrays
for i, file_path in enumerate(file_paths):
if i % 1000 == 0:
logger.info("Loading face #%05d / %05d", i + 1, n_faces)
# Checks if jpeg reading worked. Refer to issue #3594 for more
# details.
img = imread(file_path)
        if img.ndim == 0:
raise RuntimeError("Failed to read the image file %s, "
"Please make sure that libjpeg is installed"
% file_path)
face = np.asarray(img[slice_], dtype=np.float32)
face /= 255.0 # scale uint8 coded colors to the [0.0, 1.0] floats
if resize is not None:
face = imresize(face, resize)
if not color:
            # average the color channels to compute a gray-level
            # representation
face = face.mean(axis=2)
faces[i, ...] = face
return faces
#
# Task #1: Face Identification on picture with names
#
def _fetch_lfw_people(data_folder_path, slice_=None, color=False, resize=None,
min_faces_per_person=0):
"""Perform the actual data loading for the lfw people dataset
This operation is meant to be cached by a joblib wrapper.
"""
    # scan the data folder content to retain people with more than
# `min_faces_per_person` face pictures
person_names, file_paths = [], []
for person_name in sorted(listdir(data_folder_path)):
folder_path = join(data_folder_path, person_name)
if not isdir(folder_path):
continue
paths = [join(folder_path, f) for f in listdir(folder_path)]
n_pictures = len(paths)
if n_pictures >= min_faces_per_person:
person_name = person_name.replace('_', ' ')
person_names.extend([person_name] * n_pictures)
file_paths.extend(paths)
n_faces = len(file_paths)
if n_faces == 0:
raise ValueError("min_faces_per_person=%d is too restrictive" %
min_faces_per_person)
target_names = np.unique(person_names)
target = np.searchsorted(target_names, person_names)
faces = _load_imgs(file_paths, slice_, color, resize)
# shuffle the faces with a deterministic RNG scheme to avoid having
# all faces of the same person in a row, as it would break some
# cross validation and learning algorithms such as SGD and online
# k-means that make an IID assumption
indices = np.arange(n_faces)
np.random.RandomState(42).shuffle(indices)
faces, target = faces[indices], target[indices]
return faces, target, target_names
def fetch_lfw_people(data_home=None, funneled=True, resize=0.5,
min_faces_per_person=0, color=False,
slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) people dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Recognition (or Identification): given the
picture of a face, find the name of the person given a training set
(gallery).
The original images are 250 x 250 pixels, but the default slice and resize
    arguments reduce them to 62 x 47.
Parameters
----------
data_home : optional, default: None
Specify another download and cache folder for the datasets. By default
all scikit learn data is stored in '~/scikit_learn_data' subfolders.
funneled : boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize : float, optional, default 0.5
        Ratio used to resize each face picture.
    min_faces_per_person : int, optional, default 0
The extracted dataset will only retain pictures of people that have at
least `min_faces_per_person` different pictures.
color : boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
        one more dimension than the shape with color = False.
slice_ : optional
Provide a custom 2D slice (height, width) to extract the
        'interesting' part of the jpeg files and avoid statistical
        correlations with the background
download_if_missing : optional, True by default
        If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
Returns
-------
dataset : dict-like object with the following attributes:
dataset.data : numpy array of shape (13233, 2914)
Each row corresponds to a ravelled face image of original size 62 x 47
pixels. Changing the ``slice_`` or resize parameters will change the shape
of the output.
dataset.images : numpy array of shape (13233, 62, 47)
Each row is a face image corresponding to one of the 5749 people in
the dataset. Changing the ``slice_`` or resize parameters will change the shape
of the output.
dataset.target : numpy array of shape (13233,)
Labels associated to each face image. Those labels range from 0-5748
and correspond to the person IDs.
dataset.DESCR : string
Description of the Labeled Faces in the Wild (LFW) dataset.
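    Examples
    --------
    A minimal usage sketch (illustrative, not part of the original docstring;
    the data are downloaded on first use and cached under ``data_home``)::

        from sklearn.datasets import fetch_lfw_people

        lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
        # images: (n_samples, height, width); data: (n_samples, height * width)
        print(lfw_people.images.shape, lfw_people.data.shape)
        print(lfw_people.target_names)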
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading LFW people faces from %s', lfw_home)
# wrap the loader in a memoizing function that will return memmaped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_people)
# load and memoize the pairs as np arrays
faces, target, target_names = load_func(
data_folder_path, resize=resize,
min_faces_per_person=min_faces_per_person, color=color, slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=faces.reshape(len(faces), -1), images=faces,
target=target, target_names=target_names,
DESCR="LFW faces dataset")
#
# Task #2: Face Verification on pairs of face pictures
#
def _fetch_lfw_pairs(index_file_path, data_folder_path, slice_=None,
color=False, resize=None):
"""Perform the actual data loading for the LFW pairs dataset
This operation is meant to be cached by a joblib wrapper.
"""
# parse the index file to find the number of pairs to be able to allocate
# the right amount of memory before starting to decode the jpeg files
with open(index_file_path, 'rb') as index_file:
split_lines = [ln.strip().split(b('\t')) for ln in index_file]
pair_specs = [sl for sl in split_lines if len(sl) > 2]
n_pairs = len(pair_specs)
    # iterating over the metadata lines for each pair to find the filename to
# decode and load in memory
target = np.zeros(n_pairs, dtype=np.int)
file_paths = list()
for i, components in enumerate(pair_specs):
if len(components) == 3:
target[i] = 1
pair = (
(components[0], int(components[1]) - 1),
(components[0], int(components[2]) - 1),
)
elif len(components) == 4:
target[i] = 0
pair = (
(components[0], int(components[1]) - 1),
(components[2], int(components[3]) - 1),
)
else:
raise ValueError("invalid line %d: %r" % (i + 1, components))
for j, (name, idx) in enumerate(pair):
try:
person_folder = join(data_folder_path, name)
except TypeError:
person_folder = join(data_folder_path, str(name, 'UTF-8'))
filenames = list(sorted(listdir(person_folder)))
file_path = join(person_folder, filenames[idx])
file_paths.append(file_path)
pairs = _load_imgs(file_paths, slice_, color, resize)
shape = list(pairs.shape)
n_faces = shape.pop(0)
shape.insert(0, 2)
shape.insert(0, n_faces // 2)
pairs.shape = shape
return pairs, target, np.array(['Different persons', 'Same person'])
@deprecated("Function 'load_lfw_people' has been deprecated in 0.17 and will be "
"removed in 0.19."
"Use fetch_lfw_people(download_if_missing=False) instead.")
def load_lfw_people(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_people(download_if_missing=False)
Check fetch_lfw_people.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_people(download_if_missing=download_if_missing, **kwargs)
def fetch_lfw_pairs(subset='train', data_home=None, funneled=True, resize=0.5,
color=False, slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) pairs dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Verification: given a pair of two pictures,
a binary classifier must predict whether the two images are from
the same person.
    In the official `README.txt`_ this task is described as the
    "Restricted" task. As I am not sure how to implement the
    "Unrestricted" variant correctly, I left it as unsupported for now.
.. _`README.txt`: http://vis-www.cs.umass.edu/lfw/README.txt
The original images are 250 x 250 pixels, but the default slice and resize
    arguments reduce them to 62 x 47.
Read more in the :ref:`User Guide <labeled_faces_in_the_wild>`.
Parameters
----------
subset : optional, default: 'train'
Select the dataset to load: 'train' for the development training
set, 'test' for the development test set, and '10_folds' for the
official evaluation set that is meant to be used with a 10-folds
cross validation.
data_home : optional, default: None
Specify another download and cache folder for the datasets. By
default all scikit learn data is stored in '~/scikit_learn_data'
subfolders.
funneled : boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize : float, optional, default 0.5
        Ratio used to resize each face picture.
color : boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
        one more dimension than the shape with color = False.
slice_ : optional
Provide a custom 2D slice (height, width) to extract the
        'interesting' part of the jpeg files and avoid statistical
        correlations with the background
download_if_missing : optional, True by default
        If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
Returns
-------
The data is returned as a Bunch object with the following attributes:
data : numpy array of shape (2200, 5828)
Each row corresponds to 2 ravel'd face images of original size 62 x 47
pixels. Changing the ``slice_`` or resize parameters will change the shape
of the output.
pairs : numpy array of shape (2200, 2, 62, 47)
Each row has 2 face images corresponding to same or different person
from the dataset containing 5749 people. Changing the ``slice_`` or resize
parameters will change the shape of the output.
    target : numpy array of shape (2200,)
        Labels associated with each pair of images: the two label values
        correspond to different persons or the same person.
DESCR : string
Description of the Labeled Faces in the Wild (LFW) dataset.
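    Examples
    --------
    A minimal usage sketch (illustrative, not part of the original docstring;
    the data are downloaded on first use and cached under ``data_home``)::

        from sklearn.datasets import fetch_lfw_pairs

        lfw_pairs_train = fetch_lfw_pairs(subset='train')
        # pairs: (n_pairs, 2, height, width); target: (n_pairs,) with 0/1 labels
        print(lfw_pairs_train.pairs.shape, lfw_pairs_train.target.shape)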
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading %s LFW pairs from %s', subset, lfw_home)
# wrap the loader in a memoizing function that will return memmaped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_pairs)
# select the right metadata file according to the requested subset
label_filenames = {
'train': 'pairsDevTrain.txt',
'test': 'pairsDevTest.txt',
'10_folds': 'pairs.txt',
}
if subset not in label_filenames:
raise ValueError("subset='%s' is invalid: should be one of %r" % (
subset, list(sorted(label_filenames.keys()))))
index_file_path = join(lfw_home, label_filenames[subset])
# load and memoize the pairs as np arrays
pairs, target, target_names = load_func(
index_file_path, data_folder_path, resize=resize, color=color,
slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=pairs.reshape(len(pairs), -1), pairs=pairs,
target=target, target_names=target_names,
DESCR="'%s' segment of the LFW pairs dataset" % subset)
@deprecated("Function 'load_lfw_pairs' has been deprecated in 0.17 and will be "
"removed in 0.19."
"Use fetch_lfw_pairs(download_if_missing=False) instead.")
def load_lfw_pairs(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_pairs(download_if_missing=False)
Check fetch_lfw_pairs.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_pairs(download_if_missing=download_if_missing, **kwargs)
| bsd-3-clause |
nhuntwalker/astroML | examples/algorithms/plot_bayesian_blocks.py | 3 | 2706 | """
Bayesian Blocks for Histograms
------------------------------
.. currentmodule:: astroML
Bayesian Blocks is a dynamic histogramming method which optimizes one of
several possible fitness functions to determine an optimal binning for
data, where the bins are not necessarily uniform width. The astroML
implementation is based on [1]_. For more discussion of this technique,
see the blog post at [2]_.
The code below uses a fitness function suitable for event data with possible
repeats. More fitness functions are available: see :mod:`density_estimation`
References
~~~~~~~~~~
.. [1] Scargle, J `et al.` (2012)
http://adsabs.harvard.edu/abs/2012arXiv1207.5578S
.. [2] http://jakevdp.github.com/blog/2012/09/12/dynamic-programming-in-python/
"""
# Author: Jake VanderPlas <vanderplas@astro.washington.edu>
# License: BSD
# The figure is an example from astroML: see http://astroML.github.com
import numpy as np
from scipy import stats
from matplotlib import pyplot as plt
from astroML.plotting import hist
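# (Illustrative note, not part of the original example: assuming astroML is
# installed, the Bayesian Blocks bin edges can also be computed directly with
#
#     from astroML.density_estimation import bayesian_blocks
#     edges = bayesian_blocks(t, fitness='events')
#
# and passed to ``plt.hist(t, bins=edges)``; the ``hist`` wrapper used below
# does the same internally when called with ``bins='blocks'``.)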
# draw a set of variables
np.random.seed(0)
t = np.concatenate([stats.cauchy(-5, 1.8).rvs(500),
stats.cauchy(-4, 0.8).rvs(2000),
stats.cauchy(-1, 0.3).rvs(500),
stats.cauchy(2, 0.8).rvs(1000),
stats.cauchy(4, 1.5).rvs(500)])
# truncate values to a reasonable range
t = t[(t > -15) & (t < 15)]
#------------------------------------------------------------
# First figure: show normal histogram binning
fig = plt.figure(figsize=(10, 4))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15)
ax1 = fig.add_subplot(121)
ax1.hist(t, bins=15, histtype='stepfilled', alpha=0.2, normed=True)
ax1.set_xlabel('t')
ax1.set_ylabel('P(t)')
ax2 = fig.add_subplot(122)
ax2.hist(t, bins=200, histtype='stepfilled', alpha=0.2, normed=True)
ax2.set_xlabel('t')
ax2.set_ylabel('P(t)')
#------------------------------------------------------------
# Second & Third figure: Knuth bins & Bayesian Blocks
fig = plt.figure(figsize=(10, 4))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15)
for bins, title, subplot in zip(['knuth', 'blocks'],
["Knuth's rule", 'Bayesian blocks'],
[121, 122]):
ax = fig.add_subplot(subplot)
# plot a standard histogram in the background, with alpha transparency
hist(t, bins=200, histtype='stepfilled',
alpha=0.2, normed=True, label='standard histogram')
# plot an adaptive-width histogram on top
hist(t, bins=bins, ax=ax, color='black',
histtype='step', normed=True, label=title)
ax.legend(prop=dict(size=12))
ax.set_xlabel('t')
ax.set_ylabel('P(t)')
plt.show()
| bsd-2-clause |
ky822/scikit-learn | sklearn/utils/estimator_checks.py | 9 | 51912 | from __future__ import print_function
import types
import warnings
import sys
import traceback
import inspect
import pickle
from copy import deepcopy
import numpy as np
from scipy import sparse
import struct
from sklearn.externals.six.moves import zip
from sklearn.externals.joblib import hash, Memory
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raises_regex
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_in
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import META_ESTIMATORS
from sklearn.utils.testing import set_random_state
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import assert_warns
from sklearn.base import (clone, ClassifierMixin, RegressorMixin,
TransformerMixin, ClusterMixin, BaseEstimator)
from sklearn.metrics import accuracy_score, adjusted_rand_score, f1_score
from sklearn.lda import LDA
from sklearn.random_projection import BaseRandomProjection
from sklearn.feature_selection import SelectKBest
from sklearn.svm.base import BaseLibSVM
from sklearn.pipeline import make_pipeline
from sklearn.utils.validation import DataConversionWarning
from sklearn.utils import ConvergenceWarning
from sklearn.cross_validation import train_test_split
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris, load_boston, make_blobs
BOSTON = None
CROSS_DECOMPOSITION = ['PLSCanonical', 'PLSRegression', 'CCA', 'PLSSVD']
MULTI_OUTPUT = ['CCA', 'DecisionTreeRegressor', 'ElasticNet',
'ExtraTreeRegressor', 'ExtraTreesRegressor', 'GaussianProcess',
'KNeighborsRegressor', 'KernelRidge', 'Lars', 'Lasso',
'LassoLars', 'LinearRegression', 'MultiTaskElasticNet',
'MultiTaskElasticNetCV', 'MultiTaskLasso', 'MultiTaskLassoCV',
'OrthogonalMatchingPursuit', 'PLSCanonical', 'PLSRegression',
'RANSACRegressor', 'RadiusNeighborsRegressor',
'RandomForestRegressor', 'Ridge', 'RidgeCV']
def _yield_non_meta_checks(name, Estimator):
yield check_estimators_dtypes
yield check_fit_score_takes_y
yield check_dtype_object
yield check_estimators_fit_returns_self
    # Check that all estimators yield informative messages when
# trained on empty datasets
yield check_estimators_empty_data_messages
if name not in CROSS_DECOMPOSITION + ['SpectralEmbedding']:
# SpectralEmbedding is non-deterministic,
# see issue #4236
# cross-decomposition's "transform" returns X and Y
yield check_pipeline_consistency
if name not in ['Imputer']:
# Test that all estimators check their input for NaN's and infs
yield check_estimators_nan_inf
if name not in ['GaussianProcess']:
# FIXME!
# in particular GaussianProcess!
yield check_estimators_overwrite_params
if hasattr(Estimator, 'sparsify'):
yield check_sparsify_coefficients
yield check_estimator_sparse_data
# Test that estimators can be pickled, and once pickled
# give the same answer as before.
yield check_estimators_pickle
def _yield_classifier_checks(name, Classifier):
    # test classifiers can handle non-array data
yield check_classifier_data_not_an_array
# test classifiers trained on a single label always return this label
yield check_classifiers_one_label
yield check_classifiers_classes
yield check_estimators_partial_fit_n_features
# basic consistency testing
yield check_classifiers_train
if (name not in ["MultinomialNB", "LabelPropagation", "LabelSpreading"]
# TODO some complication with -1 label
and name not in ["DecisionTreeClassifier",
"ExtraTreeClassifier"]):
# We don't raise a warning in these classifiers, as
# the column y interface is used by the forests.
yield check_supervised_y_2d
# test if NotFittedError is raised
yield check_estimators_unfitted
if 'class_weight' in Classifier().get_params().keys():
yield check_class_weight_classifiers
def _yield_regressor_checks(name, Regressor):
# TODO: test with intercept
# TODO: test with multiple responses
# basic testing
yield check_regressors_train
yield check_regressor_data_not_an_array
yield check_estimators_partial_fit_n_features
yield check_regressors_no_decision_function
yield check_supervised_y_2d
if name != 'CCA':
# check that the regressor handles int input
yield check_regressors_int
# Test if NotFittedError is raised
yield check_estimators_unfitted
def _yield_transformer_checks(name, Transformer):
# All transformers should either deal with sparse data or raise an
# exception with type TypeError and an intelligible error message
if name not in ['AdditiveChi2Sampler', 'Binarizer', 'Normalizer',
'PLSCanonical', 'PLSRegression', 'CCA', 'PLSSVD']:
yield check_transformer_data_not_an_array
# these don't actually fit the data, so don't raise errors
if name not in ['AdditiveChi2Sampler', 'Binarizer',
'FunctionTransformer', 'Normalizer']:
# basic tests
yield check_transformer_general
yield check_transformers_unfitted
def _yield_clustering_checks(name, Clusterer):
yield check_clusterer_compute_labels_predict
if name not in ('WardAgglomeration', "FeatureAgglomeration"):
# this is clustering on the features
# let's not test that here.
yield check_clustering
yield check_estimators_partial_fit_n_features
def _yield_all_checks(name, Estimator):
for check in _yield_non_meta_checks(name, Estimator):
yield check
if issubclass(Estimator, ClassifierMixin):
for check in _yield_classifier_checks(name, Estimator):
yield check
if issubclass(Estimator, RegressorMixin):
for check in _yield_regressor_checks(name, Estimator):
yield check
if issubclass(Estimator, TransformerMixin):
for check in _yield_transformer_checks(name, Estimator):
yield check
if issubclass(Estimator, ClusterMixin):
for check in _yield_clustering_checks(name, Estimator):
yield check
yield check_fit2d_predict1d
yield check_fit2d_1sample
yield check_fit2d_1feature
yield check_fit1d_1feature
yield check_fit1d_1sample
def check_estimator(Estimator):
"""Check if estimator adheres to sklearn conventions.
    This function will run an extensive test-suite for input validation,
shapes, etc.
Additional tests for classifiers, regressors, clustering or transformers
will be run if the Estimator class inherits from the corresponding mixin
from sklearn.base.
Parameters
----------
Estimator : class
Class to check.
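    Examples
    --------
    A minimal usage sketch (illustrative, not part of the original
    docstring)::

        from sklearn.linear_model import LogisticRegression
        from sklearn.utils.estimator_checks import check_estimator

        check_estimator(LogisticRegression)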
"""
    name = Estimator.__name__
check_parameters_default_constructible(name, Estimator)
for check in _yield_all_checks(name, Estimator):
check(name, Estimator)
def _boston_subset(n_samples=200):
global BOSTON
if BOSTON is None:
boston = load_boston()
X, y = boston.data, boston.target
X, y = shuffle(X, y, random_state=0)
X, y = X[:n_samples], y[:n_samples]
X = StandardScaler().fit_transform(X)
BOSTON = X, y
return BOSTON
def set_fast_parameters(estimator):
# speed up some estimators
params = estimator.get_params()
if ("n_iter" in params
and estimator.__class__.__name__ != "TSNE"):
estimator.set_params(n_iter=5)
if "max_iter" in params:
warnings.simplefilter("ignore", ConvergenceWarning)
if estimator.max_iter is not None:
estimator.set_params(max_iter=min(5, estimator.max_iter))
# LinearSVR
if estimator.__class__.__name__ == 'LinearSVR':
estimator.set_params(max_iter=20)
if "n_resampling" in params:
# randomized lasso
estimator.set_params(n_resampling=5)
if "n_estimators" in params:
# especially gradient boosting with default 100
estimator.set_params(n_estimators=min(5, estimator.n_estimators))
if "max_trials" in params:
# RANSAC
estimator.set_params(max_trials=10)
if "n_init" in params:
# K-Means
estimator.set_params(n_init=2)
if estimator.__class__.__name__ == "SelectFdr":
# be tolerant of noisy datasets (not actually speed)
estimator.set_params(alpha=.5)
if estimator.__class__.__name__ == "TheilSenRegressor":
estimator.max_subpopulation = 100
if isinstance(estimator, BaseRandomProjection):
# Due to the jl lemma and often very few samples, the number
# of components of the random matrix projection will be probably
# greater than the number of features.
# So we impose a smaller number (avoid "auto" mode)
estimator.set_params(n_components=1)
if isinstance(estimator, SelectKBest):
# SelectKBest has a default of k=10
        # which is more features than we have in most cases.
estimator.set_params(k=1)
class NotAnArray(object):
    "An object that is convertible to an array"
def __init__(self, data):
self.data = data
def __array__(self, dtype=None):
return self.data
def _is_32bit():
"""Detect if process is 32bit Python."""
return struct.calcsize('P') * 8 == 32
def check_estimator_sparse_data(name, Estimator):
rng = np.random.RandomState(0)
X = rng.rand(40, 10)
X[X < .8] = 0
X_csr = sparse.csr_matrix(X)
y = (4 * rng.rand(40)).astype(np.int)
for sparse_format in ['csr', 'csc', 'dok', 'lil', 'coo', 'dia', 'bsr']:
X = X_csr.asformat(sparse_format)
# catch deprecation warnings
with warnings.catch_warnings():
if name in ['Scaler', 'StandardScaler']:
estimator = Estimator(with_mean=False)
else:
estimator = Estimator()
set_fast_parameters(estimator)
# fit and predict
try:
estimator.fit(X, y)
if hasattr(estimator, "predict"):
pred = estimator.predict(X)
assert_equal(pred.shape, (X.shape[0],))
if hasattr(estimator, 'predict_proba'):
probs = estimator.predict_proba(X)
assert_equal(probs.shape, (X.shape[0], 4))
except TypeError as e:
if 'sparse' not in repr(e):
                print("Estimator %s doesn't seem to fail gracefully on "
                      "sparse data: the error message should state explicitly "
                      "that sparse input is not supported if this is not the "
                      "case." % name)
raise
except Exception:
print("Estimator %s doesn't seem to fail gracefully on "
"sparse data: it should raise a TypeError if sparse input "
"is explicitly not supported." % name)
raise
def check_dtype_object(name, Estimator):
# check that estimators treat dtype object as numeric if possible
rng = np.random.RandomState(0)
X = rng.rand(40, 10).astype(object)
y = (X[:, 0] * 4).astype(np.int)
y = multioutput_estimator_convert_y_2d(name, y)
with warnings.catch_warnings():
estimator = Estimator()
set_fast_parameters(estimator)
estimator.fit(X, y)
if hasattr(estimator, "predict"):
estimator.predict(X)
if hasattr(estimator, "transform"):
estimator.transform(X)
try:
estimator.fit(X, y.astype(object))
except Exception as e:
if "Unknown label type" not in str(e):
raise
X[0, 0] = {'foo': 'bar'}
msg = "argument must be a string or a number"
assert_raises_regex(TypeError, msg, estimator.fit, X, y)
@ignore_warnings
def check_fit2d_predict1d(name, Estimator):
    # check by fitting a 2d array and predicting with a 1d array
rnd = np.random.RandomState(0)
X = 3 * rnd.uniform(size=(20, 3))
y = X[:, 0].astype(np.int)
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
if hasattr(estimator, "n_components"):
estimator.n_components = 1
if hasattr(estimator, "n_clusters"):
estimator.n_clusters = 1
set_random_state(estimator, 1)
estimator.fit(X, y)
for method in ["predict", "transform", "decision_function",
"predict_proba"]:
if hasattr(estimator, method):
try:
assert_warns(DeprecationWarning,
getattr(estimator, method), X[0])
except ValueError:
pass
@ignore_warnings
def check_fit2d_1sample(name, Estimator):
    # check fitting a 2d array with only 1 sample
rnd = np.random.RandomState(0)
X = 3 * rnd.uniform(size=(1, 10))
y = X[:, 0].astype(np.int)
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
if hasattr(estimator, "n_components"):
estimator.n_components = 1
if hasattr(estimator, "n_clusters"):
estimator.n_clusters = 1
set_random_state(estimator, 1)
try:
estimator.fit(X, y)
except ValueError:
pass
@ignore_warnings
def check_fit2d_1feature(name, Estimator):
    # check fitting a 2d array with only 1 feature
rnd = np.random.RandomState(0)
X = 3 * rnd.uniform(size=(10, 1))
y = X[:, 0].astype(np.int)
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
if hasattr(estimator, "n_components"):
estimator.n_components = 1
if hasattr(estimator, "n_clusters"):
estimator.n_clusters = 1
set_random_state(estimator, 1)
try:
estimator.fit(X, y)
except ValueError:
pass
@ignore_warnings
def check_fit1d_1feature(name, Estimator):
# check fitting 1d array with 1 feature
rnd = np.random.RandomState(0)
X = 3 * rnd.uniform(size=(20))
y = X.astype(np.int)
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
if hasattr(estimator, "n_components"):
estimator.n_components = 1
if hasattr(estimator, "n_clusters"):
estimator.n_clusters = 1
set_random_state(estimator, 1)
try:
estimator.fit(X, y)
except ValueError:
pass
@ignore_warnings
def check_fit1d_1sample(name, Estimator):
    # check fitting a 1d array with only 1 sample
rnd = np.random.RandomState(0)
X = 3 * rnd.uniform(size=(20))
y = np.array([1])
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
if hasattr(estimator, "n_components"):
estimator.n_components = 1
if hasattr(estimator, "n_clusters"):
estimator.n_clusters = 1
set_random_state(estimator, 1)
try:
estimator.fit(X, y)
    except ValueError:
pass
def check_transformer_general(name, Transformer):
X, y = make_blobs(n_samples=30, centers=[[0, 0, 0], [1, 1, 1]],
random_state=0, n_features=2, cluster_std=0.1)
X = StandardScaler().fit_transform(X)
X -= X.min()
_check_transformer(name, Transformer, X, y)
_check_transformer(name, Transformer, X.tolist(), y.tolist())
def check_transformer_data_not_an_array(name, Transformer):
X, y = make_blobs(n_samples=30, centers=[[0, 0, 0], [1, 1, 1]],
random_state=0, n_features=2, cluster_std=0.1)
X = StandardScaler().fit_transform(X)
# We need to make sure that we have non negative data, for things
# like NMF
X -= X.min() - .1
this_X = NotAnArray(X)
this_y = NotAnArray(np.asarray(y))
_check_transformer(name, Transformer, this_X, this_y)
def check_transformers_unfitted(name, Transformer):
X, y = _boston_subset()
with warnings.catch_warnings(record=True):
transformer = Transformer()
assert_raises((AttributeError, ValueError), transformer.transform, X)
def _check_transformer(name, Transformer, X, y):
if name in ('CCA', 'LocallyLinearEmbedding', 'KernelPCA') and _is_32bit():
# Those transformers yield non-deterministic output when executed on
# a 32bit Python. The same transformers are stable on 64bit Python.
# FIXME: try to isolate a minimalistic reproduction case only depending
# on numpy & scipy and/or maybe generate a test dataset that does not
# cause such unstable behaviors.
msg = name + ' is non deterministic on 32bit Python'
raise SkipTest(msg)
n_samples, n_features = np.asarray(X).shape
# catch deprecation warnings
with warnings.catch_warnings(record=True):
transformer = Transformer()
set_random_state(transformer)
set_fast_parameters(transformer)
# fit
if name in CROSS_DECOMPOSITION:
y_ = np.c_[y, y]
y_[::2, 1] *= 2
else:
y_ = y
transformer.fit(X, y_)
X_pred = transformer.fit_transform(X, y=y_)
if isinstance(X_pred, tuple):
for x_pred in X_pred:
assert_equal(x_pred.shape[0], n_samples)
else:
# check for consistent n_samples
assert_equal(X_pred.shape[0], n_samples)
if hasattr(transformer, 'transform'):
if name in CROSS_DECOMPOSITION:
X_pred2 = transformer.transform(X, y_)
X_pred3 = transformer.fit_transform(X, y=y_)
else:
X_pred2 = transformer.transform(X)
X_pred3 = transformer.fit_transform(X, y=y_)
if isinstance(X_pred, tuple) and isinstance(X_pred2, tuple):
for x_pred, x_pred2, x_pred3 in zip(X_pred, X_pred2, X_pred3):
assert_array_almost_equal(
x_pred, x_pred2, 2,
"fit_transform and transform outcomes not consistent in %s"
% Transformer)
assert_array_almost_equal(
x_pred, x_pred3, 2,
"consecutive fit_transform outcomes not consistent in %s"
% Transformer)
else:
assert_array_almost_equal(
X_pred, X_pred2, 2,
"fit_transform and transform outcomes not consistent in %s"
% Transformer)
assert_array_almost_equal(
X_pred, X_pred3, 2,
"consecutive fit_transform outcomes not consistent in %s"
% Transformer)
assert_equal(len(X_pred2), n_samples)
assert_equal(len(X_pred3), n_samples)
# raises error on malformed input for transform
if hasattr(X, 'T'):
# If it's not an array, it does not have a 'T' property
assert_raises(ValueError, transformer.transform, X.T)
@ignore_warnings
def check_pipeline_consistency(name, Estimator):
if name in ('CCA', 'LocallyLinearEmbedding', 'KernelPCA') and _is_32bit():
# Those transformers yield non-deterministic output when executed on
# a 32bit Python. The same transformers are stable on 64bit Python.
# FIXME: try to isolate a minimalistic reproduction case only depending
        # on numpy & scipy and/or maybe generate a test dataset that does not
# cause such unstable behaviors.
msg = name + ' is non deterministic on 32bit Python'
raise SkipTest(msg)
# check that make_pipeline(est) gives same score as est
X, y = make_blobs(n_samples=30, centers=[[0, 0, 0], [1, 1, 1]],
random_state=0, n_features=2, cluster_std=0.1)
X -= X.min()
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator)
pipeline = make_pipeline(estimator)
estimator.fit(X, y)
pipeline.fit(X, y)
funcs = ["score", "fit_transform"]
for func_name in funcs:
func = getattr(estimator, func_name, None)
if func is not None:
func_pipeline = getattr(pipeline, func_name)
result = func(X, y)
result_pipe = func_pipeline(X, y)
assert_array_almost_equal(result, result_pipe)
@ignore_warnings
def check_fit_score_takes_y(name, Estimator):
# check that all estimators accept an optional y
# in fit and score so they can be used in pipelines
rnd = np.random.RandomState(0)
X = rnd.uniform(size=(10, 3))
y = np.arange(10) % 3
y = multioutput_estimator_convert_y_2d(name, y)
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator)
funcs = ["fit", "score", "partial_fit", "fit_predict", "fit_transform"]
for func_name in funcs:
func = getattr(estimator, func_name, None)
if func is not None:
func(X, y)
args = inspect.getargspec(func).args
assert_true(args[2] in ["y", "Y"])
@ignore_warnings
def check_estimators_dtypes(name, Estimator):
rnd = np.random.RandomState(0)
X_train_32 = 3 * rnd.uniform(size=(20, 5)).astype(np.float32)
X_train_64 = X_train_32.astype(np.float64)
X_train_int_64 = X_train_32.astype(np.int64)
X_train_int_32 = X_train_32.astype(np.int32)
y = X_train_int_64[:, 0]
y = multioutput_estimator_convert_y_2d(name, y)
for X_train in [X_train_32, X_train_64, X_train_int_64, X_train_int_32]:
with warnings.catch_warnings(record=True):
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator, 1)
estimator.fit(X_train, y)
for method in ["predict", "transform", "decision_function",
"predict_proba"]:
if hasattr(estimator, method):
getattr(estimator, method)(X_train)
def check_estimators_empty_data_messages(name, Estimator):
e = Estimator()
set_fast_parameters(e)
set_random_state(e, 1)
X_zero_samples = np.empty(0).reshape(0, 3)
# The precise message can change depending on whether X or y is
# validated first. Let us test the type of exception only:
assert_raises(ValueError, e.fit, X_zero_samples, [])
X_zero_features = np.empty(0).reshape(3, 0)
# the following y should be accepted by both classifiers and regressors
# and ignored by unsupervised models
y = multioutput_estimator_convert_y_2d(name, np.array([1, 0, 1]))
    msg = r"0 feature\(s\) \(shape=\(3, 0\)\) while a minimum of \d* is required."
assert_raises_regex(ValueError, msg, e.fit, X_zero_features, y)
def check_estimators_nan_inf(name, Estimator):
rnd = np.random.RandomState(0)
X_train_finite = rnd.uniform(size=(10, 3))
X_train_nan = rnd.uniform(size=(10, 3))
X_train_nan[0, 0] = np.nan
X_train_inf = rnd.uniform(size=(10, 3))
X_train_inf[0, 0] = np.inf
y = np.ones(10)
y[:5] = 0
y = multioutput_estimator_convert_y_2d(name, y)
error_string_fit = "Estimator doesn't check for NaN and inf in fit."
error_string_predict = ("Estimator doesn't check for NaN and inf in"
" predict.")
error_string_transform = ("Estimator doesn't check for NaN and inf in"
" transform.")
for X_train in [X_train_nan, X_train_inf]:
# catch deprecation warnings
with warnings.catch_warnings(record=True):
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator, 1)
# try to fit
try:
estimator.fit(X_train, y)
except ValueError as e:
if 'inf' not in repr(e) and 'NaN' not in repr(e):
print(error_string_fit, Estimator, e)
traceback.print_exc(file=sys.stdout)
raise e
except Exception as exc:
print(error_string_fit, Estimator, exc)
traceback.print_exc(file=sys.stdout)
raise exc
else:
raise AssertionError(error_string_fit, Estimator)
# actually fit
estimator.fit(X_train_finite, y)
# predict
if hasattr(estimator, "predict"):
try:
estimator.predict(X_train)
except ValueError as e:
if 'inf' not in repr(e) and 'NaN' not in repr(e):
print(error_string_predict, Estimator, e)
traceback.print_exc(file=sys.stdout)
raise e
except Exception as exc:
print(error_string_predict, Estimator, exc)
traceback.print_exc(file=sys.stdout)
else:
raise AssertionError(error_string_predict, Estimator)
# transform
if hasattr(estimator, "transform"):
try:
estimator.transform(X_train)
except ValueError as e:
if 'inf' not in repr(e) and 'NaN' not in repr(e):
print(error_string_transform, Estimator, e)
traceback.print_exc(file=sys.stdout)
raise e
except Exception as exc:
print(error_string_transform, Estimator, exc)
traceback.print_exc(file=sys.stdout)
else:
raise AssertionError(error_string_transform, Estimator)
def check_estimators_pickle(name, Estimator):
"""Test that we can pickle all estimators"""
check_methods = ["predict", "transform", "decision_function",
"predict_proba"]
X, y = make_blobs(n_samples=30, centers=[[0, 0, 0], [1, 1, 1]],
random_state=0, n_features=2, cluster_std=0.1)
# some estimators can't do features less than 0
X -= X.min()
# some estimators only take multioutputs
y = multioutput_estimator_convert_y_2d(name, y)
# catch deprecation warnings
with warnings.catch_warnings(record=True):
estimator = Estimator()
set_random_state(estimator)
set_fast_parameters(estimator)
estimator.fit(X, y)
result = dict()
for method in check_methods:
if hasattr(estimator, method):
result[method] = getattr(estimator, method)(X)
# pickle and unpickle!
pickled_estimator = pickle.dumps(estimator)
unpickled_estimator = pickle.loads(pickled_estimator)
for method in result:
unpickled_result = getattr(unpickled_estimator, method)(X)
assert_array_almost_equal(result[method], unpickled_result)
def check_estimators_partial_fit_n_features(name, Alg):
# check if number of features changes between calls to partial_fit.
if not hasattr(Alg, 'partial_fit'):
return
X, y = make_blobs(n_samples=50, random_state=1)
X -= X.min()
with warnings.catch_warnings(record=True):
alg = Alg()
set_fast_parameters(alg)
if isinstance(alg, ClassifierMixin):
classes = np.unique(y)
alg.partial_fit(X, y, classes=classes)
else:
alg.partial_fit(X, y)
assert_raises(ValueError, alg.partial_fit, X[:, :-1], y)
def check_clustering(name, Alg):
X, y = make_blobs(n_samples=50, random_state=1)
X, y = shuffle(X, y, random_state=7)
X = StandardScaler().fit_transform(X)
n_samples, n_features = X.shape
# catch deprecation and neighbors warnings
with warnings.catch_warnings(record=True):
alg = Alg()
set_fast_parameters(alg)
if hasattr(alg, "n_clusters"):
alg.set_params(n_clusters=3)
set_random_state(alg)
if name == 'AffinityPropagation':
alg.set_params(preference=-100)
alg.set_params(max_iter=100)
# fit
alg.fit(X)
# with lists
alg.fit(X.tolist())
assert_equal(alg.labels_.shape, (n_samples,))
pred = alg.labels_
assert_greater(adjusted_rand_score(pred, y), 0.4)
# fit another time with ``fit_predict`` and compare results
    if name == 'SpectralClustering':
# there is no way to make Spectral clustering deterministic :(
return
set_random_state(alg)
with warnings.catch_warnings(record=True):
pred2 = alg.fit_predict(X)
assert_array_equal(pred, pred2)
def check_clusterer_compute_labels_predict(name, Clusterer):
"""Check that predict is invariant of compute_labels"""
X, y = make_blobs(n_samples=20, random_state=0)
clusterer = Clusterer()
if hasattr(clusterer, "compute_labels"):
# MiniBatchKMeans
if hasattr(clusterer, "random_state"):
clusterer.set_params(random_state=0)
X_pred1 = clusterer.fit(X).predict(X)
clusterer.set_params(compute_labels=False)
X_pred2 = clusterer.fit(X).predict(X)
assert_array_equal(X_pred1, X_pred2)
def check_classifiers_one_label(name, Classifier):
error_string_fit = "Classifier can't train when only one class is present."
error_string_predict = ("Classifier can't predict when only one class is "
"present.")
rnd = np.random.RandomState(0)
X_train = rnd.uniform(size=(10, 3))
X_test = rnd.uniform(size=(10, 3))
y = np.ones(10)
# catch deprecation warnings
with warnings.catch_warnings(record=True):
classifier = Classifier()
set_fast_parameters(classifier)
# try to fit
try:
classifier.fit(X_train, y)
except ValueError as e:
if 'class' not in repr(e):
print(error_string_fit, Classifier, e)
traceback.print_exc(file=sys.stdout)
raise e
else:
return
except Exception as exc:
print(error_string_fit, Classifier, exc)
traceback.print_exc(file=sys.stdout)
raise exc
# predict
try:
assert_array_equal(classifier.predict(X_test), y)
except Exception as exc:
print(error_string_predict, Classifier, exc)
raise exc
def check_classifiers_train(name, Classifier):
X_m, y_m = make_blobs(n_samples=300, random_state=0)
X_m, y_m = shuffle(X_m, y_m, random_state=7)
X_m = StandardScaler().fit_transform(X_m)
# generate binary problem from multi-class one
y_b = y_m[y_m != 2]
X_b = X_m[y_m != 2]
for (X, y) in [(X_m, y_m), (X_b, y_b)]:
# catch deprecation warnings
classes = np.unique(y)
n_classes = len(classes)
n_samples, n_features = X.shape
with warnings.catch_warnings(record=True):
classifier = Classifier()
if name in ['BernoulliNB', 'MultinomialNB']:
X -= X.min()
set_fast_parameters(classifier)
set_random_state(classifier)
# raises error on malformed input for fit
assert_raises(ValueError, classifier.fit, X, y[:-1])
# fit
classifier.fit(X, y)
# with lists
classifier.fit(X.tolist(), y.tolist())
assert_true(hasattr(classifier, "classes_"))
y_pred = classifier.predict(X)
assert_equal(y_pred.shape, (n_samples,))
# training set performance
if name not in ['BernoulliNB', 'MultinomialNB']:
assert_greater(accuracy_score(y, y_pred), 0.83)
# raises error on malformed input for predict
assert_raises(ValueError, classifier.predict, X.T)
if hasattr(classifier, "decision_function"):
try:
# decision_function agrees with predict
decision = classifier.decision_function(X)
                if n_classes == 2:
assert_equal(decision.shape, (n_samples,))
dec_pred = (decision.ravel() > 0).astype(np.int)
assert_array_equal(dec_pred, y_pred)
                if (n_classes == 3
and not isinstance(classifier, BaseLibSVM)):
# 1on1 of LibSVM works differently
assert_equal(decision.shape, (n_samples, n_classes))
assert_array_equal(np.argmax(decision, axis=1), y_pred)
# raises error on malformed input
assert_raises(ValueError,
classifier.decision_function, X.T)
# raises error on malformed input for decision_function
assert_raises(ValueError,
classifier.decision_function, X.T)
except NotImplementedError:
pass
if hasattr(classifier, "predict_proba"):
# predict_proba agrees with predict
y_prob = classifier.predict_proba(X)
assert_equal(y_prob.shape, (n_samples, n_classes))
assert_array_equal(np.argmax(y_prob, axis=1), y_pred)
# check that probas for all classes sum to one
assert_array_almost_equal(np.sum(y_prob, axis=1),
np.ones(n_samples))
# raises error on malformed input
assert_raises(ValueError, classifier.predict_proba, X.T)
# raises error on malformed input for predict_proba
assert_raises(ValueError, classifier.predict_proba, X.T)
def check_estimators_fit_returns_self(name, Estimator):
"""Check if self is returned when calling fit"""
X, y = make_blobs(random_state=0, n_samples=9, n_features=4)
y = multioutput_estimator_convert_y_2d(name, y)
# some want non-negative input
X -= X.min()
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator)
assert_true(estimator.fit(X, y) is estimator)
@ignore_warnings
def check_estimators_unfitted(name, Estimator):
"""Check that predict raises an exception in an unfitted estimator.
Unfitted estimators should raise either AttributeError or ValueError.
The specific exception type NotFittedError inherits from both and can
therefore be adequately raised for that purpose.
"""
# Common test for Regressors as well as Classifiers
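    # (Illustrative note, not part of the original: sklearn's NotFittedError is
    # declared roughly as ``class NotFittedError(ValueError, AttributeError)``,
    # so catching either base class below is sufficient for unfitted estimators.)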
X, y = _boston_subset()
with warnings.catch_warnings(record=True):
est = Estimator()
msg = "fit"
if hasattr(est, 'predict'):
assert_raise_message((AttributeError, ValueError), msg,
est.predict, X)
if hasattr(est, 'decision_function'):
assert_raise_message((AttributeError, ValueError), msg,
est.decision_function, X)
if hasattr(est, 'predict_proba'):
assert_raise_message((AttributeError, ValueError), msg,
est.predict_proba, X)
if hasattr(est, 'predict_log_proba'):
assert_raise_message((AttributeError, ValueError), msg,
est.predict_log_proba, X)
def check_supervised_y_2d(name, Estimator):
if "MultiTask" in name:
# These only work on 2d, so this test makes no sense
return
rnd = np.random.RandomState(0)
X = rnd.uniform(size=(10, 3))
y = np.arange(10) % 3
# catch deprecation warnings
with warnings.catch_warnings(record=True):
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator)
# fit
estimator.fit(X, y)
y_pred = estimator.predict(X)
set_random_state(estimator)
# Check that when a 2D y is given, a DataConversionWarning is
# raised
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", DataConversionWarning)
warnings.simplefilter("ignore", RuntimeWarning)
estimator.fit(X, y[:, np.newaxis])
y_pred_2d = estimator.predict(X)
msg = "expected 1 DataConversionWarning, got: %s" % (
", ".join([str(w_x) for w_x in w]))
if name not in MULTI_OUTPUT:
# check that we warned if we don't support multi-output
assert_greater(len(w), 0, msg)
assert_true("DataConversionWarning('A column-vector y"
" was passed when a 1d array was expected" in msg)
assert_array_almost_equal(y_pred.ravel(), y_pred_2d.ravel())
def check_classifiers_classes(name, Classifier):
X, y = make_blobs(n_samples=30, random_state=0, cluster_std=0.1)
X, y = shuffle(X, y, random_state=7)
X = StandardScaler().fit_transform(X)
# We need to make sure that we have non negative data, for things
# like NMF
X -= X.min() - .1
y_names = np.array(["one", "two", "three"])[y]
for y_names in [y_names, y_names.astype('O')]:
if name in ["LabelPropagation", "LabelSpreading"]:
# TODO some complication with -1 label
y_ = y
else:
y_ = y_names
classes = np.unique(y_)
# catch deprecation warnings
with warnings.catch_warnings(record=True):
classifier = Classifier()
if name == 'BernoulliNB':
classifier.set_params(binarize=X.mean())
set_fast_parameters(classifier)
set_random_state(classifier)
# fit
classifier.fit(X, y_)
y_pred = classifier.predict(X)
# training set performance
assert_array_equal(np.unique(y_), np.unique(y_pred))
if np.any(classifier.classes_ != classes):
print("Unexpected classes_ attribute for %r: "
"expected %s, got %s" %
(classifier, classes, classifier.classes_))
def check_regressors_int(name, Regressor):
X, _ = _boston_subset()
X = X[:50]
rnd = np.random.RandomState(0)
y = rnd.randint(3, size=X.shape[0])
y = multioutput_estimator_convert_y_2d(name, y)
rnd = np.random.RandomState(0)
# catch deprecation warnings
with warnings.catch_warnings(record=True):
# separate estimators to control random seeds
regressor_1 = Regressor()
regressor_2 = Regressor()
set_fast_parameters(regressor_1)
set_fast_parameters(regressor_2)
set_random_state(regressor_1)
set_random_state(regressor_2)
if name in CROSS_DECOMPOSITION:
y_ = np.vstack([y, 2 * y + rnd.randint(2, size=len(y))])
y_ = y_.T
else:
y_ = y
# fit
regressor_1.fit(X, y_)
pred1 = regressor_1.predict(X)
regressor_2.fit(X, y_.astype(np.float))
pred2 = regressor_2.predict(X)
assert_array_almost_equal(pred1, pred2, 2, name)
def check_regressors_train(name, Regressor):
X, y = _boston_subset()
y = StandardScaler().fit_transform(y.reshape(-1, 1)) # X is already scaled
y = y.ravel()
y = multioutput_estimator_convert_y_2d(name, y)
rnd = np.random.RandomState(0)
# catch deprecation warnings
with warnings.catch_warnings(record=True):
regressor = Regressor()
set_fast_parameters(regressor)
if not hasattr(regressor, 'alphas') and hasattr(regressor, 'alpha'):
# linear regressors need to set alpha, but not generalized CV ones
regressor.alpha = 0.01
if name == 'PassiveAggressiveRegressor':
regressor.C = 0.01
# raises error on malformed input for fit
assert_raises(ValueError, regressor.fit, X, y[:-1])
# fit
if name in CROSS_DECOMPOSITION:
y_ = np.vstack([y, 2 * y + rnd.randint(2, size=len(y))])
y_ = y_.T
else:
y_ = y
set_random_state(regressor)
regressor.fit(X, y_)
regressor.fit(X.tolist(), y_.tolist())
y_pred = regressor.predict(X)
assert_equal(y_pred.shape, y_.shape)
# TODO: find out why PLS and CCA fail. RANSAC is random
# and furthermore assumes the presence of outliers, hence
# skipped
if name not in ('PLSCanonical', 'CCA', 'RANSACRegressor'):
print(regressor)
assert_greater(regressor.score(X, y_), 0.5)
@ignore_warnings
def check_regressors_no_decision_function(name, Regressor):
# checks whether regressors have decision_function or predict_proba
rng = np.random.RandomState(0)
X = rng.normal(size=(10, 4))
y = multioutput_estimator_convert_y_2d(name, X[:, 0])
regressor = Regressor()
set_fast_parameters(regressor)
if hasattr(regressor, "n_components"):
# FIXME CCA, PLS is not robust to rank 1 effects
regressor.n_components = 1
regressor.fit(X, y)
funcs = ["decision_function", "predict_proba", "predict_log_proba"]
for func_name in funcs:
func = getattr(regressor, func_name, None)
if func is None:
# doesn't have function
continue
# has function. Should raise deprecation warning
msg = func_name
assert_warns_message(DeprecationWarning, msg, func, X)
def check_class_weight_classifiers(name, Classifier):
if name == "NuSVC":
# the sparse version has a parameter that doesn't do anything
raise SkipTest
if name.endswith("NB"):
# NaiveBayes classifiers have a somewhat different interface.
# FIXME SOON!
raise SkipTest
for n_centers in [2, 3]:
# create a very noisy dataset
X, y = make_blobs(centers=n_centers, random_state=0, cluster_std=20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=0)
n_centers = len(np.unique(y_train))
if n_centers == 2:
class_weight = {0: 1000, 1: 0.0001}
else:
class_weight = {0: 1000, 1: 0.0001, 2: 0.0001}
with warnings.catch_warnings(record=True):
classifier = Classifier(class_weight=class_weight)
if hasattr(classifier, "n_iter"):
classifier.set_params(n_iter=100)
if hasattr(classifier, "min_weight_fraction_leaf"):
classifier.set_params(min_weight_fraction_leaf=0.01)
set_random_state(classifier)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
assert_greater(np.mean(y_pred == 0), 0.89)
def check_class_weight_balanced_classifiers(name, Classifier, X_train, y_train,
X_test, y_test, weights):
with warnings.catch_warnings(record=True):
classifier = Classifier()
if hasattr(classifier, "n_iter"):
classifier.set_params(n_iter=100)
set_random_state(classifier)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
classifier.set_params(class_weight='balanced')
classifier.fit(X_train, y_train)
y_pred_balanced = classifier.predict(X_test)
assert_greater(f1_score(y_test, y_pred_balanced, average='weighted'),
f1_score(y_test, y_pred, average='weighted'))
def check_class_weight_balanced_linear_classifier(name, Classifier):
"""Test class weights with non-contiguous class labels."""
X = np.array([[-1.0, -1.0], [-1.0, 0], [-.8, -1.0],
[1.0, 1.0], [1.0, 0.0]])
y = np.array([1, 1, 1, -1, -1])
with warnings.catch_warnings(record=True):
classifier = Classifier()
if hasattr(classifier, "n_iter"):
            # This is a very small dataset; the default n_iter is likely too
            # small to reach convergence
classifier.set_params(n_iter=1000)
set_random_state(classifier)
# Let the model compute the class frequencies
classifier.set_params(class_weight='balanced')
coef_balanced = classifier.fit(X, y).coef_.copy()
# Count each label occurrence to reweight manually
n_samples = len(y)
n_classes = float(len(np.unique(y)))
class_weight = {1: n_samples / (np.sum(y == 1) * n_classes),
-1: n_samples / (np.sum(y == -1) * n_classes)}
classifier.set_params(class_weight=class_weight)
coef_manual = classifier.fit(X, y).coef_.copy()
assert_array_almost_equal(coef_balanced, coef_manual)
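# Worked example of the 'balanced' reweighting used above (added note): each
# class c receives weight n_samples / (n_classes * n_c).  For the toy y above
# (five samples, three labelled +1 and two labelled -1):
#     w(+1) = 5 / (2 * 3) ~= 0.833
#     w(-1) = 5 / (2 * 2) = 1.25
# which is exactly what the manual class_weight dict computes.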
def check_estimators_overwrite_params(name, Estimator):
X, y = make_blobs(random_state=0, n_samples=9)
y = multioutput_estimator_convert_y_2d(name, y)
# some want non-negative input
X -= X.min()
with warnings.catch_warnings(record=True):
# catch deprecation warnings
estimator = Estimator()
set_fast_parameters(estimator)
set_random_state(estimator)
        # Make a physical copy of the original estimator parameters before fitting.
params = estimator.get_params()
original_params = deepcopy(params)
# Fit the model
estimator.fit(X, y)
# Compare the state of the model parameters with the original parameters
new_params = estimator.get_params()
for param_name, original_value in original_params.items():
new_value = new_params[param_name]
# We should never change or mutate the internal state of input
# parameters by default. To check this we use the joblib.hash function
# that introspects recursively any subobjects to compute a checksum.
# The only exception to this rule of immutable constructor parameters
# is possible RandomState instance but in this check we explicitly
# fixed the random_state params recursively to be integer seeds.
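            # (Clarifying note: `hash` below refers to joblib's hash function,
            #  imported at the top of this module alongside `Memory`, not the
            #  Python builtin.)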
        assert_equal(hash(new_value), hash(original_value),
                     "Estimator %s should not change or mutate "
                     "the parameter %s from %s to %s during fit."
                     % (name, param_name, original_value, new_value))
def check_sparsify_coefficients(name, Estimator):
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1],
[-1, -2], [2, 2], [-2, -2]])
y = [1, 1, 1, 2, 2, 2, 3, 3, 3]
est = Estimator()
est.fit(X, y)
pred_orig = est.predict(X)
# test sparsify with dense inputs
est.sparsify()
assert_true(sparse.issparse(est.coef_))
pred = est.predict(X)
assert_array_equal(pred, pred_orig)
# pickle and unpickle with sparse coef_
est = pickle.loads(pickle.dumps(est))
assert_true(sparse.issparse(est.coef_))
pred = est.predict(X)
assert_array_equal(pred, pred_orig)
def check_classifier_data_not_an_array(name, Estimator):
X = np.array([[3, 0], [0, 1], [0, 2], [1, 1], [1, 2], [2, 1]])
y = [1, 1, 1, 2, 2, 2]
y = multioutput_estimator_convert_y_2d(name, y)
check_estimators_data_not_an_array(name, Estimator, X, y)
def check_regressor_data_not_an_array(name, Estimator):
X, y = _boston_subset(n_samples=50)
y = multioutput_estimator_convert_y_2d(name, y)
check_estimators_data_not_an_array(name, Estimator, X, y)
def check_estimators_data_not_an_array(name, Estimator, X, y):
if name in CROSS_DECOMPOSITION:
raise SkipTest
# catch deprecation warnings
with warnings.catch_warnings(record=True):
# separate estimators to control random seeds
estimator_1 = Estimator()
estimator_2 = Estimator()
set_fast_parameters(estimator_1)
set_fast_parameters(estimator_2)
set_random_state(estimator_1)
set_random_state(estimator_2)
y_ = NotAnArray(np.asarray(y))
X_ = NotAnArray(np.asarray(X))
# fit
estimator_1.fit(X_, y_)
pred1 = estimator_1.predict(X_)
estimator_2.fit(X, y)
pred2 = estimator_2.predict(X)
assert_array_almost_equal(pred1, pred2, 2, name)
def check_parameters_default_constructible(name, Estimator):
classifier = LDA()
# test default-constructibility
# get rid of deprecation warnings
with warnings.catch_warnings(record=True):
if name in META_ESTIMATORS:
estimator = Estimator(classifier)
else:
estimator = Estimator()
# test cloning
clone(estimator)
# test __repr__
repr(estimator)
# test that set_params returns self
assert_true(estimator.set_params() is estimator)
# test if init does nothing but set parameters
# this is important for grid_search etc.
# We get the default parameters from init and then
# compare these against the actual values of the attributes.
    # getattr fetches the original __init__, bypassing any deprecation decorator.
init = getattr(estimator.__init__, 'deprecated_original',
estimator.__init__)
try:
args, varargs, kws, defaults = inspect.getargspec(init)
except TypeError:
# init is not a python function.
# true for mixins
return
params = estimator.get_params()
if name in META_ESTIMATORS:
# they need a non-default argument
args = args[2:]
else:
args = args[1:]
if args:
# non-empty list
assert_equal(len(args), len(defaults))
else:
return
for arg, default in zip(args, defaults):
assert_in(type(default), [str, int, float, bool, tuple, type(None),
np.float64, types.FunctionType, Memory])
if arg not in params.keys():
# deprecated parameter, not in get_params
assert_true(default is None)
continue
if isinstance(params[arg], np.ndarray):
assert_array_equal(params[arg], default)
else:
assert_equal(params[arg], default)
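# Illustrative sketch (added, not part of the original module): the check above
# asserts that __init__ does nothing but store its arguments, e.g. for a
# hypothetical estimator
#     class Foo(BaseEstimator):
#         def __init__(self, alpha=1.0):
#             self.alpha = alpha
# Foo().get_params()['alpha'] must equal the declared default of 1.0.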
def multioutput_estimator_convert_y_2d(name, y):
    # Estimators in mono_output_task_error raise ValueError if y is 1-D;
    # convert y to 2-D for those estimators.
if name in (['MultiTaskElasticNetCV', 'MultiTaskLassoCV',
'MultiTaskLasso', 'MultiTaskElasticNet']):
return y[:, np.newaxis]
return y
def check_non_transformer_estimators_n_iter(name, estimator,
multi_output=False):
    # Check that all iterative solvers run for more than one iteration
iris = load_iris()
X, y_ = iris.data, iris.target
if multi_output:
y_ = y_[:, np.newaxis]
set_random_state(estimator, 0)
if name == 'AffinityPropagation':
estimator.fit(X)
else:
estimator.fit(X, y_)
assert_greater(estimator.n_iter_, 0)
def check_transformer_n_iter(name, estimator):
if name in CROSS_DECOMPOSITION:
# Check using default data
X = [[0., 0., 1.], [1., 0., 0.], [2., 2., 2.], [2., 5., 4.]]
y_ = [[0.1, -0.2], [0.9, 1.1], [0.1, -0.5], [0.3, -0.2]]
else:
X, y_ = make_blobs(n_samples=30, centers=[[0, 0, 0], [1, 1, 1]],
random_state=0, n_features=2, cluster_std=0.1)
X -= X.min() - 0.1
set_random_state(estimator, 0)
estimator.fit(X, y_)
# These return a n_iter per component.
if name in CROSS_DECOMPOSITION:
for iter_ in estimator.n_iter_:
assert_greater(iter_, 1)
else:
assert_greater(estimator.n_iter_, 1)
def check_get_params_invariance(name, estimator):
class T(BaseEstimator):
"""Mock classifier
"""
def __init__(self):
pass
def fit(self, X, y):
return self
if name in ('FeatureUnion', 'Pipeline'):
e = estimator([('clf', T())])
    elif name in ('GridSearchCV', 'RandomizedSearchCV'):
return
else:
e = estimator()
shallow_params = e.get_params(deep=False)
deep_params = e.get_params(deep=True)
assert_true(all(item in deep_params.items() for item in
shallow_params.items()))
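# Minimal sketch (added; not part of the original module) of exercising the
# checks above directly on a concrete estimator:
#
#     if __name__ == '__main__':
#         from sklearn.linear_model import Ridge
#         check_regressors_train('Ridge', Ridge)
#         check_get_params_invariance('Ridge', Ridge)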
| bsd-3-clause |
Achuth17/scikit-learn | examples/linear_model/plot_logistic.py | 312 | 1426 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Logit function
=========================================================
Shown in the plot is how the logistic regression would, in this
synthetic dataset, classify values as either 0 or 1,
i.e. class one or two, using the logistic curve.
"""
print(__doc__)
# Code source: Gael Varoquaux
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
# this is our test set, it's just a straight line with some
# Gaussian noise
xmin, xmax = -5, 5
n_samples = 100
np.random.seed(0)
X = np.random.normal(size=n_samples)
y = (X > 0).astype(np.float)
X[X > 0] *= 4
X += .3 * np.random.normal(size=n_samples)
X = X[:, np.newaxis]
# run the classifier
clf = linear_model.LogisticRegression(C=1e5)
clf.fit(X, y)
# and plot the result
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.scatter(X.ravel(), y, color='black', zorder=20)
X_test = np.linspace(-5, 10, 300)
def model(x):
return 1 / (1 + np.exp(-x))
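# The curve plotted below is the fitted P(y=1 | x) = model(coef_ * x + intercept_),
# i.e. the logistic (sigmoid) of the classifier's decision function.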
loss = model(X_test * clf.coef_ + clf.intercept_).ravel()
plt.plot(X_test, loss, color='blue', linewidth=3)
ols = linear_model.LinearRegression()
ols.fit(X, y)
plt.plot(X_test, ols.coef_ * X_test + ols.intercept_, linewidth=1)
plt.axhline(.5, color='.5')
plt.ylabel('y')
plt.xlabel('X')
plt.xticks(())
plt.yticks(())
plt.ylim(-.25, 1.25)
plt.xlim(-4, 10)
plt.show()
| bsd-3-clause |
massmutual/scikit-learn | examples/manifold/plot_compare_methods.py | 259 | 4031 | """
=========================================
Comparison of Manifold Learning methods
=========================================
An illustration of dimensionality reduction on the S-curve dataset
with various manifold learning methods.
For a discussion and comparison of these algorithms, see the
:ref:`manifold module page <manifold>`
For a similar example, where the methods are applied to a
sphere dataset, see :ref:`example_manifold_plot_manifold_sphere.py`
Note that the purpose of MDS is to find a low-dimensional
representation of the data (here 2D) in which the distances respect well
the distances in the original high-dimensional space; unlike other
manifold-learning algorithms, it does not seek an isotropic
representation of the data in the low-dimensional space.
"""
# Author: Jake Vanderplas -- <vanderplas@astro.washington.edu>
print(__doc__)
from time import time
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import NullFormatter
from sklearn import manifold, datasets
# Next line to silence pyflakes. This import is needed.
Axes3D
n_points = 1000
X, color = datasets.samples_generator.make_s_curve(n_points, random_state=0)
n_neighbors = 10
n_components = 2
fig = plt.figure(figsize=(15, 8))
plt.suptitle("Manifold Learning with %i points, %i neighbors"
             % (n_points, n_neighbors), fontsize=14)
try:
# compatibility matplotlib < 1.0
ax = fig.add_subplot(251, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.Spectral)
ax.view_init(4, -72)
except:
ax = fig.add_subplot(251, projection='3d')
plt.scatter(X[:, 0], X[:, 2], c=color, cmap=plt.cm.Spectral)
methods = ['standard', 'ltsa', 'hessian', 'modified']
labels = ['LLE', 'LTSA', 'Hessian LLE', 'Modified LLE']
for i, method in enumerate(methods):
t0 = time()
Y = manifold.LocallyLinearEmbedding(n_neighbors, n_components,
eigen_solver='auto',
method=method).fit_transform(X)
t1 = time()
print("%s: %.2g sec" % (methods[i], t1 - t0))
ax = fig.add_subplot(252 + i)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("%s (%.2g sec)" % (labels[i], t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
Y = manifold.Isomap(n_neighbors, n_components).fit_transform(X)
t1 = time()
print("Isomap: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(257)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("Isomap (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
mds = manifold.MDS(n_components, max_iter=100, n_init=1)
Y = mds.fit_transform(X)
t1 = time()
print("MDS: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(258)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("MDS (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
se = manifold.SpectralEmbedding(n_components=n_components,
n_neighbors=n_neighbors)
Y = se.fit_transform(X)
t1 = time()
print("SpectralEmbedding: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(259)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("SpectralEmbedding (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
tsne = manifold.TSNE(n_components=n_components, init='pca', random_state=0)
Y = tsne.fit_transform(X)
t1 = time()
print("t-SNE: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(2, 5, 10)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("t-SNE (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
plt.show()
| bsd-3-clause |
karstenw/nodebox-pyobjc | examples/Extended Application/sklearn/examples/datasets/plot_iris_dataset.py | 1 | 2738 |
"""
=========================================================
The Iris Dataset
=========================================================
This data set consists of 3 different types of irises'
(Setosa, Versicolour, and Virginica) petal and sepal
length, stored in a 150x4 numpy.ndarray.
The rows are the samples and the columns are:
Sepal Length, Sepal Width, Petal Length and Petal Width.
The plot below uses the first two features.
See `here <https://en.wikipedia.org/wiki/Iris_flower_data_set>`_ for more
information on this dataset.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets
from sklearn.decomposition import PCA
# nodebox section
if __name__ == '__builtin__':
# were in nodebox
import os
import tempfile
W = 800
inset = 20
size(W, 600)
plt.cla()
plt.clf()
plt.close('all')
def tempimage():
fob = tempfile.NamedTemporaryFile(mode='w+b', suffix='.png', delete=False)
fname = fob.name
fob.close()
return fname
imgx = 20
imgy = 0
def pltshow(plt, dpi=150):
global imgx, imgy
temppath = tempimage()
plt.savefig(temppath, dpi=dpi)
dx,dy = imagesize(temppath)
w = min(W,dx)
image(temppath,imgx,imgy,width=w)
imgy = imgy + dy + 20
os.remove(temppath)
size(W, HEIGHT+dy+40)
else:
def pltshow(mplpyplot):
mplpyplot.show()
# nodebox section end
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
plt.figure(2, figsize=(8, 6))
plt.clf()
# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1,
edgecolor='k')
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
# To get a better understanding of the interaction of the dimensions
# plot the first three PCA dimensions
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
X_reduced = PCA(n_components=3).fit_transform(iris.data)
ax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y,
cmap=plt.cm.Set1, edgecolor='k', s=40)
ax.set_title("First three PCA directions")
ax.set_xlabel("1st eigenvector")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("2nd eigenvector")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("3rd eigenvector")
ax.w_zaxis.set_ticklabels([])
#plt.show()
pltshow(plt)
| mit |
jgomezdans/KaFKA | kafka/inference/solvers.py | 1 | 5323 | #!/usr/bin/env python
"""Some solvers"""
# KaFKA A fast Kalman filter implementation for raster based datasets.
# Copyright (c) 2017 J Gomez-Dans. All rights reserved.
#
# This file is part of KaFKA.
#
# KaFKA is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# KaFKA is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with KaFKA. If not, see <http://www.gnu.org/licenses/>.
from collections import namedtuple
import numpy as np
import scipy.sparse as sp
import matplotlib.pyplot as plt
#from utils import matrix_squeeze, spsolve2, reconstruct_array
# Set up logging
import logging
LOG = logging.getLogger(__name__+".solvers")
__author__ = "J Gomez-Dans"
__copyright__ = "Copyright 2017 J Gomez-Dans"
__version__ = "1.0 (09.03.2017)"
__license__ = "GPLv3"
__email__ = "j.gomez-dans@ucl.ac.uk"
def variational_kalman( observations, mask, state_mask, uncertainty, H_matrix, n_params,
x_forecast, P_forecast, P_forecast_inv, the_metadata, approx_diagonal=True):
"""We can just use """
if len(H_matrix) == 2:
non_linear = True
H0, H_matrix_ = H_matrix
else:
H0 = 0.
non_linear = False
R_mat = sp.diags(uncertainty.diagonal()[state_mask.flatten()])
LOG.info("Creating linear problem")
y = observations[state_mask]
y = np.where(mask[state_mask], y, 0.)
y_orig = y*1.
if non_linear:
y = y + H_matrix_.dot(x_forecast) - H0
#Aa = matrix_squeeze (P_forecast_inv, mask=maska.ravel())
A = H_matrix_.T.dot(R_mat).dot(H_matrix_) + P_forecast_inv
b = H_matrix_.T.dot(R_mat).dot(y) + P_forecast_inv.dot (x_forecast)
b = b.astype(np.float32)
A = A.astype(np.float32)
# Here we can either do a spLU of A, and solve, or we can have a first go
# by assuming P_forecast_inv is diagonal, and use the inverse of A_approx as
# a preconditioner
LOG.info("Solving")
AI = sp.linalg.splu (A)
x_analysis = AI.solve (b)
# So retval is the solution vector and A is the Hessian
# (->inv(A) is posterior cov)
fwd_modelled = H_matrix_.dot(x_analysis-x_forecast) + H0
innovations = y_orig - fwd_modelled
#x_analysis = reconstruct_array ( x_analysis_prime, x_forecast,
# mask.ravel(), n_params=n_params)
return x_analysis, None, A, innovations, fwd_modelled
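# (Added note.) The linear system assembled above is the stationarity condition
# of the quadratic variational cost
#     J(x) = (y - H x)^T R (y - H x) + (x - x_f)^T P_f^{-1} (x - x_f),
# with R_mat playing the role of the observation precision (inverse covariance)
# as used here; setting dJ/dx = 0 gives
#     (H^T R H + P_f^{-1}) x = H^T R y + P_f^{-1} x_f,
# i.e. exactly A x = b as solved with the sparse LU factorisation.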
def sort_band_data(H_matrix, observations, uncertainty, mask,
x0, x_forecast, state_mask):
if len(H_matrix) == 2:
non_linear = True
H0, H_matrix_ = H_matrix
else:
H0 = 0.
H_matrix_ = H_matrix
non_linear = False
R = uncertainty.diagonal()[state_mask.flatten()]
y = observations[state_mask]
y = np.where(mask[state_mask], y, 0.)
y_orig = y*1.
if non_linear:
y = y + H_matrix_.dot(x0) - H0
return H_matrix_, H0, R, y, y_orig
def variational_kalman_multiband( observations_b, mask_b, state_mask, uncertainty_b, H_matrix_b, n_params,
x0, x_forecast, P_forecast, P_forecast_inv, the_metadata_b, approx_diagonal=True):
"""We can just use """
n_bands = len(observations_b)
y = []
y_orig = []
H_matrix = []
H0 = []
R_mat = []
for i in range(n_bands):
a, b, c, d, e = sort_band_data(H_matrix_b[i], observations_b[i],
uncertainty_b[i], mask_b[i], x0, x_forecast, state_mask)
H_matrix.append(a)
H0.append(b)
R_mat.append(c)
y.append(d)
y_orig.append(e)
H_matrix_ = sp.vstack(H_matrix)
H0 = np.hstack(H0)
R_mat = sp.diags(np.hstack(R_mat))
y = np.hstack(y)
y_orig = np.hstack(y_orig)
#Aa = matrix_squeeze (P_forecast_inv, mask=maska.ravel())
A = H_matrix_.T.dot(R_mat).dot(H_matrix_) + P_forecast_inv
b = H_matrix_.T.dot(R_mat).dot(y) + P_forecast_inv.dot (x_forecast)
b = b.astype(np.float32)
A = A.astype(np.float32)
# Here we can either do a spLU of A, and solve, or we can have a first go
# by assuming P_forecast_inv is diagonal, and use the inverse of A_approx as
# a preconditioner
LOG.info("Solving")
AI = sp.linalg.splu (A)
x_analysis = AI.solve (b)
# So retval is the solution vector and A is the Hessian
# (->inv(A) is posterior cov)
fwd_modelled = H_matrix_.dot(x_analysis-x_forecast) + H0
innovations = y_orig - fwd_modelled
""" For now I am going to return innovations as y_orig - H0 as
That is what is needed by the Hessian correction. Need to discuss with Jose
What the intention for innovations is and then we can find the best solution"""
innovations = y_orig - H0
#x_analysis = reconstruct_array ( x_analysis_prime, x_forecast,
# mask.ravel(), n_params=n_params)
return x_analysis, None, A, innovations, fwd_modelled
| gpl-3.0 |
CKehl/pylearn2 | pylearn2/models/tests/test_s3c_inference.py | 44 | 14386 | from __future__ import print_function
from pylearn2.models.s3c import S3C
from pylearn2.models.s3c import E_Step_Scan
from pylearn2.models.s3c import Grad_M_Step
from pylearn2.models.s3c import E_Step
from pylearn2.utils import contains_nan
from theano import function
import numpy as np
from theano.compat.six.moves import xrange
import theano.tensor as T
from theano import config
#from pylearn2.utils import serial
def broadcast(mat, shape_0):
rval = mat
if mat.shape[0] != shape_0:
assert mat.shape[0] == 1
rval = np.zeros((shape_0, mat.shape[1]),dtype=mat.dtype)
for i in xrange(shape_0):
rval[i,:] = mat[0,:]
return rval
class Test_S3C_Inference:
def setUp(self):
# Temporarily change config.floatX to float64, as s3c inference
# tests currently fail due to numerical issues for float32.
self.prev_floatX = config.floatX
config.floatX = 'float64'
def tearDown(self):
# Restore previous value of floatX
config.floatX = self.prev_floatX
def __init__(self):
""" gets a small batch of data
sets up an S3C model
"""
# We also have to change the value of config.floatX in __init__.
self.prev_floatX = config.floatX
config.floatX = 'float64'
try:
self.tol = 1e-5
#dataset = serial.load('${PYLEARN2_DATA_PATH}/stl10/stl10_patches/data.pkl')
#X = dataset.get_batch_design(1000)
#X = X[:,0:5]
X = np.random.RandomState([1,2,3]).randn(1000,5)
X -= X.mean()
X /= X.std()
m, D = X.shape
N = 5
#don't give the model an e_step or learning rate so it won't spend years compiling a learn_func
self.model = S3C(nvis = D,
nhid = N,
irange = .1,
init_bias_hid = 0.,
init_B = 3.,
min_B = 1e-8,
max_B = 1000.,
init_alpha = 1., min_alpha = 1e-8, max_alpha = 1000.,
init_mu = 1., e_step = None,
m_step = Grad_M_Step(),
min_bias_hid = -1e30, max_bias_hid = 1e30,
)
self.model.make_pseudoparams()
self.h_new_coeff_schedule = [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1. ]
self.e_step = E_Step_Scan(h_new_coeff_schedule = self.h_new_coeff_schedule)
self.e_step.register_model(self.model)
self.X = X
self.N = N
self.m = m
finally:
config.floatX = self.prev_floatX
def test_match_unrolled(self):
""" tests that inference with scan matches result using unrolled loops """
unrolled_e_step = E_Step(h_new_coeff_schedule = self.h_new_coeff_schedule)
unrolled_e_step.register_model(self.model)
V = T.matrix()
scan_result = self.e_step.infer(V)
unrolled_result = unrolled_e_step.infer(V)
outputs = []
for key in scan_result:
outputs.append(scan_result[key])
outputs.append(unrolled_result[key])
f = function([V], outputs)
outputs = f(self.X)
assert len(outputs) % 2 == 0
for i in xrange(0,len(outputs),2):
assert np.allclose(outputs[i],outputs[i+1])
def test_grad_s(self):
"tests that the gradients with respect to s_i are 0 after doing a mean field update of s_i "
model = self.model
e_step = self.e_step
X = self.X
assert X.shape[0] == self.m
model.test_batch_size = X.shape[0]
init_H = e_step.init_H_hat(V = X)
init_Mu1 = e_step.init_S_hat(V = X)
prev_setting = config.compute_test_value
config.compute_test_value= 'off'
H, Mu1 = function([], outputs=[init_H, init_Mu1])()
config.compute_test_value = prev_setting
H = broadcast(H, self.m)
Mu1 = broadcast(Mu1, self.m)
H = np.cast[config.floatX](self.model.rng.uniform(0.,1.,H.shape))
Mu1 = np.cast[config.floatX](self.model.rng.uniform(-5.,5.,Mu1.shape))
H_var = T.matrix(name='H_var')
H_var.tag.test_value = H
Mu1_var = T.matrix(name='Mu1_var')
Mu1_var.tag.test_value = Mu1
idx = T.iscalar()
idx.tag.test_value = 0
S = e_step.infer_S_hat(V = X, H_hat = H_var, S_hat = Mu1_var)
s_idx = S[:,idx]
s_i_func = function([H_var,Mu1_var,idx],s_idx)
sigma0 = 1. / model.alpha
Sigma1 = e_step.infer_var_s1_hat()
mu0 = T.zeros_like(model.mu)
#by truncated KL, I mean that I am dropping terms that don't depend on H and Mu1
# (they don't affect the outcome of this test and some of them are intractable )
trunc_kl = - model.entropy_hs(H_hat = H_var, var_s0_hat = sigma0, var_s1_hat = Sigma1) + \
model.expected_energy_vhs(V = X, H_hat = H_var, S_hat = Mu1_var, var_s0_hat = sigma0, var_s1_hat = Sigma1)
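        # (Added note: up to terms constant in H_var and Mu1_var, trunc_kl is the
        #  variational free energy E_q[E(v,h,s)] - entropy(q); its gradient with
        #  respect to a freshly updated mean-field parameter should therefore be
        #  ~0 at the coordinate-wise optimum, which is what this test asserts.)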
grad_Mu1 = T.grad(trunc_kl.sum(), Mu1_var)
grad_Mu1_idx = grad_Mu1[:,idx]
grad_func = function([H_var, Mu1_var, idx], grad_Mu1_idx)
for i in xrange(self.N):
Mu1[:,i] = s_i_func(H, Mu1, i)
g = grad_func(H,Mu1,i)
assert not contains_nan(g)
g_abs_max = np.abs(g).max()
if g_abs_max > self.tol:
raise Exception('after mean field step, gradient of kl divergence wrt mean field parameter should be 0, but here the max magnitude of a gradient element is '+str(g_abs_max)+' after updating s_'+str(i))
def test_value_s(self):
"tests that the value of the kl divergence decreases with each update to s_i "
model = self.model
e_step = self.e_step
X = self.X
assert X.shape[0] == self.m
init_H = e_step.init_H_hat(V = X)
init_Mu1 = e_step.init_S_hat(V = X)
prev_setting = config.compute_test_value
config.compute_test_value= 'off'
H, Mu1 = function([], outputs=[init_H, init_Mu1])()
config.compute_test_value = prev_setting
H = broadcast(H, self.m)
Mu1 = broadcast(Mu1, self.m)
H = np.cast[config.floatX](self.model.rng.uniform(0.,1.,H.shape))
Mu1 = np.cast[config.floatX](self.model.rng.uniform(-5.,5.,Mu1.shape))
H_var = T.matrix(name='H_var')
H_var.tag.test_value = H
Mu1_var = T.matrix(name='Mu1_var')
Mu1_var.tag.test_value = Mu1
idx = T.iscalar()
idx.tag.test_value = 0
S = e_step.infer_S_hat( V = X, H_hat = H_var, S_hat = Mu1_var)
s_idx = S[:,idx]
s_i_func = function([H_var,Mu1_var,idx],s_idx)
sigma0 = 1. / model.alpha
Sigma1 = e_step.infer_var_s1_hat()
mu0 = T.zeros_like(model.mu)
#by truncated KL, I mean that I am dropping terms that don't depend on H and Mu1
# (they don't affect the outcome of this test and some of them are intractable )
trunc_kl = - model.entropy_hs(H_hat = H_var, var_s0_hat = sigma0, var_s1_hat = Sigma1) + \
model.expected_energy_vhs(V = X, H_hat = H_var, S_hat = Mu1_var, var_s0_hat = sigma0, var_s1_hat = Sigma1)
trunc_kl_func = function([H_var, Mu1_var], trunc_kl)
for i in xrange(self.N):
prev_kl = trunc_kl_func(H,Mu1)
Mu1[:,i] = s_i_func(H, Mu1, i)
new_kl = trunc_kl_func(H,Mu1)
increase = new_kl - prev_kl
mx = increase.max()
if mx > 1e-3:
raise Exception('after mean field step in s, kl divergence should decrease, but some elements increased by as much as '+str(mx)+' after updating s_'+str(i))
def test_grad_h(self):
"tests that the gradients with respect to h_i are 0 after doing a mean field update of h_i "
model = self.model
e_step = self.e_step
X = self.X
assert X.shape[0] == self.m
init_H = e_step.init_H_hat(V = X)
init_Mu1 = e_step.init_S_hat(V = X)
prev_setting = config.compute_test_value
config.compute_test_value= 'off'
H, Mu1 = function([], outputs=[init_H, init_Mu1])()
config.compute_test_value = prev_setting
H = broadcast(H, self.m)
Mu1 = broadcast(Mu1, self.m)
H = np.cast[config.floatX](self.model.rng.uniform(0.,1.,H.shape))
Mu1 = np.cast[config.floatX](self.model.rng.uniform(-5.,5.,Mu1.shape))
H_var = T.matrix(name='H_var')
H_var.tag.test_value = H
Mu1_var = T.matrix(name='Mu1_var')
Mu1_var.tag.test_value = Mu1
idx = T.iscalar()
idx.tag.test_value = 0
new_H = e_step.infer_H_hat(V = X, H_hat = H_var, S_hat = Mu1_var)
h_idx = new_H[:,idx]
updates_func = function([H_var,Mu1_var,idx], h_idx)
sigma0 = 1. / model.alpha
Sigma1 = e_step.infer_var_s1_hat()
mu0 = T.zeros_like(model.mu)
#by truncated KL, I mean that I am dropping terms that don't depend on H and Mu1
# (they don't affect the outcome of this test and some of them are intractable )
trunc_kl = - model.entropy_hs(H_hat = H_var, var_s0_hat = sigma0, var_s1_hat = Sigma1) + \
model.expected_energy_vhs(V = X, H_hat = H_var, S_hat = Mu1_var, var_s0_hat = sigma0,
var_s1_hat = Sigma1)
grad_H = T.grad(trunc_kl.sum(), H_var)
assert len(grad_H.type.broadcastable) == 2
#from theano.printing import min_informative_str
#print min_informative_str(grad_H)
#grad_H = Print('grad_H')(grad_H)
#grad_H_idx = grad_H[:,idx]
grad_func = function([H_var, Mu1_var], grad_H)
failed = False
for i in xrange(self.N):
rval = updates_func(H, Mu1, i)
H[:,i] = rval
g = grad_func(H,Mu1)[:,i]
assert not contains_nan(g)
g_abs_max = np.abs(g).max()
if g_abs_max > self.tol:
#print "new values of H"
#print H[:,i]
#print "gradient on new values of H"
#print g
failed = True
print('iteration ',i)
#print 'max value of new H: ',H[:,i].max()
#print 'H for failing g: '
failing_h = H[np.abs(g) > self.tol, i]
#print failing_h
#from matplotlib import pyplot as plt
#plt.scatter(H[:,i],g)
#plt.show()
#ignore failures extremely close to h=1
high_mask = failing_h > .001
low_mask = failing_h < .999
mask = high_mask * low_mask
print('masked failures: ',mask.shape[0],' err ',g_abs_max)
if mask.sum() > 0:
print('failing h passing the range mask')
print(failing_h[ mask.astype(bool) ])
raise Exception('after mean field step, gradient of kl divergence'
' wrt freshly updated variational parameter should be 0, '
'but here the max magnitude of a gradient element is '
+str(g_abs_max)+' after updating h_'+str(i))
#assert not failed
def test_value_h(self):
"tests that the value of the kl divergence decreases with each update to h_i "
model = self.model
e_step = self.e_step
X = self.X
assert X.shape[0] == self.m
init_H = e_step.init_H_hat(V = X)
init_Mu1 = e_step.init_S_hat(V = X)
prev_setting = config.compute_test_value
config.compute_test_value= 'off'
H, Mu1 = function([], outputs=[init_H, init_Mu1])()
config.compute_test_value = prev_setting
H = broadcast(H, self.m)
Mu1 = broadcast(Mu1, self.m)
H = np.cast[config.floatX](self.model.rng.uniform(0.,1.,H.shape))
Mu1 = np.cast[config.floatX](self.model.rng.uniform(-5.,5.,Mu1.shape))
H_var = T.matrix(name='H_var')
H_var.tag.test_value = H
Mu1_var = T.matrix(name='Mu1_var')
Mu1_var.tag.test_value = Mu1
idx = T.iscalar()
idx.tag.test_value = 0
newH = e_step.infer_H_hat(V = X, H_hat = H_var, S_hat = Mu1_var)
h_idx = newH[:,idx]
h_i_func = function([H_var,Mu1_var,idx],h_idx)
sigma0 = 1. / model.alpha
Sigma1 = e_step.infer_var_s1_hat()
mu0 = T.zeros_like(model.mu)
#by truncated KL, I mean that I am dropping terms that don't depend on H and Mu1
# (they don't affect the outcome of this test and some of them are intractable )
trunc_kl = - model.entropy_hs(H_hat = H_var, var_s0_hat = sigma0, var_s1_hat = Sigma1) + \
model.expected_energy_vhs(V = X, H_hat = H_var, S_hat = Mu1_var, var_s0_hat = sigma0, var_s1_hat = Sigma1)
trunc_kl_func = function([H_var, Mu1_var], trunc_kl)
for i in xrange(self.N):
prev_kl = trunc_kl_func(H,Mu1)
H[:,i] = h_i_func(H, Mu1, i)
#we don't update mu, the whole point of the split e step is we don't have to
new_kl = trunc_kl_func(H,Mu1)
increase = new_kl - prev_kl
print('failures after iteration ',i,': ',(increase > self.tol).sum())
mx = increase.max()
if mx > 1e-4:
print('increase amounts of failing examples:')
print(increase[increase > self.tol])
print('failing H:')
print(H[increase > self.tol,:])
print('failing Mu1:')
print(Mu1[increase > self.tol,:])
print('failing V:')
print(X[increase > self.tol,:])
raise Exception('after mean field step in h, kl divergence should decrease, but some elements increased by as much as '+str(mx)+' after updating h_'+str(i))
if __name__ == '__main__':
obj = Test_S3C_Inference()
#obj.test_grad_h()
#obj.test_grad_s()
#obj.test_value_s()
obj.test_value_h()
| bsd-3-clause |
heli522/scikit-learn | sklearn/utils/arpack.py | 265 | 64837 | """
This contains a copy of the future version of
scipy.sparse.linalg.eigen.arpack.eigsh
It's an upgraded wrapper of the ARPACK library which
allows the use of shift-invert mode for symmetric matrices.
Find a few eigenvectors and eigenvalues of a matrix.
Uses ARPACK: http://www.caam.rice.edu/software/ARPACK/
"""
# Wrapper implementation notes
#
# ARPACK Entry Points
# -------------------
# The entry points to ARPACK are
# - (s,d)seupd : single and double precision symmetric matrix
# - (s,d,c,z)neupd: single,double,complex,double complex general matrix
# This wrapper puts the *neupd (general matrix) interfaces in eigs()
# and the *seupd (symmetric matrix) in eigsh().
# There is no Hermitian complex/double complex interface.
# To find eigenvalues of a Hermitian matrix you
# must use eigs() and not eigsh()
# It might be desirable to handle the Hermitian case differently
# and, for example, return real eigenvalues.
# Number of eigenvalues returned and complex eigenvalues
# ------------------------------------------------------
# The ARPACK nonsymmetric real and double interface (s,d)naupd returns
# eigenvalues and eigenvectors in real (float,double) arrays.
# Since the eigenvalues and eigenvectors are, in general, complex,
# ARPACK puts the real and imaginary parts in consecutive entries
# in real-valued arrays. This wrapper puts the real entries
# into complex data types and attempts to return the requested eigenvalues
# and eigenvectors.
# Solver modes
# ------------
# ARPACK can handle shifted and shift-invert computations
# for eigenvalues by providing a shift (sigma) and a solver.
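# Usage sketch (added; illustrative only, assuming this module is importable
# as sklearn.utils.arpack):
#
#     import numpy as np
#     from scipy.sparse import diags
#     from sklearn.utils.arpack import eigsh
#
#     A = diags(np.arange(1., 101.))        # symmetric 100x100 test matrix
#     vals, vecs = eigsh(A, k=3, sigma=0)   # shift-invert: eigenvalues nearest 0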
__docformat__ = "restructuredtext en"
__all__ = ['eigs', 'eigsh', 'svds', 'ArpackError', 'ArpackNoConvergence']
import warnings
from scipy.sparse.linalg.eigen.arpack import _arpack
import numpy as np
from scipy.sparse.linalg.interface import aslinearoperator, LinearOperator
from scipy.sparse import identity, isspmatrix, isspmatrix_csr
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.sputils import isdense
from scipy.sparse.linalg import gmres, splu
import scipy
from distutils.version import LooseVersion
_type_conv = {'f': 's', 'd': 'd', 'F': 'c', 'D': 'z'}
_ndigits = {'f': 5, 'd': 12, 'F': 5, 'D': 12}
DNAUPD_ERRORS = {
0: "Normal exit.",
1: "Maximum number of iterations taken. "
"All possible eigenvalues of OP has been found. IPARAM(5) "
"returns the number of wanted converged Ritz values.",
2: "No longer an informational error. Deprecated starting "
"with release 2 of ARPACK.",
3: "No shifts could be applied during a cycle of the "
"Implicitly restarted Arnoldi iteration. One possibility "
"is to increase the size of NCV relative to NEV. ",
-1: "N must be positive.",
-2: "NEV must be positive.",
-3: "NCV-NEV >= 2 and less than or equal to N.",
-4: "The maximum number of Arnoldi update iterations allowed "
"must be greater than zero.",
-5: " WHICH must be one of 'LM', 'SM', 'LR', 'SR', 'LI', 'SI'",
-6: "BMAT must be one of 'I' or 'G'.",
-7: "Length of private work array WORKL is not sufficient.",
-8: "Error return from LAPACK eigenvalue calculation;",
-9: "Starting vector is zero.",
-10: "IPARAM(7) must be 1,2,3,4.",
-11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible.",
-12: "IPARAM(1) must be equal to 0 or 1.",
-13: "NEV and WHICH = 'BE' are incompatible.",
-9999: "Could not build an Arnoldi factorization. "
"IPARAM(5) returns the size of the current Arnoldi "
"factorization. The user is advised to check that "
"enough workspace and array storage has been allocated."
}
SNAUPD_ERRORS = DNAUPD_ERRORS
ZNAUPD_ERRORS = DNAUPD_ERRORS.copy()
ZNAUPD_ERRORS[-10] = "IPARAM(7) must be 1,2,3."
CNAUPD_ERRORS = ZNAUPD_ERRORS
DSAUPD_ERRORS = {
0: "Normal exit.",
1: "Maximum number of iterations taken. "
"All possible eigenvalues of OP has been found.",
2: "No longer an informational error. Deprecated starting with "
"release 2 of ARPACK.",
3: "No shifts could be applied during a cycle of the Implicitly "
"restarted Arnoldi iteration. One possibility is to increase "
"the size of NCV relative to NEV. ",
-1: "N must be positive.",
-2: "NEV must be positive.",
-3: "NCV must be greater than NEV and less than or equal to N.",
-4: "The maximum number of Arnoldi update iterations allowed "
"must be greater than zero.",
-5: "WHICH must be one of 'LM', 'SM', 'LA', 'SA' or 'BE'.",
-6: "BMAT must be one of 'I' or 'G'.",
-7: "Length of private work array WORKL is not sufficient.",
-8: "Error return from trid. eigenvalue calculation; "
"Informational error from LAPACK routine dsteqr .",
-9: "Starting vector is zero.",
-10: "IPARAM(7) must be 1,2,3,4,5.",
-11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible.",
-12: "IPARAM(1) must be equal to 0 or 1.",
-13: "NEV and WHICH = 'BE' are incompatible. ",
-9999: "Could not build an Arnoldi factorization. "
"IPARAM(5) returns the size of the current Arnoldi "
"factorization. The user is advised to check that "
"enough workspace and array storage has been allocated.",
}
SSAUPD_ERRORS = DSAUPD_ERRORS
DNEUPD_ERRORS = {
0: "Normal exit.",
1: "The Schur form computed by LAPACK routine dlahqr "
"could not be reordered by LAPACK routine dtrsen. "
"Re-enter subroutine dneupd with IPARAM(5)NCV and "
"increase the size of the arrays DR and DI to have "
"dimension at least dimension NCV and allocate at least NCV "
"columns for Z. NOTE: Not necessary if Z and V share "
"the same space. Please notify the authors if this error "
"occurs.",
-1: "N must be positive.",
-2: "NEV must be positive.",
-3: "NCV-NEV >= 2 and less than or equal to N.",
-5: "WHICH must be one of 'LM', 'SM', 'LR', 'SR', 'LI', 'SI'",
-6: "BMAT must be one of 'I' or 'G'.",
-7: "Length of private work WORKL array is not sufficient.",
-8: "Error return from calculation of a real Schur form. "
"Informational error from LAPACK routine dlahqr .",
-9: "Error return from calculation of eigenvectors. "
"Informational error from LAPACK routine dtrevc.",
-10: "IPARAM(7) must be 1,2,3,4.",
-11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible.",
-12: "HOWMNY = 'S' not yet implemented",
-13: "HOWMNY must be one of 'A' or 'P' if RVEC = .true.",
-14: "DNAUPD did not find any eigenvalues to sufficient "
"accuracy.",
-15: "DNEUPD got a different count of the number of converged "
"Ritz values than DNAUPD got. This indicates the user "
"probably made an error in passing data from DNAUPD to "
"DNEUPD or that the data was modified before entering "
"DNEUPD",
}
SNEUPD_ERRORS = DNEUPD_ERRORS.copy()
SNEUPD_ERRORS[1] = ("The Schur form computed by LAPACK routine slahqr "
"could not be reordered by LAPACK routine strsen . "
"Re-enter subroutine dneupd with IPARAM(5)=NCV and "
"increase the size of the arrays DR and DI to have "
"dimension at least dimension NCV and allocate at least "
"NCV columns for Z. NOTE: Not necessary if Z and V share "
"the same space. Please notify the authors if this error "
"occurs.")
SNEUPD_ERRORS[-14] = ("SNAUPD did not find any eigenvalues to sufficient "
"accuracy.")
SNEUPD_ERRORS[-15] = ("SNEUPD got a different count of the number of "
"converged Ritz values than SNAUPD got. This indicates "
"the user probably made an error in passing data from "
"SNAUPD to SNEUPD or that the data was modified before "
"entering SNEUPD")
ZNEUPD_ERRORS = {0: "Normal exit.",
1: "The Schur form computed by LAPACK routine csheqr "
"could not be reordered by LAPACK routine ztrsen. "
"Re-enter subroutine zneupd with IPARAM(5)=NCV and "
"increase the size of the array D to have "
"dimension at least dimension NCV and allocate at least "
"NCV columns for Z. NOTE: Not necessary if Z and V share "
"the same space. Please notify the authors if this error "
"occurs.",
-1: "N must be positive.",
-2: "NEV must be positive.",
-3: "NCV-NEV >= 1 and less than or equal to N.",
-5: "WHICH must be one of 'LM', 'SM', 'LR', 'SR', 'LI', 'SI'",
-6: "BMAT must be one of 'I' or 'G'.",
-7: "Length of private work WORKL array is not sufficient.",
-8: "Error return from LAPACK eigenvalue calculation. "
"This should never happened.",
-9: "Error return from calculation of eigenvectors. "
"Informational error from LAPACK routine ztrevc.",
-10: "IPARAM(7) must be 1,2,3",
-11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible.",
-12: "HOWMNY = 'S' not yet implemented",
-13: "HOWMNY must be one of 'A' or 'P' if RVEC = .true.",
-14: "ZNAUPD did not find any eigenvalues to sufficient "
"accuracy.",
-15: "ZNEUPD got a different count of the number of "
"converged Ritz values than ZNAUPD got. This "
"indicates the user probably made an error in passing "
"data from ZNAUPD to ZNEUPD or that the data was "
"modified before entering ZNEUPD"}
CNEUPD_ERRORS = ZNEUPD_ERRORS.copy()
CNEUPD_ERRORS[-14] = ("CNAUPD did not find any eigenvalues to sufficient "
"accuracy.")
CNEUPD_ERRORS[-15] = ("CNEUPD got a different count of the number of "
"converged Ritz values than CNAUPD got. This indicates "
"the user probably made an error in passing data from "
"CNAUPD to CNEUPD or that the data was modified before "
"entering CNEUPD")
DSEUPD_ERRORS = {
0: "Normal exit.",
-1: "N must be positive.",
-2: "NEV must be positive.",
-3: "NCV must be greater than NEV and less than or equal to N.",
-5: "WHICH must be one of 'LM', 'SM', 'LA', 'SA' or 'BE'.",
-6: "BMAT must be one of 'I' or 'G'.",
-7: "Length of private work WORKL array is not sufficient.",
-8: ("Error return from trid. eigenvalue calculation; "
"Information error from LAPACK routine dsteqr."),
-9: "Starting vector is zero.",
-10: "IPARAM(7) must be 1,2,3,4,5.",
-11: "IPARAM(7) = 1 and BMAT = 'G' are incompatible.",
-12: "NEV and WHICH = 'BE' are incompatible.",
-14: "DSAUPD did not find any eigenvalues to sufficient accuracy.",
-15: "HOWMNY must be one of 'A' or 'S' if RVEC = .true.",
-16: "HOWMNY = 'S' not yet implemented",
-17: ("DSEUPD got a different count of the number of converged "
"Ritz values than DSAUPD got. This indicates the user "
"probably made an error in passing data from DSAUPD to "
"DSEUPD or that the data was modified before entering "
"DSEUPD.")
}
SSEUPD_ERRORS = DSEUPD_ERRORS.copy()
SSEUPD_ERRORS[-14] = ("SSAUPD did not find any eigenvalues "
"to sufficient accuracy.")
SSEUPD_ERRORS[-17] = ("SSEUPD got a different count of the number of "
"converged "
"Ritz values than SSAUPD got. This indicates the user "
"probably made an error in passing data from SSAUPD to "
"SSEUPD or that the data was modified before entering "
"SSEUPD.")
_SAUPD_ERRORS = {'d': DSAUPD_ERRORS,
's': SSAUPD_ERRORS}
_NAUPD_ERRORS = {'d': DNAUPD_ERRORS,
's': SNAUPD_ERRORS,
'z': ZNAUPD_ERRORS,
'c': CNAUPD_ERRORS}
_SEUPD_ERRORS = {'d': DSEUPD_ERRORS,
's': SSEUPD_ERRORS}
_NEUPD_ERRORS = {'d': DNEUPD_ERRORS,
's': SNEUPD_ERRORS,
'z': ZNEUPD_ERRORS,
'c': CNEUPD_ERRORS}
# accepted values of parameter WHICH in _SEUPD
_SEUPD_WHICH = ['LM', 'SM', 'LA', 'SA', 'BE']
# accepted values of parameter WHICH in _NAUPD
_NEUPD_WHICH = ['LM', 'SM', 'LR', 'SR', 'LI', 'SI']
class ArpackError(RuntimeError):
"""
ARPACK error
"""
def __init__(self, info, infodict=_NAUPD_ERRORS):
msg = infodict.get(info, "Unknown error")
RuntimeError.__init__(self, "ARPACK error %d: %s" % (info, msg))
class ArpackNoConvergence(ArpackError):
"""
ARPACK iteration did not converge
Attributes
----------
eigenvalues : ndarray
Partial result. Converged eigenvalues.
eigenvectors : ndarray
Partial result. Converged eigenvectors.
"""
def __init__(self, msg, eigenvalues, eigenvectors):
ArpackError.__init__(self, -1, {-1: msg})
self.eigenvalues = eigenvalues
self.eigenvectors = eigenvectors
class _ArpackParams(object):
def __init__(self, n, k, tp, mode=1, sigma=None,
ncv=None, v0=None, maxiter=None, which="LM", tol=0):
if k <= 0:
raise ValueError("k must be positive, k=%d" % k)
if maxiter is None:
maxiter = n * 10
if maxiter <= 0:
raise ValueError("maxiter must be positive, maxiter=%d" % maxiter)
if tp not in 'fdFD':
raise ValueError("matrix type must be 'f', 'd', 'F', or 'D'")
if v0 is not None:
# ARPACK overwrites its initial resid, make a copy
self.resid = np.array(v0, copy=True)
info = 1
else:
self.resid = np.zeros(n, tp)
info = 0
if sigma is None:
#sigma not used
self.sigma = 0
else:
self.sigma = sigma
if ncv is None:
ncv = 2 * k + 1
ncv = min(ncv, n)
self.v = np.zeros((n, ncv), tp) # holds Ritz vectors
self.iparam = np.zeros(11, "int")
# set solver mode and parameters
ishfts = 1
self.mode = mode
self.iparam[0] = ishfts
self.iparam[2] = maxiter
self.iparam[3] = 1
self.iparam[6] = mode
self.n = n
self.tol = tol
self.k = k
self.maxiter = maxiter
self.ncv = ncv
self.which = which
self.tp = tp
self.info = info
self.converged = False
self.ido = 0
def _raise_no_convergence(self):
msg = "No convergence (%d iterations, %d/%d eigenvectors converged)"
k_ok = self.iparam[4]
num_iter = self.iparam[2]
try:
ev, vec = self.extract(True)
except ArpackError as err:
msg = "%s [%s]" % (msg, err)
ev = np.zeros((0,))
vec = np.zeros((self.n, 0))
k_ok = 0
raise ArpackNoConvergence(msg % (num_iter, k_ok, self.k), ev, vec)
class _SymmetricArpackParams(_ArpackParams):
def __init__(self, n, k, tp, matvec, mode=1, M_matvec=None,
Minv_matvec=None, sigma=None,
ncv=None, v0=None, maxiter=None, which="LM", tol=0):
# The following modes are supported:
# mode = 1:
# Solve the standard eigenvalue problem:
# A*x = lambda*x :
# A - symmetric
# Arguments should be
# matvec = left multiplication by A
# M_matvec = None [not used]
# Minv_matvec = None [not used]
#
# mode = 2:
# Solve the general eigenvalue problem:
# A*x = lambda*M*x
# A - symmetric
# M - symmetric positive definite
# Arguments should be
# matvec = left multiplication by A
# M_matvec = left multiplication by M
# Minv_matvec = left multiplication by M^-1
#
# mode = 3:
# Solve the general eigenvalue problem in shift-invert mode:
# A*x = lambda*M*x
# A - symmetric
# M - symmetric positive semi-definite
# Arguments should be
# matvec = None [not used]
# M_matvec = left multiplication by M
# or None, if M is the identity
# Minv_matvec = left multiplication by [A-sigma*M]^-1
#
# mode = 4:
# Solve the general eigenvalue problem in Buckling mode:
# A*x = lambda*AG*x
# A - symmetric positive semi-definite
# AG - symmetric indefinite
# Arguments should be
# matvec = left multiplication by A
# M_matvec = None [not used]
# Minv_matvec = left multiplication by [A-sigma*AG]^-1
#
# mode = 5:
# Solve the general eigenvalue problem in Cayley-transformed mode:
# A*x = lambda*M*x
# A - symmetric
# M - symmetric positive semi-definite
# Arguments should be
# matvec = left multiplication by A
# M_matvec = left multiplication by M
# or None, if M is the identity
# Minv_matvec = left multiplication by [A-sigma*M]^-1
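        # (Added note: from the user-facing eigsh() these modes are selected
        #  implicitly -- sigma=None gives mode 1 (or mode 2 with a general M),
        #  while supplying sigma with mode='normal', 'buckling' or 'cayley'
        #  maps to modes 3, 4 and 5 respectively.)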
if mode == 1:
if matvec is None:
raise ValueError("matvec must be specified for mode=1")
if M_matvec is not None:
raise ValueError("M_matvec cannot be specified for mode=1")
if Minv_matvec is not None:
raise ValueError("Minv_matvec cannot be specified for mode=1")
self.OP = matvec
self.B = lambda x: x
self.bmat = 'I'
elif mode == 2:
if matvec is None:
raise ValueError("matvec must be specified for mode=2")
if M_matvec is None:
raise ValueError("M_matvec must be specified for mode=2")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified for mode=2")
self.OP = lambda x: Minv_matvec(matvec(x))
self.OPa = Minv_matvec
self.OPb = matvec
self.B = M_matvec
self.bmat = 'G'
elif mode == 3:
if matvec is not None:
raise ValueError("matvec must not be specified for mode=3")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified for mode=3")
if M_matvec is None:
self.OP = Minv_matvec
self.OPa = Minv_matvec
self.B = lambda x: x
self.bmat = 'I'
else:
self.OP = lambda x: Minv_matvec(M_matvec(x))
self.OPa = Minv_matvec
self.B = M_matvec
self.bmat = 'G'
elif mode == 4:
if matvec is None:
raise ValueError("matvec must be specified for mode=4")
if M_matvec is not None:
raise ValueError("M_matvec must not be specified for mode=4")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified for mode=4")
self.OPa = Minv_matvec
self.OP = lambda x: self.OPa(matvec(x))
self.B = matvec
self.bmat = 'G'
elif mode == 5:
if matvec is None:
raise ValueError("matvec must be specified for mode=5")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified for mode=5")
self.OPa = Minv_matvec
self.A_matvec = matvec
if M_matvec is None:
self.OP = lambda x: Minv_matvec(matvec(x) + sigma * x)
self.B = lambda x: x
self.bmat = 'I'
else:
self.OP = lambda x: Minv_matvec(matvec(x)
+ sigma * M_matvec(x))
self.B = M_matvec
self.bmat = 'G'
else:
raise ValueError("mode=%i not implemented" % mode)
if which not in _SEUPD_WHICH:
raise ValueError("which must be one of %s"
% ' '.join(_SEUPD_WHICH))
if k >= n:
raise ValueError("k must be less than rank(A), k=%d" % k)
_ArpackParams.__init__(self, n, k, tp, mode, sigma,
ncv, v0, maxiter, which, tol)
if self.ncv > n or self.ncv <= k:
raise ValueError("ncv must be k<ncv<=n, ncv=%s" % self.ncv)
self.workd = np.zeros(3 * n, self.tp)
self.workl = np.zeros(self.ncv * (self.ncv + 8), self.tp)
ltr = _type_conv[self.tp]
if ltr not in ["s", "d"]:
raise ValueError("Input matrix is not real-valued.")
self._arpack_solver = _arpack.__dict__[ltr + 'saupd']
self._arpack_extract = _arpack.__dict__[ltr + 'seupd']
self.iterate_infodict = _SAUPD_ERRORS[ltr]
self.extract_infodict = _SEUPD_ERRORS[ltr]
self.ipntr = np.zeros(11, "int")
def iterate(self):
self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info = \
self._arpack_solver(self.ido, self.bmat, self.which, self.k,
self.tol, self.resid, self.v, self.iparam,
self.ipntr, self.workd, self.workl, self.info)
xslice = slice(self.ipntr[0] - 1, self.ipntr[0] - 1 + self.n)
yslice = slice(self.ipntr[1] - 1, self.ipntr[1] - 1 + self.n)
if self.ido == -1:
# initialization
self.workd[yslice] = self.OP(self.workd[xslice])
elif self.ido == 1:
# compute y = Op*x
if self.mode == 1:
self.workd[yslice] = self.OP(self.workd[xslice])
elif self.mode == 2:
self.workd[xslice] = self.OPb(self.workd[xslice])
self.workd[yslice] = self.OPa(self.workd[xslice])
elif self.mode == 5:
Bxslice = slice(self.ipntr[2] - 1, self.ipntr[2] - 1 + self.n)
Ax = self.A_matvec(self.workd[xslice])
self.workd[yslice] = self.OPa(Ax + (self.sigma *
self.workd[Bxslice]))
else:
Bxslice = slice(self.ipntr[2] - 1, self.ipntr[2] - 1 + self.n)
self.workd[yslice] = self.OPa(self.workd[Bxslice])
elif self.ido == 2:
self.workd[yslice] = self.B(self.workd[xslice])
elif self.ido == 3:
raise ValueError("ARPACK requested user shifts. Assure ISHIFT==0")
else:
self.converged = True
if self.info == 0:
pass
elif self.info == 1:
self._raise_no_convergence()
else:
raise ArpackError(self.info, infodict=self.iterate_infodict)
def extract(self, return_eigenvectors):
rvec = return_eigenvectors
ierr = 0
howmny = 'A' # return all eigenvectors
sselect = np.zeros(self.ncv, 'int') # unused
d, z, ierr = self._arpack_extract(rvec, howmny, sselect, self.sigma,
self.bmat, self.which, self.k,
self.tol, self.resid, self.v,
self.iparam[0:7], self.ipntr,
self.workd[0:2 * self.n],
self.workl, ierr)
if ierr != 0:
raise ArpackError(ierr, infodict=self.extract_infodict)
k_ok = self.iparam[4]
d = d[:k_ok]
z = z[:, :k_ok]
if return_eigenvectors:
return d, z
else:
return d
class _UnsymmetricArpackParams(_ArpackParams):
def __init__(self, n, k, tp, matvec, mode=1, M_matvec=None,
Minv_matvec=None, sigma=None,
ncv=None, v0=None, maxiter=None, which="LM", tol=0):
# The following modes are supported:
# mode = 1:
# Solve the standard eigenvalue problem:
# A*x = lambda*x
# A - square matrix
# Arguments should be
# matvec = left multiplication by A
# M_matvec = None [not used]
# Minv_matvec = None [not used]
#
# mode = 2:
# Solve the generalized eigenvalue problem:
# A*x = lambda*M*x
# A - square matrix
# M - symmetric, positive semi-definite
# Arguments should be
# matvec = left multiplication by A
# M_matvec = left multiplication by M
# Minv_matvec = left multiplication by M^-1
#
# mode = 3,4:
# Solve the general eigenvalue problem in shift-invert mode:
# A*x = lambda*M*x
# A - square matrix
# M - symmetric, positive semi-definite
# Arguments should be
# matvec = None [not used]
# M_matvec = left multiplication by M
# or None, if M is the identity
# Minv_matvec = left multiplication by [A-sigma*M]^-1
# if A is real and mode==3, use the real part of Minv_matvec
# if A is real and mode==4, use the imag part of Minv_matvec
# if A is complex and mode==3,
# use real and imag parts of Minv_matvec
if mode == 1:
if matvec is None:
raise ValueError("matvec must be specified for mode=1")
if M_matvec is not None:
raise ValueError("M_matvec cannot be specified for mode=1")
if Minv_matvec is not None:
raise ValueError("Minv_matvec cannot be specified for mode=1")
self.OP = matvec
self.B = lambda x: x
self.bmat = 'I'
elif mode == 2:
if matvec is None:
raise ValueError("matvec must be specified for mode=2")
if M_matvec is None:
raise ValueError("M_matvec must be specified for mode=2")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified for mode=2")
self.OP = lambda x: Minv_matvec(matvec(x))
self.OPa = Minv_matvec
self.OPb = matvec
self.B = M_matvec
self.bmat = 'G'
elif mode in (3, 4):
if matvec is None:
raise ValueError("matvec must be specified "
"for mode in (3,4)")
if Minv_matvec is None:
raise ValueError("Minv_matvec must be specified "
"for mode in (3,4)")
self.matvec = matvec
if tp in 'DF': # complex type
if mode == 3:
self.OPa = Minv_matvec
else:
raise ValueError("mode=4 invalid for complex A")
else: # real type
if mode == 3:
self.OPa = lambda x: np.real(Minv_matvec(x))
else:
self.OPa = lambda x: np.imag(Minv_matvec(x))
if M_matvec is None:
self.B = lambda x: x
self.bmat = 'I'
self.OP = self.OPa
else:
self.B = M_matvec
self.bmat = 'G'
self.OP = lambda x: self.OPa(M_matvec(x))
else:
raise ValueError("mode=%i not implemented" % mode)
if which not in _NEUPD_WHICH:
raise ValueError("Parameter which must be one of %s"
% ' '.join(_NEUPD_WHICH))
if k >= n - 1:
raise ValueError("k must be less than rank(A)-1, k=%d" % k)
_ArpackParams.__init__(self, n, k, tp, mode, sigma,
ncv, v0, maxiter, which, tol)
if self.ncv > n or self.ncv <= k + 1:
raise ValueError("ncv must be k+1<ncv<=n, ncv=%s" % self.ncv)
self.workd = np.zeros(3 * n, self.tp)
self.workl = np.zeros(3 * self.ncv * (self.ncv + 2), self.tp)
ltr = _type_conv[self.tp]
self._arpack_solver = _arpack.__dict__[ltr + 'naupd']
self._arpack_extract = _arpack.__dict__[ltr + 'neupd']
self.iterate_infodict = _NAUPD_ERRORS[ltr]
self.extract_infodict = _NEUPD_ERRORS[ltr]
self.ipntr = np.zeros(14, "int")
if self.tp in 'FD':
self.rwork = np.zeros(self.ncv, self.tp.lower())
else:
self.rwork = None
def iterate(self):
if self.tp in 'fd':
self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info =\
self._arpack_solver(self.ido, self.bmat, self.which, self.k,
self.tol, self.resid, self.v, self.iparam,
self.ipntr, self.workd, self.workl,
self.info)
else:
self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info =\
self._arpack_solver(self.ido, self.bmat, self.which, self.k,
self.tol, self.resid, self.v, self.iparam,
self.ipntr, self.workd, self.workl,
self.rwork, self.info)
xslice = slice(self.ipntr[0] - 1, self.ipntr[0] - 1 + self.n)
yslice = slice(self.ipntr[1] - 1, self.ipntr[1] - 1 + self.n)
if self.ido == -1:
# initialization
self.workd[yslice] = self.OP(self.workd[xslice])
elif self.ido == 1:
# compute y = Op*x
if self.mode in (1, 2):
self.workd[yslice] = self.OP(self.workd[xslice])
else:
Bxslice = slice(self.ipntr[2] - 1, self.ipntr[2] - 1 + self.n)
self.workd[yslice] = self.OPa(self.workd[Bxslice])
elif self.ido == 2:
self.workd[yslice] = self.B(self.workd[xslice])
elif self.ido == 3:
raise ValueError("ARPACK requested user shifts. Assure ISHIFT==0")
else:
self.converged = True
if self.info == 0:
pass
elif self.info == 1:
self._raise_no_convergence()
else:
raise ArpackError(self.info, infodict=self.iterate_infodict)
def extract(self, return_eigenvectors):
k, n = self.k, self.n
ierr = 0
howmny = 'A' # return all eigenvectors
sselect = np.zeros(self.ncv, 'int') # unused
sigmar = np.real(self.sigma)
sigmai = np.imag(self.sigma)
workev = np.zeros(3 * self.ncv, self.tp)
if self.tp in 'fd':
dr = np.zeros(k + 1, self.tp)
di = np.zeros(k + 1, self.tp)
zr = np.zeros((n, k + 1), self.tp)
dr, di, zr, ierr = \
self._arpack_extract(
return_eigenvectors, howmny, sselect, sigmar, sigmai,
workev, self.bmat, self.which, k, self.tol, self.resid,
self.v, self.iparam, self.ipntr, self.workd, self.workl,
self.info)
if ierr != 0:
raise ArpackError(ierr, infodict=self.extract_infodict)
nreturned = self.iparam[4] # number of good eigenvalues returned
# Build complex eigenvalues from real and imaginary parts
d = dr + 1.0j * di
# Arrange the eigenvectors: complex eigenvectors are stored as
# real,imaginary in consecutive columns
z = zr.astype(self.tp.upper())
# The ARPACK nonsymmetric real and double interface (s,d)naupd
# return eigenvalues and eigenvectors in real (float,double)
# arrays.
# Efficiency: this should check that return_eigenvectors == True
# before going through this construction.
if sigmai == 0:
i = 0
while i <= k:
# check if complex
if abs(d[i].imag) != 0:
# this is a complex conjugate pair with eigenvalues
# in consecutive columns
if i < k:
z[:, i] = zr[:, i] + 1.0j * zr[:, i + 1]
z[:, i + 1] = z[:, i].conjugate()
i += 1
else:
#last eigenvalue is complex: the imaginary part of
# the eigenvector has not been returned
#this can only happen if nreturned > k, so we'll
# throw out this case.
nreturned -= 1
i += 1
else:
# real matrix, mode 3 or 4, imag(sigma) is nonzero:
# see remark 3 in <s,d>neupd.f
# Build complex eigenvalues from real and imaginary parts
i = 0
while i <= k:
if abs(d[i].imag) == 0:
d[i] = np.dot(zr[:, i], self.matvec(zr[:, i]))
else:
if i < k:
z[:, i] = zr[:, i] + 1.0j * zr[:, i + 1]
z[:, i + 1] = z[:, i].conjugate()
d[i] = ((np.dot(zr[:, i],
self.matvec(zr[:, i]))
+ np.dot(zr[:, i + 1],
self.matvec(zr[:, i + 1])))
+ 1j * (np.dot(zr[:, i],
self.matvec(zr[:, i + 1]))
- np.dot(zr[:, i + 1],
self.matvec(zr[:, i]))))
d[i + 1] = d[i].conj()
i += 1
else:
#last eigenvalue is complex: the imaginary part of
# the eigenvector has not been returned
#this can only happen if nreturned > k, so we'll
# throw out this case.
nreturned -= 1
i += 1
# Now we have k+1 possible eigenvalues and eigenvectors
# Return the ones specified by the keyword "which"
if nreturned <= k:
# we got less or equal as many eigenvalues we wanted
d = d[:nreturned]
z = z[:, :nreturned]
else:
# we got one extra eigenvalue (likely a cc pair, but which?)
# cut at approx precision for sorting
rd = np.round(d, decimals=_ndigits[self.tp])
if self.which in ['LR', 'SR']:
ind = np.argsort(rd.real)
elif self.which in ['LI', 'SI']:
# for LI,SI ARPACK returns largest,smallest
# abs(imaginary) why?
ind = np.argsort(abs(rd.imag))
else:
ind = np.argsort(abs(rd))
if self.which in ['LR', 'LM', 'LI']:
d = d[ind[-k:]]
z = z[:, ind[-k:]]
if self.which in ['SR', 'SM', 'SI']:
d = d[ind[:k]]
z = z[:, ind[:k]]
else:
# complex is so much simpler...
d, z, ierr =\
self._arpack_extract(
return_eigenvectors, howmny, sselect, self.sigma, workev,
self.bmat, self.which, k, self.tol, self.resid, self.v,
self.iparam, self.ipntr, self.workd, self.workl,
self.rwork, ierr)
if ierr != 0:
raise ArpackError(ierr, infodict=self.extract_infodict)
k_ok = self.iparam[4]
d = d[:k_ok]
z = z[:, :k_ok]
if return_eigenvectors:
return d, z
else:
return d
def _aslinearoperator_with_dtype(m):
m = aslinearoperator(m)
if not hasattr(m, 'dtype'):
x = np.zeros(m.shape[1])
m.dtype = (m * x).dtype
return m
class SpLuInv(LinearOperator):
"""
SpLuInv:
helper class to repeatedly solve M*x=b
    using a sparse LU-decomposition of M
"""
def __init__(self, M):
self.M_lu = splu(M)
LinearOperator.__init__(self, M.shape, self._matvec, dtype=M.dtype)
self.isreal = not np.issubdtype(self.dtype, np.complexfloating)
def _matvec(self, x):
# careful here: splu.solve will throw away imaginary
# part of x if M is real
if self.isreal and np.issubdtype(x.dtype, np.complexfloating):
return (self.M_lu.solve(np.real(x))
+ 1j * self.M_lu.solve(np.imag(x)))
else:
return self.M_lu.solve(x)
class LuInv(LinearOperator):
"""
LuInv:
helper class to repeatedly solve M*x=b
using an LU-decomposition of M
"""
def __init__(self, M):
self.M_lu = lu_factor(M)
LinearOperator.__init__(self, M.shape, self._matvec, dtype=M.dtype)
def _matvec(self, x):
return lu_solve(self.M_lu, x)
class IterInv(LinearOperator):
"""
IterInv:
helper class to repeatedly solve M*x=b
using an iterative method.
"""
def __init__(self, M, ifunc=gmres, tol=0):
if tol <= 0:
# when tol=0, ARPACK uses machine tolerance as calculated
# by LAPACK's _LAMCH function. We should match this
tol = np.finfo(M.dtype).eps
self.M = M
self.ifunc = ifunc
self.tol = tol
if hasattr(M, 'dtype'):
dtype = M.dtype
else:
x = np.zeros(M.shape[1])
dtype = (M * x).dtype
LinearOperator.__init__(self, M.shape, self._matvec, dtype=dtype)
def _matvec(self, x):
b, info = self.ifunc(self.M, x, tol=self.tol)
if info != 0:
raise ValueError("Error in inverting M: function "
"%s did not converge (info = %i)."
% (self.ifunc.__name__, info))
return b
class IterOpInv(LinearOperator):
"""
IterOpInv:
helper class to repeatedly solve [A-sigma*M]*x = b
using an iterative method
"""
def __init__(self, A, M, sigma, ifunc=gmres, tol=0):
if tol <= 0:
# when tol=0, ARPACK uses machine tolerance as calculated
# by LAPACK's _LAMCH function. We should match this
tol = np.finfo(A.dtype).eps
self.A = A
self.M = M
self.sigma = sigma
self.ifunc = ifunc
self.tol = tol
x = np.zeros(A.shape[1])
if M is None:
dtype = self.mult_func_M_None(x).dtype
self.OP = LinearOperator(self.A.shape,
self.mult_func_M_None,
dtype=dtype)
else:
dtype = self.mult_func(x).dtype
self.OP = LinearOperator(self.A.shape,
self.mult_func,
dtype=dtype)
LinearOperator.__init__(self, A.shape, self._matvec, dtype=dtype)
def mult_func(self, x):
return self.A.matvec(x) - self.sigma * self.M.matvec(x)
def mult_func_M_None(self, x):
return self.A.matvec(x) - self.sigma * x
def _matvec(self, x):
b, info = self.ifunc(self.OP, x, tol=self.tol)
if info != 0:
raise ValueError("Error in inverting [A-sigma*M]: function "
"%s did not converge (info = %i)."
% (self.ifunc.__name__, info))
return b
def get_inv_matvec(M, symmetric=False, tol=0):
if isdense(M):
return LuInv(M).matvec
elif isspmatrix(M):
if isspmatrix_csr(M) and symmetric:
M = M.T
return SpLuInv(M).matvec
else:
return IterInv(M, tol=tol).matvec
def get_OPinv_matvec(A, M, sigma, symmetric=False, tol=0):
if sigma == 0:
return get_inv_matvec(A, symmetric=symmetric, tol=tol)
if M is None:
#M is the identity matrix
if isdense(A):
if (np.issubdtype(A.dtype, np.complexfloating)
or np.imag(sigma) == 0):
A = np.copy(A)
else:
A = A + 0j
A.flat[::A.shape[1] + 1] -= sigma
return LuInv(A).matvec
elif isspmatrix(A):
A = A - sigma * identity(A.shape[0])
if symmetric and isspmatrix_csr(A):
A = A.T
return SpLuInv(A.tocsc()).matvec
else:
return IterOpInv(_aslinearoperator_with_dtype(A), M, sigma,
tol=tol).matvec
else:
if ((not isdense(A) and not isspmatrix(A)) or
(not isdense(M) and not isspmatrix(M))):
return IterOpInv(_aslinearoperator_with_dtype(A),
_aslinearoperator_with_dtype(M), sigma,
tol=tol).matvec
elif isdense(A) or isdense(M):
return LuInv(A - sigma * M).matvec
else:
OP = A - sigma * M
if symmetric and isspmatrix_csr(OP):
OP = OP.T
return SpLuInv(OP.tocsc()).matvec
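# --- Illustrative sketch (editor's addition, not part of the upstream file) --
# A hedged check of what get_OPinv_matvec returns in the dense, M=None case:
# a callable applying [A - sigma*I]^-1 via an LU factorization.  The helper is
# only defined, never called, and assumes the pre-0.10 SciPy environment this
# backport targets.
def _sketch_opinv_matvec():
    rng = np.random.RandomState(0)
    A = rng.rand(5, 5)
    sigma = 0.5
    opinv = get_OPinv_matvec(A, M=None, sigma=sigma)
    x = opinv(np.ones(5))
    # applying [A - sigma*I] to the result should recover the right-hand side
    return np.allclose(np.dot(A - sigma * np.eye(5), x), np.ones(5))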
def _eigs(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None,
maxiter=None, tol=0, return_eigenvectors=True, Minv=None, OPinv=None,
OPpart=None):
"""
Find k eigenvalues and eigenvectors of the square matrix A.
Solves ``A * x[i] = w[i] * x[i]``, the standard eigenvalue problem
for w[i] eigenvalues with corresponding eigenvectors x[i].
If M is specified, solves ``A * x[i] = w[i] * M * x[i]``, the
generalized eigenvalue problem for w[i] eigenvalues
with corresponding eigenvectors x[i]
Parameters
----------
A : An N x N matrix, array, sparse matrix, or LinearOperator representing \
the operation A * x, where A is a real or complex square matrix.
k : int, default 6
The number of eigenvalues and eigenvectors desired.
        `k` must be smaller than N-1. It is not possible to compute all
eigenvectors of a matrix.
return_eigenvectors : boolean, default True
Whether to return the eigenvectors along with the eigenvalues.
M : An N x N matrix, array, sparse matrix, or LinearOperator representing
the operation M*x for the generalized eigenvalue problem
``A * x = w * M * x``
M must represent a real symmetric matrix. For best results, M should
be of the same type as A. Additionally:
* If sigma==None, M is positive definite
* If sigma is specified, M is positive semi-definite
If sigma==None, eigs requires an operator to compute the solution
of the linear equation `M * x = b`. This is done internally via a
(sparse) LU decomposition for an explicit matrix M, or via an
iterative solver for a general linear operator. Alternatively,
the user can supply the matrix or operator Minv, which gives
x = Minv * b = M^-1 * b
sigma : real or complex
Find eigenvalues near sigma using shift-invert mode. This requires
an operator to compute the solution of the linear system
`[A - sigma * M] * x = b`, where M is the identity matrix if
unspecified. This is computed internally via a (sparse) LU
decomposition for explicit matrices A & M, or via an iterative
solver if either A or M is a general linear operator.
Alternatively, the user can supply the matrix or operator OPinv,
which gives x = OPinv * b = [A - sigma * M]^-1 * b.
For a real matrix A, shift-invert can either be done in imaginary
mode or real mode, specified by the parameter OPpart ('r' or 'i').
Note that when sigma is specified, the keyword 'which' (below)
refers to the shifted eigenvalues w'[i] where:
* If A is real and OPpart == 'r' (default),
w'[i] = 1/2 * [ 1/(w[i]-sigma) + 1/(w[i]-conj(sigma)) ]
* If A is real and OPpart == 'i',
w'[i] = 1/2i * [ 1/(w[i]-sigma) - 1/(w[i]-conj(sigma)) ]
* If A is complex,
w'[i] = 1/(w[i]-sigma)
v0 : array
Starting vector for iteration.
ncv : integer
The number of Lanczos vectors generated
        `ncv` must satisfy ``k + 1 < ncv <= n``; it is recommended that
        ``ncv > 2*k``.
which : string ['LM' | 'SM' | 'LR' | 'SR' | 'LI' | 'SI']
Which `k` eigenvectors and eigenvalues to find:
- 'LM' : largest magnitude
- 'SM' : smallest magnitude
- 'LR' : largest real part
- 'SR' : smallest real part
- 'LI' : largest imaginary part
- 'SI' : smallest imaginary part
When sigma != None, 'which' refers to the shifted eigenvalues w'[i]
(see discussion in 'sigma', above). ARPACK is generally better
at finding large values than small values. If small eigenvalues are
desired, consider using shift-invert mode for better performance.
maxiter : integer
Maximum number of Arnoldi update iterations allowed
tol : float
Relative accuracy for eigenvalues (stopping criterion)
The default value of 0 implies machine precision.
return_eigenvectors : boolean
Return eigenvectors (True) in addition to eigenvalues
Minv : N x N matrix, array, sparse matrix, or linear operator
See notes in M, above.
OPinv : N x N matrix, array, sparse matrix, or linear operator
See notes in sigma, above.
OPpart : 'r' or 'i'.
See notes in sigma, above
Returns
-------
w : array
Array of k eigenvalues.
v : array
An array of `k` eigenvectors.
``v[:, i]`` is the eigenvector corresponding to the eigenvalue w[i].
Raises
------
ArpackNoConvergence
When the requested convergence is not obtained.
The currently converged eigenvalues and eigenvectors can be found
as ``eigenvalues`` and ``eigenvectors`` attributes of the exception
object.
See Also
--------
eigsh : eigenvalues and eigenvectors for symmetric matrix A
svds : singular value decomposition for a matrix A
Examples
--------
Find 6 eigenvectors of the identity matrix:
>>> from sklearn.utils.arpack import eigs
>>> id = np.identity(13)
>>> vals, vecs = eigs(id, k=6)
>>> vals
array([ 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j])
>>> vecs.shape
(13, 6)
Notes
-----
This function is a wrapper to the ARPACK [1]_ SNEUPD, DNEUPD, CNEUPD,
ZNEUPD, functions which use the Implicitly Restarted Arnoldi Method to
find the eigenvalues and eigenvectors [2]_.
References
----------
.. [1] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
.. [2] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE:
Solution of Large Scale Eigenvalue Problems by Implicitly Restarted
Arnoldi Methods. SIAM, Philadelphia, PA, 1998.
"""
if A.shape[0] != A.shape[1]:
raise ValueError('expected square matrix (shape=%s)' % (A.shape,))
if M is not None:
if M.shape != A.shape:
raise ValueError('wrong M dimensions %s, should be %s'
% (M.shape, A.shape))
if np.dtype(M.dtype).char.lower() != np.dtype(A.dtype).char.lower():
warnings.warn('M does not have the same type precision as A. '
'This may adversely affect ARPACK convergence')
n = A.shape[0]
if k <= 0 or k >= n:
raise ValueError("k must be between 1 and rank(A)-1")
if sigma is None:
matvec = _aslinearoperator_with_dtype(A).matvec
if OPinv is not None:
raise ValueError("OPinv should not be specified "
"with sigma = None.")
if OPpart is not None:
raise ValueError("OPpart should not be specified with "
"sigma = None or complex A")
if M is None:
#standard eigenvalue problem
mode = 1
M_matvec = None
Minv_matvec = None
if Minv is not None:
raise ValueError("Minv should not be "
"specified with M = None.")
else:
#general eigenvalue problem
mode = 2
if Minv is None:
Minv_matvec = get_inv_matvec(M, symmetric=True, tol=tol)
else:
Minv = _aslinearoperator_with_dtype(Minv)
Minv_matvec = Minv.matvec
M_matvec = _aslinearoperator_with_dtype(M).matvec
else:
#sigma is not None: shift-invert mode
if np.issubdtype(A.dtype, np.complexfloating):
if OPpart is not None:
raise ValueError("OPpart should not be specified "
"with sigma=None or complex A")
mode = 3
elif OPpart is None or OPpart.lower() == 'r':
mode = 3
elif OPpart.lower() == 'i':
if np.imag(sigma) == 0:
raise ValueError("OPpart cannot be 'i' if sigma is real")
mode = 4
else:
raise ValueError("OPpart must be one of ('r','i')")
matvec = _aslinearoperator_with_dtype(A).matvec
if Minv is not None:
raise ValueError("Minv should not be specified when sigma is")
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=False, tol=tol)
else:
OPinv = _aslinearoperator_with_dtype(OPinv)
Minv_matvec = OPinv.matvec
if M is None:
M_matvec = None
else:
M_matvec = _aslinearoperator_with_dtype(M).matvec
params = _UnsymmetricArpackParams(n, k, A.dtype.char, matvec, mode,
M_matvec, Minv_matvec, sigma,
ncv, v0, maxiter, which, tol)
while not params.converged:
params.iterate()
return params.extract(return_eigenvectors)
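# --- Illustrative sketch (editor's addition) ---------------------------------
# Hedged usage of the shift-invert path documented above: with sigma set, the
# default which='LM' acts on w'[i] = 1/(w[i] - sigma), i.e. it returns the
# eigenvalues of A closest to sigma.  The import path is the one used in the
# docstring example; the helper is defined but never called.
def _sketch_eigs_shift_invert():
    from sklearn.utils.arpack import eigs
    A = np.diag(np.arange(1.0, 11.0))      # spectrum is exactly 1..10
    vals, vecs = eigs(A, k=3, sigma=4.2)   # shift-invert: values nearest 4.2
    return np.sort(vals.real)              # approximately [3., 4., 5.]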
def _eigsh(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None,
maxiter=None, tol=0, return_eigenvectors=True, Minv=None,
OPinv=None, mode='normal'):
"""
Find k eigenvalues and eigenvectors of the real symmetric square matrix
or complex hermitian matrix A.
Solves ``A * x[i] = w[i] * x[i]``, the standard eigenvalue problem for
w[i] eigenvalues with corresponding eigenvectors x[i].
If M is specified, solves ``A * x[i] = w[i] * M * x[i]``, the
generalized eigenvalue problem for w[i] eigenvalues
with corresponding eigenvectors x[i]
Parameters
----------
A : An N x N matrix, array, sparse matrix, or LinearOperator representing
the operation A * x, where A is a real symmetric matrix
For buckling mode (see below) A must additionally be positive-definite
k : integer
The number of eigenvalues and eigenvectors desired.
`k` must be smaller than N. It is not possible to compute all
eigenvectors of a matrix.
M : An N x N matrix, array, sparse matrix, or linear operator representing
the operation M * x for the generalized eigenvalue problem
``A * x = w * M * x``.
M must represent a real, symmetric matrix. For best results, M should
be of the same type as A. Additionally:
* If sigma == None, M is symmetric positive definite
* If sigma is specified, M is symmetric positive semi-definite
* In buckling mode, M is symmetric indefinite.
If sigma == None, eigsh requires an operator to compute the solution
of the linear equation `M * x = b`. This is done internally via a
(sparse) LU decomposition for an explicit matrix M, or via an
iterative solver for a general linear operator. Alternatively,
the user can supply the matrix or operator Minv, which gives
x = Minv * b = M^-1 * b
sigma : real
Find eigenvalues near sigma using shift-invert mode. This requires
an operator to compute the solution of the linear system
`[A - sigma * M] x = b`, where M is the identity matrix if
unspecified. This is computed internally via a (sparse) LU
decomposition for explicit matrices A & M, or via an iterative
solver if either A or M is a general linear operator.
Alternatively, the user can supply the matrix or operator OPinv,
which gives x = OPinv * b = [A - sigma * M]^-1 * b.
Note that when sigma is specified, the keyword 'which' refers to
the shifted eigenvalues w'[i] where:
- if mode == 'normal',
w'[i] = 1 / (w[i] - sigma)
- if mode == 'cayley',
w'[i] = (w[i] + sigma) / (w[i] - sigma)
- if mode == 'buckling',
w'[i] = w[i] / (w[i] - sigma)
(see further discussion in 'mode' below)
v0 : array
Starting vector for iteration.
ncv : integer
The number of Lanczos vectors generated
ncv must be greater than k and smaller than n;
it is recommended that ncv > 2*k
which : string ['LM' | 'SM' | 'LA' | 'SA' | 'BE']
If A is a complex hermitian matrix, 'BE' is invalid.
Which `k` eigenvectors and eigenvalues to find
- 'LM' : Largest (in magnitude) eigenvalues
- 'SM' : Smallest (in magnitude) eigenvalues
- 'LA' : Largest (algebraic) eigenvalues
- 'SA' : Smallest (algebraic) eigenvalues
- 'BE' : Half (k/2) from each end of the spectrum
When k is odd, return one more (k/2+1) from the high end
When sigma != None, 'which' refers to the shifted eigenvalues w'[i]
(see discussion in 'sigma', above). ARPACK is generally better
at finding large values than small values. If small eigenvalues are
desired, consider using shift-invert mode for better performance.
maxiter : integer
Maximum number of Arnoldi update iterations allowed
tol : float
Relative accuracy for eigenvalues (stopping criterion).
The default value of 0 implies machine precision.
Minv : N x N matrix, array, sparse matrix, or LinearOperator
See notes in M, above
OPinv : N x N matrix, array, sparse matrix, or LinearOperator
See notes in sigma, above.
return_eigenvectors : boolean
Return eigenvectors (True) in addition to eigenvalues
mode : string ['normal' | 'buckling' | 'cayley']
Specify strategy to use for shift-invert mode. This argument applies
only for real-valued A and sigma != None. For shift-invert mode,
ARPACK internally solves the eigenvalue problem
``OP * x'[i] = w'[i] * B * x'[i]``
and transforms the resulting Ritz vectors x'[i] and Ritz values w'[i]
into the desired eigenvectors and eigenvalues of the problem
``A * x[i] = w[i] * M * x[i]``.
The modes are as follows:
- 'normal' : OP = [A - sigma * M]^-1 * M
B = M
w'[i] = 1 / (w[i] - sigma)
- 'buckling' : OP = [A - sigma * M]^-1 * A
B = A
w'[i] = w[i] / (w[i] - sigma)
- 'cayley' : OP = [A - sigma * M]^-1 * [A + sigma * M]
B = M
w'[i] = (w[i] + sigma) / (w[i] - sigma)
The choice of mode will affect which eigenvalues are selected by
the keyword 'which', and can also impact the stability of
convergence (see [2] for a discussion)
Returns
-------
w : array
Array of k eigenvalues
v : array
An array of k eigenvectors
        ``v[:, i]`` is the eigenvector corresponding to the eigenvalue ``w[i]``.
Raises
------
ArpackNoConvergence
When the requested convergence is not obtained.
The currently converged eigenvalues and eigenvectors can be found
as ``eigenvalues`` and ``eigenvectors`` attributes of the exception
object.
See Also
--------
eigs : eigenvalues and eigenvectors for a general (nonsymmetric) matrix A
svds : singular value decomposition for a matrix A
Notes
-----
This function is a wrapper to the ARPACK [1]_ SSEUPD and DSEUPD
functions which use the Implicitly Restarted Lanczos Method to
find the eigenvalues and eigenvectors [2]_.
Examples
--------
>>> from sklearn.utils.arpack import eigsh
>>> id = np.identity(13)
>>> vals, vecs = eigsh(id, k=6)
>>> vals # doctest: +SKIP
array([ 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j])
>>> print(vecs.shape)
(13, 6)
References
----------
.. [1] ARPACK Software, http://www.caam.rice.edu/software/ARPACK/
.. [2] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK USERS GUIDE:
Solution of Large Scale Eigenvalue Problems by Implicitly Restarted
Arnoldi Methods. SIAM, Philadelphia, PA, 1998.
"""
# complex hermitian matrices should be solved with eigs
if np.issubdtype(A.dtype, np.complexfloating):
if mode != 'normal':
raise ValueError("mode=%s cannot be used with "
"complex matrix A" % mode)
if which == 'BE':
raise ValueError("which='BE' cannot be used with complex matrix A")
elif which == 'LA':
which = 'LR'
elif which == 'SA':
which = 'SR'
ret = eigs(A, k, M=M, sigma=sigma, which=which, v0=v0,
ncv=ncv, maxiter=maxiter, tol=tol,
return_eigenvectors=return_eigenvectors, Minv=Minv,
OPinv=OPinv)
if return_eigenvectors:
return ret[0].real, ret[1]
else:
return ret.real
if A.shape[0] != A.shape[1]:
raise ValueError('expected square matrix (shape=%s)' % (A.shape,))
if M is not None:
if M.shape != A.shape:
raise ValueError('wrong M dimensions %s, should be %s'
% (M.shape, A.shape))
if np.dtype(M.dtype).char.lower() != np.dtype(A.dtype).char.lower():
warnings.warn('M does not have the same type precision as A. '
'This may adversely affect ARPACK convergence')
n = A.shape[0]
if k <= 0 or k >= n:
raise ValueError("k must be between 1 and rank(A)-1")
if sigma is None:
A = _aslinearoperator_with_dtype(A)
matvec = A.matvec
if OPinv is not None:
raise ValueError("OPinv should not be specified "
"with sigma = None.")
if M is None:
#standard eigenvalue problem
mode = 1
M_matvec = None
Minv_matvec = None
if Minv is not None:
raise ValueError("Minv should not be "
"specified with M = None.")
else:
#general eigenvalue problem
mode = 2
if Minv is None:
Minv_matvec = get_inv_matvec(M, symmetric=True, tol=tol)
else:
Minv = _aslinearoperator_with_dtype(Minv)
Minv_matvec = Minv.matvec
M_matvec = _aslinearoperator_with_dtype(M).matvec
else:
# sigma is not None: shift-invert mode
if Minv is not None:
raise ValueError("Minv should not be specified when sigma is")
# normal mode
if mode == 'normal':
mode = 3
matvec = None
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
else:
OPinv = _aslinearoperator_with_dtype(OPinv)
Minv_matvec = OPinv.matvec
if M is None:
M_matvec = None
else:
M = _aslinearoperator_with_dtype(M)
M_matvec = M.matvec
# buckling mode
elif mode == 'buckling':
mode = 4
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
else:
Minv_matvec = _aslinearoperator_with_dtype(OPinv).matvec
matvec = _aslinearoperator_with_dtype(A).matvec
M_matvec = None
# cayley-transform mode
elif mode == 'cayley':
mode = 5
matvec = _aslinearoperator_with_dtype(A).matvec
if OPinv is None:
Minv_matvec = get_OPinv_matvec(A, M, sigma,
symmetric=True, tol=tol)
else:
Minv_matvec = _aslinearoperator_with_dtype(OPinv).matvec
if M is None:
M_matvec = None
else:
M_matvec = _aslinearoperator_with_dtype(M).matvec
# unrecognized mode
else:
raise ValueError("unrecognized mode '%s'" % mode)
params = _SymmetricArpackParams(n, k, A.dtype.char, matvec, mode,
M_matvec, Minv_matvec, sigma,
ncv, v0, maxiter, which, tol)
while not params.converged:
params.iterate()
return params.extract(return_eigenvectors)
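# --- Illustrative sketch (editor's addition) ---------------------------------
# Hedged counterpart for the symmetric solver: in shift-invert 'normal' mode
# the keyword which='LM' acts on w'[i] = 1/(w[i] - sigma), so the eigenvalues
# of A closest to sigma come back.  Defined only, never executed at import.
def _sketch_eigsh_shift_invert():
    from sklearn.utils.arpack import eigsh
    A = np.diag(np.arange(1.0, 11.0))      # symmetric, eigenvalues 1..10
    vals, vecs = eigsh(A, k=3, sigma=4.2, mode='normal')
    return np.sort(vals)                   # approximately [3., 4., 5.]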
def _svds(A, k=6, ncv=None, tol=0):
"""Compute k singular values/vectors for a sparse matrix using ARPACK.
Parameters
----------
A : sparse matrix
Array to compute the SVD on
k : int, optional
Number of singular values and vectors to compute.
ncv : integer
The number of Lanczos vectors generated
ncv must be greater than k+1 and smaller than n;
it is recommended that ncv > 2*k
tol : float, optional
Tolerance for singular values. Zero (default) means machine precision.
Notes
-----
This is a naive implementation using an eigensolver on A.H * A or
A * A.H, depending on which one is more efficient.
"""
if not (isinstance(A, np.ndarray) or isspmatrix(A)):
A = np.asarray(A)
n, m = A.shape
if np.issubdtype(A.dtype, np.complexfloating):
herm = lambda x: x.T.conjugate()
eigensolver = eigs
else:
herm = lambda x: x.T
eigensolver = eigsh
if n > m:
X = A
XH = herm(A)
else:
XH = A
X = herm(A)
if hasattr(XH, 'dot'):
def matvec_XH_X(x):
return XH.dot(X.dot(x))
else:
def matvec_XH_X(x):
return np.dot(XH, np.dot(X, x))
XH_X = LinearOperator(matvec=matvec_XH_X, dtype=X.dtype,
shape=(X.shape[1], X.shape[1]))
# Ignore deprecation warnings here: dot on matrices is deprecated,
# but this code is a backport anyhow
with warnings.catch_warnings():
warnings.simplefilter('ignore', DeprecationWarning)
eigvals, eigvec = eigensolver(XH_X, k=k, tol=tol ** 2)
s = np.sqrt(eigvals)
if n > m:
v = eigvec
if hasattr(X, 'dot'):
u = X.dot(v) / s
else:
u = np.dot(X, v) / s
vh = herm(v)
else:
u = eigvec
if hasattr(X, 'dot'):
vh = herm(X.dot(u) / s)
else:
vh = herm(np.dot(X, u) / s)
return u, s, vh
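# --- Illustrative sketch (editor's addition) ---------------------------------
# A hedged sanity check of the naive SVD above: the k singular values should
# match the k largest ones LAPACK reports for a small dense matrix.  `svds` is
# the public name selected just below (backport or SciPy's own); the helper is
# never called.
def _sketch_svds_vs_lapack():
    rng = np.random.RandomState(0)
    A = rng.rand(9, 6)
    u, s, vh = svds(A, k=3)
    s_full = np.linalg.svd(A, compute_uv=False)
    return np.allclose(np.sort(s), np.sort(s_full)[-3:])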
# check if backport is actually needed:
if scipy.version.version >= LooseVersion('0.10'):
from scipy.sparse.linalg import eigs, eigsh, svds
else:
eigs, eigsh, svds = _eigs, _eigsh, _svds
| bsd-3-clause |
nlproc/splunkml | bin/multiclassify.py | 1 | 2142 | import sys, os, itertools
try:
import cStringIO as StringIO
except:
import StringIO
import numpy as np
import scipy.sparse as sp
from gensim.corpora import TextCorpus
from gensim.models import LsiModel, TfidfModel, LdaModel
from gensim.matutils import corpus2csc
from sklearn.feature_extraction import FeatureHasher
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA
def is_number(str):
try:
n = float(str)
return True
except (ValueError, TypeError):
return False
def process_records(records, fields, target, textmodel=None):
tokenize = CountVectorizer().build_analyzer()
input = None
X = None
y_labels = []
for i, record in enumerate(records):
nums = []
strs = []
y_labels.append(record.get(target))
for field in fields:
if is_number(record.get(field)):
nums.append(record[field])
else:
strs.append(str(record.get(field) or "").lower())
if strs:
if input is None:
input = StringIO.StringIO()
print >> input, " ".join(tokenize(" ".join(strs)))
if nums:
if X is None:
X = sp.lil_matrix((len(records),len(nums)))
X[i] = np.array(nums, dtype=np.float64)
if input is not None:
if X is not None:
X_2 = X.tocsr()
else:
X_2 = None
if isinstance(textmodel,basestring):
if textmodel == 'lsi':
corpus = TextCorpus(input)
textmodel = LsiModel(corpus, chunksize=1000)
elif textmodel == 'tfidf':
corpus = TextCorpus(input)
textmodel = TfidfModel(corpus)
elif textmodel == 'hashing':
textmodel = None
hasher = FeatureHasher(n_features=2 ** 18, input_type="string")
input.seek(0)
X = hasher.transform(tokenize(line.strip()) for line in input)
if textmodel:
num_terms = len(textmodel.id2word or getattr(textmodel, 'dfs',[]))
X = corpus2csc(textmodel[corpus], num_terms).transpose()
if X_2 is not None:
# print >> sys.stderr, "X SHAPE:", X.shape
# print >> sys.stderr, "X_2 SHAPE:", X_2.shape
X = sp.hstack([X, X_2], format='csr')
elif X is not None:
textmodel = None
X = X.tocsr()
print >> sys.stderr, "X SHAPE:", X.shape
return X, y_labels, textmodel
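# --- Illustrative sketch (editor's addition; field names are hypothetical) ---
# Hedged usage of process_records: one numeric and one free-text field are
# mixed, the text is feature-hashed, and a CSR matrix plus the label list come
# back.  Defined only; nothing here runs when the script is loaded.
def _sketch_process_records():
    records = [
        {'bytes': '200', 'msg': 'login ok', 'outcome': 'good'},
        {'bytes': '999', 'msg': 'login failed', 'outcome': 'bad'},
    ]
    X, y, model = process_records(records, fields=['bytes', 'msg'],
                                  target='outcome', textmodel='hashing')
    # X stacks 2**18 hashed text columns with the single numeric column
    return X.shape, y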
| apache-2.0 |
ZenDevelopmentSystems/scikit-learn | sklearn/metrics/cluster/unsupervised.py | 230 | 8281 | """ Unsupervised evaluation metrics. """
# Authors: Robert Layton <robertlayton@gmail.com>
#
# License: BSD 3 clause
import numpy as np
from ...utils import check_random_state
from ..pairwise import pairwise_distances
def silhouette_score(X, labels, metric='euclidean', sample_size=None,
random_state=None, **kwds):
"""Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``. To clarify, ``b`` is the distance between a sample and the nearest
cluster that the sample is not a part of.
    Note that the Silhouette Coefficient is only defined if the number of
    labels satisfies 2 <= n_labels <= n_samples - 1.
This function returns the mean Silhouette Coefficient over all samples.
To obtain the values for each sample, use :func:`silhouette_samples`.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters. Negative values generally indicate that a sample has
been assigned to the wrong cluster, as a different cluster is more similar.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
Predicted labels for each sample.
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`metrics.pairwise.pairwise_distances
<sklearn.metrics.pairwise.pairwise_distances>`. If X is the distance
array itself, use ``metric="precomputed"``.
sample_size : int or None
The size of the sample to use when computing the Silhouette Coefficient
on a random subset of the data.
If ``sample_size is None``, no sampling is used.
random_state : integer or numpy.RandomState, optional
The generator used to randomly select a subset of samples if
``sample_size is not None``. If an integer is given, it fixes the seed.
Defaults to the global numpy random number generator.
`**kwds` : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a scipy.spatial.distance metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : float
Mean Silhouette Coefficient for all samples.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<http://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
"""
n_labels = len(np.unique(labels))
n_samples = X.shape[0]
if not 1 < n_labels < n_samples:
raise ValueError("Number of labels is %d. Valid values are 2 "
"to n_samples - 1 (inclusive)" % n_labels)
if sample_size is not None:
random_state = check_random_state(random_state)
indices = random_state.permutation(X.shape[0])[:sample_size]
if metric == "precomputed":
X, labels = X[indices].T[indices].T, labels[indices]
else:
X, labels = X[indices], labels[indices]
return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
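# --- Illustrative sketch (editor's addition) ---------------------------------
# Hedged usage: score a KMeans clustering of three well-separated blobs.  The
# imports are local so this never-called helper cannot introduce circular
# imports into sklearn.metrics.
def _sketch_silhouette_score():
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    X, _ = make_blobs(n_samples=60, centers=3, random_state=0)
    labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)  # typically well above 0.5 here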
def silhouette_samples(X, labels, metric='euclidean', **kwds):
"""Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered
with samples that are similar to themselves. Clustering models with a high
Silhouette Coefficient are said to be dense, where samples in the same
cluster are similar to each other, and well separated, where samples in
different clusters are not very similar to each other.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``.
    Note that the Silhouette Coefficient is only defined if the number of
    labels satisfies 2 <= n_labels <= n_samples - 1.
This function returns the Silhouette Coefficient for each sample.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
label values for each sample
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`sklearn.metrics.pairwise.pairwise_distances`. If X is
the distance array itself, use "precomputed" as the metric.
`**kwds` : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a ``scipy.spatial.distance`` metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : array, shape = [n_samples]
        Silhouette Coefficient for each sample.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<http://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
"""
distances = pairwise_distances(X, metric=metric, **kwds)
n = labels.shape[0]
A = np.array([_intra_cluster_distance(distances[i], labels, i)
for i in range(n)])
B = np.array([_nearest_cluster_distance(distances[i], labels, i)
for i in range(n)])
sil_samples = (B - A) / np.maximum(A, B)
return sil_samples
def _intra_cluster_distance(distances_row, labels, i):
"""Calculate the mean intra-cluster distance for sample i.
Parameters
----------
distances_row : array, shape = [n_samples]
Pairwise distance matrix between sample i and each sample.
labels : array, shape = [n_samples]
label values for each sample
i : int
Sample index being calculated. It is excluded from calculation and
used to determine the current label
Returns
-------
a : float
Mean intra-cluster distance for sample i
"""
mask = labels == labels[i]
mask[i] = False
if not np.any(mask):
# cluster of size 1
return 0
a = np.mean(distances_row[mask])
return a
def _nearest_cluster_distance(distances_row, labels, i):
"""Calculate the mean nearest-cluster distance for sample i.
Parameters
----------
distances_row : array, shape = [n_samples]
Pairwise distance matrix between sample i and each sample.
labels : array, shape = [n_samples]
label values for each sample
i : int
Sample index being calculated. It is used to determine the current
label.
Returns
-------
b : float
Mean nearest-cluster distance for sample i
"""
label = labels[i]
b = np.min([np.mean(distances_row[labels == cur_label])
for cur_label in set(labels) if not cur_label == label])
return b
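# --- Illustrative sketch (editor's addition) ---------------------------------
# A tiny worked example of the two helpers above for sample 0 of a 4-point,
# 2-cluster toy set: a = 0 (its same-cluster twin sits at distance 0) and
# b = 10 (mean distance to the other cluster), so the per-sample silhouette
# (b - a) / max(a, b) equals 1.  The helper is defined but never called.
def _sketch_silhouette_helpers():
    X = np.array([[0.0], [0.0], [10.0], [10.0]])
    labels = np.array([0, 0, 1, 1])
    D = pairwise_distances(X)
    a = _intra_cluster_distance(D[0], labels, 0)
    b = _nearest_cluster_distance(D[0], labels, 0)
    return (b - a) / max(a, b)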
| bsd-3-clause |
tomlof/scikit-learn | examples/svm/plot_custom_kernel.py | 93 | 1562 | """
======================
SVM with custom kernel
======================
Simple usage of Support Vector Machines to classify a sample. It will
plot the decision surface and the support vectors.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
Y = iris.target
def my_kernel(X, Y):
"""
We create a custom kernel:
        k(X, Y) = X * [[2, 0], [0, 1]] * Y.T
"""
M = np.array([[2, 0], [0, 1.0]])
return np.dot(np.dot(X, M), Y.T)
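# Editor's aside (illustrative only): this kernel is the ordinary dot product
# after rescaling the first feature by sqrt(2), i.e. k(x, y) = <phi(x), phi(y)>
# with phi(x) = (sqrt(2)*x1, x2).  The helper checks that equivalence
# numerically and is defined but never called.
def _check_kernel_is_scaled_dot_product():
    rng = np.random.RandomState(0)
    A, B = rng.rand(4, 2), rng.rand(3, 2)
    phi = lambda Z: Z * np.sqrt([2.0, 1.0])  # scale feature 1 by sqrt(2)
    return np.allclose(my_kernel(A, B), np.dot(phi(A), phi(B).T))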
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data.
clf = svm.SVC(kernel=my_kernel)
clf.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors='k')
plt.title('3-Class classification using Support Vector Machine with custom'
' kernel')
plt.axis('tight')
plt.show()
| bsd-3-clause |
mila-iqia/babyai | scripts/il_perf.py | 1 | 2047 | #!/usr/bin/env python3
import argparse
import pandas
import os
import json
import re
import numpy as np
from scipy import stats
from babyai import plotting as bp
parser = argparse.ArgumentParser("Analyze performance of imitation learning")
parser.add_argument("--path", default='.',
help="path to model logs")
parser.add_argument("--regex", default='.*',
help="filter out some logs")
parser.add_argument("--other", default=None,
help="path to model logs for ttest comparison")
parser.add_argument("--other_regex", default='.*',
help="filter out some logs from comparison")
parser.add_argument("--window", type=int, default=100,
help="size of sliding window average, 10 for GoToRedBallGrey, 100 otherwise")
args = parser.parse_args()
def get_data(path, regex):
df = pandas.concat(bp.load_logs(path), sort=True)
fps = bp.get_fps(df)
models = df['model'].unique()
models = [model for model in df['model'].unique() if re.match(regex, model)]
maxes = []
for model in models:
df_model = df[df['model'] == model]
success_rate = df_model['validation_success_rate']
success_rate = success_rate.rolling(args.window, center=True).mean()
success_rate = max(success_rate[np.logical_not(np.isnan(success_rate))])
print(model, success_rate)
maxes.append(success_rate)
return np.array(maxes), fps
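# Editor's aside (illustrative only): the smoothing above is a centered moving
# average.  On a toy series rolling(3, center=True).mean() gives NaN at the
# incomplete edges and [1/3, 2/3, 1/3] in the middle; never called here.
def _rolling_mean_demo():
    s = pandas.Series([0.0, 1.0, 0.0, 1.0, 0.0])
    return s.rolling(3, center=True).mean().tolist()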
if args.other is not None:
print("is this architecture better")
print(args.regex)
maxes, fps = get_data(args.path, args.regex)
result = {'samples': len(maxes), 'mean': maxes.mean(), 'std': maxes.std(),
'fps_mean': fps.mean(), 'fps_std': fps.std()}
print(result)
if args.other is not None:
print("\nthan this one")
maxes_ttest, fps = get_data(args.other, args.other_regex)
result = {'samples': len(maxes_ttest),
'mean': maxes_ttest.mean(), 'std': maxes_ttest.std(),
'fps_mean': fps.mean(), 'fps_std': fps.std()}
print(result)
ttest = stats.ttest_ind(maxes, maxes_ttest, equal_var=False)
print(f"\n{ttest}")
| bsd-3-clause |
beni55/networkx | examples/drawing/giant_component.py | 33 | 2084 | #!/usr/bin/env python
"""
This example illustrates the sudden appearance of a
giant connected component in a binomial random graph.
Requires pygraphviz and matplotlib to draw.
"""
# Copyright (C) 2006-2008
# Aric Hagberg <hagberg@lanl.gov>
# Dan Schult <dschult@colgate.edu>
# Pieter Swart <swart@lanl.gov>
# All rights reserved.
# BSD license.
try:
import matplotlib.pyplot as plt
except:
raise
import networkx as nx
import math
try:
from networkx import graphviz_layout
layout=nx.graphviz_layout
except ImportError:
print("PyGraphviz not found; drawing with spring layout; will be slow.")
layout=nx.spring_layout
n=150 # 150 nodes
# p value at which giant component (of size log(n) nodes) is expected
p_giant=1.0/(n-1)
# p value at which graph is expected to become completely connected
p_conn=math.log(n)/float(n)
# the following range of p values should be close to the threshold
pvals=[0.003, 0.006, 0.008, 0.015]
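# Editor's note, worked out from the formulas above: for n = 150 these come to
# p_giant = 1/149 ~ 0.0067 and p_conn = ln(150)/150 ~ 0.033, so the p values
# above bracket the giant-component threshold.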
region=220 # for pylab 2x2 subplot layout
plt.subplots_adjust(left=0,right=1,bottom=0,top=0.95,wspace=0.01,hspace=0.01)
for p in pvals:
G=nx.binomial_graph(n,p)
pos=layout(G)
region+=1
plt.subplot(region)
plt.title("p = %6.3f"%(p))
nx.draw(G,pos,
with_labels=False,
node_size=10
)
# identify largest connected component
Gcc=sorted(nx.connected_component_subgraphs(G), key = len, reverse=True)
G0=Gcc[0]
nx.draw_networkx_edges(G0,pos,
with_labels=False,
edge_color='r',
width=6.0
)
# show other connected components
for Gi in Gcc[1:]:
if len(Gi)>1:
nx.draw_networkx_edges(Gi,pos,
with_labels=False,
edge_color='r',
alpha=0.3,
width=5.0
)
plt.savefig("giant_component.png")
plt.show() # display
| bsd-3-clause |