Pandas DataFrame
HTML(table.dframe().head().to_html())
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
e2a55a08a85ab3650b0cd72de889e6b9
Dataset dictionary
table.columns()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
d6cfb90a777d30b945a28f2c6724f00b
Creating tabular data from Elements using the .table and .dframe methods If you have data in some other HoloViews element and would like to use the columnar data features, you can easily tabularize any of the core Element types into a Table Element, using the .table() method. Similarly, the .dframe() method will convert an Element into a pandas DataFrame. These methods are very useful if you want to then transform the data into a different Element type, or to perform different types of analysis. Tabularizing simple Elements For a simple example, we can create a Curve of an exponential function and convert it to a Table with the .table method, with the same result as creating the Table directly from the data as done earlier in this Tutorial:
xs = np.arange(10)
curve = hv.Curve(zip(xs, np.exp(xs)))
curve * hv.Scatter(zip(xs, curve)) + curve.table()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
bd0971f29c2db153508336255f094051
Similarly, we can get a pandas dataframe of the Curve using curve.dframe(). Here we wrap that call as raw HTML to allow automated testing of this notebook, but just calling curve.dframe() would give the same result visually:
HTML(curve.dframe().to_html())
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
bbed9b3e2239fa947c78191dd6cf3be5
Although 2D image-like objects are not inherently well suited to a flat columnar representation, serializing them by converting to tabular data is a good way to reveal the differences between Image and Raster elements. Rasters are a very simple type of element, using array-like integer indexing of rows and columns from their top-left corner as in computer graphics applications. Conversely, Image elements are a higher-level abstraction that provides a general-purpose continuous Cartesian coordinate system, with x and y increasing to the right and upwards as in mathematical applications, and each point interpreted as a sample representing the pixel in which it is located (and thus centered within that pixel). Given the same data, the .table() representation will show how the data is being interpreted (and accessed) differently in the two cases (as explained in detail in the Continuous Coordinates Tutorial):
%%opts Points (s=200) [size_index=None]
extents = (-1.6,-2.7,2.0,3)
np.random.seed(42)
mat = np.random.rand(3, 3)

img = hv.Image(mat, bounds=extents)
raster = hv.Raster(mat)

img * hv.Points(img) + img.table() + \
raster * hv.Points(raster) + raster.table()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
66ea36ae538d0c63e7d2ca6487e7b84e
Tabularizing space containers Even deeply nested objects can be deconstructed in this way, serializing them to make it easier to get your raw data out of a collection of specialized Element types. Let's say we want to make multiple observations of a noisy signal. We can collect the data into a HoloMap to visualize it and then call .table() to get a columnar object where we can perform operations or transform it to other Element types. Deconstructing nested data in this way only works if the data is homogeneous. In practical terms, the requirement is that your data structure contains Elements (of any types) in these Container types: NdLayout, GridSpace, HoloMap, and NdOverlay, with all dimensions consistent throughout (so that they can all fit into the same set of columns). Let's now go back to the Image example. We will now collect a number of observations of some noisy data into a HoloMap and display it:
obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=(0,0,3,3))
                       for i in range(3)}, key_dimensions=['Observation'])
obs_hmap
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
bcd437ab8cf800ccf67fdaf368f016fa
Now we can serialize this data just as before, where this time we get a four-column (4D) table. The key dimensions of both the HoloMap and the Images, as well as the z-values of each Image, are all merged into a single table. We can visualize the samples we have collected by converting it to a Scatter3D object.
%%opts Layout [fig_size=150] Scatter3D [color_index=3 size_index=None] (cmap='hot' edgecolor='k' s=50)
obs_hmap.table().to.scatter3d() + obs_hmap.table()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
005e9a2924942c9a04ee612f95db45ca
Here the z dimension is shown by color, as in the original images, and the other three dimensions determine where the datapoint is shown in 3D. This way of deconstructing will work for any data structure that satisfies the conditions described above, no matter how nested. If we vary the amount of noise while continuing to perform multiple observations, we can create an NdLayout of HoloMaps, one for each level of noise, and animated by the observation number.
from itertools import product

extents = (0,0,3,3)
error_hmap = hv.HoloMap({(i, j): hv.Image(j*np.random.randn(3, 3), bounds=extents)
                         for i, j in product(range(3), np.linspace(0, 1, 3))},
                        key_dimensions=['Observation', 'noise'])
noise_layout = error_hmap.layout('noise')
noise_layout
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
08dbafe68aa6606b5d5486f9e9ca44c1
Applying operations to the data Sorting by columns Once data is in columnar form, it is simple to apply a variety of operations. For instance, a Dataset can be sorted by its dimensions using the .sort() method. By default, this method will sort by the key dimensions, but any other dimension(s) can be supplied to specify sorting along any other dimensions:
bars = hv.Bars((['C', 'A', 'B', 'D'], [2, 7, 3, 4]))
bars + bars.sort() + bars.sort(['y'])
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
16bf42b19dcfcbcdf43e38cae8bc2154
Working with categorical or grouped data Data is often grouped in various ways, and the Dataset interface provides various means to easily compare between groups and apply statistical aggregates. We'll start by generating some synthetic data with two groups along the x-axis and four groups along the y-axis.
n = np.arange(1000)
xs = np.repeat(range(2), 500)
ys = n%4
zs = np.random.randn(1000)
table = hv.Table((xs, ys, zs), kdims=['x', 'y'], vdims=['z'])
table
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
cc656b3a4869dbea690c78ea7fed86ef
Since there are repeat observations of the same x- and y-values, we have to reduce the data before we display it or else use a datatype that supports plotting distributions in this way. The BoxWhisker type allows doing exactly that:
%%opts BoxWhisker [aspect=2 fig_size=200 bgcolor='w']
hv.BoxWhisker(table)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
4d5bab7613af69cdc74721ba2750a8ca
Aggregating/Reducing dimensions Most types require the data to be non-duplicated before being displayed. For this purpose, HoloViews makes it easy to aggregate and reduce the data. These two operations are simple inverses of each other--aggregate computes a statistic for each group in the supplied dimensions, while reduce combines all the groups except the supplied dimensions. Supplying only a function and no dimensions will simply aggregate or reduce all available key dimensions.
%%opts Bars [show_legend=False] {+axiswise}
hv.Bars(table).aggregate(function=np.mean) + hv.Bars(table).reduce(x=np.mean)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
89057aa8e06f934c69a596cd1bf97742
(A) aggregates over both the x and y dimension, computing the mean for each x/y group, while (B) reduces the x dimension leaving just the mean for each group along y. Collapsing multiple Dataset Elements When multiple observations are broken out into a HoloMap they can easily be combined using the collapse method. Here we create a number of Curves with increasingly larger y-values. By collapsing them with a function and a spreadfn we can compute the mean curve with a confidence interval. We then simply cast the collapsed Curve to a Spread and Curve Element to visualize them.
hmap = hv.HoloMap({i: hv.Curve(np.arange(10)*i) for i in range(10)})
collapsed = hmap.collapse(function=np.mean, spreadfn=np.std)
hv.Spread(collapsed) * hv.Curve(collapsed) + collapsed.table()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
81824b31aec5facb1bedc107838dc3ec
Working with complex data In the last section we only scratched the surface of what the Dataset interface can do. It really comes into its own when working with high-dimensional datasets. As an illustration, we'll load a dataset of some macro-economic indicators for OECD countries from 1964-1990, cached on the HoloViews website.
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
dimensions = {'unem': 'Unemployment', 'capmob': 'Capital Mobility',
              'gdp': 'GDP Growth', 'trade': 'Trade',
              'year': 'Year', 'country': 'Country'}
macro_df = macro_df.rename(columns=dimensions)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
be3d112728de4bd7f0e3be4ee00ea05c
We'll also take this opportunity to set default options for all the following plots.
%output dpi=100
options = hv.Store.options()
opts = hv.Options('plot', aspect=2, fig_size=250, show_frame=False,
                  show_grid=True, legend_position='right')
options.NdOverlay = opts
options.Overlay = opts
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
66d797b01b318554aede9ee3e41539af
Loading the data As we saw above, we can supply a dataframe to any Dataset type. When dealing with so many dimensions it would be cumbersome to supply all the dimensions explicitly, but luckily Dataset can easily infer the dimensions from the dataframe itself. We simply supply the kdims, and it will infer that all other numeric dimensions should be treated as value dimensions (vdims).
macro = hv.Table(macro_df, kdims=['Year', 'Country'])
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
e820e7d4ef5444dd01590f9a0260c409
To get an overview of the data we'll quickly sort it and then view the data for one year.
%%opts Table [aspect=1.5 fig_size=300]
macro = macro.sort()
macro[1988]
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
4525ac9994e9cd739e138c7b10dbe45d
Most of the examples above focus on converting a Table to simple Element types, but HoloViews also provides powerful container objects to explore high-dimensional data, such as HoloMap, NdOverlay, NdLayout, and GridSpace. HoloMaps work as a useful interchange format from which you can conveniently convert to the other container types using its .overlay(), .layout(), and .grid() methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here Year is a key dimension and GDP Growth a value dimension. We are then left with the Country dimension, which we can overlay using the .overlay() method.
%%opts Curve (color=Palette('Set3'))
gdp_curves = macro.to.curve('Year', 'GDP Growth')
gdp_curves.overlay('Country')
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
2177fab1299f17d481d7b657a1951b85
Now that we've extracted the gdp_curves, we can apply some operations to them. As in the simpler example above we will collapse the HoloMap of Curves using a number of functions to visualize the distribution of GDP Growth rates over time. First we find the mean curve with np.std as the spreadfn and cast the result to a Spread type, then we compute the min, mean and max curve in the same way and put them all inside an Overlay.
%%opts Overlay [bgcolor='w' legend_position='top_right'] Curve (color='k' linewidth=1) Spread (facecolor='gray' alpha=0.2)
hv.Spread(gdp_curves.collapse('Country', np.mean, np.std), label='std') *\
hv.Overlay([gdp_curves.collapse('Country', fn).relabel(name)(style=dict(linestyle=ls))
            for name, fn, ls in [('max', np.max, '--'), ('mean', np.mean, '-'), ('min', np.min, '--')]])
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
edae7f5cf99d46d3469f932c0c327911
Many HoloViews Element types support multiple kdims, including HeatMap, Points, Scatter, Scatter3D, and Bars. Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group, and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally, we choose to color the Bars for each item in the stack.
%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]
%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))
macro.to.bars(['Country', 'Year'], 'Trade', [])
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
08a65820ff50da386744502ea4f4175b
This plot contains a lot of data, and so it's probably a good idea to focus on specific aspects of it, telling a simpler story about them. For instance, using the .select method we can then customize the palettes (e.g. to use consistent colors per country across multiple analyses). Palettes can be customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overridden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.
%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])
countries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}
macro.to.bars(['Country', 'Year'], 'Unemployment').select(Year=(1978, 1985), Country=countries)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
cb6e16e172083a6da53f95ab358f2ebc
Many HoloViews Elements support multiple key and value dimensions. A HeatMap is indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other. Before we display the Layout let's apply some styling; we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.
%opts HeatMap [show_values=False xticks=40 xrotation=90 aspect=1.2 invert_yaxis=True colorbar=True]
%opts Layout [figure_size=120 aspect_weight=0.5 hspace=0.8 vspace=0]
hv.Layout([macro.to.heatmap(['Year', 'Country'], value)
           for value in macro.data.columns[2:]]).cols(2)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
4a7e03d482447a757d5944afca42294c
Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements, for example, support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position. As for the Curves above we supply 'Year' as the sole key dimension and rely on the Table to automatically convert the Country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.
%%opts Scatter [scaling_method='width' scaling_factor=2] (color=Palette('Set3') edgecolors='k')
gdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])
gdp_unem_scatter.overlay('Country')
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
6d5618ac2bf699167b920deff2cf5a0c
In this way we can plot any dimension against any other dimension, very easily allowing us to iterate through different ways of revealing relationships in the dataset.
%%opts NdOverlay [legend_cols=2] Scatter [size_index=1] (color=Palette('Blues'))
macro.to.scatter('GDP Growth', 'Unemployment', ['Year']).overlay()
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
060645fab6c432d16d7345e406e01d4e
This view, for example, immediately highlights the high unemployment rates of the 1980s. Since all HoloViews Elements are composable, we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points (which indicate the unemployment rate by size) and annotate the data with some descriptions of what happened economically in these years.
%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')
macro_overlay = gdp_curves * gdp_unem_scatter
annotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\
hv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\n Recession', 'v')
macro_overlay * annotations
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
6b7b25d089ebef1c3cb6f314317c9724
Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks. While looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side, e.g. for non-interactive publications. We can easily achieve this by selecting the countries we want to view and then applying the .layout method. We'll also want to restore the square aspect ratio so the plots compose nicely.
%opts Overlay [aspect=1]
%%opts NdLayout [figure_size=100] Scatter [color_index=2] (cmap='Reds')
countries = {'United States', 'Canada', 'United Kingdom'}
(gdp_curves * gdp_unem_scatter).select(Country=countries).layout('Country')
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
e975a4249a45860d9de3875b6fc7be1f
Finally, let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:
%%opts Layout [fig_size=100] Scatter [color_index=2] (cmap='Reds')
(macro_overlay.relabel('GDP Growth', depth=1) +\
macro.to.curve('Year', 'Unemployment', ['Country'], group='Unemployment',) +\
macro.to.curve('Year', 'Trade', ['Country'], group='Trade') +\
macro.to.scatter('GDP Growth', 'Unemployment', ['Country'])).cols(2)
doc/Tutorials/Columnar_Data.ipynb
vascotenner/holoviews
bsd-3-clause
7c3365a1cd6f6e76dea93f0669b39102
Find maximum of Bernoulli distribution
Single experiment
$$\phi(x) = p^{x} \cdot (1 - p)^{1 - x}$$
Series of experiments
$$\mathcal{L}(p|x) = \prod_{i=1}^{n} p^{x_{i}} \cdot (1-p)^{1-x_{i}}$$
Hints: sympy.diff(), sympy.expand(), sympy.expand_log(), sympy.solve(), sympy.symbols(), sympy gotchas
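For reference, the closed-form result that the sympy cell below should recover can be worked out by hand (a standard derivation added here for clarity, not part of the original notebook). Setting the derivative of the log-likelihood to zero,
$$\log \mathcal{L}(p|x) = \sum_{i=1}^{n}\left[x_i \log p + (1-x_i)\log(1-p)\right], \qquad \frac{\partial \log \mathcal{L}}{\partial p} = \frac{\sum_i x_i}{p} - \frac{n-\sum_i x_i}{1-p} = 0 \;\Rightarrow\; \hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i,$$
i.e. the maximum likelihood estimate is simply the sample mean of the observed outcomes.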
import sympy
from sympy.abc import x

p = sympy.symbols('p', positive=True)
phi = p ** x * (1 - p) ** (1 - x)
L = np.prod([phi.subs(x, i) for i in xs])  # objective function to maximize
log_L = sympy.expand_log(sympy.log(L))
sol = sympy.solve(sympy.diff(log_L, p), p)[0]

import matplotlib.pyplot as plt

x_space = np.linspace(1/100, 1, 100, endpoint=False)
plt.plot(x_space,
         list(map(sympy.lambdify(p, log_L, 'numpy'), x_space)),
         sol, log_L.subs(p, sol), 'o',
         p_true, log_L.subs(p, p_true), 's',
         )
plt.xlabel('$p$', fontsize=18)
plt.ylabel('Likelihood', fontsize=18)
plt.title('Estimate not equal to true value', fontsize=18)
plt.grid(True)
plt.show()
mle.ipynb
hyzhak/mle
mit
7f2aa1e2b57622f25a1f154d003b0127
Empirically examine the behavior of the maximum likelihood estimator. Hint: evalf()
def estimator_gen(niter=10, ns=100):
    """
    generate data to estimate distribution of maximum likelihood estimator
    """
    x = sympy.symbols('x', real=True)
    phi = p**x*(1-p)**(1-x)
    for i in range(niter):
        xs = sample(ns)  # generate some samples from the experiment
        L = np.prod([phi.subs(x,i) for i in xs])  # objective function to maximize
        log_L = sympy.expand_log(sympy.log(L))
        sol = sympy.solve(sympy.diff(log_L, p), p)[0]
        yield float(sol.evalf())

entries = list(estimator_gen(100))  # this may take awhile, depending on how much data you want to generate
plt.hist(entries)  # histogram of maximum likelihood estimator
plt.title('$\mu={:3.3f},\sigma={:3.3f}$'.format(np.mean(entries), np.std(entries)), fontsize=18)
plt.show()
mle.ipynb
hyzhak/mle
mit
e41afa8c6d85d59a675120a98ade61b2
Dynamics of the MLE as a function of sample-sequence length
def estimator_dynamics(ns_space, num_tries=20):
    for ns in ns_space:
        estimations = list(estimator_gen(num_tries, ns))
        yield np.mean(estimations), np.std(estimations)

ns_space = list(range(10, 100, 5))
entries = list(estimator_dynamics(ns_space))
entries_mean = list(map(lambda e: e[0], entries))
entries_std = list(map(lambda e: e[1], entries))

plt.errorbar(ns_space, entries_mean, entries_std, fmt='-o')
plt.show()
mle.ipynb
hyzhak/mle
mit
175c199d6f3bdce610039efd699da540
I'll fit a significantly larger vocabulary this time, as the embeddings are basically given to us.
num_words = 5000
max_len = 400

tok = Tokenizer(num_words)
tok.fit_on_texts(input_text[:25000])

X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]

X_train = sequence.pad_sequences(X_train, maxlen=max_len)
X_test = sequence.pad_sequences(X_test, maxlen=max_len)

words = []
for iter in range(num_words):
    words += [key for key,value in tok.word_index.items() if value==iter+1]

loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin"
w2v = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)

weights = np.zeros((num_words,300))
for idx, w in enumerate(words):
    try:
        weights[idx,:] = w2v[w]
    except KeyError as e:
        pass

model = Sequential()
model.add(Embedding(num_words, 300, input_length=max_len))
model.add(Dropout(0.5))
model.add(GRU(16,activation='relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.layers[0].set_weights([weights])
model.layers[0].trainable = False

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,
          validation_data=(X_test, y_test))
lectures/lec22/.ipynb_checkpoints/notebook22-checkpoint.ipynb
statsmaths/stat665
gpl-2.0
6967f0e9efcac1acb666b23712af5843
Session 1.4 Collections Sets and dictionaries
# Sets
my_set = set([1, 2, 3, 3, 3, 4])
print(my_set)
len(my_set)
my_set.add(3)  # sets are unordered
print(my_set)
my_set.remove(3)
print(my_set)

# set operation using union | or intersection &
my_first_set = set([1, 2, 4, 6, 8])
my_second_set = set([8, 9, 10])
my_first_set | my_second_set
my_first_set & my_second_set
live/python_basic_1_4_live.ipynb
pycam/python-basic
unlicense
88c1a5cf003fcad52dc1eae8b11ea93d
Exercises 1.4.1 Given the protein sequence "MPISEPTFFEIF", find the unique amino acids in the sequence.
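A minimal sketch of one way to solve exercise 1.4.1 using the set type introduced above (the variable names are illustrative, not from the course material):
# Find the unique amino acids in the protein sequence
protein = "MPISEPTFFEIF"
unique_amino_acids = set(protein)  # a set keeps each character only once
print(unique_amino_acids)
print(len(unique_amino_acids), "unique amino acids")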
# Dictionaries are collections of key/value pairs
my_dict = {'A': 'Adenine', 'C': 'Cytosine', 'T': 'Thymine', 'G': 'Guanine'}
print(my_dict)
my_dict['C']
my_dict['N']
?my_dict.get
my_dict.get('N', 'unknown')
print(my_dict)
len(my_dict)
type(my_dict)
'T' in my_dict

# Assign new key/value pair
my_dict['Y'] = 'Pyrimidine'
print(my_dict)
my_dict['Y'] = 'Cytosine or Thymine'
print(my_dict)
del my_dict['Y']
print(my_dict)
help(dict)
my_dict.keys()
list(my_dict.keys())
my_dict.values()
my_dict.items()
live/python_basic_1_4_live.ipynb
pycam/python-basic
unlicense
7fa3f517ce9e760c0846fb27cf99449a
To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data:
def create_data(N, period=2.5, err=0.1, rseed=0):
    rng = np.random.RandomState(rseed)
    t = np.arange(N, dtype=float) + 0.3 * rng.randn(N)
    y = np.sin(2 * np.pi * t / period) + err * rng.randn(N)
    return t, y, err

t, y, dy = create_data(100, period=20)
plt.errorbar(t, y, dy, fmt='o');
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
fbebb1765beebdc435d04ad08bc8bf65
From this, our algorithm should be able to identify any periodicity that is present. Choosing the Frequency Grid The Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose? It turns out that this question is very important. If you choose the frequency spacing poorly, it may lead you to miss a strong periodic signal in the data! Frequency spacing First, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \cdot f$ complete cycles. If our error in frequency is $\delta f$, then $T\cdot\delta f$ is the error in number of cycles between the endpoints of the data. If this error is a significant fraction of a cycle, this will cause problems. This gives us the criterion $$ T\cdot\delta f \ll 1 $$ Commonly, we'll choose some oversampling factor around 5 and use $\delta f = (5T)^{-1}$ as our frequency grid spacing. Frequency limits Next, we need to choose the limits of the frequency grid. On the low end, $f=0$ is suitable, but causes some problems – we'll go one step away and use $\delta f$ as our minimum frequency. But on the high end, we need to make a choice: what's the highest frequency we'd trust our data to be sensitive to? At this point, many people are tempted to mis-apply the Nyquist-Shannon sampling theorem, and choose some version of the Nyquist limit for the data. But this is entirely wrong! The Nyquist frequency applies for regularly-sampled data, but irregularly-sampled data can be sensitive to much, much higher frequencies, and the upper limit should be determined based on what kind of signals you are looking for. Still, a common (if dubious) rule-of-thumb is that the high frequency is some multiple of what Press & Rybicki call the "average" Nyquist frequency, $$ \hat{f}_{Ny} = \frac{N}{2T} $$ With this in mind, we'll use the following function to determine a suitable frequency grid:
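Putting the two rules together gives a rough count of the required grid size (this arithmetic is implied by the discussion above rather than stated in it):
$$ N_f = \frac{f_{max}}{\delta f} = \frac{n_{ny}\,\hat{f}_{Ny}}{1/(n_{os} T)} = \frac{n_{os}\, n_{ny}\, N}{2}, $$
so with the defaults used below (oversampling $n_{os}=5$, Nyquist factor $n_{ny}=3$) and $N=100$ data points, the grid contains roughly 750 candidate frequencies.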
def freq_grid(t, oversampling=5, nyquist_factor=3):
    T = t.max() - t.min()
    N = len(t)

    df = 1. / (oversampling * T)
    fmax = 0.5 * nyquist_factor * N / T
    N = int(fmax // df)
    return df + df * np.arange(N)
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
6ade5ec8a7b3c3c50dfaa092a49deb59
Now let's use the gatspy tools to plot the periodogram:
t, y, dy = create_data(100, period=2.5)
freq = freq_grid(t)
print(len(freq))

from gatspy.periodic import LombScargle

model = LombScargle().fit(t, y, dy)
period = 1. / freq
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 5);
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
de020f312d556570370a1afdc81f89cb
The algorithm finds a strong signal at a period of 2.5. To demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it:
t, y, dy = create_data(100, period=0.3)
period = 1. / freq_grid(t, nyquist_factor=10)

model = LombScargle().fit(t, y, dy)
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 1);
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
5ff69bc472abcdcbfa0bbf735d81e90d
With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data! Nevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency. Scaling with $N$ With these rules in mind, we see that the size of the frequency grid is approximately $$ N_f = \frac{f_{max}}{\delta f} \propto \frac{N/(2T)}{1/T} \propto N $$ So for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space. This is the source of the $N^2$ scaling of the typical periodogram: finding periods in $N$ datapoints requires a grid of $\sim 10N$ frequencies, and $O[N^2]$ operations. When $N$ gets very, very large, this becomes a problem. Fast Periodograms with LombScargleFast Finally we get to the meat of this discussion. In a 1989 paper, Press and Rybicki proposed a clever method whereby a Fast Fourier Transform is used on a grid extirpolated from the original data, such that this problem can be solved in $O[N\log N]$ time. The gatspy package contains a pure-Python implementation of this algorithm, and we'll explore it here. If you're interested in seeing how the algorithm works in Python, check out the code in the gatspy source. It's far more readable and understandable than the Fortran source presented in Press et al. For convenience, the implementation has a periodogram_auto method which automatically selects a frequency/period range based on an oversampling factor and a nyquist factor:
from gatspy.periodic import LombScargleFast
help(LombScargleFast.periodogram_auto)

from gatspy.periodic import LombScargleFast

t, y, dy = create_data(100)
model = LombScargleFast().fit(t, y, dy)
period, power = model.periodogram_auto()
plt.plot(period, power)
plt.xlim(0, 5);
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
83d0a71920ff1139e198d890791d0a13
Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using LombScargleAstroML (a fast implementation of the $O[N^2]$ algorithm) and LombScargleFast, which is the fast FFT-based implementation:
from time import time
from gatspy.periodic import LombScargleAstroML, LombScargleFast

def get_time(N, Model):
    t, y, dy = create_data(N)

    model = Model().fit(t, y, dy)
    t0 = time()
    model.periodogram_auto()
    t1 = time()
    result = t1 - t0

    # for fast operations, we should do several and take the median
    if result < 0.1:
        N = min(50, 0.5 / result)
        times = []
        for i in range(5):
            t0 = time()
            model.periodogram_auto()
            t1 = time()
            times.append(t1 - t0)
        result = np.median(times)
    return result

N_obs = list(map(int, 10 ** np.linspace(1, 4, 5)))
times1 = [get_time(N, LombScargleAstroML) for N in N_obs]
times2 = [get_time(N, LombScargleFast) for N in N_obs]

plt.loglog(N_obs, times1, label='Naive Implementation')
plt.loglog(N_obs, times2, label='FFT Implementation')
plt.xlabel('N observations')
plt.ylabel('t (sec)')
plt.legend(loc='upper left');
examples/FastLombScargle.ipynb
nhuntwalker/gatspy
bsd-2-clause
f60641000b692a8c224e4f8c686fd54a
Now, let's apply the theory of rotation matrices to write some code which will rotate a vector by amount $\theta$. The function rotmat(th) returns the rotation matrix.
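For reference, the standard 2D rotation matrix that rotmat(th) encodes, and its action on a column vector, is
$$ R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad R(\theta)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}. $$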
def rotmat(th):
    rotator = np.array([[ma.cos(th), -ma.sin(th)],[ma.sin(th), ma.cos(th)]])
    return rotator
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
ff58c5f5bb9ac340bc20f482fb02b0f9
This function rotation(th, vec) takes in a rotation angle and vector input and returns a tuple of numpy arrays which can be animated to create a "smooth transition" of the rotation using Plotly Animate.
def rotation(th, vec):
    # Parameters
    t = np.linspace(0,1,50)
    tt = th*t

    # Rotation matrix
    BigR = np.identity(2)
    for i in range(len(tt)-1):
        BigR = np.vstack((BigR,rotmat(tt[i+1])))

    newvec = np.matmul(BigR,vec)
    x = newvec[::2]
    y = newvec[1::2]
    return (x,y)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
bd05deccbc4a8df6a362668c374fa31d
In the cell below, enter a rotation angle and vector inside the rotation() function which has some inputs inside already and hit shift enter to generate an animation of the rotation! (<b>N.B. Don't worry too much if you're not familiar with the plotly syntax, it's more important you understand what the matrices are doing, the cell will run itself after you choose the input arguments and hit Shift + Enter</b>)
# Enter a 2D vector here...
vec = [1,0]
# Enter rotation angle here...
th = 4

(x0,y0) = rotation(th, vec)
x0 = list(x0)
y0 = list(y0)

# Syntax for plotly, see documentation for more info
data = [{"x": [x0[i],0], "y": [y0[i],0], "frame": i} for i in range(len(x0))]

figure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],
          'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},
                     'yaxis': {'range': [-2, 2], 'autorange': False},
                     'height': 600,
                     'width': 600,
                     'title': 'Rotation Animation',
                     'updatemenus': [{'type': 'buttons',
                                      'buttons': [{'label': 'Play',
                                                   'method': 'animate',
                                                   'args': [None,
                                                            dict(frame=dict(duration=50, redraw=False),
                                                                 transition=dict(duration=50),
                                                                 fromcurrent=True,
                                                                 mode='immediate')]}]}]
                    },
          'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x0))]
         }

# Plot
iplot(figure)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
373d26913f4d092d35c029e34915f976
3. Scaling Matrices Now we are familiar with rotation matrices, we will move onto another type of matrix transformation known as a "scaling" matrix. Scaling matrices have the form: <br> <br> $$ \text{Scale} = \begin{pmatrix} s1 & 0 \\ 0 & s2 \end{pmatrix} $$ <br> Now let's look at what this matrix does to an arbitrary vector $(x, y)$: <br><br> $$ \begin{pmatrix} s1 & 0 \\ 0 & s2 \end{pmatrix}\begin{pmatrix} x \\ y\end{pmatrix} = s1\begin{pmatrix}x\\0\end{pmatrix}+s2\begin{pmatrix}0\\y\end{pmatrix}$$ <br> As we can see, this "scale" matrix scales the vector in the $x$-direction by a factor $s1$ and scales the vector in the $y$-direction by a factor $s2$. Now we write a function scale(vec, *args) which takes in a vector input as well as an additional 1 OR 2 arguments. If one is given, then a matrix which scales both $x$ and $y$ directions equally is returned while if 2 are given then a matrix which scales by the arguments given is returned.
# Input vector, scale 1, scale 2 as arguments
def scale(vec, *args):
    assert len(vec)==2, "Please provide a 2D vector for the first argument"
    assert len(args)==1 or len(args)==2, "Please provide 1 or 2 scale arguments"

    t = np.linspace(1,args[0],50)

    # If only one scale argument given then scale in both directions by same amount
    if len(args) == 1:
        x = vec[0]*t
        y = vec[1]*t
        return(x,y)
    # If two scale arguments given then scale individual directions
    else:
        s = np.linspace(1,args[1],50)
        x = vec[0]*t
        y = vec[1]*s
        return(x,y)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
0d6c51e5f7eca2da1215f879a7a1ff46
Now try it for yourself by running the function with your own inputs, by default 2 scale arguments have been inputted but you can try 1 if you like as well.
# Again input vector here
vec = [1,1]
# Arguments here
s1 = 2
s2 = 3

(x1,y1) = scale(vec, s1, s2)
x1 = list(x1)
y1 = list(y1)

# Plotly syntax again
data = [{"x": [x1[i],0], "y": [y1[i],0], "frame": i} for i in range(len(x1))]

figure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],
          'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},
                     'yaxis': {'range': [-2, 2], 'autorange': False},
                     'height': 600,
                     'width': 600,
                     'title': 'Scale Animation',
                     'updatemenus': [{'type': 'buttons',
                                      'buttons': [{'label': 'Play',
                                                   'method': 'animate',
                                                   'args': [None,
                                                            dict(frame=dict(duration=50, redraw=False),
                                                                 transition=dict(duration=50),
                                                                 fromcurrent=True,
                                                                 mode='immediate')]}]}]
                    },
          'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x1))]
         }

iplot(figure)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
edd3cf1392678b84ea9a7a52d0eebfa6
4. Custom Matrix Now we have explained some basic matrix transformations, feel free to use the following code to create your own 2x2 matrix transformations.
# Custom 2D transformation
def custom(vec):
    print("Enter values for 2x2 matrix [[a,b],[c,d]] ")
    a = input("Enter a value for a: ")
    b = input("Enter a value for b: ")
    c = input("Enter a value for c: ")
    d = input("Enter a value for d: ")
    try:
        a = float(a)
    except ValueError:
        print("Enter a float or integer for a")
    try:
        b = float(b)
    except ValueError:
        print("Enter a float or integer for b")
    try:
        c = float(c)
    except ValueError:
        print("Enter a float or integer for c")
    try:
        d = float(d)
    except ValueError:
        print("Enter a float or integer for d")

    A = [[a,b],[c,d]]
    t = np.linspace(0,1,50)
    w = np.matmul(A,vec)-vec
    x = [vec[0]+tt*w[0] for tt in t]
    y = [vec[1]+tt*w[1] for tt in t]
    return(x,y)

(x2,y2) = custom([1,1])
x2 = list(x2)
y2 = list(y2)

data = [{"x": [x2[i],0], "y": [y2[i],0], "frame": i} for i in range(len(x2))]

figure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],
          'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},
                     'yaxis': {'range': [-2, 2], 'autorange': False},
                     'height': 600,
                     'width': 600,
                     'title': 'Custom Animation',
                     'updatemenus': [{'type': 'buttons',
                                      'buttons': [{'label': 'Play',
                                                   'method': 'animate',
                                                   'args': [None,
                                                            dict(frame=dict(duration=50, redraw=False),
                                                                 transition=dict(duration=50),
                                                                 fromcurrent=True,
                                                                 mode='immediate')]}]}]
                    },
          'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x2))]
         }

iplot(figure)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
b636abf978d4f3a01475384d3e60cec6
5. Skew Matrices For the next matrix we will use a slightly different approach to visualize what this transformation does. Instead of taking one vector and following what the matrix does to it, we will take 3 vectors ((1, 0), (1, 1) and (0, 1)) and look at what the matrix does to the entire area captured between these 3 points and the origin (i.e. the unit box). Why is this? <br> Well, matrix transformations are linear transformations and any point inside the box is a linear combination of $\mathbf{\hat{i}},\,\mathbf{\hat{j}}$ unit vectors. Consider a matrix $A$ acting upon a vector (x,y). <br><br> $$ A \begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = x\begin{pmatrix}a\\c\end{pmatrix}+y\begin{pmatrix}b\\d\end{pmatrix} $$ <br> As we can see, the $\mathbf{\hat{i}},\,\mathbf{\hat{j}}$ unit vectors are mapped to vectors $(a,\,c)$ and $(b,\,d)$ , respectively, so any points inside the unit square are mapped inside the parallelogram formed by the 2 vectors $(a,\,c)$ and $(b,\,d)$, (see the <b>Parallelepiped</b> visualization for more info). To visualize this, let's write a function which returns a skew matrix and see how it deforms the unit square. It's okay if you're not sure what a skew matrix is or what it does as you'll see what happens when we make the animation.
def skew(axis, vec):
    t = np.linspace(0,1,50)
    # Skew in x-direction
    if axis == 0:
        A = [[1,1],[0,1]]
        w = np.matmul(A,vec)-vec
        x = [vec[0]+tt*w[0] for tt in t]
        y = [vec[1]+tt*w[1] for tt in t]
        return(x, y)
    # Skew in y-direction
    elif axis == 1:
        A = [[1,0],[1,1]]
        w = np.matmul(A,vec)-vec
        x = [vec[0]+tt*w[0] for tt in t]
        y = [vec[1]+tt*w[1] for tt in t]
        return(x, y)
    else:
        # raise (not return) so an invalid axis actually signals an error
        raise ValueError('Axis must be 0 or 1')
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
883bc37c466e6c6c040825306f3dd929
Now we write a function which will take 6 arrays in total (2 for (1, 0), 2 for (0, 1) and 2 for (1, 1)) and shows an animation of how the 3 vectors are transformed. Remember that we can forget about the origin as it is always mapped to itself (this is a standard property of linear transformations).
# Function that returns data in a format to be used by plotly and then plots it
def sqtransformation(x0,x1,x2,y0,y1,y2):
    data = [{"x": [0,x0[i],x1[i],x2[i],0], "y": [0,y0[i],y1[i],y2[i],0], "frame": i} for i in range(len(x0))]
    figure = {'data': [{'x': data[0]['x'], 'y': data[0]['y'], 'fill':'tonexty'}],
              'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},
                         'yaxis': {'range': [-2, 2], 'autorange': False},
                         'height': 600,
                         'width': 600,
                         'title': 'Square Animation',
                         'updatemenus': [{'type': 'buttons',
                                          'buttons': [{'label': 'Play',
                                                       'method': 'animate',
                                                       'args': [None,
                                                                dict(frame=dict(duration=50, redraw=False),
                                                                     transition=dict(duration=50),
                                                                     fromcurrent=True,
                                                                     mode='immediate')]}]}]
                        },
              'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x0))]
             }
    iplot(figure)

# Transform the 3 vectors that form the unit box.
(x0,y0) = skew(1,[1,0])
(x1,y1) = skew(1,[1,1])
(x2,y2) = skew(1,[0,1])

sqtransformation(x0,x1,x2,y0,y1,y2)
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
cydcowley/Imperial-Visualizations
mit
2b7890a8cd4a5d2fa4e3e00d9444db0e
1. Single Day Analysis
ref_date = '2020-01-02'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('hs300')
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EMA5D', codes, 300, industry='sw', risk_model='short')
all_styles = risk_styles + industry_styles + ['COUNTRY']

risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
notebooks/Example 6 - Target Volatility Builder.ipynb
wegamekinglc/alpha-mind
mit
64882faebe36c19d2d0feaeb3af24d76
Portfolio Construction using the EPS factor as the alpha factor; short selling is forbidden; the volatility target for the active weight is set at the 2.5% annual level.
er = factor['EMA5D'].fillna(factor["EMA5D"].median()).values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025

risk_model = dict(cov=None,
                  factor_cov=risk_cov/10000,
                  factor_loading=risk_exposure,
                  idsync=special_risk ** 2 / 10000.)

status, p_er, p_weight = \
    target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)

sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000

# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
notebooks/Example 6 - Target Volatility Builder.ipynb
wegamekinglc/alpha-mind
mit
95613eca60d022d38e3423c7c9477e19
2. Portfolio Construction: 2016 ~ 2018
""" Back test parameter settings """ start_date = '2020-01-01' end_date = '2020-02-21' freq = '10b' neutralized_risk = industry_styles industry_name = 'sw' industry_level = 1 risk_model = 'short' batch = 0 horizon = map_freq(freq) universe = Universe('hs300') data_source = os.environ['DB_URI'] benchmark_code = 300 target_vol = 0.05 weights_bandwidth = 0.02 """ Factor Model """ alpha_factors = {'f01': CSRank(LAST('EMA5D'))} weights = dict(f01=1.) alpha_model = ConstLinearModel(features=alpha_factors, weights=weights) data_meta = DataMeta(freq=freq, universe=universe, batch=batch, neutralized_risk=neutralized_risk, risk_model='short', pre_process=[winsorize_normal, standardize], post_process=[standardize], warm_start=0, data_source=data_source) """ Constraintes settings """ constraint_risk = ['SIZE', 'SIZENL', 'BETA'] total_risk_names = constraint_risk + ['benchmark', 'total'] b_type = [] l_val = [] u_val = [] previous_pos = pd.DataFrame() rets = [] turn_overs = [] leverags = [] for name in total_risk_names: if name == 'benchmark': b_type.append(BoundaryType.RELATIVE) l_val.append(0.8) u_val.append(1.0) else: b_type.append(BoundaryType.ABSOLUTE) l_val.append(0.0) u_val.append(0.0) bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val) """ Running Settings """ running_setting = RunningSetting(weights_bandwidth=weights_bandwidth, rebalance_method='tv', bounds=bounds, target_vol=target_vol) """ Strategy run """ strategy = Strategy(alpha_model, data_meta, universe=universe, start_date=start_date, end_date=end_date, freq=freq, benchmark=benchmark_code) strategy.prepare_backtest_data() ret_df, positions = strategy.run(running_setting) ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7), title='Fixed freq rebalanced with target vol \ at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol), secondary_y='turn_over')
notebooks/Example 6 - Target Volatility Builder.ipynb
wegamekinglc/alpha-mind
mit
040f422156f18e8de2ddc02a515f4fb4
We're going to load the data in using h5py from an hdf5 file. HDF5 is a file format that allows for very simple storage of numerical data; in this particular case, we'll be loading in a 3D array, and then examining it.
f = h5py.File("/srv/nbgrader/data/koala.hdf5", "r") print(list(f.keys()))
week05/examples_week05.ipynb
UIUC-iSchool-DataViz/spring2017
mit
438449a972bb55de35a9576c9a5dc7da
Here, we load in the data by reading from the key koala that we just found.
koala = f["/koala"][:] print(koala.shape)
week05/examples_week05.ipynb
UIUC-iSchool-DataViz/spring2017
mit
6a4b189eb155ff8b9d7ee5d5ee654162
We'll use subplots to show the maximum value along each of the three axes, along with a histogram of all the values. The .max() function here accepts an axis argument, which means "max along a given axis."
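As a minimal illustration of how the axis argument behaves (a toy array, not the koala data, and assuming numpy is imported as np as elsewhere in this notebook):
a = np.arange(12).reshape(3, 4)  # shape (3, 4)
print(a.max(axis=0))  # collapse the rows: one max per column, shape (4,) -> [ 8  9 10 11]
print(a.max(axis=1))  # collapse the columns: one max per row, shape (3,) -> [ 3  7 11]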
for i in range(3):
    plt.subplot(2,2,i+1)
    plt.imshow(koala.max(axis=i), interpolation='nearest', origin='lower', cmap='viridis')
plt.subplot(2,2,4)
plt.hist(koala.ravel(), bins = 32, log = True)
week05/examples_week05.ipynb
UIUC-iSchool-DataViz/spring2017
mit
3e7cde3178ae7ab864fdf06b6ccf9112
We'll make a slicer, too -- this one is along the x value. Note how we take a floating point value and turn that into an index to make the image.
def xslicer(coord = 0.5):
    # We're accepting a float here, so we convert that into the right index we want
    ind = int(coord * koala.shape[0])
    plt.imshow(koala[ind,:,:], interpolation = 'nearest', origin='lower')

ipywidgets.interact(xslicer, coord = (0.0, 1.0, 0.01))
week05/examples_week05.ipynb
UIUC-iSchool-DataViz/spring2017
mit
bbc75cfd7e71fbc8f068fde78413a866
Download the dataset from its repository at github https://github.com/jeanpat/DeepFISH/tree/master/dataset
!wget https://github.com/jeanpat/DeepFISH/blob/master/dataset/LowRes_13434_overlapping_pairs.h5

filename = './LowRes_13434_overlapping_pairs.h5'
h5f = h5py.File(filename,'r')
pairs = h5f['dataset_1'][:]
h5f.close()
print('dataset is a numpy array of shape:', pairs.shape)

N = 11508
grey = pairs[N,:,:,0]
g_truth = pairs[N,:,:,1]
l1, l2, l3, seg = clean_ground_truth(g_truth, size = 1)
test = np.dstack((grey, g_truth))
print(test.shape)
t2 = np.stack((test,test))
print(t2.shape)
notebooks/Clean Dataset from their spurious pixels.ipynb
jeanpat/DeepFISH
gpl-3.0
82a95750922634fc6d2223eb5273f393
Let's compare the ground-truth image before and after cleaning
plt.figure(figsize=(20,10))
plt.subplot(251,xticks=[],yticks=[])
plt.imshow(grey, cmap=plt.cm.gray)
plt.subplot(252,xticks=[],yticks=[])
plt.imshow(g_truth, cmap=plt.cm.flag_r)
plt.subplot(253,xticks=[],yticks=[])
plt.imshow(g_truth == 1, cmap=plt.cm.flag_r)
plt.subplot(254,xticks=[],yticks=[])
plt.imshow(g_truth == 2, cmap=plt.cm.flag_r)
plt.subplot(255,xticks=[],yticks=[])
plt.imshow(g_truth == 3, cmap=plt.cm.flag_r)
#plt.subplot(256,xticks=[],yticks=[])
#plt.imshow(mo.white_tophat(grey, selem = mo.disk(2)) > 0, cmap=plt.cm.jet)
plt.subplot(257,xticks=[],yticks=[])
plt.imshow(l1+2*l2+3*l3, cmap=plt.cm.flag_r)
plt.subplot(258,xticks=[],yticks=[])
plt.imshow(l1, cmap=plt.cm.flag_r)
plt.subplot(259,xticks=[],yticks=[])
plt.imshow(l2, cmap=plt.cm.flag_r)
plt.subplot(2,5,10,xticks=[],yticks=[])
plt.imshow(l3, cmap=plt.cm.flag_r)
notebooks/Clean Dataset from their spurious pixels.ipynb
jeanpat/DeepFISH
gpl-3.0
dd4ab010caa4f51ea1821d5d61b5fe23
Clean the whole dataset
new_data = np.zeros((1,94,93,2), dtype = int)
N = pairs.shape[0]#10
for idx in range(N):
    g_truth = pairs[idx,:,:,1]
    grey = pairs[idx,:,:,0]
    _, _, _, seg = clean_ground_truth(g_truth, size = 1)
    paired = np.dstack((grey, seg))
    #
    #https://stackoverflow.com/questions/7372316/how-to-make-a-2d-numpy-array-a-3d-array/7372678
    #
    new_data = np.concatenate((new_data, paired[newaxis,:, :, :]))

new_data = new_data[1:,:,:,:]

plt.figure(figsize=(20,10))
N=10580
grey = new_data[N,:,:,0]
g_truth = new_data[N,:,:,1]
plt.subplot(121,xticks=[],yticks=[])
plt.imshow(grey, cmap=plt.cm.gray)
plt.subplot(122,xticks=[],yticks=[])
plt.imshow(g_truth, cmap=plt.cm.flag_r)
notebooks/Clean Dataset from their spurious pixels.ipynb
jeanpat/DeepFISH
gpl-3.0
9a80de62316c4ebc3a1e526c4a62a208
Save the dataset using hdf5 format
filename = './Cleaned_LowRes_13434_overlapping_pairs.h5'
hf = h5py.File(filename,'w')
hf.create_dataset('13434_overlapping_chrom_pairs_LowRes', data=new_data, compression='gzip', compression_opts=9)
hf.close()
notebooks/Clean Dataset from their spurious pixels.ipynb
jeanpat/DeepFISH
gpl-3.0
d2bfcc83caad61ad592b38361815ca74
K-means Clustering non-distributed implementation
X, y_true = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0)
# Save simulated data to be used in MapReduce code
np.savetxt("kmeans_simulated_data.txt", X, fmt='%.18e', delimiter=' ')
plt.scatter(X[:, 0], X[:, 1], s=50);

# Write modules for the simulation of local K-Means
def assign_clusters(X, m):
    clusters = {}
    labels = []
    for x in X:
        #Calculate pair wise distance from each centroid
        pair_dist = [(i[0], np.linalg.norm(x-m[i[0]])) for i in enumerate(m)]
        #Sort and select the minimum distance centroid
        best_centroid = min(pair_dist, key=lambda t:t[1])[0]
        labels.append(best_centroid)
        try:
            clusters[best_centroid].append(x)
        except KeyError:
            clusters[best_centroid] = [x]
    return(clusters, labels)

def evaluate_cluster_mean(clusters):
    new_centroid = []
    keys = sorted(clusters.keys())
    for k in keys:
        #Calculate new centroid
        new_centroid.append(np.mean(clusters[k], axis = 0))
    return(new_centroid)

def check_convergence(new_centroid, old_centroid):
    #Check if new and old centroid have changed or not
    error = np.all(np.array(new_centroid) == np.array(old_centroid))
    return(error)

def driver_kmeans(X, K):
    # Initialize Random K centres
    old_centroid = random.sample(list(X), K)
    new_centroid = random.sample(list(X), K)
    #Saving centroid co-ordinates for the comparison with MapReduce code
    np.savetxt("kmeans_cache.txt", new_centroid, fmt='%.18e', delimiter=' ')
    counter = 0
    while not check_convergence(new_centroid, old_centroid):
        old_centroid = new_centroid
        #Map points to nearest centroid
        clusters, labels = assign_clusters(X, new_centroid)
        # Find new centroids
        new_centroid = evaluate_cluster_mean(clusters)
        counter += 1
    return(new_centroid, clusters, labels, counter)

#Driver code to initialize the mapreduce code
#Not used in the current implementation, added for completion
def init_kmeans(X, K):
    centroid = random.sample(list(X), K)
    init_centroid = np.array([np.concatenate(([i[0]], i[1])) for i in enumerate(centroid)])
    np.savetxt("kmeans_cache.txt", init_centroid, fmt='%.18e', delimiter=' ')

centers, d, labels, counter = driver_kmeans(X, 4)

plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')
cx = [i[0] for i in centers]
cy = [i[1] for i in centers]
plt.scatter(cx, cy, c='black', s=200, alpha=0.5);
codes/driver_kmeans.ipynb
r2rahul/numericalanalysis
gpl-2.0
ef6ed96fd9405ac01f7206fca8c09829
Simulating the MapReduce K-Means Algorithm Mapper Script Assumes the mapper data input is in tidy format and all variables are properly encoded
%%writefile mapper_kmeans.py
import sys
import csv
import math
import numpy as np

#Read the centroids iteratively and its co-ordinates
with open('kmeans_cache.txt', 'r') as f:
    fp = csv.reader(f, delimiter = " ")
    m = np.array([[float(i) for i in j] for j in fp])

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip().split()
    features = np.array([float(j) for j in line])
    # Calculate the pair wise distance
    pair_dist = [(i[0], np.linalg.norm(features - m[i[0]])) for i in enumerate(m)]
    #Sort and select the minimum distance centroid
    best_centroid = min(pair_dist, key=lambda t:t[1])[0]
    #emit cluster id and corresponding values
    out_features = ",".join([str(k) for k in features])
    print('{}\t{}'.format(best_centroid, out_features))
codes/driver_kmeans.ipynb
r2rahul/numericalanalysis
gpl-2.0
32ca7c68dec2c6af721a257fd00e7656
Reducer Script
%%writefile reducer_kmeans.py
from operator import itemgetter
import sys
import numpy as np

current_cluster = None
current_val = 0

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    cluster, value = line.split('\t', 1)
    #Convert value to float
    try:
        value = [float(i) for i in value.split(',')]
    except ValueError:
        #Accounts for error in value inputs. Skips the error lines
        continue
    #Cluster id as key and corresponding value is passed here
    if current_cluster == cluster:
        current_val = np.vstack((current_val, value))
    else:
        if current_cluster:
            #Updates the centroids
            center = [str(i) for i in np.mean(current_val, axis = 0)]
            print('{}'.format(" ".join(center)))
        current_val = value
        current_cluster = cluster

# To print the last line/cluster id
if current_cluster == cluster:
    #Updates the centroids
    center = [str(i) for i in np.mean(current_val, axis = 0)]
    print('{}'.format(" ".join(center)))
codes/driver_kmeans.ipynb
r2rahul/numericalanalysis
gpl-2.0
c260d86a11d82183b28d935eaeb69c94
Simulate Job Chaining with a Shell Script. The for loop iterates over each reducer output. Inside the for loop, centroid co-ordinates are updated in kmeans_cache.txt at each iteration. The final output is stored in kmeans_cache.txt.
%%sh
#Initialize the initial clusters
for i in `seq 1 20`;
do
    echo 'Iteration Number = '$i
    cat kmeans_simulated_data.txt | python mapper_kmeans.py | sort | python reducer_kmeans.py > kmeans_temp.txt
    mv kmeans_temp.txt kmeans_cache.txt
done
codes/driver_kmeans.ipynb
r2rahul/numericalanalysis
gpl-2.0
2754cd1749c06f464173cbd408729bf4
Test MapReduce Implementation
#Check if the centroids calculated by the non-distributed and distributed methods are in the same range
def check_mapreduce(centroid_non, centroid_dist):
    #Check if the two sets of centroids are identical
    error = np.all(np.array(centroid_non) == np.array(centroid_dist))
    #error calculation second way: Relative Error
    num_error = np.linalg.norm(np.array(centroid_non) - np.array(centroid_dist))
    return(error, num_error)

#Read the final centroid file
with open('kmeans_cache.txt', 'r') as f:
    fp = csv.reader(f, delimiter = " ")
    centroid_map = np.array([[float(i) for i in j] for j in fp])

flag, relative_error = check_mapreduce(centers, centroid_map)
if flag:
    print("Test Succeeded: Both MapReduce and local algorithm return the same centroids")
elif relative_error < 1e-6:
    msg = "Test Succeeded: Both MapReduce and local algorithm return the same centroids with tolerance = "
    print('{}\t{}'.format(msg, relative_error))
else:
    errmsg = '''Check MapReduce code, perhaps check if both MapReduce and Local are initialized from the same centroids.
    Rerun both the codes multiple times to verify'''
    print(errmsg)
codes/driver_kmeans.ipynb
r2rahul/numericalanalysis
gpl-2.0
b96b5191c180c3efe9d84ca9df2455ea
Value in relationship If we assume that a relationship holds some sort of value, how is that value divided? Think about it...what kinds of value could a relationship hold? If we think about power in terms of an imbalance in social exchange, how is the value of a relationship distributed based on the power of the individuals of the network? Where does power come from? Network Exchange Theory addresses questions of social imbalance and its relation to network structure. <img style="float:left; width: 400px" src="img/Nelson_and_bart.gif" /> Principles of power
g.add_edges_from([("B", "C"), ("B", "D"), ("D", "E")])
nx.draw_networkx(g)
class19.ipynb
davebshow/DH3501
mit
f47052988ed5e850b3fb493b6631863b
Dependence - if relationships confer value, nodes A and C are completely dependent on node B for value. Exclusion - node B can easily exclude node A or C from the value conferred by the network. Satiation - at a certain point, nodes like B begin to see diminishing returns and only maintains relations from which they can receive an unequal share of the value. Betweenness - can confer power, this sort of centrality allows nodes like B to take advantages of structural holes and also control the flow of information througout the network. Note: high betweenness does not always confer an advantage in bargaining situations (as we will soon see). Experimental methodology: Riddle me this... <img style="float:left; width: 300px" src="img/experiment_comic.jpg" /> Recall the experimental methodology typically used to study power and exchange? Get together with your pods and refresh your memories...there are five steps. Are the results of these experiments considered to be robust? Why or why not (according to E & K)? Application: The following visualizations show 4 commonly tested paths. What were the experimental results for each path?
g = nx.Graph([("A", "B")]) nx.draw_networkx(g) plt.title("2-Node Path") g = nx.Graph([("A", "B"), ("B", "C")]) nx.draw_networkx(g) plt.title("3-Node Path") g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")]) nx.draw_networkx(g) plt.title("4-Node Path") g = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]) nx.draw_networkx(g) plt.title("5-Node Path")
class19.ipynb
davebshow/DH3501
mit
704082b03d9563aba21c508304213f32
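To make the betweenness point above concrete, here is a minimal sketch that computes betweenness centrality for the four tested paths; the printed scores are illustrative and not part of the original notes.
import networkx as nx

paths = {
    "2-node": nx.Graph([("A", "B")]),
    "3-node": nx.Graph([("A", "B"), ("B", "C")]),
    "4-node": nx.Graph([("A", "B"), ("B", "C"), ("C", "D")]),
    "5-node": nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]),
}

for name, g in sorted(paths.items()):
    # Betweenness centrality: the fraction of shortest paths that pass through each node.
    bc = nx.betweenness_centrality(g)
    print(name, {n: round(v, 2) for n, v in sorted(bc.items())})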
How about power in a network that looks like this?
g = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D")]) nx.draw_networkx(g) plt.title("Triangle with outlier")
class19.ipynb
davebshow/DH3501
mit
d3aa52db99214146eb2f97c3df9964f8
Or this?
g = nx.Graph([("A", "B"), ("B", "C"), ("C", "A")]) nx.draw_networkx(g) plt.title("Triangle")
class19.ipynb
davebshow/DH3501
mit
2b4471aa0bf80061f8bd1f6f2df45ec2
Locations
HOME_DIR = os.path.expanduser('~').replace('\\', '/') BASE_DIR = '{}/Documents/DANS/projects/has/dacs'.format(HOME_DIR) FM_DIR = '{}/fm'.format(BASE_DIR) FMNS = '{http://www.filemaker.com/fmpxmlresult}' CONFIG_DIR = '.'
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
170afb99a4e91d9d2e418ea45f00b00f
Config All configuration is kept in a single big YAML file.
# Assumed completion of this truncated cell: read the big YAML config file from CONFIG_DIR.
with open('{}/config.yaml'.format(CONFIG_DIR)) as ch:
    CONFIG = yaml.load(ch)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
87f922727673c534517e0d1fe6c3c006
Data description Main source tables and fields to skip
CONFIG = yaml.load(''' mainTables: - contrib - country ''') mainTables = ('contrib', 'country') SKIP_FIELDS = dict( contrib=set(''' dateandtime_ciozero ikid ikid_base find_country_id find_type gnewpassword gnewpassword2 goldpassword help_description help_text message message_allert teller total_costs_total whois '''.strip().split()), country=set(''' '''.strip().split()), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
361529016221f1748bdba04928ac21f3
Fields to merge
MERGE_FIELDS = dict( contrib=dict( academic_entity_url=['academic_entity_url_2'], contribution_url=['contribution_url_2'], contact_person_mail=['contact_person_mail_2'], type_of_inkind=['other_type_of_inkind'], vcc11_name=[ 'vcc12_name', 'vcc21_name', 'vcc22_name', 'vcc31_name', 'vcc32_name', 'vcc41_name', 'vcc42_name', ], vcc_head_decision_vcc11=[ 'vcc_head_decision_vcc12', 'vcc_head_decision_vcc21', 'vcc_head_decision_vcc22', 'vcc_head_decision_vcc31', 'vcc_head_decision_vcc32', 'vcc_head_decision_vcc41', 'vcc_head_decision_vcc42', ], ), country=dict(), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
ddebb406bd00ea3d1fdfca31cb397e90
Fields to rename
MAP_FIELDS = dict( contrib=dict( approved='approved', academic_entity_url='urlAcademic', contribution_url='urlContribution', contact_person_mail='contactPersonEmail', contact_person_name='contactPersonName', costs_description='costDescription', costs_total='costTotal', country='country', creation_date_time='dateCreated', creator='creator', dateandtime_approval='dateApproved', dateandtime_cioapproval='dateApprovedCIO', description_of_contribution='description', disciplines_associated='discipline', last_modifier='modifiedBy', modification_date_time='dateModified', other_keywords='keyword', submit='submitted', tadirah_research_activities='tadirahActivity', tadirah_research_objects='tadirahObject', tadirah_research_techniques='tadirahTechnique', title='title', total_costs_total='costTotalTotal', type_of_inkind='typeContribution', vcc='vcc', vcc11_name='reviewerName', vcc_head_decision='vccDecision', vcc_head_decision_vcc11='reviewerDecision', vcchead_approval='vccApproval', vcchead_disapproval='vccDisApproval', year='year', ), country=dict( countrycode='iso', countryname='name', member_dariah='isMember', ), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
72c68b1109e6918e7fafa3b2345aad87
Fields to split into multiple values
generic = re.compile('[ \t]*[\n+][ \t\n]*')        # split on newlines (with surrounding white space)
genericComma = re.compile('[ \t]*[\n+,;][ \t\n]*') # split on newlines or commas (with surrounding white space)

SPLIT_FIELDS = dict(
    contrib=dict(
        discipline=generic,
        keyword=genericComma,
        typeContribution=generic,
        tadirahActivity=generic,
        tadirahObject=generic,
        tadirahTechnique=generic,
        vcc=generic,
    ),
    country=dict(),
)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
ac4aaac6f29689561dde61bee806b9c2
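A quick illustration of how these split patterns behave; the patterns are copied from the cell above, but the sample strings are invented, not taken from the FileMaker export.
import re

generic = re.compile('[ \t]*[\n+][ \t\n]*')
genericComma = re.compile('[ \t]*[\n+,;][ \t\n]*')

raw_discipline = 'History \n Linguistics\nArchaeology'
raw_keyword = 'TEI, OCR; annotation\ncrowdsourcing'

print(generic.split(raw_discipline))    # ['History', 'Linguistics', 'Archaeology']
print(genericComma.split(raw_keyword))  # ['TEI', 'OCR', 'annotation', 'crowdsourcing']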
Fields to hack
STRIP_NUM = re.compile('^[0-9]\s*\.?\s+') def stripNum(v): return STRIP_NUM.sub('', v) HACK_FIELDS=dict( contrib=dict( tadirahActivity=stripNum, ), country=dict(), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
454d8abdf9a672e7974a40f901973e19
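For example (the sample values are invented), the hack strips a leading number from TADIRAH activity labels:
import re

STRIP_NUM = re.compile(r'^[0-9]\s*\.?\s+')

def stripNum(v):
    return STRIP_NUM.sub('', v)

print(stripNum('1. Capture'))  # 'Capture'
print(stripNum('Capture'))     # unchanged: 'Capture'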
Fields to decompose into several fields
DECOMPOSE_FIELDS=dict( contrib=dict( typeContribution='typeContributionOther', ), country=dict(), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
f01d271638585e2df44c8f3548ffd454
Custom field types
FIELD_TYPE = dict( contrib=dict( costTotal='valuta', dateCreated='datetime', dateModified='datetime', dateApproved='datetime', dateApprovedCIO='datetime', contactPersonEmail='email', submitted='bool', approved='bool', reviewerDecision='bool', vccApproval='bool', vccDecision='bool', vccDisApproval='bool', ), country=dict( isMember='bool', ), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
bb8f494ab9eca28e14c0f36d75a47980
Default values
DEFAULT_VALUES=dict( contrib=dict( dateCreated=datetime(2000,1,1,0,0,0), creator="admin", type_of_inkind="General", ), country=dict(), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
f0ce9c10865c8b8d69090d2ab1830962
Fields to move to other tables
MOVE_FIELDS=dict( contrib=dict( assessment=set(''' approved dateApproved dateApprovedCIO submitted reviewerName reviewerDecision vccDecision vccApproval vccDisApproval '''.strip().split()), ), country=dict(), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
7cbe65eaf1e5708a86bc4e80d3fca8a3
Fields to value lists
MAKE_VALUE_LISTS = dict( contrib=set(''' keyword year '''.strip().split()), ) VALUE_LISTS = dict( contrib=set(''' discipline keyword tadirahActivity tadirahObject tadirahTechnique typeContribution typeContributionOther:typeContribution vcc year '''.strip().split()), ) MOVE_MISSING = dict( contrib='description', )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
5cf8ace9ce6fb75ab69c3b1dfc4a6364
Field values Patterns for value types
# Source field types, including types assigned by type overriding (see FIELD_TYPE_OVERRIDE above). # These will be translated into appropriate SQL field types TYPES = {'bool', 'number', 'decimal', 'text', 'valuta', 'email', 'date', 'datetime'} # dates are already in ISO (date2_pattern). # If we encounter other dates, we could use date_pattern instead) # datetimes are not in iso, they will be transformed to iso. DECIMAL_PATTERN = re.compile( r'^-?[0-9]+\.?[0-9]*' ) DATE_PATTERN = re.compile( r'^\s*([0-9]{2})/([0-9]{2})/([0-9]{4})$' ) DATE2_PATTERN = re.compile( r'^\s*([0-9]{4})-([0-9]{2})-([0-9]{2})$' ) DATETIME_PATTERN = re.compile( r'^\s*([0-9]{2})/([0-9]{2})/([0-9]{4})\s+([0-9]{2}):([0-9]{2})(?::([0-9]{2}))?$' ) # meaningless values will be translated into None NULL_VALUES = { 'http://', 'https://', '@', } BOOL_VALUES = { True: {'Yes', 'YES', 'yes', 1, '1', True}, False: {'No', 'NO', 'no', 0, '0', 'NULL', False}, }
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
83381ca3fcc574fbdaf1d2779c3c5f2a
Date and Time values
def date_repl(match): [d,m,y] = list(match.groups()) return '{}-{}-{}'.format(y,m,d) def date2_repl(match): [y,m,d] = list(match.groups()) return '{}-{}-{}'.format(y,m,d) def datetime_repl(match): [d,m,y,hr,mn,sc] = list(match.groups()) return '{}-{}-{}T{}:{}:{}'.format(y,m,d,hr,mn,sc or '00') def dt(v_raw, i, t, fname): if not DATE2_PATTERN.match(v_raw): warning( 'table `{}` field `{}` record {}: not a valid date: "{}"'.format( t, fname, i, v_raw )) return v_raw return datetime(*map(int, re.split('[:T-]', DATE2_PATTERN.sub(date2_repl, v_raw)))) def dtm(v_raw, i, t, fname): if not DATETIME_PATTERN.match(v_raw): warning( 'table `{}` field `{}` record {}: not a valid date time: "{}"'.format( t, fname, i, v_raw )) return v_raw return datetime(*map(int, re.split('[:T-]', DATETIME_PATTERN.sub(datetime_repl, v_raw))))
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
7d1257571585efe3b8c38fadaa8b795f
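A standalone check of the datetime conversion; the pattern and replacement are copied from the cells above, and the sample timestamp is invented.
import re
from datetime import datetime

DATETIME_PATTERN = re.compile(
    r'^\s*([0-9]{2})/([0-9]{2})/([0-9]{4})\s+([0-9]{2}):([0-9]{2})(?::([0-9]{2}))?$'
)

def datetime_repl(match):
    [d, m, y, hr, mn, sc] = list(match.groups())
    return '{}-{}-{}T{}:{}:{}'.format(y, m, d, hr, mn, sc or '00')

raw = '05/11/2015 14:30'                            # FileMaker-style dd/mm/yyyy, no seconds
iso = DATETIME_PATTERN.sub(datetime_repl, raw)
print(iso)                                          # 2015-11-05T14:30:00
print(datetime(*map(int, re.split('[:T-]', iso))))  # 2015-11-05 14:30:00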
Boolean, numeric and string values
def bools(v_raw, i, t, fname): if v_raw in BOOL_VALUES[True]: return True if v_raw in BOOL_VALUES[False]: return False warning( 'table `{}` field `{}` record {}: not a boolean value: "{}"'.format( t, fname, i, v_raw )) return v_raw def num(v_raw, i, t, fname): if type(v_raw) is int: return v_raw if v_raw.isdigit(): return int(v_raw) warning( 'table `{}` field `{}` record {}: not an integer: "{}"'.format( t, fname, i, v_raw )) return v_raw def decimal(v_raw, i, t, fname): if type(v_raw) is float: return v_raw if v_raw.isdigit(): return float(v_raw) if DECIMAL_PATTERN.match(v_raw): return float(v_raw) warning( 'table `{}` field `{}` record {}: not an integer: "{}"'.format( t, fname, i, v_raw )) return v_raw def email(v_raw, i, t, fname): return v_raw.replace('mailto:', '', 1) if v_raw.startswith('mailto:') else v_raw
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
c878423cd3f2ccbd24802dd76d401d83
Money values
def money(v_raw, i, t, fname): note = ',' in v_raw or '.' in v_raw v = v_raw.strip().lower().replace(' ','').replace('€', '').replace('euro', '').replace('\u00a0', '') for p in range(2,4): # interpret . or , as decimal point if less than 3 digits follow it if len(v) >= p and v[-p] in '.,': v_i = v[::-1] if v_i[p-1] == ',': v_i = v_i.replace(',', 'D', 1) elif v_i[p-1] == '.': v_i = v_i.replace('.', 'D', 1) v = v_i[::-1] v = v.replace('.','').replace(',','') v = v.replace('D', '.') if not v.replace('.','').isdigit(): if len(set(v) & set('0123456789')): warning( 'table `{}` field `{}` record {}: not a decimal number: "{}" <= "{}"'.format( t, fname, i, v, v_raw, )) money_warnings.setdefault('{}:{}'.format(t, fname), {}).setdefault(v, set()).add(v_raw) v = None else: v = None money_notes.setdefault('{}:{}'.format(t, fname), {}).setdefault('NULL', set()).add(v_raw) elif note: money_notes.setdefault('{}:{}'.format(t, fname), {}).setdefault(v, set()).add(v_raw) return None if v == None else float(v)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
d838966f42786a2b6e1b4d07e97be711
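The trickiest part of money() is the decimal-separator heuristic: a '.' or ',' counts as the decimal point only when at most two digits follow it; every other '.' or ',' is treated as a thousands separator. Below is a simplified standalone sketch of that rule (not the exact function above), with invented sample values.
def normalize_amount(v):
    # Mark the decimal separator with a placeholder, drop all other '.'/',',
    # then restore the decimal point.
    v = v.strip().replace(' ', '').replace('€', '').replace('euro', '')
    for p in (2, 3):
        if len(v) >= p and v[-p] in '.,':
            v = v[:-p] + 'D' + v[-p + 1:]
            break
    v = v.replace('.', '').replace(',', '').replace('D', '.')
    return float(v)

for raw in ('1.250,75', '1,250.75', '2500', '99,5'):
    print(raw, '->', normalize_amount(raw))  # 1250.75, 1250.75, 2500.0, 99.5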
Clean up field values
def sanitize(t, i, fname, value): if fname == '_id': return value (ftype, fmult) = allFields[t][fname] newValue = [] for v_raw in value: if v_raw == None or v_raw in NULL_VALUES: continue elif ftype == 'text': v = v_raw elif ftype == 'bool': v = bools(v_raw, i, t, fname) elif ftype == 'number': v = num(v_raw, i, t, fname) elif ftype == 'decimal': v = decimal(v_raw, i, t, fname) elif ftype == 'email': v = email(v_raw, i, t, fname) elif ftype == 'valuta': v = money(v_raw, i, t, fname) elif ftype == 'date': v = dt(v_raw, i, t, fname) elif ftype == 'datetime': v = dtm(v_raw, i, t, fname) else: v = v_raw if v != None and (fmult <= 1 or v != ''): newValue.append(v) if len(newValue) == 0: defValue = DEFAULT_VALUES.get(t, {}).get(fname, None) if defValue != None: newValue = [defValue] return newValue
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
e7eacc4b6cfa6c7dc5170652562f0013
Show information
def info(x): sys.stdout.write('{}\n'.format(x)) def warning(x): sys.stderr.write('{}\n'.format(x)) def showFields(): for (mt, defs) in sorted(allFields.items()): info(mt) for (fname, fdef) in sorted(defs.items()): info('{:>25}: {:<10} ({})'.format(fname, *fdef)) def showdata(rows): for row in rows: for f in sorted(row.items()): info('{:>20} = {}'.format(*f)) info('o-o-o-o-o-o-o-o-o-o-o-o') def showData(): for (mt, rows) in sorted(allData.items()): info('o-o-o-o-o-o-o TABLE {} with {} rows o-o-o-o-o-o-o-o '.format(mt, len(rows))) showdata(rows[0:2]) def showMoney(): for tf in sorted(money_notes): for v in sorted(money_notes[tf]): info('{} "{}" <= {}'.format( tf, v, ' | '.join(money_notes[tf][v]), ))
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
deaf197c71858630e86c28b56d4aa366
Read FM fields
def readFmFields(): for mt in mainTables: infile = '{}/{}.xml'.format(FM_DIR, mt) root = etree.parse(infile, parser).getroot() fieldroots = [x for x in root.iter(FMNS+'METADATA')] fieldroot = fieldroots[0] fields = [] fieldDefs = {} for x in fieldroot.iter(FMNS+'FIELD'): fname = x.get('NAME').lower().replace(' ','_').replace(':', '_') ftype = x.get('TYPE').lower() fmult = int(x.get('MAXREPEAT')) fields.append(fname) fieldDefs[fname] = [ftype, fmult] rawFields[mt] = fields allFields[mt] = fieldDefs for f in SKIP_FIELDS[mt]: del allFields[mt][f] for (f, mfs) in MERGE_FIELDS[mt].items(): allFields[mt][f][1] += 1 for mf in mfs: del allFields[mt][mf] allFields[mt] = dict((MAP_FIELDS[mt][f], v) for (f,v) in allFields[mt].items()) for f in SPLIT_FIELDS[mt]: allFields[mt][f][1] += 1 for (f, fo) in DECOMPOSE_FIELDS[mt].items(): allFields[mt][fo] = allFields[mt][f] allFields[mt][f] = [allFields[mt][f][0], 1] for (f, t) in FIELD_TYPE[mt].items(): allFields[mt][f][0] = t
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
9ec8ec8992c9e8f1ddc44094545c1d31
Read FM data
def readFmData():
    for mt in mainTables:
        infile = '{}/{}.xml'.format(FM_DIR, mt)
        root = etree.parse(infile, parser).getroot()
        dataroots = [x for x in root.iter(FMNS+'RESULTSET')]
        dataroot = dataroots[0]
        rows = []
        rowsRaw = []
        fields = rawFields[mt]
        for (i, r) in enumerate(dataroot.iter(FMNS+'ROW')):
            rowRaw = []
            for c in r.iter(FMNS+'COL'):
                data = [x.text.strip() for x in c.iter(FMNS+'DATA') if x.text != None]
                rowRaw.append(data)
            if len(rowRaw) != len(fields):
                warning('row {}: fields encountered = {}, should be {}'.format(i, len(rowRaw), len(fields)))
            rowsRaw.append(dict((f,v) for (f, v) in zip(fields, rowRaw)))
            row = dict((f,v) for (f, v) in zip(fields, rowRaw) if f not in SKIP_FIELDS[mt])
            for (f, mfs) in MERGE_FIELDS[mt].items():
                for mf in mfs:
                    row[f].extend(row[mf])
                    del row[mf]
            row = dict((MAP_FIELDS[mt][f], v) for (f,v) in row.items())
            for (f, spl) in SPLIT_FIELDS[mt].items():
                row[f] = reduce(lambda x,y: x+y, [spl.split(v) for v in row[f]], [])
            for (f, hack) in HACK_FIELDS[mt].items():
                row[f] = [hack(v) for v in row[f]]
            for (f, fo) in DECOMPOSE_FIELDS[mt].items():
                row[fo] = row[f][1:]
                row[f] = [row[f][0]] if len(row[f]) else []
            row['_id'] = ObjectId()
            #info('\n'.join('{}={}'.format(*x) for x in sorted(row.items())))
            for (f, v) in row.items():
                row[f] = sanitize(mt, i, f, v)
            rows.append(row)
        allData[mt] = rows
        rawData[mt] = rowsRaw
    if money_warnings:
        for tf in sorted(money_warnings):
            for v in sorted(money_warnings[tf]):
                warning('{} "{}" <= {}'.format(
                    tf, v,
                    ' | '.join(money_warnings[tf][v]),
                ))
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
f703f2c95a124736fcfafa8bac1f218b
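For readers unfamiliar with FileMaker's FMPXMLRESULT export format, here is a self-contained sketch of the structure that readFmFields and readFmData walk; the sample XML is invented, not taken from the real export.
from lxml import etree

FMNS = '{http://www.filemaker.com/fmpxmlresult}'

sample = b'''<FMPXMLRESULT xmlns="http://www.filemaker.com/fmpxmlresult">
  <METADATA>
    <FIELD NAME="Title" TYPE="TEXT" MAXREPEAT="1"/>
    <FIELD NAME="Year" TYPE="NUMBER" MAXREPEAT="1"/>
  </METADATA>
  <RESULTSET FOUND="1">
    <ROW><COL><DATA>3DHOP</DATA></COL><COL><DATA>2015</DATA></COL></ROW>
  </RESULTSET>
</FMPXMLRESULT>'''

root = etree.fromstring(sample)
fields = [f.get('NAME').lower() for f in root.iter(FMNS + 'FIELD')]
rows = [[d.text for d in col.iter(FMNS + 'DATA')]
        for row in root.iter(FMNS + 'ROW')
        for col in row.iter(FMNS + 'COL')]
print(fields)  # ['title', 'year']
print(rows)    # [['3DHOP'], ['2015']]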
Split tables into several tables by column groups
def moveFields():
    for mt in mainTables:
        for (omt, mfs) in MOVE_FIELDS[mt].items():
            for mf in mfs:
                allFields.setdefault(omt, dict())[mf] = allFields[mt][mf]
                del allFields[mt][mf]
            allFields.setdefault(omt, dict())['{}_id'.format(mt)] = ('id', 1)
        for row in allData[mt]:
            for (omt, mfs) in MOVE_FIELDS[mt].items():
                orow = dict((mf, row[mf]) for mf in mfs)
                orow['_id'] = ObjectId()
                orow['{}_id'.format(mt)] = row['_id']
                allData.setdefault(omt, []).append(orow)
                for mf in mfs:
                    del row[mf]
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
bd88a8b42a30e56e9c423f276bf13bd0
Value Lists
def readLists(): valueLists = dict() for path in glob('{}/*.txt'.format(FM_DIR)): tname = basename(splitext(path)[0]) data = [] with open(path) as fh: for line in fh: data.append(line.rstrip().split('\t')) valueLists[tname] = data for (vList, data) in valueLists.items(): if vList == 'countryExtra': mapping = dict((x[0], x[1:]) for x in data) else: mapping = dict((i+1, x[0]) for (i, x) in enumerate(data)) valueDict[vList] = mapping allFields[vList] = dict( _id=('id', 1), value=('string', 1), ) for mt in allData: fs = MAKE_VALUE_LISTS.get(mt, set()) for f in fs: valSet = set() for row in allData[mt]: values = row.get(f, []) if type(values) is not list: values = [values] valSet |= set(values) valueDict[f] = dict((i+1, x) for (i, x) in enumerate(sorted(valSet))) allFields[f] = dict( _id=('id', 1), value=('string', 1), )
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
c5eda7aea31f6d844d9982b4e79ae50b
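A small illustration of the two kinds of value lists that readLists handles; the file contents below are invented. An ordinary list becomes a 1-based index-to-value mapping, while countryExtra maps an ISO code to its extra columns.
# Hypothetical file contents, one entry per line, tab-separated where relevant.
vcc_lines = ['VCC1', 'VCC2', 'VCC3']
country_extra_lines = ['NL\tNetherlands\t52.37\t4.90', 'DE\tGermany\t52.52\t13.40']

vcc_data = [line.rstrip().split('\t') for line in vcc_lines]
country_data = [line.rstrip().split('\t') for line in country_extra_lines]

vcc_mapping = dict((i + 1, x[0]) for (i, x) in enumerate(vcc_data))  # ordinary value list
country_mapping = dict((x[0], x[1:]) for x in country_data)          # countryExtra

print(vcc_mapping)      # {1: 'VCC1', 2: 'VCC2', 3: 'VCC3'}
print(country_mapping)  # {'NL': ['Netherlands', '52.37', '4.90'], 'DE': ['Germany', '52.52', '13.40']}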
Country table
def countryTable(): extraInfo = valueDict['countryExtra'] idMapping = dict() for row in allData['country']: for f in row: if type(row[f]) is list: row[f] = row[f][0] iso = row['iso'] row['_id'] = ObjectId() idMapping[iso] = row['_id'] (name, lat, long) = extraInfo[iso] row['latitude'] = lat row['longitude'] = long for row in allData['contrib']: newValue = [] for iso in row['country']: newValue.append(dict(_id=idMapping[iso], iso=iso, value=extraInfo[iso][0])) row['country'] = newValue allFields['country']['_id'] = ('id', 1) allFields['country']['iso'] = ('string', 1) allFields['country']['latitude'] = ('float', 1) allFields['country']['longitude'] = ('float', 1)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
fc38350ba378d81f8700e538bfc9e8ab
User table
def userTable(): idMapping = dict() existingUsers = [] testUsers = [ dict(eppn='suzan', email='suzan1@test.eu', mayLogin=True, authority='local', firstName='Suzan', lastName='Karelse'), dict(eppn='marie', email='suzan2@test.eu', mayLogin=True, authority='local', firstName='Marie', lastName='Pieterse'), dict(eppn='gertjan', email='gertjan@test.eu', mayLogin=False, authority='local', firstName='Gert Jan', lastName='Klein-Holgerink'), dict(eppn='lisa', email='lisa@test.eu', mayLogin=True, authority='local', firstName='Lisa', lastName='de Leeuw'), dict(eppn='dirk', email='dirk@test.eu', mayLogin=True, authority='local', firstName='Dirk', lastName='Roorda'), ] users = collections.defaultdict(set) eppnSet = set() for c in allData['contrib']: crs = c.get('creator', []) + c.get('modifiedBy', []) for cr in crs: eppnSet.add(cr) idMapping = dict((eppn, ObjectId()) for eppn in sorted(eppnSet)) for c in allData['contrib']: c['creator'] = [dict(_id=idMapping[cr]) for cr in c['creator']] if 'modifiedBy' not in c: c['modifiedBy'] = [] else: c['modifiedBy'] = [dict(_id=idMapping[cr]) for cr in c['modifiedBy']] users = dict((i, eppn) for (eppn, i) in idMapping.items()) for (i, eppn) in sorted(users.items()): existingUsers.append(dict(_id=i, eppn=eppn, mayLogin=False, authority='legacy')) for u in testUsers: u['_id'] = ObjectId() idMapping[u['eppn']] = u['_id'] existingUsers.append(u) inGroups = [ dict(eppn='DirkRoorda@dariah.eu', authority='DARIAH', group='system'), dict(eppn='LisaDeLeeuw@dariah.eu', authority='DARIAH', group='office'), dict(eppn='suzan', authority='local', group='auth'), dict(eppn='marie', authority='local', group='auth'), dict(eppn='gertjan', authority='local', group='auth'), dict(eppn='lisa', authority='local', group='office'), dict(eppn='dirk', authority='local', group='system'), ] inGroups = [dict(tuple(ig.items())+(('_id', ObjectId()),)) for ig in inGroups] allData['user'] = existingUsers allData['group'] = inGroups allFields['user'] = dict( _id=('id', 1), eppn=('string', 1), email=('email', 1), mayLogin=('bool', 1), authority=('string', 1), firstName=('string', 1), lastName=('string', 1), ) allFields['group'] = dict( _id=('id', 1), eppn=('string', 1), authority=('string', 1), group=('string', 1), ) uidMapping.update(idMapping)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
1deeb048623e57d844bcad324c79b53f
Related tables
def relTables(): def norm(x): return x.strip().lower() relIndex = dict() for mt in sorted(VALUE_LISTS): rows = allData[mt] for f in sorted(VALUE_LISTS[mt]): comps = f.split(':') if len(comps) == 2: (f, fAs) = comps else: fAs = f relInfo = valueDict[fAs] if not fAs in relIndex: idMapping = dict((i, ObjectId()) for i in relInfo) allData[fAs] = [dict(_id=idMapping[i], value=v) for (i, v) in relInfo.items()] relIndex[fAs] = dict((norm(v), (idMapping[i], v)) for (i, v) in relInfo.items()) for row in rows: newValue = [] for v in row[f]: rnv = norm(v) (i, nv) = relIndex[fAs].get(rnv, ("-1", None)) if nv == None: target = MOVE_MISSING[mt] if target not in row: row[target] = [''] row[target][0] += '\nMOVED FROM {}: {}'.format(f, v) else: newValue.append(dict(_id=i, value=nv)) row[f] = newValue
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
3d661698622acdadd6c8442cdad692aa
Test tweaks Tweaks for testing purposes.
def testTweaks(): mt = 'contrib' myContribs = {'3DHOP', 'AAI'} my = uidMapping['dirk'] for row in allData[mt]: title = row.get('title', [None]) if len(title) == 0: title = [None] if title[0] in myContribs: row['creator'] = [dict(_id=my)]
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
22a0dda7061c11ff888e0710150eaecd
Insert into MongoDB
def importMongo(): client = MongoClient() client.drop_database('dariah') db = client.dariah for (mt, rows) in allData.items(): info(mt) db[mt].insert_many(rows)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
621393b433f40b7c442688a2b6482299
The whole pipeline
money_warnings = {} money_notes = {} valueDict = dict() rawFields = dict() allFields = dict() rawData = dict() allData = dict() uidMapping = dict() parser = etree.XMLParser(remove_blank_text=True, ns_clean=True) readFmFields() readFmData() readLists() moveFields() countryTable() userTable() relTables() testTweaks() importMongo() #showData() #showMoney()
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
2b5ad80b7ee3dc5291a28af7fb08db5f
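After the pipeline has run, a quick sanity check of what landed in MongoDB; the collection names below are the main tables created above (value-list collections are left out for brevity), and the counting style is chosen to work across pymongo versions.
from pymongo import MongoClient

client = MongoClient()
db = client.dariah
for name in ('contrib', 'assessment', 'country', 'user', 'group'):
    n_docs = len(list(db[name].find({}, {'_id': 1})))  # portable way to count documents
    print('{:<12} {:>5} documents'.format(name, n_docs))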
To move the data to another MongoDB installation, dump the dariah database here with mongodump -d dariah -o dariah and restore it elsewhere with mongorestore --drop -d dariah dariah
valueDict.keys()
valueDict['keyword']
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
c0d29bcdaa45f9ab3c2fee79f61991c0
Exploration The process has finished, but here is space to explore the data, in order to find patterns, regularities, and, more importantly, irregularities. First step: export the raw data and the converted data to Excel workbooks, one worksheet per table.
import xlsxwriter EXPORT_DIR = os.path.expanduser('~/Downloads') EXPORT_ORIG = '{}/contribFromFileMaker.xlsx'.format(EXPORT_DIR) EXPORT_MONGO = '{}/contribInMongoDB.xlsx'.format(EXPORT_DIR) workbook = xlsxwriter.Workbook(EXPORT_ORIG, {'strings_to_urls': False}) for mt in rawData: worksheet = workbook.add_worksheet(mt) for (f, field) in enumerate(rawFields[mt]): worksheet.write(0, f, field) for (r, row) in enumerate(rawData[mt]): for (f, field) in enumerate(rawFields[mt]): val = row[field] val = [] if val == None else val if type(val) is list else [val] val = '|'.join(val) worksheet.write(r+1, f, val) workbook.close() workbook = xlsxwriter.Workbook(EXPORT_MONGO, {'strings_to_urls': False}) for mt in allData: worksheet = workbook.add_worksheet(mt) fields = sorted(allFields[mt]) for (f, field) in enumerate(fields): worksheet.write(0, f, field) for (r, row) in enumerate(allData[mt]): for (f, field) in enumerate(fields): fmt = None val = row.get(field, []) (ftype, fmult) = allFields[mt][field] val = [] if val == None else [val] if type(val) is not list else val exportVal = [] for v in val: if type(v) is dict: exportVal.append(','.join(str(vv) for vv in v.values())) elif ftype == 'date' or ftype == 'datetime': exportVal.append(v if type(v) is str else v.isoformat()) else: exportVal.append(str(v)) worksheet.write(r+1, f, ' | '.join(exportVal)) workbook.close() showFields() client = MongoClient() dbm = client.dariah for d in dbm.contrib.find({'title': '3DHOP'}).limit(2): print('=' * 50) for f in sorted(d): print('{}={}'.format(f, d[f]))
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
395f577037a8191b3fd18fcf550d8259
Here is a query to get all 'typeContribution' values (the former type_of_inkind field) for contributions.
for c in dbm.contrib.distinct('typeContribution', {}): print(c)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
d9fe22a609a0873449ccd2f68e0723c8
Here are the users:
for c in dbm.user.find({}): print(c)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
93da8e4f368bcdc03efbd4406cd296e0
Here are the countries:
for c in dbm.country.find({'isMember': True}): print(c) for c in dbm.contrib.distinct('country', {}): print(c)
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
Dans-labs/dariah
mit
208df5c45278228c784cded57b32dc81