Choosing the right model and learning algorithm
# create an error calculation function
def error(f, x, y):
    return np.sum((f(x) - y)**2)
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
6ed660327c1f96990cc7aa4f45ca3dc6
Linear 1-d model
# SciPy's polyfit does the fitting for us
fp1, residuals, rank, sv, rcond = sp.polyfit(X, y, 1, full=True)
print(fp1)
print(residuals)

# generate the order-1 (linear) model function
f1 = sp.poly1d(fp1)

# check the error
print("Error : ", error(f1, X, y))

x1 = np.array([-100, np.max(X)+100])
y1 = f1(x1)
ax.plot(x1, y1, c='g', linewidth=2)
ax.legend(["data", "d = %i" % f1.order], loc='best')
fig
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
65979fcf8de36d260acbf52e14bcbc90
$$ f(x) = 2.59619213 * x + 989.02487106 $$

Polynomial 2-d model
# fit an order-2 polynomial with SciPy's polyfit
fp2 = sp.polyfit(X, y, 2)
print(fp2)

# generate the order-2 model function
f2 = sp.poly1d(fp2)

# check the error
print("Error : ", error(f2, X, y))

x1 = np.linspace(-100, np.max(X)+100, 2000)
y2 = f2(x1)
ax.plot(x1, y2, c='r', linewidth=2)
ax.legend(["data", "d = %i" % f1.order, "d = %i" % f2.order], loc='best')
fig
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
a0bfdd65767188b9781bf828c21ddc3f
$$ f(x) = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 $$ What if we want to fit two regression lines instead of one? As we can see in the graph, there is a steep change in the data between weeks 3 and 4, so let's draw two lines: one for the data between week 0 and week 3.5, and a second for week 3.5 to week 5.
# we are going to divide the data on time, so
div = 3.5 * 7 * 24
X1 = X[X <= div]
Y1 = y[X <= div]
X2 = X[X > div]
Y2 = y[X > div]

# now fit and plot both pieces
fa = sp.poly1d(sp.polyfit(X1, Y1, 1))
fb = sp.poly1d(sp.polyfit(X2, Y2, 1))

fa_error = error(fa, X1, Y1)
fb_error = error(fb, X2, Y2)
print("Error inflection = %f" % (fa_error + fb_error))

x1 = np.linspace(-100, X1[-1]+100, 1000)
x2 = np.linspace(X1[-10], X2[-1]+100, 1000)
ya = fa(x1)
yb = fb(x2)

ax.plot(x1, ya, c='#800000', linewidth=2)  # brown
ax.plot(x2, yb, c='#FFA500', linewidth=2)  # orange
ax.grid(True)
fig
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
a7625f6297ae0e3fad562d83831d1244
Suppose we decide that the degree-2 polynomial is the best fit for our data and we want to predict when, if everything stays the same, we will hit a count of 100,000. $$ 0 = f(x) - 100000 = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 - 100000 $$ SciPy's optimize module has the function fsolve that achieves this when provided an initial starting position with the parameter x0. As every entry in our input data file corresponds to one hour, and we have 743 of them, we set the starting position to some value after that. Here f2 is our winning polynomial of degree 2.
print(f2)
print(f2 - 100000)

from scipy.optimize import fsolve

reached_max = fsolve(f2 - 100000, x0=800) / (7 * 24)
print("100,000 hits/hour expected at week %f" % reached_max[0])
BMLSwPython/01_GettingStarted_withPython.ipynb
atulsingh0/MachineLearning
gpl-3.0
ed90aafa0b6b99c65c9d0be17ba70899
datacleaning The datacleaning module is used to clean and organize the data into 51 CSV files corresponding to the 50 US states and the District of Columbia. The wrapping function clean_all_data takes all the data sets as input and sorts the data into one CSV file per state. The CSVs are stored in the Cleaned Data directory, which is under the Data directory.
data_cleaning.clean_all_data()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
19a244e6e4e46c0b7739a807332cb756
missing_data The missing_data module is used to estimate the missing GDP data (from 1960 to 1962) and determine the values of the predictors (from 2016 to 2020). The wrapping function predict_all takes the CSV files of the states as input and stores the predicted missing values in the same CSV files. The generated CSVs replace the previous CSV files in the Cleaned Data directory, which is under the Data directory.
missing_data.predict_all()
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
834850ec004b9a309acaa0e43bf45d4e
ridge_prediction The ridge_prediction module is used to predict the future values (2016 to 2020) of energy sources such as wind, solar, hydro, and nuclear energy using ridge regression. The wrapping function ridge_predict_all takes the CSV files of the states as input and stores the predicted values in another CSV file in the Ridge Regression folder under the Predicted Data directory.
ridge_prediction.ridge_predict_all()
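For orientation, here is a minimal sketch of the kind of per-state ridge-regression forecast this module wraps. The file path and the column names (YEAR, GDP, SolarC) are assumptions made for illustration only, not the package's actual internals.

import pandas as pd
from sklearn.linear_model import Ridge

# hypothetical: fit ridge regression on one state's historical data and
# predict solar generation for 2016-2020 from GDP (assumed column names)
df = pd.read_csv('Data/Cleaned Data/Washington.csv')   # assumed path
train = df[df['YEAR'] <= 2015]
future = df[(df['YEAR'] >= 2016) & (df['YEAR'] <= 2020)]

model = Ridge(alpha=1.0)
model.fit(train[['GDP']], train['SolarC'])
print(model.predict(future[['GDP']]))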
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
25d10c0e81e0081fb194a2382c2123a4
svr_prediction The svr_prediction module is used to predict the future values (2016 to 2020) of energy sources such as wind, solar, hydro, and nuclear energy using Support Vector Regression (SVR). The wrapping function SVR_predict_all takes the CSV files of the states as input and stores the predicted values in another CSV file in the SVR folder under the Predicted Data directory.
svr_prediction.SVR_predict_all()
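As with the ridge sketch above, a minimal illustration of the SVR variant; it reuses the assumed train and future frames and column names from that sketch, which are hypothetical, not the module's real internals.

from sklearn.svm import SVR

# hypothetical: same flow as the ridge sketch, swapping in an RBF-kernel SVR
svr_model = SVR(kernel='rbf', C=100.0)
svr_model.fit(train[['GDP']], train['SolarC'])
print(svr_model.predict(future[['GDP']]))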
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
7576af95fdbb23c7ab943d7146fedd4b
plots Visualization is done using the Tableau software. The Tableau workbook for the predicted data is included in the repository. The Tableau dashboard created for this data is illustrated below:
%%HTML <div class='tableauPlaceholder' id='viz1489609724011' style='position: relative'><noscript><a href='#'><img alt='Clean Energy Production in the contiguous United States(in million kWh) ' src='https:&#47;&#47;public.tableau.com&#47;static&#47;images&#47;PB&#47;PB87S38NW&#47;1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='path' value='shared&#47;PB87S38NW' /> <param name='toolbar' value='yes' /><param name='static_image' value='https:&#47;&#47;public.tableau.com&#47;static&#47;images&#47;PB&#47;PB87S38NW&#47;1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1489609724011'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='1004px';vizElement.style.height='869px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
examples/Demo.ipynb
uwkejia/Clean-Energy-Outlook
mit
df90050c55868bafe641af42f1c25107
Visualize source leakage among labels using a circular graph This example computes all-to-all pairwise leakage among 68 regions in source space based on MNE inverse solutions and a FreeSurfer cortical parcellation. Label-to-label leakage is estimated as the correlation among the labels' point-spread functions (PSFs). It is visualized using a circular graph which is ordered based on the locations of the regions in the axial plane.
# Authors: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#          Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#          Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.minimum_norm import (read_inverse_operator,
                              make_inverse_resolution_matrix,
                              get_point_spread)
from mne.viz import circular_layout, plot_connectivity_circle

print(__doc__)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
3961b40e182867d0c2ea4e6cbf10639e
Load forward solution and inverse operator We need a matching forward solution and inverse operator to compute resolution matrices for different methods.
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif'
forward = mne.read_forward_solution(fname_fwd)
# Convert forward solution to fixed source orientations
mne.convert_forward_solution(
    forward, surf_ori=True, force_fixed=True, copy=False)
inverse_operator = read_inverse_operator(fname_inv)

# Compute resolution matrices for MNE
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
                                        method='MNE', lambda2=1. / 3.**2)
src = inverse_operator['src']
del forward, inverse_operator  # save memory
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
06d12fd5d8c9462d6af03e0bd5b3deb1
Read and organise labels for cortical parcellation Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels = mne.read_labels_from_annot('sample', parc='aparc',
                                    subjects_dir=subjects_dir)
n_labels = len(labels)
label_colors = [label.color for label in labels]

# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_labels = [name for name in label_names if name.endswith('lh')]

# Get the y-location of the label
label_ypos = list()
for name in lh_labels:
    idx = label_names.index(name)
    ypos = np.mean(labels[idx].pos[:, 1])
    label_ypos.append(ypos)

# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]

# For the right hemi
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
494da79986ead2e50421d3ff04aafc9d
Compute point-spread function summaries (PCA) for all labels We summarise the PSFs per label by their first five principal components, and use the first component to evaluate label-to-label leakage below.
# Compute first PCA component across PSFs within labels.
# Note the differences in explained variance, probably due to different
# spatial extents of labels.
n_comp = 5
stcs_psf_mne, pca_vars_mne = get_point_spread(
    rm_mne, src, labels, mode='pca', n_comp=n_comp, norm=None,
    return_pca_vars=True)
n_verts = rm_mne.shape[0]
del rm_mne
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
3cfc6b058f911eec3d5af31522a03869
We can show the explained variances of principal components per label. Note how they differ across labels, most likely due to their varying spatial extent.
with np.printoptions(precision=1):
    for [name, var] in zip(label_names, pca_vars_mne):
        print(f'{name}: {var.sum():.1f}% {var}')
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
b361a8c1d98d0c50a6530b60be793a7f
The output shows the summed variance explained by the first five principal components as well as the explained variances of the individual components. Evaluate leakage based on label-to-label PSF correlations Note that correlations ignore the overall amplitude of PSFs, i.e. they do not show which region will potentially be the bigger "leaker".
# get PSFs from Source Estimate objects into matrix
psfs_mat = np.zeros([n_labels, n_verts])
# Leakage matrix for MNE, get first principal component per label
for [i, s] in enumerate(stcs_psf_mne):
    psfs_mat[i, :] = s.data[:, 0]
# Compute label-to-label leakage as Pearson correlation of PSFs
# Sign of correlation is arbitrary, so take absolute values
leakage_mne = np.abs(np.corrcoef(psfs_mat))

# Save the plot order and create a circular layout
node_order = lh_labels[::-1] + rh_labels  # mirror label order across hemis
node_angles = circular_layout(label_names, node_order, start_pos=90,
                              group_boundaries=[0, len(label_names) / 2])

# Plot the graph using node colors from the FreeSurfer parcellation.
# We only show the 200 strongest connections.
fig = plt.figure(num=None, figsize=(8, 8), facecolor='black')
plot_connectivity_circle(leakage_mne, label_names, n_lines=200,
                         node_angles=node_angles, node_colors=label_colors,
                         title='MNE Leakage', fig=fig)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
4b9319d572033046768c64686f5d4ccf
Most leakage occurs for neighbouring regions, but also for deeper regions across hemispheres. Save the figure (optional) Matplotlib controls figure facecolor separately for interactive display versus for saved figures. Thus when saving you must specify facecolor, else your labels, title, etc will not be visible::

>>> fname_fig = data_path + '/MEG/sample/plot_label_leakage.png'
>>> fig.savefig(fname_fig, facecolor='black')

Plot PSFs for individual labels Let us confirm for left and right lateral occipital lobes that there is indeed no leakage between them, as indicated by the correlation graph. We can plot the summary PSFs for both labels to examine the spatial extent of their leakage.
# left and right lateral occipital
idx = [22, 23]
stc_lh = stcs_psf_mne[idx[0]]
stc_rh = stcs_psf_mne[idx[1]]

# Maximum for scaling across plots
max_val = np.max([stc_lh.data, stc_rh.data])
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
bb40cbb1d2eab7e954e249f60e06c947
Point-spread function for the lateral occipital label in the left hemisphere
brain_lh = stc_lh.plot(subjects_dir=subjects_dir, subject='sample',
                       hemi='both', views='caudal',
                       clim=dict(kind='value',
                                 pos_lims=(0, max_val / 2., max_val)))
brain_lh.add_text(0.1, 0.9, label_names[idx[0]], 'title', font_size=16)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
a80c2c2c22b6eae541bd071f264b8eb8
and in the right hemisphere.
brain_rh = stc_rh.plot(subjects_dir=subjects_dir, subject='sample',
                       hemi='both', views='caudal',
                       clim=dict(kind='value',
                                 pos_lims=(0, max_val / 2., max_val)))
brain_rh.add_text(0.1, 0.9, label_names[idx[1]], 'title', font_size=16)
0.23/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
c0fbe1f37be111bec00f6332e15d7b9c
DTensor Concepts <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/dtensor_overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/dtensor_overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/dtensor_overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/dtensor_overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This colab introduces DTensor, an extension to TensorFlow for synchronous distributed computing. DTensor provides a global programming model that allows developers to compose applications that operate on Tensors globally while managing the distribution across devices internally. DTensor distributes the program and tensors according to the sharding directives through a procedure called Single program, multiple data (SPMD) expansion. By decoupling the application from sharding directives, DTensor enables running the same application on a single device, multiple devices, or even multiple clients, while preserving its global semantics. This guide introduces DTensor concepts for distributed computing, and how DTensor integrates with TensorFlow. To see a demo of using DTensor in model training, see Distributed training with DTensor tutorial. Setup DTensor is part of TensorFlow 2.9.0 release, and also included in the TensorFlow nightly builds since 04/09/2022.
!pip install --quiet --upgrade --pre tensorflow
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
03b1157b60a73ed82c14765923243f47
Once installed, import tensorflow and tf.experimental.dtensor. Then configure TensorFlow to use 6 virtual CPUs. Even though this example uses vCPUs, DTensor works the same way on CPU, GPU or TPU devices.
import tensorflow as tf
from tensorflow.experimental import dtensor

print('TensorFlow version:', tf.__version__)

def configure_virtual_cpus(ncpu):
  phy_devices = tf.config.list_physical_devices('CPU')
  tf.config.set_logical_device_configuration(phy_devices[0], [
        tf.config.LogicalDeviceConfiguration(),
    ] * ncpu)

configure_virtual_cpus(6)
DEVICES = [f'CPU:{i}' for i in range(6)]

tf.config.list_logical_devices('CPU')
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
59cff9c7773e75b1fca9ec30df0089c9
DTensor's model of distributed tensors DTensor introduces two concepts: dtensor.Mesh and dtensor.Layout. They are abstractions to model the sharding of tensors across topologically related devices. Mesh defines the device list for computation. Layout defines how to shard a Tensor dimension on a Mesh. Mesh Mesh represents a logical Cartesian topology of a set of devices. Each dimension of the Cartesian grid is called a Mesh dimension, and is referred to by a name. Names of mesh dimensions within the same Mesh must be unique. Names of mesh dimensions are referenced by Layout to describe the sharding behavior of a tf.Tensor along each of its axes. This is described in more detail later in the section on Layout. Mesh can be thought of as a multi-dimensional array of devices. In a 1-dimensional Mesh, all devices form a list in a single mesh dimension. The following example uses dtensor.create_mesh to create a mesh from 6 CPU devices along a mesh dimension 'x' with a size of 6 devices: <img src="https://www.tensorflow.org/images/dtensor/dtensor_mesh_1d.png" alt="A 1 dimensional mesh with 6 CPUs" class="no-filter">
mesh_1d = dtensor.create_mesh([('x', 6)], devices=DEVICES)
print(mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
2490996c84f7d212a378976c7c5b4d5e
A Mesh can be multi dimensional as well. In the following example, 6 CPU devices form a 3x2 mesh, where the 'x' mesh dimension has a size of 3 devices, and the 'y' mesh dimension has a size of 2 devices: <img src="https://www.tensorflow.org/images/dtensor/dtensor_mesh_2d.png" alt="A 2 dimensional mesh with 6 CPUs" class="no-filter">
mesh_2d = dtensor.create_mesh([('x', 3), ('y', 2)], devices=DEVICES)
print(mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
5fc9c33d46c45778fea336e57e4882f4
Layout Layout specifies how a tensor is distributed, or sharded, on a Mesh. Note: In order to avoid confusion between Mesh and Layout, the term dimension is always associated with Mesh, and the term axis with Tensor and Layout in this guide. The rank of a Layout should be the same as the rank of the Tensor to which the Layout is applied. For each of the Tensor's axes the Layout may specify a mesh dimension to shard the tensor across, or specify the axis as "unsharded". The tensor is replicated across any mesh dimensions that it is not sharded across. The rank of a Layout and the number of dimensions of a Mesh do not need to match. The unsharded axes of a Layout do not need to be associated with a mesh dimension, and unsharded mesh dimensions do not need to be associated with a layout axis. <img src="https://www.tensorflow.org/images/dtensor/dtensor_components_diag.png" alt="Diagram of dtensor components." class="no-filter"> Let's analyze a few examples of Layout for the Meshes created in the previous section. On a 1-dimensional mesh such as [("x", 6)] (mesh_1d in the previous section), Layout(["unsharded", "unsharded"], mesh_1d) is a layout for a rank-2 tensor replicated across 6 devices. <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_replicated.png" alt="A tensor replicated across a rank-1 mesh" class="no-filter">
layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
05f781507b9b85467c7aaf5aa26d94a4
Using the same tensor and mesh the layout Layout(['unsharded', 'x']) would shard the second axis of the tensor across the 6 devices. <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_rank1.png" alt="A tensor sharded across a rank-1 mesh" class="no-filter">
layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh_1d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
f03ce55633a392855c7ca4bc77f4a53b
Given a 2-dimensional 3x2 mesh such as [("x", 3), ("y", 2)] (mesh_2d from the previous section), Layout(["y", "x"], mesh_2d) is a layout for a rank-2 Tensor whose first axis is sharded across mesh dimension "y", and whose second axis is sharded across mesh dimension "x". <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_rank2.png" alt="A tensor with its first axis sharded across mesh dimension 'y' and its second axis sharded across mesh dimension 'x'" class="no-filter">
layout = dtensor.Layout(['y', 'x'], mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
6b69bb6d61d709bd622389e4f23c5724
For the same mesh_2d, the layout Layout(["x", dtensor.UNSHARDED], mesh_2d) is a layout for a rank-2 Tensor that is replicated across "y", and whose first axis is sharded on mesh dimension x. <img src="https://www.tensorflow.org/images/dtensor/dtensor_layout_hybrid.png" alt="A tensor replicated across mesh dimension y, with its first axis sharded across mesh dimension 'x'" class="no-filter">
layout = dtensor.Layout(["x", dtensor.UNSHARDED], mesh_2d)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
c5439787581a807f3e41e0ec8053fc57
Single-Client and Multi-Client Applications DTensor supports both single-client and multi-client applications. The colab Python kernel is an example of a single-client DTensor application, where there is a single Python process. In a multi-client DTensor application, multiple Python processes collectively perform as a coherent application. The Cartesian grid of a Mesh in a multi-client DTensor application can span across devices regardless of whether they are attached locally to the current client or attached remotely to another client. The set of all devices used by a Mesh is called the global device list. The creation of a Mesh in a multi-client DTensor application is a collective operation where the global device list is identical for all of the participating clients, and the creation of the Mesh serves as a global barrier. During Mesh creation, each client provides its local device list together with the expected global device list. DTensor validates that both lists are consistent. Please refer to the API documentation for dtensor.create_mesh and dtensor.create_distributed_mesh for more information on multi-client mesh creation and the global device list. Single-client can be thought of as a special case of multi-client, with 1 client. In a single-client application, the global device list is identical to the local device list. DTensor as a sharded tensor Now let's start coding with DTensor. The helper function, dtensor_from_array, demonstrates creating DTensors from something that looks like a tf.Tensor. The function performs 2 steps: - Replicates the tensor to every device on the mesh. - Shards the copy according to the layout requested in its arguments.
def dtensor_from_array(arr, layout, shape=None, dtype=None):
  """Create a DTensor from something that looks like an array or Tensor.

  This function is convenient for quick doodling of DTensors from a known,
  unsharded data object in a single-client environment. This is not the most
  efficient way of creating a DTensor, but it will do for this tutorial.
  """
  if shape is not None or dtype is not None:
    arr = tf.constant(arr, shape=shape, dtype=dtype)

  # replicate the input to the mesh
  a = dtensor.copy_to_mesh(arr,
          layout=dtensor.Layout.replicated(layout.mesh, rank=layout.rank))
  # shard the copy to the desirable layout
  return dtensor.relayout(a, layout=layout)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
b257994a34295752fb75d5166f1b6657
Anatomy of a DTensor A DTensor is a tf.Tensor object, but augmented with the Layout annotation that defines its sharding behavior. A DTensor consists of the following: Global tensor meta-data, including the global shape and dtype of the tensor. A Layout, which defines the Mesh the Tensor belongs to, and how the Tensor is sharded onto the Mesh. A list of component tensors, one item per local device in the Mesh. With dtensor_from_array, you can create your first DTensor, my_first_dtensor, and examine its contents.
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES)
layout = dtensor.Layout([dtensor.UNSHARDED], mesh)

my_first_dtensor = dtensor_from_array([0, 1], layout)

# Examine the dtensor content
print(my_first_dtensor)
print("global shape:", my_first_dtensor.shape)
print("dtype:", my_first_dtensor.dtype)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
cc9f93da1e75d0909750bcaf7684a186
Layout and fetch_layout The layout of a DTensor is not a regular attribute of tf.Tensor. Instead, DTensor provides a function, dtensor.fetch_layout to access the layout of a DTensor.
print(dtensor.fetch_layout(my_first_dtensor))
assert layout == dtensor.fetch_layout(my_first_dtensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
5ef53d86a783f23ceee052cdc005937f
Component tensors, pack and unpack A DTensor consists of a list of component tensors. The component tensor for a device in the Mesh is the Tensor object representing the piece of the global DTensor that is stored on this device. A DTensor can be unpacked into component tensors through dtensor.unpack. You can make use of dtensor.unpack to inspect the components of the DTensor, and confirm they are on all devices of the Mesh. Note that the positions of component tensors in the global view may overlap each other. For example, in the case of a fully replicated layout, all components are identical replicas of the global tensor.
for component_tensor in dtensor.unpack(my_first_dtensor):
  print("Device:", component_tensor.device, ",", component_tensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
5270d905a69a9b6382f1aef8eaa094e1
As shown, my_first_dtensor is a tensor of [0, 1] replicated to all 6 devices. The inverse operation of dtensor.unpack is dtensor.pack. Component tensors can be packed back into a DTensor. The components must have the same rank and dtype, which will be the rank and dtype of the returned DTensor. However, there is no strict requirement on the device placement of component tensors as inputs of dtensor.pack: the function will automatically copy the component tensors to their respective corresponding devices.
packed_dtensor = dtensor.pack(
    [[0, 1], [0, 1], [0, 1],
     [0, 1], [0, 1], [0, 1]],
    layout=layout
)
print(packed_dtensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
350e560e7a63722c489e6459cf504980
Sharding a DTensor to a Mesh So far you've worked with the my_first_dtensor, which is a rank-1 DTensor fully replicated across a dim-1 Mesh. Next, create and inspect DTensors that are sharded across a dim-2 Mesh. The next example does this with a 3x2 Mesh on 6 CPU devices, where the size of mesh dimension 'x' is 3 devices, and the size of mesh dimension 'y' is 2 devices.
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
5e7d74af116c927a9845eb86c6acb45a
Fully sharded rank-2 Tensor on a dim-2 Mesh Create a 3x2 rank-2 DTensor, sharding its first axis along the 'x' mesh dimension, and its second axis along the 'y' mesh dimension. Because the tensor shape equals the mesh dimension along all of the sharded axes, each device receives a single element of the DTensor. The rank of the component tensor is always the same as the rank of the global shape. DTensor adopts this convention as a simple way to preserve information for locating the relation between a component tensor and the global DTensor.
fully_sharded_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout(["x", "y"], mesh))

for raw_component in dtensor.unpack(fully_sharded_dtensor):
  print("Device:", raw_component.device, ",", raw_component)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
d9fb7d6fff125a98c2af3cbc1b9ea653
Fully replicated rank-2 Tensor on a dim-2 Mesh For comparison, create a 3x2 rank-2 DTensor, fully replicated to the same dim-2 Mesh. Because the DTensor is fully replicated, each device receives a full replica of the 3x2 DTensor. The rank of the component tensors is the same as the rank of the global shape -- this fact is trivial, because in this case, the shape of the component tensors is the same as the global shape anyway.
fully_replicated_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh))
# Or, layout=dtensor.Layout.fully_replicated(mesh, rank=2)

for component_tensor in dtensor.unpack(fully_replicated_dtensor):
  print("Device:", component_tensor.device, ",", component_tensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
3efbefa9ccede1030fa542df20e8ed20
Hybrid rank-2 Tensor on a dim-2 Mesh What about somewhere between fully sharded and fully replicated? DTensor allows a Layout to be a hybrid, sharded along some axes, but replicated along others. For example, you can shard the same 3x2 rank-2 DTensor in the following way: 1st axis sharded along the 'x' mesh dimension. 2nd axis replicated along the 'y' mesh dimension. To achieve this sharding scheme, you just need to replace the sharding spec of the 2nd axis from 'y' to dtensor.UNSHARDED, to indicate your intention of replicating along the 2nd axis. The layout object will look like Layout(['x', dtensor.UNSHARDED], mesh).
hybrid_sharded_dtensor = dtensor_from_array(
    tf.reshape(tf.range(6), (3, 2)),
    layout=dtensor.Layout(['x', dtensor.UNSHARDED], mesh))

for component_tensor in dtensor.unpack(hybrid_sharded_dtensor):
  print("Device:", component_tensor.device, ",", component_tensor)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
d2a77ad347f206c6a5e9176ec85d413e
You can inspect the component tensors of the created DTensor and verify they are indeed sharded according to your scheme. It may be helpful to illustrate the situation with a chart: <img src="https://www.tensorflow.org/images/dtensor/dtensor_hybrid_mesh.png" alt="A 3x2 hybrid mesh with 6 CPUs" class="no-filter" width=75%> Tensor.numpy() and sharded DTensor Be aware that calling the .numpy() method on a sharded DTensor raises an error. The rationale for erroring is to protect against unintended gathering of data from multiple computing devices to the host CPU device backing the returned numpy array.
print(fully_replicated_dtensor.numpy())

try:
  fully_sharded_dtensor.numpy()
except tf.errors.UnimplementedError:
  print("got an error as expected for fully_sharded_dtensor")

try:
  hybrid_sharded_dtensor.numpy()
except tf.errors.UnimplementedError:
  print("got an error as expected for hybrid_sharded_dtensor")
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
e5de386ac083ccbe11d9d74e25e06d94
TensorFlow API on DTensor DTensor strives to be a drop-in replacement for tensor in your program. The TensorFlow Python APIs that consume tf.Tensor, such as the Ops library functions, tf.function, and tf.GradientTape, also work with DTensor. To accomplish this, for each TensorFlow Graph, DTensor produces and executes an equivalent SPMD graph in a procedure called SPMD expansion. A few critical steps in DTensor SPMD expansion are: Propagating the sharding Layout of DTensor in the TensorFlow graph Rewriting TensorFlow Ops on the global DTensor with equivalent TensorFlow Ops on the component tensors, inserting collective and communication Ops when necessary Lowering backend-neutral TensorFlow Ops to backend-specific TensorFlow Ops. The final result is that DTensor is a drop-in replacement for Tensor. Note: DTensor is still an experimental API which means you will be exploring and pushing the boundaries and limits of the DTensor programming model. There are 2 ways of triggering DTensor execution: - DTensor as operands of a Python function, e.g. tf.matmul(a, b) will run through DTensor if a, b, or both are DTensors. - Requesting the result of a Python function to be a DTensor, e.g. dtensor.call_with_layout(tf.ones, layout, shape=(3, 2)) will run through DTensor because we requested the output of tf.ones to be sharded according to a layout. DTensor as Operands Many TensorFlow API functions take tf.Tensor as their operands, and return tf.Tensor as their results. For these functions, you can express the intention to run a function through DTensor by passing in DTensor as operands. This section uses tf.matmul(a, b) as an example. Fully replicated input and output In this case, the DTensors are fully replicated. On each of the devices of the Mesh, - the component tensor for operand a is [[1, 2, 3], [4, 5, 6]] (2x3) - the component tensor for operand b is [[6, 5], [4, 3], [2, 1]] (3x2) - the computation consists of a single MatMul of (2x3, 3x2) -> 2x2, - the component tensor for result c is [[20, 14], [56, 41]] (2x2) Total number of floating point mul operations is 6 devices * 4 results * 3 muls = 72.
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES)
layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)

a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=layout)
b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=layout)

c = tf.matmul(a, b)  # runs 6 identical matmuls in parallel on 6 devices

# `c` is a DTensor replicated on all devices (same as `a` and `b`)
print('Sharding spec:', dtensor.fetch_layout(c).sharding_specs)
print("components:")
for component_tensor in dtensor.unpack(c):
  print(component_tensor.device, component_tensor.numpy())
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
d1a280812e296d3c7a2cab1f240af309
Sharding operands along the contracted axis You can reduce the amount of computation per device by sharding the operands a and b. A popular sharding scheme for tf.matmul is to shard the operands along the axis of the contraction, which means sharding a along the second axis, and b along the first axis. The global matrix product sharded under this scheme can be performed efficiently, by local matmuls that run concurrently, followed by a collective reduction to aggregate the local results. This is also the canonical way of implementing a distributed matrix dot product. Total number of floating point mul operations is 6 devices * 4 results * 1 mul = 24, a factor of 3 reduction compared to the fully replicated case (72) above. The factor of 3 is due to the sharding along the x mesh dimension with a size of 3 devices. The reduction of the number of operations run sequentially is the main mechanism with which synchronous model parallelism accelerates training.
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
a_layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh)
a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=a_layout)
b_layout = dtensor.Layout(['x', dtensor.UNSHARDED], mesh)
b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=b_layout)

c = tf.matmul(a, b)
# `c` is a DTensor replicated on all devices (same as `a` and `b`)
print('Sharding spec:', dtensor.fetch_layout(c).sharding_specs)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
85dd0cc269a7241bd6ce5b7ead3bdd6c
Additional Sharding You can perform additional sharding on the inputs, and it is appropriately carried over to the results. For example, you can apply additional sharding of operand a along its first axis to the 'y' mesh dimension. The additional sharding will be carried over to the first axis of the result c. Total number of floating point mul operations is 6 devices * 2 results * 1 mul = 12, an additional factor of 2 reduction compared to the case (24) above. The factor of 2 is due to the sharding along the y mesh dimension with a size of 2 devices.
mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)

a_layout = dtensor.Layout(['y', 'x'], mesh)
a = dtensor_from_array([[1, 2, 3], [4, 5, 6]], layout=a_layout)
b_layout = dtensor.Layout(['x', dtensor.UNSHARDED], mesh)
b = dtensor_from_array([[6, 5], [4, 3], [2, 1]], layout=b_layout)

c = tf.matmul(a, b)
# The sharding of `a` on the first axis is carried to `c`
print('Sharding spec:', dtensor.fetch_layout(c).sharding_specs)
print("components:")
for component_tensor in dtensor.unpack(c):
  print(component_tensor.device, component_tensor.numpy())
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
53944aa6f977a01c437196e8260a28b0
DTensor as Output What about Python functions that do not take operands, but return a Tensor result that can be sharded? Examples of such functions are tf.ones, tf.zeros, and tf.random.stateless_normal. For these Python functions, DTensor provides dtensor.call_with_layout, which eagerly executes a Python function with DTensor, and ensures that the returned Tensor is a DTensor with the requested Layout.
help(dtensor.call_with_layout)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
417d7da0459b730fe169d21702284301
The eagerly executed Python function usually contains only a single non-trivial TensorFlow Op. To use a Python function that emits multiple TensorFlow Ops with dtensor.call_with_layout, the function should be converted to a tf.function. Calling a tf.function is a single TensorFlow Op. When the tf.function is called, DTensor can perform layout propagation when it analyzes the computing graph of the tf.function, before any of the intermediate tensors are materialized. APIs that emit a single TensorFlow Op If a function emits a single TensorFlow Op, you can directly apply dtensor.call_with_layout to the function.
help(tf.ones)

mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
ones = dtensor.call_with_layout(tf.ones, dtensor.Layout(['x', 'y'], mesh), shape=(6, 4))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
114cf7432522466403afe7c10847a66b
APIs that emit multiple TensorFlow Ops If the API emits multiple TensorFlow Ops, convert the function into a single Op through tf.function. For example, tf.random.stateless_normal.
help(tf.random.stateless_normal)

ones = dtensor.call_with_layout(
    tf.function(tf.random.stateless_normal),
    dtensor.Layout(['x', 'y'], mesh),
    shape=(6, 4),
    seed=(1, 1))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
b46807ae042f484f8bcb7566882a22d2
Wrapping a Python function that emits a single TensorFlow Op with tf.function is allowed. The only caveat is paying the associated cost and complexity of creating a tf.function from a Python function.
ones = dtensor.call_with_layout(
    tf.function(tf.ones),
    dtensor.Layout(['x', 'y'], mesh),
    shape=(6, 4))
print(ones)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
9a86518049e12a36d902104bd0c6ceb8
From tf.Variable to dtensor.DVariable In TensorFlow, tf.Variable is the holder for a mutable Tensor value. With DTensor, the corresponding variable semantics is provided by dtensor.DVariable. The reason a new type, DVariable, was introduced for DTensor variables is that a DVariable has an additional requirement: its layout cannot change from its initial value.
mesh = dtensor.create_mesh([("x", 6)], devices=DEVICES)
layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)

v = dtensor.DVariable(
    initial_value=dtensor.call_with_layout(
        tf.function(tf.random.stateless_normal),
        layout=layout,
        shape=tf.TensorShape([64, 32]),
        seed=[1, 1],
        dtype=tf.float32))

print(v.handle)
assert layout == dtensor.fetch_layout(v)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
1cdfbae2bcd4d682f3adc02d21c03449
Other than the requirement on matching the layout, a DVariable behaves the same as a tf.Variable. For example, you can add a DVariable to a DTensor:
a = dtensor.call_with_layout(tf.ones, layout=layout, shape=(64, 32))
b = v + a  # add DVariable and DTensor
print(b)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
c1beddcf625db80ca9e918df56082d25
You can also assign a DTensor to a DVariable.
v.assign(a)  # assign a DTensor to a DVariable
print(a)
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
3ce565232d01c1644d3781f4909794ba
Attempting to mutate the layout of a DVariable by assigning a DTensor with an incompatible layout produces an error.
# variable's layout is immutable.
another_mesh = dtensor.create_mesh([("x", 3), ("y", 2)], devices=DEVICES)
b = dtensor.call_with_layout(
    tf.ones,
    layout=dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], another_mesh),
    shape=(64, 32))

try:
  v.assign(b)
except:
  print("exception raised")
site/en/guide/dtensor_overview.ipynb
tensorflow/docs
apache-2.0
ebf85085da3fd83912c98f27ad91c4ca
Reading TSV files
CWD = osp.join(osp.expanduser('~'), 'documents', 'grants_projects', 'roberto_projects',
               'guillaume_huguet_CNV', 'File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)

# read the file as raw lines; the tab splitting is done manually below
arr = np.loadtxt(fullfname, dtype='str', comments=None, delimiter='\Tab',
                 converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)

EXPECTED_LINES = 19542
expected_nb_values = EXPECTED_LINES - 1
assert arr.shape[0] == EXPECTED_LINES

line0 = arr[0].split('\t')
print(line0)

danger = 'Pvalue_MMAP_V2_sans_intron_and_Intergenic'
score = 'SCORE'

i_danger = line0.index(danger)
i_score = line0.index(score)
print(i_danger, i_score)

# check that all lines have the same number of tab separated elements
larr = np.asarray([len(arr[i].split('\t')) for i in range(arr.shape[0])])
assert not (larr - larr[0]).any()  # all elements have the same value

dangers = np.asarray([line.split('\t')[i_danger] for line in arr[1:]])
scores = np.asarray([line.split('\t')[i_score] for line in arr[1:]])
# print(np.unique(scores))

assert len(dangers) == expected_nb_values
assert len(scores) == expected_nb_values
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
da0ebe12a888db7e0811e9c9b7a2588a
Transforming the "Pvalue_MMAP_V2_..." column into a danger score. Testing the function danger_score.
assert util._test_danger_score_1()
assert util._test_danger_score()
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
5ca0918f94f135edd1a8e83a6f70f86c
QUESTION for Guillaume: what do the empty strings '' in the column "Pvalue_MMAP_V2_sans_intron_and_Intergenic" (danger) correspond to? Answer: CNVs for which we have no dangerosity information.
""" danger_not_empty = dangers != '' danger_scores = dangers[danger_not_empty] danger_scores = np.asarray([util.danger_score(pstr, util.pH1_with_apriori) for pstr in danger_scores]) """;
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
b768ed54a15885b605d01474ad8b79d6
To be or not to be a CNV: p value from the 'SCORE' column
reload(util)

# get the scores
scores = np.asarray([line.split('\t')[i_score] for line in arr[1:]])
assert len(scores) == expected_nb_values
print(len(np.unique(scores)))

#tmp_score = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0] for s in scores])
assert scores.shape[0] == EXPECTED_LINES - 1

# h = plt.hist(tmp[tmp > sst.scoreatpercentile(tmp, 99)], bins=100)
# h = plt.hist(tmp[tmp < 50], bins=100)

"""
print("# CNV with score == 0.: ", (tmp==0.).sum())
print("# CNV with score >=15 < 17.5 : ", np.logical_and(tmp >= 15., tmp < 17.5).sum())
tmp.max()
""";
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
86821bfdb82479172f8410ed4109a61e
Replace the zero score by the maximum score: cf Guillaume's procedure
scoresf = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0]
                      for s in scores])
print(scoresf.max(), scoresf.min(), (scoresf == 0).sum())

#clean_score = util.process_scores(scores)
#h = plt.hist(clean_score[clean_score < 60], bins=100)
#h = plt.hist(scoresf[scoresf < 60], bins=100)
h = plt.hist(scoresf, bins=100, range=(0, 150))
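For clarity, a minimal sketch of the zero-to-maximum replacement described above; this is an assumption about what util.process_scores does, not its actual implementation.

# hypothetical illustration of the replacement step (assumed behaviour)
cleaned = scoresf.copy()
cleaned[cleaned == 0] = cleaned.max()   # zero scores take the maximum score
print((cleaned == 0).sum())             # no zeros remain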
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
a59e4ecacbfa4eb3f8c94c97bf734e11
Transforms the scores into P(cnv is real)
# Creating a function from score to proba from Guillaume's values
p_cnv = util._build_dict_prob_cnv()
#print(p_cnv.keys())
#print(p_cnv.values())

#### Definition with a piecewise linear function
#score2prob = util.create_score2prob_lin_piecewise(p_cnv)
#scores = np.arange(15, 50, 1)
#probs = [score2prob(sc) for sc in scores]
#plt.plot(scores, probs)

#### Definition with a corrected regression line
score2prob = util.create_score2prob_lin(p_cnv)
#x = np.arange(0, 50, 1)
#plt.plot(x, [score2prob(_) for _ in x], '-', p_cnv.keys(), p_cnv.values(), '+')

p_scores = [score2prob(sc) for sc in clean_score]
assert len(p_scores) == EXPECTED_LINES - 1
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
bc633e2286488ab56c1011fd97e030b1
Finally, putting things together
# re-loading
reload(util)

CWD = osp.join(osp.expanduser('~'), 'documents', 'grants_projects', 'roberto_projects',
               'guillaume_huguet_CNV', 'File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)

# into a numpy array
arr = np.loadtxt(fullfname, dtype='str', comments=None, delimiter='\Tab',
                 converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)

line0 = arr[0].split('\t')

i_DANGER = line0.index('Pvalue_MMAP_V2_sans_intron_and_Intergenic')
i_SCORE = line0.index('SCORE')
i_START = line0.index('START')
i_STOP = line0.index('STOP')
i_5pGENE = line0.index("5'gene")
i_3pGENE = line0.index("3'gene")
i_5pDIST = line0.index("5'dist(kb)")
i_3pDIST = line0.index("3'dist(kb)")
#i_LOC = line0.index('Location')

scores = np.asarray([line.split('\t')[i_SCORE] for line in arr[1:]])
clean_score = util.process_scores(scores)
max_score = clean_score.max()

print(line0)

#names_from = ['START', 'STOP', "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]

#---------- unique rows?
names_from = ['IID_projet', 'IID_genotype', "CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
        len(np.unique(cnv_names)), len(cnv_names)))

#---------- unique CNVs?
names_from = ["CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
        len(np.unique(cnv_names)), len(cnv_names)))

#---------- unique subjects?
names_from = ['IID_projet']  # , 'IID_genotype']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
        len(np.unique(cnv_names)), len(cnv_names)))

dangers = np.asarray([line.split('\t')[i_DANGER] for line in arr[1:]])
scores = np.asarray([line.split('\t')[i_SCORE] for line in arr[1:]])

#danger_not_empty = dangers != ''
#print(danger_not_empty.sum())
#print(len(np.unique(cnv_name)))
#print(cnv_name[:10])
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
830ffb1847a961cdd467c3358bb2d893
Create a dict of the cnv
from collections import OrderedDict

cnv = OrderedDict()
names_from = ["CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
            #, "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
blank_dgr = 0

for line in arr[1:]:
    lline = line.split('\t')
    dgr = lline[i_DANGER]
    scr = lline[i_SCORE]
    cnv_iid = util.make_uiid(line, names_from, arr[0])

    if dgr != '':
        add_cnv = (util.danger_score(lline[i_DANGER], util.pH1_with_apriori),
                   score2prob(util.process_one_score(lline[i_SCORE], max_score)))
        if cnv_iid in cnv.keys():
            cnv[cnv_iid].append(add_cnv)
        else:
            cnv[cnv_iid] = [add_cnv]
    else:
        blank_dgr += 1

print(len(cnv), (blank_dgr))
print([k for k in cnv.keys()[:5]])
print([k for k in cnv.values()[:5]])

for k in cnv.keys()[3340:3350]:
    print(k, ': ', cnv[k])
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
891dac5213d36d0979791dac49df0ed5
Create a dictionary of the subjects
cnv = OrderedDict({})
#names_from = ['START', 'STOP', "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
names_from = ['IID_projet']

for line in arr[1:]:
    lline = line.split('\t')
    dgr = lline[i_DANGER]
    scr = lline[i_SCORE]
    sub_iid = util.make_uiid(line, names_from, arr[0])

    if dgr != '':
        add_cnv = (util.danger_score(lline[i_DANGER], util.pH1_with_apriori),
                   score2prob(util.process_one_score(lline[i_SCORE], max_score)))
        if sub_iid in cnv.keys():
            cnv[sub_iid].append(add_cnv)
        else:
            cnv[sub_iid] = [add_cnv]
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
c8ddc7a7eb75b06fd5989398a5ea2e70
Histogram of the number of cnv used to compute dangerosity
print(len(cnv))
nbcnv = [len(cnv[sb]) for sb in cnv]
hist = plt.hist(nbcnv, bins=50)
print(np.max(np.asarray(nbcnv)))

# definition of dangerosity from a list of cnv
def dangerosity(listofcnvs):
    """
    inputs: list of tuples (danger_score, proba_cnv)
    returns: a dangerosity score
    """
    last = -1  # slicing the last
    tmp = [np.asarray(t) for t in zip(*listofcnvs)]
    return tmp[0].dot(tmp[1])
    # or: return np.asarray([dgr*prob for (dgr, prob) in listofcnvs]).cumsum()[last]
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
e2b10070bcdffb06f0225264e1384f29
Testing dangerosity
for k in range(1, 30, 30):
    print(cnv[cnv.keys()[k]], ' yields ', dangerosity(cnv[cnv.keys()[k]]))

test_dangerosity_input = [[(1., .5), (1., .5), (1., .5), (1., .5)],
                          [(2., 1.)],
                          [(10000., 0.)]]
test_dangerosity_output = [2., 2., 0]

#print([dangerosity(icnv) for icnv in test_dangerosity_input])  # == test_dangerosity_output
assert([dangerosity(icnv) for icnv in test_dangerosity_input] == test_dangerosity_output)
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
fac429c8ccff4074fa14f397ebd55d4f
Printing out results
dtime = datetime.now().strftime("%y-%m-%d_h%H-%M")
outfile = dtime + 'dangerosity_cnv.txt'
fulloutfile = osp.join(CWD, outfile)

with open(fulloutfile, 'w') as outf:
    for sub in cnv:
        outf.write("\t".join([sub, str(dangerosity(cnv[sub]))]) + "\n")
CNV_dangerosite.ipynb
jbpoline/cnv_analysis
artistic-2.0
ad0b2185563f10dcdd5a6146a1d5d6e1
Testing playing pyguessgame. Generate random numbers and play a game: create two random lists of numbers in the ranges 0-9, 10-19, 20-29, and so on up to 100, then compare the two lists and mark each number as a win or a loss. Debian
#for ronum in ranumlis:
#    print ronum

randict = dict()
othgues = []
othlow = 0
othhigh = 9

for ranez in range(10):
    randxz = random.randint(othlow, othhigh)
    othgues.append(randxz)
    othlow = (othlow + 10)
    othhigh = (othhigh + 10)

#print othgues

tenlis = ['zero', 'ten', 'twenty', 'thirty', 'fourty', 'fifty',
          'sixty', 'seventy', 'eighty', 'ninety']

#for telis in tenlis:
#    for diez in dieci:
#        print telis

#randict
pggNumAdd.ipynb
wcmckee/signinlca
mit
312d387a713dfb9157078289aed82b71
Makes a dict with keys pointing to the tens numbers. The value needs to be updated with the list of random numbers; currently it just adds the number one. How do we add the random number list? One possible answer is sketched after the code below.
for ronum in ranumlis:
    #print ronum
    if ronum in othgues:
        print (str(ronum) + ' You Win!')
    else:
        print (str(ronum) + ' You Lose!')

#dieci = dict()

#for ranz in range(10):
    #print str(ranz) + str(1)
#    dieci.update({str(ranz) + str(1): str(ranz)})
#    for numz in range(10):
        #print str(ranz) + str(numz)
#        print numz

#print zetoo

#for diez in dieci:
#    print diez

#for sinum in ranumlis:
#    print str(sinum) + (str('\n'))

#if str(sinum) in othhigh:
#    print 'Win'

#import os
#os.system('sudo adduser joemanz --disabled-login --quiet -D')

#uslis = os.listdir('/home/wcmckee/signinlca/usernames/')

#print ('User List: ')
#for usl in uslis:
#    print usl
#    os.system('sudo adduser ' + usl + ' ' + '--disabled-login --quiet')
#    os.system('sudo mv /home/wcmckee/signinlca/usernames/' + usl + ' ' + '/home/' + usl + ' ')

#print dieci
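A minimal sketch of one way to answer the question above, mapping each tens label to the random number drawn for that range. It reuses tenlis and othgues from the earlier cell; the pairing by index is an assumption about the intended design, not the author's final solution.

# hypothetical: pair each tens-range label with the random number drawn for that range
randict = dict()
for telis, randxz in zip(tenlis, othgues):
    randict[telis] = randxz
print(randict)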
pggNumAdd.ipynb
wcmckee/signinlca
mit
447b2467c6688d36c950581354968abe
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Creating a Pandas DataFrame from a CSV file<br></p>
data = pd.read_csv('./weather/daily_weather.csv')
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
f9c79e5b17c79e39aa01a81b74f4cba0
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Daily Weather Data Description</p> <br> The file daily_weather.csv is a comma-separated file that contains weather data. This data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.<br><br> Let's now check all the columns in the data.
data.columns
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
c2287a746e5ea02538f0977da654ac3f
<br>Each row in daily_weather.csv captures weather data for a separate day. <br><br> Sensor measurements from the weather station were captured at one-minute intervals. These measurements were then processed to generate values to describe daily weather. Since this dataset was created to classify low-humidity days vs. non-low-humidity days (that is, days with normal or high humidity), the variables included are weather measurements in the morning, with one measurement, namely relative humidity, in the afternoon. The idea is to use the morning weather values to predict whether the day will be low-humidity or not based on the afternoon measurement of relative humidity. Each row, or sample, consists of the following variables:

* number: unique number for each row
* air_pressure_9am: air pressure averaged over a period from 8:55am to 9:04am (Unit: hectopascals)
* air_temp_9am: air temperature averaged over a period from 8:55am to 9:04am (Unit: degrees Fahrenheit)
* avg_wind_direction_9am: wind direction averaged over a period from 8:55am to 9:04am (Unit: degrees, with 0 meaning coming from the North, and increasing clockwise)
* avg_wind_speed_9am: wind speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
* max_wind_direction_9am: wind gust direction averaged over a period from 8:55am to 9:10am (Unit: degrees, with 0 being North and increasing clockwise)
* max_wind_speed_9am: wind gust speed averaged over a period from 8:55am to 9:04am (Unit: miles per hour)
* rain_accumulation_9am: amount of rain accumulated in the 24 hours prior to 9am (Unit: millimeters)
* rain_duration_9am: amount of time rain was recorded in the 24 hours prior to 9am (Unit: seconds)
* relative_humidity_9am: relative humidity averaged over a period from 8:55am to 9:04am (Unit: percent)
* relative_humidity_3pm: relative humidity averaged over a period from 2:55pm to 3:04pm (Unit: percent)
data
data[data.isnull().any(axis=1)]
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
d90b6d39048ad6d66674a5b894648e2b
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Data Cleaning Steps<br><br></p> We will not need to number for each row so we can clean it.
del data['number']
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
36e6b75ffecf21d90be6057a4bee6cfd
Now let's drop null values using the pandas dropna function.
before_rows = data.shape[0]
print(before_rows)

data = data.dropna()

after_rows = data.shape[0]
print(after_rows)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
59b952d3e30e8c9938f2401cea913fd2
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> How many rows dropped due to cleaning?<br><br></p>
before_rows - after_rows
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
65bb19f45207b1483ec8af918b326d90
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"> Convert to a Classification Task <br><br></p> Binarize the relative_humidity_3pm to 0 or 1.<br>
clean_data = data.copy()
clean_data['high_humidity_label'] = (clean_data['relative_humidity_3pm'] > 24.99) * 1
print(clean_data['high_humidity_label'])
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
84c27b8651456155fa25b199e0b3f406
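For a quick sanity check of the new label, the class balance can be inspected; a minimal sketch reusing the clean_data frame from the cell above:

```python
# Sketch: how many days fall in each class (assumes clean_data from the cell above)
print(clean_data['high_humidity_label'].value_counts())
```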
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Target is stored in 'y'. <br><br></p>
y = clean_data[['high_humidity_label']].copy()
#y
clean_data['relative_humidity_3pm'].head()
y.head()
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
f9a95c04d7d3ab77917a5fa54d47dd7b
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Use 9am Sensor Signals as Features to Predict Humidity at 3pm <br><br></p>
morning_features = ['air_pressure_9am', 'air_temp_9am', 'avg_wind_direction_9am', 'avg_wind_speed_9am',
                    'max_wind_direction_9am', 'max_wind_speed_9am', 'rain_accumulation_9am',
                    'rain_duration_9am']

X = clean_data[morning_features].copy()
X.columns
y.columns
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
fd1c0ef06976de75f4edf38c0baa8433
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Perform Test and Train split <br><br></p> REMINDER: Training Phase In the training phase, the learning algorithm uses the training data to adjust the model’s parameters to minimize errors. At the end of the training phase, you get the trained model. <img src="TrainingVSTesting.jpg" align="middle" style="width:550px;height:360px;"/> <BR> In the testing phase, the trained model is applied to test data. Test data is separate from the training data, and is previously unseen by the model. The model is then evaluated on how it performs on the test data. The goal in building a classifier model is to have the model perform well on training as well as test data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)

#type(X_train)
#type(X_test)
#type(y_train)
#type(y_test)
#X_train.head()
#y_train.describe()
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
3e9948e926faa2ca70645d17edbaaeec
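To confirm the 67/33 split, the sizes of both sets can be printed; a small sketch assuming the variables from the cell above:

```python
# Sketch: sizes of the training and test sets
print(X_train.shape[0], X_test.shape[0])
```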
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Fit on Train Set <br><br></p>
humidity_classifier = DecisionTreeClassifier(max_leaf_nodes=10, random_state=0)
humidity_classifier.fit(X_train, y_train)
type(humidity_classifier)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
35393f0395e59e016c78ee4574ea2a49
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Predict on Test Set <br><br></p>
predictions = humidity_classifier.predict(X_test)
predictions[:10]
y_test['high_humidity_label'][:10]
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
d5f5a598a58d6661dd8f8e4defe88c61
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Measure Accuracy of the Classifier <br><br></p>
accuracy_score(y_true = y_test, y_pred = predictions)
Week-7-MachineLearning/Weather Data Classification using Decision Trees.ipynb
harishkrao/DSE200x
mit
dfe28641121f2af16787d0b8fe7a7b5a
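The same number can be recomputed by hand, which makes clear what accuracy_score measures; a minimal sketch assuming predictions and y_test from the cells above:

```python
import numpy as np

# Fraction of test samples where the prediction matches the true label
manual_accuracy = np.mean(predictions == y_test['high_humidity_label'].values)
print(manual_accuracy)
```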
2 Using a class decorator
from functools import wraps

def singleton(cls):
    instances = {}
    @wraps(cls)
    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return wrapper

@singleton
class MyClass(object):
    pass

myclass1 = MyClass()
myclass2 = MyClass()
print id(myclass1) == id(myclass2)
python-statatics-tutorial/advance-theme/Singleton.ipynb
gaufung/Data_Analytics_Learning_Note
mit
f3aa39982c74e8608d2d525deae0d499
3 Using a getInstance method (not thread-safe)
class MySingleton(object):
    @classmethod
    def getInstance(cls):
        if not hasattr(cls, '_instance'):
            cls._instance = cls()
        return cls._instance

mysingleton1 = MySingleton.getInstance()
mysingleton2 = MySingleton.getInstance()
print id(mysingleton1) == id(mysingleton2)
python-statatics-tutorial/advance-theme/Singleton.ipynb
gaufung/Data_Analytics_Learning_Note
mit
9ddaaf7cf63fe6ee5611d26f45746e1d
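The getInstance approach above is explicitly not thread-safe; a hedged sketch of a lock-protected variant (Python 2 syntax to match the examples above; the class name is illustrative):

```python
import threading

class ThreadSafeSingleton(object):
    _lock = threading.Lock()

    @classmethod
    def getInstance(cls):
        if not hasattr(cls, '_instance'):
            with cls._lock:
                # double-checked locking: re-test once the lock is held
                if not hasattr(cls, '_instance'):
                    cls._instance = cls()
        return cls._instance

print id(ThreadSafeSingleton.getInstance()) == id(ThreadSafeSingleton.getInstance())
```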
Definite integrals Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc. Find five of these integrals and perform the following steps:

1. Typeset the integral using LaTeX in a Markdown cell.
2. Define an integrand function that computes the value of the integrand.
3. Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
4. Define an integral_exact function that computes the exact value of the integral.
5. Call and print the return value of integral_approx and integral_exact for one set of parameters.

Here is an example to show what your solutions should look like:

Example

Here is the integral I am performing:

$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
def integrand(x, a):
    return 1.0/(x**2 + a**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return 0.5*np.pi/a

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
aa130bef52160cffbec11911a1cb0893
Integral 1 \begin{equation} \int_{0}^{a}{\sqrt{a^2 - x^2}} dx=\frac{\pi a^2}{4} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return (np.sqrt(a**2 - x**2))

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, a, args=(a,))
    return I

def integral_exact(a):
    return (0.25*np.pi*a**2)

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
01535a6c9deb40715f5b65a05ebfec88
Integral 2 \begin{equation} \int_{0}^{\infty} e^{-ax^2} dx =\frac{1}{2}\sqrt{\frac{\pi}{a}} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return np.exp(-a*x**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return 0.5*np.sqrt(np.pi/a)

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
42a5110dcd4ee6eecef3a91fdf47056a
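To build confidence beyond a single parameter value, the numerical and exact results can be compared for several choices of $a$; a small sketch reusing the functions defined above:

```python
# Sketch: compare numerical and exact values for several parameter choices
for a in [0.5, 1.0, 2.0, 5.0]:
    print(a, integral_approx(a), integral_exact(a))
```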
Integral 3 \begin{equation} \int_{0}^{\infty} \frac{x}{e^x-1} dx =\frac{\pi^2}{6} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return x/(np.exp(x)-1)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return (1/6.0)*np.pi**2

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
e8b06c14d2d1524f70d85293ca5b46c4
Integral 4 \begin{equation} \int_{0}^{\infty} \frac{x}{e^x+1} dx =\frac{\pi^2}{12} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return x/(np.exp(x)+1)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return (1/12.0)*np.pi**2

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
a79d5d7517a75da1c02fb688e5883366
Integral 5 \begin{equation} \int_{0}^{1} \frac{\ln x}{1-x} dx =-\frac{\pi^2}{6} \end{equation}
# YOUR CODE HERE
def integrand(x, a):
    return np.log(x)/(1-x)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, 1, args=(a,))
    return I

def integral_exact(a):
    return (-1.0/6.0)*np.pi**2

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
JackDi/phys202-2015-work
mit
efb9d086072b96d88126c452520f2adc
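The exact value $-\pi^2/6$ is minus the series $\sum_{n\ge 1} 1/n^2$, which gives an independent numerical cross-check; a short sketch:

```python
import numpy as np

# Partial sum of 1/n^2 approaches pi^2/6, so the integral approaches -pi^2/6
n = np.arange(1, 100000)
print(-np.sum(1.0/n**2), -(np.pi**2)/6.0)
```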
Let's analyze this graph:
- the first IR basic block has its name set to main
- it is composed of 2 AssignBlocks
- the first AssignBlock contains only one assignment, EAX = EBX
- the second one is IRDst = loc_key_1

IRDst is a special register which represents a kind of program counter in the intermediate representation. Each IRBlock has one and only one assignment to IRDst. The position of the IRDst assignment is not always in the last AssignBlock of the IRBlock. In our case, the shellcode stops after the MOV EAX, EBX, so the next location to execute is unknown: end. This label has been artificially added by the script. Let's take another instruction.
graph_ir_x86(""" main: ADD EAX, 3 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
90121c48882115ab1412c16f9903271c
In this graph, we can note that each instruction's side effects are represented. Note the equation:

zf = FLAG_EQ_CMP(EAX, -0x3)

The detailed version of the expression is:

ExprId('zf', 1) = ExprOp('FLAG_EQ_CMP', ExprId('EAX', 32), ExprInt(-0x3, 32))

The operator FLAG_EQ_CMP is a kind of high level representation. But you can customize the lifter in order to get the real equation of the zf (this will be presented in a documentation dedicated to the modification of the intermediate representation control flow graph):

ExprId('zf', 1) = ExprCond(ExprId('EAX', 32) - ExprInt(-0x3, 32), ExprInt(0, 1), ExprInt(1, 1))

which is, in a simplified form:

zf = (EAX + 3) ? (0, 1)
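To make the notation concrete, the detailed form above can be rebuilt directly with Miasm's expression classes; a minimal sketch (it assumes a recent Miasm where these classes live in miasm.expression.expression):

```python
from miasm.expression.expression import ExprId, ExprInt, ExprCond, ExprAssign

eax = ExprId('EAX', 32)
zf = ExprId('zf', 1)

# zf is 1 only when EAX - (-3) evaluates to zero, i.e. when EAX == -3
detailed = ExprAssign(zf, ExprCond(eax - ExprInt(-0x3, 32),
                                   ExprInt(0, 1), ExprInt(1, 1)))
print(detailed)
```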
graph_ir_x86(""" main: XCHG EAX, EBX """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
3bb6340b44f5242649d6d7d408845b2b
This one is interesting, as it demonstrates perfectly the parallel execution of multiple assignments. If you are puzzled by this notation, imagine it describes equations, which express the destination variables of an output state depending on an input state. The equations can be rewritten:

EAX_out = EBX_in

EBX_out = EAX_in

And this matches the xchg semantics. After the execution, those variables are committed, which means that EAX takes the value of EAX_out, and EBX takes the value of EBX_out.

Some arbitrary choices have been made in order to match the native semantics as closely as possible. For example, let's take the instruction:

CMOVZ EAX, EBX

This conditional move is done if the zero flag is set. So we may want to translate it as:

EAX = zf ? EBX : EAX

Which can be read: if zf is 1, EAX is set to EBX, else EAX is set to EAX, which is equivalent to no modification. This representation seems good at first, as it seems to capture the semantics of the conditional move. But let's question the system on the equation EAX = zf ? EBX, EAX:
- which register is written? EAX is always written
- which register is read? zf, EBX, EAX are read

If we ask the same questions about the instruction CMOVZ EAX, EBX, the answers are a bit different:
- which register is written? EAX is written only if zf is 1
- which register is read? zf is always read, EBX may be read if zf is 1

The conclusion is that the representation we gave doesn't properly represent the instruction. Here is what Miasm gives as the intermediate representation for it:
# Here is a push
graph_ir_x86("""
main:
    PUSH EAX
""")

graph_ir_x86("""
main:
    CMOVZ EAX, EBX
""")
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
f1d06bf7dbb5604692065a9800aa865d
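Since the discussion revolves around which registers each AssignBlock reads and writes, this can also be queried programmatically; a hedged sketch (it assumes an ircfg produced by a lifter, e.g. lifter.new_ircfg_from_asmcfg(asmcfg), and relies on the get_r/get_w helpers of AssignBlock):

```python
# Sketch: dump reads/writes for every AssignBlock of every IRBlock in an IRCFG
# (ircfg is assumed to come from lifter.new_ircfg_from_asmcfg(asmcfg))
for loc_key, irblock in ircfg.blocks.items():
    for assignblk in irblock.assignblks:
        print(loc_key, 'reads:', assignblk.get_r(), 'writes:', assignblk.get_w())
```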
Here are some remarks we can make on this version:
- one x86 instruction has generated multiple IRBlocks
- the first IRBlock only reads the zf (we don't take the locations into account here)
- EAX is assigned only in the case where zf equals 1
- EBX is read only in the case where zf equals 1

One may argue that in this form it's harder to see what is read and what is written. But consider this: if cmovz didn't exist (for example on older CPUs), what would the code to do this look like?
graph_ir_x86(""" main: JZ end MOV EAX, EBX end: """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
d525b195f09fa0940d83ead7e94944ab
The conclusion is that, in intermediate representation, analyzing the cmovz is exactly as difficult as analyzing the code using jz/mov. An important point is that in intermediate representation, one instruction can generate multiple IRBlocks. Here are some interesting examples:
graph_ir_x86(""" main: MOVSB """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
5459a567ef9a9e53538ee3b6605660a3
And now, the version using a repeat prefix:
graph_ir_x86(""" main: REP MOVSB """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
cbb632af670122a625d5348e7371aa0f
In the very same way as cmovz, if the rep movsb instruction didn't exist, we would have to use more complex code. The translation of some instructions is tricky:
graph_ir_x86(""" main: SHR EAX, 1 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
d6a3c8e24c8bab6938ec39f90f193e39
For the moment, nothing special. EAX is updated correctly, and the flags are updated according to the result (note those side effects are in parallel here). But look at the next one:
graph_ir_x86(""" main: SHR EAX, CL """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
f3a7c4235e98c8255ad39f6c98a7b2ad
In this case, if CL is zero, the destination is shifted by a zero amount. The instruction behaves (in 32 bit mode) as a nop, and the flags are not assigned. We could have used the same trick as for cmovz, but this representation matches the instruction semantics more accurately. Here is another one:
graph_ir_x86(""" main: DIV ECX """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
4027ed3d5a8d962f48616bbc6ac875d2
This instruction may generate an exception in case the divisor is zero. The intermediate representation generates a test which evaluates the divisor value and assigns a special register, exception_flags, to a constant. This constant represents the division by zero. Note this is arbitrary. We could have chosen not to make the possible division by zero explicit, and simply keep in mind that the umod and udiv operators may generate exceptions. This may change in a future version of Miasm. Indeed, each memory access may generate an exception, and Miasm doesn't make them explicit in the intermediate representation: this may be misleading and very hard to analyze in a post pass. This is why we may accept implicitly raised exceptions in both those operators rather than generating such code. The same choice has been made for other instructions:
graph_ir_x86(""" main: INT 0x3 """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
0910546627e33ae386d91598f51e3949
By default, memory accesses make segmentation explicit:
graph_ir_x86(""" main: MOV EAX, DWORD PTR FS:[EBX] """)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
a60690b2f272294450c1bf7530ff1d68
The memory pointer uses the special operator segm, which takes two arguments:
- the value of the segment used for the memory access
- the base address

Note that if you work in a flat segmentation model, you can add a post-translation pass which simplifies ExprOp("segm", A, B) into B. This will ease code analysis.

Note: If you read the documentation on expressions carefully, you know that ExprOp is n-ary and that all of its arguments must have the same size. The operator segm is one of the exceptions: the register FS has a size of 16 bits (as a segment selector register) while EBX has a size of 32. In this case, ExprOp("segm", FS, EBX) has the size of EBX.
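Such a post-translation pass could look like the following sketch (hedged: it relies on the visit and is_op helpers of Miasm expressions; the function name is illustrative):

```python
def remove_segm(expr):
    """Rewrite ExprOp('segm', seg, base) into base, for a flat memory model."""
    def drop_segm(e):
        if e.is_op('segm'):
            return e.args[1]
        return e
    return expr.visit(drop_segm)
```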
asmcfg = gen_x86_asmcfg("""
main:
    CALL 0x11223344
    MOV EBX, EAX
""")
asmcfg.graphviz()

graph_ir_x86("""
main:
    CALL 0x11223344
    MOV EBX, EAX
""")
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
4c83969e9195c010d931d488304bfced
What happened here?
- the call instruction has 2 side effects: stacking the return address and jumping to the subfunction address
- here, the subfunction address is 0x11223344, and the return address is located at offset 0x5, which is represented here by loc_5

The question is: why are there unlinked nodes in the graph? The answer is that the graph only analyzes the destinations of the IRBlocks, which means the value of IRDst. So in main, Miasm knows that the next IRBlock is located at loc_11223344. But as we didn't disassemble code at this address, we don't have its intermediate representation. However, the disassembler engine knows (this behavior can be customized) that a call returns back to the instruction just after the call. So the basic block at end has been disassembled and translated. If we analyze IRDst only, there are no links between them. This raw way of translating is interesting to see the low-level moves of the stack and return address, but it makes code analysis a bit hard. What we may want is to consider subcalls as an unknown operator, with arguments and side effects. This models the call to a subfunction. This is the difference in Miasm between translating using lifter (raw translation) and lifter_model_call (lifter + call modelization), which models subfunction calls. By default, Miasm uses a basic model which is wrong in most cases, but this model can (and arguably must) be replaced by user-defined behavior. You can observe the difference in the examples: example/disasm/dis_binary_lift.py and example/disasm/dis_binary_lifter_model_call.py
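Outside the helper functions used in this notebook, the call-modelling lifter is obtained through the Machine abstraction; a hedged sketch of what those example scripts roughly do (API names assumed from recent Miasm versions, loc_db and asmcfg are taken from an earlier disassembly):

```python
# Sketch: lifting an AsmCFG with subfunction-call modelling
# (see example/disasm/dis_binary_lifter_model_call.py for the real script)
from miasm.analysis.machine import Machine

machine = Machine('x86_32')
lifter = machine.lifter_model_call(loc_db)      # lifter that models subfunction calls
ircfg = lifter.new_ircfg_from_asmcfg(asmcfg)    # translate the native CFG to IR
```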
graph_ir_x86(""" main: MOV EBX, 0x1234 CALL 0x11223344 MOV ECX, EAX RET """, True)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
3bf4c2e0eb779cf41bb1d81dc731fa5f
What happened here? The translation of the call is replaced by two side effects which occur in parallel:
- EAX is set to the result of the operator call_func_ret, which has two arguments: loc_11223344 and ESP
- ESP is set to the result of the operator call_func_stack, which has two arguments: loc_11223344 and ESP

The first one is there to model the assignment of the return value in 'classic' x86 code. The second one is there to model a possible change of the stack pointer, depending on the function called and on the old stack pointer. Everything here can be subclassed in order to customize the translation behavior.

Subfunction call custom modeling

The code responsible for the modelisation of function calls is located in the LifterModelCall class (the lifter with call modeling) in miasm/ir/analysis.py:

```python
    ...
    def call_effects(self, addr, instr):
        """Default modelisation of a function call to @addr. This may be used to:

        * insert dependencies to arguments (stack base, registers, ...)
        * add some side effects (stack clean, return value, ...)

        Return a couple:
        * list of assignments to add to the current irblock
        * list of additional irblocks

        @addr: (Expr) address of the called function
        @instr: native instruction which is responsible of the call
        """
        call_assignblk = AssignBlock(
            [
                ExprAssign(self.ret_reg, ExprOp('call_func_ret', addr, self.sp)),
                ExprAssign(self.sp, ExprOp('call_func_stack', addr, self.sp))
            ],
            instr
        )
        return [call_assignblk], []
```

Some architectures subclass it to include architecture-dependent behavior, for example in miasm/arch/x86/lifter_model_call.py, in which a default calling convention with arguments passed through registers is used:

```python
    ...
    def call_effects(self, ad, instr):
        call_assignblk = AssignBlock(
            [
                ExprAssign(
                    self.ret_reg,
                    ExprOp(
                        'call_func_ret',
                        ad,
                        self.sp,
                        self.arch.regs.RCX,
                        self.arch.regs.RDX,
                        self.arch.regs.R8,
                        self.arch.regs.R9,
                    )
                ),
                ExprAssign(self.sp, ExprOp('call_func_stack', ad, self.sp)),
            ],
            instr
        )
        return [call_assignblk], []
```

This is the generic code used in x86_64 to model function calls. But you can model functions more finely. For example, suppose you are analysing code on x86_32 with the stdcall convention. Suppose you know the callee cleans its stack arguments. Suppose as well that you know, for each function, how many arguments it has. You can then customize the model to match the callee and compute the correct stack modification, as well as get the arguments from the stack:
# Construct a custom lifter
class LifterFixCallStack(LifterModelCall_x86_32):
    def call_effects(self, addr, instr):
        if addr.is_loc():
            if self.loc_db.get_location_offset(addr.loc_key) == 0x11223344:
                # Suppose the function at 0x11223344 has 3 arguments
                args_count = 3
            else:
                # It's a function we didn't analyze
                raise RuntimeError("Unknown function parameters")
        else:
            # It's a dynamic call !
            raise RuntimeError("Dynamic destination ?")
        # Arguments are taken from stack
        args = []
        for i in range(args_count):
            args.append(ExprMem(self.sp + ExprInt(i * 4, 32), 32))
        # Generate the model
        call_assignblk = AssignBlock(
            [
                ExprAssign(self.ret_reg, ExprOp('call_func_ret', addr, *args)),
                ExprAssign(self.sp, self.sp + ExprInt(args_count * 4, self.sp.size))
            ],
            instr
        )
        return [call_assignblk], []

graph_ir_x86("""
main:
    MOV EBX, 0x1234
    PUSH 3
    PUSH 2
    PUSH 1
    CALL 0x11223344
    MOV ECX, EAX
    RET
""", lifter_custom=LifterFixCallStack)
doc/ir/lift.ipynb
serpilliere/miasm
gpl-2.0
4c5ac4b4b3093a59a877d2f663ef5669
Read File Containing Zones Using the read_zbarray utility, we can import zonebudget-style array files.
from flopy.utils import read_zbarray

zone_file = os.path.join(loadpth, 'zonef_mlt')
zon = read_zbarray(zone_file)
nlay, nrow, ncol = zon.shape

fig = plt.figure(figsize=(10, 4))
for lay in range(nlay):
    ax = fig.add_subplot(1, nlay, lay+1)
    im = ax.pcolormesh(zon[lay, :, :])
    cbar = plt.colorbar(im)
    plt.gca().set_aspect('equal')
plt.show()

np.unique(zon)
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause
18b7cd3a275865a1b98fca47a79c6402
Extract Budget Information from ZoneBudget Object At the core of the ZoneBudget object is a numpy structured array. The class provides some wrapper functions to help us interrogate the array and save it to disk.
# Create a ZoneBudget object and get the budget record array
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 1096))
zb.get_budget()

# Get a list of the unique budget record names
zb.get_record_names()

# Look at a subset of fluxes
names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN']
zb.get_budget(names=names)

# Look at fluxes in from zone 2
names = ['RECHARGE_IN', 'ZONE_1_IN', 'ZONE_3_IN']
zones = ['ZONE_2']
zb.get_budget(names=names, zones=zones)

# Look at all of the mass-balance records
names = ['TOTAL_IN', 'TOTAL_OUT', 'IN-OUT', 'PERCENT_DISCREPANCY']
zb.get_budget(names=names)
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause
c80d3a34687baf5ab028c072233f474d
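The text above also mentions saving the budget to disk; a hedged sketch of how this could be done with the ZoneBudget wrappers (method names assumed from flopy's ZoneBudget API, reusing the zb object created above):

```python
# Sketch: persist the budget records to a CSV file
zb.to_csv('zonebudget_output.csv')

# A pandas view is also available through the wrapper functions
df = zb.get_dataframes()
df.head()
```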
Convert Units The ZoneBudget class supports the use of mathematical operators and returns a new copy of the object.
cmd = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=(0, 0))
cfd = cmd / 35.3147
inyr = (cfd / (250 * 250)) * 365 * 12

cmdbud = cmd.get_budget()
cfdbud = cfd.get_budget()
inyrbud = inyr.get_budget()

names = ['RECHARGE_IN']
rowidx = np.in1d(cmdbud['name'], names)
colidx = 'ZONE_1'

print('{:,.1f} cubic meters/day'.format(cmdbud[rowidx][colidx][0]))
print('{:,.1f} cubic feet/day'.format(cfdbud[rowidx][colidx][0]))
print('{:,.1f} inches/year'.format(inyrbud[rowidx][colidx][0]))

cmd is cfd
examples/Notebooks/flopy3_ZoneBudget_example.ipynb
bdestombe/flopy-1
bsd-3-clause
4d3cd8c1ad0df5ec7c830249909197f9