Return Value

Name | Description | Type
--- | --- | ---
Forward_Flow | The forward flow. | Flow
Forward_Traces | The forward traces. | List of Trace
New_Sessions | Sessions initialized by the forward trace. | List of str
Reverse_Flow | The reverse flow. | Flow
Reverse_Traces | The reverse traces. | List of Trace
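The cells below assume `result` already holds the answer frame of a bidirectional traceroute question. A minimal sketch of how such a frame might be obtained (the start location is a placeholder, not from the source):

```python
# Sketch: obtain the answer frame used in the cells below.
# 'as2core1' is a hypothetical start location; adjust to your snapshot.
result = bf.q.bidirectionalTraceroute(
    startLocation='as2core1',
    headers=HeaderConstraints(dstIps='host1', applications='DNS'),
).answer().frame()
```

Retrieving the Forward flow definition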
result.Forward_Flow
Retrieving the detailed Forward Trace information
len(result.Forward_Traces)
result.Forward_Traces[0]
Evaluating the first Forward Trace
result.Forward_Traces[0][0]
Retrieving the disposition of the first Forward Trace
result.Forward_Traces[0][0].disposition
Retrieving the first hop of the first Forward Trace
result.Forward_Traces[0][0][0]
Retrieving the last hop of the first Forward Trace
result.Forward_Traces[0][0][-1]
Retrieving the Return flow definition
result.Reverse_Flow
Retrieving the detailed Return Trace information
len(result.Reverse_Traces)
result.Reverse_Traces[0]
Evaluating the first Reverse Trace
result.Reverse_Traces[0][0]
Retrieving the disposition of the first Reverse Trace
result.Reverse_Traces[0][0].disposition
Retrieving the first hop of the first Reverse Trace
result.Reverse_Traces[0][0][0]
Retrieving the last hop of the first Reverse Trace
result.Reverse_Traces[0][0][-1]

bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Reachability

Finds flows that match the specified path and header space conditions. Searches across all flows that match the specified conditions and returns examples of such flows. This question can be used to ensure that certain services are globally accessible and that parts of the network are perfectly isolated from each other.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
pathConstraints | Constrain the path a flow can take (start/end/transit locations). | PathConstraints | True |
headers | Packet header constraints. | HeaderConstraints | True |
actions | Only return flows for which the disposition is from this set. | DispositionSpec | True | success
maxTraces | Limit the number of traces returned. | int | True |
invertSearch | Search for packet headers outside the specified header space, rather than inside it. | bool | True |
ignoreFilters | Do not apply filters/ACLs during analysis. | bool | True |

Invocation
result = bf.q.reachability(
    pathConstraints=PathConstraints(startLocation='/as2/'),
    headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'),
    actions='SUCCESS').answer().frame()
Bi-directional Reachability

Searches for successfully delivered flows that can also successfully receive a response. Performs two reachability analyses: first originating from the specified sources, then returning back to those sources. After the first (forward) pass, it sets up sessions in the network and creates a return flow for each successfully delivered forward flow. The second pass searches for return flows that can be successfully delivered in the presence of the set-up sessions.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
pathConstraints | Constrain the path a flow can take (start/end/transit locations). | PathConstraints | True |
headers | Packet header constraints. | HeaderConstraints | False |
returnFlowType | Specifies the type of return flows to search. | str | True | SUCCESS

Invocation
result = bf.q.bidirectionalReachability(
    pathConstraints=PathConstraints(startLocation='/as2dist1/'),
    headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'),
    returnFlowType='SUCCESS').answer().frame()
Loop detection

Detects forwarding loops. Searches across all possible flows in the network and returns example flows that will experience forwarding loops.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
maxTraces | Limit the number of traces returned. | int | True |

Invocation
result = bf.q.detectLoops().answer().frame()
Return Value

Name | Description | Type
--- | --- | ---
Flow | The flow. | Flow
Traces | The traces for this flow. | Set of Trace
TraceCount | The total number of traces for this flow. | int

Print the first 5 rows of the returned DataFrame
result.head(5)

bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Multipath Consistency for host-subnets

Validates multipath consistency between all pairs of subnets. Searches across all flows between subnets that are treated differently (i.e., dropped versus forwarded) by different paths in the network and returns example flows.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
maxTraces | Limit the number of traces returned. | int | True |

Invocation
result = bf.q.subnetMultipathConsistency().answer().frame()
Multipath Consistency for router loopbacks

Validates multipath consistency between all pairs of loopbacks. Finds flows between loopbacks that are treated differently (i.e., dropped versus forwarded) by different paths in the presence of multipath routing.

Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
maxTraces | Limit the number of traces returned. | int | True |

Invocation
result = bf.q.loopbackMultipathConsistency().answer().frame()
Retrieving the last hop of the first Trace
result.Traces[0][0][-1]
He picked one of these cards and kept it in his mind. Next, the 9 playing cards would flash one by one, in random order, across the screen. Each card was presented a total of 30 times, and the subject mentally counted the number of times his card appeared (which was 30 if he was paying attention; we are not interested in the count itself, it just helps keep the subject focused on the cards). In this tutorial we will analyse the average response to each card. The card that the subject had in mind should produce a larger response than the others. The data used in this tutorial is EEG data that has been bandpass filtered with a 3rd-order Butterworth filter with a passband of 0.5-30 Hz. This results in relatively clean-looking data. When doing ERP analysis on other data, you will probably have to filter it yourself; don't do ERP analysis on non-filtered, non-baselined data! Bandpass filtering is covered in the 3rd tutorial. The EEG data is stored on the virtual server you are talking to right now, as a MATLAB file, which we can load using the SciPy module:
import scipy.io

m = scipy.io.loadmat('data/tutorial1-01.mat')
print(m.keys())
The scipy.io.loadmat function returns a dictionary containing the variables stored in the MATLAB file. Two of them are of interest to us: the actual EEG, and the labels which indicate at which point in time which card was presented to the subject.
EEG = m['EEG']
labels = m['labels'].flatten()

print('EEG dimensions:', EEG.shape)
print('Label dimensions:', labels.shape)
All channels are drawn on top of each other, which is not convenient. Usually, EEG data is plotted with the channels vertically stacked, an artefact stemming from the days when EEG machines drew on large rolls of paper. Let's add a constant value to each EEG channel before plotting them, plus some decoration like meaningful x and y axes. I'll write this as a function, since it will come in handy later on:
from matplotlib.collections import LineCollection

def plot_eeg(EEG, vspace=100, color='k'):
    '''
    Plot the EEG data, stacking the channels vertically on top of each other.

    Parameters
    ----------
    EEG : array (channels x samples)
        The EEG data
    vspace : float (default 100)
        Amount of vertical space to put between the channels
    color : string (default 'k')
        Color to draw the EEG in
    '''
    bases = vspace * arange(7)  # vspace * 0, vspace * 1, vspace * 2, ..., vspace * 6

    # To add the bases (a vector of length 7) to the EEG (a 2-D matrix), we don't use
    # loops, but rely on a NumPy feature called broadcasting:
    # http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
    EEG = EEG.T + bases

    # Calculate a timeline in seconds, knowing that the sample rate of the EEG recorder was 2048 Hz.
    samplerate = 2048.
    time = arange(EEG.shape[0]) / samplerate

    # Plot EEG versus time
    plot(time, EEG, color=color)

    # Add gridlines to the plot
    grid()

    # Label the axes
    xlabel('Time (s)')
    ylabel('Channels')

    # The y-ticks are set to the locations of the electrodes. The international 10-20 system defines
    # default names for them.
    gca().yaxis.set_ticks(bases)
    gca().yaxis.set_ticklabels(['Fz', 'Cz', 'Pz', 'CP1', 'CP3', 'C3', 'C4'])

    # Put a nice title on top of the plot
    title('EEG data')

# Testing our function
figure(figsize=(15, 4))
plot_eeg(EEG)
And to top it off, let's add vertical lines whenever a card was shown to the subject:
figure(figsize=(15, 4))
plot_eeg(EEG)
for onset in flatnonzero(labels):
    axvline(onset / 2048., color='r')
As you can see, cards were shown at a rate of 2 per second. We are interested in the response generated whenever a card was shown, so we cut one-second-long pieces of EEG signal that start from the moment a card was shown. These pieces will be named 'trials'. A useful function here is flatnonzero, which returns the indices of all elements of an array that contain a non-zero value. It effectively gives us the time (as an index) when a card was shown, if we use it in a clever way.
onsets = flatnonzero(labels)
print(onsets[:10])  # Print the first 10 onsets
print('Number of onsets:', len(onsets))

classes = labels[onsets]
print('Card shown at each onset:', classes[:10])
Let's create a 3-dimensional array containing all the trials:
nchannels = 7  # 7 EEG channels
sample_rate = 2048.  # The sample rate of the EEG recording device was 2048 Hz
nsamples = int(1.0 * sample_rate)  # one second's worth of data samples
ntrials = len(onsets)

trials = zeros((ntrials, nchannels, nsamples))
for i, onset in enumerate(onsets):
    trials[i, :, :] = EEG[:, onset:onset + nsamples]
print(trials.shape)
All that's left is to calculate this score for each card and pick the card with the highest score.
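The code below relies on `cards`, `pearsonr`, and `p300_amplitudes`, which were defined in earlier cells not shown in this excerpt. A minimal sketch of what those definitions might look like (the card names, channel index, and time window are assumptions, not from the source):

```python
# Hypothetical reconstruction of variables from earlier (omitted) cells.
from scipy.stats import pearsonr

cards = ['card%d' % i for i in range(1, 10)]  # placeholder names for the 9 cards

# P300 amplitude per trial: mean signal in a post-stimulus window on one channel.
# Channel index 2 (Pz) and the 0.3-0.5 s window are assumed for illustration.
from_index = int(0.3 * sample_rate)
to_index = int(0.5 * sample_rate)
p300_amplitudes = trials[:, 2, from_index:to_index].mean(axis=1)
```

With those defined, we compute the scores: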
nclasses = len(cards)
scores = [pearsonr(classes == i+1, p300_amplitudes)[0] for i in range(nclasses)]

# Plot the scores
figure(figsize=(4,3))
bar(arange(nclasses)+1, scores, align='center')
xticks(arange(nclasses)+1, cards, rotation=-90)
ylabel('score')

# Pick the card with the highest score
winning_card = argmax(scores)
print('Was your card the %s?' % cards[winning_card])
To compute whether an optical transition between two states is possible, we first import some libraries to make this easier.
# Importing necessary extensions
import numpy as np
import itertools
import functools
import operator

# The use of type annotations requires Python 3.6 or newer
from typing import List
The question is whether there is a transition matrix element between the aforementioned initial and final states. We can easily answer that with a yes, since the receiving level $\beta,+$ is empty in the initial state and no spin flip is involved when moving the particle from $\alpha,+$ to $\beta,+$. Thus, the question now is how to compute this in a systematic way. We can start by taking the XOR (exclusive or) operation between the constructed representations of the two states, which tells us where they differ. Then we check the positions (indices) where we get a 1: if both are odd or both are even, the transition is allowed, whereas if one is in an odd and the other in an even position, it is not allowed, as it would imply a spin flip in the transition. This is equivalent to writing $<f|c^\dagger_{i\alpha'\sigma}c_{j\beta'\sigma}|i>$. Now we can go step by step codifying this procedure: first the XOR operation, then asking where the two states differ.
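The cells that follow operate on `initial` and `final`, which were defined earlier in the notebook and are not part of this excerpt. A hypothetical pair consistent with the discussion (a particle moves from position 0 to position 6, both even, i.e. no spin flip) would be:

```python
# Hypothetical example states (not from the source): one particle moves
# from position 0 (alpha,+ on site i) to position 6 (beta,+ on site j).
initial = [1, 0, 0, 0, 0, 0, 0, 0]
final = [0, 0, 0, 0, 0, 0, 1, 0]
```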
# looking for the positions/levels with different occupation
changes = np.logical_xor(initial, final)

# obtaining the indexes of those positions
np.nonzero(changes)[0].tolist()
We can see that we get a change in positions $0$ and $6$, which correspond to $\alpha,+$ and $\beta,+$ on sites $i$ and $j$, respectively. Now we apply modulo 2, which allows us to check whether the changes are in even or odd positions, mapping even positions to $0$ and odd positions to $1$. Thus, if both are even or both are odd, there will be just one unique element in the list; otherwise there will be two unique elements.
modulo = 2
np.unique(np.remainder(np.nonzero(changes), modulo)).size == 1
Thus, for this choice of initial and final states, the transition is allowed since both positions are even. We can wrap all of this logic in a function.
def is_allowed(initial: List[int], final: List[int]) -> bool:
    """
    Given initial and final states represented as binary lists, return
    whether the transition is allowed considering spin conservation.
    """
    return np.unique(
        np.remainder(
            np.nonzero(
                np.logical_xor(initial, final)), 2)).size == 1
Now we have a function that tells us whether an optical transition between two states is possible. To recapitulate, we can recompute our previous case and then try a different final state that is not allowed, since it involves a spin flip, e.g., [0, 0, 0, 0, 0, 1, 1, 0].
is_allowed(initial, final)
is_allowed(initial, [0, 0, 0, 0, 0, 1, 1, 0])
With this preamble, we are equipped to handle more complex cases. Given the chosen computational representation for the states, the normalization coefficients of the states are left out. Thus, one has to take care to keep track of them when properly constructing the transition matrix element in question later on.

Ca$_2$RuO$_4$

Let us first explore the $d^4$ system. In a low-spin $d^4$ system, only the t$_{2g}$ orbitals ($xy$, $yz$, $xz$) are active, which leads to a 6-element representation for a site. Two neighboring states involved in a transition are concatenated into a single array consisting of 12 elements. For this, we create a function to generate the list representation of states for a given number of electrons and levels.
def generate_states(electrons: int, levels: int) -> List[List[int]]:
    """
    Generates the list representation of a given number of electrons
    and levels (degeneracy not considered).
    """
    # create an array of length equal to the amount of levels
    # with an amount of 1's equal to the number of electrons
    # specified which will be used as a seed/template
    seed = [1 if position < electrons else 0 for position in range(levels)]

    # taking the seed state, we generate all possible permutations
    # and remove duplicates using a set operation
    generated_states = list(set(itertools.permutations(seed)))
    generated_states.sort(reverse=True)

    return generated_states
With this we can generate states of 3, 4, and 5 electrons in a 3-level system with degeneracy 2, meaning 6 levels in total.
states_d3 = generate_states(3, 6)
states_d4 = generate_states(4, 6)
states_d5 = generate_states(5, 6)
We can consider first the $d^4$ states and take a look at them.
states_d4
It is quite a list of generated states, but not all of them are relevant for the problem at hand. This means that we can reduce the number of states beforehand by applying the physical constraints we have. From all the $d^4$ states, we consider as possible initial states for the Ca$_2$RuO$_4$ system only those with a full $d_{xy}$ orbital and the other two electrons distributed over different orbitals. In our representation, this means only the states that have a 1 in the first two entries and no double occupancy in the $zx$ or $yz$ orbitals.
possible_states_d4 = [  # select states that fulfill
    list(state)
    for state in states_d4
    # dxy orbital double occupancy
    if state[0] == 1 and state[1] == 1
    # dzx/dyz orbital single occupancy ('!=' rather than 'is not',
    # which would compare identity instead of value)
    and state[2] != state[3]
]
possible_states_d4
We obtain 4 different $d^4$ states that fulfill the conditions previously indicated. From the previous list, the first and last elements correspond to states with $S_z=\pm1$, whereas the ones in the middle correspond to the two superimposed states of the $S=0$ state, namely, a magnon. These four states could have been easily written down by hand, but the power of this approach becomes evident when generating and selecting the possible states of the $d^3$ configuration. For the $d^3$ states, we want at least those which keep one electron in the $d_{xy}$ orbital, since we know that the other states are not reachable with one movement, as required by optical spectroscopy.
possible_states_d3 = [list(state) for state in states_d3
                      if state[0] == 1   # xy up occupied
                      or state[1] == 1]  # xy down occupied
possible_states_d3
In the case of the $d^5$ states, since our ground state has a doubly occupied $d_{xy}$ orbital, it has to stay occupied.
possible_states_d5 = [list(state) for state in states_d5
                      # xy up down occupied
                      if state[0] == 1 and state[1] == 1]
possible_states_d5
We could generate all $d^3d^5$ combinations and check how many of them there are.
def combine_states(first: List[List[int]], second: List[List[int]]) -> List[List[int]]:
    """
    Takes two lists of list representations of states and returns
    the list representation of a two-site state.
    """
    # Producing all the possible final states.
    # This has to be read from bottom to top.
    #   3) the single site representations are combined
    #      into one single two-site representation
    #   2) we iterate over all the combinations produced
    #   1) make the product of the given first and second
    #      states lists
    final_states = [
        functools.reduce(operator.add, combination)  # 3)
        for combination                              # 2)
        in itertools.product(first, second)          # 1)
    ]
    final_states.sort(reverse=True)
    return final_states

print("The number of combined states is: ",
      len(combine_states(possible_states_d3, possible_states_d5)))
We already saw in the previous section how to check whether a transition is allowed in our list codification of the states. Here we wrap that check in a slightly more complex function that helps us deal with generating final states.
def label(initial, final, levels, mapping):
    """Helper function to label the levels/orbitals involved."""
    changes = np.nonzero(np.logical_xor(initial, final))
    positions = np.remainder(changes, levels) // 2
    return f"{mapping[positions[0][0]]} and {mapping[positions[0][1]]}"

def transition(initial: List[int], final: List[List[int]], debug=False) -> None:
    """
    This function takes the list representation of an initial double-site
    state and a list of final d3 states of interest. Then, it computes
    whether the transition from the given initial state to a compounded
    d3d5 final state is possible. The d5 states are implicitly used in
    the function from those already generated and filtered.
    """
    def process(final_states):
        # We iterate over all final states and test whether the
        # transition from the given initial state is allowed
        for state in final_states:
            allowed = is_allowed(initial, state)
            if allowed:
                labeled = label(initial, state, 6, {0: "xy", 1: "xz", 2: "yz"})
                print(f"  final state {state} allowed between {labeled}.")
            else:
                if debug:
                    print(f"  final state {state} not allowed.")

    d5 = list(possible_states_d5)

    print("From initial state {}".format(initial))
    print("d3d5")
    process(combine_states(final, d5))
    print("d5d3")
    process(combine_states(d5, final))
With this, we can now explore the transitions between the different initial states and final states ($^4A_2$, $^2E$, and $^2T_1$ multiplets for the $d^3$ sector). Concerning the $d^4$ states, as explained in chapter 5, there is the possibility to be in $S_z=\pm1$ or $S_z=0$; we will cover each of them in the following. What we are ultimately interested in is the intensities of the transitions, and thus we need the amplitudes, since $I\sim\hat{A}^2$. We will go through each multiplet, covering first the ground states consisting of only $S_z=\pm1$ and then those with $S_z=0$.

$^4A_2$

First, we will deal with the $^4A_2$ multiplet. The representations for the $|^4A_2,\pm3/2>$ states are given by
A2_32 = [[1,0,1,0,1,0]]      # 4A2 Sz=3/2
A2_neg_32 = [[0,1,0,1,0,1]]  # 4A2 Sz=-3/2
whereas the ones for the $|^4A_2,\pm1/2>$ states are
A2_12 = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]]      # 4A2 Sz=1/2
A2_neg_12 = [[1,0,0,1,0,1], [0,1,1,0,0,1], [0,1,0,1,1,0]]  # 4A2 Sz=-1/2
Notice that the prefactors and signs are missing from this representation and have to be taken into account when combining all the pieces into the end result.

$S_z=\pm1$

Starting with the pure $S_z=\pm1$ initial states, meaning $d_{\uparrow}^4d_{\uparrow}^4$ (FM) and $d_{\uparrow}^4d_{\downarrow}^4$ (AFM), we have the following representations:
FM = [1,1,1,0,1,0,1,1,1,0,1,0]
AFM_up = [1,1,1,0,1,0,1,1,0,1,0,1]
AFM_down = [1,1,0,1,0,1,1,1,1,0,1,0]
Handling the ferromagnetic ordering first, the allowed transitions from the initial state into the $|^4A_2,3/2>$ state are
transition(FM, A2_32)
Comparing the initial and final state representations and considering the $|^4A_2,3/2>$ prefactor, we obtain two possible transitions, with matrix elements $t_{xy,xz}$ and $t_{xy,yz}$. Each one is allowed twice, from swapping the positions of $d^3$ and $d^5$. Then, for the $|^4A_2,\pm1/2>$ states
transition(FM, A2_12)
transition(FM, A2_neg_12)
Thus, for the $|^4A_2,\pm1/2>$ states, there is no allowed transition starting from the FM initial ground state. Repeating for both $^4A_2$ states but starting from the antiferromagnetic ($d^4_\uparrow d^4_\downarrow$) initial state, we get
transition(AFM_up, A2_32)
transition(AFM_up, A2_12)
transition(AFM_up, A2_neg_12)
We see that the AFM initial ground state has no transition matrix element for the $|^4A_2,3/2>$ state, whereas transitions involving the $|^4A_2,\pm1/2>$ states are allowed. Once again, checking the prefactors for the multiplet and the initial ground state, we get transition matrix elements of $t_{xy,xz}/\sqrt{3}$ and $t_{xy,yz}/\sqrt{3}$, twice each. These are the same results as could have been obtained using simple physical arguments.

$S_z=0$

The case of $S_z=0$ is handled similarly; the difference is that we get more terms to handle. We start with the $d_0^4d_\uparrow^4$ initial state and the $|^4A_2,\pm3/2>$ states. Since $d_0^4$ is a superposition of two states, we will split it into its two parts. With $|f>$ any valid final state involving a combination (tensor product) of a $|d^3>$ and a $|d^5>$ state, and with $|i>$ of the type $|d^4_0>|d^4_\uparrow>$ where $|d^4_0>=|A>+|B>$, the matrix element $<f|\hat{t}|i>$ can be split as $<f|\hat{t}|A>|d^4_\uparrow> + <f|\hat{t}|B>|d^4_\uparrow>$.
S0_1 = [1, 1, 1, 0, 0, 1]  # |A>
S0_2 = [1, 1, 0, 1, 1, 0]  # |B>

d_zero_down = [1, 1, 0, 1, 0, 1]
d_zero_up = [1, 1, 1, 0, 1, 0]
Thus, we append the $d^4_\uparrow$ representation to each part of the $d^4_0$ states. Then, checking for the transitions into the $|^4A_2,\pm3/2>$ $d^3$ state we get
transition(S0_1 + d_zero_up, A2_32)
transition(S0_2 + d_zero_up, A2_32)
print("\n\n")
transition(S0_1 + d_zero_up, A2_neg_32)
transition(S0_2 + d_zero_up, A2_neg_32)
Collecting the terms, we find that for $|^4A_2, 3/2>$ there are no transitions into a $|d^3>|d^5>$ final state, but there are transitions into two different $|d^5>|d^3>$ final states, one for each of the $|A>$ and $|B>$ parts. Thus, considering the numerical factors of the involved states, the amplitudes in this case are $\frac{1}{\sqrt{2}}t_{xy,xz}$ and $\frac{1}{\sqrt{2}}t_{xy,yz}$. The states involved in $|^4A_2, -3/2>$ do not show any allowed transition. Now, we can perform the same procedure but considering the $d^4_\downarrow$ state.
transition(S0_1 + d_zero_down, A2_32)
transition(S0_2 + d_zero_down, A2_32)
print("\n\n")
transition(S0_1 + d_zero_down, A2_neg_32)
transition(S0_2 + d_zero_down, A2_neg_32)
Here, we observe the same situation as before but with the roles of the $|^4A_2,\pm3/2>$ states swapped. This means that the contribution of $d^4_0 d^4_\uparrow$ is the same as the $d^4_0 d^4_\downarrow$ one. Similarly, we can start from $d^4_\uparrow d^4_0$ or $d^4_\downarrow d^4_0$, which also swaps the transitions involving a $|d^5>|d^3>$ state to the $|d^3>|d^5>$ ones. The explicit computation is shown next for completeness.
transition(d_zero_up + S0_1, A2_32)
transition(d_zero_up + S0_2, A2_32)
print("\n\n")
transition(d_zero_up + S0_1, A2_neg_32)
transition(d_zero_up + S0_2, A2_neg_32)
print("\n\n")
transition(d_zero_down + S0_1, A2_32)
transition(d_zero_down + S0_2, A2_32)
print("\n\n")
transition(d_zero_down + S0_1, A2_neg_32)
transition(d_zero_down + S0_2, A2_neg_32)
Following the same procedure for the $|^4A_2, 1/2>$ states and $d^4_0d^4_\uparrow$ ground state
transition(S0_1 + d_zero_up, A2_12)
transition(S0_2 + d_zero_up, A2_12)
Here we get some possible transitions to final states of interest. We have to remember that the "receiving" $d^3$ multiplet has three terms, which have to be added if present. For the $|d^3>|d^5>$ case there are two allowed transitions into $d^5$ states, involving $t_{xy,xz}$ and $t_{xy,yz}$ for $|A>$ and $|B>$. From $|A>$ and $|B>$ we find computed terms that correspond to the same $d^5$ final state and have to be added. Thus, considering the $1/\sqrt{2}$ and $1/\sqrt{3}$ prefactors of the states, each term carries a factor of $1/\sqrt{6}$. We then obtain total contributions of $\sqrt{\frac{2}{3}}t_{xy,xz}$ and $\sqrt{\frac{2}{3}}t_{xy,yz}$ for transitions into $d^5_{xz/yz,\downarrow}$ in the $|d^3>|d^5>$ case, whereas for the $|d^5>|d^3>$ one, we obtain $\sqrt{\frac{1}{6}}t_{xy,xz}$ and $\sqrt{\frac{1}{6}}t_{xy,yz}$ for the final states involving the $d^5_{xz,\uparrow}$ and $d^5_{yz,\uparrow}$ states, respectively. And for the $|^4A_2, -1/2>$ state
transition(S0_1 + d_zero_up, A2_neg_12)
transition(S0_2 + d_zero_up, A2_neg_12)
there is no transition found. We repeat for $|d^4_\uparrow d^4_0>$
transition(d_zero_up + S0_1, A2_12)
transition(d_zero_up + S0_2, A2_12)
print("\n\n")
transition(d_zero_up + S0_1, A2_neg_12)
transition(d_zero_up + S0_2, A2_neg_12)
This is the same situation as before but with the positions of the contributions swapped, as we already saw for the $|^4A_2, 3/2>$ case. For completeness, we show the situation with $d^4_\downarrow$ as follows.
transition(S0_1 + d_zero_down, A2_12)
transition(S0_2 + d_zero_down, A2_12)
print("\n\n")
transition(d_zero_down + S0_1, A2_12)
transition(d_zero_down + S0_2, A2_12)
print("\n\n")
transition(S0_1 + d_zero_down, A2_neg_12)
transition(S0_2 + d_zero_down, A2_neg_12)
print("\n\n")
transition(d_zero_down + S0_1, A2_neg_12)
transition(d_zero_down + S0_2, A2_neg_12)
Continuing with $d^4_0d^4_0$, the situation gets more complicated, since $<f|\hat{t}|d^4_0>|d^4_0>$ can be split as $<f|\hat{t}(|A>+|B>)(|A>+|B>)$, which gives 4 terms, labeled $F$ to $I$. Thus, we construct the four combinations for the initial state and calculate each one of them, to later sum them up.
F = S0_1 + S0_1
G = S0_1 + S0_2
H = S0_2 + S0_1
I = S0_2 + S0_2
First dealing with the $|^4A_2,\pm 3/2>$ states for the $d^3$ sector.
transition(F, A2_32)
transition(G, A2_32)
transition(H, A2_32)
transition(I, A2_32)

transition(F, A2_neg_32)
transition(G, A2_neg_32)
transition(H, A2_neg_32)
transition(I, A2_neg_32)
There are no transitions from the $d^4_0d^4_0$ state to $|^4A_2,\pm3/2>$. Now we repeat the same strategy for the $|^4A_2,1/2>$ state
transition(F, A2_12)
transition(G, A2_12)
transition(H, A2_12)
transition(I, A2_12)
Here we have terms for both $|d^3>|d^5>$ and $|d^5>|d^3>$ and for each component of the initial state, which can be grouped by the $d^5$ state they transition into. The term pairs $F-H$ and $G-I$ belong together, involving the $d^5_{xz\downarrow}$ and $d^5_{yz\downarrow}$ states, respectively. Adding the terms corresponding to the participating $d^3$ multiplet and considering the prefactors, we get the terms $\frac{1}{\sqrt{3}}t_{xy,xz}$ and $\frac{1}{\sqrt{3}}t_{xy,yz}$. And for completeness, the $|^4A_2,-1/2>$ state
transition(F, A2_neg_12)
transition(G, A2_neg_12)
transition(H, A2_neg_12)
transition(I, A2_neg_12)
For the $|^4A_2,-1/2>$ states we obtain the same values as for $|^4A_2,1/2>$, but involving the other spin state. Now we have all the amplitudes corresponding to transitions into the $^4A_2$ multiplet enabled by the initial states involving $S_z=0$, namely, $\uparrow 0+ 0\uparrow+ \downarrow 0 + 0\downarrow + 00$.

$|^2E,a/b>$

First we encode the $|^2E,a>$ multiplet and check the $S_z=\pm1$ ground states
Ea = [[0,1,1,0,1,0], [1,0,0,1,1,0], [1,0,1,0,0,1]]

transition(AFM_down, Ea)
transition(AFM_up, Ea)
transition(FM, Ea)
For the $|^2E,a>$ multiplet, only transitions from the AFM ground state are possible. Collecting the prefactors, we get that the transition matrix elements are $-\sqrt{2/3}t_{xy,xz}$ and $-\sqrt{2/3}t_{xy,yz}$, as could easily be checked by hand. Then, for the $|^2E,b>$ multiplet
Eb = [[1,0,1,0,0,1], [1,0,0,1,1,0]]

transition(AFM_down, Eb)
transition(AFM_up, Eb)
transition(FM, Eb)
From the $S=\pm1$ initial states, no transitions to $|^2E,b>$ are possible. We follow with the situation when considering $S=0$. In this case, each initial state is decomposed into two parts, resulting in 4 terms.
transition(S0_1 + S0_1, Ea)
transition(S0_1 + S0_2, Ea)
transition(S0_2 + S0_1, Ea)
transition(S0_2 + S0_2, Ea)
Each one of the combinations is allowed; thus, considering the prefactors of $S_0$ and $|^2E,a>$, we obtain $\sqrt{\frac{2}{3}}t_{xy,xz}$ and $\sqrt{\frac{2}{3}}t_{xy,yz}$. Doing the same for $|^2E,b>$
transition(S0_1 + S0_1, Eb)
transition(S0_1 + S0_2, Eb)
transition(S0_2 + S0_1, Eb)
transition(S0_2 + S0_2, Eb)
Adding all the contributions of the allowed terms, we obtain that, due to the minus sign in the $|^2E,b>$ multiplet, the contribution is 0. We still have to cover the ground state of the kind $d_0^4d_\uparrow^4$. As done previously, we again split $d_0^4$ into its two parts.
S0_1 = [1, 1, 1, 0, 0, 1]
S0_2 = [1, 1, 0, 1, 1, 0]
and then we add the $d^4_\uparrow$ representation to each one. Thus, for the $|^2E, Ea>$ $d^3$ multiplet we get
transition(S0_1 + d_zero_up, Ea)
transition(S0_2 + d_zero_up, Ea)
print("\n\n")
transition(d_zero_up + S0_1, Ea)
transition(d_zero_up + S0_2, Ea)
Here, both parts of the $S_z=0$ state contribute. Checking the prefactors for $S_z=0$ ($1/\sqrt{2}$) and $|^2E, a>$ ($1/\sqrt{6}$), we get a matrix element of $\sqrt{\frac{2}{3}}t_{xy,xz}$. Following with transitions into $|^2E, b>$
transition(S0_1 + d_zero_up, Eb)
transition(S0_2 + d_zero_up, Eb)
print("\n\n")
transition(d_zero_up + S0_1, Eb)
transition(d_zero_up + S0_2, Eb)
$|^2T_1,+/->$

This multiplet has 6 possible forms, with $\textit{xy}$, $\textit{xz}$, or $\textit{yz}$ singly occupied. First we encode the $|^2T_1,+>$ multiplet with singly occupied $\textit{xy}$
T1_p_xy = [[1,0,1,1,0,0], [1,0,0,0,1,1]]

transition(AFM_down, T1_p_xy)
transition(AFM_up, T1_p_xy)
transition(FM, T1_p_xy)
And for the $|^2T_1,->$
T1_n_xy = [[0,1,1,1,0,0], [0,1,0,0,1,1]]

transition(AFM_down, T1_n_xy)
transition(AFM_up, T1_n_xy)
transition(FM, T1_n_xy)
In this case, there is no possible transition to states with a singly occupied $\textit{xy}$ orbital from the $\textit{xy}$ ordered ground state.
T1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]
transition(AFM_up, T1_p_xz)
transition(FM, T1_p_xz)

T1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]
transition(AFM_up, T1_p_yz)
transition(FM, T1_p_yz)
We can see that transitions from the ferromagnetic state into both $|^2T_1, xz\uparrow>$ and $|^2T_1, yz\uparrow>$ are forbidden for the $xy$ orbitally ordered ground state, while transitions with amplitudes $t_{yz,xz}/\sqrt{2}$, $t_{xz,xz}/\sqrt{2}$, $t_{xz,yz}/\sqrt{2}$, and $t_{yz,yz}/\sqrt{2}$ are allowed. For completeness, we show the transitions into the states $|^2T_1, xz\downarrow>$ and $|^2T_1, yz\downarrow>$ from the $\uparrow\uparrow$ and $\uparrow\downarrow$ ground states.
T1_n_xz = [[1,1,0,1,0,0], [0,0,0,1,1,1]]
transition(AFM_up, T1_n_xz)
transition(FM, T1_n_xz)

T1_n_yz = [[1,1,0,0,0,1], [0,0,1,1,0,1]]
transition(AFM_up, T1_n_yz)
transition(FM, T1_n_yz)
$S=0$

Now we address this multiplet when considering the $S=0$ component in the ground state.
S0_1 = [1, 1, 1, 0, 0, 1]
S0_2 = [1, 1, 0, 1, 1, 0]

T1_p_xz = [[1,1,1,0,0,0], [0,0,1,0,1,1]]
T1_p_yz = [[1,1,0,0,1,0], [0,0,1,1,1,0]]
First, we calculate for the $d^4_0d^4_\uparrow$ ground state. Again, the $d^4_0$ state is split into two parts.
transition(S0_1 + d_zero_up, T1_p_xz)
transition(S0_2 + d_zero_up, T1_p_xz)
print("\n\n")
transition(S0_1 + d_zero_up, T1_p_yz)
transition(S0_2 + d_zero_up, T1_p_yz)
And for $d^4_0d^4_\downarrow$
transition(S0_1 + d_zero_down, T1_p_xz)
transition(S0_2 + d_zero_down, T1_p_xz)
print("\n\n")
transition(S0_1 + d_zero_down, T1_p_yz)
transition(S0_2 + d_zero_down, T1_p_yz)
Thus, for final states with singly occupied $\textit{xz}$ multiplet, we obtain transitions involving $t_{yz,xz}/2$, $t_{yz,yz}/2$, $t_{xz,xz}/2$ and $t_{xz,yz}/2$ when accounting for the prefactors of the states. For completeness, repeating for the cases $d^4_\uparrow d^4_0$ and $d^4_\downarrow d^4_0$
transition(d_zero_up + S0_1, T1_p_xz)
transition(d_zero_up + S0_2, T1_p_xz)
print("\n\n")
transition(d_zero_up + S0_1, T1_p_yz)
transition(d_zero_up + S0_2, T1_p_yz)
print("\n\n")
print("\n\n")
transition(d_zero_down + S0_1, T1_p_xz)
transition(d_zero_down + S0_2, T1_p_xz)
print("\n\n")
transition(d_zero_down + S0_1, T1_p_yz)
transition(d_zero_down + S0_2, T1_p_yz)
In this case, considering the prefactors of the states involved, we obtain contributions $t_{yz,xz}/2$, $t_{yz,yz}/2$, $t_{xz,xz}/2$, and $t_{xz,yz}/2$. And at last, $d^4_0d^4_0$
transition(S0_1 + S0_1, T1_p_xz)
transition(S0_1 + S0_2, T1_p_xz)
transition(S0_2 + S0_1, T1_p_xz)
transition(S0_2 + S0_2, T1_p_xz)
print("------------------------")
transition(S0_1 + S0_1, T1_p_yz)
transition(S0_1 + S0_2, T1_p_yz)
transition(S0_2 + S0_1, T1_p_yz)
transition(S0_2 + S0_2, T1_p_yz)
<b>Restart the kernel</b> after you do a pip install (click on the reload button above).
%%bash
pip freeze | grep -e 'flow\|beam'

import tensorflow as tf
import tensorflow_transform as tft
import shutil
print(tf.__version__)

# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'

import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION

%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION

%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
  gsutil mb -l ${REGION} gs://${BUCKET}
fi
Input source: BigQuery

Get data from BigQuery, but defer filtering etc. to Beam. Note that the dayofweek column is now a string.
from google.cloud import bigquery

def create_query(phase, EVERY_N):
    """
    phase: 1=train 2=valid
    """
    base_query = """
WITH daynames AS
  (SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
  EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
  pickup_longitude AS pickuplon,
  pickup_latitude AS pickuplat,
  dropoff_longitude AS dropofflon,
  dropoff_latitude AS dropofflat,
  passenger_count AS passengers,
  'notneeded' AS key
FROM
  `nyc-tlc.yellow.trips`, daynames
WHERE
  trip_distance > 0 AND fare_amount > 0
"""
    if EVERY_N == None:
        if phase < 2:
            # training (the original had mismatched parentheses in these expressions)
            query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
        else:
            query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
    else:
        query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
    return query

query = create_query(2, 100000)
df_valid = bigquery.Client().query(query).to_dataframe()
display(df_valid.head())
df_valid.describe()
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
%%writefile requirements.txt
tensorflow-transform==0.8.0
Test that transform_data is of type PCollection. Test whether the `_ =` assignment is necessary.
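One quick way to check the type returned by applying a transform, before running the full pipeline, is a tiny DirectRunner pipeline (a sketch, not part of the original notebook):

```python
# Sketch: inspect the type of a Beam transform's output.
import apache_beam as beam

with beam.Pipeline('DirectRunner') as p:
    pcoll = p | beam.Create([1, 2, 3])
    print(type(pcoll))  # <class 'apache_beam.pvalue.PCollection'>
```

The full preprocessing pipeline follows: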
import datetime
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl

def is_valid(inputs):
    try:
        pickup_longitude = inputs['pickuplon']
        dropoff_longitude = inputs['dropofflon']
        pickup_latitude = inputs['pickuplat']
        dropoff_latitude = inputs['dropofflat']
        hourofday = inputs['hourofday']
        dayofweek = inputs['dayofweek']
        passenger_count = inputs['passengers']
        fare_amount = inputs['fare_amount']
        return (fare_amount >= 2.5 and pickup_longitude > -78 and pickup_longitude < -70
                and dropoff_longitude > -78 and dropoff_longitude < -70
                and pickup_latitude > 37 and pickup_latitude < 45
                and dropoff_latitude > 37 and dropoff_latitude < 45
                and passenger_count > 0)
    except:
        return False

def preprocess_tft(inputs):
    import datetime
    print(inputs)
    result = {}
    result['fare_amount'] = tf.identity(inputs['fare_amount'])
    result['dayofweek'] = tft.string_to_int(inputs['dayofweek'])  # builds a vocabulary
    result['hourofday'] = tf.identity(inputs['hourofday'])  # pass through
    result['pickuplon'] = (tft.scale_to_0_1(inputs['pickuplon']))  # scaling numeric values
    result['pickuplat'] = (tft.scale_to_0_1(inputs['pickuplat']))
    result['dropofflon'] = (tft.scale_to_0_1(inputs['dropofflon']))
    result['dropofflat'] = (tft.scale_to_0_1(inputs['dropofflat']))
    result['passengers'] = tf.cast(inputs['passengers'], tf.float32)  # a cast
    result['key'] = tf.as_string(tf.ones_like(inputs['passengers']))  # arbitrary TF func
    # engineered features
    latdiff = inputs['pickuplat'] - inputs['dropofflat']
    londiff = inputs['pickuplon'] - inputs['dropofflon']
    result['latdiff'] = tft.scale_to_0_1(latdiff)
    result['londiff'] = tft.scale_to_0_1(londiff)
    dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
    result['euclidean'] = tft.scale_to_0_1(dist)
    return result

def preprocess(in_test_mode):
    import os
    import os.path
    import tempfile
    from apache_beam.io import tfrecordio
    from tensorflow_transform.coders import example_proto_coder
    from tensorflow_transform.tf_metadata import dataset_metadata
    from tensorflow_transform.tf_metadata import dataset_schema
    from tensorflow_transform.beam import tft_beam_io
    from tensorflow_transform.beam.tft_beam_io import transform_fn_io

    job_name = 'preprocess-taxi-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
    if in_test_mode:
        import shutil
        print('Launching local job ... hang on')
        OUTPUT_DIR = './preproc_tft'
        shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
        EVERY_N = 100000
    else:
        print('Launching Dataflow job {} ... hang on'.format(job_name))
        OUTPUT_DIR = 'gs://{0}/taxifare/preproc_tft/'.format(BUCKET)
        import subprocess
        subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
        EVERY_N = 10000

    options = {
        'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
        'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
        'job_name': job_name,
        'project': PROJECT,
        'max_num_workers': 6,
        'teardown_policy': 'TEARDOWN_ALWAYS',
        'no_save_main_session': True,
        'requirements_file': 'requirements.txt'
    }
    opts = beam.pipeline.PipelineOptions(flags=[], **options)
    if in_test_mode:
        RUNNER = 'DirectRunner'
    else:
        RUNNER = 'DataflowRunner'

    # set up raw data metadata
    raw_data_schema = {
        colname: dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
        for colname in 'dayofweek,key'.split(',')
    }
    raw_data_schema.update({
        colname: dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
        for colname in 'fare_amount,pickuplon,pickuplat,dropofflon,dropofflat'.split(',')
    })
    raw_data_schema.update({
        colname: dataset_schema.ColumnSchema(tf.int64, [], dataset_schema.FixedColumnRepresentation())
        for colname in 'hourofday,passengers'.split(',')
    })
    raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))

    # run Beam
    with beam.Pipeline(RUNNER, options=opts) as p:
        with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
            # save the raw data metadata
            raw_data_metadata | 'WriteInputMetadata' >> tft_beam_io.WriteMetadata(
                os.path.join(OUTPUT_DIR, 'metadata/rawdata_metadata'), pipeline=p)

            # read training data from bigquery and filter rows
            raw_data = (p
                        | 'train_read' >> beam.io.Read(beam.io.BigQuerySource(
                            query=create_query(1, EVERY_N), use_standard_sql=True))
                        | 'train_filter' >> beam.Filter(is_valid))
            raw_dataset = (raw_data, raw_data_metadata)

            # analyze and transform training data
            transformed_dataset, transform_fn = (
                raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
            transformed_data, transformed_metadata = transformed_dataset

            # save transformed training data to disk in efficient tfrecord format
            transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
                os.path.join(OUTPUT_DIR, 'train'), file_name_suffix='.gz',
                coder=example_proto_coder.ExampleProtoCoder(transformed_metadata.schema))

            # read eval data from bigquery and filter rows
            raw_test_data = (p
                             | 'eval_read' >> beam.io.Read(beam.io.BigQuerySource(
                                 query=create_query(2, EVERY_N), use_standard_sql=True))
                             | 'eval_filter' >> beam.Filter(is_valid))
            raw_test_dataset = (raw_test_data, raw_data_metadata)

            # transform eval data
            transformed_test_dataset = (
                (raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
            transformed_test_data, _ = transformed_test_dataset

            # save transformed eval data to disk in efficient tfrecord format
            transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
                os.path.join(OUTPUT_DIR, 'eval'), file_name_suffix='.gz',
                coder=example_proto_coder.ExampleProtoCoder(transformed_metadata.schema))

            # save transformation function to disk for use at serving time
            transform_fn | 'WriteTransformFn' >> transform_fn_io.WriteTransformFn(
                os.path.join(OUTPUT_DIR, 'metadata'))

preprocess(in_test_mode=False)  # change to True to run locally

%%bash
# ls preproc_tft
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
<h2> Train off preprocessed data </h2>
%%bash
rm -rf taxifare_tft.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD/taxifare_tft
python -m trainer.task \
  --train_data_paths="gs://${BUCKET}/taxifare/preproc_tft/train*" \
  --eval_data_paths="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
  --output_dir=./taxi_trained \
  --train_steps=10 --job-dir=/tmp \
  --metadata_path=gs://${BUCKET}/taxifare/preproc_tft/metadata

!ls $PWD/taxi_trained/export/exporter

%%writefile /tmp/test.json
{"dayofweek":"Thu","hourofday":17,"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}

%%bash
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
  --model-dir=./taxi_trained/export/exporter/${model_dir} \
  --json-instances=/tmp/test.json
If we don't indicate that we want a simple 2-level list of lists with lolviz(), we get a generic object graph:
objviz(table)

courses = [
    ['msan501', 51],
    ['msan502', 32],
    ['msan692', 101]
]
mycourses = courses
print(id(mycourses), id(courses))
objviz(courses)
You can also display strings as arrays in isolation (but not in other data structures as I figured it's not that useful in most cases):
strviz('New York')

class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

root = Tree('parrt',
            Tree('mary',
                 Tree('jim',
                      Tree('srinivasan'),
                      Tree('april'))),
            Tree('xue', None, Tree('mike')))
treeviz(root)

from IPython.display import display

N = 100

def f(x):
    a = ['hi', 'mom']
    thestack = callsviz(varnames=['table', 'x', 'head', 'courses', 'N', 'a'])
    display(thestack)

f(99)
If you'd like to save an image from jupyter, use render():
def f(x):
    thestack = callsviz(varnames=['table', 'x', 'tree', 'head', 'courses'])
    print(thestack.source[:100])  # show first 100 char of graphviz syntax
    thestack.render("/tmp/t")  # save as PDF

f(99)
Numpy viz
import numpy as np

A = np.array([[1, 2, 8, 9], [3, 4, 22, 1]])
objviz(A)

B = np.ones((100, 100))
for i in range(100):
    for j in range(100):
        B[i, j] = i + j
B

matrixviz(A)
matrixviz(B)

A = np.array(np.arange(-5.0, 5.0, 2.1))
B = A.reshape(-1, 1)
matrices = [A, B]

def f():
    w, h = 20, 20
    C = np.ones((w, h), dtype=int)
    for i in range(w):
        for j in range(h):
            C[i, j] = i + j
    display(callsviz(varnames=['matrices', 'A', 'C']))

f()
Pandas dataframes, series
import pandas as pd

df = pd.DataFrame()
df["sqfeet"] = [750, 800, 850, 900, 950]
df["rent"] = [1160, 1200, 1280, 1450, 2000]
objviz(df)
objviz(df.rent)
Model 3: More sophisticated models

What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:

DNN

To create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.
%%bigquery
-- This model type is in alpha, so it may not work for you yet.
-- This training takes on the order of 15 minutes.
CREATE OR REPLACE MODEL serverlessml.model3b_dnn
OPTIONS(input_label_cols=['fare_amount'],
        model_type='dnn_regressor', hidden_units=[32, 8]) AS
SELECT * FROM serverlessml.cleaned_training_data

%%bigquery
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn)
Behavioural parameters
gam_pars = {
    'Control': dict(Freq=(2.8, 0.0, 1), Rare=(2.8, 0.75, 1)),
    'Patient': dict(Freq=(3.0, 0.0, 1.2), Rare=(3.0, 1., 1.2))}

subs_per_group = 20
n_trials = 1280
probs = dict(Rare=0.2, Freq=0.8)
accuracy = dict(Control=dict(Freq=0.96, Rare=0.886),
                Patient=dict(Freq=0.945, Rare=0.847))
logfile_date_range = (datetime.date(2016, 9, 1), datetime.date(2017, 8, 31))
Plot RT distributions for sanity checking
fig, axs = plt.subplots(2, 1)

# These are chosen empirically to generate sane RTs
x_shift = 220
x_mult = 100

cols = dict(Patient='g', Control='r')
lins = dict(Freq='-', Rare='--')

# For plotting
x = np.linspace(gamma.ppf(0.01, *gam_pars['Control']['Freq']),
                gamma.ppf(0.99, *gam_pars['Patient']['Rare']), 100)

RTs = {}
ax = axs[0]
for sub in gam_pars.keys():
    for cond in ['Freq', 'Rare']:
        lab = sub + '/' + cond
        ax.plot(x_shift + x_mult * x, gamma.pdf(x, *gam_pars[sub][cond]),
                cols[sub], ls=lins[cond], lw=2, alpha=0.6, label=lab)
        RTs.setdefault(sub, {}).update(
            {cond: gamma.rvs(*gam_pars[sub][cond],
                             size=int(probs[cond] * n_trials)) * x_mult + x_shift})
ax.legend(loc='best', frameon=False)

ax = axs[1]
for sub in gam_pars.keys():
    for cond in ['Freq', 'Rare']:
        lab = sub + '/' + cond
        # 'density' replaces the deprecated 'normed' argument of ax.hist
        ax.hist(RTs[sub][cond], bins=20, density=True,
                histtype='stepfilled', alpha=0.2, label=lab)
        print('{:s}\tmedian = {:.1f} [{:.1f}, {:.1f}]'
              .format(lab, np.median(RTs[sub][cond]),
                      np.min(RTs[sub][cond]), np.max(RTs[sub][cond])))
ax.legend(loc='best', frameon=False)
plt.show()
Create logfile data
# calculate time in 100 us steps
# 1-3 sec start delay
start_time = np.random.randint(1e4, 3e4)

# Modify ISI a little from paper: accommodate slightly longer tails
# of the simulated distributions (up to about 1500 ms)
ISI_ran = (1.5e4, 1.9e4)

freq_stims = string.ascii_lowercase
rare_stims = string.digits
Create subject IDs
# ctrl_NUMs = list(np.random.randint(10, 60, size=2 * subs_per_group))
ctrl_NUMs = list(random.sample(range(10, 60), 2 * subs_per_group))
pat_NUMs = sorted(random.sample(ctrl_NUMs, subs_per_group))
ctrl_NUMs = sorted([c for c in ctrl_NUMs if not c in pat_NUMs])

IDs = dict(Control=['{:04d}_{:s}'.format(n, ''.join(random.choices(
               string.ascii_uppercase, k=3))) for n in ctrl_NUMs],
           Patient=['{:04d}_{:s}'.format(n, ''.join(random.choices(
               string.ascii_uppercase, k=3))) for n in pat_NUMs])
Write subject ID codes to a CSV file
with open(os.path.join(logs_autogen, 'subj_codes.csv'), 'wt') as fp:
    csvw = csv.writer(fp, delimiter=';')
    for stype in IDs.keys():
        for sid in IDs[stype]:
            csvw.writerow([sid, stype])
Function for generating individualised RTs
def indiv_RT(sub_type, cond):
    # globals: gam_pars, probs, n_trials, x_mult, x_shift
    return(gamma.rvs(*gam_pars[sub_type][cond],
                     size=int(probs[cond] * n_trials)) * x_mult + x_shift)
Write logfiles
# Write to empty logs dir
if not os.path.exists(logs_autogen):
    os.makedirs(logs_autogen)
for f in glob.glob(os.path.join(logs_autogen, '*.log')):
    os.remove(f)

for stype in ['Control', 'Patient']:
    for sid in IDs[stype]:
        log_date = random_date(*logfile_date_range)
        log_fname = '{:s}_{:s}.log'.format(sid, log_date.isoformat())
        with open(os.path.join(logs_autogen, log_fname), 'wt') as log_fp:
            log_fp.write('# Original filename: {:s}\n'.format(log_fname))
            log_fp.write('# Time unit: 100 us\n')
            log_fp.write('# RARECAT=digit\n')
            log_fp.write('#\n')
            log_fp.write('# Time\tHHGG\tEvent\n')

            reacts = np.r_[indiv_RT(stype, 'Freq'), indiv_RT(stype, 'Rare')]
            # no need to shuffle ITIs...
            itis = np.random.randint(*ISI_ran, size=len(reacts))

            n_freq = len(RTs[stype]['Freq'])
            n_rare = len(RTs[stype]['Rare'])
            n_resps = n_freq + n_rare
            resps = np.random.choice([0, 1],
                                     p=[1 - accuracy[stype]['Rare'],
                                        accuracy[stype]['Rare']],
                                     size=n_resps)

            # this only works in python 3.6
            freq_s = random.choices(freq_stims, k=n_freq)
            # for older python:
            # random.choice(string.ascii_uppercase) for _ in range(N)
            rare_s = random.choices(rare_stims, k=n_rare)
            stims = np.r_[freq_s, rare_s]
            resps = np.r_[np.random.choice([0, 1],
                                           p=[1 - accuracy[stype]['Freq'],
                                              accuracy[stype]['Freq']],
                                           size=n_freq),
                          np.random.choice([0, 1],
                                           p=[1 - accuracy[stype]['Rare'],
                                              accuracy[stype]['Rare']],
                                           size=n_rare)]
            # plain int instead of the removed np.int alias
            corr_answs = np.r_[np.ones(n_freq, dtype=int),
                               2 * np.ones(n_rare, dtype=int)]

            # This shuffles the lists together...
            tmp = list(zip(reacts, stims, resps, corr_answs))
            np.random.shuffle(tmp)
            reacts, stims, resps, corr_answs = zip(*tmp)
            assert len(resps) == len(stims)

            prev_present, prev_response = start_time, -1
            for rt, iti, stim, resp, corr_ans in \
                    zip(reacts, itis, stims, resps, corr_answs):
                # This is needed to ensure that the present response time
                # exceeds the previous response time (plus a little buffer)
                # Slightly skews (truncates) the distribution, but what the hell
                pres_time = max([prev_present + iti, prev_response + 100])
                resp_time = pres_time + int(10. * rt)
                prev_present = pres_time
                prev_response = resp_time

                log_fp.write('{:d}\t42\tSTIM={:s}\n'.format(pres_time, stim))
                if resp == 0 and corr_ans == 1:
                    answ = 2
                elif resp == 0 and corr_ans == 2:
                    answ = 1
                else:
                    answ = corr_ans
                log_fp.write('{:d}\t42\tRESP={:d}\n'.format(resp_time, answ))
Notice this is a typical dataframe, possibly with more columns as strings than numbers. The text is contained in the column 'text'. Notice also that there are missing texts. For now, we will drop these texts so we can move forward with text analysis. In your own work, you should justify dropping missing texts when possible.
df = df.dropna(subset=["text"])
df

##Ex: Print the first text in the dataframe (starts with "A DOG WITH A BAD NAME").
###Hint: Remind yourself about the syntax for slicing a dataframe
<a id='stats'></a>

1. Descriptive Statistics and Visualization

The first thing we probably want to do is describe our data, to make sure everything is in order. We can use the describe function for the numerical data, and the value_counts function for categorical data.
print(df.describe())  # get descriptive statistics for all numerical columns
print()
print(df['author gender'].value_counts())  # frequency counts for categorical data
print()
print(df['year'].value_counts())  # treat year as a categorical variable
print()
print(df['year'].mode())  # find the year in which the most novels were published
We can do a few things by just using the metadata already present. For example, we can use the groupby and the count() function to graph the number of books by male and female authors. This is similar to the value_counts() function, but allows us to plot the output.
# create a pandas object that is a groupby dataframe, grouped on author gender
grouped_gender = df.groupby("author gender")
print(grouped_gender['text'].count())
Let's graph the number of texts by gender of author.
grouped_gender['text'].count().plot(kind='bar')
plt.show()

#Ex: Create a variable called 'grouped_year' that groups the dataframe by year.
## Print the number of texts per year.
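The next cells assume the exercise has been completed and `grouped_year` exists. One possible solution:

```python
# One possible solution to the exercise above (the next cells rely on it):
grouped_year = df.groupby('year')
print(grouped_year['text'].count())
```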
We can graph this via a line graph.
grouped_year['text'].count().plot(kind='line')
plt.show()
Oops! That doesn't look right! Python automatically converted the year to scientific notation. We can set that option to False.
plt.ticklabel_format(useOffset=False)  # forces Python to not convert numbers
grouped_year['text'].count().plot(kind='line')
plt.show()
We haven't done any text analysis yet. Let's apply some of our text analysis techniques to the text, add columns with the output, and analyze/visualize the output.

<a id='str'></a>

2. The str attribute

Luckily for us, pandas has an attribute called 'str' which allows us to access Python's built-in string functions. For example, we can make the text lowercase, and assign this to a new column.

Note: You may get a "SettingWithCopyWarning" highlighted with a pink background. This is not an error; it is Python telling you that while the code is valid, you might be doing something stupid. In this case, the warning is a false positive. In most cases you should read the warning carefully and try to fix your code.
df['text_lc'] = df['text'].str.lower()
df

##Ex: create a new column, 'text_split', that contains the lower case text split into a list.
####HINT: split on white space, don't tokenize it.
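The next section relies on the 'text_split' column from this exercise. One possible solution:

```python
# One possible solution to the exercise above (later cells use 'text_split'):
df['text_split'] = df['text_lc'].str.split()
```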
<a id='apply'></a>

3. The apply function

We can also apply a function to each row. To get a word count of a text file, we would take the length of the split string like this: len(text_split). If we want to do this on every row in our dataframe, we can use the apply() function.
df['word_count'] = df['text_split'].apply(len)
df
What is the average length of each novel in our data? With pandas, this is easy!
df['word_count'].mean()