markdown | code | path | repo_name | license | hash
---|---|---|---|---|---|
2. Application to the Iris Dataset
We can visualize the result with a confusion matrix. | from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y, Y_pred)
print(cm) | clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb | sebastiandres/mat281 | cc0-1.0 | 2c0b9d68e094f102d88a6aaa465749ba |
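For context, here is a minimal, hypothetical sketch of how `Y_pred` could be produced with scikit-learn's `LogisticRegression` on the Iris data; the earlier cells of the notebook are not included in this excerpt, so the variable names below are assumptions that merely mirror the cell above.

```python
# Hypothetical sketch: fit a logistic regression on Iris and predict in-sample labels
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

X, Y = load_iris(return_X_y=True)           # features and true labels
model = LogisticRegression(max_iter=1000)   # max_iter raised to ensure convergence
Y_pred = model.fit(X, Y).predict(X)         # in-sample predictions, as used above
print(confusion_matrix(Y, Y_pred))
```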
Run
Provide the run information:
* run id
* run metalink containing the 3 by 3 kernel extractions
* participant | run_id = '0000005-150701000025418-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-f/0000005-150701000025418-oozie-oozi-W/results.metalink?'
participant = 'participant-f' | evaluation-participant-f.ipynb | ocean-color-ac-challenge/evaluate-pearson | apache-2.0 | ed877e21fc2fbe4fefec6ac00f451890 |
Below we define the rotation and reflection matrices | def rotation_matrix(angle, d):
directions = {
"x":[1.,0.,0.],
"y":[0.,1.,0.],
"z":[0.,0.,1.]
}
direction = np.array(directions[d])
sina = np.sin(angle)
cosa = np.cos(angle)
# rotation matrix around unit vector
R = np.diag([cosa, cosa, cosa])
R += np.outer(direction, direction) * (1.0 - cosa)
direction *= sina
R += np.array([[ 0.0, -direction[2], direction[1]],
[ direction[2], 0.0, -direction[0]],
[-direction[1], direction[0], 0.0]])
return R
def reflection_matrix(d):
m = {
"x":[[-1.,0.,0.],[0., 1.,0.],[0.,0., 1.]],
"y":[[1., 0.,0.],[0.,-1.,0.],[0.,0., 1.]],
"z":[[1., 0.,0.],[0., 1.,0.],[1.,0.,-1.]]
}
return np.array(m[d]) | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | d67a38461006279705b2768b96de4362 |
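A quick sanity check of these helpers (a sketch that assumes the functions above are in scope): rotating the x unit vector by $\pi/2$ about $z$ should give the y unit vector, and `reflection_matrix("x")` should flip the x component.

```python
import numpy as np

e_x = np.array([1., 0., 0.])
print(np.dot(rotation_matrix(np.pi/2, "z"), e_x))   # expect ~[0, 1, 0]
print(np.dot(reflection_matrix("x"), e_x))          # expect [-1, 0, 0]
```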
Define position vectors | def pos_vector(a,b):
return np.array([[np.cos(b)*np.cos(a)],
[np.cos(b)*np.sin(a)],
[np.sin(b)]])
def pos_from_vector(vec):
a,b,c = vec
a_ = np.arctan2(b,a)
c_ = np.arcsin(c)
return a_,c_ | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | c61145a079233d6e61fb86fe9e3be20d |
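As a usage check (a small sketch assuming the functions above), `pos_from_vector` should invert `pos_vector` for angles in the principal range:

```python
a, b = 0.3, 0.2
vec = pos_vector(a, b).ravel()   # unit vector for the angles (a, b)
print(pos_from_vector(vec))      # expect approximately (0.3, 0.2)
```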
Generic transform | def transform(a,b,R,inverse=True):
P = pos_vector(a,b)
if inverse:
R = R.T
V = np.dot(R,P).ravel()
a,b = pos_from_vector(V)
a = 0 if np.isnan(a) else a
b = 0 if np.isnan(b) else b
return a,b | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | 6c67b07e922c21c1799334b16103e6fc |
Reference conversion formula from Duncan's old TCC | def hadec_to_nsew(ha,dec):
ew = np.arcsin((0.9999940546 * np.cos(dec) * np.sin(ha))
- (0.0029798011806 * np.cos(dec) * np.cos(ha))
+ (0.002015514993 * np.sin(dec)))
ns = np.arcsin(((-0.0000237558704 * np.cos(dec) * np.sin(ha))
+ (0.578881847 * np.cos(dec) * np.cos(ha))
+ (0.8154114339 * np.sin(dec)))
/ np.cos(ew))
return ns,ew | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | 7b088ad19395295a521533e5eda394c2 |
New conversion formula using rotation matrices
What do we think we should have:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \\
\cos(\rm EW)\sin(\rm NS) \\
\sin(\rm EW)
\end{bmatrix}
=
\mathbf{R}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \\
\cos(\delta)\sin(\rm HA) \\
\sin(\delta)
\end{bmatrix}
\end{equation}
Where $\mathbf{R}$ is a composite rotation matrix.
We need a rotation about the axis of the array plus an orthogonal rotation w.r.t. the array centre. Note that the NS convention is flipped, so HA and NS run clockwise and anti-clockwise respectively when viewed from the north pole in both coordinate systems.
\begin{equation}
\mathbf{R}_x
=
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\theta) & -\sin(\theta) \\
0 & \sin(\theta) & \cos(\theta)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_y
=
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \\
0 & 1 & 0 \\
-\sin(\phi) & 0 & \cos(\phi)
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R}_z
=
\begin{bmatrix}
\cos(\eta) & -\sin(\eta) & 0 \\
\sin(\eta) & \cos(\eta) & 0 \\
0 & 0 & 1
\end{bmatrix}
\end{equation}
\begin{equation}
\mathbf{R} = \mathbf{R}_x \mathbf{R}_y \mathbf{R}_z
\end{equation}
Here I think $\theta$ is a $3\pi/2$ rotation to put the telescope pole (west) at the telescope zenith and $\phi$ is also $\pi/2$ to rotate the telescope meridian (which is lengthwise on the array, what we traditionally think of as the meridian is actually the equator of the telescope) into the position of $Az=0$.
However, the rotations of NS and HA run in opposite senses, so a reflection is needed. For example, the reflection that flips the $z$ component (i.e. reflection about the plane normal to the $z$ axis):
\begin{equation}
\mathbf{\bar{R}}_z
=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\end{equation}
Conversion to azimuth and elevations should therefore require $\theta=-\pi/2$ and $\phi=\pi/2$ with a reflection about $x$.
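As a sketch of that composition (ignoring the skew and slope for the moment, and assuming the `rotation_matrix` and `reflection_matrix` helpers defined above), the NS/EW-to-Az/El matrix would be built roughly as follows; the full implementation including skew and slope follows below.

```python
import numpy as np

# theta = -pi/2 about x, phi = pi/2 about y, followed by a reflection about x
R_sketch = np.dot(np.dot(rotation_matrix(-np.pi/2, "x"),
                         rotation_matrix(np.pi/2, "y")),
                  reflection_matrix("x"))
print(R_sketch)
```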
Taking into account the EW skew and slope of the telescope:
\begin{equation}
\begin{bmatrix}
\cos(\rm EW)\cos(\rm NS) \\
\cos(\rm EW)\sin(\rm NS) \\
\sin(\rm EW)
\end{bmatrix}
=
\begin{bmatrix}
\cos(\alpha) & -\sin(\alpha) & 0 \\
\sin(\alpha) & \cos(\alpha) & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\beta) & 0 & \sin(\beta) \\
0 & 1 & 0 \\
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & -1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 0 & -1 \\
0 & 1 & 0 \\
1 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
-1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos(\delta)\cos(\rm HA) \\
\cos(\delta)\sin(\rm HA) \\
\sin(\delta)
\end{bmatrix}
\end{equation}
So the correction matrix that takes telescope coordinates to NS, EW is:
\begin{bmatrix}
\cos(\alpha)\sin(\beta) & -\sin(\beta) & \cos(\alpha)\sin(\beta) \\
\sin(\alpha)\cos(\beta) & \cos(\alpha) & \sin(\alpha)\sin(\beta) \\
-\sin(\beta) & 0 & \cos(\beta)
\end{bmatrix}
and to Az/El:
\begin{bmatrix}
\sin(\alpha) & -\cos(\alpha)\sin(\beta) & -\cos(\alpha)\cos(\beta) \\
\cos(\alpha) & -\sin(\alpha)\sin(\beta) & -\sin(\alpha)\cos(\beta) \\
-\cos(\beta) & 0 & \sin(\beta)
\end{bmatrix} | # There should be a slope and tilt conversion to get accurate change
#skew = 4.363323129985824e-05
#slope = 0.0034602076124567475
#skew = 0.00004
#slope = 0.00346
skew = 0.01297 # <- this is the skew I get if I optimize for the same results as duncan's system
slope= 0.00343
def telescope_to_nsew_matrix(skew,slope):
R = rotation_matrix(skew,"z")
R = np.dot(R,rotation_matrix(slope,"y"))
return R
def nsew_to_azel_matrix(skew,slope):
pre_R = telescope_to_nsew_matrix(skew,slope)
x_rot = rotation_matrix(-np.pi/2,"x")
y_rot = rotation_matrix(np.pi/2,"y")
R = np.dot(x_rot,y_rot)
R = np.dot(pre_R,R)
R_bar = reflection_matrix("x")
R = np.dot(R,R_bar)
return R
def nsew_to_azel(ns, ew):
az,el = transform(ns,ew,nsew_to_azel_matrix(skew,slope))
return az,el
print(nsew_to_azel(0, np.pi/2)) # should be -pi/2 and 0
print(nsew_to_azel(-np.pi/2, 0)) # should be -pi and 0
print(nsew_to_azel(0.0, .5)) # should be pi/2 and something near pi/2
print(nsew_to_azel(-.5, .5)) # less than pi/2 and less than pi/2
print(nsew_to_azel(.5, -.5))
print(nsew_to_azel(.5, .5)) | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | 0947627ff0baf4e134178ff874af5aa6 |
The inverse of this is: | def azel_to_nsew(az, el):
ns,ew = transform(az,el,nsew_to_azel_matrix(skew,slope).T)
return ns,ew | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | fe1e47771aeb3e917ca66bd90f51ad45 |
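Since the transformation matrix is orthogonal, the two functions should round-trip. A quick check (a sketch assuming the functions and the `skew`/`slope` globals defined above):

```python
ns, ew = 0.3, 0.4
az, el = nsew_to_azel(ns, ew)
print(azel_to_nsew(az, el))   # expect approximately (0.3, 0.4)
```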
Extending this to HA Dec | mol_lat = -0.6043881274183919 # in radians
def azel_to_hadec_matrix(lat):
rot_y = rotation_matrix(np.pi/2-lat,"y")
rot_z = rotation_matrix(np.pi,"z")
R = np.dot(rot_y,rot_z)
return R
def azel_to_hadec(az,el,lat):
ha,dec = transform(az,el,azel_to_hadec_matrix(lat))
return ha,dec
def nsew_to_hadec(ns,ew,lat,skew=skew,slope=slope):
R = np.dot(nsew_to_azel_matrix(skew,slope),azel_to_hadec_matrix(lat))
ha,dec = transform(ns,ew,R)
return ha,dec
ns,ew = 0.8,0.8
az,el = nsew_to_azel(ns,ew)
print "AzEl:",az,el
ha,dec = azel_to_hadec(az,el,mol_lat)
print "HADec:",ha,dec
ha,dec = nsew_to_hadec(ns,ew,mol_lat)
print "HADec2:",ha,dec
# This is Duncan's version
ns_,ew_ = hadec_to_nsew(ha,dec)
print "NSEW Duncan:",ns_,ew_
print "NS offset:",ns_-ns," EW offset:",ew_-ew
def test(ns,ew,skew,slope):
ha,dec = nsew_to_hadec(ns,ew,mol_lat,skew,slope)
ns_,ew_ = hadec_to_nsew(ha,dec)
no,eo = ns-ns_,ew-ew_
no = 0 if np.isnan(no) else no
eo = 0 if np.isnan(eo) else eo
return no,eo
ns = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
ew = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)
def test2(a):
skew,slope = a
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
a = abs(out_ns).sum()#abs(np.median(out_ns))
b = abs(out_ew).sum()#abs(np.median(out_ew))
print(a, b)
print(max(a, b))
return max(a,b)
#minimize(test2,[skew,slope])
# Plotting out the conversion error as a function of HA and Dec.
# Colour scale is log of the absolute difference between original system and new system
ns = np.linspace(-np.pi/2,np.pi/2,10)
ew = np.linspace(-np.pi/2,np.pi/2,10)
out_ns = np.empty([10,10])
out_ew = np.empty([10,10])
for ii,n in enumerate(ns):
for jj,k in enumerate(ew):
print(jj)
a,b = test(n,k,skew,slope)
out_ns[ii,jj] = a
out_ew[ii,jj] = b
plt.figure()
plt.subplot(121)
plt.imshow(abs(out_ns),aspect="auto")
plt.colorbar()
plt.subplot(122)
plt.imshow(abs(out_ew),aspect="auto")
plt.colorbar()
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from itertools import product, combinations
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_aspect("equal")
#draw sphere
u, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]
x=np.cos(u)*np.sin(v)
y=np.sin(u)*np.sin(v)
z=np.cos(v)
ax.plot_wireframe(x, y, z, color="r",lw=1)
R = rotation_matrix(np.pi/2,"x")
pos_v = np.array([[x],[y],[z]])
p = pos_v.T
for i in p:
for j in i:
j[0] = np.dot(R,j[0])
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
a = Arrow3D([0,1],[0,0.1],[0,.10], mutation_scale=20, lw=1, arrowstyle="-|>", color="k")
ax.add_artist(a)
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
x=p.T[0,0]
y=p.T[1,0]
z=p.T[2,0]
ax.plot_wireframe(x, y, z, color="b",lw=1)
plt.show() | docs/Molonglo_coords.ipynb | ewanbarr/anansi | apache-2.0 | e7feef917fa597a962075c66853db735 |
Procedure
The experimental setup consists of a double-walled, airtight brass tube R. Inside the tube, a loudspeaker L is mounted at one end and a condenser microphone KM at the opposite end. To determine the speed of sound over different distances, the wall carrying the KM can be moved with a hand crank. The temperature inside can be measured with a chromel-alumel thermocouple. There are also, of course, an inlet valve and an outlet valve.
The exact experimental arrangement is shown in the illustration below.
Time-of-flight measurement
The setup for determining the speed of sound with the time-of-flight method is shown in the following figure.
To generate short, steep sound pulses, a capacitor C is discharged across the loudspeaker L via a push button. At the same moment, the timer is signalled to start. After some delay, and with sufficient gain from the audio amplifier, the condenser microphone picks up the pulse and signals the timer to stop.
An oscilloscope is available to check that everything works; the pulses can be observed on it and should look roughly like the following figure.
Resonance method
The setup for the resonance measurement is shown below.
For the resonance measurement, the pulse generator from the time-of-flight setup is replaced by a sine generator, so the loudspeaker continuously sends waves into the tube. On the oscilloscope, the transmitted signal is plotted against the received signal in XY mode. At resonance, the response of the resonator should be linear and a straight line should appear on the oscilloscope; if an ellipse is still visible, there is still hysteresis (a phase shift) and the interference is not yet fully constructive.
The distance between the microphone and the loudspeaker can then be adjusted with the hand crank, which allows the distance between two wave crests to be measured.
Gas mixtures
Both methods were applied to pure air as well as to helium and SF6.
In addition, helium and SF6 were mixed in 20% steps and measured.
The mixing ratio could simply be set via the pressure in the vessel, since, as shown in \ref{eq:m_p}, the pressure is directly proportional to the number of gas molecules.
Constants
The following constants can all be found in Horst Kuchling's Taschenbuch der Physik. | # Constants
name = ['Luft', 'Helium', 'SF6']
mm = [28.95, 4.00, 146.06]
ri = [287, 2078, 56.92]
cp = [1.01, 5.23, 0.665]
cv = [0.72, 3.21, 0.657]
k = [1.40, 1.63, 1.012]
c0 = [344, 971, 129]
constants_tbl = PrettyTable(
list(zip(name, mm, ri, cp, cv, k, c0)),
label='tab:gase',
caption='Kennwerte und Konstanten der verwendeten Gase.',
extra_header=[
'Gas',
r'$M_m[\frac{g}{mol}]$',
r'$R_i[\frac{J}{kg K}]$',
r'$c_p[\frac{kJ}{kg K}]$',
r'$c_v[\frac{kJ}{kg K}]$',
r'$K$',
r'$c_0[\frac{m}{s}]$'
], entries_per_column=3)
constants_tbl.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 202fcbb8075c8dd2e12e89d5cd023ab2 |
Measuring instruments used | # Utilities
name = ['Oszilloskop', 'Zeitmesser', 'Funktionsgenerator', 'Verstärker', 'Vakuumpumpe', 'Netzgerät', 'Temperaturmessgerät']
manufacturer = ['LeCroy', 'Keithley', 'HP', 'WicTronic', 'Pfeiffer', ' ', ' ']
device = ['9631 Dual 300MHz Oscilloscope 2.5 GS/s', '775 Programmable Counter/Timer', '33120A 15MHz Waveform Generator', 'Zweikanalverstärker', 'Vacuum', ' ', ' ']
utilities_tbl = PrettyTable(
list(zip(name, manufacturer, device)),
label='tab:utilities',
caption='Verwendete Gerätschaften',
extra_header=[
'Funktion',
'Hersteller',
'Gerätename',
], entries_per_column=7)
utilities_tbl.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 67e9d153aa6e88afc4686b3ba3167915 |
Analysis
For all measurements, a vacuum of -0.8 bar (gauge) was first created in the vessel, which was then filled with gas up to 0.3 bar. This was done twice each time in order to flush out residues of the previous gas.
Time-of-flight method
For the time-of-flight method, the travel time from the loudspeaker to the microphone was measured at several distances. The speed of sound could then be determined with a linear regression. Systematic errors such as the choice of the trigger threshold and the positions of the microphone and the loudspeaker are absorbed in the intercept $t_0$ and therefore no longer need to be taken into account. | # Laufzeitenmethode Luft, Helium, SF6
import collections
# Read Data
dfb = pd.read_csv('data/laufzeitmethode.csv')
ax = None
i = 0
for gas1 in ['luft', 'helium', 'sf6']:
df = dfb.loc[dfb['gas1'] == gas1].loc[dfb['gas2'] == gas1].loc[dfb['p'] == 1]
slope, intercept, sem, r, p = stats.linregress(df['t'], df['s'])
n = np.linspace(0.0, df['t'][9 + i * 10] * 1.2, 100)
results[gas1] = {
gas1: {
}
}
results[gas1][gas1]['1_l_df'] = df
results[gas1][gas1]['1_l_slope'] = slope
results[gas1][gas1]['1_l_intercept'] = intercept
results[gas1][gas1]['1_l_sem'] = sem
ax = df.plot(kind='scatter', x='t', y='s', label='gemessene Laufzeit')
plt.plot(n, [i * slope + intercept for i in n], label='linearer Fit der Laufzeit', axes=ax)
plt.xlabel('Laufzeit [s]')
plt.ylabel('Strecke [m]')
plt.xlim([0, df['t'][9 + i * 10] * 1.1])
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
i += 1
plt.close()
figure = PrettyFigure(
ax.figure,
label='fig:laufzeiten_{}'.format(gas1),
caption='Laufzeiten in {}. Dazu einen linearen Fit um die Mittlere Geschwindigkeit zu bestimmen.'.format(gas1.title()))
figure.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 131035fc9a008d79daca38a4ce03f4ee |
Resonance method
To obtain a decent measurement, a starting frequency was first chosen at which at least 3 constructive interferences were observed over the measurement distance of one metre. A measurement was taken there, and at 5 further, higher frequencies.
The speed of sound could then be calculated very well with a linear fit, using the formula in \ref{eq:rohr_offen_2}. | # Resonanzmethode Luft, Helium, SF6
import collections
# Read Data
dfb2 = pd.read_csv('data/resonanzfrequenz.csv')
ax = None
i = 0
for gas1 in ['luft', 'helium', 'sf6']:
df = dfb2.loc[dfb2['gas1'] == gas1].loc[dfb2['gas2'] == gas1].loc[dfb2['p'] == 1]
df['lbd'] = 1 / (df['s'] * 2)
df['v'] = 2 * df['f'] * df['s']
slope, intercept, sem, r, p = stats.linregress(df['lbd'], df['f'])
n = np.linspace(0.0, df['lbd'][(5 + i * 6) if i < 2 else 15] * 1.2, 100)
results[gas1][gas1]['1_r_df'] = df
results[gas1][gas1]['1_r_slope'] = slope
results[gas1][gas1]['1_r_intercept'] = intercept
results[gas1][gas1]['1_r_sem'] = sem
ax = df.plot(kind='scatter', x='lbd', y='f', label='gemessenes $\lambda^{-1}$')
plt.plot(n, [i * slope + intercept for i in n], label='linearer Fit von $\lambda^{-1}$', axes=ax)
plt.xlabel(r'$1 / \lambda [m^{-1}]$')
plt.ylabel(r'$Frequenz [s^{-1}]$')
plt.xlim([0, df['lbd'][(5 + i * 6) if i < 2 else 15] * 1.1])
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
i += 1
plt.close()
figure = PrettyFigure(
ax.figure,
label='fig:laufzeiten_{}'.format(gas1),
caption='Abstände der Maxima bei resonanten Frequenzen in {}. Dazu einen linearen Fit um die Mittlere Geschwindigkeit zu bestimmen.'.format(gas1.title()))
figure.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | d4b6a11870c0a96e31c498d1c2f42750 |
Gas mixtures
In this experiment, helium and SF6 were combined in fractions of $\frac{1}{5}$.
For this, the first gas was let in up to a pressure proportional to its fraction, followed by the second gas.
As explained in \ref{eq:m_p}, this is possible; a small numerical illustration of the pressure targets is given after the code below. | # Laufzeitenmethode Helium-SF6-Gemisch
import collections
# Read Data
dfb = pd.read_csv('data/laufzeitmethode.csv')
ax = None
colors = ['blue', 'green', 'red', 'purple']
results['helium']['sf6'] = {}
v_exp = []
for i in range(1, 5):
i /= 5
df = dfb.loc[dfb['gas1'] == 'helium'].loc[dfb['gas2'] == 'sf6'].loc[dfb['p'] == i]
slope, intercept, sem, r, p = stats.linregress(df['t'], df['s'])
v_exp.append(slope)
n = np.linspace(0.0, df['t'][29 + i * 15] * 2, 100)
results['helium']['sf6']['0{}_l_df'.format(int(i * 10))] = df
results['helium']['sf6']['0{}_l_slope'.format(int(i * 10))] = slope
results['helium']['sf6']['0{}_l_intercept'.format(int(i * 10))] = intercept
results['helium']['sf6']['0{}_l_sem'.format(int(i * 10))] = sem
if i == 0.2:
ax = df.plot(kind='scatter', x='t', y='s', label='gemessene Laufzeit', color=colors[int(i * 5) - 1])
else:
plt.scatter(df['t'], df['s'], axes=ax, label=None, color=colors[int(i * 5) - 1])
plt.plot(n, [i * slope + intercept for i in n], label='Laufzeit ({:.1f}\% Helium, {:.1f}\% SF6)'.format(i, 1 - i), axes=ax, color=colors[int(i * 5) - 1])
plt.xlabel('Laufzeit [s]')
plt.ylabel('Strecke [m]')
plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)
i += 0.2
plt.xlim([0, 0.006])
plt.close()
figure = PrettyFigure(
ax.figure,
label='fig:laufzeiten_HESF6',
caption='Laufzeiten in verschiedenen Helium/SF6-Gemischen. Dazu lineare Regression um die Mittlere Geschwindigkeit zu bestimmen.')
figure.show()
# Literature & Calcs
T = 21.3 + 273.15
Ri = 287
K = 1.402
results['luft']['luft']['berechnet'] = math.sqrt(K * Ri * T)
results['luft']['luft']['literatur'] = 343
Ri = 2078
K = 1.63
results['helium']['helium']['berechnet'] = math.sqrt(K * Ri * T)
results['helium']['helium']['literatur'] = 971
Ri = 56.92
K = 1.012
results['sf6']['sf6']['berechnet'] = math.sqrt(K * Ri * T)
results['sf6']['sf6']['literatur'] = 129
cp1 = cp[1]
cp2 = cp[2]
cv1 = cv[1]
cv2 = cv[2]
RL1 = ri[1]
RL2 = ri[2]
m1 = 0.2
m2 = 0.8
s1 = (m1 * cp1) + (m2 * cp2)
s2 = (m1 + cv1) + (m2 * cv2)
s3 = (m1 + RL1) + (m2 * RL2)
results['helium']['sf6']['02_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)
m1 = 0.4
m2 = 0.6
s1 = (m1 * cp1) + (m2 * cp2)
s2 = (m1 + cv1) + (m2 * cv2)
s3 = (m1 + RL1) + (m2 * RL2)
results['helium']['sf6']['04_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)
m1 = 0.6
m2 = 0.4
s1 = (m1 * cp1) + (m2 * cp2)
s2 = (m1 + cv1) + (m2 * cv2)
s3 = (m1 + RL1) + (m2 * RL2)
results['helium']['sf6']['06_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)
m1 = 0.8
m2 = 0.2
s1 = (m1 * cp1) + (m2 * cp2)
s2 = (m1 + cv1) + (m2 * cv2)
s3 = (m1 + RL1) + (m2 * RL2)
results['helium']['sf6']['08_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)
p = [p for p in np.linspace(0, 1, 1000)]
v = [math.sqrt(((n * cp1) + ((1 - n) * cp2)) / ((n + cv1) + ((1 - n) * cv2)) * ((n + RL1) + ((1 - n) * RL2)) * T) for n in p]
fig = plt.figure()
plt.plot(p, v, label='errechnete Laufzeit')
plt.scatter([0.2, 0.4, 0.6, 0.8], v_exp, label='experimentelle Laufzeit')
plt.xlabel('Heliumanteil')
plt.ylabel('Schallgeschwindigkeit [v]')
plt.xlim([0, 1])
plt.close()
figure = PrettyFigure(
fig,
label='fig:laufzeiten_vgl',
caption='Laufzeiten in Helium/SF6-Gemischen. Experimentelle Werte verglichen mit den berechneten.')
figure.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 4801ac3b0206cfe7398a5ee7b0f62f14 |
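As a small numerical illustration of the mixing-by-pressure procedure described above: for a target helium fraction $x$, helium is let in until the pressure has risen by $x$ times the total fill $\Delta p$, and SF6 tops up the rest. The sketch below uses the -0.8 bar and 0.3 bar gauge readings mentioned in the analysis; the switch-over readings are illustrative, not measured values.

```python
# Illustrative only: partial-pressure targets for a given helium fraction
p_start = -0.8   # gauge pressure after evacuation [bar]
p_end = 0.3      # gauge pressure after filling [bar]
delta_p = p_end - p_start
for x_he in [0.2, 0.4, 0.6, 0.8]:
    p_he_stop = p_start + x_he * delta_p   # switch from helium to SF6 at this reading
    print(x_he, round(p_he_stop, 3))
```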
Error analysis
As can be seen in the following section, the statistical error is very limited. The systematic error should likewise be compensated by the offset of the linear regression.
Resonance method
Here only a single distance between maxima was measured. It would be better to measure three or even four and then also run a linear regression over those values; if the experiment were repeated, this should certainly be done. Reading the oscilloscope by eye I consider rather uncritical, since the display can be zoomed in quite well and even the smallest changes are noticeable.
Gas mixtures
For the gas mixtures there is of course the large error source of the mixing itself. The vessel has to be completely emptied and refilled each time, and reading the manometer is not exactly the most precise procedure. I estimate, however, that the mixtures can be prepared to within one percent. This one percent then enters under the square root, which enlarges the error further.
An error propagation would be needed here. It was omitted, however, because, as explained in the next section, a measurement error is probably present anyway and the data would have to be collected again.
Results and discussion
Pure gases
As Table \ref{tab:resultat_rein} shows, the results turn out extremely satisfactory.
For both the time-of-flight method and the resonance method in air there is practically no deviation (< 1%) from literature values.
SF6 looks similar. For helium the differences appear larger at first glance, but on closer inspection the values are overall about a factor of 3 larger than for the air measurement, and so is the relative error; it grows somewhat, but still remains < 5%.
I find it interesting that the resonance method lies closer to the literature value, since by assumption it should be the less accurate one. For air and SF6 this was indeed the case. | # Show results
values = [
'Luft',
'Helium',
'SF6'
]
means_l = [
'{0:.2f}'.format(results['luft']['luft']['1_l_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['1_l_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['1_l_slope']) + r'$\frac{m}{s}$'
]
means_r = [
'{0:.2f}'.format(results['luft']['luft']['1_r_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['1_r_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['1_r_slope']) + r'$\frac{m}{s}$'
]
sem_l = [
'{0:.2f}'.format(results['luft']['luft']['1_l_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['1_l_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['1_l_sem']) + r'$\frac{m}{s}$'
]
sem_r = [
'{0:.2f}'.format(results['luft']['luft']['1_r_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['1_r_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['1_r_sem']) + r'$\frac{m}{s}$'
]
berechnet = [
'{0:.2f}'.format(results['luft']['luft']['berechnet']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['berechnet']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['berechnet']) + r'$\frac{m}{s}$'
]
literatur = [
'{0:.2f}'.format(results['luft']['luft']['literatur']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['helium']['literatur']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['sf6']['sf6']['literatur']) + r'$\frac{m}{s}$'
]
v2_results_tbl = PrettyTable(
list(zip(values, means_l, sem_l, means_r, sem_r, berechnet, literatur)),
label='tab:resultat_rein',
caption='Resultate aus den Versuchen mit reinen Gasen.',
extra_header=[
'Wert',
'Laufzeitmethode $v_{L}$',
'stat. Fehler',
'Resonanzmethode $v_{R}$',
'stat. Fehler',
'berechnet',
'Literatur'
], entries_per_column=3)
v2_results_tbl.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | e5410e05e676fb387ed2806676effdbc |
Gas mixtures
Table \ref{tab:resultat_gasgemisch} shows at a glance that the experimentally determined values do not agree at all with the calculated ones. Each series of results would look plausible on its own, although, given the constants of SF6 and helium, the calculated series ought to be the correct one. Helium has much larger values for $c_v$, $c_p$ and $R_i$, each by roughly an order of magnitude, so they carry much more weight in the formula \ref{eq:v_spez} used here, which is why the sound speeds should lie closer to those of helium than to those of SF6.
Unfortunately, repeating the measurement was not within the time frame of the lab course. It would, however, have to be repeated and verified before looking for an error in the mathematics. Since the values were computed with exactly the same travel-time formula as for the pure gases, where the values were correct, it must be assumed that a measurement error is indeed present. The shape of the calculated curve also definitely matches that of $y = \sqrt{x}$. | # Show results
values = [
'20% / 80%',
'40% / 60%',
'60% / 40%',
'80% / 20%'
]
means_x = [
'{0:.2f}'.format(results['helium']['sf6']['02_l_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['04_l_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['06_l_slope']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['08_l_slope']) + r'$\frac{m}{s}$'
]
sem_x = [
'{0:.2f}'.format(results['helium']['sf6']['02_l_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['04_l_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['06_l_sem']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['08_l_sem']) + r'$\frac{m}{s}$'
]
berechnet_x = [
'{0:.2f}'.format(results['helium']['sf6']['02_l_berechnet']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['04_l_berechnet']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['06_l_berechnet']) + r'$\frac{m}{s}$',
'{0:.2f}'.format(results['helium']['sf6']['08_l_berechnet']) + r'$\frac{m}{s}$'
]
v2_results_tbl = PrettyTable(
list(zip(values, means_x, sem_x, berechnet_x)),
label='tab:resultat_gasgemisch',
caption='Resultate aus dem Versuch mit den Gasgemischen.',
extra_header=[
'Helium / SF6',
'mit Laufzeitmethode $v_{L}$',
'statistischer Fehler',
'berechnet',
], entries_per_column=4)
v2_results_tbl.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 06d449ebd950b6a25cd85486f2556cee |
Appendix
Time-of-flight method | data = PrettyTable(
list(zip(dfb['gas1'], dfb['gas2'], dfb['p'], dfb['s'], dfb['t'])),
caption='Messwerte der Laufzeitmethode.',
entries_per_column=len(dfb['gas1']),
extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Strecke [m]', 'Laufzeit [s]']
)
data.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | ca9c1f9b5f22b0bfa61ce086d4a4a063 |
Resonance method | data = PrettyTable(
list(zip(dfb2['gas1'], dfb2['gas2'], dfb2['p'], dfb2['f'], dfb2['s'])),
caption='Messwerte der Resonanzmethode.',
entries_per_column=len(dfb2['gas1']),
extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Frequenz [Hz]', 'Strecke [m]']
)
data.show() | versuch3/W8.ipynb | Yatekii/glal3 | gpl-3.0 | 71ab112a88c1881413e7d257758549e4 |
First we'll open an image, and create a helper function that converts that image into a training set of (x,y) positions (the data) and their corresponding (r,g,b) colors (the labels). We'll then load a picture with it. | def get_data(img):
width, height = img.size
pixels = img.getdata()
x_data, y_data = [],[]
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = pixels[idx]
x_data.append([x / float(width), y / float(height)])
y_data.append([r, g, b])
x_data = np.array(x_data)
y_data = np.array(y_data)
return x_data, y_data
im1 = Image.open("../assets/dog.jpg")
x1, y1 = get_data(im1)
print("data", x1)
print("labels", y1)
imshow(im1) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 6c8aaadbbc7a390eb68a96fe9a836e8b |
We've postfixed all the variable names with a 1 because later we'll open a second image.
We're now going to define a neural network which takes a 2-neuron input (the normalized x, y position) and outputs a 3-neuron output corresponding to color. We'll use Keras's Sequential class to create a deep neural network with a bunch of 20-neuron fully-connected layers with ReLU activations. Our loss function will be a mean_squared_error between the predicted colors and the actual ones from the image.
Once we've defined that model, we'll create a neural network m1 with that architecture. | def make_model():
model = Sequential()
model.add(Dense(2, activation='relu', input_shape=(2,)))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(3))
model.compile(loss='mean_squared_error', optimizer='adam')
return model
m1 = make_model() | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | e8d5e53b32121fa2889a57fab4254fed |
Let's now go ahead and train our neural network. In this case, we are going to use the training set as the validation set as well. Normally, you'd never do this because it would cause your neural network to overfit. But in this experiment, we're not worried about overfitting... in fact, overfitting is the whole point!
We train for 25 epochs and have a batch size of 5. | m1.fit(x1, y1, batch_size=5, epochs=25, verbose=1, validation_data=(x1, y1)) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | e7d0e9e1c73489ad89dd9a45856624ec |
Now that the neural net is finished training, let's take the training data, our pixel positions, and simply send them back straight through the network, and plot the predicted colors on a new image. We'll make a new function for this called generate_image. | def generate_image(model, x, width, height):
img = Image.new("RGB", [width, height])
pixels = img.load()
y_pred = model.predict(x)
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = y_pred[idx]
pixels[x, y] = (int(r), int(g), int(b))
return img
img = generate_image(m1, x1, im1.width, im1.height)
imshow(img) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | d14f2e82360f8896d9f71f985f03850d |
Sort of looks like the original image a bit! Of course the network can't learn the mapping perfectly without pretty much memorizing the data, but this way gives us a pretty good impression and doubles as an extremely inefficient form of compression!
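To put the "inefficient compression" remark in perspective, here is a rough count (a sketch assuming `make_model` from above) of how many parameters the network stores compared with the number of raw pixel values in a hypothetical 256x256 image; the image size is an assumption for illustration only.

```python
params = make_model().count_params()   # weights and biases of the MLP defined above
pixels = 256 * 256 * 3                 # raw (r, g, b) values of a hypothetical 256x256 image
print(params, pixels)
```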
Let's load another image. We'll load the second image and also resize it so that it's the same size as the first image. | im2 = Image.open("../assets/kitty.jpg")
im2 = im2.resize(im1.size)
x2, y2 = get_data(im2)
print("data", x2)
print("labels", y2)
imshow(im2) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 10accdd5f5775b0e68003458f7e2f753 |
Now we'll repeat the experiment from before. We'll make a new neural network m2 which will learn to map im2's (x,y) positions to its (r,g,b) colors. | m2 = make_model() # make a new model, keep m1 separate
m2.fit(x2, y2, batch_size=5, epochs=25, verbose=1, validation_data=(x2, y2)) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 47bb77b00522721171bada6d22cb64a0 |
Let's generate a new image from m2 and see how it looks. | img = generate_image(m2, x2, im2.width, im2.height)
imshow(img) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 6d243e4f41f4b147aa84b2bf40d2d0c5 |
Not too bad!
Now let's do something funky. We're going to make a new neural network, m3, with the same architecture as m1 and m2 but instead of training it, we'll just set its weights to be interpolations between the weights of m1 and m2 and at each step, we'll generate a new image. In other words, we'll gradually change the model learned from the first image into the model learned from the second image, and see what kind of an image it outputs at each step.
To help us do this, we'll create a function get_interpolated_weights and we'll make one change to our image generation function: instead of just coloring the pixels to be the exact outputs, we'll auto-normalize every frame by rescaling the minimum and maximum output color to 0 to 255. This is because sometimes the intermediate models output in different ranges than what m1 and m2 were trained to. Yeah, this is a bit of a hack, but it works! | def get_interpolated_weights(model1, model2, amt):
w1 = np.array(model1.get_weights())
w2 = np.array(model2.get_weights())
w3 = np.add((1.0 - amt) * w1, amt * w2)
return w3
def generate_image_rescaled(model, x, width, height):
img = Image.new("RGB", [width, height])
pixels = img.load()
y_pred = model.predict(x)
y_pred = 255.0 * (y_pred - np.min(y_pred)) / (np.max(y_pred) - np.min(y_pred)) # rescale y_pred
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = y_pred[idx]
pixels[x, y] = (int(r), int(g), int(b))
return img
# make new model to hold interpolated weights
m3 = make_model()
# we'll do 8 frames and stitch the images together at the end
n = 8
interpolated_images = []
for i in range(n):
amt = float(i)/(n-1.0)
w3 = get_interpolated_weights(m1, m2, amt)
m3.set_weights(w3)
img = generate_image_rescaled(m3, x1, im1.width, im1.height)
interpolated_images.append(img)
full_image = np.concatenate(interpolated_images, axis=1)
figure(figsize=(16,4))
imshow(full_image) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 49223c8ca8d5b37b75083bbaf686c082 |
Neat... Let's do one last thing, and make an animation with more frames. We'll generate 120 frames inside the assets folder, then use ffmpeg to stitch them into an mp4 file. If you don't have ffmpeg, you can install it from here. | n = 120
frames_dir = '../assets/neural-painter-frames'
video_path = '../assets/neural-painter-interpolation.mp4'
import os
if not os.path.isdir(frames_dir):
os.makedirs(frames_dir)
for i in range(n):
amt = float(i)/(n-1.0)
w3 = get_interpolated_weights(m1, m2, amt)
m3.set_weights(w3)
img = generate_image_rescaled(m3, x1, im1.width, im1.height)
img.save('../assets/neural-painter-frames/frame%04d.png'%i)
cmd = 'ffmpeg -i %s/frame%%04d.png -c:v libx264 -pix_fmt yuv420p %s' % (frames_dir, video_path)
os.system(cmd) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 74e23d1fd235cb50f1a249224d67f091 |
You can find the video now in the assets directory. Looks neat! We can also display it in this notebook. From here, there's a lot of fun things we can do... Triangulating between multiple images, or streaming together several interpolations, or predicting color from not just position, but time in a movie. Lots of possibilities. | from IPython.display import HTML
import io
import base64
video = io.open(video_path, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))) | examples/dreaming/neural-net-painter.ipynb | ml4a/ml4a-guides | gpl-2.0 | 1e26c89b6b81f966c975fb8ad90c33e2 |
Event loop and GUI integration
The %gui magic enables the integration of GUI event loops with the interactive execution loop, allowing you to run GUI code without blocking IPython.
Consider for example the execution of Qt-based code. Once we enable the Qt gui support: | %gui qt | extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb | pacoqueen/ginn | gpl-2.0 | 33b065ae9b09a9207cab3eeb0d52da20 |
We can define a simple Qt application class (simplified version from this Qt tutorial): | import sys
from PyQt4 import QtGui, QtCore
class SimpleWindow(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
self.setGeometry(300, 300, 200, 80)
self.setWindowTitle('Hello World')
quit = QtGui.QPushButton('Close', self)
quit.setGeometry(10, 10, 60, 35)
self.connect(quit, QtCore.SIGNAL('clicked()'),
self, QtCore.SLOT('close()')) | extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb | pacoqueen/ginn | gpl-2.0 | 64a459e975fca972df7fd667b297ed61 |
And now we can instantiate it: | app = QtCore.QCoreApplication.instance()
if app is None:
app = QtGui.QApplication([])
sw = SimpleWindow()
sw.show()
from IPython.lib.guisupport import start_event_loop_qt4
start_event_loop_qt4(app) | extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb | pacoqueen/ginn | gpl-2.0 | 82639d93c2b0008421fbf5b4990b5a51 |
But IPython still remains responsive: | 10+2 | extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb | pacoqueen/ginn | gpl-2.0 | aec91d2b3175e0f5417d1387bea4969f |
The %gui magic can be similarly used to control Wx, Tk, glut and pyglet applications, as can be seen in our examples.
Embedding IPython in a terminal application | %%writefile simple-embed.py
# This shows how to use the new top-level embed function. It is a simpler
# API that manages the creation of the embedded shell.
from IPython import embed
a = 10
b = 20
embed(header='First time', banner1='')
c = 30
d = 40
embed(header='The second time') | extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb | pacoqueen/ginn | gpl-2.0 | 5f056e9b9583620ac9b2a732252d645a |
$d2q9$
cf. Jonas Tölke, "Implementation of a Lattice Boltzmann kernel using the Compute Unified Device Architecture developed by nVIDIA", Comput. Visual Sci., DOI 10.1007/s00791-008-0120-2
Affine Spaces
Lattices, that are "sufficiently" Galilean invariant, through non-perturbative algebraic theory
cf. http://staff.polito.it/pietro.asinari/publications/preprint_Asinari_PA_2010a.pdf and I. Karlin and P. Asinari, "Factorization symmetry in the lattice Boltzmann method", Physica A 389, 1530 (2010); the linked preprint appears to be the version this paper is based upon and contains some more calculation details.
Maxwell Lattices in 1-dim.
Maxwell's (M) moment relations | #u = Symbol("u",assume="real")
u = Symbol("u",real=True)
#T_0 =Symbol("T_0",assume="positive")
T_0 =Symbol("T_0",real=True,positive=True)
#v = Symbol("v",assume="real")
v = Symbol("v",real=True)
#phi_v = sqrt( pi/(Rat(2)*T_0))*exp( - (v-u)**2/(Rat(2)*T_0))
phi_v = sqrt( pi/(2*T_0))*exp( - (v-u)**2/(2*T_0))
integrate(phi_v,v)
integrate(phi_v,(v,-oo,oo))
(integrate(phi_v,v).subs(u,oo)- integrate(phi_v,v).subs(u,-oo)).expand()
integrate(phi_v,(v,-oo,oo),conds='none')
integrate(phi_v,(v,-oo,oo),conds='separate') | LatticeBoltzmann/LatticeBoltzmannMethod.ipynb | ernestyalumni/CUDACFD_out | mit | 0986d81710685df0aec608446370cc4e |
cf. http://stackoverflow.com/questions/16599325/simplify-conditional-integrals-in-sympy | refine(integrate(phi_v,(v,-oo,oo)), Q.is_true(Abs(periodic_argument(1/polar_lift(sqrt(T_0))**2, oo)) <= pi/2)) | LatticeBoltzmann/LatticeBoltzmannMethod.ipynb | ernestyalumni/CUDACFD_out | mit | 08bf46e1ecfd8bfd1156f1deb07cebde |
Causal assumptions
Having introduced the basic functionality, we now turn to a discussion of the assumptions underlying a causal interpretation:
Faithfulness / Stableness: Independencies in data arise not from coincidence, but rather from causal structure. Expressed differently: if two variables are independent given some other subset of variables, then they are not connected by a causal link in the graph.
Causal Sufficiency: Measured variables include all of the common causes.
Causal Markov Condition: All the relevant probabilistic information that can be obtained from the system is contained in its direct causes. Expressed differently: if two variables are not connected in the causal graph given some set of conditions (see Runge Chaos 2018 for further definitions), then they are conditionally independent.
No contemporaneous effects: There are no causal effects at lag zero.
Stationarity
Parametric assumptions of independence tests (these were already discussed in basic tutorial)
Faithfulness
Faithfulness, as stated above, is an expression of the assumption that the independencies we measure come from the causal structure, i.e., the time series graph, and cannot occur due to some fine tuning of the parameters. Another unfaithful case are processes containing purely deterministic dependencies, i.e., $Y=f(X)$, without any noise. We illustrate these cases in the following.
Fine tuning
Suppose in our model we have two ways in which $X^0$ causes $X^2$, a direct one, and an indirect effect $X^0\to X^1 \to X^2$ as realized in the following model:
\begin{align}
X^0_t &= \eta^0_t\
X^1_t &= 0.6 X^0_{t-1} + \eta^1_t\
X^2_t &= 0.6 X^1_{t-1} - 0.36 X^0_{t-2} + \eta^2_t\
\end{align} | np.random.seed(1)
data = np.random.randn(500, 3)
for t in range(1, 500):
# data[t, 0] += 0.6*data[t-1, 1]
data[t, 1] += 0.6*data[t-1, 0]
data[t, 2] += 0.6*data[t-1, 1] - 0.36*data[t-2, 0]
var_names = [r'$X^0$', r'$X^1$', r'$X^2$']
dataframe = pp.DataFrame(data, var_names=var_names)
# tp.plot_timeseries(dataframe) | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | ac442da6d39f683515cb4a48c1dcb0ca |
Since here $X^2_t = 0.6 X^1_{t-1} - 0.36 X^0_{t-2} + \eta^2_t = 0.6 (0.6 X^0_{t-2} + \eta^1_{t-1}) - 0.36 X^0_{t-2} + \eta^2_t = 0.36 X^0_{t-2} - 0.36 X^0_{t-2} + ...$, there is no unconditional dependency $X^0_{t-2} \to X^2_t$ and the link is not detected in the condition-selection step: | parcorr = ParCorr()
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=1)
all_parents = pcmci_parcorr.run_pc_stable(tau_max=2, pc_alpha=0.2) | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 105652f9189abfb5b09a0d73c79e17b6 |
However, since the other parent of $X^2$, namely $X^1_{t-1}$ is detected, the MCI step conditions on $X^1_{t-1}$ and can reveal the true underlying graph (in this particular case): | results = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01) | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 250b0a9e28093ff805c697319b58443a |
Note, however, that this is not always the case and such cancellation, even though a pathological case, can present a problem especially for smaller sample sizes.
Deterministic dependencies
Another violation of faithfulness can happen due to purely deterministic dependencies as shown here: | np.random.seed(1)
data = np.random.randn(500, 3)
for t in range(1, 500):
data[t, 0] = 0.4*data[t-1, 1]
data[t, 2] += 0.3*data[t-2, 1] + 0.7*data[t-1, 0]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()
parcorr = ParCorr()
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=2)
results = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 1e8e6e604d08b5c6bfc08f984b173f99 |
Here the partial correlation $X^1_{t-1} \to X^0_t$ is exactly 1. Since these now represent the same variable, the true link $X^0_{t-1} \to X^2_t$ cannot be detected anymore since we condition on $X^1_{t-2}$. Deterministic copies of other variables should be excluded from the analysis.
Causal sufficiency
Causal sufficiency demands that the set of variables contains all common causes of any two variables. This assumption is mostly violated when analyzing open complex systems outside a confined experimental setting. Any link estimated from a causal discovery algorithm could become non-significant if more variables are included in the analysis.
Observational causal inference assuming causal sufficiency should generally be seen more as one step towards a physical process understanding. There exist, however, algorithms that take into account and can expclicitely represent confounded links (e.g., the FCI algorithm and LPCMCI). Causal discovery can greatly help in an explorative model building analysis to get an idea of potential drivers. In particular, the absence of a link allows for a more robust conclusion: If there is no evidence for a statistical dependency, then a physical mechanism is less likely (assuming that the other assumptions hold).
See Runge, Jakob. 2018. “Causal Network Reconstruction from Time Series: From Theoretical Assumptions to Practical Estimation.” Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (7): 075310.
for alternative approaches that do not necessitate Causal Sufficiency.
Unobserved driver / latent variable
For the common driver process, consider that the common driver was not measured: | np.random.seed(1)
data = np.random.randn(10000, 5)
a = 0.8
for t in range(5, 10000):
data[t, 0] += a*data[t-1, 0]
data[t, 1] += a*data[t-1, 1] + 0.5*data[t-1, 0]
data[t, 2] += a*data[t-1, 2] + 0.5*data[t-1, 1] + 0.5*data[t-1, 4]
data[t, 3] += a*data[t-1, 3] + 0.5*data[t-2, 4]
data[t, 4] += a*data[t-1, 4]
# tp.plot_timeseries(dataframe)
obsdata = data[:,[0, 1, 2, 3]]
var_names_lat = ['W', 'Y', 'X', 'Z', 'U']
for data_here in [data, obsdata]:
dataframe = pp.DataFrame(data_here)
parcorr = ParCorr()
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_max=5, pc_alpha=0.1, alpha_level = 0.001)
tp.plot_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names_lat,
link_colorbar_label='cross-MCI',
node_colorbar_label='auto-MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 774f9a203a80fb3b9ff9f21f8bbbd3bf |
The upper plot shows the true causal graph if all variables are observed. The lower graph shows the case where variable $U$ is hidden. Then several spurious links appear: (1) $X\to Z$ and (2) links from $Y$ and $W$ to $Z$, which is counterintuitive because there is no possible indirect pathway (see upper graph). What's the reason? The culprit is the collider $X$: MCI (or FullCI and any other causal measure conditioning on the entire past) between $Y$ and $Z$ is conditioned on the parents of $Z$, which includes $X$ here in the lower latent graph. But then conditioning on a collider opens up the paths from $Y$ and $W$ to $Z$ and makes them dependent.
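A generic, self-contained illustration of this collider effect (a sketch, independent of the time series above): two variables that are unconditionally independent become dependent once we condition on their common effect.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(10000)
y = rng.randn(10000)
c = x + y + 0.1 * rng.randn(10000)   # collider: caused by both x and y

def partial_corr(a, b, cond):
    # residualize a and b on the conditioning variable, then correlate the residuals
    ra = a - np.polyval(np.polyfit(cond, a, 1), cond)
    rb = b - np.polyval(np.polyfit(cond, b, 1), cond)
    return np.corrcoef(ra, rb)[0, 1]

print(np.corrcoef(x, y)[0, 1])   # close to 0: x and y are unconditionally independent
print(partial_corr(x, y, c))     # strongly negative: conditioning on the collider c opens the path
```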
Solar forcing
In a geoscientific context, the solar forcing typically is a strong common driver of many processes. To remove this trivial effect, time series are typically anomalized, that is, the average seasonal cycle is subtracted. But one could also include the solar forcing explicitely as shown here via a sine wave for an artificial example. We've also made the time series more realistic by adding an auto-dependency on their past values. | np.random.seed(42)
T = 2000
data = np.random.randn(T, 4)
# Simple sun
data[:,3] = np.sin(np.arange(T)*20/np.pi) + 0.1*np.random.randn(T)
c = 0.8
for t in range(1, T):
data[t, 0] += 0.4*data[t-1, 0] + 0.4*data[t-1, 1] + c*data[t-1,3]
data[t, 1] += 0.5*data[t-1, 1] + c*data[t-1,3]
data[t, 2] += 0.6*data[t-1, 2] + 0.3*data[t-2, 1] + c*data[t-1,3]
dataframe = pp.DataFrame(data, var_names=[r'$X^0$', r'$X^1$', r'$X^2$', 'Sun'])
tp.plot_timeseries(dataframe); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 3709fa45db6e606e6ae43b5e282ff710 |
If we do not account for the common solar forcing, there will be many spurious links: | parcorr = ParCorr()
dataframe_nosun = pp.DataFrame(data[:,[0,1,2]], var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])
pcmci_parcorr = PCMCI(
dataframe=dataframe_nosun,
cond_ind_test=parcorr,
verbosity=0)
tau_max = 2
tau_min = 1
results = pcmci_parcorr.run_pcmci(tau_max=tau_max, pc_alpha=0.2, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 4f96e313561418e1ab0c255d168f5e82 |
However, if we explicitely include the solar forcing variable (which we assume is known in this case), we can identify the correct causal graph. Since we are not interested in the drivers of the solar forcing variable, we don't attempt to reconstruct its parents. This can be achieved by restricting selected_links. | parcorr = ParCorr()
# Only estimate parents of variables 0, 1, 2
selected_links = {}
for j in range(4):
if j in [0, 1, 2]:
selected_links[j] = [(var, -lag) for var in range(4)
for lag in range(tau_min, tau_max + 1)]
else:
selected_links[j] = []
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_min=tau_min, tau_max=tau_max, pc_alpha=0.2,
selected_links=selected_links, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=[r'$X^0$', r'$X^1$', r'$X^2$', 'Sun'],
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | f523baf95f6885bdf03a1136dea221ab |
Time sub-sampling
Sometimes a time series might be sub-sampled, that is the measurements are less frequent than the true underlying time-dependency. Consider the following process: | np.random.seed(1)
data = np.random.randn(1000, 3)
for t in range(1, 1000):
data[t, 0] += 0.*data[t-1, 0] + 0.6*data[t-1,2]
data[t, 1] += 0.*data[t-1, 1] + 0.6*data[t-1,0]
data[t, 2] += 0.*data[t-1, 2] + 0.6*data[t-1,1]
dataframe = pp.DataFrame(data, var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])
tp.plot_timeseries(dataframe); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 33def42cd35ccea4c3afa2aefd55e464 |
With the original time sampling we obtain the correct causal graph: | pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci_parcorr.run_pcmci(tau_min=0,tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 0fc1c123e7ed8c5db2f636024eb8f0a3 |
If we sub-sample the data, very counter-intuitive links can appear. The true causal loop gets detected in the wrong direction: | sampled_data = data[::2]
pcmci_parcorr = PCMCI(dataframe=pp.DataFrame(sampled_data, var_names=var_names),
cond_ind_test=ParCorr(), verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 5e3a32d287befd0dda75604a4996be61 |
If causal lags are smaller than the time sampling, such problems may occur. Causal inference for sub-sampled data is still an active area of research.
Causal Markov condition
The Markov condition can be rephrased as assuming that the noises driving each variable are independent of each other and independent in time (iid). This is violated in the following example where each variable is driven by 1/f noise which refers to the scaling of the power spectrum. 1/f noise can be generated by averaging AR(1) processes (http://www.scholarpedia.org/article/1/f_noise) which means that the noise is not independent in time anymore (even though the noise terms of each individual variable are still independent). Note that this constitutes a violation of the Markov Condition of the observed process only. So one might call this rather a violation of Causal Sufficiency. | np.random.seed(1)
T = 10000
# Generate 1/f noise by averaging AR1-process with wide range of coeffs
# (http://www.scholarpedia.org/article/1/f_noise)
def one_over_f_noise(T, n_ar=20):
whitenoise = np.random.randn(T, n_ar)
ar_coeffs = np.linspace(0.1, 0.9, n_ar)
for t in range(T):
whitenoise[t] += ar_coeffs*whitenoise[t-1]
return whitenoise.sum(axis=1)
data = np.random.randn(T, 3)
data[:,0] += one_over_f_noise(T)
data[:,1] += one_over_f_noise(T)
data[:,2] += one_over_f_noise(T)
for t in range(1, T):
data[t, 0] += 0.4*data[t-1, 1]
data[t, 2] += 0.3*data[t-2, 1]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show()
# plt.psd(data[:,0],return_line=True)[2]
# plt.psd(data[:,1],return_line=True)[2]
# plt.psd(data[:,2],return_line=True)[2]
# plt.gca().set_xscale("log", nonposx='clip')
# plt.gca().set_yscale("log", nonposy='clip') | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 53fd903dc25a8028a3a43f50fd09ef69 |
Here PCMCI will detect many spurious links, especially auto-dependencies, since the process has long memory and the present state is not independent of the further past given some set of parents. | parcorr = ParCorr()
pcmci_parcorr = PCMCI(
dataframe=dataframe,
cond_ind_test=parcorr,
verbosity=1)
results = pcmci_parcorr.run_pcmci(tau_max=5, pc_alpha=0.2, alpha_level = 0.01) | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 5f6def5b69a31def6a853427a6ec421b |
Time aggregation
An important choice is how to aggregate measured time series. For example, climate time series might have been measured daily, but one might be interested in a less noisy time-scale and analyze monthly aggregates. Consider the following process: | np.random.seed(1)
data = np.random.randn(1000, 3)
for t in range(1, 1000):
data[t, 0] += 0.7*data[t-1, 0]
data[t, 1] += 0.6*data[t-1, 1] + 0.6*data[t-1,0]
data[t, 2] += 0.5*data[t-1, 2] + 0.6*data[t-1,1]
dataframe = pp.DataFrame(data, var_names=var_names)
tp.plot_timeseries(dataframe); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 71d37680bb5df918d0504ee62b130137 |
With the original time aggregation we obtain the correct causal graph: | pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci_parcorr.run_pcmci(tau_min=0,tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | 89dba2daacf5ee3893407693c9a208d0 |
If we aggregate the data, we also detect a contemporaneous dependency for which no causal direction can be assessed in this framework and we obtain also several lagged spurious links. Essentially, we now have direct causal effects that appear contemporaneous on the aggregated time scale. Also causal inference for time-aggregated data is still an active area of research. Note again that this constitutes a violation of the Markov Condition of the observed process only. So one might call this rather a violation of Causal Sufficiency. | aggregated_data = pp.time_bin_with_mask(data, time_bin_length=4)
pcmci_parcorr = PCMCI(dataframe=pp.DataFrame(aggregated_data[0], var_names=var_names), cond_ind_test=ParCorr(),
verbosity=0)
results = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level = 0.01)
# Plot time series graph
tp.plot_time_series_graph(
val_matrix=results['val_matrix'],
graph=results['graph'],
var_names=var_names,
link_colorbar_label='MCI',
); plt.show() | tutorials/tigramite_tutorial_assumptions.ipynb | jakobrunge/tigramite | gpl-3.0 | dfb7a8b9969ca12d872d0212a3c685e4 |
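For intuition, the aggregation above can be thought of as block averaging over non-overlapping windows. A rough numpy equivalent is sketched below; this is only an approximation of what pp.time_bin_with_mask does (see its documentation for the exact handling of masks and of a trailing remainder):

```python
# Sketch: manual block averaging with window length 4, for comparison with aggregated_data[0].
time_bin_length = 4
T_trim = (data.shape[0] // time_bin_length) * time_bin_length
manual_agg = data[:T_trim].reshape(-1, time_bin_length, data.shape[1]).mean(axis=1)
print(manual_agg.shape)
```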
First, list supported options on the Stackdriver magic %sd: | %sd -h | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 7ad6e0f013ccc96733cac19e2ceb92f8 |
Let's see what we can do with the monitoring command: | %sd monitoring -h | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 0c28d9a80d8ca788231c659c5139a061 |
List names of Compute Engine CPU metrics
Here we use IPython cell magics to list the CPU metrics. The Labels column shows that instance_name is a metric label. | %sd monitoring metrics list --type compute*/cpu/* | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 69679cad3de42b37c02622b4368a2fcb |
List monitored resource types related to GCE | %sd monitoring resource_types list --type gce* | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | c70d9fd507ba03699832fee16fba99af |
Querying time series data
The Query class allows users to query and access the monitoring time series data.
Many useful methods of the Query class are actually defined by the base class, which is provided by the google-cloud-python library. These methods include:
* select_metrics: filters the query based on metric labels.
* select_resources: filters the query based on resource type and labels.
* align: aligns the query along the specified time intervals.
* reduce: applies aggregation to the query.
* as_dataframe: returns the time series data as a pandas DataFrame object.
Reference documentation for the Query base class is available here. You can also get help from inside the notebook by calling the help function on any class, object or method. | from google.datalab.stackdriver import monitoring as gcm
help(gcm.Query.select_interval) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | a32ad725452380c360ace8a549e432f9 |
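Since each of these methods returns a new Query object, they can be chained. The snippet below only illustrates the pattern (the metric type and intervals are arbitrary choices); the individual steps are carried out one by one in the rest of this notebook:

```python
# Illustrative chaining of Query methods.
sketch_query = (gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=1)
                .align(gcm.Aligner.ALIGN_MEAN, minutes=5))
# sketch_query.as_dataframe() would then return the aligned data as a pandas DataFrame.
```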
Initializing the query
During initialization, the metric type and the time interval need to be specified. For interactive use, the metric type has a default value. The simplest way to specify a time interval that ends now is to use the arguments days, hours, and minutes.
In the cell below, we initialize the query to load the time series for CPU Utilization for the last two hours. | query_cpu = gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 9fba8759da5873bfe625ebd2ed6d14e4 |
Getting the metadata
The method metadata() returns a QueryMetadata object. It contains the following information about the time series matching the query:
* resource types
* resource labels and their values
* metric labels and their values
This helps you understand the structure of the time series data, and makes it easier to modify the query. | metadata_cpu = query_cpu.metadata().as_dataframe()
metadata_cpu.head(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 2069ba2a2079fd4a183dba9e163f562f |
Reading the instance names from the metadata
Next, we read in the instance names from the metadata, and use them to filter the time series data below. If there are no GCE instances in this project, the cells below will raise errors. | import sys
if metadata_cpu.empty:
sys.stderr.write('This project has no GCE instances. The remaining notebook '
'will raise errors!')
else:
instance_names = sorted(list(metadata_cpu['metric.labels']['instance_name']))
print('First 5 instance names: %s' % ([str(name) for name in instance_names[:5]],)) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 8f393ccb5a06728796fdcbf27dae4fb4 |
Filtering by metric label
We first filter query_cpu defined earlier to include only the first instance. Next, calling as_dataframe gets the results from the monitoring API, and converts them into a pandas DataFrame. | query_cpu_single_instance = query_cpu.select_metrics(instance_name=instance_names[0])
# Get the query results as a pandas DataFrame and look at the last 5 rows.
data_single_instance = query_cpu_single_instance.as_dataframe(label='instance_name')
data_single_instance.tail(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | bcdd1b0f56cd44770ba9e324561bb091 |
Displaying the time series as a linechart
We can plot the time series data by calling the plot method of the dataframe. The pandas library uses matplotlib for plotting, so you can learn more about it here. | # N.B. A useful trick is to assign the return value of plot to _
# so that you don't get text printed before the plot itself.
_ = data_single_instance.plot() | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 20307bd4049a53a72bb5cca1bc855dc2 |
Aggregating the query
You can aggregate or summarize time series data along various dimensions.
* In the first stage, data in a time series is aligned to a specified period.
* In the second stage, data from multiple time series is combined, or reduced, into one time series.
Not all alignment and reduction options are applicable to all time series, depending on their metric type and value type. Alignment and reduction may change the metric type or value type of a time series.
Aligning the query
For multiple time series, aligning the data is recommended. Aligned data is more compact to read from the Monitoring API, and lends itself better to visualizations.
The alignment period can be specified using the arguments hours, minutes, and seconds. In the cell below, we do the following:
* select a subset of the instances by using a prefix of the first instance name
* align the time series to 5 minute intervals using an 'ALIGN_MEAN' method.
* plot the time series, and adjust the legend to be outside the plot. You can learn more about legend placement here. | # Filter the query by a common instance name prefix.
common_prefix = instance_names[0].split('-')[0]
query_cpu_aligned = query_cpu.select_metrics(instance_name_prefix=common_prefix)
# Align the query to have data every 5 minutes.
query_cpu_aligned = query_cpu_aligned.align(gcm.Aligner.ALIGN_MEAN, minutes=5)
data_multiple_instances = query_cpu_aligned.as_dataframe(label='instance_name')
# Display the data as a linechart, and move the legend to the right of it.
_ = data_multiple_instances.plot().legend(loc="upper left", bbox_to_anchor=(1,1)) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | 17da2736b4079ece01b335f6a20316f6 |
Reducing the query
In order to combine the data across multiple time series, the reduce() method can be used. The fields to be retained after aggregation must be specified in the method.
For example, to aggregate the results by the zone, 'resource.zone' can be specified. | query_cpu_reduced = query_cpu_aligned.reduce(gcm.Reducer.REDUCE_MEAN, 'resource.zone')
data_per_zone = query_cpu_reduced.as_dataframe('zone')
data_per_zone.tail(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | b8aa50dd1931d6bb5e9120b1603afc73 |
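As a variation, the reduction can also be applied without any grouping fields, which should collapse all matching time series into a single aggregate series (a sketch; see the Reducer documentation for the available aggregation types):

```python
# Sketch: aggregate all instances into one overall mean CPU utilization series.
query_cpu_overall = query_cpu_aligned.reduce(gcm.Reducer.REDUCE_MEAN)
query_cpu_overall.as_dataframe().tail(5)
```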
Displaying the time series as a heatmap
Let us look at the time series at the instance level as a heatmap. A heatmap is a compact representation of the data, and can often highlight patterns.
The diagram below shows the instances along rows, and the timestamps along columns. | import matplotlib
import seaborn
# Set the size of the heatmap to have a better aspect ratio.
div_ratio = 1 if len(data_multiple_instances.columns) == 1 else 2.0
width, height = (size/div_ratio for size in data_multiple_instances.shape)
matplotlib.pyplot.figure(figsize=(width, height))
# Display the data as a heatmap. The timestamps are converted to strings
# for better readability.
_ = seaborn.heatmap(data_multiple_instances.T,
xticklabels=data_multiple_instances.index.map(str),
cmap='YlGnBu') | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | f4c035b789a6b4fc1d93a0fca8c2198d |
Multi-level headers
If you don't provide any labels to as_dataframe, it returns all the resource and metric labels present in the time series as a multi-level header.
This allows you to filter, and aggregate the data more easily. | data_multi_level = query_cpu_aligned.as_dataframe()
data_multi_level.tail(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | a02a99f6a9609dd672b05ac0c3c50945 |
Filter the dataframe
Let us filter the multi-level dataframe based on the common prefix. Applying the filter will look across all column headers. | print('Finding pattern "%s" in the dataframe headers' % (common_prefix,))
data_multi_level.filter(regex=common_prefix).tail(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | bd6bdda8c2ad5afaf36d93ee846c561f |
Aggregate columns in the dataframe
Here, we aggregate the multi-level dataframe at the zone level. This is similar to applying reduction using 'REDUCE_MEAN' on the field 'resource.zone'. | data_multi_level.groupby(level='zone', axis=1).mean().tail(5) | tutorials/Stackdriver Monitoring/Getting started.ipynb | googledatalab/notebooks | apache-2.0 | d7f80283b4fb1e11de181bb7bd962275 |
<a id='sec3.2'></a>
<a id='sec1.2'></a>
1.2 Compute POI Info
Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos. | poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
poi_coords.head() | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | 3f1a71eb7e99456f6889b5c6b40d5f9f |
<a id='sec1.3'></a>
1.3 Construct Travelling Sequences | seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
.groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
seq_all.columns = seq_all.columns.droplevel()
seq_all.reset_index(inplace=True)
seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
seq_all.head() | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | f31ac13b82a9ba8e87b75d2e344df9cd |
<a id='sec1.4'></a>
1.4 Generate KML File for Trajectory
Visualise a trajectory on the map by generating a KML file for the trajectory and its associated POIs. | def generate_kml(fname, seqid_set, seq_all, poi_all):
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
styid = 'style1'
# colors in KML: aabbggrr, aa=00 is fully transparent
sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red
doc = kml.Document(ns, '1', 'Trajectory', 'Trajectory visualization', styles=[sty])
k.append(doc)
poi_set = set()
seq_dict = dict()
for seqid in seqid_set:
# ordered POIs in sequence
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
seq = seqi['poiID'].tolist()
seq_dict[seqid] = seq
for poi in seq: poi_set.add(poi)
# Placemark for trajectory
for seqid in sorted(seq_dict.keys()):
seq = seq_dict[seqid]
desc = 'Trajectory: ' + str(seq[0]) + '->' + str(seq[-1])
pm = kml.Placemark(ns, str(seqid), 'Trajectory ' + str(seqid), desc, styleUrl='#' + styid)
pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in seq])
doc.append(pm)
# Placemark for POI
for poi in sorted(poi_set):
desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']
pm = kml.Placemark(ns, str(poi), 'POI ' + str(poi), desc, styleUrl='#' + styid)
pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])
doc.append(pm)
# save to file
kmlstr = k.to_string(prettyprint=True)
with open(fname, 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
f.write(kmlstr) | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | 0d56593fa19c93d594cc782f31cc4b12 |
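As a usage sketch (the output filename is just an example; data_dir and os.path.join are used the same way in the sections below):

```python
# Hypothetical example call: write a KML file for the first sequence in the data.
example_seqid = seq_all['seqID'].iloc[0]
generate_kml(os.path.join(data_dir, 'example_trajectory.kml'), [example_seqid], seq_all, poi_all)
```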
<a id='sec2'></a>
2. Trajectory with same (start, end) | seq_user = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size)
seq_user.reset_index(inplace=True)
seq_user.rename(columns={'size':'seqLen'}, inplace=True)
seq_user.set_index('seqID', inplace=True)
seq_user.head()
def extract_seq(seqid, seq_all):
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
return seqi['poiID'].tolist()
startend_dict = dict()
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
if (seq[0], seq[-1]) not in startend_dict:
startend_dict[(seq[0], seq[-1])] = [seqid]
else:
startend_dict[(seq[0], seq[-1])].append(seqid)
indices = sorted(startend_dict.keys())
columns = ['#traj', '#user']
startend_seq = pd.DataFrame(data=np.zeros((len(indices), len(columns))), index=indices, columns=columns)
for pair, seqid_set in startend_dict.items():
users = set([seq_user.loc[x, 'userID'] for x in seqid_set])
startend_seq.loc[pair, '#traj'] = len(seqid_set)
startend_seq.loc[pair, '#user'] = len(users)
startend_seq.sort_values(by=['#traj'], ascending=True, inplace=True)
startend_seq.index.name = '(start, end)'
startend_seq.sort_index(inplace=True)
print(startend_seq.shape)
startend_seq | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | f6fbbc4b2d2feb4327e64aa053dfc737 |
<a id='sec3'></a>
3. Trajectory with more than one occurrence
Construct trajectories with more than one occurrence (by the same or different users). | distinct_seq = dict()
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
#if len(seq) < 2: continue # drop trajectory with single point
if str(seq) not in distinct_seq:
distinct_seq[str(seq)] = [(seqid, seq_user.loc[seqid].iloc[0])] # (seqid, user)
else:
distinct_seq[str(seq)].append((seqid, seq_user.loc[seqid].iloc[0]))
print(len(distinct_seq))
#distinct_seq
distinct_seq_df = pd.DataFrame.from_dict({k:len(distinct_seq[k]) for k in sorted(distinct_seq.keys())}, orient='index')
distinct_seq_df.columns = ['#occurrence']
distinct_seq_df.index.name = 'trajectory'
distinct_seq_df['seqLen'] = [len(x.split(',')) for x in distinct_seq_df.index]
distinct_seq_df.sort_index(inplace=True)
print(distinct_seq_df.shape)
distinct_seq_df.head()
plt.figure(figsize=[9, 9])
plt.xlabel('sequence length')
plt.ylabel('#occurrence')
plt.scatter(distinct_seq_df['seqLen'], distinct_seq_df['#occurrence'], marker='+') | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | edf584f28c7116a09372f77194ed3bfa |
Filter out sequences with a single point as well as sequences that occur only once. | distinct_seq_df2 = distinct_seq_df[distinct_seq_df['seqLen'] > 1]
distinct_seq_df2 = distinct_seq_df2[distinct_seq_df2['#occurrence'] > 1]
distinct_seq_df2.head()
plt.figure(figsize=[9, 9])
plt.xlabel('sequence length')
plt.ylabel('#occurrence')
plt.scatter(distinct_seq_df2['seqLen'], distinct_seq_df2['#occurrence'], marker='+') | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | f07d2b5d496f23ac356402397eb9d835 |
<a id='sec4'></a>
4. Visualise Trajectory
<a id='sec4.1'></a>
4.1 Visualise Trajectories with more than one occurrence | for seqstr in distinct_seq_df2.index:
assert(seqstr in distinct_seq)
seqid = distinct_seq[seqstr][0][0]
fname = re.sub(',', '_', re.sub('[ \[\]]', '', seqstr))
fname = os.path.join(data_dir, suffix + '-seq-occur-' + str(len(distinct_seq[seqstr])) + '_' + fname + '.kml')
generate_kml(fname, [seqid], seq_all, poi_all) | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | 326c72042a500517f03b054a803bf08d |
<a id='sec4.2'></a>
4.2 Visualise Trajectories with same (start, end) but different paths | startend_distinct_seq = dict()
distinct_seqid_set = [distinct_seq[x][0][0] for x in distinct_seq_df2.index]
for seqid in distinct_seqid_set:
seq = extract_seq(seqid, seq_all)
if (seq[0], seq[-1]) not in startend_distinct_seq:
startend_distinct_seq[(seq[0], seq[-1])] = [seqid]
else:
startend_distinct_seq[(seq[0], seq[-1])].append(seqid)
for pair in sorted(startend_distinct_seq.keys()):
if len(startend_distinct_seq[pair]) < 2: continue
fname = suffix + '-seq-start_' + str(pair[0]) + '_end_' + str(pair[1]) + '.kml'
fname = os.path.join(data_dir, fname)
print(pair, len(startend_distinct_seq[pair]))
generate_kml(fname, startend_distinct_seq[pair], seq_all, poi_all) | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | 32f96258a85005a8cc3f430c120d7f6e |
<a id='sec5'></a>
5. Visualise the Most Common Edges
<a id='sec5.1'></a>
5.1 Count the occurrence of edges | edge_count = pd.DataFrame(data=np.zeros((poi_all.index.shape[0], poi_all.index.shape[0]), dtype=np.int), \
index=poi_all.index, columns=poi_all.index)
for seqid in seq_all['seqID'].unique():
seq = extract_seq(seqid, seq_all)
for j in range(len(seq)-1):
edge_count.loc[seq[j], seq[j+1]] += 1
edge_count
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
width_set = set()
# Placemark for edges
pm_list = []
for poi1 in poi_all.index:
for poi2 in poi_all.index:
width = edge_count.loc[poi1, poi2]
if width < 1: continue
width_set.add(width)
sid = str(poi1) + '_' + str(poi2)
desc = 'Edge: ' + str(poi1) + '->' + str(poi2) + ', #occurrence: ' + str(width)
pm = kml.Placemark(ns, sid, 'Edge_' + sid, desc, styleUrl='#sty' + str(width))
pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in [poi1, poi2]])
pm_list.append(pm)
# Placemark for POIs
for poi in poi_all.index:
sid = str(poi)
desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']
pm = kml.Placemark(ns, sid, 'POI_' + sid, desc, styleUrl='#sty1')
pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])
pm_list.append(pm)
# Styles
stys = []
for width in width_set:
sid = 'sty' + str(width)
# colors in KML: aabbggrr, aa=00 is fully transparent
stys.append(styles.Style(id=sid, styles=[styles.LineStyle(color='3f0000ff', width=width)])) # transparent red
doc = kml.Document(ns, '1', 'Edges', 'Edge visualization', styles=stys)
for pm in pm_list: doc.append(pm)
k.append(doc)
# save to file
fname = suffix + '-common_edges.kml'
fname = os.path.join(data_dir, fname)
kmlstr = k.to_string(prettyprint=True)
with open(fname, 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
f.write(kmlstr) | tour/traj_visualisation.ipynb | charmasaur/digbeta | gpl-3.0 | f51a856434850904bdad9be5a53a0ff4 |
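To complement the KML view, the most frequent edges can also be listed directly from edge_count:

```python
# Top-10 most common directed edges as a (from POI, to POI) -> count listing.
edge_series = edge_count.stack()
edge_series = edge_series[edge_series > 0].sort_values(ascending=False)
edge_series.head(10)
```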
Discussion
The re.split() function is very handy because it lets you specify multiple regex patterns for the separator. For instance, in the example above the separator can be a comma, a semicolon, or whitespace, followed by any amount of extra whitespace. Whenever that pattern is found, the fields on either side of the matched separator become elements of the result. The return value is a list of fields, the same type of result that str.split() returns.
When you use re.split(), pay particular attention to whether the regular expression contains a capture group enclosed in parentheses. If a capture group is used, the matched text is also included in the result list. For example, look at what happens when you run this code: | fields = re.split(r"(;|,|\s)\s*", line)
fields | 02 strings and text/02.01 split string on multiple delimiters.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 | 3309cad33697598e5bd886c405f80291 |
Getting the separator characters can also be useful in some situations. For example, you might want to keep them so that you can reconstruct a new output string later: | values = fields[::2]
values
delimiters = fields[1::2] + [""]
delimiters
# Reform the line using the same delimiters
"".join(v + d for v, d in zip(values, delimiters)) | 02 strings and text/02.01 split string on multiple delimiters.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 | b97bf8b6a5679bac30c9d26f7946d502 |
If you don't want the separator characters in the result list but still need parentheses to group parts of the regular expression pattern, make sure you use a non-capturing group of the form (?:...). For example: | re.split(r"(?:,|;|\s)\s*", line) | 02 strings and text/02.01 split string on multiple delimiters.ipynb | wuafeing/Python3-Tutorial | gpl-3.0 | 65f50e153b7d3fa432158e2ce06de002
Step 4: Build the models
Split the data back into training/test sets | dummy_train_df = all_dummy_df.loc[train_df.index]
dummy_test_df = all_dummy_df.loc[test_df.index]
dummy_train_df.shape, dummy_test_df.shape | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 44ad56e39336ab4af779e063a2acf43a |
Ridge Regression
Run a Ridge Regression model as a first pass. (For a dataset with many features, this kind of model conveniently lets you throw in all the variables without much feature selection.) | from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 83843da80a334188784ef477dd093b42 |
This step is not strictly necessary; it just converts the DataFrames into NumPy arrays, which play more nicely with sklearn. | X_train = dummy_train_df.values
X_test = dummy_test_df.values | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | f47f1dcb1fe89b4262ca5f91a90a287e |
Use sklearn's built-in cross-validation to evaluate the model | alphas = np.logspace(-3, 2, 50)
test_scores = []
for alpha in alphas:
clf = Ridge(alpha)
test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=10, scoring='neg_mean_squared_error'))
test_scores.append(np.mean(test_score)) | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | f581c70a27c4b7951fe9d335607360b3 |
Store all the CV scores and see which alpha value works best (i.e. tune the hyperparameter). | import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(alphas, test_scores)
plt.title("Alpha vs CV Error"); | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | fd08458930679990430e63450af5a6ba |
As we can see, with alpha roughly between 10 and 20 the score reaches about 0.135.
Random Forest | from sklearn.ensemble import RandomForestRegressor
max_features = [.1, .3, .5, .7, .9, .99]
test_scores = []
for max_feat in max_features:
clf = RandomForestRegressor(n_estimators=200, max_features=max_feat)
test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=5, scoring='neg_mean_squared_error'))
test_scores.append(np.mean(test_score))
plt.plot(max_features, test_scores)
plt.title("Max Features vs CV Error"); | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 5590e3333bf38ac4404e9142cee08da1 |
With the best Random Forest settings the score reaches about 0.137.
Step 5: Ensemble
Here we use a stacking-style idea to draw on the strengths of two or more models.
First, we take the best parameters found above and build our final models | ridge = Ridge(alpha=15)
rf = RandomForestRegressor(n_estimators=500, max_features=.3)
ridge.fit(X_train, y_train)
rf.fit(X_train, y_train) | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 14357a2efb053aa5297059d21a7a86d2 |
As mentioned above, because we applied log(1+x) to the label at the very beginning, we now need to exponentiate the predictions and subtract that 1.
That is exactly what the expm1() function does. | y_ridge = np.expm1(ridge.predict(X_test))
y_rf = np.expm1(rf.predict(X_test)) | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 9b4f50ae997de27bc6571f250fb4d0ea |
A proper ensemble would take the predictions of these models as new input features and make another prediction on top of them. Here we take the simpler approach of directly averaging the predictions. | y_final = (y_ridge + y_rf) / 2 | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | a84f32201a93aefe0215985a86aa15b9
Step 6: Submit the results | submission_df = pd.DataFrame(data= {'Id' : test_df.index, 'SalePrice': y_final}) | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | f91b7660aae90d8280c8dcbd2efacbc7
Our submission looks roughly like this: | submission_df.head(10) | python/kaggle/competition/house-price/house_price.ipynb | muxiaobai/CourseExercises | gpl-2.0 | 38e62cc2bdc43511a60fa2ef4a74d1b5
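To produce the actual file for upload, the DataFrame is typically written out as a CSV (the filename here is just an example):

```python
# Write the submission to disk; the competition expects the Id and SalePrice columns.
submission_df.to_csv('submission.csv', index=False)
```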
Variables, lists and dictionaries | var1 = 1
my_string = "This is a string"
var1
print(my_string)
my_list = [1, 2, 3, 'x', 'y']
my_list
my_list[0]
my_list[1:3]
salaries = {'Mike':2000, 'Ann':3000}
salaries['Mike']
salaries['Jake'] = 2500
salaries | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 4f2727e38a7d6b64eb5f77423cc12937 |
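The keys and values of a dictionary can also be accessed separately (using the salaries dictionary defined above):

```python
# Look at just the names (keys) and just the salaries (values).
salaries.keys()
salaries.values()
```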
Strings | long_string = 'This is a string \n Second line of the string'
print(long_string)
long_string.split(" ")
long_string.split("\n")
long_string.count('s') # case sensitive!
long_string.upper() | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 8a6a81e2fb13d4908078d7b4b9865e8f |
Conditionals | if long_string.startswith('X'):
print('Yes')
elif long_string.startswith('T'):
print('It has T')
else:
print('No') | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 3206807ad080e7e68426f76b43d7e3c4 |
Loops | for line in long_string.split('\n'):
print(line)
c = 0
while c < 10:
c += 2
print(c) | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | fe9c7580dd76c783ab2481dd0cf553c3
List comprehensions | some_numbers = [1,2,3,4]
[x**2 for x in some_numbers] | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 7803eb5851cd9643122887cc97219e7b |
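A comprehension can also include a condition to filter items, for example keeping only the even numbers:

```python
[x**2 for x in some_numbers if x % 2 == 0]
```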
File operations | with open('../README.md', 'r') as f:
content = f.read()
print(content) | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | c24dbc0a5d42abe824a66696caacd7e6 |
Functions | def average(numbers):
return float(sum(numbers)/len(numbers))
average([1,2,2,2.5,3,])
map(average, [[1,2,2,2.5,3,],[3,2.3,4.2,2.5,5,]])
# %load cool_events.py
#!/usr/bin/env python
from IPython.display import HTML
class HUB:
"""
HUB event class
"""
def __init__(self, version):
self.full_name = "Heidelberg Unseminars in Bioinformatics"
self.info = HTML("<p>Heidelberg Unseminars in Bioinformatics are participant-"
"driven meetings where people with an interest in bioinformatics "
"come together to discuss hot topics and exchange ideas and then go "
"for a drink and a snack afterwards.</p>")
self.version = version
def __repr__(self):
return self.full_name
this_event = HUB(21)
this_event
this_event.full_name
this_event.version | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 4a5d4dde970ee5f493353f6dff58da47 |
Python libraries
Library is a collection of resources. These include pre-written code, subroutines, classes, etc. | from math import exp
exp(2) #shift tab to access documentation
import math
math.exp(10)
import numpy as np # Numpy - package for scientifc computing
#import pandas as pd # Pandas - package for working with data frames (tables)
#import Bio # BioPython - package for bioinformatics
#import sklearn # scikit-learn - package for machine learning
#from rdkit import Chem # RDKit - Chemoinformatics library | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | e92fb47b6bfaec1e4fe80a9009b0f672 |
Plotting | %matplotlib inline
import matplotlib.pyplot as plt
x_values = np.arange(0, 20, 0.1)
y_values = [math.sin(x) for x in x_values]
plt.plot(x_values, y_values)
plt.scatter(x_values, y_values)
plt.boxplot(y_values) | notebooks/Intro to Python and Jupyter.ipynb | samoturk/HUB-ipython | mit | 7cabd404b2c827874c864b5a86e09adc |
Load up the tptY3 buzzard mocks. | fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'
hdulist = fits.open(fname)
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0 | notebooks/wt Integral calculation.ipynb | mclaughlin6464/pearce | mit | a9efdb5caed833b7e35f3a23fac19387 |
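A quick way to see which extensions the opened FITS file contains is astropy's HDUList summary (a small sketch, not part of the original analysis):

```python
# Print a summary table of the HDUs (name, type, dimensions) in the file.
hdulist.info()
```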