repo_name | path | copies | size | content | license
---|---|---|---|---|---
pravsripad/mne-python | tutorials/preprocessing/50_artifact_correction_ssp.py | 2 | 23179 | # -*- coding: utf-8 -*-
"""
.. _tut-artifact-ssp:
============================
Repairing artifacts with SSP
============================
This tutorial covers the basics of signal-space projection (SSP) and shows
how SSP can be used for artifact repair; extended examples illustrate use
of SSP for environmental noise reduction, and for repair of ocular and
heartbeat artifacts.
We begin as always by importing the necessary Python modules. To save ourselves
from repeatedly typing ``mne.preprocessing`` we'll directly import a handful of
functions from that submodule:
"""
# %%
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import (create_eog_epochs, create_ecg_epochs,
                               compute_proj_ecg, compute_proj_eog)
# %%
# .. note::
#     Before applying SSP (or any artifact repair strategy), be sure to observe
#     the artifacts in your data to make sure you choose the right repair tool.
#     Sometimes the right tool is no tool at all; if the artifacts are small
#     enough you may not even need to repair them to get good analysis results.
#     See :ref:`tut-artifact-overview` for guidance on detecting and
#     visualizing various types of artifact.
#
#
# What is SSP?
# ^^^^^^^^^^^^
#
# Signal-space projection (SSP) :footcite:`UusitaloIlmoniemi1997` is a
# technique for removing noise from EEG
# and MEG signals by :term:`projecting <projector>` the signal onto a
# lower-dimensional subspace. The subspace is chosen by calculating the average
# pattern across sensors when the noise is present, treating that pattern as
# a "direction" in the sensor space, and constructing the subspace to be
# orthogonal to the noise direction (for a detailed walk-through of projection
# see :ref:`tut-projectors-background`).
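#
# As a toy illustration of that idea (not part of the original analysis here,
# and using made-up numbers rather than MEG data), a rank-1 projector can be
# built and applied with plain NumPy:

# %%
rng = np.random.default_rng(0)
noise_dir = rng.standard_normal(10)        # average noise pattern over 10 sensors
noise_dir /= np.linalg.norm(noise_dir)     # make it a unit-length "direction"
proj_mat = np.eye(10) - np.outer(noise_dir, noise_dir)  # orthogonal-complement projector
fake_data = rng.standard_normal((10, 1000))             # 10 sensors x 1000 samples
cleaned = proj_mat @ fake_data
print(np.abs(noise_dir @ cleaned).max())   # ~1e-15: nothing left along the noise direction

# %%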
#
# The most common use of SSP is to remove noise from MEG signals when the noise
# comes from environmental sources (sources outside the subject's body and the
# MEG system, such as the electromagnetic fields from nearby electrical
# equipment) and when that noise is *stationary* (doesn't change much over the
# duration of the recording). However, SSP can also be used to remove
# biological artifacts such as heartbeat (ECG) and eye movement (EOG)
# artifacts. Examples of each of these are given below.
#
#
# Example: Environmental noise reduction from empty-room recordings
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# The :ref:`example data <sample-dataset>` was recorded on a Neuromag system,
# which stores SSP projectors for environmental noise removal in the system
# configuration (so that reasonably clean raw data can be viewed in real-time
# during acquisition). For this reason, all the `~mne.io.Raw` data in
# the example dataset already includes SSP projectors, which are noted in the
# output when loading the data:
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
# here we crop and resample just for speed
raw = mne.io.read_raw_fif(sample_data_raw_file).crop(0, 60)
raw.load_data().resample(100)
# %%
# The :ref:`example data <sample-dataset>` also includes an "empty room"
# recording taken the same day as the recording of the subject. This will
# provide a more accurate estimate of environmental noise than the projectors
# stored with the system (which are typically generated during annual
# maintenance and tuning). Since we have this subject-specific empty-room
# recording, we'll create our own projectors from it and discard the
# system-provided SSP projectors (saving them first, for later comparison with
# the custom ones):
system_projs = raw.info['projs']
raw.del_proj()
empty_room_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                               'ernoise_raw.fif')
# cropped to 30 seconds just for speed
empty_room_raw = mne.io.read_raw_fif(empty_room_file).crop(0, 30)
# %%
# Notice that the empty room recording itself has the system-provided SSP
# projectors in it, so we'll remove those from the empty room file too.
empty_room_raw.del_proj()
# %%
# Visualizing the empty-room noise
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Let's take a look at the spectrum of the empty room noise. We can view an
# individual spectrum for each sensor, or an average (with confidence band)
# across sensors:
for average in (False, True):
    empty_room_raw.plot_psd(average=average, dB=False, xscale='log')
# %%
# Creating the empty-room projectors
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We create the SSP vectors using `~mne.compute_proj_raw`, and control
# the number of projectors with parameters ``n_grad`` and ``n_mag``. Once
# created, the field pattern of the projectors can be easily visualized with
# `~mne.viz.plot_projs_topomap`. We include the parameter
# ``vlim='joint'`` so that the colormap is computed jointly for all projectors
# of a given channel type; this makes it easier to compare their relative
# smoothness. Note that for the function to know the types of channels in a
# projector, you must also provide the corresponding `~mne.Info` object:
empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3)
mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint',
                           info=empty_room_raw.info)
# %%
# Notice that the gradiometer-based projectors seem to reflect problems with
# individual sensor units rather than a global noise source (indeed, planar
# gradiometers are much less sensitive to distant sources). This is the reason
# that the system-provided noise projectors are computed only for
# magnetometers. Comparing the system-provided projectors to the
# subject-specific ones, we can see they are reasonably similar (though in a
# different order) and the left-right component seems to have changed
# polarity.
fig, axs = plt.subplots(2, 3)
for idx, _projs in enumerate([system_projs, empty_room_projs[3:]]):
    mne.viz.plot_projs_topomap(_projs, axes=axs[idx], colorbar=True,
                               vlim='joint', info=empty_room_raw.info)
# %%
# Visualizing how projectors affect the signal
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We could visualize the different effects these have on the data by applying
# each set of projectors to different copies of the `~mne.io.Raw` object
# using `~mne.io.Raw.apply_proj`. However, the `~mne.io.Raw.plot`
# method has a ``proj`` parameter that allows us to *temporarily* apply
# projectors while plotting, so we can use this to visualize the difference
# without needing to copy the data. Because the projectors are so similar, we
# need to zoom in pretty close on the data to see any differences:
mags = mne.pick_types(raw.info, meg='mag')
for title, projs in [('system', system_projs),
                     ('subject-specific', empty_room_projs[3:])]:
    raw.add_proj(projs, remove_existing=True)
    with mne.viz.use_browser_backend('matplotlib'):
        fig = raw.plot(proj=True, order=mags, duration=1, n_channels=2)
    fig.subplots_adjust(top=0.9)  # make room for title
    fig.suptitle('{} projectors'.format(title), size='xx-large',
                 weight='bold')
# %%
# The effect is sometimes easier to see on averaged data. Here we use an
# interactive feature of `mne.Evoked.plot_topomap` to turn projectors on
# and off to see the effect on the data. Of course, the interactivity won't
# work on the tutorial website, but you can download the tutorial and try it
# locally:
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1}
# NOTE: appropriate rejection criteria are highly data-dependent
reject = dict(mag=4000e-15,   # 4000 fT
              grad=4000e-13,  # 4000 fT/cm
              eeg=150e-6,     # 150 µV
              eog=250e-6)     # 250 µV
# time range where we expect to see the auditory N100: 50-150 ms post-stimulus
times = np.linspace(0.05, 0.15, 5)
epochs = mne.Epochs(raw, events, event_id, proj='delayed', reject=reject)
fig = epochs.average().plot_topomap(times, proj='interactive')
# %%
# Plotting the ERP/F using ``evoked.plot()`` or ``evoked.plot_joint()`` with
# and without projectors applied can also be informative, as can plotting with
# ``proj='reconstruct'``, which can reduce the signal bias introduced by
# projections (see :ref:`tut-artifact-ssp-reconstruction` below).
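#
# As a minimal sketch (not used further in this tutorial, and assuming the
# ``epochs`` object created above), the three ``proj`` modes could be compared
# like this:

# %%
for proj_mode in (False, True, 'reconstruct'):
    epochs.average().plot(proj=proj_mode, spatial_colors=True)

# %%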
#
# Example: EOG and ECG artifact repair
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# Visualizing the artifacts
# ~~~~~~~~~~~~~~~~~~~~~~~~~
#
# As mentioned in :ref:`the ICA tutorial <tut-artifact-ica>`, an important
# first step is visualizing the artifacts you want to repair. Here they are in
# the raw data:
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
# %%
# Repairing ECG artifacts with SSP
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# MNE-Python provides several functions for detecting and removing heartbeats
# from EEG and MEG data. As we saw in :ref:`tut-artifact-overview`,
# `~mne.preprocessing.create_ecg_epochs` can be used to both detect and
# extract heartbeat artifacts into an `~mne.Epochs` object, which can
# be used to visualize how the heartbeat artifacts manifest across the sensors:
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.plot_joint()
# %%
# Looks like the EEG channels are pretty spread out; let's baseline-correct and
# plot again:
ecg_evoked.apply_baseline((None, None))
ecg_evoked.plot_joint()
# %%
# To compute SSP projectors for the heartbeat artifact, you can use
# `~mne.preprocessing.compute_proj_ecg`, which takes a
# `~mne.io.Raw` object as input and returns the requested number of
# projectors for magnetometers, gradiometers, and EEG channels (default is two
# projectors for each channel type).
# `~mne.preprocessing.compute_proj_ecg` also returns an :term:`events`
# array containing the sample numbers corresponding to the peak of the
# `R wave <https://en.wikipedia.org/wiki/QRS_complex>`__ of each detected
# heartbeat.
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None)
# %%
# The first line of output tells us that
# `~mne.preprocessing.compute_proj_ecg` found three existing projectors
# already in the `~mne.io.Raw` object, and will include those in the
# list of projectors that it returns (appending the new ECG projectors to the
# end of the list). If you don't want that, you can change that behavior with
# the boolean ``no_proj`` parameter. Since we've already run the computation,
# we can just as easily separate out the ECG projectors by indexing the list of
# projectors:
ecg_projs = projs[3:]
print(ecg_projs)
# %%
# Just like with the empty-room projectors, we can visualize the scalp
# distribution:
mne.viz.plot_projs_topomap(ecg_projs, info=raw.info)
# %%
# Moreover, because these projectors were created using epochs chosen
# specifically because they contain time-locked artifacts, we can do a
# joint plot of the projectors and their effect on the time-averaged epochs.
# This figure has three columns:
#
# 1. The left shows the data traces before (black) and after (green)
#    projection. We can see that the ECG artifact is well suppressed by one
#    projector per channel type.
# 2. The center shows the topomaps associated with the projectors, in this case
#    just a single topography for our one projector per channel type.
# 3. The right again shows the data traces (black), but this time with those
#    traces also projected onto the first projector for each channel type (red)
#    plus one surrogate ground truth for an ECG channel (MEG 0111).
# sphinx_gallery_thumbnail_number = 17
# ideally here we would just do `picks_trace='ecg'`, but this dataset did not
# have a dedicated ECG channel recorded, so we just pick a channel that was
# very sensitive to the artifact
fig = mne.viz.plot_projs_joint(ecg_projs, ecg_evoked, picks_trace='MEG 0111')
fig.suptitle('ECG projectors')
# %%
# Since no dedicated ECG sensor channel was detected in the
# `~mne.io.Raw` object, by default
# `~mne.preprocessing.compute_proj_ecg` used the magnetometers to
# estimate the ECG signal (as stated on the third line of output, above). You
# can also supply the ``ch_name`` parameter to restrict which channel to use
# for ECG artifact detection; this is most useful when you had an ECG sensor
# but it is not labeled as such in the `~mne.io.Raw` file.
#
# The next few lines of the output describe the filter used to isolate ECG
# events. The default settings are usually adequate, but the filter can be
# customized via the parameters ``ecg_l_freq``, ``ecg_h_freq``, and
# ``filter_length`` (see the documentation of
# `~mne.preprocessing.compute_proj_ecg` for details).
#
# .. TODO what are the cases where you might need to customize the ECG filter?
# infants? Heart murmur?
#
# Once the ECG events have been identified,
# `~mne.preprocessing.compute_proj_ecg` will also filter the data
# channels before extracting epochs around each heartbeat, using the parameter
# values given in ``l_freq``, ``h_freq``, ``filter_length``, ``filter_method``,
# and ``iir_params``. Here again, the default parameter values are usually
# adequate.
#
# .. TODO should advice for filtering here be the same as advice for filtering
# raw data generally? (e.g., keep high-pass very low to avoid peak shifts?
# what if your raw data is already filtered?)
#
# By default, the filtered epochs will be averaged together
# before the projection is computed; this can be controlled with the boolean
# ``average`` parameter. In general this improves the signal-to-noise ratio
# (where the "signal" here is our artifact!) because the artifact's temporal
# waveform is fairly similar across epochs and well time-locked to the
# detected events.
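#
# For instance (a sketch only; we don't use the result below), the projectors
# could be computed from the individual filtered epochs instead of their
# average:

# %%
ecg_projs_noavg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1,
                                      average=False, no_proj=True, reject=None)
print(ecg_projs_noavg)

# %%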
#
# To get a sense of how the heartbeat affects the signal at each sensor, you
# can plot the data with and without the ECG projectors:
raw.del_proj()
for title, proj in [('Without', empty_room_projs), ('With', ecg_projs)]:
    raw.add_proj(proj, remove_existing=False)
    with mne.viz.use_browser_backend('matplotlib'):
        fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
    fig.subplots_adjust(top=0.9)  # make room for title
    fig.suptitle('{} ECG projectors'.format(title), size='xx-large',
                 weight='bold')
# %%
# Finally, note that above we passed ``reject=None`` to the
# `~mne.preprocessing.compute_proj_ecg` function, meaning that all
# detected ECG epochs would be used when computing the projectors (regardless
# of signal quality in the data sensors during those epochs). The default
# behavior is to reject epochs based on signal amplitude: epochs with
# peak-to-peak amplitudes exceeding 50 µV in EEG channels, 250 µV in EOG
# channels, 2000 fT/cm in gradiometer channels, or 3000 fT in magnetometer
# channels. You can change these thresholds by passing a dictionary with keys
# ``eeg``, ``eog``, ``mag``, and ``grad`` (though be sure to pass the threshold
# values in volts, teslas, or teslas/meter). Generally, it is a good idea to
# reject such epochs when computing the ECG projectors (since presumably the
# high-amplitude fluctuations in the channels are noise, not reflective of
# brain activity); passing ``reject=None`` above was done simply to avoid the
# dozens of extra lines of output (enumerating which sensor(s) were responsible
# for each rejected epoch) from cluttering up the tutorial.
#
# .. note::
#
#     `~mne.preprocessing.compute_proj_ecg` has a similar parameter
#     ``flat`` for specifying the *minimum* acceptable peak-to-peak amplitude
#     for each channel type.
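#
# For reference, here is a sketch of what passing the default thresholds
# explicitly would look like (values in SI units; we don't use the result
# below):

# %%
reject_ecg = dict(grad=2000e-13,  # 2000 fT/cm
                  mag=3000e-15,   # 3000 fT
                  eeg=50e-6,      # 50 µV
                  eog=250e-6)     # 250 µV
ecg_projs_reject, _ = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1,
                                       no_proj=True, reject=reject_ecg)

# %%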
#
# While `~mne.preprocessing.compute_proj_ecg` conveniently combines
# several operations into a single function, MNE-Python also provides functions
# for performing each part of the process. Specifically:
#
# - `mne.preprocessing.find_ecg_events` for detecting heartbeats in a
#   `~mne.io.Raw` object and returning a corresponding :term:`events`
#   array
#
# - `mne.preprocessing.create_ecg_epochs` for detecting heartbeats in a
#   `~mne.io.Raw` object and returning an `~mne.Epochs` object
#
# - `mne.compute_proj_epochs` for creating projector(s) from any
#   `~mne.Epochs` object
#
# See the documentation of each function for further details.
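#
# As a brief sketch of that manual pipeline (the epoch limits here are
# illustrative choices, not necessarily the defaults used by
# `~mne.preprocessing.compute_proj_ecg`):

# %%
ecg_epochs = create_ecg_epochs(raw, tmin=-0.2, tmax=0.4)
ecg_projs_manual = mne.compute_proj_epochs(ecg_epochs, n_grad=1, n_mag=1,
                                           n_eeg=1, desc_prefix='ECG-manual')
print(ecg_projs_manual)

# %%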
#
#
# Repairing EOG artifacts with SSP
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Once again let's visualize our artifact before trying to repair it. We've
# seen above the large deflections in frontal EEG channels in the raw data;
# here is how the ocular artifact manifests across all the sensors:
eog_evoked = create_eog_epochs(raw).average(picks='all')
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
# %%
# Just like we did with the heartbeat artifact, we can compute SSP projectors
# for the ocular artifact using `~mne.preprocessing.compute_proj_eog`,
# which again takes a `~mne.io.Raw` object as input and returns the
# requested number of projectors for magnetometers, gradiometers, and EEG
# channels (default is two projectors for each channel type). This time, we'll
# pass the ``no_proj`` parameter (so we get back only the new EOG projectors, not
# also the existing projectors in the `~mne.io.Raw` object), and we'll
# ignore the events array by assigning it to ``_`` (the conventional way of
# handling unwanted return elements in Python).
eog_projs, _ = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None,
                                no_proj=True)
# %%
# Just like with the empty-room and ECG projectors, we can visualize the scalp
# distribution:
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
# %%
# And we can do a joint image:
fig = mne.viz.plot_projs_joint(eog_projs, eog_evoked, 'eog')
fig.suptitle('EOG projectors')
# %%
# Finally, we can repeat the joint visualization, this time deliberately
# making a bad choice: selecting *two* EOG projectors for the EEG and
# magnetometers. The extra projectors show up as noise in the plot. Even though
# the projected time course (left column) looks perhaps okay, problems show
# up in the center (topomaps) and right plots (projection of channel data
# onto the projection vector):
#
# 1. The second magnetometer topomap has a bilateral auditory field pattern.
# 2. The uniformly-scaled projected time courses (solid lines) show that,
#    while the first projector trace (red) has a large EOG-like amplitude, the
#    second projector trace (blue-green) is much smaller.
# 3. The re-normalized projected time courses show that the second PCA trace
#    is very noisy relative to the EOG channel data (yellow).
eog_projs_bad, _ = compute_proj_eog(
    raw, n_grad=1, n_mag=2, n_eeg=2, reject=None,
    no_proj=True)
fig = mne.viz.plot_projs_joint(eog_projs_bad, eog_evoked, picks_trace='eog')
fig.suptitle('Too many EOG projectors')
# %%
# Now we repeat the plot from above (with empty room and ECG projectors) and
# compare it to a plot with empty room, ECG, and EOG projectors, to see how
# well the ocular artifacts have been repaired:
for title in ('Without', 'With'):
    if title == 'With':
        raw.add_proj(eog_projs)
    with mne.viz.use_browser_backend('matplotlib'):
        fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
    fig.subplots_adjust(top=0.9)  # make room for title
    fig.suptitle('{} EOG projectors'.format(title), size='xx-large',
                 weight='bold')
# %%
# Notice that the small peaks in the first two magnetometer channels (``MEG
# 1411`` and ``MEG 1421``) that occur at the same time as the large EEG
# deflections have also been removed.
#
#
# Choosing the number of projectors
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# In the examples above, we used 3 projectors (all magnetometer) to capture
# empty room noise, and saw how projectors computed for the gradiometers failed
# to capture *global* patterns (and thus we discarded the gradiometer
# projectors). Then we computed 3 projectors (1 for each channel type) to
# capture the heartbeat artifact, and 3 more to capture the ocular artifact.
# How did we choose these numbers? The short answer is "based on experience":
# knowing how heartbeat artifacts typically manifest across the sensor array
# allows us to recognize them when we see them, and recognize when additional
# projectors are capturing something else other than a heartbeat artifact (and
# thus may be removing brain signal and should be discarded).
#
# .. _tut-artifact-ssp-reconstruction:
#
# Visualizing SSP sensor-space bias via signal reconstruction
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# .. sidebar:: SSP reconstruction
#
#     Internally, the reconstruction is performed by effectively using a
#     minimum-norm source localization to a spherical source space with the
#     projections accounted for, and then projecting the source-space data
#     back out to sensor space.
#
# Because SSP performs an orthogonal projection, any spatial component in the
# data that is not perfectly orthogonal to the SSP spatial direction(s) will
# have its overall amplitude reduced by the projection operation. In other
# words, SSP typically introduces some amount of amplitude reduction bias in
# the sensor space data.
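#
# A tiny numeric illustration of that bias (made-up vectors, not MEG data):
# a pattern that is not orthogonal to the removed direction loses amplitude.

# %%
u = np.full(4, 0.5)                    # unit-norm "noise direction" in a 4-sensor toy space
toy_proj = np.eye(4) - np.outer(u, u)  # rank-1 SSP-style projector
sig = np.array([1., 1., 0., 0.])       # a "signal" pattern overlapping u
print(np.linalg.norm(toy_proj @ sig) / np.linalg.norm(sig))  # ~0.71: amplitude reduced

# %%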
#
# When performing source localization of M/EEG data, these projections are
# properly taken into account by being applied not just to the M/EEG data
# but also to the forward solution, and hence SSP should not bias the estimated
# source amplitudes. However, for sensor space analyses, it can be useful to
# visualize the extent to which SSP projection has biased the data. This can be
# explored by using ``proj='reconstruct'`` in evoked plotting functions, for
# example via `evoked.plot() <mne.Evoked.plot>`, here restricted to just
# EEG channels for speed:
evoked_eeg = epochs.average().pick('eeg')
evoked_eeg.del_proj().add_proj(ecg_projs).add_proj(eog_projs)
fig, axes = plt.subplots(1, 3, figsize=(8, 3), squeeze=False)
for ii in range(axes.shape[0]):
    axes[ii, 0].get_shared_y_axes().join(*axes[ii])
for pi, proj in enumerate((False, True, 'reconstruct')):
    evoked_eeg.plot(proj=proj, axes=axes[:, pi], spatial_colors=True)
    if pi == 0:
        for ax in axes[:, pi]:
            parts = ax.get_title().split('(')
            ax.set(ylabel=f'{parts[0]} ({ax.get_ylabel()})\n'
                          f'{parts[1].replace(")", "")}')
    axes[0, pi].set(title=f'proj={proj}')
    for text in list(axes[0, pi].texts):
        text.remove()
plt.setp(axes[1:, :].ravel(), title='')
plt.setp(axes[:, 1:].ravel(), ylabel='')
plt.setp(axes[:-1, :].ravel(), xlabel='')
mne.viz.tight_layout()
# %%
# Note that here the bias in the EEG and magnetometer channels is reduced by
# the reconstruction. This suggests that the application of SSP has slightly
# reduced the amplitude of our signals in sensor space, but that it should not
# bias the amplitudes in source space.
#
# References
# ^^^^^^^^^^
#
# .. footbibliography::
| bsd-3-clause |
pravsripad/mne-python | examples/inverse/dics_source_power.py | 11 | 3492 | # -*- coding: utf-8 -*-
"""
.. _ex-inverse-source-power:
==========================================
Compute source power using DICS beamformer
==========================================
Compute a Dynamic Imaging of Coherent Sources (DICS) :footcite:`GrossEtAl2001`
filter from single-trial activity to estimate source power across a frequency
band. This example demonstrates how to source localize the event-related
synchronization (ERS) of beta band activity in the
:ref:`somato dataset <somato-dataset>`.
"""
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD-3-Clause
# %%
import numpy as np
import mne
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
# %%
# Reading the raw data and creating epochs:
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = (data_path / f'sub-{subject}' / 'meg' /
             f'sub-{subject}_task-{task}_meg.fif')
# Use a shorter segment of raw just for speed here
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(0, 120)  # two minutes for speed (looks similar to using all ~800 sec)
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, preload=True)
del raw
# Paths to forward operator and FreeSurfer subject directory
fname_fwd = (data_path / 'derivatives' / f'sub-{subject}' /
             f'sub-{subject}_task-{task}-fwd.fif')
subjects_dir = data_path / 'derivatives' / 'freesurfer' / 'subjects'
# %%
# We are interested in the beta band. Define a range of frequencies, using a
# log scale, from 12 to 30 Hz.
freqs = np.logspace(np.log10(12), np.log10(30), 9)
# %%
# Computing the cross-spectral density matrix for the beta frequency band, for
# different time intervals. We use a decim value of 20 to speed up the
# computation in this example, at the cost of some accuracy.
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=-1, tmax=0, decim=20)
# ERS activity starts at 0.5 seconds after stimulus onset
csd_ers = csd_morlet(epochs, freqs, tmin=0.5, tmax=1.5, decim=20)
info = epochs.info
del epochs
# %%
# To compute the source power for a frequency band, rather than each frequency
# separately, we average the CSD objects across frequencies.
csd = csd.mean()
csd_baseline = csd_baseline.mean()
csd_ers = csd_ers.mean()
# %%
# Computing DICS spatial filters using the CSD that was computed on the entire
# timecourse.
fwd = mne.read_forward_solution(fname_fwd)
filters = make_dics(info, fwd, csd, noise_csd=csd_baseline,
                    pick_ori='max-power', reduce_rank=True, real_filter=True)
del fwd
# %%
# Applying DICS spatial filters separately to the CSD computed using the
# baseline and the CSD computed during the ERS activity.
baseline_source_power, freqs = apply_dics_csd(csd_baseline, filters)
beta_source_power, freqs = apply_dics_csd(csd_ers, filters)
# %%
# Visualizing source power during ERS activity relative to the baseline power.
stc = beta_source_power / baseline_source_power
message = 'DICS source power in the 12-30 Hz frequency band'
brain = stc.plot(hemi='both', views='axial', subjects_dir=subjects_dir,
                 subject=subject, time_label=message)
# %%
# References
# ----------
# .. footbibliography::
| bsd-3-clause |
herilalaina/scikit-learn | examples/ensemble/plot_random_forest_regression_multioutput.py | 27 | 2685 | """
============================================================
Comparing random forests and the multi-output meta estimator
============================================================
An example to compare multi-output regression with random forest and
the :ref:`multioutput.MultiOutputRegressor <multiclass>` meta-estimator.
This example illustrates the use of the
:ref:`multioutput.MultiOutputRegressor <multiclass>` meta-estimator
to perform multi-output regression. A random forest regressor is used,
which supports multi-output regression natively, so the results can be
compared.
The random forest regressor will only ever predict values within the
range of observations or closer to zero for each of the targets. As a
result the predictions are biased towards the centre of the circle.
Using a single underlying feature the model learns both the
x and y coordinate as output.
"""
print(__doc__)
# Author: Tim Head <betatim@gmail.com>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(600, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y += (0.5 - rng.rand(*y.shape))
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    train_size=400,
                                                    random_state=4)
max_depth = 30
regr_multirf = MultiOutputRegressor(RandomForestRegressor(max_depth=max_depth,
                                                          random_state=0))
regr_multirf.fit(X_train, y_train)
regr_rf = RandomForestRegressor(max_depth=max_depth, random_state=2)
regr_rf.fit(X_train, y_train)
# Predict on new data
y_multirf = regr_multirf.predict(X_test)
y_rf = regr_rf.predict(X_test)
# Plot the results
plt.figure()
s = 50
a = 0.4
plt.scatter(y_test[:, 0], y_test[:, 1], edgecolor='k',
            c="navy", s=s, marker="s", alpha=a, label="Data")
plt.scatter(y_multirf[:, 0], y_multirf[:, 1], edgecolor='k',
            c="cornflowerblue", s=s, alpha=a,
            label="Multi RF score=%.2f" % regr_multirf.score(X_test, y_test))
plt.scatter(y_rf[:, 0], y_rf[:, 1], edgecolor='k',
            c="c", s=s, marker="^", alpha=a,
            label="RF score=%.2f" % regr_rf.score(X_test, y_test))
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("target 1")
plt.ylabel("target 2")
plt.title("Comparing random forests and the multi-output meta estimator")
plt.legend()
plt.show()
| bsd-3-clause |
pravsripad/mne-python | mne/conftest.py | 3 | 31290 | # -*- coding: utf-8 -*-
# Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
from contextlib import contextmanager
import inspect
from textwrap import dedent
import gc
import os
import os.path as op
from pathlib import Path
import shutil
import sys
import warnings
import pytest
from unittest import mock
import numpy as np
import mne
from mne import read_events, pick_types, Epochs
from mne.channels import read_layout
from mne.coreg import create_default_subject
from mne.datasets import testing
from mne.fixes import has_numba, _compare_version
from mne.io import read_raw_fif, read_raw_ctf, read_raw_nirx, read_raw_snirf
from mne.stats import cluster_level
from mne.utils import (_pl, _assert_no_instances, numerics, Bunch,
                       _check_qt_version, _TempDir)
# data from sample dataset
from mne.viz._figure import use_browser_backend
test_path = testing.data_path(download=False)
s_path = op.join(test_path, 'MEG', 'sample')
fname_evoked = op.join(s_path, 'sample_audvis_trunc-ave.fif')
fname_cov = op.join(s_path, 'sample_audvis_trunc-cov.fif')
fname_fwd = op.join(s_path, 'sample_audvis_trunc-meg-eeg-oct-4-fwd.fif')
fname_fwd_full = op.join(s_path, 'sample_audvis_trunc-meg-eeg-oct-6-fwd.fif')
bem_path = op.join(test_path, 'subjects', 'sample', 'bem')
fname_bem = op.join(bem_path, 'sample-1280-bem.fif')
fname_aseg = op.join(test_path, 'subjects', 'sample', 'mri', 'aseg.mgz')
subjects_dir = op.join(test_path, 'subjects')
fname_src = op.join(bem_path, 'sample-oct-4-src.fif')
fname_trans = op.join(s_path, 'sample_audvis_trunc-trans.fif')
ctf_dir = op.join(test_path, 'CTF')
fname_ctf_continuous = op.join(ctf_dir, 'testdata_ctf.ds')
nirx_path = test_path / 'NIRx'
snirf_path = test_path / 'SNIRF'
nirsport2 = nirx_path / 'nirsport_v2' / 'aurora_recording _w_short_and_acc'
nirsport2_snirf = (
    snirf_path / 'NIRx' / 'NIRSport2' / '1.0.3' /
    '2021-05-05_001.snirf')
nirsport2_2021_9 = nirx_path / 'nirsport_v2' / 'aurora_2021_9'
nirsport2_20219_snirf = (
    snirf_path / 'NIRx' / 'NIRSport2' / '2021.9' /
    '2021-10-01_002.snirf')
# data from mne.io.tests.data
base_dir = op.join(op.dirname(__file__), 'io', 'tests', 'data')
fname_raw_io = op.join(base_dir, 'test_raw.fif')
fname_event_io = op.join(base_dir, 'test-eve.fif')
fname_cov_io = op.join(base_dir, 'test-cov.fif')
fname_evoked_io = op.join(base_dir, 'test-ave.fif')
event_id, tmin, tmax = 1, -0.1, 1.0
vv_layout = read_layout('Vectorview-all')
collect_ignore = [
    'export/_brainvision.py',
    'export/_eeglab.py',
    'export/_edf.py']
def pytest_configure(config):
"""Configure pytest options."""
# Markers
for marker in ('slowtest', 'ultraslowtest', 'pgtest'):
config.addinivalue_line('markers', marker)
# Fixtures
for fixture in ('matplotlib_config', 'close_all', 'check_verbose',
'qt_config', 'protect_config'):
config.addinivalue_line('usefixtures', fixture)
# pytest-qt uses PYTEST_QT_API, but let's make it respect qtpy's QT_API
# if present
if os.getenv('PYTEST_QT_API') is None and os.getenv('QT_API') is not None:
os.environ['PYTEST_QT_API'] = os.environ['QT_API']
# Warnings
# - Once SciPy updates not to have non-integer and non-tuple errors (1.2.0)
# we should remove them from here.
# - This list should also be considered alongside reset_warnings in
# doc/conf.py.
if os.getenv('MNE_IGNORE_WARNINGS_IN_TESTS', '') != 'true':
first_kind = 'error'
else:
first_kind = 'always'
warning_lines = r"""
{0}::
# matplotlib->traitlets (notebook)
ignore:Passing unrecognized arguments to super.*:DeprecationWarning
# notebook tests
ignore:There is no current event loop:DeprecationWarning
ignore:unclosed <socket\.socket:ResourceWarning
ignore:unclosed event loop <:ResourceWarning
# ignore if joblib is missing
ignore:joblib not installed.*:RuntimeWarning
# TODO: This is indicative of a problem
ignore:.*Matplotlib is currently using agg.*:
# qdarkstyle
ignore:.*Setting theme=.*:RuntimeWarning
# scikit-learn using this arg
ignore:.*The 'sym_pos' keyword is deprecated.*:DeprecationWarning
# Should be removable by 2022/07/08, SciPy savemat issue
ignore:.*elementwise comparison failed; returning scalar in.*:FutureWarning
# numba with NumPy dev
ignore:`np.MachAr` is deprecated.*:DeprecationWarning
""".format(first_kind) # noqa: E501
for warning_line in warning_lines.split('\n'):
warning_line = warning_line.strip()
if warning_line and not warning_line.startswith('#'):
config.addinivalue_line('filterwarnings', warning_line)
# Have to be careful with autouse=True, but this is just an int comparison
# so it shouldn't really add appreciable overhead
@pytest.fixture(autouse=True)
def check_verbose(request):
"""Set to the default logging level to ensure it's tested properly."""
starting_level = mne.utils.logger.level
yield
# ensures that no tests break the global state
try:
assert mne.utils.logger.level == starting_level
except AssertionError:
pytest.fail('.'.join([request.module.__name__,
request.function.__name__]) +
' modifies logger.level')
@pytest.fixture(autouse=True)
def close_all():
"""Close all matplotlib plots, regardless of test status."""
# This adds < 1 µs in local testing, and we have ~2500 tests, so ~2 ms max
import matplotlib.pyplot as plt
yield
plt.close('all')
@pytest.fixture(autouse=True)
def add_mne(doctest_namespace):
"""Add mne to the namespace."""
doctest_namespace["mne"] = mne
@pytest.fixture(scope='function')
def verbose_debug():
"""Run a test with debug verbosity."""
with mne.utils.use_log_level('debug'):
yield
@pytest.fixture(scope='session')
def qt_config():
"""Configure the Qt backend for viz tests."""
os.environ['_MNE_BROWSER_NO_BLOCK'] = 'true'
@pytest.fixture(scope='session')
def matplotlib_config():
"""Configure matplotlib for viz tests."""
import matplotlib
from matplotlib import cbook
# Allow for easy interactive debugging with a call like:
#
# $ MNE_MPL_TESTING_BACKEND=Qt5Agg pytest mne/viz/tests/test_raw.py -k annotation -x --pdb # noqa: E501
#
try:
want = os.environ['MNE_MPL_TESTING_BACKEND']
except KeyError:
want = 'agg' # don't pop up windows
with warnings.catch_warnings(record=True): # ignore warning
warnings.filterwarnings('ignore')
matplotlib.use(want, force=True)
import matplotlib.pyplot as plt
assert plt.get_backend() == want
# overwrite some params that can horribly slow down tests that
# users might have changed locally (but should not otherwise affect
# functionality)
plt.ioff()
plt.rcParams['figure.dpi'] = 100
try:
plt.rcParams['figure.raise_window'] = False
except KeyError: # MPL < 3.3
pass
# Make sure that we always reraise exceptions in handlers
orig = cbook.CallbackRegistry
class CallbackRegistryReraise(orig):
def __init__(self, exception_handler=None, signals=None):
super(CallbackRegistryReraise, self).__init__(exception_handler)
cbook.CallbackRegistry = CallbackRegistryReraise
@pytest.fixture(scope='session')
def ci_macos():
"""Determine if running on MacOS CI."""
return (os.getenv('CI', 'false').lower() == 'true' and
sys.platform == 'darwin')
@pytest.fixture(scope='session')
def azure_windows():
"""Determine if running on Azure Windows."""
return (os.getenv('AZURE_CI_WINDOWS', 'false').lower() == 'true' and
sys.platform.startswith('win'))
@pytest.fixture()
def check_gui_ci(ci_macos, azure_windows):
"""Skip tests that are not reliable on CIs."""
if azure_windows or ci_macos:
pytest.skip('Skipping GUI tests on MacOS CIs and Azure Windows')
@pytest.fixture(scope='function')
def raw_orig():
"""Get raw data without any change to it from mne.io.tests.data."""
raw = read_raw_fif(fname_raw_io, preload=True)
return raw
@pytest.fixture(scope='function')
def raw():
"""
Get raw data and pick channels to reduce load for testing.
(from mne.io.tests.data)
"""
raw = read_raw_fif(fname_raw_io, preload=True)
# Throws a warning about a changed unit.
with pytest.warns(RuntimeWarning, match='unit'):
raw.set_channel_types({raw.ch_names[0]: 'ias'})
raw.pick_channels(raw.ch_names[:9])
raw.info.normalize_proj() # Fix projectors after subselection
return raw
@pytest.fixture(scope='function')
def raw_ctf():
"""Get ctf raw data from mne.io.tests.data."""
raw_ctf = read_raw_ctf(fname_ctf_continuous, preload=True)
return raw_ctf
@pytest.fixture(scope='function')
def events():
"""Get events from mne.io.tests.data."""
return read_events(fname_event_io)
def _get_epochs(stop=5, meg=True, eeg=False, n_chan=20):
"""Get epochs."""
raw = read_raw_fif(fname_raw_io)
events = read_events(fname_event_io)
picks = pick_types(raw.info, meg=meg, eeg=eeg, stim=False,
ecg=False, eog=False, exclude='bads')
# Use a subset of channels for plotting speed
picks = np.round(np.linspace(0, len(picks) + 1, n_chan)).astype(int)
with pytest.warns(RuntimeWarning, match='projection'):
epochs = Epochs(raw, events[:stop], event_id, tmin, tmax, picks=picks,
proj=False, preload=False)
epochs.info.normalize_proj() # avoid warnings
return epochs
@pytest.fixture()
def epochs():
"""
Get minimal, pre-loaded epochs data suitable for most tests.
(from mne.io.tests.data)
"""
return _get_epochs().load_data()
@pytest.fixture()
def epochs_unloaded():
"""Get minimal, unloaded epochs data from mne.io.tests.data."""
return _get_epochs()
@pytest.fixture()
def epochs_full():
"""Get full, preloaded epochs from mne.io.tests.data."""
return _get_epochs(None).load_data()
@pytest.fixture(scope='session', params=[testing._pytest_param()])
def _evoked():
# This one is session scoped, so be sure not to modify it (use evoked
# instead)
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
evoked.crop(0, 0.2)
return evoked
@pytest.fixture()
def evoked(_evoked):
"""Get evoked data."""
return _evoked.copy()
@pytest.fixture(scope='function', params=[testing._pytest_param()])
def noise_cov():
"""Get a noise cov from the testing dataset."""
return mne.read_cov(fname_cov)
@pytest.fixture
def noise_cov_io():
"""Get noise-covariance (from mne.io.tests.data)."""
return mne.read_cov(fname_cov_io)
@pytest.fixture(scope='function')
def bias_params_free(evoked, noise_cov):
"""Provide inputs for free bias functions."""
fwd = mne.read_forward_solution(fname_fwd)
return _bias_params(evoked, noise_cov, fwd)
@pytest.fixture(scope='function')
def bias_params_fixed(evoked, noise_cov):
"""Provide inputs for fixed bias functions."""
fwd = mne.read_forward_solution(fname_fwd)
mne.convert_forward_solution(
fwd, force_fixed=True, surf_ori=True, copy=False)
return _bias_params(evoked, noise_cov, fwd)
def _bias_params(evoked, noise_cov, fwd):
evoked.pick_types(meg=True, eeg=True, exclude=())
# restrict to limited set of verts (small src here) and one hemi for speed
vertices = [fwd['src'][0]['vertno'].copy(), []]
stc = mne.SourceEstimate(
np.zeros((sum(len(v) for v in vertices), 1)), vertices, 0, 1)
fwd = mne.forward.restrict_forward_to_stc(fwd, stc)
assert fwd['sol']['row_names'] == noise_cov['names']
assert noise_cov['names'] == evoked.ch_names
evoked = mne.EvokedArray(fwd['sol']['data'].copy(), evoked.info)
data_cov = noise_cov.copy()
data = fwd['sol']['data'] @ fwd['sol']['data'].T
data *= 1e-14 # 100 nAm at each source, effectively (1e-18 would be 1 nAm)
# This is rank-deficient, so let's make it actually positive semidefinite
# by regularizing a tiny bit
data.flat[::data.shape[0] + 1] += mne.make_ad_hoc_cov(evoked.info)['data']
# Do our projection
proj, _, _ = mne.io.proj.make_projector(
data_cov['projs'], data_cov['names'])
data = proj @ data @ proj.T
data_cov['data'][:] = data
assert data_cov['data'].shape[0] == len(noise_cov['names'])
want = np.arange(fwd['sol']['data'].shape[1])
if not mne.forward.is_fixed_orient(fwd):
want //= 3
return evoked, fwd, noise_cov, data_cov, want
@pytest.fixture
def garbage_collect():
"""Garbage collect on exit."""
yield
gc.collect()
@pytest.fixture
def mpl_backend(garbage_collect):
"""Use for epochs/ica when not implemented with pyqtgraph yet."""
with use_browser_backend('matplotlib') as backend:
yield backend
backend._close_all()
# Skip functions or modules for mne-qt-browser < 0.2.0
pre_2_0_skip_modules = ['mne.viz.tests.test_epochs',
'mne.viz.tests.test_ica']
pre_2_0_skip_funcs = ['test_plot_raw_white',
'test_plot_raw_selection']
def _check_pyqtgraph(request):
# Check Qt
qt_version, api = _check_qt_version(return_api=True)
if (not qt_version) or _compare_version(qt_version, '<', '5.12'):
pytest.skip(f'Qt API {api} has version {qt_version} '
f'but pyqtgraph needs >= 5.12!')
try:
import mne_qt_browser # noqa: F401
# Check mne-qt-browser version
lower_2_0 = _compare_version(mne_qt_browser.__version__, '<', '0.2.0')
m_name = request.function.__module__
f_name = request.function.__name__
if lower_2_0 and m_name in pre_2_0_skip_modules:
pytest.skip(f'Test-Module "{m_name}" was skipped for'
f' mne-qt-browser < 0.2.0')
elif lower_2_0 and f_name in pre_2_0_skip_funcs:
pytest.skip(f'Test "{f_name}" was skipped for '
f'mne-qt-browser < 0.2.0')
except Exception:
pytest.skip('Requires mne_qt_browser')
else:
ver = mne_qt_browser.__version__
if api != 'PyQt5' and _compare_version(ver, '<=', '0.2.6'):
pytest.skip(f'mne_qt_browser {ver} requires PyQt5, API is {api}')
@pytest.mark.pgtest
@pytest.fixture
def pg_backend(request, garbage_collect):
"""Use for pyqtgraph-specific test-functions."""
_check_pyqtgraph(request)
with use_browser_backend('qt') as backend:
backend._close_all()
yield backend
backend._close_all()
# This shouldn't be necessary, but let's make sure nothing is stale
import mne_qt_browser
mne_qt_browser._browser_instances.clear()
@pytest.fixture(params=[
'matplotlib',
pytest.param('qt', marks=pytest.mark.pgtest),
])
def browser_backend(request, garbage_collect, monkeypatch):
"""Parametrizes the name of the browser backend."""
backend_name = request.param
if backend_name == 'qt':
_check_pyqtgraph(request)
with use_browser_backend(backend_name) as backend:
backend._close_all()
monkeypatch.setenv('MNE_BROWSE_RAW_SIZE', '10,10')
yield backend
backend._close_all()
if backend_name == 'qt':
# This shouldn't be necessary, but let's make sure nothing is stale
import mne_qt_browser
mne_qt_browser._browser_instances.clear()
@pytest.fixture(params=["pyvistaqt"])
def renderer(request, options_3d, garbage_collect):
"""Yield the 3D backends."""
with _use_backend(request.param, interactive=False) as renderer:
yield renderer
@pytest.fixture(params=["pyvistaqt"])
def renderer_pyvistaqt(request, options_3d, garbage_collect):
"""Yield the PyVista backend."""
with _use_backend(request.param, interactive=False) as renderer:
yield renderer
@pytest.fixture(params=["notebook"])
def renderer_notebook(request, options_3d):
"""Yield the 3D notebook renderer."""
with _use_backend(request.param, interactive=False) as renderer:
yield renderer
@pytest.fixture(scope="module", params=["pyvistaqt"])
def renderer_interactive_pyvistaqt(request, options_3d):
"""Yield the interactive PyVista backend."""
with _use_backend(request.param, interactive=True) as renderer:
yield renderer
@pytest.fixture(scope="module", params=["pyvistaqt"])
def renderer_interactive(request, options_3d):
"""Yield the interactive 3D backends."""
with _use_backend(request.param, interactive=True) as renderer:
yield renderer
@contextmanager
def _use_backend(backend_name, interactive):
from mne.viz.backends.renderer import _use_test_3d_backend
_check_skip_backend(backend_name)
with _use_test_3d_backend(backend_name, interactive=interactive):
from mne.viz.backends import renderer
try:
yield renderer
finally:
renderer.backend._close_all()
def _check_skip_backend(name):
from mne.viz.backends.tests._utils import (has_pyvista,
has_imageio_ffmpeg,
has_pyvistaqt)
if name in ('pyvistaqt', 'notebook'):
if not has_pyvista():
pytest.skip("Test skipped, requires pyvista.")
if not has_imageio_ffmpeg():
pytest.skip("Test skipped, requires imageio-ffmpeg")
if name == 'pyvistaqt' and not _check_qt_version():
pytest.skip("Test skipped, requires Qt.")
if name == 'pyvistaqt' and not has_pyvistaqt():
pytest.skip("Test skipped, requires pyvistaqt")
@pytest.fixture(scope='session')
def pixel_ratio():
"""Get the pixel ratio."""
from mne.viz.backends.tests._utils import has_pyvista
if not has_pyvista() or not _check_qt_version():
return 1.
from qtpy.QtWidgets import QApplication, QMainWindow
_ = QApplication.instance() or QApplication([])
window = QMainWindow()
ratio = float(window.devicePixelRatio())
window.close()
return ratio
@pytest.fixture(scope='function', params=[testing._pytest_param()])
def subjects_dir_tmp(tmp_path):
"""Copy MNE-testing-data subjects_dir to a temp dir for manipulation."""
for key in ('sample', 'fsaverage'):
shutil.copytree(op.join(subjects_dir, key), str(tmp_path / key))
return str(tmp_path)
@pytest.fixture(params=[testing._pytest_param()])
def subjects_dir_tmp_few(tmp_path):
"""Copy fewer files to a tmp_path."""
subjects_path = tmp_path / 'subjects'
os.mkdir(subjects_path)
# add fsaverage
create_default_subject(subjects_dir=subjects_path, fs_home=test_path,
verbose=True)
# add sample (with few files)
sample_path = subjects_path / 'sample'
os.makedirs(sample_path / 'bem')
for dirname in ('mri', 'surf'):
shutil.copytree(
test_path / 'subjects' / 'sample' / dirname, sample_path / dirname)
return subjects_path
# Scoping these as session will make things faster, but need to make sure
# not to modify them in-place in the tests, so keep them private
@pytest.fixture(scope='session', params=[testing._pytest_param()])
def _evoked_cov_sphere(_evoked):
"""Compute a small evoked/cov/sphere combo for use with forwards."""
evoked = _evoked.copy().pick_types(meg=True)
evoked.pick_channels(evoked.ch_names[::4])
assert len(evoked.ch_names) == 77
cov = mne.read_cov(fname_cov)
sphere = mne.make_sphere_model('auto', 'auto', evoked.info)
return evoked, cov, sphere
@pytest.fixture(scope='session')
def _fwd_surf(_evoked_cov_sphere):
"""Compute a forward for a surface source space."""
evoked, cov, sphere = _evoked_cov_sphere
src_surf = mne.read_source_spaces(fname_src)
return mne.make_forward_solution(
evoked.info, fname_trans, src_surf, sphere, mindist=5.0)
@pytest.fixture(scope='session')
def _fwd_subvolume(_evoked_cov_sphere):
"""Compute a forward for a surface source space."""
pytest.importorskip('nibabel')
evoked, cov, sphere = _evoked_cov_sphere
volume_labels = ['Left-Cerebellum-Cortex', 'right-Cerebellum-Cortex']
with pytest.raises(ValueError,
match=r"Did you mean one of \['Right-Cere"):
mne.setup_volume_source_space(
'sample', pos=20., volume_label=volume_labels,
subjects_dir=subjects_dir)
volume_labels[1] = 'R' + volume_labels[1][1:]
src_vol = mne.setup_volume_source_space(
'sample', pos=20., volume_label=volume_labels,
subjects_dir=subjects_dir, add_interpolator=False)
return mne.make_forward_solution(
evoked.info, fname_trans, src_vol, sphere, mindist=5.0)
@pytest.fixture(scope='session')
def _all_src_types_fwd(_fwd_surf, _fwd_subvolume):
"""Create all three forward types (surf, vol, mixed)."""
fwds = dict(surface=_fwd_surf, volume=_fwd_subvolume)
with pytest.raises(RuntimeError,
match='Invalid source space with kinds'):
fwds['volume']['src'] + fwds['surface']['src']
# mixed (4)
fwd = fwds['surface'].copy()
f2 = fwds['volume']
for keys, axis in [(('source_rr',), 0),
(('source_nn',), 0),
(('sol', 'data'), 1),
(('_orig_sol',), 1)]:
a, b = fwd, f2
key = keys[0]
if len(keys) > 1:
a, b = a[key], b[key]
key = keys[1]
a[key] = np.concatenate([a[key], b[key]], axis=axis)
fwd['sol']['ncol'] = fwd['sol']['data'].shape[1]
fwd['nsource'] = fwd['sol']['ncol'] // 3
fwd['src'] = fwd['src'] + f2['src']
fwds['mixed'] = fwd
return fwds
@pytest.fixture(scope='session')
def _all_src_types_inv_evoked(_evoked_cov_sphere, _all_src_types_fwd):
"""Compute inverses for all source types."""
evoked, cov, _ = _evoked_cov_sphere
invs = dict()
for kind, fwd in _all_src_types_fwd.items():
assert fwd['src'].kind == kind
with pytest.warns(RuntimeWarning, match='has been reduced'):
invs[kind] = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov)
return invs, evoked
@pytest.fixture(scope='function')
def all_src_types_inv_evoked(_all_src_types_inv_evoked):
"""All source types of inverses, allowing for possible modification."""
invs, evoked = _all_src_types_inv_evoked
invs = {key: val.copy() for key, val in invs.items()}
evoked = evoked.copy()
return invs, evoked
@pytest.fixture(scope='function')
def mixed_fwd_cov_evoked(_evoked_cov_sphere, _all_src_types_fwd):
"""Compute inverses for all source types."""
evoked, cov, _ = _evoked_cov_sphere
return _all_src_types_fwd['mixed'].copy(), cov.copy(), evoked.copy()
@pytest.fixture(scope='session')
@pytest.mark.slowtest
@pytest.mark.parametrize(params=[testing._pytest_param()])
def src_volume_labels():
"""Create a 7mm source space with labels."""
pytest.importorskip('nibabel')
volume_labels = mne.get_volume_labels_from_aseg(fname_aseg)
with pytest.warns(RuntimeWarning, match='Found no usable.*Left-vessel.*'):
src = mne.setup_volume_source_space(
'sample', 7., mri='aseg.mgz', volume_label=volume_labels,
add_interpolator=False, bem=fname_bem,
subjects_dir=subjects_dir)
lut, _ = mne.read_freesurfer_lut()
assert len(volume_labels) == 46
assert volume_labels[0] == 'Unknown'
assert lut['Unknown'] == 0 # it will be excluded during label gen
return src, tuple(volume_labels), lut
def _fail(*args, **kwargs):
__tracebackhide__ = True
raise AssertionError('Test should not download')
@pytest.fixture(scope='function')
def download_is_error(monkeypatch):
"""Prevent downloading by raising an error when it's attempted."""
import pooch
monkeypatch.setattr(pooch, 'retrieve', _fail)
# We can't use monkeypatch because its scope (function-level) conflicts with
# the requests fixture (module-level), so we live with a module-scoped version
# that uses mock
@pytest.fixture(scope='module')
def options_3d():
"""Disable advanced 3d rendering."""
with mock.patch.dict(
os.environ, {
"MNE_3D_OPTION_ANTIALIAS": "false",
"MNE_3D_OPTION_DEPTH_PEELING": "false",
"MNE_3D_OPTION_SMOOTH_SHADING": "false",
}
):
yield
@pytest.fixture(scope='session')
def protect_config():
"""Protect ~/.mne."""
temp = _TempDir()
with mock.patch.dict(os.environ, {"_MNE_FAKE_HOME_DIR": temp}):
yield
@pytest.fixture()
def brain_gc(request):
"""Ensure that brain can be properly garbage collected."""
keys = (
'renderer_interactive',
'renderer_interactive_pyvistaqt',
'renderer',
'renderer_pyvistaqt',
'renderer_notebook',
)
assert set(request.fixturenames) & set(keys) != set()
for key in keys:
if key in request.fixturenames:
is_pv = \
request.getfixturevalue(key)._get_3d_backend() == 'pyvistaqt'
close_func = request.getfixturevalue(key).backend._close_all
break
if not is_pv:
yield
return
from mne.viz import Brain
ignore = set(id(o) for o in gc.get_objects())
yield
close_func()
# no need to warn if the test itself failed, pytest-harvest helps us here
try:
outcome = request.node.harvest_rep_call
except Exception:
outcome = 'failed'
if outcome != 'passed':
return
_assert_no_instances(Brain, 'after')
# Check VTK
objs = gc.get_objects()
bad = list()
for o in objs:
try:
name = o.__class__.__name__
except Exception: # old Python, probably
pass
else:
if name.startswith('vtk') and id(o) not in ignore:
bad.append(name)
del o
del objs, ignore, Brain
assert len(bad) == 0, 'VTK objects linger:\n' + '\n'.join(bad)
def pytest_sessionfinish(session, exitstatus):
"""Handle the end of the session."""
n = session.config.option.durations
if n is None:
return
print('\n')
try:
import pytest_harvest
except ImportError:
print('Module-level timings require pytest-harvest')
return
from py.io import TerminalWriter
# get the number to print
res = pytest_harvest.get_session_synthesis_dct(session)
files = dict()
for key, val in res.items():
parts = Path(key.split(':')[0]).parts
# split mne/tests/test_whatever.py into separate categories since these
# are essentially submodule-level tests. Keeping just [:3] works,
# except for mne/viz where we want level-4 granularity
split_submodules = (('mne', 'viz'), ('mne', 'preprocessing'))
parts = parts[:4 if parts[:2] in split_submodules else 3]
if not parts[-1].endswith('.py'):
parts = parts + ('',)
file_key = '/'.join(parts)
files[file_key] = files.get(file_key, 0) + val['pytest_duration_s']
files = sorted(list(files.items()), key=lambda x: x[1])[::-1]
# print
files = files[:n]
if len(files):
writer = TerminalWriter()
writer.line() # newline
writer.sep('=', f'slowest {n} test module{_pl(n)}')
names, timings = zip(*files)
timings = [f'{timing:0.2f}s total' for timing in timings]
rjust = max(len(timing) for timing in timings)
timings = [timing.rjust(rjust) for timing in timings]
for name, timing in zip(names, timings):
writer.line(f'{timing.ljust(15)}{name}')
@pytest.fixture(scope="function", params=('Numba', 'NumPy'))
def numba_conditional(monkeypatch, request):
"""Test both code paths on machines that have Numba."""
assert request.param in ('Numba', 'NumPy')
if request.param == 'NumPy' and has_numba:
monkeypatch.setattr(
cluster_level, '_get_buddies', cluster_level._get_buddies_fallback)
monkeypatch.setattr(
cluster_level, '_get_selves', cluster_level._get_selves_fallback)
monkeypatch.setattr(
cluster_level, '_where_first', cluster_level._where_first_fallback)
monkeypatch.setattr(
numerics, '_arange_div', numerics._arange_div_fallback)
if request.param == 'Numba' and not has_numba:
pytest.skip('Numba not installed')
yield request.param
# Create one nbclient and reuse it
@pytest.fixture(scope='session')
def _nbclient():
try:
import nbformat
from jupyter_client import AsyncKernelManager
from nbclient import NotebookClient
from ipywidgets import Button # noqa
import ipyvtklink # noqa
except Exception as exc:
return pytest.skip(f'Skipping Notebook test: {exc}')
km = AsyncKernelManager(config=None)
nb = nbformat.reads("""
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata":{},
"outputs": [],
"source":[]
}
],
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version":3},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}""", as_version=4)
client = NotebookClient(nb, km=km)
yield client
client._cleanup_kernel()
@pytest.fixture(scope='function')
def nbexec(_nbclient):
"""Execute Python code in a notebook."""
# Adapted/simplified from nbclient/client.py (BSD-3-Clause)
_nbclient._cleanup_kernel()
def execute(code, reset=False):
_nbclient.reset_execution_trackers()
with _nbclient.setup_kernel():
assert _nbclient.kc is not None
cell = Bunch(cell_type='code', metadata={}, source=dedent(code))
_nbclient.execute_cell(cell, 0, execution_count=0)
_nbclient.set_widgets_metadata()
yield execute
def pytest_runtest_call(item):
"""Run notebook code written in Python."""
if 'nbexec' in getattr(item, 'fixturenames', ()):
nbexec = item.funcargs['nbexec']
code = inspect.getsource(getattr(item.module, item.name.split('[')[0]))
code = code.splitlines()
ci = 0
for ci, c in enumerate(code):
if c.startswith(' '): # actual content
break
code = '\n'.join(code[ci:])
def run(nbexec=nbexec, code=code):
nbexec(code)
item.runtest = run
return
@pytest.mark.filterwarnings('ignore:.*Extraction of measurement.*:')
@pytest.fixture(params=(
[nirsport2, nirsport2_snirf, testing._pytest_param()],
[nirsport2_2021_9, nirsport2_20219_snirf, testing._pytest_param()],
))
def nirx_snirf(request):
"""Return a (raw_nirx, raw_snirf) matched pair."""
pytest.importorskip('h5py')
return (read_raw_nirx(request.param[0], preload=True),
read_raw_snirf(request.param[1], preload=True))
| bsd-3-clause |
pravsripad/mne-python | mne/io/eeglab/tests/test_eeglab.py | 2 | 20248 | # Author: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Mikolaj Magnuski <mmagnuski@swps.edu.pl>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD-3-Clause
from copy import deepcopy
import os.path as op
import shutil
import numpy as np
from numpy.testing import (assert_array_equal, assert_array_almost_equal,
                           assert_equal, assert_allclose)
import pytest
from scipy import io
from mne import write_events, read_epochs_eeglab
from mne.channels import read_custom_montage
from mne.io import read_raw_eeglab
from mne.io.eeglab.eeglab import _get_montage_information, _dol_to_lod
from mne.io.tests.test_raw import _test_raw_reader
from mne.datasets import testing
from mne.utils import Bunch
from mne.annotations import events_from_annotations, read_annotations
base_dir = op.join(testing.data_path(download=False), 'EEGLAB')
raw_fname_mat = op.join(base_dir, 'test_raw.set')
raw_fname_onefile_mat = op.join(base_dir, 'test_raw_onefile.set')
raw_fname_event_duration = op.join(base_dir, 'test_raw_event_duration.set')
epochs_fname_mat = op.join(base_dir, 'test_epochs.set')
epochs_fname_onefile_mat = op.join(base_dir, 'test_epochs_onefile.set')
raw_mat_fnames = [raw_fname_mat, raw_fname_onefile_mat]
epochs_mat_fnames = [epochs_fname_mat, epochs_fname_onefile_mat]
raw_fname_chanloc = op.join(base_dir, 'test_raw_chanloc.set')
raw_fname_chanloc_fids = op.join(base_dir, 'test_raw_chanloc_fids.set')
raw_fname_2021 = op.join(base_dir, 'test_raw_2021.set')
raw_fname_h5 = op.join(base_dir, 'test_raw_h5.set')
raw_fname_onefile_h5 = op.join(base_dir, 'test_raw_onefile_h5.set')
epochs_fname_h5 = op.join(base_dir, 'test_epochs_h5.set')
epochs_fname_onefile_h5 = op.join(base_dir, 'test_epochs_onefile_h5.set')
raw_h5_fnames = [raw_fname_h5, raw_fname_onefile_h5]
epochs_h5_fnames = [epochs_fname_h5, epochs_fname_onefile_h5]
montage_path = op.join(base_dir, 'test_chans.locs')
pymatreader = pytest.importorskip('pymatreader') # module-level
@testing.requires_testing_data
@pytest.mark.parametrize('fname', [
raw_fname_mat,
raw_fname_h5,
raw_fname_chanloc,
], ids=op.basename)
def test_io_set_raw(fname):
"""Test importing EEGLAB .set files."""
montage = read_custom_montage(montage_path)
montage.ch_names = [
'EEG {0:03d}'.format(ii) for ii in range(len(montage.ch_names))
]
kws = dict(reader=read_raw_eeglab, input_fname=fname)
if fname.endswith('test_raw_chanloc.set'):
with pytest.warns(RuntimeWarning,
match="The data contains 'boundary' events"):
raw0 = _test_raw_reader(**kws)
elif '_h5' in fname: # should be safe enough, and much faster
raw0 = read_raw_eeglab(fname, preload=True)
else:
raw0 = _test_raw_reader(**kws)
# test that preloading works
if fname.endswith('test_raw_chanloc.set'):
raw0.set_montage(montage, on_missing='ignore')
# crop to check if the data has been properly preloaded; we cannot
# filter as the snippet of raw data is very short
raw0.crop(0, 1)
else:
raw0.set_montage(montage)
raw0.filter(1, None, l_trans_bandwidth='auto', filter_length='auto',
phase='zero')
# test that using uint16_codec does not break stuff
read_raw_kws = dict(input_fname=fname, preload=False, uint16_codec='ascii')
if fname.endswith('test_raw_chanloc.set'):
with pytest.warns(RuntimeWarning,
match="The data contains 'boundary' events"):
raw0 = read_raw_eeglab(**read_raw_kws)
raw0.set_montage(montage, on_missing='ignore')
else:
raw0 = read_raw_eeglab(**read_raw_kws)
raw0.set_montage(montage)
# Annotations
if fname != raw_fname_chanloc:
assert len(raw0.annotations) == 154
assert set(raw0.annotations.description) == {'rt', 'square'}
assert_array_equal(raw0.annotations.duration, 0.)
@testing.requires_testing_data
def test_io_set_raw_more(tmp_path):
"""Test importing EEGLAB .set files."""
tmp_path = str(tmp_path)
eeg = io.loadmat(raw_fname_mat, struct_as_record=False,
squeeze_me=True)['EEG']
# test reading file with one event (read old version)
negative_latency_fname = op.join(tmp_path, 'test_negative_latency.set')
evnts = deepcopy(eeg.event[0])
evnts.latency = 0
io.savemat(negative_latency_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan,
'data': 'test_negative_latency.fdt',
'epoch': eeg.epoch, 'event': evnts,
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
shutil.copyfile(op.join(base_dir, 'test_raw.fdt'),
negative_latency_fname.replace('.set', '.fdt'))
with pytest.warns(RuntimeWarning, match="has a sample index of -1."):
read_raw_eeglab(input_fname=negative_latency_fname, preload=True)
# test negative event latencies
evnts.latency = -1
io.savemat(negative_latency_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan,
'data': 'test_negative_latency.fdt',
'epoch': eeg.epoch, 'event': evnts,
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
with pytest.raises(ValueError, match='event sample index is negative'):
with pytest.warns(RuntimeWarning, match="has a sample index of -1."):
read_raw_eeglab(input_fname=negative_latency_fname, preload=True)
# test overlapping events
overlap_fname = op.join(tmp_path, 'test_overlap_event.set')
io.savemat(overlap_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan, 'data': 'test_overlap_event.fdt',
'epoch': eeg.epoch,
'event': [eeg.event[0], eeg.event[0]],
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
shutil.copyfile(op.join(base_dir, 'test_raw.fdt'),
overlap_fname.replace('.set', '.fdt'))
read_raw_eeglab(input_fname=overlap_fname, preload=True)
# test reading file with empty event durations
empty_dur_fname = op.join(tmp_path, 'test_empty_durations.set')
evnts = deepcopy(eeg.event)
for ev in evnts:
ev.duration = np.array([], dtype='float')
io.savemat(empty_dur_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan,
'data': 'test_negative_latency.fdt',
'epoch': eeg.epoch, 'event': evnts,
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
shutil.copyfile(op.join(base_dir, 'test_raw.fdt'),
empty_dur_fname.replace('.set', '.fdt'))
raw = read_raw_eeglab(input_fname=empty_dur_fname, preload=True)
assert (raw.annotations.duration == 0).all()
# test reading file when the EEG.data name is wrong
io.savemat(overlap_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan, 'data': 'test_overla_event.fdt',
'epoch': eeg.epoch,
'event': [eeg.event[0], eeg.event[0]],
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
with pytest.warns(RuntimeWarning, match="must have changed on disk"):
read_raw_eeglab(input_fname=overlap_fname, preload=True)
# raise error when both EEG.data and fdt name from set are wrong
overlap_fname = op.join(tmp_path, 'test_ovrlap_event.set')
io.savemat(overlap_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan, 'data': 'test_overla_event.fdt',
'epoch': eeg.epoch,
'event': [eeg.event[0], eeg.event[0]],
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
with pytest.raises(FileNotFoundError, match="not find the .fdt data file"):
read_raw_eeglab(input_fname=overlap_fname, preload=True)
# test reading file with one channel
one_chan_fname = op.join(tmp_path, 'test_one_channel.set')
io.savemat(one_chan_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': 1, 'data': np.random.random((1, 3)),
'epoch': eeg.epoch, 'event': eeg.epoch,
'chanlocs': {'labels': 'E1', 'Y': -6.6069,
'X': 6.3023, 'Z': -2.9423},
'times': eeg.times[:3], 'pnts': 3}},
appendmat=False, oned_as='row')
read_raw_eeglab(input_fname=one_chan_fname, preload=True)
# test reading file with 3 channels - one without position information
# first, create chanlocs structured array
ch_names = ['F3', 'unknown', 'FPz']
x, y, z = [1., 2., np.nan], [4., 5., np.nan], [7., 8., np.nan]
dt = [('labels', 'S10'), ('X', 'f8'), ('Y', 'f8'), ('Z', 'f8')]
nopos_dt = [('labels', 'S10'), ('Z', 'f8')]
chanlocs = np.zeros((3,), dtype=dt)
nopos_chanlocs = np.zeros((3,), dtype=nopos_dt)
for ind, vals in enumerate(zip(ch_names, x, y, z)):
for fld in range(4):
chanlocs[ind][dt[fld][0]] = vals[fld]
if fld in (0, 3):
nopos_chanlocs[ind][dt[fld][0]] = vals[fld]
# In theory this should work and be simpler, but there is an obscure
# SciPy writing bug that pops up sometimes:
# nopos_chanlocs = np.array(chanlocs[['labels', 'Z']])
# test reading channel names but not positions when there is no X (only Z)
# field in the EEG.chanlocs structure
nopos_fname = op.join(tmp_path, 'test_no_chanpos.set')
io.savemat(nopos_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate, 'nbchan': 3,
'data': np.random.random((3, 2)), 'epoch': eeg.epoch,
'event': eeg.epoch, 'chanlocs': nopos_chanlocs,
'times': eeg.times[:2], 'pnts': 2}},
appendmat=False, oned_as='row')
# load the file
raw = read_raw_eeglab(input_fname=nopos_fname, preload=True)
# test that channel names have been loaded but not channel positions
for i in range(3):
assert_equal(raw.info['chs'][i]['ch_name'], ch_names[i])
assert_array_equal(raw.info['chs'][i]['loc'][:3],
np.array([np.nan, np.nan, np.nan]))
@pytest.mark.timeout(60) # ~60 sec on Travis OSX
@testing.requires_testing_data
@pytest.mark.parametrize('fnames', [
epochs_mat_fnames,
pytest.param(epochs_h5_fnames, marks=[pytest.mark.slowtest]),
])
def test_io_set_epochs(fnames):
"""Test importing EEGLAB .set epochs files."""
epochs_fname, epochs_fname_onefile = fnames
with pytest.warns(RuntimeWarning, match='multiple events'):
epochs = read_epochs_eeglab(epochs_fname)
with pytest.warns(RuntimeWarning, match='multiple events'):
epochs2 = read_epochs_eeglab(epochs_fname_onefile)
# one warning for each read_epochs_eeglab because both files have epochs
# associated with multiple events
assert_array_equal(epochs.get_data(), epochs2.get_data())
@testing.requires_testing_data
def test_io_set_epochs_events(tmp_path):
"""Test different combinations of events and event_ids."""
tmp_path = str(tmp_path)
out_fname = op.join(tmp_path, 'test-eve.fif')
events = np.array([[4, 0, 1], [12, 0, 2], [20, 0, 3], [26, 0, 3]])
write_events(out_fname, events)
event_id = {'S255/S8': 1, 'S8': 2, 'S255/S9': 3}
out_fname = op.join(tmp_path, 'test-eve.fif')
epochs = read_epochs_eeglab(epochs_fname_mat, events, event_id)
assert_equal(len(epochs.events), 4)
assert epochs.preload
assert epochs._bad_dropped
epochs = read_epochs_eeglab(epochs_fname_mat, out_fname, event_id)
pytest.raises(ValueError, read_epochs_eeglab, epochs_fname_mat,
None, event_id)
pytest.raises(ValueError, read_epochs_eeglab, epochs_fname_mat,
epochs.events, None)
@testing.requires_testing_data
def test_degenerate(tmp_path):
"""Test some degenerate conditions."""
# test if .dat file raises an error
tmp_path = str(tmp_path)
eeg = io.loadmat(epochs_fname_mat, struct_as_record=False,
squeeze_me=True)['EEG']
eeg.data = 'epochs_fname.dat'
bad_epochs_fname = op.join(tmp_path, 'test_epochs.set')
io.savemat(bad_epochs_fname,
{'EEG': {'trials': eeg.trials, 'srate': eeg.srate,
'nbchan': eeg.nbchan, 'data': eeg.data,
'epoch': eeg.epoch, 'event': eeg.event,
'chanlocs': eeg.chanlocs, 'pnts': eeg.pnts}},
appendmat=False, oned_as='row')
shutil.copyfile(op.join(base_dir, 'test_epochs.fdt'),
op.join(tmp_path, 'test_epochs.dat'))
with pytest.warns(RuntimeWarning, match='multiple events'):
pytest.raises(NotImplementedError, read_epochs_eeglab,
bad_epochs_fname)
@pytest.mark.parametrize("fname", [
raw_fname_mat,
raw_fname_onefile_mat,
    # We don't test the h5 variants here because they are implicitly tested
# in test_io_set_raw
])
@pytest.mark.filterwarnings('ignore: Complex objects')
@testing.requires_testing_data
def test_eeglab_annotations(fname):
"""Test reading annotations in EEGLAB files."""
annotations = read_annotations(fname)
assert len(annotations) == 154
assert set(annotations.description) == {'rt', 'square'}
assert np.all(annotations.duration == 0.)
@testing.requires_testing_data
def test_eeglab_read_annotations():
"""Test annotations onsets are timestamps (+ validate some)."""
annotations = read_annotations(raw_fname_mat)
validation_samples = [0, 1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
expected_onset = np.array([1.00, 1.69, 2.08, 4.70, 7.71, 11.30, 17.18,
20.20, 26.12, 29.14, 35.25, 44.30, 47.15])
assert annotations.orig_time is None
assert_array_almost_equal(annotations.onset[validation_samples],
expected_onset, decimal=2)
# test if event durations are imported correctly
raw = read_raw_eeglab(raw_fname_event_duration, preload=True)
# file contains 3 annotations with 0.5 s (64 samples) duration each
assert_allclose(raw.annotations.duration, np.ones(3) * 0.5)
@testing.requires_testing_data
def test_eeglab_event_from_annot():
"""Test all forms of obtaining annotations."""
raw_fname_mat = op.join(base_dir, 'test_raw.set')
raw_fname = raw_fname_mat
event_id = {'rt': 1, 'square': 2}
raw1 = read_raw_eeglab(input_fname=raw_fname, preload=False)
annotations = read_annotations(raw_fname)
assert len(raw1.annotations) == 154
raw1.set_annotations(annotations)
events_b, _ = events_from_annotations(raw1, event_id=event_id)
assert len(events_b) == 154
def _assert_array_allclose_nan(left, right):
assert_array_equal(np.isnan(left), np.isnan(right))
assert_allclose(left[~np.isnan(left)], right[~np.isnan(left)], atol=1e-8)
@pytest.fixture(scope='session')
def one_chanpos_fname(tmp_path_factory):
"""Test file with 3 channels to exercise EEGLAB reader.
File characteristics
- ch_names: 'F3', 'unknown', 'FPz'
- 'FPz' has no position information.
- the rest is aleatory
Notes from when this code was factorized:
# test reading file with one event (read old version)
"""
fname = str(tmp_path_factory.mktemp('data') / 'test_chanpos.set')
    file_content = dict(EEG={
'trials': 1, 'nbchan': 3, 'pnts': 3, 'epoch': [], 'event': [],
'srate': 128, 'times': np.array([0., 0.1, 0.2]),
'data': np.empty([3, 3]),
'chanlocs': np.array(
[(b'F3', 1., 4., 7.),
(b'unknown', np.nan, np.nan, np.nan),
(b'FPz', 2., 5., 8.)],
dtype=[('labels', 'S10'), ('X', 'f8'), ('Y', 'f8'), ('Z', 'f8')]
)
})
    io.savemat(file_name=fname, mdict=file_content, appendmat=False,
oned_as='row')
return fname
@testing.requires_testing_data
def test_position_information(one_chanpos_fname):
"""Test reading file with 3 channels - one without position information."""
nan = np.nan
EXPECTED_LOCATIONS_FROM_FILE = np.array([
[-4., 1., 7., 0., 0., 0., nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[-5., 2., 8., 0., 0., 0., nan, nan, nan, nan, nan, nan],
])
EXPECTED_LOCATIONS_FROM_MONTAGE = np.array([
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
])
raw = read_raw_eeglab(input_fname=one_chanpos_fname, preload=True)
assert_array_equal(np.array([ch['loc'] for ch in raw.info['chs']]),
EXPECTED_LOCATIONS_FROM_FILE)
# To accommodate the new behavior so that:
# read_raw_eeglab(.. montage=montage) and raw.set_montage(montage)
    # behave the same, we need to flush the montage. Otherwise we get
    # a mix of what is in the montage and what is in the file
raw = read_raw_eeglab(
input_fname=one_chanpos_fname,
preload=True,
).set_montage(None) # Flush the montage builtin within input_fname
_assert_array_allclose_nan(np.array([ch['loc'] for ch in raw.info['chs']]),
EXPECTED_LOCATIONS_FROM_MONTAGE)
@testing.requires_testing_data
def test_io_set_raw_2021():
"""Test reading new default file format (no EEG struct)."""
assert "EEG" not in io.loadmat(raw_fname_2021)
_test_raw_reader(reader=read_raw_eeglab, input_fname=raw_fname_2021,
test_preloading=False, preload=True)
@testing.requires_testing_data
def test_read_single_epoch():
"""Test reading raw set file as an Epochs instance."""
with pytest.raises(ValueError, match='trials less than 2'):
read_epochs_eeglab(raw_fname_mat)
@testing.requires_testing_data
def test_get_montage_info_with_ch_type():
"""Test that the channel types are properly returned."""
mat = pymatreader.read_mat(raw_fname_onefile_mat, uint16_codec=None)
n = len(mat['EEG']['chanlocs']['labels'])
mat['EEG']['chanlocs']['type'] = ['eeg'] * (n - 2) + ['eog'] + ['stim']
mat['EEG']['chanlocs'] = _dol_to_lod(mat['EEG']['chanlocs'])
mat['EEG'] = Bunch(**mat['EEG'])
ch_names, ch_types, montage = _get_montage_information(mat['EEG'], False)
assert len(ch_names) == len(ch_types) == n
assert ch_types == ['eeg'] * (n - 2) + ['eog'] + ['stim']
assert montage is None
# test unknown type warning
mat = pymatreader.read_mat(raw_fname_onefile_mat, uint16_codec=None)
n = len(mat['EEG']['chanlocs']['labels'])
mat['EEG']['chanlocs']['type'] = ['eeg'] * (n - 2) + ['eog'] + ['unknown']
mat['EEG']['chanlocs'] = _dol_to_lod(mat['EEG']['chanlocs'])
mat['EEG'] = Bunch(**mat['EEG'])
with pytest.warns(RuntimeWarning, match='Unknown types found'):
ch_names, ch_types, montage = \
_get_montage_information(mat['EEG'], False)
@testing.requires_testing_data
def test_fidsposition_information():
"""Test reading file with 3 fiducial locations."""
raw = read_raw_eeglab(raw_fname_chanloc_fids)
montage = raw.get_montage()
pos = montage.get_positions()
assert pos['nasion'] is not None
assert pos['lpa'] is not None
assert pos['rpa'] is not None
assert len(pos['nasion']) == 3
assert len(pos['lpa']) == 3
assert len(pos['rpa']) == 3
| bsd-3-clause |
mikec964/chelmbigstock | chelmbigstock/chelmbigstock.py | 1 | 15947 | #!/usr/bin/env python3
"""
Code to use stock history to see if future fluctuations can be predicted
@Author: Andy Webber
Created: March 1, 2014
"""
# A python script to learn about stock picking
import sys
from operator import itemgetter
import numpy as np
from sklearn import linear_model
import timeit
from scipy.stats import anderson
import dateutl
from Stock import Stock
from LearningData import LearningData
from sklearn import preprocessing
"""std_scale = preprocessing.StandardScaler().fit(X_train)
X_train_std = std_scale.transform(X_train)
X_test_std = std_scale.transform(X_test) """
def form_data(stocks, init_param):
""" This function constructs the training, testing and cross validation
objects for the stock market analysis """
rs = stocks[1].rsi
ts = stocks[1].tsi
a = 1
for date in init_param.reference_dates:
try:
training_data
except NameError:
training_data = LearningData()
training_data.construct(stocks, date, init_param.future_day, init_param.features)
else:
training_data.append(stocks, date, init_param.future_day, init_param.features)
for date in init_param.test_dates:
try:
test_data
except NameError:
test_data = LearningData()
test_data.construct(stocks, date, init_param.future_day, init_param.features)
else:
test_data.append(stocks, date, init_param.future_day, init_param.features)
#reference_date = dateutl.days_since_1900('1991-01-01')
#test_data.construct(stocks,[reference_date, day_history, init_param.future_day])
return training_data, test_data
def output(training_data, cv_data):
" This function outputs the data in csv form so it can be examined in Matlab"
f = open('training_x.txt','w')
for i in range(0,training_data.m):
x_str = ','.join(str(x) for x in training_data.X[i])
print(x_str)
f.write(x_str + '\n')
    f.close()
f = open('training_y.txt','w')
y_str = ','.join(str(y) for y in training_data.y)
f.write(y_str)
    f.close()
f = open('cv_x.txt','w')
for i in range(0,cv_data.m):
x_str = ','.join(str(x) for x in cv_data.X[i])
print(x_str)
f.write(x_str + '\n')
    f.close()
f = open('cv_y.txt','w')
y_str = ','.join(str(y) for y in cv_data.y)
f.write(y_str)
    f.close()
def logistic_reg(training_data):
""" This function does the actual training. It takes in training data
and cross validation data and returns the model and optimal
regularization parameter """
""" Setting guesses for minimum and maximum values of regularization parameter then
find the value of parameter that minimizes error on cross validation data. If
local minimum is found the return this model. If not, extend minimum or maximum
appropriately and repeat """
from sklearn.linear_model import LogisticRegression
C_min = 1.0e-5
C_max = 1.0e5
    regularization_flag = 1 # Set to 1 until a local minimum is found
regularization_param = 0
# while regularization_flag != 0:
# regularization_param, regularization_flag = set_reg_param(training_data, cv_data, alpha_min, alpha_max)
# if regularization_flag == -1:
# """ The local minimum is at point less than alpha_min """
# alpha_min = alpha_min * 0.3
# if regularization_flag == 1:
# """ The local minimum is at point greater then alpha_max """
# alpha_max = alpha_max * 3
lr = LogisticRegression (C=C_max, random_state=0)
lr.fit(training_data.X, training_data.y)
return lr, C_max
def set_reg_param(training_data, cv_data, alpha_min, alpha_max):
""" This function does a linear regression with regularization for training_data
then tests prediction for training_data and cv_data over a range of regularization
parameters. If a local minimum is found it returns the parameter and a 0 to indicate
        it is complete. If the minimum is below alpha_min it returns -1 for the flag. If it
        is above alpha_max, it returns 1 for the flag. """
f = open('alpha.txt', 'w')
alph = alpha_min
min_alpha = alpha_min # This is the value of alpha in our range that gives minimum for cv data
    alpha_largest = alpha_min # Learning is not generally done at alpha_min, this tracks the largest alpha used
while alph < alpha_max:
""" Learn for this parameter """
clf = linear_model.Ridge (alpha=alph, fit_intercept=False)
clf.fit(training_data.X, training_data.y)
""" Get prediction for this parameter """
predict_data = clf.predict(training_data.X)
predict_cv = clf.predict(cv_data.X)
""" Caculate the differences from actual data for training and cv data"""
diff_training = (1.0/training_data.m) * np.linalg.norm(predict_data - training_data.y)
diff_cv = (1.0/cv_data.m) * np.linalg.norm(predict_cv - cv_data.y)
""" Write out the values for plotting. Do appropriate work to determine min_val_alpha """
f.write(str(alph) + " " + str(diff_training) + " " + str(diff_cv) + "\n")
if alph == alpha_min:
min_diff = diff_cv # Just setting default value for first value of alph
min_alpha = alpha_min
if diff_cv < min_diff:
""" We have a new minimum so value and alph must be recored """
min_diff = diff_cv
min_alpha = alph
alpha_largest = alph # Keep track of largest alpha used
alph = alph * 1.5 # increment alph
f.close()
""" Loop is now complete. If min_value_alpha is not alpha_min or alpha_max, return flag of 0
else return -1 or 1 so min or max can be adjusted and loop completed again """
if abs(min_alpha - alpha_min) < alpha_min/10.0:
flag = -1 # Local minimum is less than alpha_min so return -1
elif abs(min_alpha - alpha_largest) < alpha_min/10.0:
flag = 1 # Local minimum is greater than alpha_max so return 1
else:
flag = 0 # Local minimum is in range so return 0
return min_alpha, flag
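# Illustrative sketch (defined here but not called anywhere in this script):
# the bracket-widening search that logistic_reg() describes, and that its
# commented-out block hints at, would wrap set_reg_param() like this. The
# 0.3 and 3 widening factors are taken from that commented-out block.
def _search_reg_param(training_data, cv_data, alpha_min=1.0e-5, alpha_max=1.0e5):
    """ Repeatedly call set_reg_param, widening the bracket until the best
        alpha falls strictly inside it (flag == 0), then return that alpha """
    flag = 1  # stays non-zero until a local minimum is found inside the bracket
    alpha = alpha_min
    while flag != 0:
        alpha, flag = set_reg_param(training_data, cv_data, alpha_min, alpha_max)
        if flag == -1:
            alpha_min = alpha_min * 0.3  # minimum is below the bracket; extend downward
        if flag == 1:
            alpha_max = alpha_max * 3  # minimum is above the bracket; extend upward
    return alpha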
def examine(stocks, init_param, C_in, gamma_in, verbose):
""" This plot takes in the stocks and features. It plots a ROC curve
returns the Area under the curve"""
from sklearn.svm import SVC
from sklearn import metrics
import matplotlib.pyplot as plt
# import pandas as pd
training_data, test_data = form_data(stocks, init_param)
std_scale = preprocessing.StandardScaler().fit(training_data.X)
training_data.X = std_scale.transform(training_data.X)
test_data.X = std_scale.transform(test_data.X)
svm = SVC(kernel='rbf', random_state=0, gamma = gamma_in, C=C_in, probability=True)
svm.fit(training_data.X, training_data.y)
preds = svm.predict_proba(test_data.X)[:,1]
fpr, tpr, _ = metrics.roc_curve(test_data.y, preds)
# df = pd.DataFrame(dict(fpr=fpr, tpr=tpr))
roc_auc = metrics.auc(fpr,tpr)
if verbose:
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
return roc_auc
def choose_features(stocks, init_param, C, gamma):
""" This function chooses the feature from the available_features array
    that, when added to chosen_features, gives the maximum area under the curve.
It returns chosen features and available_features arrays with
the best feature added to the former and removed from the latter.
It also appends the best aoc onto the aoc array"""
chosen_features = []
available_features = init_param.features[:]
"""The code is written to edit init_param.features but make a copy to
restore things after the loop"""
init_param_features = init_param.features[:]
aoc = []
while (len(available_features) > 5):
best_aoc = 0
for feature in available_features:
input_features = chosen_features[:]
input_features.append(feature)
init_param.features = input_features
feature_aoc = examine(stocks, init_param, C, gamma, False)
if feature_aoc > best_aoc:
best_aoc = feature_aoc
best_feature = feature
chosen_features.append(best_feature)
available_features.remove(best_feature)
aoc.append(best_aoc)
""" Restore init_param.features """
init_param.features = init_param_features[:]
return chosen_features, available_features, aoc
def execute(init_param):
""" execute is the function where each run is done. main sets parameters then calls execute"""
from sklearn.svm import SVC
import matplotlib.pyplot as plt
    start = timeit.default_timer()  # wall-clock timer; timeit.timeit() would benchmark an empty statement
stocks = Stock.read_stocks('../data/stocks_read.txt', init_param.max_stocks)
# stocks = 1
""" Chose the best feature """
# chosen_features = []
# available_features = init_param.features
C = 1
gamma = 0.2
chosen_features, available_features, aoc = choose_features(stocks, init_param, C, gamma)
init_param.features = ['rsi','tsi']
verbose = True
    examine(stocks, init_param, C, gamma, verbose)
training_data, test_data = form_data(stocks, init_param)
std_scale = preprocessing.StandardScaler().fit(training_data.X)
training_data.X = std_scale.transform(training_data.X)
test_data.X = std_scale.transform(test_data.X)
    end1 = timeit.default_timer()
print("form_data took ", (end1-start))
print("training_data has ",len(training_data.y)," elements")
print("test_data has ",len(test_data.y)," elements")
if init_param.output:
        output(training_data, test_data)  # test_data serves as the hold-out set here
#clf, regularization_parameter = learn(training_data, cv_data)
""" lr, C = logistic_reg(training_data)
test_predict = lr.predict(test_data.X)
errors = np.count_nonzero(test_predict - test_data.y)
accuracy = 1.0 - (errors/len(test_predict))
print("accuracy is ",accuracy)
end2 = timeit.timeit()
print("regression took ",(end2-end1))"""
train_errors, test_errors, C_arr = [], [], []
train_accuracy, test_accuracy = [],[]
C_i = 0.01
while C_i < 10:
svm = SVC(kernel='rbf', random_state=0, gamma = 0.2, C=C_i)
svm.fit(training_data.X, training_data.y)
train_errors.append(np.count_nonzero(svm.predict(training_data.X)-training_data.y))
accuracy = 1.0 - np.count_nonzero(svm.predict(training_data.X)-training_data.y)/len(training_data.y)
train_accuracy.append(accuracy)
test_errors.append(np.count_nonzero(svm.predict(test_data.X)-test_data.y))
accuracy = 1.0 - np.count_nonzero(svm.predict(test_data.X)-test_data.y)/len(test_data.y)
test_accuracy.append(accuracy)
C_arr.append(C_i)
C_i = C_i *1.1
plt.plot(C_arr, train_accuracy,c='r')
plt.plot(C_arr, test_accuracy,c='b')
plt.xscale('log')
plt.show()
yy = np.asarray(training_data.y)
XX = np.asarray(training_data.X)
XX0 = XX[yy == 0]
XX1 = XX[yy == 1]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(XX0[:,0], XX0[:,12],c='red')
ax1.scatter(XX1[:,0], XX1[:,12],c='blue')
plt.show()
# init_param2 = init_param
# init_param2.reference_dates = [dateutl.days_since_1900('2000-01-01')]
# init_param2.test_dates = [dateutl.days_since_1900('2010-01-01')]
# training_data2, test_data2 = form_data(init_param2)
# lr, C = logistic_reg(training_data2)
# test_predict2 = lr.predict(test_data2.X)
# errors = np.count_nonzero(test_predict2 - test_data2.y)
# accuracy = 1.0 - (errors/len(test_predict))
print("accuracy is ",accuracy)
print("run finished with accuracy", accuracy)
class InitialParameters(object):
""" This class defines an object of parameters used to run the code. It
is set in main and the parameters are passed to execute """
def __init__(self):
""" The object is defined with default values that can then be changed in main()"""
#self.max_stocks = 100
self.max_stocks = 200
""" cv_factor determines what portion of stocks to put in cross validation set and what portion
to leave in training set. cv_factor = 2 means every other stock goes into cross validation
set. cv_factor = 3 means every third stock goes into cross validation set """
self.cv_factor = 2
""" future_day is how many training days in the future we train for. Setting future_day = 25
means we are measuring how the stock does 25 days out """
self.future_day = 25
""" The reference dates are the reference dates we are training on"""
self.reference_dates = []
#self.reference_dates.append(dateutl.days_since_1900('1980-01-01'))
self.reference_dates.append(dateutl.days_since_1900('2001-01-01'))
"""self.reference_dates.append(dateutl.days_since_1900('2001-03-01'))
self.reference_dates.append(dateutl.days_since_1900('2001-05-01'))
self.reference_dates.append(dateutl.days_since_1900('2001-07-01'))
self.reference_dates.append(dateutl.days_since_1900('2001-09-01'))
self.reference_dates.append(dateutl.days_since_1900('2001-11-01'))"""
""" test_dates are the dates we are using for testing """
self.test_dates = []
#self.test_dates.append(dateutl.days_since_1900('1991-01-01'))
self.test_dates.append(dateutl.days_since_1900('2010-01-01'))
self.test_dates.append(dateutl.days_since_1900('2010-03-01'))
self.test_dates.append(dateutl.days_since_1900('2010-05-01'))
self.test_dates.append(dateutl.days_since_1900('2010-07-01'))
self.test_dates.append(dateutl.days_since_1900('2010-09-01'))
self.test_dates.append(dateutl.days_since_1900('2010-11-01'))
"""train_history_days and train_increment set how many historical days we use to
train and the increment used. Setting train_history_days = 21 and train_increment = 5
means we are using the values at days days 5, 10, 15 and 20 days before the reference day
as input features """
self.train_days = 21
self.train_increment = 5
self.features = ['rsi','tsi','ppo','adx','dip14','dim14','cci', \
'cmo','mfi','natr','roc','stoch','uo']
""" output is just a boolean about calling the output function to write out
        appropriate X and y matrices. The default is False meaning do not write out
        matrices """
self.output = False
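# Worked example of the convention documented in InitialParameters.__init__
# (illustrative only, not used by the script): with train_days = 21 and
# train_increment = 5 the historical offsets used as input features are the
# days 5, 10, 15 and 20 before each reference date:
#
#     offsets = list(range(5, 21, 5))   # -> [5, 10, 15, 20]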
def main(argv):
init_param = InitialParameters()
#init_param.reference_dates.append(dateutl.days_since_1900('1981-01-01'))
#init_param.reference_dates.append(dateutl.days_since_1900('2001-01-01'))
execute(init_param)
if __name__ == "__main__":
main(sys.argv)
| gpl-3.0 |
shareactorIO/pipeline | source.ml/prediction.ml/python/store/default/python_balancescale/1/train_balancescale.py | 1 | 1668 | import pickle
import pandas as pd
# Scikit-learn method to split the dataset into train and test dataset
from sklearn.cross_validation import train_test_split
# Scikit-learn method to implement the decision tree classifier
from sklearn.tree import DecisionTreeClassifier
# Load the dataset
balance_scale_data = pd.read_csv('balancescale.data', sep=',', header=None)
print("Dataset Length:: ", len(balance_scale_data))
print("Dataset Shape:: ", balance_scale_data.shape)
# Split the dataset into train and test dataset
X = balance_scale_data.values[:, 1:5]
Y = balance_scale_data.values[:, 0]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=100)
# Decision tree model with the Gini index criterion
decision_tree_model = DecisionTreeClassifier(criterion="gini", random_state=100, max_depth=3, min_samples_leaf=5)
decision_tree_model.fit(X_train, y_train)
print("Decision Tree classifier :: ", decision_tree_model)
print("prediction: ", decision_tree_model.predict([1,1,3,4]))
# Dump the trained decision tree classifier with Pickle
decision_tree_pkl_filename = 'python_balancescale.pkl'
# Open the file to save as pkl file
decision_tree_model_pkl = open(decision_tree_pkl_filename, 'wb')
pickle.dump(decision_tree_model, decision_tree_model_pkl)
# Close the pickle instances
decision_tree_model_pkl.close()
# Loading the saved decision tree model pickle
decision_tree_model_pkl = open(decision_tree_pkl_filename, 'rb')
decision_tree_model = pickle.load(decision_tree_model_pkl)
print("Loaded Decision tree model :: ", decision_tree_model)
print("prediction: ", decision_tree_model.predict([[1,1,3,4]]))
decision_tree_model_pkl.close() | apache-2.0 |
pravsripad/mne-python | mne/export/_egimff.py | 9 | 5694 | # -*- coding: utf-8 -*-
# Authors: MNE Developers
#
# License: BSD-3-Clause
import os
import shutil
import datetime
import os.path as op
import numpy as np
from ..io.egi.egimff import _import_mffpy
from ..io.pick import pick_types, pick_channels
from ..utils import verbose, warn, _check_fname
@verbose
def export_evokeds_mff(fname, evoked, history=None, *, overwrite=False,
verbose=None):
"""Export evoked dataset to MFF.
%(export_warning)s
Parameters
----------
%(fname_export_params)s
evoked : list of Evoked instances
List of evoked datasets to export to one file. Note that the
measurement info from the first evoked instance is used, so be sure
that information matches.
history : None (default) | list of dict
Optional list of history entries (dictionaries) to be written to
history.xml. This must adhere to the format described in
mffpy.xml_files.History.content. If None, no history.xml will be
written.
%(overwrite)s
.. versionadded:: 0.24.1
%(verbose)s
Notes
-----
.. versionadded:: 0.24
%(export_warning_note_evoked)s
Only EEG channels are written to the output file.
``info['device_info']['type']`` must be a valid MFF recording device
(e.g. 'HydroCel GSN 256 1.0'). This field is automatically populated when
using MFF read functions.
"""
mffpy = _import_mffpy('Export evokeds to MFF.')
import pytz
info = evoked[0].info
if np.round(info['sfreq']) != info['sfreq']:
raise ValueError('Sampling frequency must be a whole number. '
f'sfreq: {info["sfreq"]}')
sampling_rate = int(info['sfreq'])
# check for unapplied projectors
if any(not proj['active'] for proj in evoked[0].info['projs']):
warn('Evoked instance has unapplied projectors. Consider applying '
'them before exporting with evoked.apply_proj().')
# Initialize writer
# Future changes: conditions based on version or mffpy requirement if
# https://github.com/BEL-Public/mffpy/pull/92 is merged and released.
fname = _check_fname(fname, overwrite=overwrite)
if op.exists(fname):
os.remove(fname) if op.isfile(fname) else shutil.rmtree(fname)
writer = mffpy.Writer(fname)
current_time = pytz.utc.localize(datetime.datetime.utcnow())
writer.addxml('fileInfo', recordTime=current_time)
try:
device = info['device_info']['type']
except (TypeError, KeyError):
raise ValueError('No device type. Cannot determine sensor layout.')
writer.add_coordinates_and_sensor_layout(device)
# Add EEG data
eeg_channels = pick_types(info, eeg=True, exclude=[])
eeg_bin = mffpy.bin_writer.BinWriter(sampling_rate)
for ave in evoked:
        # Signals are converted to µV
block = (ave.data[eeg_channels] * 1e6).astype(np.float32)
eeg_bin.add_block(block, offset_us=0)
writer.addbin(eeg_bin)
# Add categories
categories_content = _categories_content_from_evokeds(evoked)
writer.addxml('categories', categories=categories_content)
# Add history
if history:
writer.addxml('historyEntries', entries=history)
writer.write()
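# Usage sketch (illustrative, not part of the MNE module; the file names are
# hypothetical): export a list of Evoked objects read with mne.read_evokeds.
#
#     import mne
#     evokeds = mne.read_evokeds('sample-ave.fif')
#     export_evokeds_mff('sample-ave.mff', evokeds, overwrite=True)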
def _categories_content_from_evokeds(evoked):
"""Return categories.xml content for evoked dataset."""
content = dict()
begin_time = 0
for ave in evoked:
# Times are converted to microseconds
sfreq = ave.info['sfreq']
duration = np.round(len(ave.times) / sfreq * 1e6).astype(int)
end_time = begin_time + duration
event_time = begin_time - np.round(ave.tmin * 1e6).astype(int)
eeg_bads = _get_bad_eeg_channels(ave.info)
content[ave.comment] = [
_build_segment_content(begin_time, end_time, event_time, eeg_bads,
name='Average', nsegs=ave.nave)
]
begin_time += duration
return content
def _get_bad_eeg_channels(info):
"""Return a list of bad EEG channels formatted for categories.xml.
Given a list of only the EEG channels in file, return the indices of this
list (starting at 1) that correspond to bad channels.
"""
if len(info['bads']) == 0:
return []
eeg_channels = pick_types(info, eeg=True, exclude=[])
bad_channels = pick_channels(info['ch_names'], info['bads'])
bads_elementwise = np.isin(eeg_channels, bad_channels)
return list(np.flatnonzero(bads_elementwise) + 1)
def _build_segment_content(begin_time, end_time, event_time, eeg_bads,
status='unedited', name=None, pns_bads=None,
nsegs=None):
"""Build content for a single segment in categories.xml.
Segments are sorted into categories in categories.xml. In a segmented MFF
each category can contain multiple segments, but in an averaged MFF each
category only contains one segment (the average).
"""
channel_status = [{
'signalBin': 1,
'exclusion': 'badChannels',
'channels': eeg_bads
}]
if pns_bads:
channel_status.append({
'signalBin': 2,
'exclusion': 'badChannels',
'channels': pns_bads
})
content = {
'status': status,
'beginTime': begin_time,
'endTime': end_time,
'evtBegin': event_time,
'evtEnd': event_time,
'channelStatus': channel_status,
}
if name:
content['name'] = name
if nsegs:
content['keys'] = {
'#seg': {
'type': 'long',
'data': nsegs
}
}
return content
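# Example of the structure produced above (illustrative values): calling
# _build_segment_content(0, 1000, 500, [3], name='Average', nsegs=10) returns
#
#     {'status': 'unedited', 'beginTime': 0, 'endTime': 1000,
#      'evtBegin': 500, 'evtEnd': 500,
#      'channelStatus': [{'signalBin': 1, 'exclusion': 'badChannels',
#                         'channels': [3]}],
#      'name': 'Average', 'keys': {'#seg': {'type': 'long', 'data': 10}}}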
| bsd-3-clause |
MebiusHKU/flask-web | flask/lib/python2.7/site-packages/jinja2/visitor.py | 1402 | 3316 | # -*- coding: utf-8 -*-
"""
jinja2.visitor
~~~~~~~~~~~~~~
This module implements a visitor for the nodes.
:copyright: (c) 2010 by the Jinja Team.
:license: BSD.
"""
from jinja2.nodes import Node
class NodeVisitor(object):
"""Walks the abstract syntax tree and call visitor functions for every
node found. The visitor functions may return values which will be
forwarded by the `visit` method.
Per default the visitor functions for the nodes are ``'visit_'`` +
class name of the node. So a `TryFinally` node visit function would
be `visit_TryFinally`. This behavior can be changed by overriding
the `get_visitor` function. If no visitor function exists for a node
(return value `None`) the `generic_visit` visitor is used instead.
"""
def get_visitor(self, node):
"""Return the visitor function for this node or `None` if no visitor
exists for this node. In that case the generic visit function is
used instead.
"""
method = 'visit_' + node.__class__.__name__
return getattr(self, method, None)
def visit(self, node, *args, **kwargs):
"""Visit a node."""
f = self.get_visitor(node)
if f is not None:
return f(node, *args, **kwargs)
return self.generic_visit(node, *args, **kwargs)
def generic_visit(self, node, *args, **kwargs):
"""Called if no explicit visitor function exists for a node."""
for node in node.iter_child_nodes():
self.visit(node, *args, **kwargs)
class NodeTransformer(NodeVisitor):
"""Walks the abstract syntax tree and allows modifications of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor functions to replace or remove the old node. If the return
value of the visitor function is `None` the node will be removed
from the previous location otherwise it's replaced with the return
value. The return value may be the original node in which case no
replacement takes place.
"""
def generic_visit(self, node, *args, **kwargs):
for field, old_value in node.iter_fields():
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, Node):
value = self.visit(value, *args, **kwargs)
if value is None:
continue
elif not isinstance(value, Node):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, Node):
new_node = self.visit(old_value, *args, **kwargs)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
def visit_list(self, node, *args, **kwargs):
"""As transformers may return lists in some places this method
can be used to enforce a list as return value.
"""
rv = self.visit(node, *args, **kwargs)
if not isinstance(rv, list):
rv = [rv]
return rv
| bsd-3-clause |
rubasben/namebench | nb_third_party/jinja2/visitor.py | 1402 | 3316 | # -*- coding: utf-8 -*-
"""
jinja2.visitor
~~~~~~~~~~~~~~
This module implements a visitor for the nodes.
:copyright: (c) 2010 by the Jinja Team.
:license: BSD.
"""
from jinja2.nodes import Node
class NodeVisitor(object):
"""Walks the abstract syntax tree and call visitor functions for every
node found. The visitor functions may return values which will be
forwarded by the `visit` method.
Per default the visitor functions for the nodes are ``'visit_'`` +
class name of the node. So a `TryFinally` node visit function would
be `visit_TryFinally`. This behavior can be changed by overriding
the `get_visitor` function. If no visitor function exists for a node
(return value `None`) the `generic_visit` visitor is used instead.
"""
def get_visitor(self, node):
"""Return the visitor function for this node or `None` if no visitor
exists for this node. In that case the generic visit function is
used instead.
"""
method = 'visit_' + node.__class__.__name__
return getattr(self, method, None)
def visit(self, node, *args, **kwargs):
"""Visit a node."""
f = self.get_visitor(node)
if f is not None:
return f(node, *args, **kwargs)
return self.generic_visit(node, *args, **kwargs)
def generic_visit(self, node, *args, **kwargs):
"""Called if no explicit visitor function exists for a node."""
for node in node.iter_child_nodes():
self.visit(node, *args, **kwargs)
class NodeTransformer(NodeVisitor):
"""Walks the abstract syntax tree and allows modifications of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor functions to replace or remove the old node. If the return
value of the visitor function is `None` the node will be removed
from the previous location otherwise it's replaced with the return
value. The return value may be the original node in which case no
replacement takes place.
"""
def generic_visit(self, node, *args, **kwargs):
for field, old_value in node.iter_fields():
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, Node):
value = self.visit(value, *args, **kwargs)
if value is None:
continue
elif not isinstance(value, Node):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, Node):
new_node = self.visit(old_value, *args, **kwargs)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
def visit_list(self, node, *args, **kwargs):
"""As transformers may return lists in some places this method
can be used to enforce a list as return value.
"""
rv = self.visit(node, *args, **kwargs)
if not isinstance(rv, list):
rv = [rv]
return rv
| apache-2.0 |
nwiizo/workspace_2017 | keras_ex/example/mnist_irnn.py | 9 | 2333 | '''This is a reproduction of the IRNN experiment
with pixel-by-pixel sequential MNIST in
"A Simple Way to Initialize Recurrent Networks of Rectified Linear Units"
by Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton
arXiv:1504.00941v2 [cs.NE] 7 Apr 2015
http://arxiv.org/pdf/1504.00941v2.pdf
Optimizer is replaced with RMSprop which yields more stable and steady
improvement.
Reaches 0.93 train/test accuracy after 900 epochs
(which roughly corresponds to 1687500 steps in the original paper.)
'''
from __future__ import print_function
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras.initializations import normal, identity
from keras.optimizers import RMSprop
from keras.utils import np_utils
batch_size = 32
nb_classes = 10
nb_epochs = 200
hidden_units = 100
learning_rate = 1e-6
clip_norm = 1.0
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], -1, 1)
X_test = X_test.reshape(X_test.shape[0], -1, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
print('Evaluate IRNN...')
model = Sequential()
model.add(SimpleRNN(output_dim=hidden_units,
init=lambda shape, name: normal(shape, scale=0.001, name=name),
inner_init=lambda shape, name: identity(shape, scale=1.0, name=name),
activation='relu',
input_shape=X_train.shape[1:]))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
rmsprop = RMSprop(lr=learning_rate)
model.compile(loss='categorical_crossentropy',
optimizer=rmsprop,
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epochs,
verbose=1, validation_data=(X_test, Y_test))
scores = model.evaluate(X_test, Y_test, verbose=0)
print('IRNN test score:', scores[0])
print('IRNN test accuracy:', scores[1])
| mit |
pnxs/dots-code-generator | dots/dots_parser.py | 1 | 94683 | # The file was automatically generated by Lark v0.11.3
__version__ = "0.11.3"
#
#
# Lark Stand-alone Generator Tool
# ----------------------------------
# Generates a stand-alone LALR(1) parser with a standard lexer
#
# Git: https://github.com/erezsh/lark
# Author: Erez Shinan (erezshin@gmail.com)
#
#
# >>> LICENSE
#
# This tool and its generated code use a separate license from Lark,
# and are subject to the terms of the Mozilla Public License, v. 2.0.
# If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.
#
# If you wish to purchase a commercial license for this tool and its
# generated code, you may contact me via email or otherwise.
#
# If MPL2 is incompatible with your free or open-source project,
# contact me and we'll work it out.
#
#
from io import open
class LarkError(Exception):
pass
class ConfigurationError(LarkError, ValueError):
pass
def assert_config(value, options, msg='Got %r, expected one of %s'):
if value not in options:
raise ConfigurationError(msg % (value, options))
class GrammarError(LarkError):
pass
class ParseError(LarkError):
pass
class LexError(LarkError):
pass
class UnexpectedInput(LarkError):
#--
pos_in_stream = None
_terminals_by_name = None
def get_context(self, text, span=40):
#--
assert self.pos_in_stream is not None, self
pos = self.pos_in_stream
start = max(pos - span, 0)
end = pos + span
if not isinstance(text, bytes):
before = text[start:pos].rsplit('\n', 1)[-1]
after = text[pos:end].split('\n', 1)[0]
return before + after + '\n' + ' ' * len(before.expandtabs()) + '^\n'
else:
before = text[start:pos].rsplit(b'\n', 1)[-1]
after = text[pos:end].split(b'\n', 1)[0]
return (before + after + b'\n' + b' ' * len(before.expandtabs()) + b'^\n').decode("ascii", "backslashreplace")
def match_examples(self, parse_fn, examples, token_type_match_fallback=False, use_accepts=False):
#--
assert self.state is not None, "Not supported for this exception"
if isinstance(examples, dict):
examples = examples.items()
candidate = (None, False)
for i, (label, example) in enumerate(examples):
assert not isinstance(example, STRING_TYPE)
for j, malformed in enumerate(example):
try:
parse_fn(malformed)
except UnexpectedInput as ut:
if ut.state == self.state:
if use_accepts and hasattr(self, 'accepts') and ut.accepts != self.accepts:
logger.debug("Different accepts with same state[%d]: %s != %s at example [%s][%s]" %
(self.state, self.accepts, ut.accepts, i, j))
continue
try:
if ut.token == self.token: ##
logger.debug("Exact Match at example [%s][%s]" % (i, j))
return label
if token_type_match_fallback:
##
if (ut.token.type == self.token.type) and not candidate[-1]:
logger.debug("Token Type Fallback at example [%s][%s]" % (i, j))
candidate = label, True
except AttributeError:
pass
if candidate[0] is None:
logger.debug("Same State match at example [%s][%s]" % (i, j))
candidate = label, False
return candidate[0]
def _format_expected(self, expected):
if self._terminals_by_name:
d = self._terminals_by_name
expected = [d[t_name].user_repr() if t_name in d else t_name for t_name in expected]
return "Expected one of: \n\t* %s\n" % '\n\t* '.join(expected)
class UnexpectedEOF(ParseError, UnexpectedInput):
def __init__(self, expected, state=None, terminals_by_name=None):
self.expected = expected
self.state = state
from .lexer import Token
self.token = Token("<EOF>", "") ##
self.pos_in_stream = -1
self.line = -1
self.column = -1
self._terminals_by_name = terminals_by_name
super(UnexpectedEOF, self).__init__()
def __str__(self):
message = "Unexpected end-of-input. "
message += self._format_expected(self.expected)
return message
class UnexpectedCharacters(LexError, UnexpectedInput):
def __init__(self, seq, lex_pos, line, column, allowed=None, considered_tokens=None, state=None, token_history=None,
terminals_by_name=None, considered_rules=None):
##
self.line = line
self.column = column
self.pos_in_stream = lex_pos
self.state = state
self._terminals_by_name = terminals_by_name
self.allowed = allowed
self.considered_tokens = considered_tokens
self.considered_rules = considered_rules
self.token_history = token_history
if isinstance(seq, bytes):
self.char = seq[lex_pos:lex_pos + 1].decode("ascii", "backslashreplace")
else:
self.char = seq[lex_pos]
self._context = self.get_context(seq)
super(UnexpectedCharacters, self).__init__()
def __str__(self):
message = "No terminal matches '%s' in the current parser context, at line %d col %d" % (self.char, self.line, self.column)
message += '\n\n' + self._context
if self.allowed:
message += self._format_expected(self.allowed)
if self.token_history:
message += '\nPrevious tokens: %s\n' % ', '.join(repr(t) for t in self.token_history)
return message
class UnexpectedToken(ParseError, UnexpectedInput):
#--
def __init__(self, token, expected, considered_rules=None, state=None, interactive_parser=None, terminals_by_name=None, token_history=None):
##
self.line = getattr(token, 'line', '?')
self.column = getattr(token, 'column', '?')
self.pos_in_stream = getattr(token, 'pos_in_stream', None)
self.state = state
self.token = token
self.expected = expected ##
self._accepts = NO_VALUE
self.considered_rules = considered_rules
self.interactive_parser = interactive_parser
self._terminals_by_name = terminals_by_name
self.token_history = token_history
super(UnexpectedToken, self).__init__()
@property
def accepts(self):
if self._accepts is NO_VALUE:
self._accepts = self.interactive_parser and self.interactive_parser.accepts()
return self._accepts
def __str__(self):
message = ("Unexpected token %r at line %s, column %s.\n%s"
% (self.token, self.line, self.column, self._format_expected(self.accepts or self.expected)))
if self.token_history:
message += "Previous tokens: %r\n" % self.token_history
return message
@property
def puppet(self):
warn("UnexpectedToken.puppet attribute has been renamed to interactive_parser", DeprecationWarning)
return self.interactive_parser
class VisitError(LarkError):
#--
def __init__(self, rule, obj, orig_exc):
self.obj = obj
self.orig_exc = orig_exc
message = 'Error trying to process rule "%s":\n\n%s' % (rule, orig_exc)
super(VisitError, self).__init__(message)
import sys, re
import logging
from io import open
logger = logging.getLogger("lark")
logger.addHandler(logging.StreamHandler())
##
##
logger.setLevel(logging.CRITICAL)
if sys.version_info[0]>2:
from abc import ABC, abstractmethod
else:
from abc import ABCMeta, abstractmethod
class ABC(object): ##
__slots__ = ()
        __metaclass__ = ABCMeta
Py36 = (sys.version_info[:2] >= (3, 6))
NO_VALUE = object()
def classify(seq, key=None, value=None):
d = {}
for item in seq:
k = key(item) if (key is not None) else item
v = value(item) if (value is not None) else item
if k in d:
d[k].append(v)
else:
d[k] = [v]
return d
def _deserialize(data, namespace, memo):
if isinstance(data, dict):
if '__type__' in data: ##
class_ = namespace[data['__type__']]
return class_.deserialize(data, memo)
elif '@' in data:
return memo[data['@']]
return {key:_deserialize(value, namespace, memo) for key, value in data.items()}
elif isinstance(data, list):
return [_deserialize(value, namespace, memo) for value in data]
return data
class Serialize(object):
#--
def memo_serialize(self, types_to_memoize):
memo = SerializeMemoizer(types_to_memoize)
return self.serialize(memo), memo.serialize()
def serialize(self, memo=None):
if memo and memo.in_types(self):
return {'@': memo.memoized.get(self)}
fields = getattr(self, '__serialize_fields__')
res = {f: _serialize(getattr(self, f), memo) for f in fields}
res['__type__'] = type(self).__name__
postprocess = getattr(self, '_serialize', None)
if postprocess:
postprocess(res, memo)
return res
@classmethod
def deserialize(cls, data, memo):
namespace = getattr(cls, '__serialize_namespace__', {})
namespace = {c.__name__:c for c in namespace}
fields = getattr(cls, '__serialize_fields__')
if '@' in data:
return memo[data['@']]
inst = cls.__new__(cls)
for f in fields:
try:
setattr(inst, f, _deserialize(data[f], namespace, memo))
except KeyError as e:
raise KeyError("Cannot find key for class", cls, e)
postprocess = getattr(inst, '_deserialize', None)
if postprocess:
postprocess()
return inst
class SerializeMemoizer(Serialize):
#--
__serialize_fields__ = 'memoized',
def __init__(self, types_to_memoize):
self.types_to_memoize = tuple(types_to_memoize)
self.memoized = Enumerator()
def in_types(self, value):
return isinstance(value, self.types_to_memoize)
def serialize(self):
return _serialize(self.memoized.reversed(), None)
@classmethod
def deserialize(cls, data, namespace, memo):
return _deserialize(data, namespace, memo)
try:
STRING_TYPE = basestring
except NameError: ##
STRING_TYPE = str
import types
from functools import wraps, partial
from contextlib import contextmanager
Str = type(u'')
try:
classtype = types.ClassType ##
except AttributeError:
classtype = type ##
def smart_decorator(f, create_decorator):
if isinstance(f, types.FunctionType):
return wraps(f)(create_decorator(f, True))
elif isinstance(f, (classtype, type, types.BuiltinFunctionType)):
return wraps(f)(create_decorator(f, False))
elif isinstance(f, types.MethodType):
return wraps(f)(create_decorator(f.__func__, True))
elif isinstance(f, partial):
##
return wraps(f.func)(create_decorator(lambda *args, **kw: f(*args[1:], **kw), True))
else:
return create_decorator(f.__func__.__call__, True)
try:
import regex
except ImportError:
regex = None
import sre_parse
import sre_constants
categ_pattern = re.compile(r'\\p{[A-Za-z_]+}')
def get_regexp_width(expr):
if regex:
##
##
##
regexp_final = re.sub(categ_pattern, 'A', expr)
else:
if re.search(categ_pattern, expr):
raise ImportError('`regex` module must be installed in order to use Unicode categories.', expr)
regexp_final = expr
try:
return [int(x) for x in sre_parse.parse(regexp_final).getwidth()]
except sre_constants.error:
raise ValueError(expr)
from collections import OrderedDict
class Meta:
def __init__(self):
self.empty = True
class Tree(object):
#--
def __init__(self, data, children, meta=None):
self.data = data
self.children = children
self._meta = meta
@property
def meta(self):
if self._meta is None:
self._meta = Meta()
return self._meta
def __repr__(self):
return 'Tree(%r, %r)' % (self.data, self.children)
def _pretty_label(self):
return self.data
def _pretty(self, level, indent_str):
if len(self.children) == 1 and not isinstance(self.children[0], Tree):
return [indent_str*level, self._pretty_label(), '\t', '%s' % (self.children[0],), '\n']
l = [indent_str*level, self._pretty_label(), '\n']
for n in self.children:
if isinstance(n, Tree):
l += n._pretty(level+1, indent_str)
else:
l += [indent_str*(level+1), '%s' % (n,), '\n']
return l
def pretty(self, indent_str=' '):
#--
return ''.join(self._pretty(0, indent_str))
def __eq__(self, other):
try:
return self.data == other.data and self.children == other.children
except AttributeError:
return False
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash((self.data, tuple(self.children)))
def iter_subtrees(self):
#--
queue = [self]
subtrees = OrderedDict()
for subtree in queue:
subtrees[id(subtree)] = subtree
queue += [c for c in reversed(subtree.children)
if isinstance(c, Tree) and id(c) not in subtrees]
del queue
return reversed(list(subtrees.values()))
def find_pred(self, pred):
#--
return filter(pred, self.iter_subtrees())
def find_data(self, data):
#--
return self.find_pred(lambda t: t.data == data)
from inspect import getmembers, getmro
class Discard(Exception):
#--
pass
##
class _Decoratable:
#--
@classmethod
def _apply_decorator(cls, decorator, **kwargs):
mro = getmro(cls)
assert mro[0] is cls
libmembers = {name for _cls in mro[1:] for name, _ in getmembers(_cls)}
for name, value in getmembers(cls):
##
if name.startswith('_') or (name in libmembers and name not in cls.__dict__):
continue
if not callable(value):
continue
##
if hasattr(cls.__dict__[name], 'vargs_applied') or hasattr(value, 'vargs_applied'):
continue
static = isinstance(cls.__dict__[name], (staticmethod, classmethod))
setattr(cls, name, decorator(value, static=static, **kwargs))
return cls
def __class_getitem__(cls, _):
return cls
class Transformer(_Decoratable):
    # Visits the tree bottom-up, replacing each node with the result of the method named after its rule.
    __visit_tokens__ = True  # when True, Token values are also passed through matching callbacks
def __init__(self, visit_tokens=True):
self.__visit_tokens__ = visit_tokens
def _call_userfunc(self, tree, new_children=None):
        # new_children, if given, are already transformed
children = new_children if new_children is not None else tree.children
try:
f = getattr(self, tree.data)
except AttributeError:
return self.__default__(tree.data, children, tree.meta)
else:
try:
wrapper = getattr(f, 'visit_wrapper', None)
if wrapper is not None:
return f.visit_wrapper(f, tree.data, children, tree.meta)
else:
return f(children)
except (GrammarError, Discard):
raise
except Exception as e:
raise VisitError(tree.data, tree, e)
def _call_userfunc_token(self, token):
try:
f = getattr(self, token.type)
except AttributeError:
return self.__default_token__(token)
else:
try:
return f(token)
except (GrammarError, Discard):
raise
except Exception as e:
raise VisitError(token.type, token, e)
def _transform_children(self, children):
for c in children:
try:
if isinstance(c, Tree):
yield self._transform_tree(c)
elif self.__visit_tokens__ and isinstance(c, Token):
yield self._call_userfunc_token(c)
else:
yield c
except Discard:
pass
def _transform_tree(self, tree):
children = list(self._transform_children(tree.children))
return self._call_userfunc(tree, children)
def transform(self, tree):
        # Transform the tree and return the result.
return self._transform_tree(tree)
def __mul__(self, other):
        # Chain transformers: (T1 * T2).transform(t) == T2.transform(T1.transform(t)).
return TransformerChain(self, other)
def __default__(self, data, children, meta):
        # Fallback for rules without a matching method: rebuild the node unchanged.
return Tree(data, children, meta)
def __default_token__(self, token):
        # Fallback for tokens without a matching method: return the token unchanged.
return token
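# A hedged sketch (the rule names 'number' and 'add' are illustrative, not taken
# from the embedded grammar) of how Transformer is meant to be subclassed: one
# method per rule, called bottom-up with the already-transformed children.
class _ExampleCalc(Transformer):
    def number(self, children):
        return int(children[0])
    def add(self, children):
        left, right = children
        return left + right
# e.g. _ExampleCalc().transform(Tree('add', [Tree('number', ['1']), Tree('number', ['2'])])) == 3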
class InlineTransformer(Transformer):  # older flavour that splats children into the callback (v_args(inline=True) is the usual way now)
def _call_userfunc(self, tree, new_children=None):
##
children = new_children if new_children is not None else tree.children
try:
f = getattr(self, tree.data)
except AttributeError:
return self.__default__(tree.data, children, tree.meta)
else:
return f(*children)
class TransformerChain(object):
def __init__(self, *transformers):
self.transformers = transformers
def transform(self, tree):
for t in self.transformers:
tree = t.transform(tree)
return tree
def __mul__(self, other):
return TransformerChain(*self.transformers + (other,))
class Transformer_InPlace(Transformer):
    # Non-recursive transformer that rewrites the tree in place, subtree by subtree.
def _transform_tree(self, tree): ##
return self._call_userfunc(tree)
def transform(self, tree):
for subtree in tree.iter_subtrees():
subtree.children = list(self._transform_children(subtree.children))
return self._transform_tree(tree)
class Transformer_NonRecursive(Transformer):
    # Bottom-up transformer driven by an explicit stack, so deep trees do not hit Python's recursion limit.
def transform(self, tree):
        # flatten the tree into reverse post-order
rev_postfix = []
q = [tree]
while q:
t = q.pop()
rev_postfix.append(t)
if isinstance(t, Tree):
q += t.children
        # rebuild the results bottom-up using an explicit value stack
stack = []
for x in reversed(rev_postfix):
if isinstance(x, Tree):
size = len(x.children)
if size:
args = stack[-size:]
del stack[-size:]
else:
args = []
stack.append(self._call_userfunc(x, args))
elif self.__visit_tokens__ and isinstance(x, Token):
stack.append(self._call_userfunc_token(x))
else:
stack.append(x)
        t, = stack  # exactly one result, the transformed root, must remain on the stack
return t
class Transformer_InPlaceRecursive(Transformer):
    # Recursive variant of the in-place transformer.
def _transform_tree(self, tree):
tree.children = list(self._transform_children(tree.children))
return self._call_userfunc(tree)
##
class VisitorBase:
def _call_userfunc(self, tree):
return getattr(self, tree.data, self.__default__)(tree)
def __default__(self, tree):
        # Default callback: do nothing and return the tree.
return tree
def __class_getitem__(cls, _):
return cls
class Visitor(VisitorBase):
    # Bottom-up visitor: each subtree is handed to the method named after its rule; the tree is returned unchanged.
def visit(self, tree):
        # Visit every subtree, children before parents.
for subtree in tree.iter_subtrees():
self._call_userfunc(subtree)
return tree
def visit_topdown(self,tree):
        # Visit every subtree, parents before children.
for subtree in tree.iter_subtrees_topdown():
self._call_userfunc(subtree)
return tree
class Visitor_Recursive(VisitorBase):
    # Recursive visitor: slightly faster, but limited by Python's recursion depth.
def visit(self, tree):
#--
for child in tree.children:
if isinstance(child, Tree):
self.visit(child)
self._call_userfunc(tree)
return tree
def visit_topdown(self,tree):
#--
self._call_userfunc(tree)
for child in tree.children:
if isinstance(child, Tree):
self.visit_topdown(child)
return tree
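# A hedged sketch of the Visitor interface: callbacks receive whole subtrees and
# the tree itself is not replaced. The rule name 'number' is illustrative only.
class _ExampleNumberCounter(Visitor):
    def __init__(self):
        self.seen = 0
    def number(self, tree):
        self.seen += len(tree.children)
# usage: counter = _ExampleNumberCounter(); counter.visit(some_tree); counter.seen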
def visit_children_decor(func):
    # Decorator for Interpreter methods: visit the children first and pass their results in.
@wraps(func)
def inner(cls, tree):
values = cls.visit_children(tree)
return func(cls, values)
return inner
class Interpreter(_Decoratable):
    # Top-down visitor whose callbacks decide themselves whether (and when) to visit children.
def visit(self, tree):
f = getattr(self, tree.data)
wrapper = getattr(f, 'visit_wrapper', None)
if wrapper is not None:
return f.visit_wrapper(f, tree.data, tree.children, tree.meta)
else:
return f(tree)
def visit_children(self, tree):
return [self.visit(child) if isinstance(child, Tree) else child
for child in tree.children]
def __getattr__(self, name):
return self.__default__
def __default__(self, tree):
return self.visit_children(tree)
##
def _apply_decorator(obj, decorator, **kwargs):
try:
_apply = obj._apply_decorator
except AttributeError:
return decorator(obj, **kwargs)
else:
return _apply(decorator, **kwargs)
def _inline_args__func(func):
@wraps(func)
def create_decorator(_f, with_self):
if with_self:
def f(self, children):
return _f(self, *children)
else:
def f(self, children):
return _f(*children)
return f
return smart_decorator(func, create_decorator)
def inline_args(obj):  # older helper with the same effect as v_args(inline=True)
return _apply_decorator(obj, _inline_args__func)
def _visitor_args_func_dec(func, visit_wrapper=None, static=False):
def create_decorator(_f, with_self):
if with_self:
def f(self, *args, **kwargs):
return _f(self, *args, **kwargs)
else:
def f(self, *args, **kwargs):
return _f(*args, **kwargs)
return f
if static:
f = wraps(func)(create_decorator(func, False))
else:
f = smart_decorator(func, create_decorator)
f.vargs_applied = True
f.visit_wrapper = visit_wrapper
return f
def _vargs_inline(f, _data, children, _meta):
return f(*children)
def _vargs_meta_inline(f, _data, children, meta):
return f(meta, *children)
def _vargs_meta(f, _data, children, meta):
return f(children, meta) ##
def _vargs_tree(f, data, children, meta):
return f(Tree(data, children, meta))
def v_args(inline=False, meta=False, tree=False, wrapper=None):
    # Decorator factory controlling how callbacks receive their arguments: inline children, meta info, the whole tree, or a custom wrapper.
if tree and (meta or inline):
raise ValueError("Visitor functions cannot combine 'tree' with 'meta' or 'inline'.")
func = None
if meta:
if inline:
func = _vargs_meta_inline
else:
func = _vargs_meta
elif inline:
func = _vargs_inline
elif tree:
func = _vargs_tree
if wrapper is not None:
if func is not None:
raise ValueError("Cannot use 'wrapper' along with 'tree', 'meta' or 'inline'.")
func = wrapper
def _visitor_args_dec(obj):
return _apply_decorator(obj, _visitor_args_func_dec, visit_wrapper=func)
return _visitor_args_dec
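# A hedged sketch of v_args in action (the rule names are illustrative):
# inline=True unpacks each rule's children into positional arguments, so the
# callbacks read more naturally than with a single children list.
@v_args(inline=True)
class _ExampleInlineCalc(Transformer):
    def number(self, tok):
        return int(tok)
    def add(self, left, right):
        return left + right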
class Symbol(Serialize):
__slots__ = ('name',)
is_term = NotImplemented
def __init__(self, name):
self.name = name
def __eq__(self, other):
assert isinstance(other, Symbol), other
return self.is_term == other.is_term and self.name == other.name
def __ne__(self, other):
return not (self == other)
def __hash__(self):
return hash(self.name)
def __repr__(self):
return '%s(%r)' % (type(self).__name__, self.name)
fullrepr = property(__repr__)
class Terminal(Symbol):
__serialize_fields__ = 'name', 'filter_out'
is_term = True
def __init__(self, name, filter_out=False):
self.name = name
self.filter_out = filter_out
@property
def fullrepr(self):
return '%s(%r, %r)' % (type(self).__name__, self.name, self.filter_out)
class NonTerminal(Symbol):
__serialize_fields__ = 'name',
is_term = False
class RuleOptions(Serialize):
__serialize_fields__ = 'keep_all_tokens', 'expand1', 'priority', 'template_source', 'empty_indices'
def __init__(self, keep_all_tokens=False, expand1=False, priority=None, template_source=None, empty_indices=()):
self.keep_all_tokens = keep_all_tokens
self.expand1 = expand1
self.priority = priority
self.template_source = template_source
self.empty_indices = empty_indices
def __repr__(self):
return 'RuleOptions(%r, %r, %r, %r)' % (
self.keep_all_tokens,
self.expand1,
self.priority,
self.template_source
)
class Rule(Serialize):
    # A grammar rule: origin (a NonTerminal) expands into a sequence of Symbols, with ordering and options attached.
__slots__ = ('origin', 'expansion', 'alias', 'options', 'order', '_hash')
__serialize_fields__ = 'origin', 'expansion', 'order', 'alias', 'options'
__serialize_namespace__ = Terminal, NonTerminal, RuleOptions
def __init__(self, origin, expansion, order=0, alias=None, options=None):
self.origin = origin
self.expansion = expansion
self.alias = alias
self.order = order
self.options = options or RuleOptions()
self._hash = hash((self.origin, tuple(self.expansion)))
def _deserialize(self):
self._hash = hash((self.origin, tuple(self.expansion)))
def __str__(self):
return '<%s : %s>' % (self.origin.name, ' '.join(x.name for x in self.expansion))
def __repr__(self):
return 'Rule(%r, %r, %r, %r)' % (self.origin, self.expansion, self.alias, self.options)
def __hash__(self):
return self._hash
def __eq__(self, other):
if not isinstance(other, Rule):
return False
return self.origin == other.origin and self.expansion == other.expansion
from copy import copy
class Pattern(Serialize):
raw = None
type = None
def __init__(self, value, flags=(), raw=None):
self.value = value
self.flags = frozenset(flags)
self.raw = raw
def __repr__(self):
return repr(self.to_regexp())
##
def __hash__(self):
return hash((type(self), self.value, self.flags))
def __eq__(self, other):
return type(self) == type(other) and self.value == other.value and self.flags == other.flags
def to_regexp(self):
raise NotImplementedError()
def min_width(self):
raise NotImplementedError()
def max_width(self):
raise NotImplementedError()
if Py36:
##
def _get_flags(self, value):
for f in self.flags:
value = ('(?%s:%s)' % (f, value))
return value
else:
def _get_flags(self, value):
for f in self.flags:
value = ('(?%s)' % f) + value
return value
class PatternStr(Pattern):
__serialize_fields__ = 'value', 'flags'
type = "str"
def to_regexp(self):
return self._get_flags(re.escape(self.value))
@property
def min_width(self):
return len(self.value)
max_width = min_width
class PatternRE(Pattern):
__serialize_fields__ = 'value', 'flags', '_width'
type = "re"
def to_regexp(self):
return self._get_flags(self.value)
_width = None
def _get_width(self):
if self._width is None:
self._width = get_regexp_width(self.to_regexp())
return self._width
@property
def min_width(self):
return self._get_width()[0]
@property
def max_width(self):
return self._get_width()[1]
class TerminalDef(Serialize):
__serialize_fields__ = 'name', 'pattern', 'priority'
__serialize_namespace__ = PatternStr, PatternRE
def __init__(self, name, pattern, priority=1):
assert isinstance(pattern, Pattern), pattern
self.name = name
self.pattern = pattern
self.priority = priority
def __repr__(self):
return '%s(%r, %r)' % (type(self).__name__, self.name, self.pattern)
def user_repr(self):
if self.name.startswith('__'): ##
return self.pattern.raw or self.name
else:
return self.name
class Token(Str):
    # A str subclass that also records the terminal type and the position of the match in the source text.
__slots__ = ('type', 'pos_in_stream', 'value', 'line', 'column', 'end_line', 'end_column', 'end_pos')
def __new__(cls, type_, value, pos_in_stream=None, line=None, column=None, end_line=None, end_column=None, end_pos=None):
try:
self = super(Token, cls).__new__(cls, value)
except UnicodeDecodeError:
value = value.decode('latin1')
self = super(Token, cls).__new__(cls, value)
self.type = type_
self.pos_in_stream = pos_in_stream
self.value = value
self.line = line
self.column = column
self.end_line = end_line
self.end_column = end_column
self.end_pos = end_pos
return self
def update(self, type_=None, value=None):
return Token.new_borrow_pos(
type_ if type_ is not None else self.type,
value if value is not None else self.value,
self
)
@classmethod
def new_borrow_pos(cls, type_, value, borrow_t):
return cls(type_, value, borrow_t.pos_in_stream, borrow_t.line, borrow_t.column, borrow_t.end_line, borrow_t.end_column, borrow_t.end_pos)
def __reduce__(self):
return (self.__class__, (self.type, self.value, self.pos_in_stream, self.line, self.column))
def __repr__(self):
return 'Token(%r, %r)' % (self.type, self.value)
def __deepcopy__(self, memo):
return Token(self.type, self.value, self.pos_in_stream, self.line, self.column)
def __eq__(self, other):
if isinstance(other, Token) and self.type != other.type:
return False
return Str.__eq__(self, other)
__hash__ = Str.__hash__
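# A short illustrative sketch (not exercised by the generated parser): Token is a
# str subclass, so it can be used wherever a plain string can, while still
# carrying the terminal name and source position.
def _example_token_usage():
    tok = Token('NAME', 'foo', pos_in_stream=0, line=1, column=1)
    assert tok == 'foo' and tok.type == 'NAME'
    return tok.update(type_='IDENT')   # same text and position, new terminal name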
class LineCounter:
__slots__ = 'char_pos', 'line', 'column', 'line_start_pos', 'newline_char'
def __init__(self, newline_char):
self.newline_char = newline_char
self.char_pos = 0
self.line = 1
self.column = 1
self.line_start_pos = 0
def __eq__(self, other):
if not isinstance(other, LineCounter):
return NotImplemented
return self.char_pos == other.char_pos and self.newline_char == other.newline_char
def feed(self, token, test_newline=True):
        # Consume a token's text and advance char_pos/line/column accordingly.
if test_newline:
newlines = token.count(self.newline_char)
if newlines:
self.line += newlines
self.line_start_pos = self.char_pos + token.rindex(self.newline_char) + 1
self.char_pos += len(token)
self.column = self.char_pos - self.line_start_pos + 1
class UnlessCallback:
def __init__(self, mres):
self.mres = mres
def __call__(self, t):
for mre, type_from_index in self.mres:
m = mre.match(t.value)
if m:
t.type = type_from_index[m.lastindex]
break
return t
class CallChain:
def __init__(self, callback1, callback2, cond):
self.callback1 = callback1
self.callback2 = callback2
self.cond = cond
def __call__(self, t):
t2 = self.callback1(t)
return self.callback2(t) if self.cond(t2) else t2
def _create_unless(terminals, g_regex_flags, re_, use_bytes):
tokens_by_type = classify(terminals, lambda t: type(t.pattern))
assert len(tokens_by_type) <= 2, tokens_by_type.keys()
embedded_strs = set()
callback = {}
for retok in tokens_by_type.get(PatternRE, []):
unless = []
for strtok in tokens_by_type.get(PatternStr, []):
if strtok.priority > retok.priority:
continue
s = strtok.pattern.value
m = re_.match(retok.pattern.to_regexp(), s, g_regex_flags)
if m and m.group(0) == s:
unless.append(strtok)
if strtok.pattern.flags <= retok.pattern.flags:
embedded_strs.add(strtok)
if unless:
callback[retok.name] = UnlessCallback(build_mres(unless, g_regex_flags, re_, match_whole=True, use_bytes=use_bytes))
terminals = [t for t in terminals if t not in embedded_strs]
return terminals, callback
def _build_mres(terminals, max_size, g_regex_flags, match_whole, re_, use_bytes):
    # Python's re module limits how many named groups a single pattern may hold,
    # and signals the limit only via an AssertionError; when that happens we
    # retry recursively with smaller batches of terminals.
postfix = '$' if match_whole else ''
mres = []
while terminals:
pattern = u'|'.join(u'(?P<%s>%s)' % (t.name, t.pattern.to_regexp() + postfix) for t in terminals[:max_size])
if use_bytes:
pattern = pattern.encode('latin-1')
try:
mre = re_.compile(pattern, g_regex_flags)
except AssertionError: ##
return _build_mres(terminals, max_size//2, g_regex_flags, match_whole, re_, use_bytes)
mres.append((mre, {i: n for n, i in mre.groupindex.items()}))
terminals = terminals[max_size:]
return mres
def build_mres(terminals, g_regex_flags, re_, use_bytes, match_whole=False):
return _build_mres(terminals, len(terminals), g_regex_flags, match_whole, re_, use_bytes)
def _regexp_has_newline(r):
    # Heuristic: can this regexp possibly match a newline?
return '\n' in r or '\\n' in r or '\\s' in r or '[^' in r or ('(?s' in r and '.' in r)
class Lexer(object):
#--
lex = NotImplemented
def make_lexer_state(self, text):
line_ctr = LineCounter(b'\n' if isinstance(text, bytes) else '\n')
return LexerState(text, line_ctr)
class TraditionalLexer(Lexer):
def __init__(self, conf):
terminals = list(conf.terminals)
assert all(isinstance(t, TerminalDef) for t in terminals), terminals
self.re = conf.re_module
if not conf.skip_validation:
##
for t in terminals:
try:
self.re.compile(t.pattern.to_regexp(), conf.g_regex_flags)
except self.re.error:
raise LexError("Cannot compile token %s: %s" % (t.name, t.pattern))
if t.pattern.min_width == 0:
raise LexError("Lexer does not allow zero-width terminals. (%s: %s)" % (t.name, t.pattern))
if not (set(conf.ignore) <= {t.name for t in terminals}):
raise LexError("Ignore terminals are not defined: %s" % (set(conf.ignore) - {t.name for t in terminals}))
##
self.newline_types = frozenset(t.name for t in terminals if _regexp_has_newline(t.pattern.to_regexp()))
self.ignore_types = frozenset(conf.ignore)
terminals.sort(key=lambda x: (-x.priority, -x.pattern.max_width, -len(x.pattern.value), x.name))
self.terminals = terminals
self.user_callbacks = conf.callbacks
self.g_regex_flags = conf.g_regex_flags
self.use_bytes = conf.use_bytes
self.terminals_by_name = conf.terminals_by_name
self._mres = None
def _build(self):
terminals, self.callback = _create_unless(self.terminals, self.g_regex_flags, self.re, self.use_bytes)
assert all(self.callback.values())
for type_, f in self.user_callbacks.items():
if type_ in self.callback:
##
self.callback[type_] = CallChain(self.callback[type_], f, lambda t: t.type == type_)
else:
self.callback[type_] = f
self._mres = build_mres(terminals, self.g_regex_flags, self.re, self.use_bytes)
@property
def mres(self):
if self._mres is None:
self._build()
return self._mres
def match(self, text, pos):
for mre, type_from_index in self.mres:
m = mre.match(text, pos)
if m:
return m.group(0), type_from_index[m.lastindex]
def lex(self, state, parser_state):
with suppress(EOFError):
while True:
yield self.next_token(state, parser_state)
def next_token(self, lex_state, parser_state=None):
line_ctr = lex_state.line_ctr
while line_ctr.char_pos < len(lex_state.text):
res = self.match(lex_state.text, line_ctr.char_pos)
if not res:
allowed = {v for m, tfi in self.mres for v in tfi.values()} - self.ignore_types
if not allowed:
allowed = {"<END-OF-FILE>"}
raise UnexpectedCharacters(lex_state.text, line_ctr.char_pos, line_ctr.line, line_ctr.column,
allowed=allowed, token_history=lex_state.last_token and [lex_state.last_token],
state=parser_state, terminals_by_name=self.terminals_by_name)
value, type_ = res
if type_ not in self.ignore_types:
t = Token(type_, value, line_ctr.char_pos, line_ctr.line, line_ctr.column)
line_ctr.feed(value, type_ in self.newline_types)
t.end_line = line_ctr.line
t.end_column = line_ctr.column
t.end_pos = line_ctr.char_pos
if t.type in self.callback:
t = self.callback[t.type](t)
if not isinstance(t, Token):
raise LexError("Callbacks must return a token (returned %r)" % t)
lex_state.last_token = t
return t
else:
if type_ in self.callback:
t2 = Token(type_, value, line_ctr.char_pos, line_ctr.line, line_ctr.column)
self.callback[type_](t2)
line_ctr.feed(value, type_ in self.newline_types)
##
raise EOFError(self)
class LexerState(object):
__slots__ = 'text', 'line_ctr', 'last_token'
def __init__(self, text, line_ctr, last_token=None):
self.text = text
self.line_ctr = line_ctr
self.last_token = last_token
def __eq__(self, other):
if not isinstance(other, LexerState):
return NotImplemented
return self.text is other.text and self.line_ctr == other.line_ctr and self.last_token == other.last_token
def __copy__(self):
return type(self)(self.text, copy(self.line_ctr), self.last_token)
class ContextualLexer(Lexer):
def __init__(self, conf, states, always_accept=()):
terminals = list(conf.terminals)
terminals_by_name = conf.terminals_by_name
trad_conf = copy(conf)
trad_conf.terminals = terminals
lexer_by_tokens = {}
self.lexers = {}
for state, accepts in states.items():
key = frozenset(accepts)
try:
lexer = lexer_by_tokens[key]
except KeyError:
accepts = set(accepts) | set(conf.ignore) | set(always_accept)
lexer_conf = copy(trad_conf)
lexer_conf.terminals = [terminals_by_name[n] for n in accepts if n in terminals_by_name]
lexer = TraditionalLexer(lexer_conf)
lexer_by_tokens[key] = lexer
self.lexers[state] = lexer
assert trad_conf.terminals is terminals
self.root_lexer = TraditionalLexer(trad_conf)
def make_lexer_state(self, text):
return self.root_lexer.make_lexer_state(text)
def lex(self, lexer_state, parser_state):
try:
while True:
lexer = self.lexers[parser_state.position]
yield lexer.next_token(lexer_state, parser_state)
except EOFError:
pass
except UnexpectedCharacters as e:
            # In the contextual lexer, UnexpectedCharacters may only mean that the
            # terminal is not expected in the current parser state; retry with the
            # root lexer to report a more accurate error.
try:
last_token = lexer_state.last_token ##
token = self.root_lexer.next_token(lexer_state, parser_state)
raise UnexpectedToken(token, e.allowed, state=parser_state, token_history=[last_token], terminals_by_name=self.root_lexer.terminals_by_name)
except UnexpectedCharacters:
raise e ##
class LexerThread(object):
#--
def __init__(self, lexer, text):
self.lexer = lexer
self.state = lexer.make_lexer_state(text)
def lex(self, parser_state):
return self.lexer.lex(self.state, parser_state)
def __copy__(self):
copied = object.__new__(LexerThread)
copied.lexer = self.lexer
copied.state = copy(self.state)
return copied
class LexerConf(Serialize):
__serialize_fields__ = 'terminals', 'ignore', 'g_regex_flags', 'use_bytes', 'lexer_type'
__serialize_namespace__ = TerminalDef,
def __init__(self, terminals, re_module, ignore=(), postlex=None, callbacks=None, g_regex_flags=0, skip_validation=False, use_bytes=False):
self.terminals = terminals
self.terminals_by_name = {t.name: t for t in self.terminals}
assert len(self.terminals) == len(self.terminals_by_name)
self.ignore = ignore
self.postlex = postlex
self.callbacks = callbacks or {}
self.g_regex_flags = g_regex_flags
self.re_module = re_module
self.skip_validation = skip_validation
self.use_bytes = use_bytes
self.lexer_type = None
@property
def tokens(self):
warn("LexerConf.tokens is deprecated. Use LexerConf.terminals instead", DeprecationWarning)
return self.terminals
def _deserialize(self):
self.terminals_by_name = {t.name: t for t in self.terminals}
class ParserConf(Serialize):
__serialize_fields__ = 'rules', 'start', 'parser_type'
def __init__(self, rules, callbacks, start):
assert isinstance(start, list)
self.rules = rules
self.callbacks = callbacks
self.start = start
self.parser_type = None
from functools import partial, wraps
from itertools import repeat, product
class ExpandSingleChild:
def __init__(self, node_builder):
self.node_builder = node_builder
def __call__(self, children):
if len(children) == 1:
return children[0]
else:
return self.node_builder(children)
class PropagatePositions:
def __init__(self, node_builder):
self.node_builder = node_builder
def __call__(self, children):
res = self.node_builder(children)
##
if isinstance(res, Tree):
res_meta = res.meta
for c in children:
if isinstance(c, Tree):
child_meta = c.meta
if not child_meta.empty:
res_meta.line = child_meta.line
res_meta.column = child_meta.column
res_meta.start_pos = child_meta.start_pos
res_meta.empty = False
break
elif isinstance(c, Token):
res_meta.line = c.line
res_meta.column = c.column
res_meta.start_pos = c.pos_in_stream
res_meta.empty = False
break
for c in reversed(children):
if isinstance(c, Tree):
child_meta = c.meta
if not child_meta.empty:
res_meta.end_line = child_meta.end_line
res_meta.end_column = child_meta.end_column
res_meta.end_pos = child_meta.end_pos
res_meta.empty = False
break
elif isinstance(c, Token):
res_meta.end_line = c.end_line
res_meta.end_column = c.end_column
res_meta.end_pos = c.end_pos
res_meta.empty = False
break
return res
class ChildFilter:
def __init__(self, to_include, append_none, node_builder):
self.node_builder = node_builder
self.to_include = to_include
self.append_none = append_none
def __call__(self, children):
filtered = []
for i, to_expand, add_none in self.to_include:
if add_none:
filtered += [None] * add_none
if to_expand:
filtered += children[i].children
else:
filtered.append(children[i])
if self.append_none:
filtered += [None] * self.append_none
return self.node_builder(filtered)
class ChildFilterLALR(ChildFilter):
#--
def __call__(self, children):
filtered = []
for i, to_expand, add_none in self.to_include:
if add_none:
filtered += [None] * add_none
if to_expand:
if filtered:
filtered += children[i].children
else: ##
filtered = children[i].children
else:
filtered.append(children[i])
if self.append_none:
filtered += [None] * self.append_none
return self.node_builder(filtered)
class ChildFilterLALR_NoPlaceholders(ChildFilter):
#--
def __init__(self, to_include, node_builder):
self.node_builder = node_builder
self.to_include = to_include
def __call__(self, children):
filtered = []
for i, to_expand in self.to_include:
if to_expand:
if filtered:
filtered += children[i].children
else: ##
filtered = children[i].children
else:
filtered.append(children[i])
return self.node_builder(filtered)
def _should_expand(sym):
return not sym.is_term and sym.name.startswith('_')
def maybe_create_child_filter(expansion, keep_all_tokens, ambiguous, _empty_indices):
##
if _empty_indices:
assert _empty_indices.count(False) == len(expansion)
s = ''.join(str(int(b)) for b in _empty_indices)
empty_indices = [len(ones) for ones in s.split('0')]
assert len(empty_indices) == len(expansion)+1, (empty_indices, len(expansion))
else:
empty_indices = [0] * (len(expansion)+1)
to_include = []
nones_to_add = 0
for i, sym in enumerate(expansion):
nones_to_add += empty_indices[i]
if keep_all_tokens or not (sym.is_term and sym.filter_out):
to_include.append((i, _should_expand(sym), nones_to_add))
nones_to_add = 0
nones_to_add += empty_indices[len(expansion)]
if _empty_indices or len(to_include) < len(expansion) or any(to_expand for i, to_expand,_ in to_include):
if _empty_indices or ambiguous:
return partial(ChildFilter if ambiguous else ChildFilterLALR, to_include, nones_to_add)
else:
##
return partial(ChildFilterLALR_NoPlaceholders, [(i, x) for i,x,_ in to_include])
class AmbiguousExpander:
#--
def __init__(self, to_expand, tree_class, node_builder):
self.node_builder = node_builder
self.tree_class = tree_class
self.to_expand = to_expand
def __call__(self, children):
def _is_ambig_tree(t):
return hasattr(t, 'data') and t.data == '_ambig'
##
##
##
##
ambiguous = []
for i, child in enumerate(children):
if _is_ambig_tree(child):
if i in self.to_expand:
ambiguous.append(i)
to_expand = [j for j, grandchild in enumerate(child.children) if _is_ambig_tree(grandchild)]
child.expand_kids_by_index(*to_expand)
if not ambiguous:
return self.node_builder(children)
expand = [iter(child.children) if i in ambiguous else repeat(child) for i, child in enumerate(children)]
return self.tree_class('_ambig', [self.node_builder(list(f[0])) for f in product(zip(*expand))])
def maybe_create_ambiguous_expander(tree_class, expansion, keep_all_tokens):
to_expand = [i for i, sym in enumerate(expansion)
if keep_all_tokens or ((not (sym.is_term and sym.filter_out)) and _should_expand(sym))]
if to_expand:
return partial(AmbiguousExpander, to_expand, tree_class)
class AmbiguousIntermediateExpander:
#--
def __init__(self, tree_class, node_builder):
self.node_builder = node_builder
self.tree_class = tree_class
def __call__(self, children):
def _is_iambig_tree(child):
return hasattr(child, 'data') and child.data == '_iambig'
def _collapse_iambig(children):
#--
##
##
if children and _is_iambig_tree(children[0]):
iambig_node = children[0]
result = []
for grandchild in iambig_node.children:
collapsed = _collapse_iambig(grandchild.children)
if collapsed:
for child in collapsed:
child.children += children[1:]
result += collapsed
else:
new_tree = self.tree_class('_inter', grandchild.children + children[1:])
result.append(new_tree)
return result
collapsed = _collapse_iambig(children)
if collapsed:
processed_nodes = [self.node_builder(c.children) for c in collapsed]
return self.tree_class('_ambig', processed_nodes)
return self.node_builder(children)
def ptb_inline_args(func):
@wraps(func)
def f(children):
return func(*children)
return f
def inplace_transformer(func):
@wraps(func)
def f(children):
##
tree = Tree(func.__name__, children)
return func(tree)
return f
def apply_visit_wrapper(func, name, wrapper):
if wrapper is _vargs_meta or wrapper is _vargs_meta_inline:
raise NotImplementedError("Meta args not supported for internal transformer")
@wraps(func)
def f(children):
return wrapper(func, name, children, None)
return f
class ParseTreeBuilder:
def __init__(self, rules, tree_class, propagate_positions=False, ambiguous=False, maybe_placeholders=False):
self.tree_class = tree_class
self.propagate_positions = propagate_positions
self.ambiguous = ambiguous
self.maybe_placeholders = maybe_placeholders
self.rule_builders = list(self._init_builders(rules))
def _init_builders(self, rules):
for rule in rules:
options = rule.options
keep_all_tokens = options.keep_all_tokens
expand_single_child = options.expand1
wrapper_chain = list(filter(None, [
(expand_single_child and not rule.alias) and ExpandSingleChild,
maybe_create_child_filter(rule.expansion, keep_all_tokens, self.ambiguous, options.empty_indices if self.maybe_placeholders else None),
self.propagate_positions and PropagatePositions,
self.ambiguous and maybe_create_ambiguous_expander(self.tree_class, rule.expansion, keep_all_tokens),
self.ambiguous and partial(AmbiguousIntermediateExpander, self.tree_class)
]))
yield rule, wrapper_chain
def create_callback(self, transformer=None):
callbacks = {}
for rule, wrapper_chain in self.rule_builders:
user_callback_name = rule.alias or rule.options.template_source or rule.origin.name
try:
f = getattr(transformer, user_callback_name)
##
wrapper = getattr(f, 'visit_wrapper', None)
if wrapper is not None:
f = apply_visit_wrapper(f, user_callback_name, wrapper)
else:
if isinstance(transformer, InlineTransformer):
f = ptb_inline_args(f)
elif isinstance(transformer, Transformer_InPlace):
f = inplace_transformer(f)
except AttributeError:
f = partial(self.tree_class, user_callback_name)
for w in wrapper_chain:
f = w(f)
if rule in callbacks:
raise GrammarError("Rule '%s' already exists" % (rule,))
callbacks[rule] = f
return callbacks
class LALR_Parser(Serialize):
def __init__(self, parser_conf, debug=False):
analysis = LALR_Analyzer(parser_conf, debug=debug)
analysis.compute_lalr()
callbacks = parser_conf.callbacks
self._parse_table = analysis.parse_table
self.parser_conf = parser_conf
self.parser = _Parser(analysis.parse_table, callbacks, debug)
@classmethod
def deserialize(cls, data, memo, callbacks, debug=False):
inst = cls.__new__(cls)
inst._parse_table = IntParseTable.deserialize(data, memo)
inst.parser = _Parser(inst._parse_table, callbacks, debug)
return inst
def serialize(self, memo):
return self._parse_table.serialize(memo)
def parse_interactive(self, lexer, start):
return self.parser.parse(lexer, start, start_interactive=True)
def parse(self, lexer, start, on_error=None):
try:
return self.parser.parse(lexer, start)
except UnexpectedInput as e:
if on_error is None:
raise
while True:
if isinstance(e, UnexpectedCharacters):
s = e.interactive_parser.lexer_state.state
p = s.line_ctr.char_pos
if not on_error(e):
raise e
if isinstance(e, UnexpectedCharacters):
##
if p == s.line_ctr.char_pos:
s.line_ctr.feed(s.text[p:p+1])
try:
return e.interactive_parser.resume_parse()
except UnexpectedToken as e2:
if (isinstance(e, UnexpectedToken)
and e.token.type == e2.token.type == '$END'
and e.interactive_parser == e2.interactive_parser):
##
raise e2
e = e2
except UnexpectedCharacters as e2:
e = e2
class ParseConf(object):
__slots__ = 'parse_table', 'callbacks', 'start', 'start_state', 'end_state', 'states'
def __init__(self, parse_table, callbacks, start):
self.parse_table = parse_table
self.start_state = self.parse_table.start_states[start]
self.end_state = self.parse_table.end_states[start]
self.states = self.parse_table.states
self.callbacks = callbacks
self.start = start
class ParserState(object):
__slots__ = 'parse_conf', 'lexer', 'state_stack', 'value_stack'
def __init__(self, parse_conf, lexer, state_stack=None, value_stack=None):
self.parse_conf = parse_conf
self.lexer = lexer
self.state_stack = state_stack or [self.parse_conf.start_state]
self.value_stack = value_stack or []
@property
def position(self):
return self.state_stack[-1]
##
def __eq__(self, other):
if not isinstance(other, ParserState):
return NotImplemented
return len(self.state_stack) == len(other.state_stack) and self.position == other.position
def __copy__(self):
return type(self)(
self.parse_conf,
self.lexer, ##
copy(self.state_stack),
deepcopy(self.value_stack),
)
def copy(self):
return copy(self)
def feed_token(self, token, is_end=False):
state_stack = self.state_stack
value_stack = self.value_stack
states = self.parse_conf.states
end_state = self.parse_conf.end_state
callbacks = self.parse_conf.callbacks
while True:
state = state_stack[-1]
try:
action, arg = states[state][token.type]
except KeyError:
expected = {s for s in states[state].keys() if s.isupper()}
raise UnexpectedToken(token, expected, state=self, interactive_parser=None)
assert arg != end_state
if action is Shift:
                # shift: push the new state and the (possibly transformed) token
assert not is_end
state_stack.append(arg)
value_stack.append(token if token.type not in callbacks else callbacks[token.type](token))
return
else:
                # reduce: pop the rule's right-hand side and push the callback result
rule = arg
size = len(rule.expansion)
if size:
s = value_stack[-size:]
del state_stack[-size:]
del value_stack[-size:]
else:
s = []
value = callbacks[rule](s)
_action, new_state = states[state_stack[-1]][rule.origin.name]
assert _action is Shift
state_stack.append(new_state)
value_stack.append(value)
if is_end and state_stack[-1] == end_state:
return value_stack[-1]
class _Parser(object):
def __init__(self, parse_table, callbacks, debug=False):
self.parse_table = parse_table
self.callbacks = callbacks
self.debug = debug
def parse(self, lexer, start, value_stack=None, state_stack=None, start_interactive=False):
parse_conf = ParseConf(self.parse_table, self.callbacks, start)
parser_state = ParserState(parse_conf, lexer, state_stack, value_stack)
if start_interactive:
return InteractiveParser(self, parser_state, parser_state.lexer)
return self.parse_from_state(parser_state)
def parse_from_state(self, state):
        # main LALR loop: feed tokens until the end marker is accepted
try:
token = None
for token in state.lexer.lex(state):
state.feed_token(token)
token = Token.new_borrow_pos('$END', '', token) if token else Token('$END', '', 0, 1, 1)
return state.feed_token(token, True)
except UnexpectedInput as e:
try:
e.interactive_parser = InteractiveParser(self, state, state.lexer)
except NameError:
pass
raise e
except Exception as e:
if self.debug:
print("")
print("STATE STACK DUMP")
print("----------------")
for i, s in enumerate(state.state_stack):
print('%d)' % i , s)
print("")
raise
class Action:
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
def __repr__(self):
return str(self)
Shift = Action('Shift')
Reduce = Action('Reduce')
class ParseTable:
def __init__(self, states, start_states, end_states):
self.states = states
self.start_states = start_states
self.end_states = end_states
def serialize(self, memo):
tokens = Enumerator()
rules = Enumerator()
states = {
state: {tokens.get(token): ((1, arg.serialize(memo)) if action is Reduce else (0, arg))
for token, (action, arg) in actions.items()}
for state, actions in self.states.items()
}
return {
'tokens': tokens.reversed(),
'states': states,
'start_states': self.start_states,
'end_states': self.end_states,
}
@classmethod
def deserialize(cls, data, memo):
tokens = data['tokens']
states = {
state: {tokens[token]: ((Reduce, Rule.deserialize(arg, memo)) if action==1 else (Shift, arg))
for token, (action, arg) in actions.items()}
for state, actions in data['states'].items()
}
return cls(states, data['start_states'], data['end_states'])
class IntParseTable(ParseTable):
@classmethod
def from_ParseTable(cls, parse_table):
enum = list(parse_table.states)
state_to_idx = {s:i for i,s in enumerate(enum)}
int_states = {}
for s, la in parse_table.states.items():
la = {k:(v[0], state_to_idx[v[1]]) if v[0] is Shift else v
for k,v in la.items()}
int_states[ state_to_idx[s] ] = la
start_states = {start:state_to_idx[s] for start, s in parse_table.start_states.items()}
end_states = {start:state_to_idx[s] for start, s in parse_table.end_states.items()}
return cls(int_states, start_states, end_states)
def _wrap_lexer(lexer_class):
future_interface = getattr(lexer_class, '__future_interface__', False)
if future_interface:
return lexer_class
else:
class CustomLexerWrapper(Lexer):
def __init__(self, lexer_conf):
self.lexer = lexer_class(lexer_conf)
def lex(self, lexer_state, parser_state):
return self.lexer.lex(lexer_state.text)
return CustomLexerWrapper
class MakeParsingFrontend:
def __init__(self, parser_type, lexer_type):
self.parser_type = parser_type
self.lexer_type = lexer_type
def __call__(self, lexer_conf, parser_conf, options):
assert isinstance(lexer_conf, LexerConf)
assert isinstance(parser_conf, ParserConf)
parser_conf.parser_type = self.parser_type
lexer_conf.lexer_type = self.lexer_type
return ParsingFrontend(lexer_conf, parser_conf, options)
@classmethod
def deserialize(cls, data, memo, lexer_conf, callbacks, options):
parser_conf = ParserConf.deserialize(data['parser_conf'], memo)
parser = LALR_Parser.deserialize(data['parser'], memo, callbacks, options.debug)
parser_conf.callbacks = callbacks
return ParsingFrontend(lexer_conf, parser_conf, options, parser=parser)
class ParsingFrontend(Serialize):
__serialize_fields__ = 'lexer_conf', 'parser_conf', 'parser', 'options'
def __init__(self, lexer_conf, parser_conf, options, parser=None):
self.parser_conf = parser_conf
self.lexer_conf = lexer_conf
self.options = options
##
if parser: ##
self.parser = parser
else:
create_parser = {
'lalr': create_lalr_parser,
'earley': create_earley_parser,
'cyk': CYK_FrontEnd,
}[parser_conf.parser_type]
self.parser = create_parser(lexer_conf, parser_conf, options)
##
lexer_type = lexer_conf.lexer_type
self.skip_lexer = False
if lexer_type in ('dynamic', 'dynamic_complete'):
assert lexer_conf.postlex is None
self.skip_lexer = True
return
try:
create_lexer = {
'standard': create_traditional_lexer,
'contextual': create_contextual_lexer,
}[lexer_type]
except KeyError:
assert issubclass(lexer_type, Lexer), lexer_type
self.lexer = _wrap_lexer(lexer_type)(lexer_conf)
else:
self.lexer = create_lexer(lexer_conf, self.parser, lexer_conf.postlex)
if lexer_conf.postlex:
self.lexer = PostLexConnector(self.lexer, lexer_conf.postlex)
def _verify_start(self, start=None):
if start is None:
start = self.parser_conf.start
if len(start) > 1:
raise ConfigurationError("Lark initialized with more than 1 possible start rule. Must specify which start rule to parse", start)
start ,= start
elif start not in self.parser_conf.start:
raise ConfigurationError("Unknown start rule %s. Must be one of %r" % (start, self.parser_conf.start))
return start
def parse(self, text, start=None, on_error=None):
start = self._verify_start(start)
stream = text if self.skip_lexer else LexerThread(self.lexer, text)
kw = {} if on_error is None else {'on_error': on_error}
return self.parser.parse(stream, start, **kw)
def parse_interactive(self, text=None, start=None):
start = self._verify_start(start)
if self.parser_conf.parser_type != 'lalr':
raise ConfigurationError("parse_interactive() currently only works with parser='lalr' ")
stream = text if self.skip_lexer else LexerThread(self.lexer, text)
return self.parser.parse_interactive(stream, start)
def get_frontend(parser, lexer):
assert_config(parser, ('lalr', 'earley', 'cyk'))
if not isinstance(lexer, type): ##
expected = {
'lalr': ('standard', 'contextual'),
'earley': ('standard', 'dynamic', 'dynamic_complete'),
'cyk': ('standard', ),
}[parser]
assert_config(lexer, expected, 'Parser %r does not support lexer %%r, expected one of %%s' % parser)
return MakeParsingFrontend(parser, lexer)
def _get_lexer_callbacks(transformer, terminals):
result = {}
for terminal in terminals:
callback = getattr(transformer, terminal.name, None)
if callback is not None:
result[terminal.name] = callback
return result
class PostLexConnector:
def __init__(self, lexer, postlexer):
self.lexer = lexer
self.postlexer = postlexer
def make_lexer_state(self, text):
return self.lexer.make_lexer_state(text)
def lex(self, lexer_state, parser_state):
i = self.lexer.lex(lexer_state, parser_state)
return self.postlexer.process(i)
def create_traditional_lexer(lexer_conf, parser, postlex):
return TraditionalLexer(lexer_conf)
def create_contextual_lexer(lexer_conf, parser, postlex):
states = {idx:list(t.keys()) for idx, t in parser._parse_table.states.items()}
always_accept = postlex.always_accept if postlex else ()
return ContextualLexer(lexer_conf, states, always_accept=always_accept)
def create_lalr_parser(lexer_conf, parser_conf, options=None):
debug = options.debug if options else False
return LALR_Parser(parser_conf, debug=debug)
create_earley_parser = NotImplemented
CYK_FrontEnd = NotImplemented
class LarkOptions(Serialize):
    # Holds every Lark configuration option; see OPTIONS_DOC below for their meaning.
OPTIONS_DOC = """
**=== General Options ===**
start
The start symbol. Either a string, or a list of strings for multiple possible starts (Default: "start")
debug
Display debug information and extra warnings. Use only when debugging (default: False)
When used with Earley, it generates a forest graph as "sppf.png", if 'dot' is installed.
transformer
Applies the transformer to every parse tree (equivalent to applying it after the parse, but faster)
propagate_positions
Propagates (line, column, end_line, end_column) attributes into all tree branches.
maybe_placeholders
When True, the ``[]`` operator returns ``None`` when not matched.
When ``False``, ``[]`` behaves like the ``?`` operator, and returns no value at all.
(default= ``False``. Recommended to set to ``True``)
cache
Cache the results of the Lark grammar analysis, for x2 to x3 faster loading. LALR only for now.
- When ``False``, does nothing (default)
- When ``True``, caches to a temporary file in the local directory
- When given a string, caches to the path pointed by the string
regex
When True, uses the ``regex`` module instead of the stdlib ``re``.
g_regex_flags
Flags that are applied to all terminals (both regex and strings)
keep_all_tokens
Prevent the tree builder from automagically removing "punctuation" tokens (default: False)
tree_class
Lark will produce trees comprised of instances of this class instead of the default ``lark.Tree``.
**=== Algorithm Options ===**
parser
Decides which parser engine to use. Accepts "earley" or "lalr". (Default: "earley").
(there is also a "cyk" option for legacy)
lexer
Decides whether or not to use a lexer stage
- "auto" (default): Choose for me based on the parser
- "standard": Use a standard lexer
- "contextual": Stronger lexer (only works with parser="lalr")
- "dynamic": Flexible and powerful (only with parser="earley")
- "dynamic_complete": Same as dynamic, but tries *every* variation of tokenizing possible.
ambiguity
Decides how to handle ambiguity in the parse. Only relevant if parser="earley"
- "resolve": The parser will automatically choose the simplest derivation
(it chooses consistently: greedy for tokens, non-greedy for rules)
- "explicit": The parser will return all derivations wrapped in "_ambig" tree nodes (i.e. a forest).
- "forest": The parser will return the root of the shared packed parse forest.
**=== Misc. / Domain Specific Options ===**
postlex
Lexer post-processing (Default: None) Only works with the standard and contextual lexers.
priority
How priorities should be evaluated - auto, none, normal, invert (Default: auto)
lexer_callbacks
Dictionary of callbacks for the lexer. May alter tokens during lexing. Use with caution.
use_bytes
Accept an input of type ``bytes`` instead of ``str`` (Python 3 only).
edit_terminals
A callback for editing the terminals before parse.
import_paths
A List of either paths or loader functions to specify from where grammars are imported
source_path
Override the source of from where the grammar was loaded. Useful for relative imports and unconventional grammar loading
**=== End Options ===**
"""
if __doc__:
__doc__ += OPTIONS_DOC
    # NOTE: adding a new option means updating several places: the _defaults
    # mapping below, the OPTIONS_DOC text above, and, if the option matters for
    # serialized parsers, the _LOAD_ALLOWED_OPTIONS set further down.
_defaults = {
'debug': False,
'keep_all_tokens': False,
'tree_class': None,
'cache': False,
'postlex': None,
'parser': 'earley',
'lexer': 'auto',
'transformer': None,
'start': 'start',
'priority': 'auto',
'ambiguity': 'auto',
'regex': False,
'propagate_positions': False,
'lexer_callbacks': {},
'maybe_placeholders': False,
'edit_terminals': None,
'g_regex_flags': 0,
'use_bytes': False,
'import_paths': [],
'source_path': None,
}
def __init__(self, options_dict):
o = dict(options_dict)
options = {}
for name, default in self._defaults.items():
if name in o:
value = o.pop(name)
if isinstance(default, bool) and name not in ('cache', 'use_bytes'):
value = bool(value)
else:
value = default
options[name] = value
if isinstance(options['start'], STRING_TYPE):
options['start'] = [options['start']]
self.__dict__['options'] = options
assert_config(self.parser, ('earley', 'lalr', 'cyk', None))
if self.parser == 'earley' and self.transformer:
raise ConfigurationError('Cannot specify an embedded transformer when using the Earley algorithm.'
'Please use your transformer on the resulting parse tree, or use a different algorithm (i.e. LALR)')
if o:
raise ConfigurationError("Unknown options: %s" % o.keys())
def __getattr__(self, name):
try:
return self.__dict__['options'][name]
except KeyError as e:
raise AttributeError(e)
def __setattr__(self, name, value):
assert_config(name, self.options.keys(), "%r isn't a valid option. Expected one of: %s")
self.options[name] = value
def serialize(self, memo):
return self.options
@classmethod
def deserialize(cls, data, memo):
return cls(data)
# Options that may still be overridden when a parser is loaded from cache or from
# a serialized (standalone) form; every other option is fixed at generation time.
_LOAD_ALLOWED_OPTIONS = {'postlex', 'transformer', 'lexer_callbacks', 'use_bytes', 'debug', 'g_regex_flags', 'regex', 'propagate_positions', 'tree_class'}
_VALID_PRIORITY_OPTIONS = ('auto', 'normal', 'invert', None)
_VALID_AMBIGUITY_OPTIONS = ('auto', 'resolve', 'explicit', 'forest')
class PostLex(ABC):
@abstractmethod
def process(self, stream):
return stream
always_accept = ()
class Lark(Serialize):
    # Main interface: compiles the grammar and wires up the lexer and parser according to the options.
def __init__(self, grammar, **options):
self.options = LarkOptions(options)
##
use_regex = self.options.regex
if use_regex:
if regex:
re_module = regex
else:
raise ImportError('`regex` module must be installed if calling `Lark(regex=True)`.')
else:
re_module = re
##
if self.options.source_path is None:
try:
self.source_path = grammar.name
except AttributeError:
self.source_path = '<string>'
else:
self.source_path = self.options.source_path
##
try:
read = grammar.read
except AttributeError:
pass
else:
grammar = read()
cache_fn = None
cache_md5 = None
if isinstance(grammar, STRING_TYPE):
self.source_grammar = grammar
if self.options.use_bytes:
if not isascii(grammar):
raise ConfigurationError("Grammar must be ascii only, when use_bytes=True")
if sys.version_info[0] == 2 and self.options.use_bytes != 'force':
raise ConfigurationError("`use_bytes=True` may have issues on python2."
"Use `use_bytes='force'` to use it at your own risk.")
if self.options.cache:
if self.options.parser != 'lalr':
raise ConfigurationError("cache only works with parser='lalr' for now")
unhashable = ('transformer', 'postlex', 'lexer_callbacks', 'edit_terminals')
options_str = ''.join(k+str(v) for k, v in options.items() if k not in unhashable)
from . import __version__
s = grammar + options_str + __version__ + str(sys.version_info[:2])
cache_md5 = hashlib.md5(s.encode('utf8')).hexdigest()
if isinstance(self.options.cache, STRING_TYPE):
cache_fn = self.options.cache
else:
if self.options.cache is not True:
raise ConfigurationError("cache argument must be bool or str")
##
cache_fn = tempfile.gettempdir() + '/.lark_cache_%s_%s_%s.tmp' % ((cache_md5,) + sys.version_info[:2])
if FS.exists(cache_fn):
logger.debug('Loading grammar from cache: %s', cache_fn)
##
for name in (set(options) - _LOAD_ALLOWED_OPTIONS):
del options[name]
with FS.open(cache_fn, 'rb') as f:
old_options = self.options
try:
file_md5 = f.readline().rstrip(b'\n')
cached_used_files = pickle.load(f)
if file_md5 == cache_md5.encode('utf8') and verify_used_files(cached_used_files):
cached_parser_data = pickle.load(f)
self._load(cached_parser_data, **options)
return
except Exception: ##
logger.exception("Failed to load Lark from cache: %r. We will try to carry on." % cache_fn)
##
##
self.options = old_options
##
self.grammar, used_files = load_grammar(grammar, self.source_path, self.options.import_paths, self.options.keep_all_tokens)
else:
assert isinstance(grammar, Grammar)
self.grammar = grammar
if self.options.lexer == 'auto':
if self.options.parser == 'lalr':
self.options.lexer = 'contextual'
elif self.options.parser == 'earley':
if self.options.postlex is not None:
logger.info("postlex can't be used with the dynamic lexer, so we use standard instead. "
"Consider using lalr with contextual instead of earley")
self.options.lexer = 'standard'
else:
self.options.lexer = 'dynamic'
elif self.options.parser == 'cyk':
self.options.lexer = 'standard'
else:
assert False, self.options.parser
lexer = self.options.lexer
if isinstance(lexer, type):
assert issubclass(lexer, Lexer) ##
else:
assert_config(lexer, ('standard', 'contextual', 'dynamic', 'dynamic_complete'))
if self.options.postlex is not None and 'dynamic' in lexer:
raise ConfigurationError("Can't use postlex with a dynamic lexer. Use standard or contextual instead")
if self.options.ambiguity == 'auto':
if self.options.parser == 'earley':
self.options.ambiguity = 'resolve'
else:
assert_config(self.options.parser, ('earley', 'cyk'), "%r doesn't support disambiguation. Use one of these parsers instead: %s")
if self.options.priority == 'auto':
self.options.priority = 'normal'
if self.options.priority not in _VALID_PRIORITY_OPTIONS:
raise ConfigurationError("invalid priority option: %r. Must be one of %r" % (self.options.priority, _VALID_PRIORITY_OPTIONS))
assert self.options.ambiguity not in ('resolve__antiscore_sum', ), 'resolve__antiscore_sum has been replaced with the option priority="invert"'
if self.options.ambiguity not in _VALID_AMBIGUITY_OPTIONS:
raise ConfigurationError("invalid ambiguity option: %r. Must be one of %r" % (self.options.ambiguity, _VALID_AMBIGUITY_OPTIONS))
if self.options.postlex is not None:
terminals_to_keep = set(self.options.postlex.always_accept)
else:
terminals_to_keep = set()
        # compile the grammar into terminal definitions and rules
self.terminals, self.rules, self.ignore_tokens = self.grammar.compile(self.options.start, terminals_to_keep)
if self.options.edit_terminals:
for t in self.terminals:
self.options.edit_terminals(t)
self._terminals_dict = {t.name: t for t in self.terminals}
##
##
if self.options.priority == 'invert':
for rule in self.rules:
if rule.options.priority is not None:
rule.options.priority = -rule.options.priority
##
##
##
elif self.options.priority is None:
for rule in self.rules:
if rule.options.priority is not None:
rule.options.priority = None
##
self.lexer_conf = LexerConf(
self.terminals, re_module, self.ignore_tokens, self.options.postlex,
self.options.lexer_callbacks, self.options.g_regex_flags, use_bytes=self.options.use_bytes
)
if self.options.parser:
self.parser = self._build_parser()
elif lexer:
self.lexer = self._build_lexer()
if cache_fn:
logger.debug('Saving grammar to cache: %s', cache_fn)
with FS.open(cache_fn, 'wb') as f:
f.write(cache_md5.encode('utf8') + b'\n')
pickle.dump(used_files, f)
self.save(f)
if __doc__:
__doc__ += "\n\n" + LarkOptions.OPTIONS_DOC
__serialize_fields__ = 'parser', 'rules', 'options'
def _build_lexer(self, dont_ignore=False):
lexer_conf = self.lexer_conf
if dont_ignore:
from copy import copy
lexer_conf = copy(lexer_conf)
lexer_conf.ignore = ()
return TraditionalLexer(lexer_conf)
def _prepare_callbacks(self):
self._callbacks = {}
        # tree-building callbacks are only needed when a tree is actually built (not for ambiguity='forest')
if self.options.ambiguity != 'forest':
self._parse_tree_builder = ParseTreeBuilder(
self.rules,
self.options.tree_class or Tree,
self.options.propagate_positions,
self.options.parser != 'lalr' and self.options.ambiguity == 'explicit',
self.options.maybe_placeholders
)
self._callbacks = self._parse_tree_builder.create_callback(self.options.transformer)
self._callbacks.update(_get_lexer_callbacks(self.options.transformer, self.terminals))
def _build_parser(self):
self._prepare_callbacks()
parser_class = get_frontend(self.options.parser, self.options.lexer)
parser_conf = ParserConf(self.rules, self._callbacks, self.options.start)
return parser_class(self.lexer_conf, parser_conf, options=self.options)
def save(self, f):
#--
data, m = self.memo_serialize([TerminalDef, Rule])
pickle.dump({'data': data, 'memo': m}, f, protocol=pickle.HIGHEST_PROTOCOL)
@classmethod
def load(cls, f):
#--
inst = cls.__new__(cls)
return inst._load(f)
def _deserialize_lexer_conf(self, data, memo, options):
lexer_conf = LexerConf.deserialize(data['lexer_conf'], memo)
lexer_conf.callbacks = options.lexer_callbacks or {}
lexer_conf.re_module = regex if options.regex else re
lexer_conf.use_bytes = options.use_bytes
lexer_conf.g_regex_flags = options.g_regex_flags
lexer_conf.skip_validation = True
lexer_conf.postlex = options.postlex
return lexer_conf
def _load(self, f, **kwargs):
if isinstance(f, dict):
d = f
else:
d = pickle.load(f)
memo = d['memo']
data = d['data']
assert memo
memo = SerializeMemoizer.deserialize(memo, {'Rule': Rule, 'TerminalDef': TerminalDef}, {})
options = dict(data['options'])
if (set(kwargs) - _LOAD_ALLOWED_OPTIONS) & set(LarkOptions._defaults):
raise ConfigurationError("Some options are not allowed when loading a Parser: {}"
.format(set(kwargs) - _LOAD_ALLOWED_OPTIONS))
options.update(kwargs)
self.options = LarkOptions.deserialize(options, memo)
self.rules = [Rule.deserialize(r, memo) for r in data['rules']]
self.source_path = '<deserialized>'
parser_class = get_frontend(self.options.parser, self.options.lexer)
self.lexer_conf = self._deserialize_lexer_conf(data['parser'], memo, self.options)
self.terminals = self.lexer_conf.terminals
self._prepare_callbacks()
self._terminals_dict = {t.name: t for t in self.terminals}
self.parser = parser_class.deserialize(
data['parser'],
memo,
self.lexer_conf,
self._callbacks,
self.options, ##
)
return self
@classmethod
def _load_from_dict(cls, data, memo, **kwargs):
inst = cls.__new__(cls)
return inst._load({'data': data, 'memo': memo}, **kwargs)
@classmethod
def open(cls, grammar_filename, rel_to=None, **options):
#--
if rel_to:
basepath = os.path.dirname(rel_to)
grammar_filename = os.path.join(basepath, grammar_filename)
with open(grammar_filename, encoding='utf8') as f:
return cls(f, **options)
@classmethod
def open_from_package(cls, package, grammar_path, search_paths=("",), **options):
#--
package = FromPackageLoader(package, search_paths)
full_path, text = package(None, grammar_path)
options.setdefault('source_path', full_path)
options.setdefault('import_paths', [])
options['import_paths'].append(package)
return cls(text, **options)
def __repr__(self):
return 'Lark(open(%r), parser=%r, lexer=%r, ...)' % (self.source_path, self.options.parser, self.options.lexer)
def lex(self, text, dont_ignore=False):
#--
if not hasattr(self, 'lexer') or dont_ignore:
lexer = self._build_lexer(dont_ignore)
else:
lexer = self.lexer
lexer_thread = LexerThread(lexer, text)
stream = lexer_thread.lex(None)
if self.options.postlex:
return self.options.postlex.process(stream)
return stream
def get_terminal(self, name):
#--
return self._terminals_dict[name]
def parse_interactive(self, text=None, start=None):
return self.parser.parse_interactive(text, start=start)
def parse(self, text, start=None, on_error=None):
#--
return self.parser.parse(text, start=start, on_error=on_error)
@property
def source(self):
warn("Lark.source attribute has been renamed to Lark.source_path", DeprecationWarning)
return self.source_path
@source.setter
def source(self, value):
self.source_path = value
@property
def grammar_source(self):
warn("Lark.grammar_source attribute has been renamed to Lark.source_grammar", DeprecationWarning)
return self.source_grammar
@grammar_source.setter
def grammar_source(self, value):
self.source_grammar = value
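# A hedged usage note (an assumption about the upstream lark API, not something
# exercised by this generated module): the Lark class above is normally driven
# with a grammar string plus the options documented in LarkOptions.OPTIONS_DOC,
# roughly like the commented sketch below. The toy grammar is illustrative only.
#
#   parser = Lark(r'''
#       start: WORD "," WORD "!"
#       %import common.WORD
#       %ignore " "
#   ''', parser='lalr', maybe_placeholders=True)
#   parser.parse("Hello, World!")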
class DedentError(LarkError):
pass
class Indenter(PostLex):
def __init__(self):
self.paren_level = None
self.indent_level = None
assert self.tab_len > 0
def handle_NL(self, token):
if self.paren_level > 0:
return
yield token
        indent_str = token.rsplit('\n', 1)[1]  # whitespace (tabs and spaces) after the last newline
indent = indent_str.count(' ') + indent_str.count('\t') * self.tab_len
if indent > self.indent_level[-1]:
self.indent_level.append(indent)
yield Token.new_borrow_pos(self.INDENT_type, indent_str, token)
else:
while indent < self.indent_level[-1]:
self.indent_level.pop()
yield Token.new_borrow_pos(self.DEDENT_type, indent_str, token)
if indent != self.indent_level[-1]:
raise DedentError('Unexpected dedent to column %s. Expected dedent to %s' % (indent, self.indent_level[-1]))
def _process(self, stream):
for token in stream:
if token.type == self.NL_type:
for t in self.handle_NL(token):
yield t
else:
yield token
if token.type in self.OPEN_PAREN_types:
self.paren_level += 1
elif token.type in self.CLOSE_PAREN_types:
self.paren_level -= 1
assert self.paren_level >= 0
while len(self.indent_level) > 1:
self.indent_level.pop()
yield Token(self.DEDENT_type, '')
assert self.indent_level == [0], self.indent_level
def process(self, stream):
self.paren_level = 0
self.indent_level = [0]
return self._process(stream)
##
@property
def always_accept(self):
return (self.NL_type,)
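# A hedged configuration sketch: Indenter is abstract as written; a subclass
# supplies the terminal names and tab width. The names below are conventional
# but must match the grammar actually in use (they are assumptions here).
class _ExamplePythonicIndenter(Indenter):
    NL_type = '_NEWLINE'
    OPEN_PAREN_types = ['LPAR', 'LSQB', 'LBRACE']
    CLOSE_PAREN_types = ['RPAR', 'RSQB', 'RBRACE']
    INDENT_type = '_INDENT'
    DEDENT_type = '_DEDENT'
    tab_len = 8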
import pickle, zlib, base64
DATA = (
b'eJztWdlSG1kSZZHEvth4ZTWr2b3hBWOMhRAYrpCEJLCh211dQBnJFhJWqTrsB0fMEzGOuI/V/zLzEfNPk3mzZE67zQ9MjB3B0V3y5LmZeW9t/wj/+Z+BOvPvqz+pI2d2xXUqPv9uLTqfnYp1VC69N+2WqlM5LZTsouu/8ye/+rr+la/q3K9+vlnVCzQINAqEBMICEYEmgWaBFoFWgTaBdoEOgU6BLoFugSsCVwV6BK4JXBe4IXBT4Jbr6EjhpFSuOKxeN7zJ+ropltrejidzvqM7TqyKc+J8tt4X7ROX1qVbPNexDr9UHdf/VotF9cuZ4+tWCknV+Vz17KKvmy3Ta1m+bknwpBjHy9NtEsmL8IUrXtEJQkeCbouuXoE+gX6BAYFBgSGBOwLDAiMCowJjAuMCEwJ3BSYFpgSmBWYEZgXmBOYF7gncF3gg8FDgkcCCwGOBJwJPBZ4JLAo8F1gSeCGwLPBSYEXglUBUYFUgJrAmEBdYF9gQeC2wKbAloAQSAtsCSYGUQFpgRyAjkBXICewK7Am8EXgrsC9wQHUVdqt2pUqZrf2yv2dfCiZUtIsVPz+lW9OmW0okX292XLX80Sm5XCJUdG3H5SMqmdNTp1T1Vb1uzEU3fNXAVRZNppLWfV81fm888FVItzgl79QqVJ1TX4V1j2V9b7vWWdFzeVZER7K5zG4s56smHRqLJ9d81axD8eTutq9adGRzO53K0GCrbr2w9lWbjiRWM9FY3FftOpIJfnboK2614h1VrbNK+cypVAtU1apTt2Tj25uxVCKV9FWX7khnUul4JrdvJaPbZNWtm7dTmXjudZSGr+jGTdp16qpuh/USS4/utay/scs6Hvrqmr5uWWghIwu+uq6bg/lffHUjCErJPnV8dVOHY6LhFv2iDR/11W0dymR3Vn3VqyPls2qhXPJVnw4lTF+/bpI+EjSgI3vxWC6V8dWgDplsqiHd9odzVC0H2VV3mNYse1g3J+LZrCxyRIfjO7vRhK9GdYTWVCid+GpMhygRpGVch9ejiSz9mtDt4s76wy56xHdXd8azsWg6vmZR1jaTVACTui2IiqxpSndbViDS4pqzHvlqmg6407My1Z+a0SGOgK9mjWsy9NVc3lXzutMy86tidZ8OKppg8yFnCtD8of/lc6o+wgJhA2GWsJHwDmGI8CNhmPD43PcU17GKMJi9Zkyg0YiNJmw0Y6MFGp5qYM5W8hChoTbWxL2NX7lRH2z4c99VIZ5Xc3cX3ZlGPTbaoeGpMFt2EPOnwEM390a4tybyDoo0jQg2GrDRiI0WaHiqCTXGcXIcNcZRY9xYNqOaQVQziGoGUc0gOhhENYOGs4Vi2EnLPeHwtVKjixqL3Ghjb7Us17LO1bAMKqdR5bRhbA9I7jFJBzW6qfEbNzpR/xDqH0L9Q6h/CPUPof4h462LHFwhB6fsoJsdXKXWGM3rIbRAOhfwe8JrhG+hgHlJ04TXCceZ8gqz3KDWA+q9SdjIvVcxbSuoagXTtoIBWTESezCQ7G3hL6rqg6sf2L00dtfQ4yh6HEWPo2g5aiyvs+Ut8nAf1okpfAL2a2i/ZuxvoOcD9HyAng8whQeYwgNDc5NpOIBxGrpNuEnYS7jLo7fQySI6WUQniyhv0fDeZss+4rlJQ7xnRwj7CWdw7/byrNrx8IhZBrBxExqe6uPJLO1xkPM49/Z/lUaei2sA9eZQbw715lBvzpAPBnusjWmG8HicwVKfQc4ZPB5nMNAzuAlmjIM7zFkToJBTIadCaQrzpdCBMpzDtQOxmYZqweYghzHII7il+5GlH/n7UVM/aurH1fQbz6MYoRRaptAyhRFKoesUcqYM59hXOQAecgrGMZO7yLmLmdzFcO0amgmsvEHCR4RDhClTXnVqLiif10GwFFvdFSshmnXPL65as4Z1ksd/tGb2jgt2T03REu5QY52XMI1L2Mcl7OMS9jEs+5iRfeN5ppbkD5jUWTk+ZOJ912zei4an5kjJME2eZCXzmK0RzNYIyhrBbI2grBHM1ohxcC/YMDl2cL92F/LrT+4+agtdxVytGpIHsv/NvYK5d+jk3odBFM+Y+lFwqRrkxgKGdBm1L2NIl9HTsvH0GOP1FuP11ow/QeYYMseQOYbMMWP5NDiAMizw2WVl0oplshjE7oBNngcLbOHGEsp8gzLfGGcvgkvpdZ68jJqnUPMUap5CzVOG5iXRrODZu4Bn7wKevQvG4JVUUJ0qmaqoU1e4N4oCllDAEgpYQgFLhm8VLzt8pi9xbwzrdAfrdAfJd7BOd7BOd7BOd4ynNeRMI2caOdPImUbONHKmDWecOUdI9QYN1bI1j9maN/PWg9LY42xtsNEotWZp3hhhgnCc8A3hBOEh4V3CX9j0Nc/uCtieMHU3NDy1idl7jNl7jNl7bCZvXXabeAuy9Byz9NzYqZ9Vc+0o5apu51kJrNkHGIUHhmUbq2Qcgz6OVTKO/seNZZIt+WAZDvzf5d4UhuYphuapsUpjxjOY8Qw6z2DGM5jxDGY8Yzh3cA17SLOHa9jDNewZywzGnmvmhamZOlX8IRc28LxGnteGJ4v3D1u4qi2Us4WWW3g12cIlbhnO3GV1EQItk8g4aex22W6K5v0erGX1XG4eV3h078cVdwWzen7wchW8vEIvr4yXN1hXD7GuHprxt5epvwG8CVx1AuORMCT7tceRVzQ0TZgGshnCmKmFOtV/fvFYMku4dv73xxM+GB3jok4lobpuG1cHUkN/vTiy6TXQO4ZxGDN2v2Ac7mEc7pnxX7E2o1gMUazNKDJHjeU73ClJrKkk0iRxpyQxoEncKUnD+Rvuzme4O5+ZcQuzVnsObCccOr94/qsFtBatWuD5+W+HWX5nFq643qC2+rjXxkhs4BI2MBIbGIkNo+oQYzyHMZ4z40dYJlwW7wjnCF+Cyh/L5LLymCfcD8ptFMqFy6oeQt1rXB//7A0PB2weXPMbnoELl55yahdYejA2D8oN3Ps+uLOa4AvSCdYjO9g+//tN2wTGasIIyuPzxDCWwzDur2Gsp2FMxjBWzbDhLATCnrKwD+igDx30oYM+dNCHDvrQQZ9x8BFLI4uTs1gaWVxu1lgWa/FvOr/83nYd7daN3SkWOpfo1g/HVBTsX6D9C2Nfql38buPFrxw8JE1xoM7wUYcfZZ4F1+Xn55c+8njqE4Z3AMM7gOEdwPAOYMQGMLzc8Dzd/v0dpbyZzLvqyOWX08d/6Xvpehfvas03lmPn0Dvxv+muj45zZtnFohW8Xv+mW6sVx7GOirbr+kkdPrKP8g51N52V3WrR+ewn8/V5T4fNhx4/P6nbqhW75L4vV06pncx/ekcebX7NXChXCtUvvo6UaIw/AbXYp4eFE890hmyvWvZ12HxMIvoefi1tn5BmixwVRCrpC76tkcJD++gjq9dXT+0vhzStaB85+XLx2Km4/j91p3NcqFoXn96SebrLy0980+3ystc6s6t5/qyk29yyVzlyTIef9PJTuos/NxRKJ+sV/mBVOva9/Nz/vz/9r31/+vCvxrq6D/+mP5TyUMKufPS9+
f8CukPl4A=='
)
DATA = pickle.loads(zlib.decompress(base64.b64decode(DATA)))
MEMO = (
b'eJzFWVtvE0cUjhM7vsRJoEAv9Ab0gh0Id9oGAsEEA8GxHWwHCklYbZMNu8Y32buUUFCrqlLjatuXLkWCol7fKlW0VdW3vlZqK1WqhFSp6t/oS6W2M56198zueL0mN4MQnj3zne98M3POmfVbnltvbu+ofW5qoRj+R3UX+LygqV1jiYymeku8LAvlgoafeK7yOQU9CoRGDk3tGRyaCe/QVM98jr9c0WY0tZt7XZqTRfTfUMz1nuc/9OkQVB/HyQslgeM01T9BwFJRTVF9pbJULEvyghZziUG1JyOU81KBz50Q5jUl5kL+xE61L5oejUxET3DpTGoscUoT3Xi8W924bdfASGhkeOv0dDg0jT7hgZFtmuib0cQAdt9Zdy8GxV5F7MMuxH4l1klgPaOJSDzaQBtC8XA3cFCRwYszN6b4wesz4TD6Tv5aH90g0YcHgEtXE5ddxGXn+XTDXxDjbfEHewNYwdYQbgLRlYkYCsA1aI3gIQi9E6nkRDSVucCtXvzdxLV3NBmPR9GWqjv1T++e3j11KTAz4GDdvASkG22DyVEDo7sil5VZmQAE1YC+v9JyWQOTfWSyO5qYjDemuoWCktcnikPA2q+7GotPJFPAlZQvFcsya0ZAnzF+PBUZNVR1vcEy7tGNUybjmyzjYH27JseTCcP2EMu2l9j609H4mMn+MMu+T8eOnp2MjBu2R1i2/bqCSH2DshtpL9SsVZek8aYp63T4k5HxtDHHM8/nKmSS+Jp5yvpGtPF4xGC0k8XoEZ3RePrsccN0imW6QTdNUaYzLNON+uKci45mkilj7a8Ks3KxzJqxiczwjUfT6czpCBB9mGX+qG4eT6aitPlRlvljujnHRRLJBLenYd6HDw/6UxnAJwhmAHeTE/Q4DbW3AdWLgYbbQHoCl4JulLovS7WygJesIvP4bKA0nigW6pkcJXm/cK3EFypSsYCqgp7SOa5mXfsXhxQUP1J41VMszwllLdahevicxFe0hOotlmQ0s1KrPf1XBKHE8bkcJxevCGiwqnpr4HN7tUWxL6H2y0K+lONlgasUlfKsgAB60Yi8wEmFOWlWqGhhzC+l5ISkjqugATceQAVnM2b3YY2ieKvGSbyNlLiLov44Id7DDz6pip9iV+JnCfFzBCZ+gQDEL5XYk8ZcIyXVIUJkPDhXnOVmi/m8UJAr5ClxdlANzEs5lLa4oiKjUIKqzxCQzO0hmFytMBtTDYHA2HrdtlQuloSyLAn6UwGF0kGFUmWF8hSQ4StTDOLXkLZ4H3EVv9G/fgce/dBw6GI7RIDVanWxKgDHT9s5tnH2PdNxZ1PHi8gz5fiZh3RsOOuyc0ZH+SzYKXoFcrpPDlNU0MlCsy07IlAbldBRaG/ZtwARHphFoD2LfwJSfzlY6kWT4FvhaalXVZPLY3Sw9X7tfrU2qAeleslsTmsR3Tbg0aIQ8BqlvW7gOMOYK+WUCk6bRvBn6uYORX4O0PDp53PB8QbQ2z9dAsIgSRNm5QM3br7hgLkHpAAzNGAP4NLWfnoe7Kd/zDL/a7DJdrg6OuxiynZhA8PeQ3/1WqZn4KLY5h/8WYQb84VlI+2cJTEI1O3tc1ctaZpO04trQNo+6eksKWm327BcwT3AUNdtXxlM6obWhjcm6mlFlBI4vCSiS1K02778mRQdWCWimJm3FTNKwh0gPTfropoXi824rTXNITVj31Jqxk5AytoGYjrMCslb0JmN6yAsxaR4WOqRCZmEcYEOPkjmcuT9UDuFYpfBIJuvL24jrGwJLDBvn9CptdwNA0OySYXLZsUsr5aaSsckvgd4YIRv+KHcO1yXvUCVdyyqiJfq6uuS1K/i9qD77EHnICiqQY27uj3qfqAD3X+As1Kgtwu11cjQOo7TJ5N74X54aCptHpoDINLbMFImnexdKqnQ3ppsOOjtYFveaHibWk9fHF4CMoOWro0EwOT+MuB+37In1B7y4oMzHNocQoj7CkxbFhAgzjVanJv0RjE3r+Lbbe6EIcDD32jmLTnO2ueS8dYtt5PsWHuTTltIy9RnHwLr95N172V/puvlr63q628WAzqW7O8WA+nhG+7Da84++8BB5w0FH15Lyi37brPAR1acLb2RaTmbt9pVckeAVI+uHlX77ppwo2QcWWlutG7NG+qq5bXZsRWkZt9C17hQMkVacFnR7MPQ0dfWVe/4WrLHdP1tXfhGV58urW6gFV3I9sSqssX0elrQo8SMLj89u40ZtOVmSownV5AbJtPbggwl1KllJ0Mr09eCDORyejm5YOf99s4pIcYM55ZX8Uu9lZ8B2NbO0Lixvdtmsx8Dgl3vtFwA33dw0YZw44Am46W1ocImjoOPyVuRA80uxExfccOXeMcET/3i5hAvYYMHfpdxqETSjh38hYN31mROMPFCcICs4ged4Ow06Qwh8FmnwBoEbtLHQeCUU+A7ELhJEwaB0wD4b5O05gudw7XPMCFDcIBw/RZydbANJsH54q2XadNPPg7ZnmsGGjKNka8/tsn5PIC/x3h5YbqoUm9uHN5RX7V3YYwxC+cvbQZ0AXhbtCQ5Rj51uA4Xm+GGTGPk6x/t0VZ2/Q8k/+Ae'
)
MEMO = pickle.loads(zlib.decompress(base64.b64decode(MEMO)))
Shift = 0
Reduce = 1
def Lark_StandAlone(**kwargs):
return Lark._load_from_dict(DATA, MEMO, **kwargs)
| gpl-3.0 |
herilalaina/scikit-learn | examples/covariance/plot_covariance_estimation.py | 34 | 5075 | """
=======================================================================
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
=======================================================================
When working with covariance estimation, the usual approach is to use
a maximum likelihood estimator, such as the
:class:`sklearn.covariance.EmpiricalCovariance`. It is consistent, i.e. it
converges to the true (population) covariance when given many
observations. However, it can also be beneficial to regularize it, in
order to reduce its variance; this, in turn, introduces some bias. This
example illustrates the simple regularization used in
:ref:`shrunk_covariance` estimators. In particular, it focuses on how to
set the amount of regularization, i.e. how to choose the bias-variance
trade-off.
Here we compare 3 approaches:
* Setting the parameter by cross-validating the likelihood on three folds
according to a grid of potential shrinkage parameters.
* A closed formula proposed by Ledoit and Wolf to compute
the asymptotically optimal regularization parameter (minimizing a MSE
criterion), yielding the :class:`sklearn.covariance.LedoitWolf`
covariance estimate.
* An improvement of the Ledoit-Wolf shrinkage, the
:class:`sklearn.covariance.OAS`, proposed by Chen et al. Its
convergence is significantly better under the assumption that the data
are Gaussian, in particular for small samples.
To quantify estimation error, we plot the likelihood of unseen data for
different values of the shrinkage parameter. We also show the choices by
cross-validation, or with the LedoitWolf and OAS estimates.
Note that the maximum likelihood estimate corresponds to no shrinkage,
and thus performs poorly. The Ledoit-Wolf estimate performs really well,
as it is close to the optimal and is not computationally costly. In this
example, the OAS estimate is a bit further away. Interestingly, both
approaches outperform cross-validation, which is significantly more
computationally costly.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
from sklearn.covariance import LedoitWolf, OAS, ShrunkCovariance, \
log_likelihood, empirical_covariance
from sklearn.model_selection import GridSearchCV
# #############################################################################
# Generate sample data
n_features, n_samples = 40, 20
np.random.seed(42)
base_X_train = np.random.normal(size=(n_samples, n_features))
base_X_test = np.random.normal(size=(n_samples, n_features))
# Color samples
coloring_matrix = np.random.normal(size=(n_features, n_features))
X_train = np.dot(base_X_train, coloring_matrix)
X_test = np.dot(base_X_test, coloring_matrix)
# #############################################################################
# Compute the likelihood on test data
# spanning a range of possible shrinkage coefficient values
shrinkages = np.logspace(-2, 0, 30)
negative_logliks = [-ShrunkCovariance(shrinkage=s).fit(X_train).score(X_test)
for s in shrinkages]
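# Hedged side note (editor's addition, not part of the original example): as we
# understand it, the shrunk estimator used above is a convex combination of the
# empirical covariance S and a scaled identity,
#     Sigma_shrunk = (1 - alpha) * S + alpha * (trace(S) / n_features) * I,
# with alpha the shrinkage coefficient. The sketch below recomputes that
# formula by hand for one alpha and compares it to ShrunkCovariance; treat it
# as an illustration of the idea, not as a reference implementation.
alpha_demo = 0.1
S_demo = empirical_covariance(X_train)
mu_demo = np.trace(S_demo) / n_features
shrunk_manual = (1. - alpha_demo) * S_demo + alpha_demo * mu_demo * np.eye(n_features)
shrunk_sklearn = ShrunkCovariance(shrinkage=alpha_demo).fit(X_train).covariance_
print("manual vs. ShrunkCovariance, max abs difference: %g"
      % np.max(np.abs(shrunk_manual - shrunk_sklearn)))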
# under the ground-truth model, which we would not have access to in real
# settings
real_cov = np.dot(coloring_matrix.T, coloring_matrix)
emp_cov = empirical_covariance(X_train)
loglik_real = -log_likelihood(emp_cov, linalg.inv(real_cov))
# #############################################################################
# Compare different approaches to setting the parameter
# GridSearch for an optimal shrinkage coefficient
tuned_parameters = [{'shrinkage': shrinkages}]
cv = GridSearchCV(ShrunkCovariance(), tuned_parameters)
cv.fit(X_train)
# Ledoit-Wolf optimal shrinkage coefficient estimate
lw = LedoitWolf()
loglik_lw = lw.fit(X_train).score(X_test)
# OAS coefficient estimate
oa = OAS()
loglik_oa = oa.fit(X_train).score(X_test)
# #############################################################################
# Plot results
fig = plt.figure()
plt.title("Regularized covariance: likelihood and shrinkage coefficient")
plt.xlabel('Regularization parameter: shrinkage coefficient')
plt.ylabel('Error: negative log-likelihood on test data')
# range shrinkage curve
plt.loglog(shrinkages, negative_logliks, label="Negative log-likelihood")
plt.plot(plt.xlim(), 2 * [loglik_real], '--r',
label="Real covariance likelihood")
# adjust view
lik_max = np.amax(negative_logliks)
lik_min = np.amin(negative_logliks)
ymin = lik_min - 6. * np.log((plt.ylim()[1] - plt.ylim()[0]))
ymax = lik_max + 10. * np.log(lik_max - lik_min)
xmin = shrinkages[0]
xmax = shrinkages[-1]
# LW likelihood
plt.vlines(lw.shrinkage_, ymin, -loglik_lw, color='magenta',
linewidth=3, label='Ledoit-Wolf estimate')
# OAS likelihood
plt.vlines(oa.shrinkage_, ymin, -loglik_oa, color='purple',
linewidth=3, label='OAS estimate')
# best CV estimator likelihood
plt.vlines(cv.best_estimator_.shrinkage, ymin,
-cv.best_estimator_.score(X_test), color='cyan',
linewidth=3, label='Cross-validation best estimate')
plt.ylim(ymin, ymax)
plt.xlim(xmin, xmax)
plt.legend()
plt.show()
| bsd-3-clause |
schets/scikit-learn | benchmarks/bench_sparsify.py | 320 | 3372 | """
Benchmark SGD prediction time with dense/sparse coefficients.
Invoke with
-----------
$ kernprof.py -l sparsity_benchmark.py
$ python -m line_profiler sparsity_benchmark.py.lprof
Typical output
--------------
input data sparsity: 0.050000
true coef sparsity: 0.000100
test data sparsity: 0.027400
model sparsity: 0.000024
r^2 on test data (dense model) : 0.233651
r^2 on test data (sparse model) : 0.233651
Wrote profile results to sparsity_benchmark.py.lprof
Timer unit: 1e-06 s
File: sparsity_benchmark.py
Function: benchmark_dense_predict at line 51
Total time: 0.532979 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
51 @profile
52 def benchmark_dense_predict():
53 301 640 2.1 0.1 for _ in range(300):
54 300 532339 1774.5 99.9 clf.predict(X_test)
File: sparsity_benchmark.py
Function: benchmark_sparse_predict at line 56
Total time: 0.39274 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
56 @profile
57 def benchmark_sparse_predict():
58 1 10854 10854.0 2.8 X_test_sparse = csr_matrix(X_test)
59 301 477 1.6 0.1 for _ in range(300):
60 300 381409 1271.4 97.1 clf.predict(X_test_sparse)
"""
from scipy.sparse.csr import csr_matrix
import numpy as np
from sklearn.linear_model.stochastic_gradient import SGDRegressor
from sklearn.metrics import r2_score
np.random.seed(42)
def sparsity_ratio(X):
return np.count_nonzero(X) / float(n_samples * n_features)
n_samples, n_features = 5000, 300
X = np.random.randn(n_samples, n_features)
inds = np.arange(n_samples)
np.random.shuffle(inds)
X[inds[int(n_features / 1.2):]] = 0 # sparsify input
print("input data sparsity: %f" % sparsity_ratio(X))
coef = 3 * np.random.randn(n_features)
inds = np.arange(n_features)
np.random.shuffle(inds)
coef[inds[n_features // 2:]] = 0  # sparsify coef
print("true coef sparsity: %f" % sparsity_ratio(coef))
y = np.dot(X, coef)
# add noise
y += 0.01 * np.random.normal(size=(n_samples,))
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[:n_samples // 2], y[:n_samples // 2]
X_test, y_test = X[n_samples // 2:], y[n_samples // 2:]
print("test data sparsity: %f" % sparsity_ratio(X_test))
###############################################################################
clf = SGDRegressor(penalty='l1', alpha=.2, fit_intercept=True, n_iter=2000)
clf.fit(X_train, y_train)
print("model sparsity: %f" % sparsity_ratio(clf.coef_))
def benchmark_dense_predict():
for _ in range(300):
clf.predict(X_test)
def benchmark_sparse_predict():
X_test_sparse = csr_matrix(X_test)
for _ in range(300):
clf.predict(X_test_sparse)
def score(y_test, y_pred, case):
r2 = r2_score(y_test, y_pred)
print("r^2 on test data (%s) : %f" % (case, r2))
score(y_test, clf.predict(X_test), 'dense model')
benchmark_dense_predict()
clf.sparsify()
score(y_test, clf.predict(X_test), 'sparse model')
benchmark_sparse_predict()
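# Hedged addition (editor's sketch, not part of the original benchmark): a
# quick sanity check and wall-clock comparison for runs where kernprof /
# line_profiler is not available. We assume here that ``sparsify()`` converted
# ``clf.coef_`` to a scipy sparse matrix; if that does not hold for your
# scikit-learn version, the assertion below will say so.
from time import time
from scipy.sparse import issparse
assert issparse(clf.coef_), "expected a sparse coef_ after clf.sparsify()"
t0 = time()
benchmark_dense_predict()
print("dense X_test predict loop:  %.3fs" % (time() - t0))
t0 = time()
benchmark_sparse_predict()
print("sparse X_test predict loop: %.3fs" % (time() - t0))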
| bsd-3-clause |
nhuntwalker/astroML | book_figures/chapter9/fig_photoz_boosting.py | 4 | 4462 | """
Photometric Redshifts by Random Forests
---------------------------------------
Figure 9.16
Photometric redshift estimation using gradient-boosted decision trees, with 100
boosting steps. As with random forests (figure 9.15), boosting allows for
improved results over the single tree case (figure 9.14). Note, however, that
the computational cost of boosted decision trees is such that it is
computationally prohibitive to use very deep trees. By stringing together a
large number of very naive estimators, boosted trees improve on the
underfitting of each individual estimator.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from astroML.datasets import fetch_sdss_specgals
from astroML.decorators import pickle_results
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Fetch and prepare the data
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 distinct points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the results
# This is a long computation, so we'll save the results to a pickle.
@pickle_results('photoz_boosting.pkl')
def compute_photoz_forest(N_boosts):
rms_test = np.zeros(len(N_boosts))
rms_train = np.zeros(len(N_boosts))
i_best = 0
z_fit_best = None
for i, Nb in enumerate(N_boosts):
try:
# older versions of scikit-learn
clf = GradientBoostingRegressor(n_estimators=Nb, learn_rate=0.1,
max_depth=3, random_state=0)
except TypeError:
clf = GradientBoostingRegressor(n_estimators=Nb, learning_rate=0.1,
max_depth=3, random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
# root-mean-square error on the training and test sets
rms_train[i] = np.sqrt(np.mean((z_fit_train - z_train) ** 2))
rms_test[i] = np.sqrt(np.mean((z_fit - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit
return rms_test, rms_train, i_best, z_fit_best
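# Hedged aside (editor's addition, not part of the original figure script):
# @pickle_results above comes from astroML and, as we understand it, caches the
# function's return value in the named pickle so that re-running the script
# skips the expensive fit. If astroML is unavailable, a minimal stand-in in the
# same spirit could look like the sketch below. The name ``cache_to_pickle`` is
# ours, not an astroML API, and unlike a proper cache this sketch blindly
# trusts an existing pickle file and never checks whether the arguments changed.
def cache_to_pickle(fname):
    import os
    import pickle
    def decorator(func):
        def wrapper(*args, **kwargs):
            if os.path.exists(fname):
                with open(fname, 'rb') as f:
                    return pickle.load(f)
            result = func(*args, **kwargs)
            with open(fname, 'wb') as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator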
N_boosts = (10, 100, 200, 300, 400, 500)
rms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(N_boosts)
best_N = N_boosts[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 2.5))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# left panel: plot cross-validation results
ax = fig.add_subplot(121)
ax.plot(N_boosts, rms_test, '-k', label='cross-validation')
ax.plot(N_boosts, rms_train, '--k', label='training set')
ax.legend(loc=1)
ax.set_xlabel('number of boosts')
ax.set_ylabel('rms error')
ax.set_xlim(0, 510)
ax.set_ylim(0.009, 0.032)
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
ax.text(0.03, 0.03, "Tree depth: 3",
ha='left', va='bottom', transform=ax.transAxes)
# right panel: plot best fit
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.04, 0.96, "N = %i\nrms = %.3f" % (best_N, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
| bsd-2-clause |
codeaudit/reweighted-ws | learning/datasets/__init__.py | 6 | 6165 | """
Classes representing datasets.
"""
from __future__ import division
import os
import abc
import logging
import cPickle as pickle
import os.path as path
import gzip
import h5py
import numpy as np
import theano
import theano.tensor as T
from learning.preproc import Preproc
_logger = logging.getLogger(__name__)
floatX = theano.config.floatX
#-----------------------------------------------------------------------------
def datapath(fname):
""" Try to find *fname* in the dataset directory and return
a absolute path.
"""
candidates = [
path.abspath(path.join(path.dirname(__file__), "../../data")),
path.abspath("."),
path.abspath("data"),
]
if 'DATASET_PATH' in os.environ:
candidates.append(os.environ['DATASET_PATH'])
for c in candidates:
c = path.join(c, fname)
if path.exists(c):
return c
raise IOError("Could not find %s" % fname)
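# Hedged usage sketch (editor's comment): the file name below is purely
# illustrative -- substitute whatever dataset file you actually have. The
# search order is the repository-level ``data`` directory (two levels above
# this module), the current working directory, a local ``data`` directory,
# and finally ``$DATASET_PATH`` if it is set:
#
#     fname = datapath("some_dataset.h5")   # hypothetical file name
#     dataset = FromH5(fname)               # FromH5 is defined further below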
#-----------------------------------------------------------------------------
# Dataset base class
class DataSet(object):
__metaclass__ = abc.ABCMeta
def __init__(self, preproc=[]):
self._preprocessors = []
self.add_preproc(preproc)
def add_preproc(self, preproc):
""" Add the given preprocessors to the list of preprocessors to be used
Parameters
----------
preproc : {Preproc, list of Preprocessors}
"""
if isinstance(preproc, Preproc):
preproc = [preproc,]
for p in preproc:
assert isinstance(p, Preproc)
self._preprocessors += preproc
def preproc(self, X, Y):
""" Statically preprocess data.
Parameters
----------
X, Y : ndarray
Returns
-------
X, Y : ndarray
"""
for p in self._preprocessors:
X, Y = p.preproc(X, Y)
return X, Y
def late_preproc(self, X, Y):
""" Preprocess a batch of data
Parameters
----------
X, Y : theano.tensor
Returns
-------
X, Y : theano.tensor
"""
for p in self._preprocessors:
X, Y = p.late_preproc(X, Y)
return X, Y
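# Hedged illustration (editor's addition, not part of the original module):
# judging only from the interface used above, a preprocessor needs ``preproc``
# (static, ndarrays in / ndarrays out) and ``late_preproc`` (symbolic, Theano
# tensors in / out). A minimal binarizing preprocessor might look like the
# sketch below; the ``Preproc`` base class is not shown here, so its
# constructor signature and any further requirements are assumptions.
class BinarizePreproc(Preproc):
    """Threshold X at 0.5 and leave Y untouched (illustrative only)."""
    def preproc(self, X, Y):
        return (X > 0.5).astype(floatX), Y
    def late_preproc(self, X, Y):
        return T.cast(T.gt(X, 0.5), floatX), Y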
#-----------------------------------------------------------------------------
class ToyData(DataSet):
def __init__(self, which_set='train', preproc=[]):
super(ToyData, self).__init__(preproc)
self.which_set = which_set
X = np.array(
[[1., 1., 1., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 1., 1., 1.]], dtype=floatX)
Y = np.array([[1., 0.], [0., 1.]], dtype=floatX)
if which_set == 'train':
self.X = np.concatenate([X]*10)
self.Y = np.concatenate([Y]*10)
elif which_set == 'valid':
self.X = np.concatenate([X]*2)
self.Y = np.concatenate([Y]*2)
elif which_set == 'test':
self.X = np.concatenate([X]*2)
self.Y = np.concatenate([Y]*2)
else:
raise ValueError("Unknown dataset %s" % which_set)
self.n_datapoints = self.X.shape[0]
#-----------------------------------------------------------------------------
class BarsData(DataSet):
def __init__(self, which_set='train', n_datapoints=1000, D=5, preproc=[]):
super(BarsData, self).__init__(preproc)
n_vis = D**2
n_hid = 2*D
bar_prob = 1./n_hid
X = np.zeros((n_datapoints, D, D), dtype=floatX)
Y = (np.random.uniform(size=(n_datapoints, n_hid)) < bar_prob).astype(floatX)
for n in xrange(n_datapoints):
for d in xrange(D):
if Y[n, d] > 0.5:
X[n, d, :] = 1.0
if Y[n, D+d] > 0.5:
X[n, :, d] = 1.0
self.X = X.reshape((n_datapoints, n_vis))
self.Y = Y
self.n_datapoints = n_datapoints
#-----------------------------------------------------------------------------
class FromModel(DataSet):
def __init__(self, model, n_datapoints, preproc=[]):
super(FromModel, self).__init__(preproc)
batch_size = 100
# Compile a Theano function to draw samples from the model
n_samples = T.iscalar('n_samples')
n_samples.tag.test_value = 10
X, _ = model.sample_p(n_samples)
do_sample = theano.function(
inputs=[n_samples],
outputs=X[0],
name='sample_p')
model.setup()
n_vis = model.n_X
#n_hid = model.n_hid
X = np.empty((n_datapoints, n_vis), dtype=floatX)
#Y = np.empty((n_datapoints, n_hid), dtype=np.floatX)
for b in xrange(n_datapoints//batch_size):
first = b*batch_size
last = first + batch_size
X[first:last] = do_sample(batch_size)
remain = n_datapoints % batch_size
if remain > 0:
X[last:] = do_sample(remain)
self.n_datapoints = n_datapoints
self.X = X
self.Y = None
#-----------------------------------------------------------------------------
class FromH5(DataSet):
def __init__(self, fname, n_datapoints=None, offset=0, table_X="X", table_Y="Y"):
""" Load a dataset from an HDF5 file. """
super(FromH5, self).__init__()
if not path.exists(fname):
fname = datapath(fname)
with h5py.File(fname, "r") as h5:
#
if not table_X in h5.keys():
_logger.error("H5 file %s does not contain a table named %s" % (fname, table_X))
raise ValueError("H5 file %s does not contain a table named %s" % (fname, table_X))
N_total, D = h5[table_X].shape
if n_datapoints is None:
n_datapoints = N_total-offset
X = h5[table_X][offset:(offset+n_datapoints)]
X = X.astype(floatX)
if table_Y in h5.keys():
Y = h5[table_Y][offset:(offset+n_datapoints)]
Y = Y.astype(floatX)
else:
Y = None
Y = X[:,0]
self.X = X
self.Y = Y
self.n_datapoints = self.X.shape[0]
| agpl-3.0 |
schets/scikit-learn | examples/linear_model/plot_multi_task_lasso_support.py | 248 | 2211 | #!/usr/bin/env python
"""
=============================================
Joint feature selection with multi-task Lasso
=============================================
The multi-task lasso makes it possible to fit multiple regression problems
jointly, enforcing the selected features to be the same across
tasks. This example simulates sequential measurements: each task
is a time instant, and the relevant features vary in amplitude
over time while staying the same across tasks. The multi-task lasso imposes
that features selected at one time point are selected for all time
points. This makes feature selection by the Lasso more stable.
"""
print(__doc__)
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Lasso
rng = np.random.RandomState(42)
# Generate some 2D coefficients with sine waves with random frequency and phase
n_samples, n_features, n_tasks = 100, 30, 40
n_relevant_features = 5
coef = np.zeros((n_tasks, n_features))
times = np.linspace(0, 2 * np.pi, n_tasks)
for k in range(n_relevant_features):
coef[:, k] = np.sin((1. + rng.randn(1)) * times + 3 * rng.randn(1))
X = rng.randn(n_samples, n_features)
Y = np.dot(X, coef.T) + rng.randn(n_samples, n_tasks)
coef_lasso_ = np.array([Lasso(alpha=0.5).fit(X, y).coef_ for y in Y.T])
coef_multi_task_lasso_ = MultiTaskLasso(alpha=1.).fit(X, Y).coef_
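# Hedged check (editor's addition, not part of the original example): count how
# many features receive a non-zero coefficient in at least one task. With the
# block penalty of the multi-task model the recovered support is expected to
# stay close to the n_relevant_features truly relevant features, whereas
# independent Lasso fits tend to scatter non-zeros over more features. The
# exact counts depend on the random seed and the alpha values chosen above.
support_lasso = np.sum(np.any(coef_lasso_ != 0, axis=0))
support_mtl = np.sum(np.any(coef_multi_task_lasso_ != 0, axis=0))
print("features used by independent Lasso fits: %d" % support_lasso)
print("features used by MultiTaskLasso:         %d" % support_mtl)
print("truly relevant features:                 %d" % n_relevant_features)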
###############################################################################
# Plot support and time series
fig = plt.figure(figsize=(8, 5))
plt.subplot(1, 2, 1)
plt.spy(coef_lasso_)
plt.xlabel('Feature')
plt.ylabel('Time (or Task)')
plt.text(10, 5, 'Lasso')
plt.subplot(1, 2, 2)
plt.spy(coef_multi_task_lasso_)
plt.xlabel('Feature')
plt.ylabel('Time (or Task)')
plt.text(10, 5, 'MultiTaskLasso')
fig.suptitle('Coefficient non-zero location')
feature_to_plot = 0
plt.figure()
plt.plot(coef[:, feature_to_plot], 'k', label='Ground truth')
plt.plot(coef_lasso_[:, feature_to_plot], 'g', label='Lasso')
plt.plot(coef_multi_task_lasso_[:, feature_to_plot],
'r', label='MultiTaskLasso')
plt.legend(loc='upper center')
plt.axis('tight')
plt.ylim([-1.1, 1.1])
plt.show()
| bsd-3-clause |
herilalaina/scikit-learn | sklearn/tests/test_discriminant_analysis.py | 27 | 13926 | import numpy as np
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import ignore_warnings
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.discriminant_analysis import _cov
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]], dtype='f')
y = np.array([1, 1, 1, 2, 2, 2])
y3 = np.array([1, 1, 2, 2, 3, 3])
# Degenerate data with only one feature (still should be separable)
X1 = np.array([[-2, ], [-1, ], [-1, ], [1, ], [1, ], [2, ]], dtype='f')
# Data is just 9 separable points in the plane
X6 = np.array([[0, 0], [-2, -2], [-2, -1], [-1, -1], [-1, -2],
[1, 3], [1, 2], [2, 1], [2, 2]])
y6 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2])
y7 = np.array([1, 2, 3, 2, 3, 1, 2, 3, 1])
# Degenerate data with 1 feature (still should be separable)
X7 = np.array([[-3, ], [-2, ], [-1, ], [-1, ], [0, ], [1, ], [1, ],
[2, ], [3, ]])
# Data that has zero variance in one dimension and needs regularization
X2 = np.array([[-3, 0], [-2, 0], [-1, 0], [-1, 0], [0, 0], [1, 0], [1, 0],
[2, 0], [3, 0]])
# One element class
y4 = np.array([1, 1, 1, 1, 1, 1, 1, 1, 2])
# Data with less samples in a class than n_features
X5 = np.c_[np.arange(8), np.zeros((8, 3))]
y5 = np.array([0, 0, 0, 0, 0, 1, 1, 1])
solver_shrinkage = [('svd', None), ('lsqr', None), ('eigen', None),
('lsqr', 'auto'), ('lsqr', 0), ('lsqr', 0.43),
('eigen', 'auto'), ('eigen', 0), ('eigen', 0.43)]
def test_lda_predict():
# Test LDA classification.
# This checks that LDA implements fit and predict and returns correct
# values for simple toy data.
for test_case in solver_shrinkage:
solver, shrinkage = test_case
clf = LinearDiscriminantAnalysis(solver=solver, shrinkage=shrinkage)
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y, 'solver %s' % solver)
# Assert that it works with 1D data
y_pred1 = clf.fit(X1, y).predict(X1)
assert_array_equal(y_pred1, y, 'solver %s' % solver)
# Test probability estimates
y_proba_pred1 = clf.predict_proba(X1)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y,
'solver %s' % solver)
y_log_proba_pred1 = clf.predict_log_proba(X1)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1,
8, 'solver %s' % solver)
# Primarily test for commit 2f34950 -- "reuse" of priors
y_pred3 = clf.fit(X, y3).predict(X)
# LDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y3), 'solver %s' % solver)
# Test invalid shrinkages
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=-0.2231)
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="eigen", shrinkage="dummy")
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="svd", shrinkage="auto")
assert_raises(NotImplementedError, clf.fit, X, y)
# Test unknown solver
clf = LinearDiscriminantAnalysis(solver="dummy")
assert_raises(ValueError, clf.fit, X, y)
def test_lda_priors():
# Test priors (negative priors)
priors = np.array([0.5, -0.5])
clf = LinearDiscriminantAnalysis(priors=priors)
msg = "priors must be non-negative"
assert_raise_message(ValueError, msg, clf.fit, X, y)
# Test that priors passed as a list are correctly handled (run to see if
# failure)
clf = LinearDiscriminantAnalysis(priors=[0.5, 0.5])
clf.fit(X, y)
# Test that priors always sum to 1
priors = np.array([0.5, 0.6])
prior_norm = np.array([0.45, 0.55])
clf = LinearDiscriminantAnalysis(priors=priors)
assert_warns(UserWarning, clf.fit, X, y)
assert_array_almost_equal(clf.priors_, prior_norm, 2)
def test_lda_coefs():
# Test if the coefficients of the solvers are approximately the same.
n_features = 2
n_classes = 2
n_samples = 1000
X, y = make_blobs(n_samples=n_samples, n_features=n_features,
centers=n_classes, random_state=11)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_lsqr = LinearDiscriminantAnalysis(solver="lsqr")
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_svd.fit(X, y)
clf_lda_lsqr.fit(X, y)
clf_lda_eigen.fit(X, y)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_lsqr.coef_, 1)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_eigen.coef_, 1)
assert_array_almost_equal(clf_lda_eigen.coef_, clf_lda_lsqr.coef_, 1)
def test_lda_transform():
# Test LDA transform.
clf = LinearDiscriminantAnalysis(solver="svd", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="eigen", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="lsqr", n_components=1)
clf.fit(X, y)
msg = "transform not implemented for 'lsqr'"
assert_raise_message(NotImplementedError, msg, clf.transform, X)
def test_lda_explained_variance_ratio():
# Test if the sum of the normalized eigenvalues equals 1.
# Also tests whether the explained_variance_ratio_ formed by the
# eigen solver is the same as the explained_variance_ratio_ formed
# by the svd solver
state = np.random.RandomState(0)
X = state.normal(loc=0, scale=100, size=(40, 20))
y = state.randint(0, 3, size=(40,))
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_eigen.fit(X, y)
assert_almost_equal(clf_lda_eigen.explained_variance_ratio_.sum(), 1.0, 3)
assert_equal(clf_lda_eigen.explained_variance_ratio_.shape, (2,),
"Unexpected length for explained_variance_ratio_")
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_svd.fit(X, y)
assert_almost_equal(clf_lda_svd.explained_variance_ratio_.sum(), 1.0, 3)
assert_equal(clf_lda_svd.explained_variance_ratio_.shape, (2,),
"Unexpected length for explained_variance_ratio_")
assert_array_almost_equal(clf_lda_svd.explained_variance_ratio_,
clf_lda_eigen.explained_variance_ratio_)
def test_lda_orthogonality():
# arrange four classes with their means in a kite-shaped pattern
# the longer distance should be transformed to the first component, and
# the shorter distance to the second component.
means = np.array([[0, 0, -1], [0, 2, 0], [0, -2, 0], [0, 0, 5]])
# We construct perfectly symmetric distributions, so the LDA can estimate
# precise means.
scatter = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0],
[0, 0, 0.1], [0, 0, -0.1]])
X = (means[:, np.newaxis, :] + scatter[np.newaxis, :, :]).reshape((-1, 3))
y = np.repeat(np.arange(means.shape[0]), scatter.shape[0])
# Fit LDA and transform the means
clf = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
means_transformed = clf.transform(means)
d1 = means_transformed[3] - means_transformed[0]
d2 = means_transformed[2] - means_transformed[1]
d1 /= np.sqrt(np.sum(d1 ** 2))
d2 /= np.sqrt(np.sum(d2 ** 2))
# the transformed within-class covariance should be the identity matrix
assert_almost_equal(np.cov(clf.transform(scatter).T), np.eye(2))
# the means of classes 0 and 3 should lie on the first component
assert_almost_equal(np.abs(np.dot(d1[:2], [1, 0])), 1.0)
# the means of classes 1 and 2 should lie on the second component
assert_almost_equal(np.abs(np.dot(d2[:2], [0, 1])), 1.0)
def test_lda_scaling():
# Test if classification works correctly with differently scaled features.
n = 100
rng = np.random.RandomState(1234)
# use uniform distribution of features to make sure there is absolutely no
# overlap between classes.
x1 = rng.uniform(-1, 1, (n, 3)) + [-10, 0, 0]
x2 = rng.uniform(-1, 1, (n, 3)) + [10, 0, 0]
x = np.vstack((x1, x2)) * [1, 100, 10000]
y = [-1] * n + [1] * n
for solver in ('svd', 'lsqr', 'eigen'):
clf = LinearDiscriminantAnalysis(solver=solver)
# should be able to separate the data perfectly
assert_equal(clf.fit(x, y).score(x, y), 1.0,
'using covariance: %s' % solver)
def test_lda_store_covariance():
# Test for solver 'lsqr' and 'eigen'
# 'store_covariance' has no effect on 'lsqr' and 'eigen' solvers
for solver in ('lsqr', 'eigen'):
clf = LinearDiscriminantAnalysis(solver=solver).fit(X6, y6)
assert_true(hasattr(clf, 'covariance_'))
# Test the actual attribute:
clf = LinearDiscriminantAnalysis(solver=solver,
store_covariance=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariance_'))
assert_array_almost_equal(
clf.covariance_,
np.array([[0.422222, 0.088889], [0.088889, 0.533333]])
)
# Test for SVD solver, the default is to not set the covariances_ attribute
clf = LinearDiscriminantAnalysis(solver='svd').fit(X6, y6)
assert_false(hasattr(clf, 'covariance_'))
# Test the actual attribute:
clf = LinearDiscriminantAnalysis(solver=solver,
store_covariance=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariance_'))
assert_array_almost_equal(
clf.covariance_,
np.array([[0.422222, 0.088889], [0.088889, 0.533333]])
)
def test_qda():
# QDA classification.
# This checks that QDA implements fit and predict and returns
# correct values for a simple toy dataset.
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
assert_array_equal(y_pred, y6)
# Assert that it works with 1D data
y_pred1 = clf.fit(X7, y6).predict(X7)
assert_array_equal(y_pred1, y6)
# Test probability estimates
y_proba_pred1 = clf.predict_proba(X7)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y6)
y_log_proba_pred1 = clf.predict_log_proba(X7)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1, 8)
y_pred3 = clf.fit(X6, y7).predict(X6)
# QDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y7))
# Classes should have at least 2 elements
assert_raises(ValueError, clf.fit, X6, y4)
def test_qda_priors():
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
n_pos = np.sum(y_pred == 2)
neg = 1e-10
clf = QuadraticDiscriminantAnalysis(priors=np.array([neg, 1 - neg]))
y_pred = clf.fit(X6, y6).predict(X6)
n_pos2 = np.sum(y_pred == 2)
assert_greater(n_pos2, n_pos)
def test_qda_store_covariance():
# The default is to not set the covariances_ attribute
clf = QuadraticDiscriminantAnalysis().fit(X6, y6)
assert_false(hasattr(clf, 'covariance_'))
# Test the actual attribute:
clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariance_'))
assert_array_almost_equal(
clf.covariance_[0],
np.array([[0.7, 0.45], [0.45, 0.7]])
)
assert_array_almost_equal(
clf.covariance_[1],
np.array([[0.33333333, -0.33333333], [-0.33333333, 0.66666667]])
)
def test_qda_deprecation():
# Test the deprecation
clf = QuadraticDiscriminantAnalysis(store_covariances=True)
assert_warns_message(DeprecationWarning, "'store_covariances' was renamed"
" to store_covariance in version 0.19 and will be "
"removed in 0.21.", clf.fit, X, y)
# check that covariance_ (and covariances_ with warning) is stored
assert_warns_message(DeprecationWarning, "Attribute covariances_ was "
"deprecated in version 0.19 and will be removed "
"in 0.21. Use covariance_ instead", getattr, clf,
'covariances_')
def test_qda_regularization():
# the default is reg_param=0. and will cause issues
# when there is a constant variable
clf = QuadraticDiscriminantAnalysis()
with ignore_warnings():
y_pred = clf.fit(X2, y6).predict(X2)
assert_true(np.any(y_pred != y6))
# adding a little regularization fixes the problem
clf = QuadraticDiscriminantAnalysis(reg_param=0.01)
with ignore_warnings():
clf.fit(X2, y6)
y_pred = clf.predict(X2)
assert_array_equal(y_pred, y6)
# Case n_samples_in_a_class < n_features
clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
with ignore_warnings():
clf.fit(X5, y5)
y_pred5 = clf.predict(X5)
assert_array_equal(y_pred5, y5)
def test_covariance():
x, y = make_blobs(n_samples=100, n_features=5,
centers=1, random_state=42)
# make features correlated
x = np.dot(x, np.arange(x.shape[1] ** 2).reshape(x.shape[1], x.shape[1]))
c_e = _cov(x, 'empirical')
assert_almost_equal(c_e, c_e.T)
c_s = _cov(x, 'auto')
assert_almost_equal(c_s, c_s.T)
| bsd-3-clause |
nhuntwalker/astroML | book_figures/chapter10/fig_LS_double_eclipse.py | 4 | 4789 | """
Lomb-Scargle Aliasing
---------------------
Figure 10.18
Analysis of a light curve where the standard Lomb-Scargle periodogram fails to
find the correct period (the same star as in the top-left panel in figure
10.17). The two top panels show the periodograms (left) and phased light curves
(right) for the truncated Fourier series model with M = 1 and M = 6 terms.
Phased light curves are computed using the incorrect aliased period favored by
the M = 1 model. The correct period is favored by the M = 6 model but
unrecognized by the M = 1 model (bottom-left panel). The phased light curve
constructed with the correct period is shown in the bottom-right panel. This
case demonstrates that the Lomb-Scargle periodogram may easily fail when the
signal shape significantly differs from a single sinusoid.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from astroML.time_series import multiterm_periodogram, MultiTermFit
from astroML.datasets import fetch_LINEAR_sample
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Get data
data = fetch_LINEAR_sample()
t, y, dy = data[14752041].T
#------------------------------------------------------------
# Do a single-term and multi-term fit around the peak
omega0 = 17.217
nterms_fit = 6
# hack to get better phases: this doesn't change results,
# except for how the phase plots are displayed
t -= 0.4 * np.pi / omega0
width = 0.03
omega = np.linspace(omega0 - width - 0.01, omega0 + width - 0.01, 1000)
#------------------------------------------------------------
# Compute periodograms and best-fit solutions
# factor gives the factor that we're dividing the fundamental frequency by
factors = [1, 2]
nterms = [1, 6]
# Compute PSDs for factors & nterms
PSDs = dict()
for f in factors:
for n in nterms:
PSDs[(f, n)] = multiterm_periodogram(t, y, dy, omega / f, n)
# Compute the best-fit omega from the 6-term fit
omega_best = dict()
for f in factors:
omegaf = omega / f
PSDf = PSDs[(f, 6)]
omega_best[f] = omegaf[np.argmax(PSDf)]
# Compute the best-fit solution based on the fundamental frequency
best_fit = dict()
for f in factors:
for n in nterms:
mtf = MultiTermFit(omega_best[f], n)
mtf.fit(t, y, dy)
phase_best, y_best = mtf.predict(1000, adjust_offset=False)
best_fit[(f, n)] = (phase_best, y_best)
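# Editor's note (comment only, no change to the computation): the phased light
# curves below fold the observation times by the candidate period
#     P = 2 * pi / omega,   phase = (t / P) mod 1,
# so phasing with the aliased omega favored by the 1-term model produces a
# qualitatively different curve than phasing with the correct omega from the
# 6-term model, which is what the right-hand panels illustrate.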
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 2.5))
fig.subplots_adjust(left=0.1, right=0.95, wspace=0.25,
bottom=0.12, top=0.95, hspace=0.2)
for i, f in enumerate(factors):
P_best = 2 * np.pi / omega_best[f]
phase_best = (t / P_best) % 1
# first column: plot the PSD
ax1 = fig.add_subplot(221 + 2 * i)
ax1.plot(omega / f, PSDs[(f, 6)], '-', c='black', label='6 terms')
ax1.plot(omega / f, PSDs[(f, 1)], '-', c='gray', label='1 term')
ax1.grid(color='gray')
ax1.legend(loc=2)
ax1.axis('tight')
ax1.set_ylim(-0.05, 1.001)
ax1.xaxis.set_major_locator(plt.MultipleLocator(0.01))
ax1.xaxis.set_major_formatter(plt.FormatStrFormatter('%.2f'))
# second column: plot the phased data & fit
ax2 = fig.add_subplot(222 + 2 * i)
ax2.errorbar(phase_best, y, dy, fmt='.k', ms=4, ecolor='gray', lw=1,
capsize=1.5)
ax2.plot(best_fit[(f, 1)][0], best_fit[(f, 1)][1], '-', c='gray')
ax2.plot(best_fit[(f, 6)][0], best_fit[(f, 6)][1], '-', c='black')
ax2.text(0.02, 0.02, (r"$\omega_0 = %.2f$" % omega_best[f] + "\n"
+ r"$P_0 = %.2f\ {\rm hours}$" % (24 * P_best)),
ha='left', va='bottom', transform=ax2.transAxes)
ax2.grid(color='gray')
ax2.set_xlim(0, 1)
ax2.set_ylim(plt.ylim()[::-1])
ax2.yaxis.set_major_locator(plt.MultipleLocator(0.4))
# label both axes
ax1.set_ylabel(r'$P_{\rm LS}(\omega)$')
ax2.set_ylabel(r'${\rm mag}$')
if i == 1:
ax1.set_xlabel(r'$\omega$')
ax2.set_xlabel(r'${\rm phase}$')
plt.show()
| bsd-2-clause |
herilalaina/scikit-learn | sklearn/ensemble/gradient_boosting.py | 1 | 81822 | """Gradient Boosted Regression Trees
This module contains methods for fitting gradient boosted regression trees for
both classification and regression.
The module structure is the following:
- The ``BaseGradientBoosting`` base class implements a common ``fit`` method
for all the estimators in the module. Regression and classification
only differ in the concrete ``LossFunction`` used.
- ``GradientBoostingClassifier`` implements gradient boosting for
classification problems.
- ``GradientBoostingRegressor`` implements gradient boosting for
regression problems.
"""
# Authors: Peter Prettenhofer, Scott White, Gilles Louppe, Emanuele Olivetti,
# Arnaud Joly, Jacob Schreiber
# License: BSD 3 clause
from __future__ import print_function
from __future__ import division
from abc import ABCMeta
from abc import abstractmethod
from .base import BaseEnsemble
from ..base import ClassifierMixin
from ..base import RegressorMixin
from ..externals import six
from ._gradient_boosting import predict_stages
from ._gradient_boosting import predict_stage
from ._gradient_boosting import _random_sample_mask
import numbers
import numpy as np
from scipy import stats
from scipy.sparse import csc_matrix
from scipy.sparse import csr_matrix
from scipy.sparse import issparse
from scipy.special import expit
from time import time
from ..model_selection import train_test_split
from ..tree.tree import DecisionTreeRegressor
from ..tree._tree import DTYPE
from ..tree._tree import TREE_LEAF
from ..utils import check_random_state
from ..utils import check_array
from ..utils import check_X_y
from ..utils import column_or_1d
from ..utils import check_consistent_length
from ..utils import deprecated
from ..utils.fixes import logsumexp
from ..utils.stats import _weighted_percentile
from ..utils.validation import check_is_fitted
from ..utils.multiclass import check_classification_targets
from ..exceptions import NotFittedError
class QuantileEstimator(object):
"""An estimator predicting the alpha-quantile of the training targets."""
def __init__(self, alpha=0.9):
if not 0 < alpha < 1.0:
raise ValueError("`alpha` must be in (0, 1.0) but was %r" % alpha)
self.alpha = alpha
def fit(self, X, y, sample_weight=None):
if sample_weight is None:
self.quantile = stats.scoreatpercentile(y, self.alpha * 100.0)
else:
self.quantile = _weighted_percentile(y, sample_weight,
self.alpha * 100.0)
def predict(self, X):
check_is_fitted(self, 'quantile')
y = np.empty((X.shape[0], 1), dtype=np.float64)
y.fill(self.quantile)
return y
class MeanEstimator(object):
"""An estimator predicting the mean of the training targets."""
def fit(self, X, y, sample_weight=None):
if sample_weight is None:
self.mean = np.mean(y)
else:
self.mean = np.average(y, weights=sample_weight)
def predict(self, X):
check_is_fitted(self, 'mean')
y = np.empty((X.shape[0], 1), dtype=np.float64)
y.fill(self.mean)
return y
class LogOddsEstimator(object):
"""An estimator predicting the log odds ratio."""
scale = 1.0
def fit(self, X, y, sample_weight=None):
# pre-cond: pos, neg are encoded as 1, 0
if sample_weight is None:
pos = np.sum(y)
neg = y.shape[0] - pos
else:
pos = np.sum(sample_weight * y)
neg = np.sum(sample_weight * (1 - y))
if neg == 0 or pos == 0:
raise ValueError('y contains non binary labels.')
self.prior = self.scale * np.log(pos / neg)
def predict(self, X):
check_is_fitted(self, 'prior')
y = np.empty((X.shape[0], 1), dtype=np.float64)
y.fill(self.prior)
return y
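# Worked example (editor's comment, not part of the scikit-learn source): with
# binary targets encoded as 1/0 and, say, 75 positives out of 100 unweighted
# samples, fit() stores prior = 1.0 * log(75 / 25) = log(3) ~= 1.0986, and
# predict() returns that constant for every row -- the usual log-odds starting
# point used by the binomial deviance loss below.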
class ScaledLogOddsEstimator(LogOddsEstimator):
"""Log odds ratio scaled by 0.5 -- for exponential loss. """
scale = 0.5
class PriorProbabilityEstimator(object):
"""An estimator predicting the probability of each
class in the training data.
"""
def fit(self, X, y, sample_weight=None):
if sample_weight is None:
sample_weight = np.ones_like(y, dtype=np.float64)
class_counts = np.bincount(y, weights=sample_weight)
self.priors = class_counts / class_counts.sum()
def predict(self, X):
check_is_fitted(self, 'priors')
y = np.empty((X.shape[0], self.priors.shape[0]), dtype=np.float64)
y[:] = self.priors
return y
class ZeroEstimator(object):
"""An estimator that simply predicts zero. """
def fit(self, X, y, sample_weight=None):
if np.issubdtype(y.dtype, np.signedinteger):
# classification
self.n_classes = np.unique(y).shape[0]
if self.n_classes == 2:
self.n_classes = 1
else:
# regression
self.n_classes = 1
def predict(self, X):
check_is_fitted(self, 'n_classes')
y = np.empty((X.shape[0], self.n_classes), dtype=np.float64)
y.fill(0.0)
return y
class LossFunction(six.with_metaclass(ABCMeta, object)):
"""Abstract base class for various loss functions.
Attributes
----------
K : int
The number of regression trees to be induced;
1 for regression and binary classification;
``n_classes`` for multi-class classification.
"""
is_multi_class = False
def __init__(self, n_classes):
self.K = n_classes
def init_estimator(self):
"""Default ``init`` estimator for loss function. """
raise NotImplementedError()
@abstractmethod
def __call__(self, y, pred, sample_weight=None):
"""Compute the loss of prediction ``pred`` and ``y``. """
@abstractmethod
def negative_gradient(self, y, y_pred, **kargs):
"""Compute the negative gradient.
Parameters
---------
y : np.ndarray, shape=(n,)
The target labels.
y_pred : np.ndarray, shape=(n,):
The predictions.
"""
def update_terminal_regions(self, tree, X, y, residual, y_pred,
sample_weight, sample_mask,
learning_rate=1.0, k=0):
"""Update the terminal regions (=leaves) of the given tree and
updates the current predictions of the model. Traverses tree
and invokes template method `_update_terminal_region`.
Parameters
----------
tree : tree.Tree
The tree object.
X : ndarray, shape=(n, m)
The data array.
y : ndarray, shape=(n,)
The target labels.
residual : ndarray, shape=(n,)
The residuals (usually the negative gradient).
y_pred : ndarray, shape=(n,)
The predictions.
sample_weight : ndarray, shape=(n,)
The weight of each sample.
sample_mask : ndarray, shape=(n,)
The sample mask to be used.
learning_rate : float, default=1.0
learning rate shrinks the contribution of each tree by
``learning_rate``.
k : int, default 0
The index of the estimator being updated.
"""
# compute leaf for each sample in ``X``.
terminal_regions = tree.apply(X)
# mask all which are not in sample mask.
masked_terminal_regions = terminal_regions.copy()
masked_terminal_regions[~sample_mask] = -1
# update each leaf (= perform line search)
for leaf in np.where(tree.children_left == TREE_LEAF)[0]:
self._update_terminal_region(tree, masked_terminal_regions,
leaf, X, y, residual,
y_pred[:, k], sample_weight)
# update predictions (both in-bag and out-of-bag)
y_pred[:, k] += (learning_rate
* tree.value[:, 0, 0].take(terminal_regions, axis=0))
@abstractmethod
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
"""Template method for updating terminal regions (=leaves). """
class RegressionLossFunction(six.with_metaclass(ABCMeta, LossFunction)):
"""Base class for regression loss functions. """
def __init__(self, n_classes):
if n_classes != 1:
raise ValueError("``n_classes`` must be 1 for regression but "
"was %r" % n_classes)
super(RegressionLossFunction, self).__init__(n_classes)
class LeastSquaresError(RegressionLossFunction):
"""Loss function for least squares (LS) estimation.
Terminal regions need not to be updated for least squares. """
def init_estimator(self):
return MeanEstimator()
def __call__(self, y, pred, sample_weight=None):
if sample_weight is None:
return np.mean((y - pred.ravel()) ** 2.0)
else:
return (1.0 / sample_weight.sum() *
np.sum(sample_weight * ((y - pred.ravel()) ** 2.0)))
def negative_gradient(self, y, pred, **kargs):
return y - pred.ravel()
def update_terminal_regions(self, tree, X, y, residual, y_pred,
sample_weight, sample_mask,
learning_rate=1.0, k=0):
"""Least squares does not need to update terminal regions.
But it has to update the predictions.
"""
# update predictions
y_pred[:, k] += learning_rate * tree.predict(X).ravel()
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
pass
class LeastAbsoluteError(RegressionLossFunction):
"""Loss function for least absolute deviation (LAD) regression. """
def init_estimator(self):
return QuantileEstimator(alpha=0.5)
def __call__(self, y, pred, sample_weight=None):
if sample_weight is None:
return np.abs(y - pred.ravel()).mean()
else:
return (1.0 / sample_weight.sum() *
np.sum(sample_weight * np.abs(y - pred.ravel())))
def negative_gradient(self, y, pred, **kargs):
"""1.0 if y - pred > 0.0 else -1.0"""
pred = pred.ravel()
return 2.0 * (y - pred > 0.0) - 1.0
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
"""LAD updates terminal regions to median estimates. """
terminal_region = np.where(terminal_regions == leaf)[0]
sample_weight = sample_weight.take(terminal_region, axis=0)
diff = y.take(terminal_region, axis=0) - pred.take(terminal_region, axis=0)
tree.value[leaf, 0, 0] = _weighted_percentile(diff, sample_weight, percentile=50)
class HuberLossFunction(RegressionLossFunction):
"""Huber loss function for robust regression.
M-Regression proposed in Friedman 2001.
References
----------
J. Friedman, Greedy Function Approximation: A Gradient Boosting
Machine, The Annals of Statistics, Vol. 29, No. 5, 2001.
"""
def __init__(self, n_classes, alpha=0.9):
super(HuberLossFunction, self).__init__(n_classes)
self.alpha = alpha
self.gamma = None
def init_estimator(self):
return QuantileEstimator(alpha=0.5)
def __call__(self, y, pred, sample_weight=None):
pred = pred.ravel()
diff = y - pred
gamma = self.gamma
if gamma is None:
if sample_weight is None:
gamma = stats.scoreatpercentile(np.abs(diff), self.alpha * 100)
else:
gamma = _weighted_percentile(np.abs(diff), sample_weight, self.alpha * 100)
gamma_mask = np.abs(diff) <= gamma
if sample_weight is None:
sq_loss = np.sum(0.5 * diff[gamma_mask] ** 2.0)
lin_loss = np.sum(gamma * (np.abs(diff[~gamma_mask]) - gamma / 2.0))
loss = (sq_loss + lin_loss) / y.shape[0]
else:
sq_loss = np.sum(0.5 * sample_weight[gamma_mask] * diff[gamma_mask] ** 2.0)
lin_loss = np.sum(gamma * sample_weight[~gamma_mask] *
(np.abs(diff[~gamma_mask]) - gamma / 2.0))
loss = (sq_loss + lin_loss) / sample_weight.sum()
return loss
def negative_gradient(self, y, pred, sample_weight=None, **kargs):
pred = pred.ravel()
diff = y - pred
if sample_weight is None:
gamma = stats.scoreatpercentile(np.abs(diff), self.alpha * 100)
else:
gamma = _weighted_percentile(np.abs(diff), sample_weight, self.alpha * 100)
gamma_mask = np.abs(diff) <= gamma
residual = np.zeros((y.shape[0],), dtype=np.float64)
residual[gamma_mask] = diff[gamma_mask]
residual[~gamma_mask] = gamma * np.sign(diff[~gamma_mask])
self.gamma = gamma
return residual
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
terminal_region = np.where(terminal_regions == leaf)[0]
sample_weight = sample_weight.take(terminal_region, axis=0)
gamma = self.gamma
diff = (y.take(terminal_region, axis=0)
- pred.take(terminal_region, axis=0))
median = _weighted_percentile(diff, sample_weight, percentile=50)
diff_minus_median = diff - median
tree.value[leaf, 0] = median + np.mean(
np.sign(diff_minus_median) *
np.minimum(np.abs(diff_minus_median), gamma))
class QuantileLossFunction(RegressionLossFunction):
"""Loss function for quantile regression.
Quantile regression makes it possible to estimate the percentiles
of the conditional distribution of the target.
"""
def __init__(self, n_classes, alpha=0.9):
super(QuantileLossFunction, self).__init__(n_classes)
self.alpha = alpha
self.percentile = alpha * 100.0
def init_estimator(self):
return QuantileEstimator(self.alpha)
def __call__(self, y, pred, sample_weight=None):
pred = pred.ravel()
diff = y - pred
alpha = self.alpha
mask = y > pred
if sample_weight is None:
loss = (alpha * diff[mask].sum() -
(1.0 - alpha) * diff[~mask].sum()) / y.shape[0]
else:
loss = ((alpha * np.sum(sample_weight[mask] * diff[mask]) -
(1.0 - alpha) * np.sum(sample_weight[~mask] * diff[~mask])) /
sample_weight.sum())
return loss
def negative_gradient(self, y, pred, **kargs):
alpha = self.alpha
pred = pred.ravel()
mask = y > pred
return (alpha * mask) - ((1.0 - alpha) * ~mask)
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
terminal_region = np.where(terminal_regions == leaf)[0]
diff = (y.take(terminal_region, axis=0)
- pred.take(terminal_region, axis=0))
sample_weight = sample_weight.take(terminal_region, axis=0)
val = _weighted_percentile(diff, sample_weight, self.percentile)
tree.value[leaf, 0] = val
class ClassificationLossFunction(six.with_metaclass(ABCMeta, LossFunction)):
"""Base class for classification loss functions. """
def _score_to_proba(self, score):
"""Template method to convert scores to probabilities.
Classes that do not support probabilities should raise a TypeError.
"""
raise TypeError('%s does not support predict_proba' % type(self).__name__)
@abstractmethod
def _score_to_decision(self, score):
"""Template method to convert scores to decisions.
Returns int arrays.
"""
class BinomialDeviance(ClassificationLossFunction):
"""Binomial deviance loss function for binary classification.
Binary classification is a special case; here, we only need to
fit one tree instead of ``n_classes`` trees.
"""
def __init__(self, n_classes):
if n_classes != 2:
raise ValueError("{0:s} requires 2 classes; got {1:d} class(es)"
.format(self.__class__.__name__, n_classes))
# we only need to fit one tree for binary clf.
super(BinomialDeviance, self).__init__(1)
def init_estimator(self):
return LogOddsEstimator()
def __call__(self, y, pred, sample_weight=None):
"""Compute the deviance (= 2 * negative log-likelihood). """
# logaddexp(0, v) == log(1.0 + exp(v))
pred = pred.ravel()
if sample_weight is None:
return -2.0 * np.mean((y * pred) - np.logaddexp(0.0, pred))
else:
return (-2.0 / sample_weight.sum() *
np.sum(sample_weight * ((y * pred) - np.logaddexp(0.0, pred))))
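# Editor's note spelling out the formula implemented above: writing f for the
# raw score and y in {0, 1}, the per-sample binomial deviance is
#     -2 * (y * f - log(1 + exp(f)))
# i.e. minus twice the Bernoulli log-likelihood with p = expit(f); the residual
# returned by negative_gradient() below is then simply y - expit(f).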
def negative_gradient(self, y, pred, **kargs):
"""Compute the residual (= negative gradient). """
return y - expit(pred.ravel())
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
"""Make a single Newton-Raphson step.
our node estimate is given by:
sum(w * (y - prob)) / sum(w * prob * (1 - prob))
we take advantage of the fact that: y - prob = residual
"""
terminal_region = np.where(terminal_regions == leaf)[0]
residual = residual.take(terminal_region, axis=0)
y = y.take(terminal_region, axis=0)
sample_weight = sample_weight.take(terminal_region, axis=0)
numerator = np.sum(sample_weight * residual)
denominator = np.sum(sample_weight * (y - residual) * (1 - y + residual))
# prevents overflow and division by zero
if abs(denominator) < 1e-150:
tree.value[leaf, 0, 0] = 0.0
else:
tree.value[leaf, 0, 0] = numerator / denominator
def _score_to_proba(self, score):
proba = np.ones((score.shape[0], 2), dtype=np.float64)
proba[:, 1] = expit(score.ravel())
proba[:, 0] -= proba[:, 1]
return proba
def _score_to_decision(self, score):
proba = self._score_to_proba(score)
return np.argmax(proba, axis=1)
class MultinomialDeviance(ClassificationLossFunction):
"""Multinomial deviance loss function for multi-class classification.
For multi-class classification we need to fit ``n_classes`` trees at
each stage.
"""
is_multi_class = True
def __init__(self, n_classes):
if n_classes < 3:
raise ValueError("{0:s} requires more than 2 classes.".format(
self.__class__.__name__))
super(MultinomialDeviance, self).__init__(n_classes)
def init_estimator(self):
return PriorProbabilityEstimator()
def __call__(self, y, pred, sample_weight=None):
# create one-hot label encoding
Y = np.zeros((y.shape[0], self.K), dtype=np.float64)
for k in range(self.K):
Y[:, k] = y == k
if sample_weight is None:
return np.sum(-1 * (Y * pred).sum(axis=1) +
logsumexp(pred, axis=1))
else:
return np.sum(-1 * sample_weight * (Y * pred).sum(axis=1) +
logsumexp(pred, axis=1))
def negative_gradient(self, y, pred, k=0, **kwargs):
"""Compute negative gradient for the ``k``-th class. """
return y - np.nan_to_num(np.exp(pred[:, k] -
logsumexp(pred, axis=1)))
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
"""Make a single Newton-Raphson step. """
terminal_region = np.where(terminal_regions == leaf)[0]
residual = residual.take(terminal_region, axis=0)
y = y.take(terminal_region, axis=0)
sample_weight = sample_weight.take(terminal_region, axis=0)
numerator = np.sum(sample_weight * residual)
numerator *= (self.K - 1) / self.K
denominator = np.sum(sample_weight * (y - residual) *
(1.0 - y + residual))
# prevents overflow and division by zero
if abs(denominator) < 1e-150:
tree.value[leaf, 0, 0] = 0.0
else:
tree.value[leaf, 0, 0] = numerator / denominator
def _score_to_proba(self, score):
return np.nan_to_num(
np.exp(score - (logsumexp(score, axis=1)[:, np.newaxis])))
def _score_to_decision(self, score):
proba = self._score_to_proba(score)
return np.argmax(proba, axis=1)
class ExponentialLoss(ClassificationLossFunction):
"""Exponential loss function for binary classification.
Same loss as AdaBoost.
References
----------
Greg Ridgeway, Generalized Boosted Models: A guide to the gbm package, 2007
"""
def __init__(self, n_classes):
if n_classes != 2:
raise ValueError("{0:s} requires 2 classes; got {1:d} class(es)"
.format(self.__class__.__name__, n_classes))
# we only need to fit one tree for binary clf.
super(ExponentialLoss, self).__init__(1)
def init_estimator(self):
return ScaledLogOddsEstimator()
def __call__(self, y, pred, sample_weight=None):
pred = pred.ravel()
if sample_weight is None:
return np.mean(np.exp(-(2. * y - 1.) * pred))
else:
return (1.0 / sample_weight.sum() *
np.sum(sample_weight * np.exp(-(2 * y - 1) * pred)))
def negative_gradient(self, y, pred, **kargs):
y_ = -(2. * y - 1.)
return y_ * np.exp(y_ * pred.ravel())
def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
residual, pred, sample_weight):
terminal_region = np.where(terminal_regions == leaf)[0]
pred = pred.take(terminal_region, axis=0)
y = y.take(terminal_region, axis=0)
sample_weight = sample_weight.take(terminal_region, axis=0)
y_ = 2. * y - 1.
numerator = np.sum(y_ * sample_weight * np.exp(-y_ * pred))
denominator = np.sum(sample_weight * np.exp(-y_ * pred))
# prevents overflow and division by zero
if abs(denominator) < 1e-150:
tree.value[leaf, 0, 0] = 0.0
else:
tree.value[leaf, 0, 0] = numerator / denominator
def _score_to_proba(self, score):
proba = np.ones((score.shape[0], 2), dtype=np.float64)
proba[:, 1] = expit(2.0 * score.ravel())
proba[:, 0] -= proba[:, 1]
return proba
def _score_to_decision(self, score):
return (score.ravel() >= 0.0).astype(np.int)
LOSS_FUNCTIONS = {'ls': LeastSquaresError,
'lad': LeastAbsoluteError,
'huber': HuberLossFunction,
'quantile': QuantileLossFunction,
'deviance': None, # for both, multinomial and binomial
'exponential': ExponentialLoss,
}
INIT_ESTIMATORS = {'zero': ZeroEstimator}
class VerboseReporter(object):
"""Reports verbose output to stdout.
    If ``verbose==1``, output is printed once in a while (when iteration mod
    verbose_mod is zero); if larger than 1, output is printed for
    each update.
"""
def __init__(self, verbose):
self.verbose = verbose
def init(self, est, begin_at_stage=0):
# header fields and line format str
header_fields = ['Iter', 'Train Loss']
verbose_fmt = ['{iter:>10d}', '{train_score:>16.4f}']
# do oob?
if est.subsample < 1:
header_fields.append('OOB Improve')
verbose_fmt.append('{oob_impr:>16.4f}')
header_fields.append('Remaining Time')
verbose_fmt.append('{remaining_time:>16s}')
# print the header line
print(('%10s ' + '%16s ' *
(len(header_fields) - 1)) % tuple(header_fields))
self.verbose_fmt = ' '.join(verbose_fmt)
# plot verbose info each time i % verbose_mod == 0
self.verbose_mod = 1
self.start_time = time()
self.begin_at_stage = begin_at_stage
def update(self, j, est):
"""Update reporter with new iteration. """
do_oob = est.subsample < 1
# we need to take into account if we fit additional estimators.
i = j - self.begin_at_stage # iteration relative to the start iter
if (i + 1) % self.verbose_mod == 0:
oob_impr = est.oob_improvement_[j] if do_oob else 0
remaining_time = ((est.n_estimators - (j + 1)) *
(time() - self.start_time) / float(i + 1))
if remaining_time > 60:
remaining_time = '{0:.2f}m'.format(remaining_time / 60.0)
else:
remaining_time = '{0:.2f}s'.format(remaining_time)
print(self.verbose_fmt.format(iter=j + 1,
train_score=est.train_score_[j],
oob_impr=oob_impr,
remaining_time=remaining_time))
if self.verbose == 1 and ((i + 1) // (self.verbose_mod * 10) > 0):
# adjust verbose frequency (powers of 10)
self.verbose_mod *= 10
class BaseGradientBoosting(six.with_metaclass(ABCMeta, BaseEnsemble)):
"""Abstract base class for Gradient Boosting. """
@abstractmethod
def __init__(self, loss, learning_rate, n_estimators, criterion,
min_samples_split, min_samples_leaf, min_weight_fraction_leaf,
max_depth, min_impurity_decrease, min_impurity_split,
init, subsample, max_features,
random_state, alpha=0.9, verbose=0, max_leaf_nodes=None,
warm_start=False, presort='auto',
validation_fraction=0.1, n_iter_no_change=None,
tol=1e-4):
self.n_estimators = n_estimators
self.learning_rate = learning_rate
self.loss = loss
self.criterion = criterion
self.min_samples_split = min_samples_split
self.min_samples_leaf = min_samples_leaf
self.min_weight_fraction_leaf = min_weight_fraction_leaf
self.subsample = subsample
self.max_features = max_features
self.max_depth = max_depth
self.min_impurity_decrease = min_impurity_decrease
self.min_impurity_split = min_impurity_split
self.init = init
self.random_state = random_state
self.alpha = alpha
self.verbose = verbose
self.max_leaf_nodes = max_leaf_nodes
self.warm_start = warm_start
self.presort = presort
self.validation_fraction = validation_fraction
self.n_iter_no_change = n_iter_no_change
self.tol = tol
def _fit_stage(self, i, X, y, y_pred, sample_weight, sample_mask,
random_state, X_idx_sorted, X_csc=None, X_csr=None):
"""Fit another stage of ``n_classes_`` trees to the boosting model. """
assert sample_mask.dtype == np.bool
loss = self.loss_
original_y = y
for k in range(loss.K):
if loss.is_multi_class:
y = np.array(original_y == k, dtype=np.float64)
residual = loss.negative_gradient(y, y_pred, k=k,
sample_weight=sample_weight)
# induce regression tree on residuals
tree = DecisionTreeRegressor(
criterion=self.criterion,
splitter='best',
max_depth=self.max_depth,
min_samples_split=self.min_samples_split,
min_samples_leaf=self.min_samples_leaf,
min_weight_fraction_leaf=self.min_weight_fraction_leaf,
min_impurity_decrease=self.min_impurity_decrease,
min_impurity_split=self.min_impurity_split,
max_features=self.max_features,
max_leaf_nodes=self.max_leaf_nodes,
random_state=random_state,
presort=self.presort)
if self.subsample < 1.0:
# no inplace multiplication!
sample_weight = sample_weight * sample_mask.astype(np.float64)
if X_csc is not None:
tree.fit(X_csc, residual, sample_weight=sample_weight,
check_input=False, X_idx_sorted=X_idx_sorted)
else:
tree.fit(X, residual, sample_weight=sample_weight,
check_input=False, X_idx_sorted=X_idx_sorted)
# update tree leaves
if X_csr is not None:
loss.update_terminal_regions(tree.tree_, X_csr, y, residual, y_pred,
sample_weight, sample_mask,
self.learning_rate, k=k)
else:
loss.update_terminal_regions(tree.tree_, X, y, residual, y_pred,
sample_weight, sample_mask,
self.learning_rate, k=k)
# add tree to ensemble
self.estimators_[i, k] = tree
return y_pred
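    # A minimal standalone sketch (not part of the original module) of the
    # boosting step performed by ``_fit_stage`` above, specialised to a
    # least-squares loss: each stage fits a small tree to the current
    # residuals and its shrunken predictions are added to the model output.
    #
    #     import numpy as np
    #     from sklearn.tree import DecisionTreeRegressor
    #
    #     def ls_boost(X, y, n_stages=10, learning_rate=0.1, max_depth=3):
    #         y_pred = np.full(y.shape, y.mean())   # init estimator = mean
    #         trees = []
    #         for _ in range(n_stages):
    #             residual = y - y_pred             # negative gradient of LS loss
    #             tree = DecisionTreeRegressor(max_depth=max_depth)
    #             tree.fit(X, residual)
    #             y_pred += learning_rate * tree.predict(X)
    #             trees.append(tree)
    #         return trees, y_pred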
def _check_params(self):
"""Check validity of parameters and raise ValueError if not valid. """
if self.n_estimators <= 0:
raise ValueError("n_estimators must be greater than 0 but "
"was %r" % self.n_estimators)
if self.learning_rate <= 0.0:
raise ValueError("learning_rate must be greater than 0 but "
"was %r" % self.learning_rate)
if (self.loss not in self._SUPPORTED_LOSS
or self.loss not in LOSS_FUNCTIONS):
raise ValueError("Loss '{0:s}' not supported. ".format(self.loss))
if self.loss == 'deviance':
loss_class = (MultinomialDeviance
if len(self.classes_) > 2
else BinomialDeviance)
else:
loss_class = LOSS_FUNCTIONS[self.loss]
if self.loss in ('huber', 'quantile'):
self.loss_ = loss_class(self.n_classes_, self.alpha)
else:
self.loss_ = loss_class(self.n_classes_)
if not (0.0 < self.subsample <= 1.0):
raise ValueError("subsample must be in (0,1] but "
"was %r" % self.subsample)
if self.init is not None:
if isinstance(self.init, six.string_types):
if self.init not in INIT_ESTIMATORS:
raise ValueError('init="%s" is not supported' % self.init)
else:
if (not hasattr(self.init, 'fit')
or not hasattr(self.init, 'predict')):
raise ValueError("init=%r must be valid BaseEstimator "
"and support both fit and "
"predict" % self.init)
if not (0.0 < self.alpha < 1.0):
raise ValueError("alpha must be in (0.0, 1.0) but "
"was %r" % self.alpha)
if isinstance(self.max_features, six.string_types):
if self.max_features == "auto":
# if is_classification
if self.n_classes_ > 1:
max_features = max(1, int(np.sqrt(self.n_features_)))
else:
# is regression
max_features = self.n_features_
elif self.max_features == "sqrt":
max_features = max(1, int(np.sqrt(self.n_features_)))
elif self.max_features == "log2":
max_features = max(1, int(np.log2(self.n_features_)))
else:
raise ValueError("Invalid value for max_features: %r. "
"Allowed string values are 'auto', 'sqrt' "
"or 'log2'." % self.max_features)
elif self.max_features is None:
max_features = self.n_features_
elif isinstance(self.max_features, (numbers.Integral, np.integer)):
max_features = self.max_features
else: # float
if 0. < self.max_features <= 1.:
max_features = max(int(self.max_features *
self.n_features_), 1)
else:
raise ValueError("max_features must be in (0, n_features]")
self.max_features_ = max_features
if not isinstance(self.n_iter_no_change,
(numbers.Integral, np.integer, type(None))):
raise ValueError("n_iter_no_change should either be None or an "
"integer. %r was passed"
% self.n_iter_no_change)
allowed_presort = ('auto', True, False)
if self.presort not in allowed_presort:
raise ValueError("'presort' should be in {}. Got {!r} instead."
.format(allowed_presort, self.presort))
def _init_state(self):
"""Initialize model state and allocate model state data structures. """
if self.init is None:
self.init_ = self.loss_.init_estimator()
elif isinstance(self.init, six.string_types):
self.init_ = INIT_ESTIMATORS[self.init]()
else:
self.init_ = self.init
self.estimators_ = np.empty((self.n_estimators, self.loss_.K),
dtype=np.object)
self.train_score_ = np.zeros((self.n_estimators,), dtype=np.float64)
# do oob?
if self.subsample < 1.0:
self.oob_improvement_ = np.zeros((self.n_estimators),
dtype=np.float64)
def _clear_state(self):
"""Clear the state of the gradient boosting model. """
if hasattr(self, 'estimators_'):
self.estimators_ = np.empty((0, 0), dtype=np.object)
if hasattr(self, 'train_score_'):
del self.train_score_
if hasattr(self, 'oob_improvement_'):
del self.oob_improvement_
if hasattr(self, 'init_'):
del self.init_
if hasattr(self, '_rng'):
del self._rng
def _resize_state(self):
"""Add additional ``n_estimators`` entries to all attributes. """
# self.n_estimators is the number of additional est to fit
total_n_estimators = self.n_estimators
if total_n_estimators < self.estimators_.shape[0]:
raise ValueError('resize with smaller n_estimators %d < %d' %
                             (total_n_estimators, self.estimators_.shape[0]))
self.estimators_.resize((total_n_estimators, self.loss_.K))
self.train_score_.resize(total_n_estimators)
if (self.subsample < 1 or hasattr(self, 'oob_improvement_')):
# if do oob resize arrays or create new if not available
if hasattr(self, 'oob_improvement_'):
self.oob_improvement_.resize(total_n_estimators)
else:
self.oob_improvement_ = np.zeros((total_n_estimators,),
dtype=np.float64)
def _is_initialized(self):
return len(getattr(self, 'estimators_', [])) > 0
def _check_initialized(self):
"""Check that the estimator is initialized, raising an error if not."""
check_is_fitted(self, 'estimators_')
@property
@deprecated("Attribute n_features was deprecated in version 0.19 and "
"will be removed in 0.21.")
def n_features(self):
return self.n_features_
def fit(self, X, y, sample_weight=None, monitor=None):
"""Fit the gradient boosting model.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values (integers in classification, real numbers in
regression)
For classification, labels must correspond to classes.
sample_weight : array-like, shape = [n_samples] or None
Sample weights. If None, then samples are equally weighted. Splits
that would create child nodes with net zero or negative weight are
ignored while searching for a split in each node. In the case of
classification, splits are also ignored if they would result in any
single class carrying a negative weight in either child node.
monitor : callable, optional
The monitor is called after each iteration with the current
iteration, a reference to the estimator and the local variables of
``_fit_stages`` as keyword arguments ``callable(i, self,
locals())``. If the callable returns ``True`` the fitting procedure
is stopped. The monitor can be used for various things such as
            computing held-out estimates, early stopping, model introspection,
            and snapshotting.
Returns
-------
self : object
Returns self.
"""
# if not warmstart - clear the estimator state
if not self.warm_start:
self._clear_state()
# Check input
X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
n_samples, self.n_features_ = X.shape
if sample_weight is None:
sample_weight = np.ones(n_samples, dtype=np.float32)
else:
sample_weight = column_or_1d(sample_weight, warn=True)
check_consistent_length(X, y, sample_weight)
y = self._validate_y(y)
if self.n_iter_no_change is not None:
X, X_val, y, y_val, sample_weight, sample_weight_val = (
train_test_split(X, y, sample_weight,
random_state=self.random_state,
test_size=self.validation_fraction))
else:
X_val = y_val = sample_weight_val = None
self._check_params()
if not self._is_initialized():
# init state
self._init_state()
# fit initial model - FIXME make sample_weight optional
self.init_.fit(X, y, sample_weight)
# init predictions
y_pred = self.init_.predict(X)
begin_at_stage = 0
# The rng state must be preserved if warm_start is True
self._rng = check_random_state(self.random_state)
else:
# add more estimators to fitted model
# invariant: warm_start = True
if self.n_estimators < self.estimators_.shape[0]:
raise ValueError('n_estimators=%d must be larger or equal to '
'estimators_.shape[0]=%d when '
'warm_start==True'
% (self.n_estimators,
self.estimators_.shape[0]))
begin_at_stage = self.estimators_.shape[0]
# The requirements of _decision_function (called in two lines
# below) are more constrained than fit. It accepts only CSR
# matrices.
X = check_array(X, dtype=DTYPE, order="C", accept_sparse='csr')
y_pred = self._decision_function(X)
self._resize_state()
if self.presort is True and issparse(X):
raise ValueError(
"Presorting is not supported for sparse matrices.")
presort = self.presort
# Allow presort to be 'auto', which means True if the dataset is dense,
# otherwise it will be False.
if presort == 'auto':
presort = not issparse(X)
X_idx_sorted = None
if presort:
X_idx_sorted = np.asfortranarray(np.argsort(X, axis=0),
dtype=np.int32)
# fit the boosting stages
n_stages = self._fit_stages(X, y, y_pred, sample_weight, self._rng,
X_val, y_val, sample_weight_val,
begin_at_stage, monitor, X_idx_sorted)
# change shape of arrays after fit (early-stopping or additional ests)
if n_stages != self.estimators_.shape[0]:
self.estimators_ = self.estimators_[:n_stages]
self.train_score_ = self.train_score_[:n_stages]
if hasattr(self, 'oob_improvement_'):
self.oob_improvement_ = self.oob_improvement_[:n_stages]
self.n_estimators_ = n_stages
return self
def _fit_stages(self, X, y, y_pred, sample_weight, random_state,
X_val, y_val, sample_weight_val,
begin_at_stage=0, monitor=None, X_idx_sorted=None):
"""Iteratively fits the stages.
For each stage it computes the progress (OOB, train score)
and delegates to ``_fit_stage``.
Returns the number of stages fit; might differ from ``n_estimators``
due to early stopping.
"""
n_samples = X.shape[0]
do_oob = self.subsample < 1.0
sample_mask = np.ones((n_samples, ), dtype=np.bool)
n_inbag = max(1, int(self.subsample * n_samples))
loss_ = self.loss_
# Set min_weight_leaf from min_weight_fraction_leaf
if self.min_weight_fraction_leaf != 0. and sample_weight is not None:
min_weight_leaf = (self.min_weight_fraction_leaf *
np.sum(sample_weight))
else:
min_weight_leaf = 0.
if self.verbose:
verbose_reporter = VerboseReporter(self.verbose)
verbose_reporter.init(self, begin_at_stage)
X_csc = csc_matrix(X) if issparse(X) else None
X_csr = csr_matrix(X) if issparse(X) else None
if self.n_iter_no_change is not None:
loss_history = np.ones(self.n_iter_no_change) * np.inf
# We create a generator to get the predictions for X_val after
# the addition of each successive stage
y_val_pred_iter = self._staged_decision_function(X_val)
# perform boosting iterations
i = begin_at_stage
for i in range(begin_at_stage, self.n_estimators):
# subsampling
if do_oob:
sample_mask = _random_sample_mask(n_samples, n_inbag,
random_state)
# OOB score before adding this stage
old_oob_score = loss_(y[~sample_mask],
y_pred[~sample_mask],
sample_weight[~sample_mask])
# fit next stage of trees
y_pred = self._fit_stage(i, X, y, y_pred, sample_weight,
sample_mask, random_state, X_idx_sorted,
X_csc, X_csr)
# track deviance (= loss)
if do_oob:
self.train_score_[i] = loss_(y[sample_mask],
y_pred[sample_mask],
sample_weight[sample_mask])
self.oob_improvement_[i] = (
old_oob_score - loss_(y[~sample_mask],
y_pred[~sample_mask],
sample_weight[~sample_mask]))
else:
# no need to fancy index w/ no subsampling
self.train_score_[i] = loss_(y, y_pred, sample_weight)
if self.verbose > 0:
verbose_reporter.update(i, self)
if monitor is not None:
early_stopping = monitor(i, self, locals())
if early_stopping:
break
# We also provide an early stopping based on the score from
# validation set (X_val, y_val), if n_iter_no_change is set
if self.n_iter_no_change is not None:
# By calling next(y_val_pred_iter), we get the predictions
# for X_val after the addition of the current stage
validation_loss = loss_(y_val, next(y_val_pred_iter),
sample_weight_val)
# Require validation_score to be better (less) than at least
# one of the last n_iter_no_change evaluations
if np.any(validation_loss + self.tol < loss_history):
loss_history[i % len(loss_history)] = validation_loss
else:
break
return i + 1
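    # A minimal sketch (not part of the original module) of the
    # ``n_iter_no_change`` early-stopping rule used in ``_fit_stages`` above:
    # training continues only while the current validation loss improves on at
    # least one of the last ``n_iter_no_change`` recorded losses by more than
    # ``tol``.
    #
    #     import numpy as np
    #
    #     def keeps_training(validation_loss, loss_history, tol=1e-4):
    #         # True if the new loss beats any entry in the rolling history
    #         return np.any(validation_loss + tol < loss_history)
    #
    #     history = np.full(3, np.inf)        # n_iter_no_change == 3
    #     for i, loss in enumerate([0.9, 0.85, 0.84, 0.84, 0.84, 0.84]):
    #         if keeps_training(loss, history):
    #             history[i % len(history)] = loss
    #         else:
    #             break                       # stops at the sixth stage here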
def _make_estimator(self, append=True):
# we don't need _make_estimator
raise NotImplementedError()
def _init_decision_function(self, X):
"""Check input and compute prediction of ``init``. """
self._check_initialized()
X = self.estimators_[0, 0]._validate_X_predict(X, check_input=True)
if X.shape[1] != self.n_features_:
raise ValueError("X.shape[1] should be {0:d}, not {1:d}.".format(
self.n_features_, X.shape[1]))
score = self.init_.predict(X).astype(np.float64)
return score
def _decision_function(self, X):
# for use in inner loop, not raveling the output in single-class case,
# not doing input validation.
score = self._init_decision_function(X)
predict_stages(self.estimators_, X, self.learning_rate, score)
return score
def _staged_decision_function(self, X):
"""Compute decision function of ``X`` for each iteration.
This method allows monitoring (i.e. determine error on testing set)
after each stage.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
score : generator of array, shape = [n_samples, k]
The decision function of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
Regression and binary classification are special cases with
``k == 1``, otherwise ``k==n_classes``.
"""
X = check_array(X, dtype=DTYPE, order="C", accept_sparse='csr')
score = self._init_decision_function(X)
for i in range(self.estimators_.shape[0]):
predict_stage(self.estimators_, i, X, self.learning_rate, score)
yield score.copy()
@property
def feature_importances_(self):
"""Return the feature importances (the higher, the more important the
feature).
Returns
-------
feature_importances_ : array, shape = [n_features]
"""
self._check_initialized()
total_sum = np.zeros((self.n_features_, ), dtype=np.float64)
for stage in self.estimators_:
stage_sum = sum(tree.feature_importances_
for tree in stage) / len(stage)
total_sum += stage_sum
importances = total_sum / len(self.estimators_)
return importances
def _validate_y(self, y):
self.n_classes_ = 1
if y.dtype.kind == 'O':
y = y.astype(np.float64)
# Default implementation
return y
def apply(self, X):
"""Apply trees in the ensemble to X, return leaf indices.
.. versionadded:: 0.17
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, its dtype will be converted to
``dtype=np.float32``. If a sparse matrix is provided, it will
be converted to a sparse ``csr_matrix``.
Returns
-------
X_leaves : array_like, shape = [n_samples, n_estimators, n_classes]
For each datapoint x in X and for each tree in the ensemble,
return the index of the leaf x ends up in each estimator.
In the case of binary classification n_classes is 1.
"""
self._check_initialized()
X = self.estimators_[0, 0]._validate_X_predict(X, check_input=True)
# n_classes will be equal to 1 in the binary classification or the
# regression case.
n_estimators, n_classes = self.estimators_.shape
leaves = np.zeros((X.shape[0], n_estimators, n_classes))
for i in range(n_estimators):
for j in range(n_classes):
estimator = self.estimators_[i, j]
leaves[:, i, j] = estimator.apply(X, check_input=False)
return leaves
class GradientBoostingClassifier(BaseGradientBoosting, ClassifierMixin):
"""Gradient Boosting for classification.
GB builds an additive model in a
forward stage-wise fashion; it allows for the optimization of
arbitrary differentiable loss functions. In each stage ``n_classes_``
regression trees are fit on the negative gradient of the
binomial or multinomial deviance loss function. Binary classification
is a special case where only a single regression tree is induced.
Read more in the :ref:`User Guide <gradient_boosting>`.
Parameters
----------
loss : {'deviance', 'exponential'}, optional (default='deviance')
loss function to be optimized. 'deviance' refers to
deviance (= logistic regression) for classification
with probabilistic outputs. For loss 'exponential' gradient
boosting recovers the AdaBoost algorithm.
learning_rate : float, optional (default=0.1)
learning rate shrinks the contribution of each tree by `learning_rate`.
There is a trade-off between learning_rate and n_estimators.
n_estimators : int (default=100)
The number of boosting stages to perform. Gradient boosting
is fairly robust to over-fitting so a large number usually
results in better performance.
max_depth : integer, optional (default=3)
maximum depth of the individual regression estimators. The maximum
depth limits the number of nodes in the tree. Tune this parameter
for best performance; the best value depends on the interaction
of the input variables.
criterion : string, optional (default="friedman_mse")
The function to measure the quality of a split. Supported criteria
are "friedman_mse" for the mean squared error with improvement
score by Friedman, "mse" for mean squared error, and "mae" for
the mean absolute error. The default value of "friedman_mse" is
generally the best as it can provide a better approximation in
some cases.
.. versionadded:: 0.18
min_samples_split : int, float, optional (default=2)
The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number.
        - If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split.
.. versionchanged:: 0.18
Added float values for percentages.
min_samples_leaf : int, float, optional (default=1)
The minimum number of samples required to be at a leaf node:
- If int, then consider `min_samples_leaf` as the minimum number.
        - If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node.
.. versionchanged:: 0.18
Added float values for percentages.
min_weight_fraction_leaf : float, optional (default=0.)
The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.
subsample : float, optional (default=1.0)
The fraction of samples to be used for fitting the individual base
learners. If smaller than 1.0 this results in Stochastic Gradient
Boosting. `subsample` interacts with the parameter `n_estimators`.
Choosing `subsample < 1.0` leads to a reduction of variance
and an increase in bias.
max_features : int, float, string or None, optional (default=None)
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split.
        - If float, then `max_features` is a fraction and
`int(max_features * n_features)` features are considered at each
split.
- If "auto", then `max_features=sqrt(n_features)`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.
Choosing `max_features < n_features` leads to a reduction of variance
and an increase in bias.
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features.
max_leaf_nodes : int or None, optional (default=None)
Grow trees with ``max_leaf_nodes`` in best-first fashion.
Best nodes are defined as relative reduction in impurity.
If None then unlimited number of leaf nodes.
min_impurity_split : float,
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.
.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19 and will be removed in 0.21.
Use ``min_impurity_decrease`` instead.
min_impurity_decrease : float, optional (default=0.)
A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.
The weighted impurity decrease equation is the following::
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child.
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.
.. versionadded:: 0.19
init : BaseEstimator, None, optional (default=None)
An estimator object that is used to compute the initial
predictions. ``init`` has to provide ``fit`` and ``predict``.
If None it uses ``loss.init_estimator``.
verbose : int, default: 0
Enable verbose output. If 1 then it prints progress and performance
once in a while (the more trees the lower the frequency). If greater
than 1 then it prints progress and performance for every tree.
warm_start : bool, default: False
When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just erase the
previous solution.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
presort : bool or 'auto', optional (default='auto')
Whether to presort the data to speed up the finding of best splits in
fitting. Auto mode by default will use presorting on dense data and
default to normal sorting on sparse data. Setting presort to true on
sparse data will raise an error.
.. versionadded:: 0.17
*presort* parameter.
validation_fraction : float, optional, default 0.1
The proportion of training data to set aside as validation set for
early stopping. Must be between 0 and 1.
Only used if ``n_iter_no_change`` is set to an integer.
.. versionadded:: 0.20
n_iter_no_change : int, default None
``n_iter_no_change`` is used to decide if early stopping will be used
to terminate training when validation score is not improving. By
default it is set to None to disable early stopping. If set to a
number, it will set aside ``validation_fraction`` size of the training
data as validation and terminate training when validation score is not
improving in all of the previous ``n_iter_no_change`` numbers of
iterations.
.. versionadded:: 0.20
tol : float, optional, default 1e-4
Tolerance for the early stopping. When the loss is not improving
by at least tol for ``n_iter_no_change`` iterations (if set to a
number), the training stops.
.. versionadded:: 0.20
Attributes
----------
n_estimators_ : int
The number of estimators as selected by early stopping (if
``n_iter_no_change`` is specified). Otherwise it is set to
``n_estimators``.
.. versionadded:: 0.20
feature_importances_ : array, shape = [n_features]
The feature importances (the higher, the more important the feature).
oob_improvement_ : array, shape = [n_estimators]
The improvement in loss (= deviance) on the out-of-bag samples
relative to the previous iteration.
``oob_improvement_[0]`` is the improvement in
loss of the first stage over the ``init`` estimator.
train_score_ : array, shape = [n_estimators]
The i-th score ``train_score_[i]`` is the deviance (= loss) of the
model at iteration ``i`` on the in-bag sample.
If ``subsample == 1`` this is the deviance on the training data.
loss_ : LossFunction
The concrete ``LossFunction`` object.
init_ : BaseEstimator
The estimator that provides the initial predictions.
Set via the ``init`` argument or ``loss.init_estimator``.
estimators_ : ndarray of DecisionTreeRegressor, shape = [n_estimators, ``loss_.K``]
The collection of fitted sub-estimators. ``loss_.K`` is 1 for binary
classification, otherwise n_classes.
Notes
-----
The features are always randomly permuted at each split. Therefore,
the best found split may vary, even with the same training data and
``max_features=n_features``, if the improvement of the criterion is
identical for several splits enumerated during the search of the best
split. To obtain a deterministic behaviour during fitting,
``random_state`` has to be fixed.
See also
--------
sklearn.tree.DecisionTreeClassifier, RandomForestClassifier
AdaBoostClassifier
References
----------
J. Friedman, Greedy Function Approximation: A Gradient Boosting
Machine, The Annals of Statistics, Vol. 29, No. 5, 2001.
J. Friedman, Stochastic Gradient Boosting, 1999
T. Hastie, R. Tibshirani and J. Friedman.
Elements of Statistical Learning Ed. 2, Springer, 2009.
"""
_SUPPORTED_LOSS = ('deviance', 'exponential')
def __init__(self, loss='deviance', learning_rate=0.1, n_estimators=100,
subsample=1.0, criterion='friedman_mse', min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.,
max_depth=3, min_impurity_decrease=0.,
min_impurity_split=None, init=None,
random_state=None, max_features=None, verbose=0,
max_leaf_nodes=None, warm_start=False,
presort='auto', validation_fraction=0.1,
n_iter_no_change=None, tol=1e-4):
super(GradientBoostingClassifier, self).__init__(
loss=loss, learning_rate=learning_rate, n_estimators=n_estimators,
criterion=criterion, min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf,
min_weight_fraction_leaf=min_weight_fraction_leaf,
max_depth=max_depth, init=init, subsample=subsample,
max_features=max_features,
random_state=random_state, verbose=verbose,
max_leaf_nodes=max_leaf_nodes,
min_impurity_decrease=min_impurity_decrease,
min_impurity_split=min_impurity_split,
warm_start=warm_start, presort=presort,
validation_fraction=validation_fraction,
n_iter_no_change=n_iter_no_change, tol=tol)
def _validate_y(self, y):
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def decision_function(self, X):
"""Compute the decision function of ``X``.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
score : array, shape = [n_samples, n_classes] or [n_samples]
The decision function of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
Regression and binary classification produce an array of shape
[n_samples].
"""
X = check_array(X, dtype=DTYPE, order="C", accept_sparse='csr')
score = self._decision_function(X)
if score.shape[1] == 1:
return score.ravel()
return score
def staged_decision_function(self, X):
"""Compute decision function of ``X`` for each iteration.
This method allows monitoring (i.e. determine error on testing set)
after each stage.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
score : generator of array, shape = [n_samples, k]
The decision function of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
Regression and binary classification are special cases with
``k == 1``, otherwise ``k==n_classes``.
"""
for dec in self._staged_decision_function(X):
# no yield from in Python2.X
yield dec
def predict(self, X):
"""Predict class for X.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
y : array of shape = [n_samples]
The predicted values.
"""
score = self.decision_function(X)
decisions = self.loss_._score_to_decision(score)
return self.classes_.take(decisions, axis=0)
def staged_predict(self, X):
"""Predict class at each stage for X.
This method allows monitoring (i.e. determine error on testing set)
after each stage.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
y : generator of array of shape = [n_samples]
The predicted value of the input samples.
"""
for score in self._staged_decision_function(X):
decisions = self.loss_._score_to_decision(score)
yield self.classes_.take(decisions, axis=0)
def predict_proba(self, X):
"""Predict class probabilities for X.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Raises
------
AttributeError
If the ``loss`` does not support probabilities.
Returns
-------
        p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
score = self.decision_function(X)
try:
return self.loss_._score_to_proba(score)
except NotFittedError:
raise
except AttributeError:
raise AttributeError('loss=%r does not support predict_proba' %
self.loss)
def predict_log_proba(self, X):
"""Predict class log-probabilities for X.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Raises
------
AttributeError
If the ``loss`` does not support probabilities.
Returns
-------
        p : array of shape = [n_samples, n_classes]
The class log-probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
proba = self.predict_proba(X)
return np.log(proba)
def staged_predict_proba(self, X):
"""Predict class probabilities at each stage for X.
This method allows monitoring (i.e. determine error on testing set)
after each stage.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
        y : generator of array of shape = [n_samples, n_classes]
            The predicted class probabilities of the input samples.
"""
try:
for score in self._staged_decision_function(X):
yield self.loss_._score_to_proba(score)
except NotFittedError:
raise
except AttributeError:
raise AttributeError('loss=%r does not support predict_proba' %
self.loss)
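# A minimal usage sketch of GradientBoostingClassifier (not part of the
# original module); it assumes only scikit-learn's ``make_classification``
# and ``train_test_split`` helpers.
#
#     from sklearn.datasets import make_classification
#     from sklearn.model_selection import train_test_split
#
#     X, y = make_classification(n_samples=500, random_state=0)
#     X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
#     clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
#                                      max_depth=3, random_state=0)
#     clf.fit(X_tr, y_tr)
#     print(clf.score(X_te, y_te))             # mean accuracy
#     proba = clf.predict_proba(X_te)          # shape (n_samples, n_classes)
#     staged = list(clf.staged_predict(X_te))  # one array per boosting stage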
class GradientBoostingRegressor(BaseGradientBoosting, RegressorMixin):
"""Gradient Boosting for regression.
GB builds an additive model in a forward stage-wise fashion;
it allows for the optimization of arbitrary differentiable loss functions.
In each stage a regression tree is fit on the negative gradient of the
given loss function.
Read more in the :ref:`User Guide <gradient_boosting>`.
Parameters
----------
loss : {'ls', 'lad', 'huber', 'quantile'}, optional (default='ls')
loss function to be optimized. 'ls' refers to least squares
regression. 'lad' (least absolute deviation) is a highly robust
loss function solely based on order information of the input
variables. 'huber' is a combination of the two. 'quantile'
allows quantile regression (use `alpha` to specify the quantile).
learning_rate : float, optional (default=0.1)
learning rate shrinks the contribution of each tree by `learning_rate`.
There is a trade-off between learning_rate and n_estimators.
n_estimators : int (default=100)
The number of boosting stages to perform. Gradient boosting
is fairly robust to over-fitting so a large number usually
results in better performance.
max_depth : integer, optional (default=3)
maximum depth of the individual regression estimators. The maximum
depth limits the number of nodes in the tree. Tune this parameter
for best performance; the best value depends on the interaction
of the input variables.
criterion : string, optional (default="friedman_mse")
The function to measure the quality of a split. Supported criteria
are "friedman_mse" for the mean squared error with improvement
score by Friedman, "mse" for mean squared error, and "mae" for
the mean absolute error. The default value of "friedman_mse" is
generally the best as it can provide a better approximation in
some cases.
.. versionadded:: 0.18
min_samples_split : int, float, optional (default=2)
The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number.
        - If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split.
.. versionchanged:: 0.18
Added float values for percentages.
min_samples_leaf : int, float, optional (default=1)
The minimum number of samples required to be at a leaf node:
- If int, then consider `min_samples_leaf` as the minimum number.
        - If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node.
.. versionchanged:: 0.18
Added float values for percentages.
min_weight_fraction_leaf : float, optional (default=0.)
The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided.
subsample : float, optional (default=1.0)
The fraction of samples to be used for fitting the individual base
learners. If smaller than 1.0 this results in Stochastic Gradient
Boosting. `subsample` interacts with the parameter `n_estimators`.
Choosing `subsample < 1.0` leads to a reduction of variance
and an increase in bias.
max_features : int, float, string or None, optional (default=None)
The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split.
        - If float, then `max_features` is a fraction and
`int(max_features * n_features)` features are considered at each
split.
- If "auto", then `max_features=n_features`.
- If "sqrt", then `max_features=sqrt(n_features)`.
- If "log2", then `max_features=log2(n_features)`.
- If None, then `max_features=n_features`.
Choosing `max_features < n_features` leads to a reduction of variance
and an increase in bias.
Note: the search for a split does not stop until at least one
valid partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features.
max_leaf_nodes : int or None, optional (default=None)
Grow trees with ``max_leaf_nodes`` in best-first fashion.
Best nodes are defined as relative reduction in impurity.
If None then unlimited number of leaf nodes.
min_impurity_split : float,
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.
.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19 and will be removed in 0.21.
Use ``min_impurity_decrease`` instead.
min_impurity_decrease : float, optional (default=0.)
A node will be split if this split induces a decrease of the impurity
greater than or equal to this value.
The weighted impurity decrease equation is the following::
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child.
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed.
.. versionadded:: 0.19
alpha : float (default=0.9)
The alpha-quantile of the huber loss function and the quantile
loss function. Only if ``loss='huber'`` or ``loss='quantile'``.
init : BaseEstimator, None, optional (default=None)
An estimator object that is used to compute the initial
predictions. ``init`` has to provide ``fit`` and ``predict``.
If None it uses ``loss.init_estimator``.
verbose : int, default: 0
Enable verbose output. If 1 then it prints progress and performance
once in a while (the more trees the lower the frequency). If greater
than 1 then it prints progress and performance for every tree.
warm_start : bool, default: False
When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just erase the
previous solution.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
presort : bool or 'auto', optional (default='auto')
Whether to presort the data to speed up the finding of best splits in
fitting. Auto mode by default will use presorting on dense data and
default to normal sorting on sparse data. Setting presort to true on
sparse data will raise an error.
.. versionadded:: 0.17
optional parameter *presort*.
validation_fraction : float, optional, default 0.1
The proportion of training data to set aside as validation set for
early stopping. Must be between 0 and 1.
        Only used if ``n_iter_no_change`` is set to an integer.
.. versionadded:: 0.20
n_iter_no_change : int, default None
``n_iter_no_change`` is used to decide if early stopping will be used
to terminate training when validation score is not improving. By
default it is set to None to disable early stopping. If set to a
number, it will set aside ``validation_fraction`` size of the training
data as validation and terminate training when validation score is not
improving in all of the previous ``n_iter_no_change`` numbers of
iterations.
.. versionadded:: 0.20
tol : float, optional, default 1e-4
Tolerance for the early stopping. When the loss is not improving
by at least tol for ``n_iter_no_change`` iterations (if set to a
number), the training stops.
.. versionadded:: 0.20
Attributes
----------
feature_importances_ : array, shape = [n_features]
The feature importances (the higher, the more important the feature).
oob_improvement_ : array, shape = [n_estimators]
The improvement in loss (= deviance) on the out-of-bag samples
relative to the previous iteration.
``oob_improvement_[0]`` is the improvement in
loss of the first stage over the ``init`` estimator.
train_score_ : array, shape = [n_estimators]
The i-th score ``train_score_[i]`` is the deviance (= loss) of the
model at iteration ``i`` on the in-bag sample.
If ``subsample == 1`` this is the deviance on the training data.
loss_ : LossFunction
The concrete ``LossFunction`` object.
init_ : BaseEstimator
The estimator that provides the initial predictions.
Set via the ``init`` argument or ``loss.init_estimator``.
estimators_ : ndarray of DecisionTreeRegressor, shape = [n_estimators, 1]
The collection of fitted sub-estimators.
Notes
-----
The features are always randomly permuted at each split. Therefore,
the best found split may vary, even with the same training data and
``max_features=n_features``, if the improvement of the criterion is
identical for several splits enumerated during the search of the best
split. To obtain a deterministic behaviour during fitting,
``random_state`` has to be fixed.
See also
--------
DecisionTreeRegressor, RandomForestRegressor
References
----------
J. Friedman, Greedy Function Approximation: A Gradient Boosting
Machine, The Annals of Statistics, Vol. 29, No. 5, 2001.
J. Friedman, Stochastic Gradient Boosting, 1999
T. Hastie, R. Tibshirani and J. Friedman.
Elements of Statistical Learning Ed. 2, Springer, 2009.
"""
_SUPPORTED_LOSS = ('ls', 'lad', 'huber', 'quantile')
def __init__(self, loss='ls', learning_rate=0.1, n_estimators=100,
subsample=1.0, criterion='friedman_mse', min_samples_split=2,
min_samples_leaf=1, min_weight_fraction_leaf=0.,
max_depth=3, min_impurity_decrease=0.,
min_impurity_split=None, init=None, random_state=None,
max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None,
warm_start=False, presort='auto', validation_fraction=0.1,
n_iter_no_change=None, tol=1e-4):
super(GradientBoostingRegressor, self).__init__(
loss=loss, learning_rate=learning_rate, n_estimators=n_estimators,
criterion=criterion, min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf,
min_weight_fraction_leaf=min_weight_fraction_leaf,
max_depth=max_depth, init=init, subsample=subsample,
max_features=max_features,
min_impurity_decrease=min_impurity_decrease,
min_impurity_split=min_impurity_split,
random_state=random_state, alpha=alpha, verbose=verbose,
max_leaf_nodes=max_leaf_nodes, warm_start=warm_start,
presort=presort, validation_fraction=validation_fraction,
n_iter_no_change=n_iter_no_change, tol=tol)
def predict(self, X):
"""Predict regression target for X.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
y : array of shape = [n_samples]
The predicted values.
"""
X = check_array(X, dtype=DTYPE, order="C", accept_sparse='csr')
return self._decision_function(X).ravel()
def staged_predict(self, X):
"""Predict regression target at each stage for X.
This method allows monitoring (i.e. determine error on testing set)
after each stage.
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32`` and if a sparse matrix is provided
to a sparse ``csr_matrix``.
Returns
-------
y : generator of array of shape = [n_samples]
The predicted value of the input samples.
"""
for y in self._staged_decision_function(X):
yield y.ravel()
def apply(self, X):
"""Apply trees in the ensemble to X, return leaf indices.
.. versionadded:: 0.17
Parameters
----------
X : array-like or sparse matrix, shape = [n_samples, n_features]
The input samples. Internally, its dtype will be converted to
``dtype=np.float32``. If a sparse matrix is provided, it will
be converted to a sparse ``csr_matrix``.
Returns
-------
X_leaves : array_like, shape = [n_samples, n_estimators]
For each datapoint x in X and for each tree in the ensemble,
return the index of the leaf x ends up in each estimator.
"""
leaves = super(GradientBoostingRegressor, self).apply(X)
leaves = leaves.reshape(X.shape[0], self.estimators_.shape[0])
return leaves
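# A minimal usage sketch of GradientBoostingRegressor (not part of the
# original module); it assumes only scikit-learn's ``make_regression`` helper.
#
#     from sklearn.datasets import make_regression
#
#     X, y = make_regression(n_samples=200, n_features=10, random_state=0)
#     est = GradientBoostingRegressor(loss='huber', alpha=0.9,
#                                     n_estimators=50, random_state=0)
#     est.fit(X, y)
#     y_hat = est.predict(X)     # shape (n_samples,)
#     leaves = est.apply(X)      # shape (n_samples, n_estimators)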
| bsd-3-clause |
xzturn/tensorflow | tensorflow/python/data/experimental/kernel_tests/serialization/sequence_dataset_serialization_test.py | 9 | 5105 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the sequence datasets serialization."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.experimental.kernel_tests.serialization import dataset_serialization_test_base
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import combinations
from tensorflow.python.platform import test
class SkipDatasetSerializationTest(
dataset_serialization_test_base.DatasetSerializationTestBase,
parameterized.TestCase):
def _build_skip_dataset(self, count):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).skip(count)
@combinations.generate(test_base.default_test_combinations())
def testSkipFewerThanInputs(self):
count = 4
num_outputs = 10 - count
self.run_core_tests(lambda: self._build_skip_dataset(count), num_outputs)
@combinations.generate(test_base.default_test_combinations())
def testSkipVarious(self):
# Skip more than inputs
self.run_core_tests(lambda: self._build_skip_dataset(20), 0)
# Skip exactly the input size
self.run_core_tests(lambda: self._build_skip_dataset(10), 0)
self.run_core_tests(lambda: self._build_skip_dataset(-1), 0)
# Skip nothing
self.run_core_tests(lambda: self._build_skip_dataset(0), 10)
@combinations.generate(test_base.default_test_combinations())
def testInvalidSkip(self):
with self.assertRaisesRegexp(ValueError,
'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_skip_dataset([1, 2]), 0)
class TakeDatasetSerializationTest(
dataset_serialization_test_base.DatasetSerializationTestBase,
parameterized.TestCase):
def _build_take_dataset(self, count):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).take(count)
@combinations.generate(test_base.default_test_combinations())
def testTakeFewerThanInputs(self):
count = 4
self.run_core_tests(lambda: self._build_take_dataset(count), count)
@combinations.generate(test_base.default_test_combinations())
def testTakeVarious(self):
# Take more than inputs
self.run_core_tests(lambda: self._build_take_dataset(20), 10)
# Take exactly the input size
self.run_core_tests(lambda: self._build_take_dataset(10), 10)
# Take all
self.run_core_tests(lambda: self._build_take_dataset(-1), 10)
# Take nothing
self.run_core_tests(lambda: self._build_take_dataset(0), 0)
def testInvalidTake(self):
with self.assertRaisesRegexp(ValueError,
'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_take_dataset([1, 2]), 0)
class RepeatDatasetSerializationTest(
dataset_serialization_test_base.DatasetSerializationTestBase,
parameterized.TestCase):
def _build_repeat_dataset(self, count, take_count=3):
components = (np.arange(10),)
return dataset_ops.Dataset.from_tensor_slices(components).take(
take_count).repeat(count)
@combinations.generate(test_base.default_test_combinations())
def testFiniteRepeat(self):
count = 10
self.run_core_tests(lambda: self._build_repeat_dataset(count), 3 * count)
@combinations.generate(test_base.default_test_combinations())
def testEmptyRepeat(self):
self.run_core_tests(lambda: self._build_repeat_dataset(0), 0)
@combinations.generate(test_base.default_test_combinations())
def testInfiniteRepeat(self):
self.verify_unused_iterator(
lambda: self._build_repeat_dataset(-1), 10, verify_exhausted=False)
self.verify_multiple_breaks(
lambda: self._build_repeat_dataset(-1), 20, verify_exhausted=False)
self.verify_reset_restored_iterator(
lambda: self._build_repeat_dataset(-1), 20, verify_exhausted=False)
# Test repeat empty dataset
self.run_core_tests(lambda: self._build_repeat_dataset(-1, 0), 0)
@combinations.generate(test_base.default_test_combinations())
def testInvalidRepeat(self):
with self.assertRaisesRegexp(
ValueError, 'Shape must be rank 0 but is rank 1'):
self.run_core_tests(lambda: self._build_repeat_dataset([1, 2], 0), 0)
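# A minimal sketch (not part of the original test file, assuming TF 2.x eager
# execution) of the skip/take/repeat semantics the serialization tests above
# rely on; the element counts match the ``num_outputs`` values passed to
# ``run_core_tests``.
#
#     import tensorflow as tf
#
#     ds = tf.data.Dataset.range(10)
#     assert len(list(ds.skip(4))) == 6              # skip fewer than inputs
#     assert len(list(ds.skip(20))) == 0             # skip more than inputs
#     assert len(list(ds.take(-1))) == 10            # take(-1) keeps everything
#     assert len(list(ds.take(3).repeat(10))) == 30  # finite repeat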
if __name__ == '__main__':
test.main()
| apache-2.0 |
SoluMilken/xgboostwithwarmstart | xgboostwithwarmstart/xgboost_with_warm_start.py | 1 | 15366 | # coding: utf-8
# pylint: disable=too-many-arguments, too-many-locals, invalid-name, fixme, E0012, R0912
"""Scikit-Learn Wrapper interface for XGBoost."""
from __future__ import absolute_import
import numpy as np
from xgboost import XGBRegressor
from xgboost.core import Booster, DMatrix, XGBoostError
from xgboost.training import train
# Do not use class names on scikit-learn directly.
# Re-define the classes on .compat to guarantee the behavior without scikit-learn
from xgboost.compat import (SKLEARN_INSTALLED, XGBModelBase,
XGBClassifierBase, XGBRegressorBase,
XGBLabelEncoder)
from xgboost.sklearn import _objective_decorator, XGBModel
class XGBRegressorWithWarmStart(XGBRegressor):
def __init__(self, max_depth=3, learning_rate=0.1, n_estimators=100,
silent=True, objective="reg:linear",
nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0,
subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
base_score=0.5, seed=0, missing=None, warm_start=False):
super(XGBRegressorWithWarmStart, self).__init__(
max_depth, learning_rate, n_estimators,
silent, objective,
nthread, gamma, min_child_weight, max_delta_step,
subsample, colsample_bytree, colsample_bylevel,
reg_alpha, reg_lambda, scale_pos_weight,
base_score, seed, missing)
self.warm_start = warm_start
self.n_trained_estimators = 0
def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None,
early_stopping_rounds=None, verbose=True):
# pylint: disable=missing-docstring,invalid-name,attribute-defined-outside-init
"""
Fit the gradient boosting model
Parameters
----------
X : array_like
Feature matrix
y : array_like
Labels
sample_weight : array_like
Weight for each instance
eval_set : list, optional
A list of (X, y) tuple pairs to use as a validation set for
early-stopping
eval_metric : str, callable, optional
If a str, should be a built-in evaluation metric to use. See
doc/parameter.md. If callable, a custom evaluation metric. The call
signature is func(y_predicted, y_true) where y_true will be a
DMatrix object such that you may need to call the get_label
method. It must return a str, value pair where the str is a name
for the evaluation and value is the value of the evaluation
function. This objective is always minimized.
early_stopping_rounds : int
Activates early stopping. Validation error needs to decrease at
least every <early_stopping_rounds> round(s) to continue training.
Requires at least one item in evals. If there's more than one,
will use the last. Returns the model from the last iteration
(not the best one). If early stopping occurs, the model will
have three additional fields: bst.best_score, bst.best_iteration
and bst.best_ntree_limit.
(Use bst.best_ntree_limit to get the correct value if num_parallel_tree
and/or num_class appears in the parameters)
verbose : bool
If `verbose` and an evaluation set is used, writes the evaluation
metric measured on the validation set to stderr.
"""
if sample_weight is not None:
trainDmatrix = DMatrix(X, label=y, weight=sample_weight, missing=self.missing)
else:
trainDmatrix = DMatrix(X, label=y, missing=self.missing)
evals_result = {}
if eval_set is not None:
evals = list(DMatrix(x[0], label=x[1], missing=self.missing) for x in eval_set)
evals = list(zip(evals, ["validation_{}".format(i) for i in
range(len(evals))]))
else:
evals = ()
params = self.get_xgb_params()
if callable(self.objective):
obj = _objective_decorator(self.objective)
params["objective"] = "reg:linear"
else:
obj = None
feval = eval_metric if callable(eval_metric) else None
if eval_metric is not None:
if callable(eval_metric):
eval_metric = None
else:
params.update({'eval_metric': eval_metric})
if self.warm_start:
n_estimators = self.n_estimators - self.n_trained_estimators
self._Booster = train(params, trainDmatrix,
n_estimators, evals=evals,
early_stopping_rounds=early_stopping_rounds,
evals_result=evals_result, obj=obj, feval=feval,
verbose_eval=verbose, xgb_model=self._Booster)
else:
self._Booster = train(params, trainDmatrix,
self.n_estimators, evals=evals,
early_stopping_rounds=early_stopping_rounds,
evals_result=evals_result, obj=obj, feval=feval,
verbose_eval=verbose)
self.n_trained_estimators = self.n_estimators
if evals_result:
for val in evals_result.items():
evals_result_key = list(val[1].keys())[0]
evals_result[val[0]][evals_result_key] = val[1][evals_result_key]
self.evals_result_ = evals_result
if early_stopping_rounds is not None:
self.best_score = self._Booster.best_score
self.best_iteration = self._Booster.best_iteration
self.best_ntree_limit = self._Booster.best_ntree_limit
return self
@property
def feature_importances_(self):
"""
Returns
-------
feature_importances_ : array of shape = [n_features]
"""
b = self.booster()
fs = b.get_fscore()
all_features = [fs.get(f, 0.) for f in b.feature_names]
all_features = np.array(all_features, dtype=np.float32)
return all_features / all_features.sum()
class XGBClassifierWithWarmStart(XGBModel, XGBClassifierBase):
# pylint: disable=missing-docstring,too-many-arguments,invalid-name
__doc__ = """Implementation of the scikit-learn API for XGBoost classification.
""" + '\n'.join(XGBModel.__doc__.split('\n')[2:])
def __init__(self, max_depth=3, learning_rate=0.1,
n_estimators=100, silent=True,
objective="binary:logistic",
nthread=-1, gamma=0, min_child_weight=1,
max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
base_score=0.5, seed=0, missing=None, warm_start=False):
super(XGBClassifierWithWarmStart, self).__init__(
max_depth, learning_rate, n_estimators, silent, objective,
nthread, gamma, min_child_weight, max_delta_step, subsample,
colsample_bytree, colsample_bylevel, reg_alpha, reg_lambda,
scale_pos_weight, base_score, seed, missing)
self.warm_start = warm_start
self.n_trained_estimators = 0
def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None,
early_stopping_rounds=None, verbose=True):
# pylint: disable = attribute-defined-outside-init,arguments-differ
"""
Fit gradient boosting classifier
Parameters
----------
X : array_like
Feature matrix
y : array_like
Labels
sample_weight : array_like
Weight for each instance
eval_set : list, optional
A list of (X, y) pairs to use as a validation set for
early-stopping
eval_metric : str, callable, optional
If a str, should be a built-in evaluation metric to use. See
doc/parameter.md. If callable, a custom evaluation metric. The call
signature is func(y_predicted, y_true) where y_true will be a
DMatrix object such that you may need to call the get_label
method. It must return a str, value pair where the str is a name
for the evaluation and value is the value of the evaluation
function. This objective is always minimized.
early_stopping_rounds : int, optional
Activates early stopping. Validation error needs to decrease at
least every <early_stopping_rounds> round(s) to continue training.
Requires at least one item in evals. If there's more than one,
will use the last. Returns the model from the last iteration
(not the best one). If early stopping occurs, the model will
have three additional fields: bst.best_score, bst.best_iteration
and bst.best_ntree_limit.
(Use bst.best_ntree_limit to get the correct value if num_parallel_tree
and/or num_class appears in the parameters)
verbose : bool
If `verbose` and an evaluation set is used, writes the evaluation
metric measured on the validation set to stderr.
"""
evals_result = {}
self.classes_ = np.unique(y)
self.n_classes_ = len(self.classes_)
params = self.get_xgb_params()
if callable(self.objective):
obj = _objective_decorator(self.objective)
# Use default value. Is it really not used ?
params["objective"] = "binary:logistic"
else:
obj = None
if self.n_classes_ > 2:
# Switch to using a multiclass objective in the underlying XGB instance
params["objective"] = "multi:softprob"
params['num_class'] = self.n_classes_
feval = eval_metric if callable(eval_metric) else None
if eval_metric is not None:
if callable(eval_metric):
eval_metric = None
else:
params.update({"eval_metric": eval_metric})
self._le = XGBLabelEncoder().fit(y)
training_labels = self._le.transform(y)
if eval_set is not None:
# TODO: use sample_weight if given?
evals = list(
DMatrix(x[0], label=self._le.transform(x[1]), missing=self.missing)
for x in eval_set
)
nevals = len(evals)
eval_names = ["validation_{}".format(i) for i in range(nevals)]
evals = list(zip(evals, eval_names))
else:
evals = ()
self._features_count = X.shape[1]
if sample_weight is not None:
trainDmatrix = DMatrix(X, label=training_labels, weight=sample_weight,
missing=self.missing)
else:
trainDmatrix = DMatrix(X, label=training_labels, missing=self.missing)
if self.warm_start:
n_estimators = self.n_estimators - self.n_trained_estimators
self._Booster = train(params, trainDmatrix,
n_estimators, evals=evals,
early_stopping_rounds=early_stopping_rounds,
evals_result=evals_result, obj=obj, feval=feval,
verbose_eval=verbose, xgb_model=self._Booster)
else:
self._Booster = train(params, trainDmatrix,
self.n_estimators, evals=evals,
early_stopping_rounds=early_stopping_rounds,
evals_result=evals_result, obj=obj, feval=feval,
verbose_eval=verbose)
self.n_trained_estimators = self.n_estimators
self.objective = params["objective"]
if evals_result:
for val in evals_result.items():
evals_result_key = list(val[1].keys())[0]
evals_result[val[0]][evals_result_key] = val[1][evals_result_key]
self.evals_result_ = evals_result
if early_stopping_rounds is not None:
self.best_score = self._Booster.best_score
self.best_iteration = self._Booster.best_iteration
self.best_ntree_limit = self._Booster.best_ntree_limit
return self
def predict(self, data, output_margin=False, ntree_limit=0):
test_dmatrix = DMatrix(data, missing=self.missing)
class_probs = self.booster().predict(test_dmatrix,
output_margin=output_margin,
ntree_limit=ntree_limit)
if len(class_probs.shape) > 1:
column_indexes = np.argmax(class_probs, axis=1)
else:
column_indexes = np.repeat(0, class_probs.shape[0])
column_indexes[class_probs > 0.5] = 1
return self._le.inverse_transform(column_indexes)
def predict_proba(self, data, output_margin=False, ntree_limit=0):
test_dmatrix = DMatrix(data, missing=self.missing)
class_probs = self.booster().predict(test_dmatrix,
output_margin=output_margin,
ntree_limit=ntree_limit)
if self.objective == "multi:softprob":
return class_probs
else:
classone_probs = class_probs
classzero_probs = 1.0 - classone_probs
return np.vstack((classzero_probs, classone_probs)).transpose()
def evals_result(self):
"""Return the evaluation results.
If eval_set is passed to the `fit` function, you can call evals_result() to
get evaluation results for all passed eval_sets. When eval_metric is also
passed to the `fit` function, the evals_result will contain the eval_metrics
passed to the `fit` function
Returns
-------
evals_result : dictionary
Example
-------
param_dist = {'objective':'binary:logistic', 'n_estimators':2}
clf = xgb.XGBClassifier(**param_dist)
clf.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_test, y_test)],
eval_metric='logloss',
verbose=True)
evals_result = clf.evals_result()
The variable evals_result will contain:
{'validation_0': {'logloss': ['0.604835', '0.531479']},
'validation_1': {'logloss': ['0.41965', '0.17686']}}
"""
if self.evals_result_:
evals_result = self.evals_result_
else:
raise XGBoostError('No results.')
return evals_result
@property
def feature_importances_(self):
"""
Returns
-------
feature_importances_ : array of shape = [n_features]
"""
b = self.booster()
fs = b.get_fscore()
all_features = [fs.get(f, 0.) for f in b.feature_names]
all_features = np.array(all_features, dtype=np.float32)
return all_features / all_features.sum()
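# --- Hypothetical usage sketch (illustration only, not part of the original
# module). It assumes the legacy xgboost scikit-learn API that this wrapper
# targets and uses purely synthetic data; `np` is the numpy import already
# used elsewhere in this module.
if __name__ == '__main__':
    demo_rng = np.random.RandomState(0)
    X_demo = demo_rng.rand(200, 5)
    y_demo = 2.0 * X_demo[:, 0] + demo_rng.normal(0.0, 0.1, 200)
    reg = XGBRegressorWithWarmStart(n_estimators=20)
    reg.fit(X_demo, y_demo)          # trains the first 20 trees from scratch
    reg.warm_start = True
    reg.n_estimators = 40
    reg.fit(X_demo, y_demo)          # adds only the 20 missing trees to the existing booster
    print(reg.n_trained_estimators)  # -> 40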
| bsd-2-clause |
alissonperez/django-onmydesk | onmydesk/core/datasets.py | 1 | 3685 | from abc import ABCMeta, abstractmethod
from collections import OrderedDict
from django.db import connection, connections
class BaseDataset(metaclass=ABCMeta):
"""An abstract representation of what must be a Dataset class.
It's possible to use context management with datasets. To do this you must
override methods :func:`__enter__` to lock resources and :func:`__exit__` to free up them. E.g.::
class MyDataset(BaseDataset):
def iterate(self, params=None):
return self.file.read()
def __enter__(self):
self.file = open('somefile.txt')
def __exit__(self, type, value, traceback):
self.file.close()
with MyDataset() as mydataset:
for row in mydataset.iterate():
print(row)
"""
@abstractmethod
def iterate(self, params=None):
"""It must returns any iterable object.
:param dict params: Parameters to be used by dataset"""
        raise NotImplementedError()
def __enter__(self):
"""*Enter* from context manager to lock some resource (for example)."""
return self
def __exit__(self, type, value, traceback):
"""*Exit* from context manager to free up some resource (for example)."""
pass
class SQLDataset(BaseDataset):
"""
    A SQLDataset is used to run raw queries against the database. E.g.::
        with SQLDataset('SELECT * FROM users') as mydataset:
            for row in mydataset.iterate():
                print(row)  # --> An OrderedDict with cols and values.
    .. note:: It's recommended to use instances of this class with the `with` statement.
    **BE CAREFUL**
    Always use `query_params` from :func:`__init__` to put dynamic values into the query. E.g.::
        # WRONG WAY:
        mydataset = SQLDataset('SELECT * FROM users where age > {}'.format(18))
        # RIGHT WAY:
        mydataset = SQLDataset('SELECT * FROM users where age > %s', [18])
"""
def __init__(self, query, query_params=[], db_alias=None):
"""
:param str query: Raw sql query.
:param list query_params: Params to be evaluated with query.
:param str db_alias: Database alias from django settings. Optional.
"""
self.query = query
self.query_params = query_params
self.db_alias = db_alias
self.cursor = None
def iterate(self, params=None):
"""
:param dict params: Parameters to be used by dataset.
:returns: Rows from query result.
:rtype: Iterator with OrderedDict items.
"""
# Used if we are not using context manager
has_cursor = bool(self.cursor)
if not has_cursor:
self._init_cursor()
self.cursor.execute(self.query, self.query_params)
cols = tuple(c[0] for c in self.cursor.description)
one = self.cursor.fetchone()
while one is not None:
row = OrderedDict(zip(cols, one))
yield row
one = self.cursor.fetchone()
if not has_cursor:
self._close_cursor()
def __enter__(self):
"""*Enter* from context manager to open a cursor with database"""
self._init_cursor()
return self
def __exit__(self, type, value, traceback):
"""*Exit* from context manager to close cursor with database"""
self._close_cursor()
def _close_cursor(self):
self.cursor.close()
self.cursor = None
def _init_cursor(self):
if self.db_alias:
self.cursor = connections[self.db_alias].cursor()
else:
self.cursor = connection.cursor()
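# Hypothetical usage sketch (illustration only, not part of the module). It
# assumes a configured Django project whose default database exposes a `users`
# table; `db_alias` could instead name any alias from settings.DATABASES.
def _example_sqldataset_usage(min_age=18):
    with SQLDataset('SELECT * FROM users WHERE age > %s', [min_age]) as dataset:
        for row in dataset.iterate():
            print(row)  # OrderedDict mapping column names to values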
| mit |
keras-team/keras-io | examples/keras_recipes/subclassing_conv_layers.py | 1 | 4049 | """
Title: Customizing the convolution operation of a Conv2D layer
Author: [lukewood](https://lukewood.xyz)
Date created: 11/03/2021
Last modified: 11/03/2021
Description: This example shows how to implement custom convolution layers using the `Conv.convolution_op()` API.
"""
"""
## Introduction
You may sometimes need to implement custom versions of convolution layers like `Conv1D` and `Conv2D`.
Keras enables you to do this without implementing the entire layer from scratch: you can reuse
most of the base convolution layer and just customize the convolution op itself via the
`convolution_op()` method.
This method was introduced in Keras 2.7. So before using the
`convolution_op()` API, ensure that you are running Keras version 2.7.0 or greater.
"""
import tensorflow.keras as keras
print(keras.__version__)
"""
## A Simple `StandardizedConv2D` implementation
There are two ways to use the `Conv.convolution_op()` API. The first way
is to override the `convolution_op()` method on a convolution layer subclass.
Using this approach, we can quickly implement a
[StandardizedConv2D](https://arxiv.org/abs/1903.10520) as shown below.
"""
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
import numpy as np
class StandardizedConv2DWithOverride(layers.Conv2D):
def convolution_op(self, inputs, kernel):
mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
return tf.nn.conv2d(
inputs,
(kernel - mean) / tf.sqrt(var + 1e-10),
padding="VALID",
strides=list(self.strides),
name=self.__class__.__name__,
)
"""
The other way to use the `Conv.convolution_op()` API is to directly call the
`convolution_op()` method from the `call()` method of a convolution layer subclass.
A comparable class implemented using this approach is shown below.
"""
class StandardizedConv2DWithCall(layers.Conv2D):
def call(self, inputs):
mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
result = self.convolution_op(
inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10)
)
if self.use_bias:
result = result + self.bias
return result
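"""
As a quick sanity check (not part of the original example), both subclassed
layers can be called directly on a random batch; the shapes below assume a
28x28x1 input, 8 filters and a 3x3 kernel, which are arbitrary choices.
"""

dummy_batch = tf.random.normal((4, 28, 28, 1))
conv_with_override = StandardizedConv2DWithOverride(filters=8, kernel_size=3)
conv_with_call = StandardizedConv2DWithCall(filters=8, kernel_size=3)
print(conv_with_override(dummy_batch).shape)  # (4, 26, 26, 8)
print(conv_with_call(dummy_batch).shape)  # (4, 26, 26, 8)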
"""
## Example Usage
Both of these layers work as drop-in replacements for `Conv2D`. The following
demonstration performs classification on the MNIST dataset.
"""
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = keras.Sequential(
[
keras.layers.InputLayer(input_shape=input_shape),
StandardizedConv2DWithCall(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
StandardizedConv2DWithOverride(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.summary()
"""
"""
batch_size = 128
epochs = 5
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
"""
## Conclusion
The `Conv.convolution_op()` API provides an easy and readable way to implement custom
convolution layers. A `StandardizedConvolution` implementation using the API is quite
terse, consisting of only four lines of code.
"""
| apache-2.0 |
herilalaina/scikit-learn | examples/linear_model/plot_logistic_path.py | 33 | 1195 | #!/usr/bin/env python
"""
=================================
Path with L1- Logistic Regression
=================================
Computes path on IRIS dataset.
"""
print(__doc__)
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD 3 clause
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
from sklearn import datasets
from sklearn.svm import l1_min_c
iris = datasets.load_iris()
X = iris.data
y = iris.target
X = X[y != 2]
y = y[y != 2]
X -= np.mean(X, 0)
# #############################################################################
# Demo path functions
cs = l1_min_c(X, y, loss='log') * np.logspace(0, 3)
print("Computing regularization path ...")
start = datetime.now()
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
coefs_ = []
for c in cs:
clf.set_params(C=c)
clf.fit(X, y)
coefs_.append(clf.coef_.ravel().copy())
print("This took ", datetime.now() - start)
coefs_ = np.array(coefs_)
plt.plot(np.log10(cs), coefs_)
ymin, ymax = plt.ylim()
plt.xlabel('log(C)')
plt.ylabel('Coefficients')
plt.title('Logistic Regression Path')
plt.axis('tight')
plt.show()
| bsd-3-clause |
nwiizo/workspace_2017 | keras_ex/example/mnist_hierarchical_rnn.py | 2 | 3364 | """This is an example of using Hierarchical RNN (HRNN) to classify MNIST digits.
HRNNs can learn across multiple levels of temporal hierarchy over a complex sequence.
Usually, the first recurrent layer of an HRNN encodes a sentence (e.g. of word vectors)
into a sentence vector. The second recurrent layer then encodes a sequence of
such vectors (encoded by the first layer) into a document vector. This
document vector is considered to preserve both the word-level and
sentence-level structure of the context.
# References
- [A Hierarchical Neural Autoencoder for Paragraphs and Documents](https://arxiv.org/abs/1506.01057)
Encodes paragraphs and documents with HRNN.
Results have shown that HRNN outperforms standard
RNNs and may play some role in more sophisticated generation tasks like
summarization or question answering.
- [Hierarchical recurrent neural network for skeleton based action recognition](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7298714)
Achieved state-of-the-art results on skeleton based action recognition with 3 levels
of bidirectional HRNN combined with fully connected layers.
In the below MNIST example the first LSTM layer first encodes every
column of pixels of shape (28, 1) to a column vector of shape (128,). The second LSTM
layer encodes then these 28 column vectors of shape (28, 128) to a image vector
representing the whole image. A final Dense layer is added for prediction.
After 5 epochs: train acc: 0.9858, val acc: 0.9864
"""
from __future__ import print_function
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense, TimeDistributed
from keras.layers import LSTM
from keras.utils import np_utils
# Training parameters.
batch_size = 32
nb_classes = 10
nb_epochs = 5
# Embedding dimensions.
row_hidden = 128
col_hidden = 128
# The data, shuffled and split between train and test sets.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Reshapes data to 4D for Hierarchical RNN.
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# Converts class vectors to binary class matrices.
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
row, col, pixel = X_train.shape[1:]
# 4D input.
x = Input(shape=(row, col, pixel))
# Encodes a row of pixels using TimeDistributed Wrapper.
encoded_rows = TimeDistributed(LSTM(output_dim=row_hidden))(x)
# Encodes columns of encoded rows.
encoded_columns = LSTM(col_hidden)(encoded_rows)
# Final predictions and model.
prediction = Dense(nb_classes, activation='softmax')(encoded_columns)
model = Model(input=x, output=prediction)
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# Training.
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epochs,
verbose=1, validation_data=(X_test, Y_test))
# Evaluation.
scores = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
| mit |
google/ml-fairness-gym | agents/threshold_policies.py | 1 | 9878 | # coding=utf-8
# Copyright 2022 The ML Fairness Gym Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python2, python3
"""Helper functions for finding appropriate thresholds.
Many agents use classifiers to calculate continuous scores and then use a
threshold to transform those scores into decisions that optimize some reward.
The helper functions in this module are intended to aid with choosing those
thresholds.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import bisect
import enum
from absl import logging
import attr
import numpy as np
import scipy.optimize
import scipy.spatial
from six.moves import zip
from sklearn import metrics as sklearn_metrics
class ThresholdPolicy(enum.Enum):
SINGLE_THRESHOLD = "single_threshold"
MAXIMIZE_REWARD = "maximize_reward"
EQUALIZE_OPPORTUNITY = "equalize_opportunity"
@attr.s
class RandomizedThreshold(object):
"""Represents a distribution over decision thresholds."""
values = attr.ib(factory=lambda: [0.])
weights = attr.ib(factory=lambda: [1.])
rng = attr.ib(factory=np.random.RandomState)
tpr_target = attr.ib(default=None)
def smoothed_value(self):
# If one weight is small, this is probably an optimization artifact.
# Snap to a single threshold.
if len(self.weights) == 2 and min(self.weights) < 1e-4:
return self.values[np.argmax(self.weights)]
return np.dot(self.weights, self.values)
def sample(self):
return self.rng.choice(self.values, p=self.weights)
def iteritems(self):
return zip(self.weights, self.values)
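  # Illustrative example (not part of the module): with values=[0.4, 0.6] and
  # weights=[0.75, 0.25], smoothed_value() returns 0.75 * 0.4 + 0.25 * 0.6 = 0.45,
  # while sample() draws 0.4 or 0.6 with those probabilities.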
def convex_hull_roc(roc):
"""Returns an roc curve without the points inside the convex hull.
Points below the fpr=tpr line corresponding to random performance are also
removed.
Args:
roc: A tuple of lists that are all the same length, containing
(false_positive_rates, true_positive_rates, thresholds). This is the same
format returned by sklearn.metrics.roc_curve.
"""
fprs, tprs, thresholds = roc
if np.isnan(fprs).any() or np.isnan(tprs).any():
logging.debug("Convex hull solver does not handle NaNs.")
return roc
if len(fprs) < 3:
logging.debug("Convex hull solver does not curves with < 3 points.")
return roc
try:
# Add (fpr=1, tpr=0) to the convex hull to remove any points below the
# random-performance line.
hull = scipy.spatial.ConvexHull(np.vstack([fprs + [1], tprs + [0]]).T)
except scipy.spatial.qhull.QhullError:
logging.debug("Convex hull solver failed.")
return roc
  vertices = set(hull.vertices)
  return (
      [fpr for idx, fpr in enumerate(fprs) if idx in vertices],
      [tpr for idx, tpr in enumerate(tprs) if idx in vertices],
      [thresh for idx, thresh in enumerate(thresholds) if idx in vertices],
)
def _threshold_from_tpr(roc, tpr_target, rng):
"""Returns a `RandomizedThreshold` that achieves `tpr_target`.
For an arbitrary value of tpr_target in [0, 1], there may not be a single
threshold that achieves that tpr_value on our data. In this case, we
interpolate between the two closest achievable points on the discrete ROC
curve.
See e.g., Theorem 1 of Scott et al (1998)
"Maximum realisable performance: a principled method for enhancing
performance by using multiple classifiers in variable cost problem domains"
http://mi.eng.cam.ac.uk/reports/svr-ftp/auto-pdf/Scott_tr320.pdf
Args:
roc: A tuple (fpr, tpr, thresholds) as returned by sklearn's roc_curve
function.
tpr_target: A float between [0, 1], the target value of TPR that we would
like to achieve.
rng: A `np.RandomState` object that will be used in the returned
RandomizedThreshold.
  Returns:
    A RandomizedThreshold that achieves the target TPR value.
"""
# First filter out points that are not on the convex hull.
_, tpr_list, thresh_list = convex_hull_roc(roc)
idx = bisect.bisect_left(tpr_list, tpr_target)
# TPR target is larger than any of the TPR values in the list. In this case,
# take the highest threshold possible.
if idx == len(tpr_list):
return RandomizedThreshold(
weights=[1], values=[thresh_list[-1]], rng=rng, tpr_target=tpr_target)
# TPR target is exactly achievable by an existing threshold. In this case,
# do not randomize between two different thresholds. Use a single threshold
# with probability 1.
if tpr_list[idx] == tpr_target:
return RandomizedThreshold(
weights=[1], values=[thresh_list[idx]], rng=rng, tpr_target=tpr_target)
# Interpolate between adjacent thresholds. Since we are only considering
# points on the convex hull of the roc curve, we only need to consider
# interpolating between pairs of adjacent points.
alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
return RandomizedThreshold(
weights=[alpha, 1 - alpha],
values=[thresh_list[idx - 1], thresh_list[idx]],
rng=rng,
tpr_target=tpr_target)
def _interpolate(x, low, high):
"""returns a such that a*low + (1-a)*high = x."""
assert low <= x <= high, ("x is not between [low, high]: Expected %s <= %s <="
" %s") % (low, x, high)
alpha = 1 - ((x - low) / (high - low))
assert np.abs(alpha * low + (1 - alpha) * high - x) < 1e-6
return alpha
def single_threshold(predictions, labels, weights, cost_matrix):
"""Finds a single threshold that maximizes reward.
Args:
predictions: A list of float predictions.
labels: A list of binary labels.
weights: A list of instance weights.
cost_matrix: A CostMatrix.
Returns:
A single threshold that maximizes reward.
"""
threshold = equality_of_opportunity_thresholds({"dummy": predictions},
{"dummy": labels},
{"dummy": weights},
cost_matrix)["dummy"]
return threshold.smoothed_value()
def equality_of_opportunity_thresholds(group_predictions,
group_labels,
group_weights,
cost_matrix,
rng=None):
"""Finds thresholds that equalize opportunity while maximizing reward.
Using the definition from "Equality of Opportunity in Supervised Learning" by
Hardt et al., equality of opportunity constraints require that the classifier
have equal true-positive rate for all groups and can be enforced as a
post-processing step on a threshold-based binary classifier by creating
group-specific thresholds.
Since there are many different thresholds where equality of opportunity
constraints can hold, we simultaneously maximize reward described by a reward
matrix.
Args:
group_predictions: A dict mapping from group identifiers to predictions for
instances from that group.
group_labels: A dict mapping from group identifiers to labels for instances
from that group.
group_weights: A dict mapping from group identifiers to weights for
instances from that group.
cost_matrix: A CostMatrix.
rng: A `np.random.RandomState`.
Returns:
A dict mapping from group identifiers to thresholds such that recall is
equal for all groups.
Raises:
ValueError if the keys of group_predictions and group_labels are not the
same.
"""
if set(group_predictions.keys()) != set(group_labels.keys()):
raise ValueError("group_predictions and group_labels have mismatched keys.")
if rng is None:
rng = np.random.RandomState()
groups = sorted(group_predictions.keys())
roc = {}
if group_weights is None:
group_weights = {}
for group in groups:
if group not in group_weights or group_weights[group] is None:
# If weights is unspecified, use equal weights.
group_weights[group] = [1 for _ in group_labels[group]]
assert len(group_labels[group]) == len(group_weights[group]) == len(
group_predictions[group])
fprs, tprs, thresholds = sklearn_metrics.roc_curve(
y_true=group_labels[group],
y_score=group_predictions[group],
sample_weight=group_weights[group])
roc[group] = (fprs, np.nan_to_num(tprs), thresholds)
def negative_reward(tpr_target):
"""Returns negative reward suitable for optimization by minimization."""
my_reward = 0
for group in groups:
weights_ = []
predictions_ = []
labels_ = []
for thresh_prob, threshold in _threshold_from_tpr(
roc[group], tpr_target, rng=rng).iteritems():
labels_.extend(group_labels[group])
for weight, prediction in zip(group_weights[group],
group_predictions[group]):
weights_.append(weight * thresh_prob)
predictions_.append(prediction >= threshold)
confusion_matrix = sklearn_metrics.confusion_matrix(
labels_, predictions_, sample_weight=weights_)
my_reward += np.multiply(confusion_matrix, cost_matrix.as_array()).sum()
return -my_reward
opt = scipy.optimize.minimize_scalar(
negative_reward,
bounds=[0, 1],
method="bounded",
options={"maxiter": 100})
return ({
group: _threshold_from_tpr(roc[group], opt.x, rng=rng) for group in groups
})
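# --- Hypothetical usage sketch (illustration only, not part of the module). ---
# Shows how _threshold_from_tpr interpolates between the two closest achievable
# ROC points to hit an arbitrary TPR target; all data below is synthetic.
if __name__ == "__main__":
  _demo_rng = np.random.RandomState(0)
  _scores = _demo_rng.rand(200)
  _labels = (_scores + _demo_rng.normal(0.0, 0.25, 200)) > 0.5
  _roc = sklearn_metrics.roc_curve(y_true=_labels, y_score=_scores)
  _randomized = _threshold_from_tpr(_roc, tpr_target=0.8, rng=_demo_rng)
  print("thresholds:", _randomized.values, "weights:", _randomized.weights)
  print("smoothed threshold:", _randomized.smoothed_value())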
| apache-2.0 |
schets/scikit-learn | examples/covariance/plot_lw_vs_oas.py | 247 | 2903 | """
=============================
Ledoit-Wolf vs OAS estimation
=============================
The usual covariance maximum likelihood estimate can be regularized
using shrinkage. Ledoit and Wolf proposed a close formula to compute
the asymptotically optimal shrinkage parameter (minimizing a MSE
criterion), yielding the Ledoit-Wolf covariance estimate.
Chen et al. proposed an improvement of the Ledoit-Wolf shrinkage
parameter, the OAS coefficient, whose convergence is significantly
better under the assumption that the data are Gaussian.
This example, inspired from Chen's publication [1], shows a comparison
of the estimated MSE of the LW and OAS methods, using Gaussian
distributed data.
[1] "Shrinkage Algorithms for MMSE Covariance Estimation"
Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz, cholesky
from sklearn.covariance import LedoitWolf, OAS
np.random.seed(0)
###############################################################################
n_features = 100
# simulation covariance matrix (AR(1) process)
r = 0.1
real_cov = toeplitz(r ** np.arange(n_features))
coloring_matrix = cholesky(real_cov)
n_samples_range = np.arange(6, 31, 1)
repeat = 100
lw_mse = np.zeros((n_samples_range.size, repeat))
oa_mse = np.zeros((n_samples_range.size, repeat))
lw_shrinkage = np.zeros((n_samples_range.size, repeat))
oa_shrinkage = np.zeros((n_samples_range.size, repeat))
for i, n_samples in enumerate(n_samples_range):
for j in range(repeat):
X = np.dot(
np.random.normal(size=(n_samples, n_features)), coloring_matrix.T)
lw = LedoitWolf(store_precision=False, assume_centered=True)
lw.fit(X)
lw_mse[i, j] = lw.error_norm(real_cov, scaling=False)
lw_shrinkage[i, j] = lw.shrinkage_
oa = OAS(store_precision=False, assume_centered=True)
oa.fit(X)
oa_mse[i, j] = oa.error_norm(real_cov, scaling=False)
oa_shrinkage[i, j] = oa.shrinkage_
# plot MSE
plt.subplot(2, 1, 1)
plt.errorbar(n_samples_range, lw_mse.mean(1), yerr=lw_mse.std(1),
label='Ledoit-Wolf', color='g')
plt.errorbar(n_samples_range, oa_mse.mean(1), yerr=oa_mse.std(1),
label='OAS', color='r')
plt.ylabel("Squared error")
plt.legend(loc="upper right")
plt.title("Comparison of covariance estimators")
plt.xlim(5, 31)
# plot shrinkage coefficient
plt.subplot(2, 1, 2)
plt.errorbar(n_samples_range, lw_shrinkage.mean(1), yerr=lw_shrinkage.std(1),
label='Ledoit-Wolf', color='g')
plt.errorbar(n_samples_range, oa_shrinkage.mean(1), yerr=oa_shrinkage.std(1),
label='OAS', color='r')
plt.xlabel("n_samples")
plt.ylabel("Shrinkage")
plt.legend(loc="lower right")
plt.ylim(plt.ylim()[0], 1. + (plt.ylim()[1] - plt.ylim()[0]) / 10.)
plt.xlim(5, 31)
plt.show()
| bsd-3-clause |
e110c0/unisono | src/connection_interactive_test.py | 1 | 4898 | #!/usr/bin/env python3
'''
connection_interactive_test.py
Created on: Sat 06, 2010
Authors: cd
(C) 2010 by I8, TUM
This file is part of UNISONO Unified Information Service for Overlay
Network Optimization.
UNISONO is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
UNISONO is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with UNISONO. If not, see <http://www.gnu.org/licenses/>.
'''
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import xmlrpc.client
import threading
from sys import stdin
from time import sleep
from itertools import count
import socket, time, logging
from unisono.connection import Client
import unittest
class ClientUnittest(unittest.TestCase):
#def __init__(self, a):
# super().__init__(a)
def setUp(self):
self.count = 0
self.c = Client("127.0.0.1", 45312)
self.localip = '127.0.0.1'
#self.localip = '131.159.14.169'
self.remoteip = '131.159.14.169'
#self.remoteip = '134.2.172.172'
def tearDown(self):
self.c.close()
def test_periodic_orders(self):
'''
We generate a periodic order which will be executed 10 times.
        Hit enter when at least 3 answers have arrived
'''
order = {'orderid': None, # the Client class will care about this!
'identifier1':self.localip,
'identifier2':self.remoteip,
'type':'periodic',
'parameters' : {'interval': '3', 'lifetime': '30'},
'dataitem':'RTT'}
def callback(result):
print("callback function: %s" % result)
self.assertEquals(result['identifier1'], self.localip)
self.assertEquals(result['identifier2'], self.remoteip)
self.count +=1
print("%d outstanding answers" % (3-self.count))
        # for the lulz: skip one orderid
self.assertEquals(self.c.getOrderId(), '0')
ret = self.c.commit_order(order, callback)
#orderid is now '1', return code should be 0 (no error)
self.assertEquals(ret, ('1',0))
orderid = ret[0]
print("press any key ....")
ch = stdin.read(1)
self.assertTrue(self.count >= 3)
self.assertEqual(self.c.cancel_order(self.c.getId(), orderid), 0)
def test_make_7_orders(self):
'''make 7 oneshot orders'''
order = {'orderid': None, # the Client class will care about this!
'identifier1':self.localip,
'identifier2':self.remoteip,
'type':'oneshot',
'dataitem':'RTT'}
def callback(result):
print("callback function: %s" % result)
self.assertEquals(result['identifier1'], self.localip)
self.assertEquals(result['identifier2'], self.remoteip)
self.count +=1
print("%d outstanding answers" % (7-self.count))
self.assertEquals(self.c.commit_order(order, callback), ('0',0))
sleep(1)
self.assertEquals(self.c.commit_order(order, callback), ('1',0))
self.assertEquals(self.c.commit_order(order, callback), ('2',0))
self.assertEquals(self.c.commit_order(order, callback), ('3',0))
self.assertEquals(self.c.commit_order(order, callback), ('4',0))
        # skip one orderid
self.assertEquals(self.c.getOrderId(), '5')
self.assertEquals(self.c.commit_order(order, callback), ('6',0))
self.assertEquals(self.c.commit_order(order, callback), ('7',0))
print("press any key ....")
ch = stdin.read(1)
self.assertEquals(self.count, 7)
def test_some_datasets(self):
        '''query the datasets defined in commands'''
commands = ['CPU_CORE_COUNT', 'CPU_SPEED', 'CPU_TYPE']
for i in commands:
self.assertTrue(i in self.c.list_available_dataitems())
self.commands = commands
def callback(result):
print("test_some_datasets callback function: %s" % result)
self.assertEquals(result['identifier1'], self.localip)
self.assertEquals(result['identifier2'], self.remoteip)
found = False
for i in self.commands:
if i in result:
self.commands.remove(i)
found = True
if not found:
print("unknown field in result %s" % result)
self.assertTrue(False)
for i in commands:
            # the invalid orderid gets fixed by the Client class
order = {'orderid': 'invalid', 'identifier1':self.localip,
'identifier2':self.remoteip, 'type':'oneshot',
'dataitem':i}
self.c.commit_order(order, callback)
print("press any key ....")
ch = stdin.read(1)
self.assertEquals(self.commands, [])
def test_connect_disconnect(self):
        '''unregister and reregister, *deprecated*'''
self.c.unregister_connector(self.c.getId())
self.c.register_connector(self.c._getLocalPort())
if __name__ == '__main__':
unittest.main()
| gpl-2.0 |
nwiizo/workspace_2017 | keras_ex/example/imdb_cnn_lstm.py | 3 | 2133 | '''Train a recurrent convolutional network on the IMDB sentiment
classification task.
Gets to 0.8498 test accuracy after 2 epochs. 41s/epoch on K520 GPU.
'''
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import Convolution1D, MaxPooling1D
from keras.datasets import imdb
# Embedding
max_features = 20000
maxlen = 100
embedding_size = 128
# Convolution
filter_length = 5
nb_filter = 64
pool_length = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 30
nb_epoch = 2
'''
Note:
batch_size is highly sensitive.
Only 2 epochs are needed as the dataset is very small.
'''
print('Loading data...')
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.25))
model.add(Convolution1D(nb_filter=nb_filter,
filter_length=filter_length,
border_mode='valid',
activation='relu',
subsample_length=1))
model.add(MaxPooling1D(pool_length=pool_length))
model.add(LSTM(lstm_output_size))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
| mit |
keras-team/keras-io | examples/structured_data/tabtransformer.py | 1 | 19195 | """
Title: Structured data learning with TabTransformer
Author: [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)
Date created: 2022/01/18
Last modified: 2022/01/18
Description: Using contextual embeddings for structured data classification.
"""
"""
## Introduction
This example demonstrates how to do structured data classification using
[TabTransformer](https://arxiv.org/abs/2012.06678), a deep tabular data modeling
architecture for supervised and semi-supervised learning.
The TabTransformer is built upon self-attention based Transformers.
The Transformer layers transform the embeddings of categorical features
into robust contextual embeddings to achieve higher predictive accuracy.
This example should be run with TensorFlow 2.7 or higher,
as well as [TensorFlow Addons](https://www.tensorflow.org/addons/overview),
which can be installed using the following command:
```python
pip install -U tensorflow-addons
```
## Setup
"""
import math
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
"""
## Prepare the data
This example uses the
[United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income)
provided by the
[UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).
The task is binary classification
to predict whether a person is likely to be making over USD 50,000 a year.
The dataset includes 48,842 instances with 14 input features: 5 numerical features and 9 categorical features.
First, let's load the dataset from the UCI Machine Learning Repository into a Pandas
DataFrame:
"""
CSV_HEADER = [
"age",
"workclass",
"fnlwgt",
"education",
"education_num",
"marital_status",
"occupation",
"relationship",
"race",
"gender",
"capital_gain",
"capital_loss",
"hours_per_week",
"native_country",
"income_bracket",
]
train_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
)
train_data = pd.read_csv(train_data_url, header=None, names=CSV_HEADER)
test_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"
)
test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER)
print(f"Train dataset shape: {train_data.shape}")
print(f"Test dataset shape: {test_data.shape}")
"""
Remove the first record (because it is not a valid data example) and a trailing 'dot' in the class labels.
"""
test_data = test_data[1:]
test_data.income_bracket = test_data.income_bracket.apply(
lambda value: value.replace(".", "")
)
"""
Now we store the training and test data in separate CSV files.
"""
train_data_file = "train_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False, header=False)
test_data.to_csv(test_data_file, index=False, header=False)
"""
## Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing
the data into input features, and encoding the input features with respect to their types.
"""
# A list of the numerical feature names.
NUMERIC_FEATURE_NAMES = [
"age",
"education_num",
"capital_gain",
"capital_loss",
"hours_per_week",
]
# A dictionary of the categorical features and their vocabulary.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"workclass": sorted(list(train_data["workclass"].unique())),
"education": sorted(list(train_data["education"].unique())),
"marital_status": sorted(list(train_data["marital_status"].unique())),
"occupation": sorted(list(train_data["occupation"].unique())),
"relationship": sorted(list(train_data["relationship"].unique())),
"race": sorted(list(train_data["race"].unique())),
"gender": sorted(list(train_data["gender"].unique())),
"native_country": sorted(list(train_data["native_country"].unique())),
}
# Name of the column to be used as instances weight.
WEIGHT_COLUMN_NAME = "fnlwgt"
# A list of the categorical feature names.
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
# A list of all the input features.
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
# A list of column default values for each feature.
COLUMN_DEFAULTS = [
[0.0] if feature_name in NUMERIC_FEATURE_NAMES + [WEIGHT_COLUMN_NAME] else ["NA"]
for feature_name in CSV_HEADER
]
# The name of the target feature.
TARGET_FEATURE_NAME = "income_bracket"
# A list of the labels of the target features.
TARGET_LABELS = [" <=50K", " >50K"]
"""
## Configure the hyperparameters
The hyperparameters includes model architecture and training configurations.
"""
LEARNING_RATE = 0.001
WEIGHT_DECAY = 0.0001
DROPOUT_RATE = 0.2
BATCH_SIZE = 265
NUM_EPOCHS = 15
NUM_TRANSFORMER_BLOCKS = 3 # Number of transformer blocks.
NUM_HEADS = 4 # Number of attention heads.
EMBEDDING_DIMS = 16 # Embedding dimensions of the categorical features.
MLP_HIDDEN_UNITS_FACTORS = [
2,
1,
] # MLP hidden layer units, as factors of the number of inputs.
NUM_MLP_BLOCKS = 2 # Number of MLP blocks in the baseline model.
"""
## Implement data reading pipeline
We define an input function that reads and parses the file, then converts features
and labels into a [`tf.data.Dataset`](https://www.tensorflow.org/guide/datasets)
for training or evaluation.
"""
target_label_lookup = layers.StringLookup(
vocabulary=TARGET_LABELS, mask_token=None, num_oov_indices=0
)
def prepare_example(features, target):
target_index = target_label_lookup(target)
weights = features.pop(WEIGHT_COLUMN_NAME)
return features, target_index, weights
def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=False):
dataset = tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=False,
na_value="?",
shuffle=shuffle,
).map(prepare_example, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False)
return dataset.cache()
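"""
As an optional check (not part of the original example), we can pull one batch
from this pipeline and inspect its structure; the batch size of 8 used below is
an arbitrary illustrative choice.
"""

example_features, example_target, example_weights = next(
    iter(get_dataset_from_csv(train_data_file, batch_size=8))
)
print({name: tensor.shape for name, tensor in example_features.items()})
print(example_target.shape, example_weights.shape)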
"""
## Implement a training and evaluation procedure
"""
def run_experiment(
model,
train_data_file,
test_data_file,
num_epochs,
learning_rate,
weight_decay,
batch_size,
):
optimizer = tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
)
model.compile(
optimizer=optimizer,
loss=keras.losses.BinaryCrossentropy(),
metrics=[keras.metrics.BinaryAccuracy(name="accuracy")],
)
train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
validation_dataset = get_dataset_from_csv(test_data_file, batch_size)
print("Start training the model...")
history = model.fit(
train_dataset, epochs=num_epochs, validation_data=validation_dataset
)
print("Model training finished")
_, accuracy = model.evaluate(validation_dataset, verbose=0)
print(f"Validation accuracy: {round(accuracy * 100, 2)}%")
return history
"""
## Create model inputs
Now, define the inputs for the models as a dictionary, where the key is the feature name,
and the value is a `keras.layers.Input` tensor with the corresponding feature shape
and data type.
"""
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.float32
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.string
)
return inputs
"""
## Encode features
The `encode_inputs` method returns `encoded_categorical_feature_list` and `numerical_feature_list`.
We encode the categorical features as embeddings, using a fixed `embedding_dims` for all the features,
regardless of their vocabulary sizes. This is required for the Transformer model.
"""
def encode_inputs(inputs, embedding_dims):
encoded_categorical_feature_list = []
numerical_feature_list = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
# Get the vocabulary of the categorical feature.
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
# Since we are not using a mask token nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = layers.StringLookup(
vocabulary=vocabulary,
mask_token=None,
num_oov_indices=0,
output_mode="int",
)
# Convert the string input values into integer indices.
encoded_feature = lookup(inputs[feature_name])
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=len(vocabulary), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_categorical_feature = embedding(encoded_feature)
encoded_categorical_feature_list.append(encoded_categorical_feature)
else:
# Use the numerical features as-is.
numerical_feature = tf.expand_dims(inputs[feature_name], -1)
numerical_feature_list.append(numerical_feature)
return encoded_categorical_feature_list, numerical_feature_list
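"""
A quick optional sanity check (not part of the original example): encoding the
model inputs should yield one `EMBEDDING_DIMS`-wide embedding per categorical
feature and one scalar column per numerical feature.
"""

_check_inputs = create_model_inputs()
_check_categorical, _check_numerical = encode_inputs(_check_inputs, EMBEDDING_DIMS)
print(len(_check_categorical), [tensor.shape for tensor in _check_categorical])
print(len(_check_numerical), [tensor.shape for tensor in _check_numerical])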
"""
## Implement an MLP block
"""
def create_mlp(hidden_units, dropout_rate, activation, normalization_layer, name=None):
mlp_layers = []
for units in hidden_units:
mlp_layers.append(normalization_layer),
mlp_layers.append(layers.Dense(units, activation=activation))
mlp_layers.append(layers.Dropout(dropout_rate))
return keras.Sequential(mlp_layers, name=name)
"""
## Experiment 1: a baseline model
In the first experiment, we create a simple multi-layer feed-forward network.
"""
def create_baseline_model(
embedding_dims, num_mlp_blocks, mlp_hidden_units_factors, dropout_rate
):
# Create model inputs.
inputs = create_model_inputs()
# encode features.
encoded_categorical_feature_list, numerical_feature_list = encode_inputs(
inputs, embedding_dims
)
# Concatenate all features.
features = layers.concatenate(
encoded_categorical_feature_list + numerical_feature_list
)
# Compute Feedforward layer units.
feedforward_units = [features.shape[-1]]
    # Create several feedforward layers with skip connections.
for layer_idx in range(num_mlp_blocks):
features = create_mlp(
hidden_units=feedforward_units,
dropout_rate=dropout_rate,
activation=keras.activations.gelu,
normalization_layer=layers.LayerNormalization(epsilon=1e-6),
name=f"feedforward_{layer_idx}",
)(features)
# Compute MLP hidden_units.
mlp_hidden_units = [
factor * features.shape[-1] for factor in mlp_hidden_units_factors
]
# Create final MLP.
features = create_mlp(
hidden_units=mlp_hidden_units,
dropout_rate=dropout_rate,
activation=keras.activations.selu,
normalization_layer=layers.BatchNormalization(),
name="MLP",
)(features)
    # Add a sigmoid as a binary classifier.
outputs = layers.Dense(units=1, activation="sigmoid", name="sigmoid")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
baseline_model = create_baseline_model(
embedding_dims=EMBEDDING_DIMS,
num_mlp_blocks=NUM_MLP_BLOCKS,
mlp_hidden_units_factors=MLP_HIDDEN_UNITS_FACTORS,
dropout_rate=DROPOUT_RATE,
)
print("Total model weights:", baseline_model.count_params())
keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")
"""
Let's train and evaluate the baseline model:
"""
history = run_experiment(
model=baseline_model,
train_data_file=train_data_file,
test_data_file=test_data_file,
num_epochs=NUM_EPOCHS,
learning_rate=LEARNING_RATE,
weight_decay=WEIGHT_DECAY,
batch_size=BATCH_SIZE,
)
"""
The baseline linear model achieves ~81% validation accuracy.
"""
"""
## Experiment 2: TabTransformer
The TabTransformer architecture works as follows:
1. All the categorical features are encoded as embeddings, using the same `embedding_dims`.
This means that each value in each categorical feature will have its own embedding vector.
2. A column embedding, one embedding vector for each categorical feature, is added (point-wise) to the categorical feature embedding.
3. The embedded categorical features are fed into a stack of Transformer blocks.
Each Transformer block consists of a multi-head self-attention layer followed by a feed-forward layer.
4. The outputs of the final Transformer layer, which are the *contextual embeddings* of the categorical features,
are concatenated with the input numerical features, and fed into a final MLP block.
5. A `softmax` classifier is applied at the end of the model.
The [paper](https://arxiv.org/abs/2012.06678) discusses both addition and concatenation of the column embedding in the
*Appendix: Experiment and Model Details* section.
The architecture of TabTransformer is shown below, as presented in the paper.
<img src="https://raw.githubusercontent.com/keras-team/keras-io/master/examples/structured_data/img/tabtransformer/tabtransformer.png" width="500"/>
"""
def create_tabtransformer_classifier(
num_transformer_blocks,
num_heads,
embedding_dims,
mlp_hidden_units_factors,
dropout_rate,
use_column_embedding=False,
):
# Create model inputs.
inputs = create_model_inputs()
# encode features.
encoded_categorical_feature_list, numerical_feature_list = encode_inputs(
inputs, embedding_dims
)
    # Stack categorical feature embeddings for the Transformer.
encoded_categorical_features = tf.stack(encoded_categorical_feature_list, axis=1)
# Concatenate numerical features.
numerical_features = layers.concatenate(numerical_feature_list)
# Add column embedding to categorical feature embeddings.
if use_column_embedding:
num_columns = encoded_categorical_features.shape[1]
column_embedding = layers.Embedding(
input_dim=num_columns, output_dim=embedding_dims
)
column_indices = tf.range(start=0, limit=num_columns, delta=1)
encoded_categorical_features = encoded_categorical_features + column_embedding(
column_indices
)
# Create multiple layers of the Transformer block.
for block_idx in range(num_transformer_blocks):
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads,
key_dim=embedding_dims,
dropout=dropout_rate,
name=f"multihead_attention_{block_idx}",
)(encoded_categorical_features, encoded_categorical_features)
# Skip connection 1.
x = layers.Add(name=f"skip_connection1_{block_idx}")(
[attention_output, encoded_categorical_features]
)
# Layer normalization 1.
x = layers.LayerNormalization(name=f"layer_norm1_{block_idx}", epsilon=1e-6)(x)
# Feedforward.
feedforward_output = create_mlp(
hidden_units=[embedding_dims],
dropout_rate=dropout_rate,
activation=keras.activations.gelu,
normalization_layer=layers.LayerNormalization(epsilon=1e-6),
name=f"feedforward_{block_idx}",
)(x)
# Skip connection 2.
x = layers.Add(name=f"skip_connection2_{block_idx}")([feedforward_output, x])
# Layer normalization 2.
encoded_categorical_features = layers.LayerNormalization(
name=f"layer_norm2_{block_idx}", epsilon=1e-6
)(x)
# Flatten the "contextualized" embeddings of the categorical features.
categorical_features = layers.Flatten()(encoded_categorical_features)
# Apply layer normalization to the numerical features.
numerical_features = layers.LayerNormalization(epsilon=1e-6)(numerical_features)
# Prepare the input for the final MLP block.
features = layers.concatenate([categorical_features, numerical_features])
# Compute MLP hidden_units.
mlp_hidden_units = [
factor * features.shape[-1] for factor in mlp_hidden_units_factors
]
# Create final MLP.
features = create_mlp(
hidden_units=mlp_hidden_units,
dropout_rate=dropout_rate,
activation=keras.activations.selu,
normalization_layer=layers.BatchNormalization(),
name="MLP",
)(features)
    # Add a sigmoid as a binary classifier.
outputs = layers.Dense(units=1, activation="sigmoid", name="sigmoid")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
tabtransformer_model = create_tabtransformer_classifier(
num_transformer_blocks=NUM_TRANSFORMER_BLOCKS,
num_heads=NUM_HEADS,
embedding_dims=EMBEDDING_DIMS,
mlp_hidden_units_factors=MLP_HIDDEN_UNITS_FACTORS,
dropout_rate=DROPOUT_RATE,
)
print("Total model weights:", tabtransformer_model.count_params())
keras.utils.plot_model(tabtransformer_model, show_shapes=True, rankdir="LR")
"""
Let's train and evaluate the TabTransformer model:
"""
history = run_experiment(
model=tabtransformer_model,
train_data_file=train_data_file,
test_data_file=test_data_file,
num_epochs=NUM_EPOCHS,
learning_rate=LEARNING_RATE,
weight_decay=WEIGHT_DECAY,
batch_size=BATCH_SIZE,
)
"""
The TabTransformer model achieves ~85% validation accuracy.
Note that, with the default parameter configurations, both the baseline and the TabTransformer
have a similar number of trainable weights: 109,629 and 92,151 respectively, and both use the same training hyperparameters.
"""
"""
## Conclusion
TabTransformer significantly outperforms MLP and recent
deep networks for tabular data while matching the performance of tree-based ensemble models.
TabTransformer can be learned in end-to-end supervised training using labeled examples.
For a scenario where there are a few labeled examples and a large number of unlabeled
examples, a pre-training procedure can be employed to train the Transformer layers using unlabeled data.
This is followed by fine-tuning of the pre-trained Transformer layers along with
the top MLP layer using the labeled data.
Example available on HuggingFace.
| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/๐ค%20Model-TabTransformer-black.svg)](https://huggingface.co/keras-io/tab_transformer) | [![Generic badge](https://img.shields.io/badge/๐ค%20Spaces-TabTransformer-black.svg)](https://huggingface.co/spaces/keras-io/TabTransformer_Classification) |
"""
| apache-2.0 |
nhuntwalker/astroML | examples/datasets/plot_wmap_power_spectra.py | 5 | 2092 | """
WMAP power spectrum analysis with HealPy
----------------------------------------
This demonstrates how to plot and take a power spectrum of the WMAP data
using healpy, the python wrapper for healpix. Healpy is available for
download at the `github site <https://github.com/healpy/healpy>`_
"""
# Author: Jake VanderPlas <vanderplas@astro.washington.edu>
# License: BSD
# The figure is an example from astroML: see http://astroML.github.com
import numpy as np
from matplotlib import pyplot as plt
# warning: due to a bug in healpy, importing it before pylab can cause
# a segmentation fault in some circumstances.
import healpy as hp
from astroML.datasets import fetch_wmap_temperatures
#------------------------------------------------------------
# Fetch the data
wmap_unmasked = fetch_wmap_temperatures(masked=False)
wmap_masked = fetch_wmap_temperatures(masked=True)
white_noise = np.ma.asarray(np.random.normal(0, 0.062, wmap_masked.shape))
#------------------------------------------------------------
# plot the unmasked map
fig = plt.figure(1)
hp.mollview(wmap_unmasked, min=-1, max=1, title='Unmasked map',
fig=1, unit=r'$\Delta$T (mK)')
#------------------------------------------------------------
# plot the masked map
# filled() fills the masked regions with a null value.
fig = plt.figure(2)
hp.mollview(wmap_masked.filled(), title='Masked map',
fig=2, unit=r'$\Delta$T (mK)')
#------------------------------------------------------------
# compute and plot the power spectrum
cl = hp.anafast(wmap_masked.filled(), lmax=1024)
ell = np.arange(len(cl))
cl_white = hp.anafast(white_noise, lmax=1024)
fig = plt.figure(3)
ax = fig.add_subplot(111)
ax.scatter(ell, ell * (ell + 1) * cl,
s=4, c='black', lw=0,
label='data')
ax.scatter(ell, ell * (ell + 1) * cl_white,
s=4, c='gray', lw=0,
label='white noise')
ax.set_xlabel(r'$\ell$')
ax.set_ylabel(r'$\ell(\ell+1)C_\ell$')
ax.set_title('Angular Power (not mask corrected)')
ax.legend(loc='upper right')
ax.grid()
ax.set_xlim(0, 1100)
plt.show()
| bsd-2-clause |
ChristophSchauer/RPG-Ratten | functions_RPG.py | 1 | 45372 | ๏ปฟ# -*- coding: utf-8 -*-
"""
Header
@author: Christoph
Version : 1.0
Programmed with: WinPython 3.4.4.1
Changed to: WinPython 2.7.10.3
History:
[2016.03.24, CS]: initial setup; put in all the functions of the main_RPG;
                    ERROR1: tell the user the right count of turns he used;
start with the function for the char generation;
generate a functions file;
insert the asking of the user to save his char;
ERROR2: thLib has an error, maybe reinstallation of python;
[2016.03.25, CS]: insert the rest of the functions; insert the random_dice
function;
start with the rat fighting system function;
ERROR2 solved: put the needed functions into the
functions_RPG.py;
                    ERROR1: solved: the turn counter has to iterate with 1 and
not with itself;
implement random_dice function with any number of dices,
output and an exclusion criteria;
ERROR3: starting a fight the following message appears:
'numpy.float64' object cannot be interpreted as an integer;
ERROR3: solved: change the type of the fight_array from
float64 to int32;
ERROR4: problem with the enemy's turn in fct_fight_rat;
[2016.03.29, CS]: change the char generation; check, that the player has not
more than 3 points on each attribute;
ERROR4: solved: checked the if clauses for the fights;
changed the damage_calculation: the attack values of player
and enemy are passed;
[2016.03.30, CS]: ERROR7: change random_dice: add the checking for zero
dices, the function random_dice function has to check for
the number of dices which are thrown and adjust the output
accordingly;
ERROR7: solved;
exit() changed to raise SystemExit;
changed the rooms-dictionary: all the persons must be
defined with 0 or 1, if the player can fight against them
or not;
changed the move-funtion and the rooms-dictionary: now all
doors must be defined with 'locked' or 'opened';
[2016.03.31, CS]: changed the imported libraries to save memory;
[2016.04.06, CS]: ERROR#14: when the map should be loaded, there is a problem
with y/j, maybe english and german layout of Keyboard;
ERROR#14: solved: also answer for the z from english
keyboards;
[2016.04.11, MG]: ISSUE#13: Adjacent rooms are now shown, TODO: Hidden room
option;
[2016.04.11, CS]: ISSUE#16: changed all the naming of the interaction in
english;
ISSUE#18: changed the call of tkinter;
[2016.04.13, CS]: ISSUE#17: long texts are not translated, they are checked
with an if clause and it exists a english and a german
version of the text;
[2016.04.11, MG]: ISSUE#13: Hidden Rooms won't be shown;
[2016.04.15, CS]: ISSUE#21: write the function; also insert load funcction,
                    but this one can't be accessed by the user until now;
[2016.04.16, CS]: ISSUE#21: at the end of the save name the actual time stamp
is added;
[2016.04.16, MG]: ISSUE#19: Darkness trigger and "use" function added;
[2016.04.17, CS]: ISSUE#24: it is checked, if the parameter can be counted in
the inventory;
[2016.04.18, CS]: ISSUE#29: add the command 'help' in showInstructions;
[2016.04.19, CS]: change to version 1.0;
[2016.04.20, CS]: ISSUE#35: make the code python 2-3 compatible;
[2016.04.21, CS]: ISSUE#34: the game ask the user as long as he does not use
a number;
[2016.04.25, CS]: ISSUE#33: implemented the replay;
ISSUE#37: let the program check if python 2 or python 3 is
used;
[2016.04.26, CS]: ISSUE#30: added in showStatus a query for the
case 'item' = [];
ISSUE#39: implement '' as return value and check if it
happens, when it happens the game takes the default value
or warns the player that this can't be done;
[2016.05.02, CS]: insert the parts of pygame in the functions: fct_rooms;
"""
# python 2-3 compatible code
import future
from builtins import input
import past
import six
from io import open
import parameter_RPG
import main_RPG
from random import randint
from os import path
import json
import sys
if sys.version_info.major == 3:
# Python 3.x
import tkinter as tk
import tkinter.filedialog as tkf
else:
# Python 2.x
import Tkinter as tk
import tkFileDialog as tkf
from numpy import ones
import time
import os
def getfile(FilterSpec='*', DialogTitle='Select File: ', DefaultName=''):
'''
taken from Thomas Haslwanter
Selecting an existing file.
Parameters
----------
FilterSpec : query-string
File filters
DialogTitle : string
Window title
DefaultName : string
Can be a directory AND filename
Returns
-------
filename : string
selected existing file, or empty string if nothing is selected
pathname: string
selected path, or empty string if nothing is selected
Examples
--------
>>> (myFile, myPath) = thLib.ui.getfile('*.py', 'Testing file-selection', 'c:\\temp\\test.py')
'''
root = tk.Tk()
root.withdraw()
fullInFile = tkf.askopenfilename(initialfile=DefaultName,
title=DialogTitle, filetypes=[('Select', FilterSpec),
('all files','*')])
# Close the Tk-window manager again
root.destroy()
if not os.path.exists(fullInFile):
(fileName, dirName) = ('','')
else:
print('Selection: ' + fullInFile)
dirName = os.path.dirname(fullInFile)
fileName = os.path.basename(fullInFile)
return (fileName, dirName)
def getdir(DialogTitle='Select Directory', DefaultName='.'):
'''
taken from Thomas Haslwanter
Select a directory
Parameters
----------
DialogTitle : string
Window title
DefaultName : string
Can be a directory AND filename
Returns
-------
directory : string
Selected directory, or empty string if nothing is selected.
Examples
--------
>>> myDir = thLib.ui.getdir('c:\\temp', 'Pick your directory')
'''
root = tk.Tk()
root.withdraw()
fullDir = tkf.askdirectory(initialdir=DefaultName, title=DialogTitle)
# Close the Tk-window manager again
root.destroy()
if not os.path.exists(fullDir):
fullDir = ''
else:
print('Selection: ' + fullDir)
return fullDir
def savefile(FilterSpec='*',DialogTitle='Save File: ', DefaultName=''):
'''
taken from Thomas Haslwanter
Selecting an existing or new file:
Parameters
----------
FilterSpec : string
File filters.
DialogTitle : string
Window title.
DefaultName : string
Can be a directory AND filename.
Returns
-------
filename : string
Selected file.
pathname : string
Selecte path.
Examples
--------
>>> (myFile, myPath) = thLib.ui.savefile('*.py', 'Testing file-selection', 'c:\\temp\\test.py')
'''
root = tk.Tk()
root.withdraw()
outFile = tkf.asksaveasfile(mode='w', title=DialogTitle, initialfile=DefaultName, filetypes=[('Save as', FilterSpec)])
# Close the Tk-window manager again
root.destroy()
if outFile == None:
(fileName, dirName) = ('','')
else:
fullOutFile = outFile.name
print('Selection: ' + fullOutFile)
dirName = path.dirname(fullOutFile)
fileName = path.basename(fullOutFile)
return (fileName, dirName)
def print_lines(*lines):
"""
A helpful function for printing many
separate strings on separate lines.
"""
print("\n".join([line for line in lines]))
def showInstructions():
"""
show the user his interface and the possible commands
input:
none
output:
show the pssoble commands and parameters to the user
"""
# print a main menu and the commands
print_lines("RPG Game",
"========",
"commands:",
"'help' - show the commands",
"'exit' - exit the game, you can save your character",
"'save' - save the game to continue it later",
"'status' - show the players character",
"'mission' - show the mission of the game",
"'go [north, east, south, west, up, down]'",
"'get [item]'",
"'use [item]'",
"'drop [item]'",
"'fight [person]'",
"'credits'")
def showStatus(currentRoom, rooms, turn, inventory, torch, history, playerstatus):
"""
the user can see in which room he is standing
also his inventory is shown to him
the persons in the room
the last point are the possible directions where he can go
input:
none
output:
five lines:
place
inventory
torch burn duration
persons
possible directions
"""
# print the player's current status
print("---------------------------")
print("you are in the " + rooms[currentRoom]["name"])
# print the current inventory
print("inventory: " + str(inventory))
#show the torch's burn duration
if torch == 0:
print("You have no lit torch")
else:
print("Your torch will burn for: " + str(torch) + " turns!")
# Triggercheck: check if room is too dark to see
triggercheck = rooms[currentRoom].get("trigger")
if triggercheck is not None and torch == 0:
if "dark" in triggercheck:
print("It's too dark in here, you should use a torch to lighten up a bit")
else:
#show descriptions for the room
if "detail" in rooms[currentRoom]:
print(rooms[currentRoom]["detail"])
# print an item if there is one
if "item" in rooms[currentRoom] and rooms[currentRoom]['item'] != []:
print("you see: " + str(rooms[currentRoom]["item"]))
# print POI if there is one
if "person" in rooms[currentRoom]:
print("you see: " + rooms[currentRoom]["person"][0])
if rooms[currentRoom]["person"][0] == "princess":
print_lines("you won the game!",
"you played " + str(turn) + " turn(s)")
write_history(history, 'won the game: ' + str(turn) + ' turn(s)')
# ask the player to save the character
print('want to save your character? (Y/N)')
decision = input('>').lower()
decision = decision.lower()
# write the command to the history
write_history(history, decision)
if decision == 'y' or decision == 'yes' or decision == 'z':
# save the character status
fct_save_game(2, playerstatus, rooms, currentRoom, inventory, turn)
else:
print('character not saved')
# ask the player to replay the game
print('want to replay? (Y/N)')
decision = input('>').lower()
decision = decision.lower()
# write the command to the history
write_history(history, 'replay: ' + decision)
if decision == 'y' or decision == 'yes' or decision == 'z':
# start the game from the beginning
main_RPG.fct_main()
else:
print('goodbye')
raise SystemExit
# print other accessible rooms
CurRoom = []
for x in rooms[currentRoom]:
if x in parameter_RPG.directions:
if not 'hidden' in rooms[currentRoom].get(x):
CurRoom.append(x)
if len(CurRoom) == 1:
print("There's a door leading: " + str(CurRoom))
elif len(CurRoom) == 0:
print("There are no doors you can see!")
elif len(CurRoom) > 1:
print("There are doors leading: " + str(CurRoom))
print("---------------------------")
def generate_char(name):
playerstatus_dummy = {
"name" : [],
"clever" : [],
"social" : [],
"strong" : [],
"fast" : [],
"life" : [],
"tricks" : [],
"talents" : [],
"pack" : [],
"pros" : [],
"cons" : []
}
print("name your hero please:")
playerstatus_dummy["name"] = input(">")
# write the command to the history
write_history(name, "name your hero please: " + playerstatus_dummy["name"])
value = 0
while (value != 8):
value = 0
print_lines("you can distribute 8 points onto the following 4 attributes:\n",
"clever, social, strong, fast",
"seperate them by comma (eg:2,2,3,1)",
"no atttribute should have more than 3 points")
"""
# if german
print_lines("du kannst 8 Punkte auf die folgenden 4 Attribute verteilen:\n",
"clever, sozial, stark, schnell",
"trenne sie mit Komma (z.B.: 2,2,3,1)",
"keiner der Attribute dar mehr als 3 PUnkte haben")
"""
data = input(">")
data = data.split(',')
# write the command to the history
write_history(name, 'values: ' + str(data))
for index in range(4):
# check if the values from the user are between 0 and 3
if int(data[index]) <= 3 and int(data[index]) >= 0:
value = value + int(data[index])
if value != 8:
print("you distributed the values false")
else:
playerstatus_dummy["clever"] = int(data[0])
playerstatus_dummy["social"] = int(data[1])
playerstatus_dummy["strong"] = int(data[2])
playerstatus_dummy["fast"] = int(data[3])
playerstatus_dummy["life"] = int(data[2])*3
print("your char was created, now the game can begin")
return(playerstatus_dummy)
def fct_rooms():
print("using default")
# a dictionary linking a room to other positions
rooms = {
00:{ "mission_eng" : "find the princess",
"mission_ger" : "finde die Prinzessin"},
11:{ "name" : "hall",
"east" : [12,'opened'],
"south": [13,'opened'],
"up" : [21,'opened'],
"item" : ["torch"],
"room" : [1,1]},
12:{ "name" : "bedroom",
"west" : [11,'opened'],
"south": [14,'opened'],
"room" : [5,1]},
13:{ "name" : "kitchen",
"north": [11,'opened'],
"item" : ["sword"],
"trigger": ["dark"],
"room" : [1,5]},
14:{ "name" : "bathroom",
"detail":"You see traces of a fight, the sink is broken.",
"north": [12,'opened'],
"item" : ["soap"],
"room" : [5,5]},
21:{ "name" : "staircase",
"detail":"You see a dusty old bookshelf.",
"east" : [22,'opened'],
"south": [23,'opened','hidden','book'],
"down" : [11,'opened'],
"item" : ["torch"],
"room" : [1,1]},
22:{ "name" : "corridor",
"west" : [21,'opened'],
"south": [24,'opened'],
"up" : [32,'locked'],
"item" : ["torch"],
"person": ["bat",1],
"room" : [5,1]},
23:{ "name" : "terrace",
"north": [21,'opened'],
"trigger": ["dark"],
"person": ["bat",1],
"item" : ["key"],
"room" : [1,5]},
24:{ "name" : "study",
"north": [22,'opened'],
"item" : ["book"],
"room" : [5,5]},
32:{ "name" : "towerroom",
"down" : [22,'locked'],
"person" : ["princess",0],
"room" : [5,1]}
}
return(rooms)
def fct_move(parameter, currentRoom, rooms, inventory, name):
    # check that they are allowed to go wherever they want
if parameter in rooms[currentRoom].keys():
# check if the door to the new room is locked
if not "hidden" in rooms[currentRoom][parameter]:
if "locked" in rooms[currentRoom][parameter]:
print("door locked")
if "key" in inventory:
print("want to use the key? [Y/N]")
answer = input(">")
answer = answer.lower()
# write the command to the history
write_history(name, 'want to use the key? ' + answer)
if answer == "y" or answer == "yes" or answer == "z":
print("opens the door with the key")
# change the door property
rooms[currentRoom][parameter][rooms[currentRoom][parameter].index("locked")] = 'opened'
# set the current room to the new room
currentRoom = rooms[currentRoom][parameter][0]
# change the lock of the old room from the new room
other = parameter_RPG.directions[(parameter_RPG.directions.index(parameter)+3)%6]
rooms[currentRoom][other][rooms[currentRoom][other].index("locked")] = 'opened'
else:
# set the current room to the new room
currentRoom = rooms[currentRoom][parameter][0]
else:
#This extra line is needed or else nothing is written in case of a hidden room
print("you can't go that way!")
# if there is no door/link to the new room
else:
print("you can't go that way!")
return(currentRoom)
def fct_get(parameter, currentRoom, rooms, inventory, torch):
#again check if it's too dark
triggercheck = rooms[currentRoom].get("trigger")
if triggercheck is not None and torch == 0:
if "dark" in triggercheck:
print("You can't pick up what you can't see!")
else:
# if the room contains an item, and the item is the one they want to get
if "item" in rooms[currentRoom] and parameter in rooms[currentRoom]["item"]:
# add the item to the inventory
inventory += [parameter]
# display a helpfull message
print(parameter + " got!")
# delete the item from the room
del rooms[currentRoom]["item"][rooms[currentRoom]["item"].index(parameter)]
# otherwise, if the item isn't there to get
else:
# tell them they can't get it
print("can't get " + parameter + "!")
return(inventory)
def fct_fight(parameter, currentRoom, rooms, inventory, turn, torch):
#again check if it's too dark
triggercheck = rooms[currentRoom].get("trigger")
if triggercheck is not None and torch == 0:
if "dark" in triggercheck:
print("You can't fight what you can't see!")
else:
# check if someone is in the room
        # check that they are allowed to fight whoever they want
if "person" in rooms[currentRoom] and parameter in rooms[currentRoom]["person"]:
# if the player has a sword he is better at fighting
if rooms[currentRoom]['person'][1] == 1:
if "sword" in inventory:
if(randint(1,6+1)>2):
print("enemy died")
# if the enemy died delete it from the room
del rooms[currentRoom]["person"]
else:
print("you died")
print("you played " + str(turn) + " turn(s)")
# waits for 10 seconds to close the game
print("the game closes in 10 seconds")
time.sleep(10)
raise SystemExit
else:
if(randint(1,6+1)>4):
print("enemy died")
# if the enemy died delete it from the room
del rooms[currentRoom]["person"]
else:
print("you died")
print("you played " + str(turn) + " turn(s)")
# waits for 10 seconds to close the game
print("the game closes in 10 seconds")
time.sleep(10)
raise SystemExit
else:
print("this person can't be attacked")
else:
print("you are fighting against your own shadow")
def fct_drop(parameter, currentRoom, rooms, inventory):
# look if the player has something to drop
if inventory == [] or inventory.count(parameter) == 0 or inventory[inventory.index(parameter)] != parameter :
print("you can't drop anything")
else:
rooms[currentRoom]["item"] += [parameter]
del inventory[inventory.index(parameter)]
print("you dropped " + parameter + "!")
return(inventory)
def fct_use(parameter, currentRoom, rooms, inventory, torch):
# look if the player has something to use
if inventory == [] or inventory.count(parameter) == 0 or inventory[inventory.index(parameter)] != parameter :
print("you can't use anything")
else:
UsableItems = []
for x in rooms[currentRoom]:
if x is not "item":
if x is not "detail":
if parameter in rooms[currentRoom].get(x):
UsableItems.append(x)
# if the player uses a torch
if parameter == "torch":
if torch == 0:
torch = 3
print("You lit your torch for 3 turns")
else:
torch += 3
print("You extended your torch's burning duration by 3")
del inventory[inventory.index(parameter)]
# if the player uses the soap
if parameter == 'soap':
if "person" in rooms[currentRoom]:
names = parameter_RPG.enemystatus.keys()
# if the princess is in the same room
if rooms[currentRoom]["person"][0] == "princess":
print('the princess is not amused')
# if an enemy is in the same room
elif names.count(rooms[currentRoom]['person'][0]) == 1:
print('the enemy is not amused')
else:
print('washing shadows?')
else:
print('washing yourself for the princess does not change your social status')
del inventory[inventory.index(parameter)]
elif UsableItems != []:
for x in UsableItems:
rooms[currentRoom].get(x).remove(parameter)
rooms[currentRoom].get(x).remove('hidden')
del inventory[inventory.index(parameter)]
print("you used " + parameter + "!")
print("A door has opened!")
else:
print("Using " + parameter + " would have no use!")
return(inventory, torch)
def fct_exit(turn, playerstatus, name):
print_lines("thank you for playing",
"you played " + str(turn) + " turn(s)",
"want to save your char (y/n)?")
answer = input(">")
answer = answer.lower()
# write the command to the history
write_history(name, "want to save your char (y/n)? " + answer)
if answer == 'y' or answer == 'yes' or answer == "z":
print("where do you want to save your char?")
path = getdir(DialogTitle='Select folder:')
        if path == '':
            print('aborted saving')
        else:
            os.chdir(path)
            with open('player_saves.json', 'w', encoding='utf-8') as fp:
                json.dump(playerstatus, fp)
            print("stats saved under: " + path)
raise SystemExit
def fct_save_game(status, playerstatus, rooms, currentRoom, inventory, turn):
# get the localtime variables
localtime = time.localtime(time.time())
# make the save time stamp (year_month_day_hour_min_sec)
save_time = str(localtime.tm_year) +'_'+ str(localtime.tm_mon) +'_'+ str(localtime.tm_mday) +'_'+ str(localtime.tm_hour) +'_'+ str(localtime.tm_min) +'_'+ str(localtime.tm_sec)
    if sys.version_info.major == 2:
        save_time = unicode(save_time, 'utf-8')
# generate the output list
output = []
output.append(rooms)
output.append(playerstatus)
output.append(inventory)
output.append(currentRoom)
output.append(turn)
if sys.version_info.major == 3:
# if called from the auto save (status=1)
if status == 1:
with open('autosave_'+save_time+'.json', 'w', encoding='utf-8') as fp:
json.dump(output, fp)
# if called from the character saving (status=2)
elif status == 2:
with open('charsave_'+save_time+'.json', 'w', encoding='utf-8') as fp:
json.dump(output, fp)
# if called by the user (status=0)
else:
            path = getdir(DialogTitle='Select folder:')
            if path == '':
                print('aborted saving')
            else:
                os.chdir(path)
                with open('player_saves_'+save_time+'.json', 'w', encoding='utf-8') as fp:
                    json.dump(output, fp)
else:
# if called from the auto save (status=1)
if status == 1:
with open('autosave_'+save_time+'.json', 'wb') as fp:
json.dump(output, fp)
# if called from the character saving (status=2)
elif status == 2:
with open('charsave_'+save_time+'.json', 'wb') as fp:
json.dump(output, fp)
# if called by the user (status=0)
else:
            path = getdir(DialogTitle='Select folder:')
            if path == '':
                print('aborted saving')
            else:
                os.chdir(path)
                with open('player_saves_'+save_time+'.json', 'wb') as fp:
                    json.dump(output, fp)
print('game saved')
def fct_load_game():
path = getfile(FilterSpec='.json', DialogTitle='Select file:')
# data[0] = rooms
# data[1] = playerstatus
# data[2] = inventory
# data[3] = currentRoom
# data[4] = turn
# check if the user aborted the search
if path == ('',''):
print('can not load map')
data = ['ERROR',0,0,0,0]
else:
print('load map')
os.chdir(path[1])
with open(path[0], 'r', encoding='utf-8') as fp:
data = json.load(fp)
return(data[0],data[1],data[2],data[3],data[4])
def write_history(name, command):
# append the player's command to the history
with open(name, "a", encoding='utf-8') as historyfile:
        historyfile.write(command + u'\n')
def random_dice(numberdices=6, numberoutput=2, exclusion = ' '):
# if more than 0 dices are used
if numberdices > 0:
# if the output would be larger than the input
# limit the number of dices to the output number
if numberoutput > numberdices:
numberoutput = numberdices
values = []
for i in range(numberdices):
values.append(randint(1,6))
values.sort(reverse=True)
output = []
if numberdices == 1:
return(int(sum(values)))
else:
if exclusion != ' ':
for index in range(numberdices):
if len(output) < numberoutput:
if values[index] != exclusion:
output += [values[index]]
else:
for index in range(numberoutput):
output += [values[index]]
return(int(sum(output)))
# if 0 or less dices are used
else:
return(0)
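# Example usage (hypothetical values): roll four dice, keep the two highest,
# and never count sixes towards the total:
# total = random_dice(numberdices=4, numberoutput=2, exclusion=6)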
def fct_fight_rat(playerstatus, enemystatus, enemy, currentRoom, rooms, name):
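    """
    run the rat fighting system: roll initiative, then alternate turns between
    the player and the enemy until one side runs out of life points
    playerstatus(dict): the player's character values
    enemystatus(dict): values of all possible enemies
    enemy(str): key of the current enemy in enemystatus
    currentRoom(int), rooms(dict): where the fight takes place
    name(str): name of the history file
    """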
# look for any exclusion criteria
if playerstatus["pack"] == 'collector':
player_exclusion_init = 6
else:
player_exclusion_init = ' '
# dice out the initiative of the player and the enemy
init_player = random_dice(numberdices=playerstatus["fast"], numberoutput=playerstatus["fast"], exclusion=player_exclusion_init)
init_enemy = random_dice(numberdices=enemystatus[enemy]["fast"], numberoutput=enemystatus[enemy]["fast"], exclusion=' ')
    # if they rolled the same value
if init_player == init_enemy:
while init_player == init_enemy:
init_player = random_dice(1,1,' ')
init_enemy = random_dice(1,1,' ')
if init_player > init_enemy:
player_turn = True
enemy_turn = False
else:
player_turn = False
enemy_turn = True
# values needed: strong, fast, clever, life
fight_array = ones((2,8))
fight_array[0,0] = playerstatus['strong']
fight_array[0,1] = playerstatus['fast']
fight_array[0,2] = playerstatus['clever']
    fight_array[0,3] = playerstatus['life'] # total life points
    fight_array[0,4] = playerstatus['life'] # current life points
    fight_array[0,5] = 1 # consciousness status
    fight_array[0,6] = 0 # bite-tight status
    fight_array[0,7] = 0 # overpower status
fight_array[1,0] = enemystatus[enemy]['strong']
fight_array[1,1] = enemystatus[enemy]['fast']
fight_array[1,2] = enemystatus[enemy]['clever']
    fight_array[1,3] = enemystatus[enemy]['life'] # total life points
    fight_array[1,4] = enemystatus[enemy]['life'] # current life points
    fight_array[1,5] = 1 # consciousness status
    fight_array[1,6] = 0 # bite-tight status
    fight_array[1,7] = 0 # overpower status
fight_array = fight_array.astype(int)
while fight_array[0,4] != 0 or fight_array[1,4] != 0:
# calculate the malus for the dice results
player_malus = int((fight_array[0,3]-fight_array[0,4])//2)
enemy_malus = int((fight_array[1,3]-fight_array[1,4])//2)
        # if it is the player's turn or the enemy is unconscious:
if player_turn or fight_array[1,5] == 0:
decision = 0
            # if the player has already bitten tight
if fight_array[0,6] == 1:
# attack values of the enemy and the player not necessary,
# because the damage is dealt automatically:
# player = 1
# enemy = 0
fight_array = damage_calculation(currentRoom, rooms, fight_array, player=1, enemy=0)
            # if the player has not bitten tight yet
else:
while decision == 0:
print_lines("what do you want to do?",
"(1) attack", # strong/fast
"(2) bite tight", # strong/fast; dann strong/fast vs. strong/fast
"(3) overwhelm") # strong/fast, strong/clever vs. strong/clever
                    if fight_array[1,6] == 1:
                        print("(4) shake off the bite")
                        if fight_array[1,7] == 1:
                            print("(5) shake off the overwhelming")
                    else:
                        if fight_array[1,7] == 1:
                            print("(4) shake off the overwhelming")
# check if the user has an integer as input
check_integer = False
while check_integer == False:
decision = input(">")
try:
int(decision)
except ValueError:
check_integer = False
print('I only take numbers, nothing else')
else:
check_integer = True
decision = int(decision)
# write the command to the history
                    write_history(name, 'fight: ' + str(decision))
                    # determine the player's and the enemy's contest values
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,1], numberoutput=2, exclusion=' ') - int(player_malus)
                    # if the enemy is not unconscious
if fight_array[1,5] != 0:
                        # probability: 2/3 dodge, 1/3 block
                        # if smaller, dodge: clever/fast
if randint(1,6) < 4:
enemy = random_dice(numberdices=fight_array[1,2]+fight_array[1,1], numberoutput=2, exclusion=' ') - int(enemy_malus)
                        # otherwise block: strong/fast
else:
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,1], numberoutput=2, exclusion=' ') - int(enemy_malus)
                    # otherwise his contest result counts as 0
else:
enemy = 0
if player > enemy:
                        # if the enemy has both overpowered you and bitten tight
if fight_array[1,7] == 1 and fight_array[1,6] == 1:
if decision < 1 or decision > 5:
print("false input")
decision = 0
                        # if the enemy has only bitten tight
elif fight_array[1,6] == 1:
if decision < 1 or decision > 4:
print("false input")
decision = 0
                        # if the enemy causes no hindrance
else:
if decision < 1 or decision > 3:
print("false input")
decision = 0
                        # if the player attacks
if decision == 1:
fight_array = damage_calculation(currentRoom, rooms, fight_array, player, enemy)
                        # if the player wants to bite tight
if decision == 2:
                            # attack successful, apply the damage
fight_array = damage_calculation(currentRoom, rooms, fight_array, player, enemy)
                            # still has to bite tight
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,1], numberoutput=2, exclusion=' ') - int(player_malus)
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,1], numberoutput=2, exclusion=' ') - int(enemy_malus)
                            # if larger, the player has bitten tight
if player > enemy:
fight_array[0,6] = 1
print("you were able to bite tight")
else:
print("you weren't able to bite tight")
                        # if the player wants to overpower the enemy
if decision == 3:
                            # no damage applied yet
                            # still has to overpower the enemy
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,2], numberoutput=2, exclusion=' ') - int(player_malus)
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,2], numberoutput=2, exclusion=' ') - int(enemy_malus)
                            # if larger, the enemy has been overpowered
if player > enemy:
fight_array[0,7] = 1
print("you have overpowered the enemy")
else:
print("you couldn't overpower the enemy")
if decision == 4:
print("you could loose the bite")
if decision == 5:
print("you could loose the overpowering")
else:
print("the enemy has tricked you")
else: # enemy turn
            # the enemy has only one attack per round
attack_used = False
            # if the enemy lost consciousness he has no turn
if fight_array[1,5] == 0:
attack_used = True
            # if the enemy has been overpowered, he tries to free himself
if fight_array[0,7] == 1 and attack_used == False:
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,2], numberoutput=2, exclusion=' ') - int(fight_array[0,0]) - int(enemy_malus)
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,2], numberoutput=2, exclusion=' ') - int(player_malus)
if enemy>player:
print("the enemy has freed himself")
fight_array[0,7] = 0
else:
print("the enemy wasn't able to free himself")
attack_used = True
            # options: release the player's bite, bite tight himself, attack, or overpower
            # if the player has not bitten tight
if fight_array[0,6] == 0:
                # there are only 3 options (attack, bite tight, overpower)
decision = randint(1,3)
            # if the player has bitten tight
else:
                # there are 4 options (attack, bite tight, overpower, release the bite)
decision = randint(1,4)
            # precompute the values for the attack
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,2], numberoutput=2, exclusion=' ') - int(enemy_malus)
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,2], numberoutput=2, exclusion=' ') - int(player_malus)
            # if 1, attack
if decision == 1 and attack_used == False:
if enemy > player:
fight_array = damage_calculation(currentRoom, rooms, fight_array, player, enemy)
print("the enemy has attacked and damaged you")
else:
print("the enemy attacked you but wasn't able to damage you")
attack_used = True
            # if 2, bite tight himself
elif decision == 2 and attack_used == False:
                # attack
if enemy > player:
fight_array = damage_calculation(currentRoom, rooms, fight_array, player, enemy)
print("the enemy attacked you and tries to bite tight")
                    # try to bite tight
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,1], numberoutput=2, exclusion=' ') - int(enemy_malus)
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,1], numberoutput=2, exclusion=' ') - int(player_malus)
if enemy > player:
fight_array[1,6] = 1
print("the enemy was able to bite tight")
else:
print("the enemy wasn't able to bite tight")
else:
print("the enemy attacked you but wasn't able to damage you")
attack_used = True
            # if 3, overpower the player
elif decision == 3 and attack_used == False:
if enemy > player:
print("the enemy attacked you and tries to overpower you")
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,2], numberoutput=2, exclusion=' ') - int(enemy_malus)
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,2], numberoutput=2, exclusion=' ') - int(player_malus)
                    # if larger, the player has been overpowered
if enemy > player:
fight_array[1,7] = 1
print("you were overpowered")
else:
print("you weren't overpowered")
else:
print("the enemy attacked you but wasn't able to damage you")
attack_used = True
            # if 4, release the player's bite
elif decision == 4 and attack_used == False:
enemy = random_dice(numberdices=fight_array[1,0]+fight_array[1,1], numberoutput=2, exclusion=' ') - int(enemy_malus)
player = random_dice(numberdices=fight_array[0,0]+fight_array[0,1], numberoutput=2, exclusion=' ') - int(player_malus)
if enemy > player:
print("the enemy could free himself from your bite")
else:
print("the enemy couldn't free himself from your bite")
attack_used = True
else:
print("you have tricked the enemy")
# switch turns
player_turn = not player_turn
enemy_turn = not enemy_turn
if fight_array[1,4] == 0:
print("the enemy is dead")
else:
print("you died after XXX turns")
def damage_calculation(currentRoom, rooms, fight_array, player, enemy):
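    """
    apply the bite damage of the contest winner to the loser, update the
    unconsciousness/death status and return the updated fight_array
    """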
    if player > enemy:
        # subtract the life points
        fight_array[1,4] = fight_array[1,4] - fight_array[0,0]
        print("you have bitten the enemy")
        # if the enemy's life points drop below 1, something happens
        if fight_array[1,4] < 1:
            # the enemy is unconscious
            fight_array[1,5] = 0
            print("enemy is unconscious")
            # if his life points reach -strength, he dies
            if fight_array[1,4] <= -fight_array[1,0]:
                print("enemy is dead")
                del rooms[currentRoom]["person"]
    else:
        # subtract the life points
        fight_array[0,4] = fight_array[0,4] - fight_array[1,0]
        print("the enemy has bitten you")
        # if the player's life points drop below 1, something happens
        if fight_array[0,4] < 1:
            # the player is unconscious
            fight_array[0,5] = 0
            print("you are unconscious")
            # if his life points reach -strength, he dies
            if fight_array[0,4] <= -fight_array[0,0]:
                print("you are dead")
    return(fight_array)
| apache-2.0 |
NUKnightLab/cityhallmonitor | machinelearning/document_clustering.py | 1 | 8478 | """
http://scikit-learn.org/stable/auto_examples/text/document_clustering.html
adapted to work with documents from City Hall Monitor
=======================================
Clustering text documents using k-means
=======================================
This is an example showing how scikit-learn can be used to cluster
documents by topic using a bag-of-words approach. This example uses
a scipy.sparse matrix to store the features instead of standard numpy arrays.
Two feature extraction methods can be used in this example:
- TfidfVectorizer uses an in-memory vocabulary (a Python dict) to map the most
  frequent words to feature indices and hence compute a word occurrence
frequency (sparse) matrix. The word frequencies are then reweighted using
the Inverse Document Frequency (IDF) vector collected feature-wise over
the corpus.
- HashingVectorizer hashes word occurrences to a fixed dimensional space,
possibly with collisions. The word count vectors are then normalized to
each have l2-norm equal to one (projected to the euclidean unit-ball) which
seems to be important for k-means to work in high dimensional space.
HashingVectorizer does not provide IDF weighting as this is a stateless
model (the fit method does nothing). When IDF weighting is needed it can
be added by pipelining its output to a TfidfTransformer instance.
Two algorithms are demoed: ordinary k-means and its more scalable cousin
minibatch k-means.
Additionally, latent semantic analysis can also be used to reduce dimensionality
and discover latent patterns in the data.
It can be noted that k-means (and minibatch k-means) are very sensitive to
feature scaling and that in this case the IDF weighting helps improve the
quality of the clustering by quite a lot as measured against the "ground truth"
provided by the class label assignments of the 20 newsgroups dataset.
This improvement is not visible in the Silhouette Coefficient, which is small
for both, as this measure seems to suffer from the phenomenon called
"Concentration of Measure" or "Curse of Dimensionality" for high-dimensional
datasets such as text data. Other measures, such as V-measure and Adjusted Rand
Index, are information-theoretic evaluation scores: since they are based only
on cluster assignments rather than distances, they are not affected by the curse
of dimensionality.
Note: as k-means is optimizing a non-convex objective function, it will likely
end up in a local optimum. Several runs with independent random init might be
necessary to get a good convergence.
"""
# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Lars Buitinck <L.J.Buitinck@uva.nl>
# License: BSD 3 clause
from __future__ import print_function
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import logging
from optparse import OptionParser
import sys
from time import time
import numpy as np
import psycopg2
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# parse commandline arguments
op = OptionParser()
op.add_option("--lsa",
dest="n_components", type="int",
help="Preprocess documents with latent semantic analysis.")
op.add_option("--no-minibatch",
action="store_false", dest="minibatch", default=True,
help="Use ordinary k-means algorithm (in batch mode).")
op.add_option("--no-idf",
action="store_false", dest="use_idf", default=True,
help="Disable Inverse Document Frequency feature weighting.")
op.add_option("--use-hashing",
action="store_true", default=False,
help="Use a hashing feature vectorizer")
op.add_option("--n-features", type=int, default=10000,
help="Maximum number of features (dimensions)"
" to extract from text.")
op.add_option("--verbose",
action="store_true", dest="verbose", default=False,
help="Print progress reports inside k-means algorithm.")
print(__doc__)
op.print_help()
(opts, args) = op.parse_args()
if len(args) > 0:
op.error("this script takes no arguments.")
sys.exit(1)
###############################################################################
print("loading city hall monitor documents")
documents = []
con = psycopg2.connect(database='cityhallmonitor',user='cityhallmonitor',password='cityhallmonitor')
cur = con.cursor()
t0 = time()
cur.execute('select text from cityhallmonitor_document')
for row in cur:
documents.append(row[0])
print("%d documents" % len(documents))
print()
print("done in %fs" % (time() - t0))
labels = None # not sure what the analog to labels is for our dataset
true_k = 20 # is there a smarter way to get this from our documents?
print("Extracting features from the training dataset using a sparse vectorizer")
t0 = time()
if opts.use_hashing:
if opts.use_idf:
# Perform an IDF normalization on the output of HashingVectorizer
hasher = HashingVectorizer(n_features=opts.n_features,
stop_words='english', non_negative=True,
norm=None, binary=False)
vectorizer = make_pipeline(hasher, TfidfTransformer())
else:
vectorizer = HashingVectorizer(n_features=opts.n_features,
stop_words='english',
non_negative=False, norm='l2',
binary=False)
else:
vectorizer = TfidfVectorizer(max_df=0.5, max_features=opts.n_features,
min_df=2, stop_words='english',
use_idf=opts.use_idf)
X = vectorizer.fit_transform(documents)
print("done in %fs" % (time() - t0))
print("n_samples: %d, n_features: %d" % X.shape)
print()
if opts.n_components:
print("Performing dimensionality reduction using LSA")
t0 = time()
# Vectorizer results are normalized, which makes KMeans behave as
# spherical k-means for better results. Since LSA/SVD results are
# not normalized, we have to redo the normalization.
svd = TruncatedSVD(opts.n_components)
normalizer = Normalizer(copy=False)
lsa = make_pipeline(svd, normalizer)
X = lsa.fit_transform(X)
print("done in %fs" % (time() - t0))
explained_variance = svd.explained_variance_ratio_.sum()
print("Explained variance of the SVD step: {}%".format(
int(explained_variance * 100)))
print()
###############################################################################
# Do the actual clustering
if opts.minibatch:
km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,
init_size=1000, batch_size=1000, verbose=opts.verbose)
else:
km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1,
verbose=opts.verbose)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(X)
print("done in %0.3fs" % (time() - t0))
print()
if labels:
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels, km.labels_))
print("Completeness: %0.3f" % metrics.completeness_score(labels, km.labels_))
print("V-measure: %0.3f" % metrics.v_measure_score(labels, km.labels_))
print("Adjusted Rand-Index: %.3f"
% metrics.adjusted_rand_score(labels, km.labels_))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, km.labels_, sample_size=1000))
else:
print("Can't compute accuracy metrics without labels")
print()
if not opts.use_hashing:
print("Top terms per cluster:")
if opts.n_components:
original_space_centroids = svd.inverse_transform(km.cluster_centers_)
order_centroids = original_space_centroids.argsort()[:, ::-1]
else:
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(true_k):
print("Cluster %d:" % i, end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
| mit |
miguelfrde/stanford-cs231n | assignment2/cs231n/data_utils.py | 3 | 8331 | from __future__ import print_function
from builtins import range
from six.moves import cPickle as pickle
import numpy as np
import os
from scipy.misc import imread
import platform
def load_pickle(f):
version = platform.python_version_tuple()
if version[0] == '2':
return pickle.load(f)
elif version[0] == '3':
return pickle.load(f, encoding='latin1')
raise ValueError("invalid python version: {}".format(version))
def load_CIFAR_batch(filename):
""" load single batch of cifar """
with open(filename, 'rb') as f:
datadict = load_pickle(f)
X = datadict['data']
Y = datadict['labels']
X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("float")
Y = np.array(Y)
return X, Y
def load_CIFAR10(ROOT):
""" load all of cifar """
xs = []
ys = []
for b in range(1,6):
f = os.path.join(ROOT, 'data_batch_%d' % (b, ))
X, Y = load_CIFAR_batch(f)
xs.append(X)
ys.append(Y)
Xtr = np.concatenate(xs)
Ytr = np.concatenate(ys)
del X, Y
Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch'))
return Xtr, Ytr, Xte, Yte
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000,
subtract_mean=True):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for classifiers. These are the same steps as we used for the SVM, but
condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
if subtract_mean:
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Transpose so that channels come first
X_train = X_train.transpose(0, 3, 1, 2).copy()
X_val = X_val.transpose(0, 3, 1, 2).copy()
X_test = X_test.transpose(0, 3, 1, 2).copy()
# Package data into a dictionary
return {
'X_train': X_train, 'y_train': y_train,
'X_val': X_val, 'y_val': y_val,
'X_test': X_test, 'y_test': y_test,
}
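# Hypothetical usage sketch:
# data = get_CIFAR10_data()
# print('Train:', data['X_train'].shape, 'Val:', data['X_val'].shape)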
def load_tiny_imagenet(path, dtype=np.float32, subtract_mean=True):
"""
Load TinyImageNet. Each of TinyImageNet-100-A, TinyImageNet-100-B, and
TinyImageNet-200 have the same directory structure, so this can be used
to load any of them.
Inputs:
- path: String giving path to the directory to load.
- dtype: numpy datatype used to load the data.
- subtract_mean: Whether to subtract the mean training image.
Returns: A dictionary with the following entries:
- class_names: A list where class_names[i] is a list of strings giving the
WordNet names for class i in the loaded dataset.
- X_train: (N_tr, 3, 64, 64) array of training images
- y_train: (N_tr,) array of training labels
- X_val: (N_val, 3, 64, 64) array of validation images
- y_val: (N_val,) array of validation labels
- X_test: (N_test, 3, 64, 64) array of testing images.
- y_test: (N_test,) array of test labels; if test labels are not available
(such as in student code) then y_test will be None.
- mean_image: (3, 64, 64) array giving mean training image
"""
# First load wnids
with open(os.path.join(path, 'wnids.txt'), 'r') as f:
wnids = [x.strip() for x in f]
# Map wnids to integer labels
wnid_to_label = {wnid: i for i, wnid in enumerate(wnids)}
# Use words.txt to get names for each class
with open(os.path.join(path, 'words.txt'), 'r') as f:
wnid_to_words = dict(line.split('\t') for line in f)
for wnid, words in wnid_to_words.items():
wnid_to_words[wnid] = [w.strip() for w in words.split(',')]
class_names = [wnid_to_words[wnid] for wnid in wnids]
# Next load training data.
X_train = []
y_train = []
for i, wnid in enumerate(wnids):
if (i + 1) % 20 == 0:
print('loading training data for synset %d / %d'
% (i + 1, len(wnids)))
# To figure out the filenames we need to open the boxes file
boxes_file = os.path.join(path, 'train', wnid, '%s_boxes.txt' % wnid)
with open(boxes_file, 'r') as f:
filenames = [x.split('\t')[0] for x in f]
num_images = len(filenames)
X_train_block = np.zeros((num_images, 3, 64, 64), dtype=dtype)
y_train_block = wnid_to_label[wnid] * \
np.ones(num_images, dtype=np.int64)
for j, img_file in enumerate(filenames):
img_file = os.path.join(path, 'train', wnid, 'images', img_file)
img = imread(img_file)
if img.ndim == 2:
## grayscale file
img.shape = (64, 64, 1)
X_train_block[j] = img.transpose(2, 0, 1)
X_train.append(X_train_block)
y_train.append(y_train_block)
# We need to concatenate all training data
X_train = np.concatenate(X_train, axis=0)
y_train = np.concatenate(y_train, axis=0)
# Next load validation data
with open(os.path.join(path, 'val', 'val_annotations.txt'), 'r') as f:
img_files = []
val_wnids = []
for line in f:
img_file, wnid = line.split('\t')[:2]
img_files.append(img_file)
val_wnids.append(wnid)
num_val = len(img_files)
y_val = np.array([wnid_to_label[wnid] for wnid in val_wnids])
X_val = np.zeros((num_val, 3, 64, 64), dtype=dtype)
for i, img_file in enumerate(img_files):
img_file = os.path.join(path, 'val', 'images', img_file)
img = imread(img_file)
if img.ndim == 2:
img.shape = (64, 64, 1)
X_val[i] = img.transpose(2, 0, 1)
# Next load test images
# Students won't have test labels, so we need to iterate over files in the
# images directory.
img_files = os.listdir(os.path.join(path, 'test', 'images'))
X_test = np.zeros((len(img_files), 3, 64, 64), dtype=dtype)
for i, img_file in enumerate(img_files):
img_file = os.path.join(path, 'test', 'images', img_file)
img = imread(img_file)
if img.ndim == 2:
img.shape = (64, 64, 1)
X_test[i] = img.transpose(2, 0, 1)
y_test = None
y_test_file = os.path.join(path, 'test', 'test_annotations.txt')
if os.path.isfile(y_test_file):
with open(y_test_file, 'r') as f:
img_file_to_wnid = {}
for line in f:
line = line.split('\t')
img_file_to_wnid[line[0]] = line[1]
y_test = [wnid_to_label[img_file_to_wnid[img_file]]
for img_file in img_files]
y_test = np.array(y_test)
mean_image = X_train.mean(axis=0)
if subtract_mean:
X_train -= mean_image[None]
X_val -= mean_image[None]
X_test -= mean_image[None]
return {
'class_names': class_names,
'X_train': X_train,
'y_train': y_train,
'X_val': X_val,
'y_val': y_val,
'X_test': X_test,
'y_test': y_test,
'class_names': class_names,
'mean_image': mean_image,
}
def load_models(models_dir):
"""
Load saved models from disk. This will attempt to unpickle all files in a
directory; any files that give errors on unpickling (such as README.txt)
will be skipped.
Inputs:
- models_dir: String giving the path to a directory containing model files.
Each model file is a pickled dictionary with a 'model' field.
Returns:
A dictionary mapping model file names to models.
"""
models = {}
for model_file in os.listdir(models_dir):
with open(os.path.join(models_dir, model_file), 'rb') as f:
try:
models[model_file] = load_pickle(f)['model']
except pickle.UnpicklingError:
continue
return models
| mit |
kozo2/metmask | setup.py | 1 | 1394 | #!/usr/bin/env python
from setuptools import setup
import os
import metmask
import metmask.parse
files = ["data/*"]
setup(name='metmask',
version=metmask.__version__,
description='A program for masking metabolite identifiers',
author='Henning Redestig',
author_email='henning.red@gmail.com',
url='http://metmask.sourceforge.net',
requires=['sqlite3'],
platforms=['Linux', 'WinXP'],
classifiers = [
'Environment :: Console',
'Intended Audience :: Science/Research',
'Natural Language :: English',
'Operating System :: POSIX',
'Programming Language :: Python',
'Topic :: Scientific/Engineering :: Chemistry',
'Topic :: Scientific/Engineering :: Bioinformatics',
],
license='OSI Approved :: GNU General Public License (GPL)',
packages=['metmask', 'metmask.parse'],
long_description="""
This is a package for creating, maintaining and querying a
database with metabolite identifiers. Focused on mapping analyte
identifiers to the original identifiers of the parent metabolite
in order to facilitate biological interpretation of metabolomics
datasets. Provides automated import for several sources such as
KEGG, PlantCyc and the NIST library.
""",
package_data={'metmask': files},
scripts=['scripts/metmask'])
| gpl-3.0 |
h2oai/h2o | py/testdir_multi_jvm/test_impute_with_na.py | 8 | 8524 | import unittest, time, sys, random
sys.path.extend(['.','..','../..','py'])
import h2o, h2o_cmd, h2o_glm, h2o_import as h2i, h2o_jobs, h2o_exec as h2e, h2o_util, h2o_browse as h2b
print "Put some NAs in covtype then impute with the 3 methods"
print "Don't really understand the group_by. Randomly put some columns in there"
DO_POLL = False
AVOID_BUG = True
class Basic(unittest.TestCase):
def tearDown(self):
h2o.check_sandbox_for_errors()
@classmethod
def setUpClass(cls):
global SEED
SEED = h2o.setup_random_seed()
h2o.init(1, java_heap_GB=4)
@classmethod
def tearDownClass(cls):
# h2o.sleep(3600)
h2o.tear_down_cloud()
def test_impute_with_na(self):
h2b.browseTheCloud()
csvFilename = 'covtype.data'
csvPathname = 'standard/' + csvFilename
hex_key = "covtype.hex"
parseResult = h2i.import_parse(bucket='home-0xdiag-datasets', path=csvPathname, hex_key=hex_key, schema='local', timeoutSecs=20)
print "Just insert some NAs and see what happens"
inspect = h2o_cmd.runInspect(key=hex_key)
origNumRows = inspect['numRows']
origNumCols = inspect['numCols']
missing_fraction = 0.5
# NOT ALLOWED TO SET AN ENUM COL?
if 1==0:
# since insert missing values (below) doesn't insert NA into enum rows, make it NA with exec?
# just one in row 1
for enumCol in enumColList:
print "hack: Putting NA in row 0 of col %s" % enumCol
execExpr = '%s[1, %s+1] = NA' % (hex_key, enumCol)
h2e.exec_expr(execExpr=execExpr, timeoutSecs=10)
inspect = h2o_cmd.runInspect(key=hex_key)
missingValuesList = h2o_cmd.infoFromInspect(inspect)
print "missingValuesList after exec:", missingValuesList
if len(missingValuesList) != len(enumColList):
raise Exception ("Didn't get missing values in expected number of cols: %s %s" % (enumColList, missingValuesList))
for trial in range(1):
# copy the dataset
hex_key2 = 'c.hex'
execExpr = '%s = %s' % (hex_key2, hex_key)
h2e.exec_expr(execExpr=execExpr, timeoutSecs=10)
imvResult = h2o.nodes[0].insert_missing_values(key=hex_key2, missing_fraction=missing_fraction, seed=SEED)
print "imvResult", h2o.dump_json(imvResult)
# maybe make the output col a factor column
# maybe one of the 0,1 cols too?
# java.lang.IllegalArgumentException: Method `mode` only applicable to factor columns.
# ugh. ToEnum2 and ToInt2 take 1-based column indexing. This should really change back to 0 based for h2o-dev? (like Exec3)
print "Doing the ToEnum2 AFTER the NA injection, because h2o doesn't work right if we do it before"
expectedMissing = missing_fraction * origNumRows # per col
enumColList = [49, 50, 51, 52, 53, 54]
for e in enumColList:
enumResult = h2o.nodes[0].to_enum(src_key=hex_key2, column_index=(e+1))
inspect = h2o_cmd.runInspect(key=hex_key2)
numRows = inspect['numRows']
numCols = inspect['numCols']
self.assertEqual(origNumRows, numRows)
self.assertEqual(origNumCols, numCols)
missingValuesList = h2o_cmd.infoFromInspect(inspect)
print "missingValuesList", missingValuesList
# this is an approximation because we can't force an exact # of missing using insert_missing_values
if len(missingValuesList) != numCols:
raise Exception ("Why is missingValuesList not right afer ToEnum2?: %s %s" % (enumColList, missingValuesList))
for mv in missingValuesList:
h2o_util.assertApproxEqual(mv, expectedMissing, rel=0.1 * mv,
msg='mv %s is not approx. expected %s' % (mv, expectedMissing))
summaryResult = h2o_cmd.runSummary(key=hex_key2)
h2o_cmd.infoFromSummary(summaryResult)
# h2o_cmd.infoFromSummary(summaryResult)
print "I don't understand why the values don't increase every iteration. It seems to stay stuck with the first effect"
print "trial", trial
print "expectedMissing:", expectedMissing
print "Now get rid of all the missing values, by imputing means. We know all columns should have NAs from above"
print "Do the columns in random order"
# don't do the enum cols ..impute doesn't support right?
if AVOID_BUG:
shuffledColList = range(0,49) # 0 to 48
execExpr = '%s = %s[,1:49]' % (hex_key2, hex_key2)
h2e.exec_expr(execExpr=execExpr, timeoutSecs=10)
# summaryResult = h2o_cmd.runSummary(key=hex_key2)
# h2o_cmd.infoFromSummary(summaryResult)
inspect = h2o_cmd.runInspect(key=hex_key2)
numCols = inspect['numCols']
missingValuesList = h2o_cmd.infoFromInspect(inspect)
print "missingValuesList after impute:", missingValuesList
if len(missingValuesList) != 49:
raise Exception ("expected missing values in all cols after pruning enum cols: %s" % missingValuesList)
else:
shuffledColList = range(0,55) # 0 to 54
origInspect = inspect
random.shuffle(shuffledColList)
for column in shuffledColList:
# get a random set of column. no duplicate. random order? 0 is okay? will be []
groupBy = random.sample(range(55), random.randint(0, 54))
# header names start with 1, not 0. Empty string if []
groupByNames = ",".join(map(lambda x: "C" + str(x+1), groupBy))
# what happens if column and groupByNames overlap?? Do we loop here and choose until no overlap
columnName = "C%s" % (column + 1)
print "don't use mode if col isn't enum"
badChoices = True
while badChoices:
method = random.choice(["mean", "median", "mode"])
badChoices = column not in enumColList and method=="mode"
NEWSEED = random.randint(0, sys.maxint)
print "does impute modify the source key?"
# we get h2o error (argument exception) if no NAs
impResult = h2o.nodes[0].impute(source=hex_key2, column=column, method=method)
print "Now check that there are no missing values"
print "FIX! broken..insert missing values doesn't insert NAs in enum cols"
inspect = h2o_cmd.runInspect(key=hex_key2)
numRows2 = inspect['numRows']
numCols2 = inspect['numCols']
            self.assertEqual(numRows, numRows2, "impute shouldn't have changed frame numRows: %s %s" % (numRows, numRows2))
            self.assertEqual(numCols, numCols2, "impute shouldn't have changed frame numCols: %s %s" % (numCols, numCols2))
# check that the mean didn't change for the col
# the enum cols with mode, we'll have to think of something else
missingValuesList = h2o_cmd.infoFromInspect(inspect)
print "missingValuesList after impute:", missingValuesList
if missingValuesList:
raise Exception ("Not expecting any missing values after imputing all cols: %s" % missingValuesList)
cols = inspect['cols']
origCols = origInspect['cols']
print "\nFIX! ignoring these errors. have to figure out why."
for i, (c, oc) in enumerate(zip(cols, origCols)):
# I suppose since we impute to either median or mean, we can't assume the mean stays the same
# but for this tolerance it's okay (maybe a different dataset, that wouldn't be true
### h2o_util.assertApproxEqual(c['mean'], oc['mean'], tol=0.000000001,
### msg="col %i original mean: %s not equal to mean after impute: %s" % (i, c['mean'], oc['mean']))
if not h2o_util.approxEqual(oc['mean'], c['mean'], tol=0.000000001):
msg = "col %i original mean: %s not equal to mean after impute: %s" % (i, oc['mean'], c['mean'])
print msg
if __name__ == '__main__':
h2o.unit_main()
| apache-2.0 |
Lab41/pelops | pelops/models/makesvm.py | 3 | 3700 | """ work with SVM and chips """
import time
import sklearn
from scipy.stats import uniform as sp_rand
from sklearn import svm
from sklearn.externals import joblib
from sklearn.model_selection import RandomizedSearchCV
from tqdm import tnrange
from pelops.analysis.camerautil import get_match_id, make_good_bad
from pelops.analysis.comparecameras import make_work
def train_svm(examples, fd_train, eg_train):
"""
train a support vector machine
examples(int): number of examples to generate
fd_train(featureDataset): where to join features to chips
eg_train(experimentGenerator): makes experiments
    clf(SVM): SVM classifier trained on the input examples
"""
lessons_train = list()
outcomes_train = list()
for _ in tnrange(examples):
cameras_train = eg_train.generate()
match_id = get_match_id(cameras_train)
goods, bads = make_good_bad(cameras_train, match_id)
make_work(fd_train, lessons_train, outcomes_train, goods, 1)
make_work(fd_train, lessons_train, outcomes_train, bads, 0)
clf = svm.SVC()
print('fitting')
start = time.time()
clf.fit(lessons_train, outcomes_train)
end = time.time()
print('fitting took {} seconds'.format(end - start))
return clf
def search(examples, fd_train, eg_train, iterations):
"""
    beginnings of a hyperparameter search for the SVM
"""
param_grid = {'C': sp_rand()}
lessons_train = list()
outcomes_train = list()
for _ in tnrange(examples):
cameras_train = eg_train.generate()
match_id = get_match_id(cameras_train)
goods, bads = make_good_bad(cameras_train, match_id)
make_work(fd_train, lessons_train, outcomes_train, goods, 1)
make_work(fd_train, lessons_train, outcomes_train, bads, 0)
clf = svm.SVC()
print('searching')
start = time.time()
rsearch = RandomizedSearchCV(
estimator=clf, param_distributions=param_grid, n_iter=iterations)
rsearch.fit(lessons_train, outcomes_train)
end = time.time()
print('searching took {} seconds'.format(end - start))
print(rsearch.best_score_)
print(rsearch.best_estimator_.C)
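# Note (added comment): in search() above, sp_rand() with default arguments samples
# C uniformly from [0, 1); pass e.g. sp_rand(loc=0.1, scale=10) to explore a wider range.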
def save_model(model, filename):
"""
save a model to disk
model(somemodel): trained model to save
filename(str): location to save the model
"""
joblib.dump(model, filename)
def load_model(filename):
"""
load a model from disk. Only models saved with version
0.18.1 of sklearn are loaded, as models from other versions
may not load correctly
filename(str): name of file to load
"""
if sklearn.__version__ == '0.18.1':
model = joblib.load(filename)
return model
else:
print('upgrade sklearn to version 0.18.1')
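# Illustrative sketch (not part of the original module): round-trip a small
# classifier through save_model/load_model. The iris data and file name are
# assumptions used only for this demo.
def _demo_save_and_load(path='svc_demo.pkl'):
    from sklearn.datasets import load_iris
    iris = load_iris()
    clf = svm.SVC().fit(iris.data, iris.target)
    save_model(clf, path)
    restored = load_model(path)  # returns None unless sklearn is version 0.18.1
    if restored is not None:
        print('restored model accuracy: {}'.format(restored.score(iris.data, iris.target)))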
def test_svm(examples, clf_train, fd_test, eg_test):
"""
score the trained SVM against test features
examples(int): number of examples to run
clf_train(model): model for evaluating testing data
fd_test(featureDataset): testing dataset
eg_test(experimentGenerator): generated experiments from testing dataset
out(float): score from the model
"""
lessons_test = list()
outcomes_test = list()
for _ in tnrange(examples):
cameras_test = eg_test.generate()
match_id = get_match_id(cameras_test)
goods, bads = make_good_bad(cameras_test, match_id)
make_work(fd_test, lessons_test, outcomes_test, goods, 1)
make_work(fd_test, lessons_test, outcomes_test, bads, 0)
print('scoring')
start = time.time()
out = clf_train.score(lessons_test, outcomes_test)
end = time.time()
print('scoring took {} seconds'.format(end - start))
return out
| apache-2.0 |
simmetria/sentry | src/sentry/utils/javascript.py | 1 | 3562 | """
sentry.utils.javascript
~~~~~~~~~~~~~~~~~~~~~~~
:copyright: (c) 2010-2012 by the Sentry Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
from django.core.urlresolvers import reverse
from django.utils.html import escape
from sentry.constants import STATUS_RESOLVED
from sentry.models import Group, GroupBookmark
from sentry.templatetags.sentry_plugins import get_tags
from sentry.utils import json
transformers = {}
def transform(objects, request=None):
if not objects:
return objects
elif not isinstance(objects, (list, tuple)):
return transform([objects], request=request)[0]
# elif isinstance(obj, dict):
# return dict((k, transform(v, request=request)) for k, v in obj.iteritems())
t = transformers.get(type(objects[0]))
if t:
t.attach_metadata(objects, request=request)
return [t(o, request=request) for o in objects]
return objects
def to_json(obj, request=None):
result = transform(obj, request=request)
return json.dumps(result)
def register(type):
def wrapped(cls):
transformers[type] = cls()
return cls
return wrapped
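# Illustrative sketch (hypothetical model, not part of this module): the decorator
# binds a transformer to a type so transform()/to_json() can serialize instances of it.
#
# @register(Project)
# class ProjectTransformer(Transformer):
#     def transform(self, obj, request=None):
#         return {'id': str(obj.id), 'slug': obj.slug}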
class Transformer(object):
def __call__(self, obj, request=None):
return self.transform(obj, request)
def attach_metadata(self, objects, request=None):
pass
def transform(self, obj, request=None):
return {}
@register(Group)
class GroupTransformer(Transformer):
def attach_metadata(self, objects, request=None):
from sentry.templatetags.sentry_plugins import handle_before_events
if request and objects:
handle_before_events(request, objects)
if request and request.user.is_authenticated() and objects:
bookmarks = set(GroupBookmark.objects.filter(
user=request.user,
group__in=objects,
).values_list('group_id', flat=True))
else:
bookmarks = set()
if objects:
historical_data = Group.objects.get_chart_data_for_group(
instances=objects,
max_days=1,
key='group',
)
else:
historical_data = {}
for g in objects:
g.is_bookmarked = g.pk in bookmarks
g.historical_data = [x[1] for x in historical_data.get(g.id, [])]
def transform(self, obj, request=None):
d = {
'id': str(obj.id),
'count': str(obj.times_seen),
'title': escape(obj.message_top()),
'message': escape(obj.error()),
'level': obj.level,
'levelName': escape(obj.get_level_display()),
'logger': escape(obj.logger),
'permalink': reverse('sentry-group', args=[obj.project.slug, obj.id]),
'versions': list(obj.get_version() or []),
'lastSeen': obj.last_seen.isoformat(),
'timeSpent': obj.avg_time_spent,
'canResolve': request and request.user.is_authenticated(),
'isResolved': obj.status == STATUS_RESOLVED,
'score': getattr(obj, 'sort_value', 0),
'project': {
'name': obj.project.name,
'slug': obj.project.slug,
},
}
if hasattr(obj, 'is_bookmarked'):
d['isBookmarked'] = obj.is_bookmarked
if hasattr(obj, 'historical_data'):
d['historicalData'] = obj.historical_data
if request:
d['tags'] = list(get_tags(obj, request))
return d
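# Illustrative usage (sketch; 'project' is an assumed variable): serialize a batch
# of groups for the UI.
#
#   groups = list(Group.objects.filter(project=project)[:10])
#   payload = to_json(groups, request)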
| bsd-3-clause |
herilalaina/scikit-learn | sklearn/__init__.py | 7 | 5276 | """
Machine learning module for Python
==================================
sklearn is a Python module integrating classical machine
learning algorithms in the tightly-knit world of scientific Python
packages (numpy, scipy, matplotlib).
It aims to provide simple and efficient solutions to learning problems
that are accessible to everybody and reusable in various contexts:
machine-learning as a versatile tool for science and engineering.
See http://scikit-learn.org for complete documentation.
"""
import sys
import re
import warnings
import os
from contextlib import contextmanager as _contextmanager
import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
_ASSUME_FINITE = bool(os.environ.get('SKLEARN_ASSUME_FINITE', False))
def get_config():
"""Retrieve current values for configuration set by :func:`set_config`
Returns
-------
config : dict
Keys are parameter names that can be passed to :func:`set_config`.
"""
return {'assume_finite': _ASSUME_FINITE}
def set_config(assume_finite=None):
"""Set global scikit-learn configuration
Parameters
----------
assume_finite : bool, optional
If True, validation for finiteness will be skipped,
saving time, but leading to potential crashes. If
False, validation for finiteness will be performed,
avoiding error.
"""
global _ASSUME_FINITE
if assume_finite is not None:
_ASSUME_FINITE = assume_finite
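# Illustrative usage (sketch): toggle the flag globally and read it back.
#
#   import sklearn
#   sklearn.set_config(assume_finite=True)
#   assert sklearn.get_config()['assume_finite'] is True
#   sklearn.set_config(assume_finite=False)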
@_contextmanager
def config_context(**new_config):
"""Context manager for global scikit-learn configuration
Parameters
----------
assume_finite : bool, optional
If True, validation for finiteness will be skipped,
saving time, but leading to potential crashes. If
False, validation for finiteness will be performed,
avoiding error.
Notes
-----
All settings, not just those presently modified, will be returned to
their previous values when the context manager is exited. This is not
thread-safe.
Examples
--------
>>> import sklearn
>>> from sklearn.utils.validation import assert_all_finite
>>> with sklearn.config_context(assume_finite=True):
... assert_all_finite([float('nan')])
>>> with sklearn.config_context(assume_finite=True):
... with sklearn.config_context(assume_finite=False):
... assert_all_finite([float('nan')])
... # doctest: +ELLIPSIS
Traceback (most recent call last):
...
ValueError: Input contains NaN, ...
"""
old_config = get_config().copy()
set_config(**new_config)
try:
yield
finally:
set_config(**old_config)
# Make sure that DeprecationWarning within this package always gets printed
warnings.filterwarnings('always', category=DeprecationWarning,
module=r'^{0}\.'.format(re.escape(__name__)))
# PEP0440 compatible formatted version, see:
# https://www.python.org/dev/peps/pep-0440/
#
# Generic release markers:
# X.Y
# X.Y.Z # For bugfix releases
#
# Admissible pre-release markers:
# X.YaN # Alpha release
# X.YbN # Beta release
# X.YrcN # Release Candidate
# X.Y # Final release
#
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
__version__ = '0.20.dev0'
try:
# This variable is injected in the __builtins__ by the build
# process. It used to enable importing subpackages of sklearn when
# the binaries are not built
__SKLEARN_SETUP__
except NameError:
__SKLEARN_SETUP__ = False
if __SKLEARN_SETUP__:
sys.stderr.write('Partial import of sklearn during the build process.\n')
# We are not importing the rest of scikit-learn during the build
# process, as it may not be compiled yet
else:
from . import __check_build
from .base import clone
__check_build # avoid flakes unused variable error
__all__ = ['calibration', 'cluster', 'covariance', 'cross_decomposition',
'cross_validation', 'datasets', 'decomposition', 'dummy',
'ensemble', 'exceptions', 'externals', 'feature_extraction',
'feature_selection', 'gaussian_process', 'grid_search',
'isotonic', 'kernel_approximation', 'kernel_ridge',
'learning_curve', 'linear_model', 'manifold', 'metrics',
'mixture', 'model_selection', 'multiclass', 'multioutput',
'naive_bayes', 'neighbors', 'neural_network', 'pipeline',
'preprocessing', 'random_projection', 'semi_supervised',
'svm', 'tree', 'discriminant_analysis',
# Non-modules:
'clone']
def setup_module(module):
"""Fixture for the tests to assure globally controllable seeding of RNGs"""
import os
import numpy as np
import random
# It could have been provided in the environment
_random_seed = os.environ.get('SKLEARN_SEED', None)
if _random_seed is None:
_random_seed = np.random.uniform() * (2 ** 31 - 1)
_random_seed = int(_random_seed)
print("I: Seeding RNGs with %r" % _random_seed)
np.random.seed(_random_seed)
random.seed(_random_seed)
| bsd-3-clause |
shareactorIO/pipeline | clustered.ml/tensorflow/src/mnist_trainer.py | 3 | 4145 | import math
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Flags for defining the tf.train.ClusterSpec
tf.app.flags.DEFINE_string("ps_hosts", "",
["clustered-tensorflow-ps:2222"])
tf.app.flags.DEFINE_string("worker_hosts", "",
["clustered-tensorflow-worker:2222"])
# Flags for defining the tf.train.Server
tf.app.flags.DEFINE_string("job_name", "", "One of 'ps', 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of task within the job")
tf.app.flags.DEFINE_integer("hidden_units", 100,
"Number of units in the hidden layer of the NN")
tf.app.flags.DEFINE_string("data_dir", "datasets/mnist/",
"Directory for storing mnist data")
tf.app.flags.DEFINE_integer("batch_size", 100, "Training batch size")
FLAGS = tf.app.flags.FLAGS
IMAGE_PIXELS = 28
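# Illustrative launch commands (hostnames/ports are assumptions, not taken from this
# repository); every process gets the same cluster flags plus its own job_name/task_index:
#
#   python mnist_trainer.py --ps_hosts=ps0:2222 --worker_hosts=worker0:2222,worker1:2222 --job_name=ps --task_index=0
#   python mnist_trainer.py --ps_hosts=ps0:2222 --worker_hosts=worker0:2222,worker1:2222 --job_name=worker --task_index=0
#   python mnist_trainer.py --ps_hosts=ps0:2222 --worker_hosts=worker0:2222,worker1:2222 --job_name=worker --task_index=1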
def main(_):
ps_hosts = FLAGS.ps_hosts.split(",")
worker_hosts = FLAGS.worker_hosts.split(",")
# Create a cluster from the parameter server and worker hosts.
cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
# Create and start a server for the local task.
server = tf.train.Server(cluster,
job_name=FLAGS.job_name,
task_index=FLAGS.task_index)
if FLAGS.job_name == "ps":
server.join()
elif FLAGS.job_name == "worker":
# Assigns ops to the local worker by default.
with tf.device(tf.train.replica_device_setter(
worker_device="/job:worker/task:%d" % FLAGS.task_index,
cluster=cluster)):
# Variables of the hidden layer
hid_w = tf.Variable(
tf.truncated_normal([IMAGE_PIXELS * IMAGE_PIXELS, FLAGS.hidden_units],
stddev=1.0 / IMAGE_PIXELS), name="hid_w")
hid_b = tf.Variable(tf.zeros([FLAGS.hidden_units]), name="hid_b")
# Variables of the softmax layer
sm_w = tf.Variable(
tf.truncated_normal([FLAGS.hidden_units, 10],
stddev=1.0 / math.sqrt(FLAGS.hidden_units)),
name="sm_w")
sm_b = tf.Variable(tf.zeros([10]), name="sm_b")
x = tf.placeholder(tf.float32, [None, IMAGE_PIXELS * IMAGE_PIXELS])
y_ = tf.placeholder(tf.float32, [None, 10])
hid_lin = tf.nn.xw_plus_b(x, hid_w, hid_b)
hid = tf.nn.relu(hid_lin)
y = tf.nn.softmax(tf.nn.xw_plus_b(hid, sm_w, sm_b))
loss = -tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))
global_step = tf.Variable(0)
train_op = tf.train.AdagradOptimizer(0.01).minimize(
loss, global_step=global_step)
saver = tf.train.Saver()
summary_op = tf.merge_all_summaries()
init_op = tf.initialize_all_variables()
# Create a "supervisor", which oversees the training process.
sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),
logdir="train_logs/",
init_op=init_op,
summary_op=summary_op,
saver=saver,
global_step=global_step,
save_model_secs=600)
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
# The supervisor takes care of session initialization, restoring from
# a checkpoint, and closing when done or an error occurs.
with sv.managed_session(server.target) as sess:
# Loop until the supervisor shuts down or 1000000 steps have completed.
step = 0
while not sv.should_stop() and step < 1000000:
# Run a training step asynchronously.
# See `tf.train.SyncReplicasOptimizer` for additional details on how to
# perform *synchronous* training.
batch_xs, batch_ys = mnist.train.next_batch(FLAGS.batch_size)
train_feed = {x: batch_xs, y_: batch_ys}
_, step = sess.run([train_op, global_step], feed_dict=train_feed)
if step % 100 == 0:
print("Done step %d" % step)
# Ask for all the services to stop.
sv.stop()
if __name__ == "__main__":
tf.app.run()
| apache-2.0 |
dwillmer/fastats | tests/maths/correlation/test_pearson.py | 2 | 2117 | import numpy as np
import pandas as pd
from pytest import approx, mark
from fastats.maths.correlation import pearson, pearson_pairwise
from tests.data.datasets import SKLearnDataSets
def test_pearson_uwe_normal_hypervent():
"""
This is a basic sanity test for the Pearson
correlation function based on the example from
UWE:
http://learntech.uwe.ac.uk/da/Default.aspx?pageid=1442
The correlation between normal and hypervent should
be 0.966
"""
normal = np.array([56, 56, 65, 65, 50, 25, 87, 44, 35])
hypervent = np.array([87, 91, 85, 91, 75, 28, 122, 66, 58])
result = pearson(normal, hypervent)
assert result == approx(0.966194346491)
A = np.stack([normal, hypervent]).T
assert pearson_pairwise(A).diagonal(1) == approx(0.966194346491)
def test_pearson_stats_howto():
"""
This is a basic sanity test for the Pearson
correlation based on the example from:
http://www.statisticshowto.com/how-to-compute-pearsons-correlation-coefficients/
"""
age = np.array([43, 21, 25, 42, 57, 59])
glucose = np.array([99, 65, 79, 75, 87, 81])
result = pearson(age, glucose)
assert result == approx(0.529808901890)
A = np.stack([age, glucose]).T
assert pearson_pairwise(A).diagonal(1) == approx(0.529808901890)
def test_pearson_nan_result():
x = np.array([1, 2, 3, 4], dtype='float')
y = np.array([2, 3, 4, 3], dtype='float')
assert pearson(x, y) == approx(0.6324555320)
x[0] = np.nan
assert np.isnan(pearson(x, y))
x[0] = 1.0
y[0] = np.nan
assert np.isnan(pearson(x, y))
y[0] = 2.0
assert pearson(x, y) == approx(0.6324555320)
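def test_pearson_matches_numpy_corrcoef():
    """
    Added cross-check (illustrative): pearson should agree with numpy's
    corrcoef on a simple random sample.
    """
    rng = np.random.RandomState(0)
    x = rng.rand(100)
    y = rng.rand(100)
    assert pearson(x, y) == approx(np.corrcoef(x, y)[0, 1])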
@mark.parametrize('A', SKLearnDataSets)
def test_pearson_pairwise_versus_pandas(A):
"""
This is a check of the pairwise Pearson correlation against
pandas DataFrame corr for an input dataset A.
"""
data = A.value
expected = pd.DataFrame(data).corr(method='pearson').values
output = pearson_pairwise(data)
assert np.allclose(expected, output)
if __name__ == '__main__':
import pytest
pytest.main([__file__])
| mit |
herilalaina/scikit-learn | examples/mixture/plot_concentration_prior.py | 15 | 5696 | """
========================================================================
Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture
========================================================================
This example plots the ellipsoids obtained from a toy dataset (mixture of three
Gaussians) fitted by the ``BayesianGaussianMixture`` class models with a
Dirichlet distribution prior
(``weight_concentration_prior_type='dirichlet_distribution'``) and a Dirichlet
process prior (``weight_concentration_prior_type='dirichlet_process'``). On
each figure, we plot the results for three different values of the weight
concentration prior.
The ``BayesianGaussianMixture`` class can adapt its number of mixture
components automatically. The parameter ``weight_concentration_prior`` has a
direct link with the resulting number of components with non-zero weights.
Specifying a low value for the concentration prior will make the model put most
of the weight on few components set the remaining components weights very close
to zero. High values of the concentration prior will allow a larger number of
components to be active in the mixture.
The Dirichlet process prior allows to define an infinite number of components
and automatically selects the correct number of components: it activates a
component only if it is necessary.
On the contrary the classical finite mixture model with a Dirichlet
distribution prior will favor more uniformly weighted components and therefore
tends to divide natural clusters into unnecessary sub-components.
"""
# Author: Thierry Guillemot <thierry.guillemot.work@gmail.com>
# License: BSD 3 clause
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from sklearn.mixture import BayesianGaussianMixture
print(__doc__)
def plot_ellipses(ax, weights, means, covars):
for n in range(means.shape[0]):
eig_vals, eig_vecs = np.linalg.eigh(covars[n])
unit_eig_vec = eig_vecs[0] / np.linalg.norm(eig_vecs[0])
angle = np.arctan2(unit_eig_vec[1], unit_eig_vec[0])
# Ellipse needs degrees
angle = 180 * angle / np.pi
# eigenvector normalization
eig_vals = 2 * np.sqrt(2) * np.sqrt(eig_vals)
ell = mpl.patches.Ellipse(means[n], eig_vals[0], eig_vals[1],
180 + angle, edgecolor='black')
ell.set_clip_box(ax.bbox)
ell.set_alpha(weights[n])
ell.set_facecolor('#56B4E9')
ax.add_artist(ell)
def plot_results(ax1, ax2, estimator, X, y, title, plot_title=False):
ax1.set_title(title)
ax1.scatter(X[:, 0], X[:, 1], s=5, marker='o', color=colors[y], alpha=0.8)
ax1.set_xlim(-2., 2.)
ax1.set_ylim(-3., 3.)
ax1.set_xticks(())
ax1.set_yticks(())
plot_ellipses(ax1, estimator.weights_, estimator.means_,
estimator.covariances_)
ax2.get_xaxis().set_tick_params(direction='out')
ax2.yaxis.grid(True, alpha=0.7)
for k, w in enumerate(estimator.weights_):
ax2.bar(k, w, width=0.9, color='#56B4E9', zorder=3,
align='center', edgecolor='black')
ax2.text(k, w + 0.007, "%.1f%%" % (w * 100.),
horizontalalignment='center')
ax2.set_xlim(-.6, 2 * n_components - .4)
ax2.set_ylim(0., 1.1)
ax2.tick_params(axis='y', which='both', left='off',
right='off', labelleft='off')
ax2.tick_params(axis='x', which='both', top='off')
if plot_title:
ax1.set_ylabel('Estimated Mixtures')
ax2.set_ylabel('Weight of each component')
# Parameters of the dataset
random_state, n_components, n_features = 2, 3, 2
colors = np.array(['#0072B2', '#F0E442', '#D55E00'])
covars = np.array([[[.7, .0], [.0, .1]],
[[.5, .0], [.0, .1]],
[[.5, .0], [.0, .1]]])
samples = np.array([200, 500, 200])
means = np.array([[.0, -.70],
[.0, .0],
[.0, .70]])
# mean_precision_prior= 0.8 to minimize the influence of the prior
estimators = [
("Finite mixture with a Dirichlet distribution\nprior and "
r"$\gamma_0=$", BayesianGaussianMixture(
weight_concentration_prior_type="dirichlet_distribution",
n_components=2 * n_components, reg_covar=0, init_params='random',
max_iter=1500, mean_precision_prior=.8,
random_state=random_state), [0.001, 1, 1000]),
("Infinite mixture with a Dirichlet process\n prior and" r"$\gamma_0=$",
BayesianGaussianMixture(
weight_concentration_prior_type="dirichlet_process",
n_components=2 * n_components, reg_covar=0, init_params='random',
max_iter=1500, mean_precision_prior=.8,
random_state=random_state), [1, 1000, 100000])]
# Generate data
rng = np.random.RandomState(random_state)
X = np.vstack([
rng.multivariate_normal(means[j], covars[j], samples[j])
for j in range(n_components)])
y = np.concatenate([j * np.ones(samples[j], dtype=int)
for j in range(n_components)])
# Plot results in two different figures
for (title, estimator, concentrations_prior) in estimators:
plt.figure(figsize=(4.7 * 3, 8))
plt.subplots_adjust(bottom=.04, top=0.90, hspace=.05, wspace=.05,
left=.03, right=.99)
gs = gridspec.GridSpec(3, len(concentrations_prior))
for k, concentration in enumerate(concentrations_prior):
estimator.weight_concentration_prior = concentration
estimator.fit(X)
plot_results(plt.subplot(gs[0:2, k]), plt.subplot(gs[2, k]), estimator,
X, y, r"%s$%.1e$" % (title, concentration),
plot_title=k == 0)
plt.show()
| bsd-3-clause |
moonbury/notebooks | github/MasteringMLWithScikit-learn/8365OS_04_Codes/sms.py | 3 | 1322 | """
Best score: 0.992
Best parameters set:
clf__C: 7.0
clf__penalty: 'l2'
vect__max_df: 0.5
vect__max_features: None
vect__ngram_range: (1, 2)
vect__norm: 'l2'
vect__use_idf: True
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model.logistic import LogisticRegression
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('vect', TfidfVectorizer(max_df=0.05, stop_words='english')),
('clf', LogisticRegression())
])
parameters = {
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (10000, 13000, None),
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__use_idf': (True, False),
'vect__norm': ('l1', 'l2'),
'clf__penalty': ('l1', 'l2'),
'clf__C': (3.0, 5.0, 7.0),
}
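# Note (assumption about the data layout): 'sms/sms.csv' is expected to provide the
# raw text in a 'message' column and the spam/ham target in a 'label' column,
# matching the grid_search.fit call below.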
if __name__ == "__main__":
num_jobs = -1
grid_search = GridSearchCV(pipeline, parameters, n_jobs=num_jobs, verbose=1, scoring='roc_auc')
df = pd.read_csv('sms/sms.csv')
grid_search.fit(df['message'], df['label'])
print 'Best score: %0.3f' % grid_search.best_score_
print 'Best parameters set:'
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
| gpl-3.0 |
sangwook236/general-development-and-testing | sw_dev/python/rnd/test/language_processing/hugging_face_transformers_test.py | 2 | 32378 | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
# REF [site] >>
# https://github.com/huggingface/transformers
# https://huggingface.co/transformers/
# https://medium.com/analytics-vidhya/a-comprehensive-guide-to-build-your-own-language-model-in-python-5141b3917d6d
import time
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers import BertTokenizer, BertModel, BertForMaskedLM
from transformers import BertPreTrainedModel
from transformers import BertConfig
from transformers import *
# REF [site] >> https://github.com/huggingface/transformers
def quick_tour():
# Transformers has a unified API for 10 transformer architectures and 30 pretrained weights.
# Model | Tokenizer | Pretrained weights shortcut
MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'),
(OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'),
(GPT2Model, GPT2Tokenizer, 'gpt2'),
(CTRLModel, CTRLTokenizer, 'ctrl'),
(TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'),
(XLNetModel, XLNetTokenizer, 'xlnet-base-cased'),
(XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'),
(DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'),
(RobertaModel, RobertaTokenizer, 'roberta-base'),
(XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'),
]
# To use TensorFlow 2.0 versions of the models, simply prefix the class names with 'TF', e.g. `TFRobertaModel` is the TF 2.0 counterpart of the PyTorch model `RobertaModel`.
# Let's encode some text in a sequence of hidden-states using each model.
for model_class, tokenizer_class, pretrained_weights in MODELS:
# Load pretrained model/tokenizer.
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
# Encode text.
input_ids = torch.tensor([tokenizer.encode('Here is some text to encode', add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
with torch.no_grad():
last_hidden_states = model(input_ids)[0] # Models outputs are now tuples.
#--------------------
# Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g.
BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
BertForSequenceClassification, BertForTokenClassification, BertForQuestionAnswering]
output_dir_path = './directory/to/save'
import os
os.makedirs(output_dir_path, exist_ok=True)
# All the classes for an architecture can be initiated from pretrained weights for this architecture.
# Note that additional weights added for fine-tuning are only initialized and need to be trained on the down-stream task.
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
for model_class in BERT_MODEL_CLASSES:
# Load pretrained model/tokenizer.
model = model_class.from_pretrained(pretrained_weights)
# Models can return full list of hidden-states & attentions weights at each layer.
model = model_class.from_pretrained(pretrained_weights, output_hidden_states=True, output_attentions=True)
input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")])
all_hidden_states, all_attentions = model(input_ids)[-2:]
# Models are compatible with Torchscript.
model = model_class.from_pretrained(pretrained_weights, torchscript=True)
traced_model = torch.jit.trace(model, (input_ids,))
# Simple serialization for models and tokenizers.
model.save_pretrained(output_dir_path) # Save.
model = model_class.from_pretrained(output_dir_path) # Re-load.
tokenizer.save_pretrained(output_dir_path) # Save.
tokenizer = BertTokenizer.from_pretrained(output_dir_path) # Re-load.
# SOTA examples for GLUE, SQUAD, text generation...
print('{} processed.'.format(model_class.__name__))
def gpt2_example():
# NOTE [info] >> Refer to example codes in the comment of forward() of each BERT class in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py
pretrained_model_name = 'gpt2'
tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model_name)
input_ids = torch.tensor(tokenizer.encode('Hello, my dog is cute', add_special_tokens=True)).unsqueeze(0) # Batch size 1.
if True:
print('Start loading a model...')
start_time = time.time()
# The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.
model = GPT2Model.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple.
print('{} processed.'.format(GPT2Model.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).
model = GPT2LMHeadModel.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids, labels=input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
loss, logits = outputs[:2]
print('{} processed.'.format(GPT2LMHeadModel.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks.
model = GPT2DoubleHeadsModel.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
# Add a [CLS] to the vocabulary (we should train it also!).
tokenizer.add_special_tokens({'cls_token': '[CLS]'})
model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size.
print(tokenizer.cls_token_id, len(tokenizer)) # The newly added token is the last token of the vocabulary.
choices = ['Hello, my dog is cute [CLS]', 'Hello, my cat is cute [CLS]']
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
input_ids0 = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2.
mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1.
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids0, mc_token_ids=mc_token_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
lm_prediction_scores, mc_prediction_scores = outputs[:2]
print('{} processed.'.format(GPT2DoubleHeadsModel.__name__))
# REF [site] >> https://medium.com/analytics-vidhya/a-comprehensive-guide-to-build-your-own-language-model-in-python-5141b3917d6d
def sentence_completion_model_using_gpt2_example():
# Load pre-trained model tokenizer (vocabulary).
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Encode a text inputs.
text = 'What is the fastest car in the'
indexed_tokens = tokenizer.encode(text)
# Convert indexed tokens in a PyTorch tensor.
tokens_tensor = torch.tensor([indexed_tokens])
# Load pre-trained model (weights).
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Set the model in evaluation mode to deactivate the DropOut modules.
model.eval()
# If you have a GPU, put everything on cuda.
tokens_tensor = tokens_tensor.to('cuda')
model.to('cuda')
# Predict all tokens.
with torch.no_grad():
outputs = model(tokens_tensor)
predictions = outputs[0]
# Get the predicted next sub-word.
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
# Print the predicted word.
print('Predicted text = {}.'.format(predicted_text))
# REF [site] >>
# https://github.com/huggingface/transformers/blob/master/examples/run_generation.py
# python pytorch-transformers/examples/run_generation.py --model_type=gpt2 --length=100 --model_name_or_path=gpt2
# https://medium.com/analytics-vidhya/a-comprehensive-guide-to-build-your-own-language-model-in-python-5141b3917d6d
def conditional_text_generation_using_gpt2_example():
raise NotImplementedError
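# Minimal sketch (added; not the referenced run_generation.py script and not called
# from main()): conditional text generation with GPT-2 by feeding a prompt to generate().
def conditional_text_generation_using_gpt2_sketch(prompt='The fastest car in the world is'):
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    model.eval()
    input_ids = torch.tensor([tokenizer.encode(prompt)])
    with torch.no_grad():
        generated = model.generate(input_ids, max_length=50, do_sample=True, top_k=50, top_p=0.95, pad_token_id=tokenizer.eos_token_id)
    print('Generated text = {}.'.format(tokenizer.decode(generated[0], skip_special_tokens=True)))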
def bert_example():
# NOTE [info] >> Refer to example codes in the comment of forward() of each BERT class in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py
pretrained_model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name)
input_ids = torch.tensor(tokenizer.encode('Hello, my dog is cute', add_special_tokens=True)).unsqueeze(0) # Batch size 1.
if True:
print('Start loading a model...')
start_time = time.time()
# The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
model = BertModel.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple.
print('{} processed.'.format(BertModel.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with two heads on top as done during the pre-training: a 'masked language modeling' head and a 'next sentence prediction (classification)' head.
model = BertForPreTraining.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
prediction_scores, seq_relationship_scores = outputs[:2]
print('{} processed.'.format(BertForPreTraining.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with a 'language modeling' head on top.
model = BertForMaskedLM.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids, masked_lm_labels=input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
loss, prediction_scores = outputs[:2]
print('{} processed.'.format(BertForMaskedLM.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with a 'next sentence prediction (classification)' head on top.
model = BertForNextSentencePrediction.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids)
print('End inferring: {} secs.'.format(time.time() - start_time))
seq_relationship_scores = outputs[0]
print('{} processed.'.format(BertForNextSentencePrediction.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
model = BertForSequenceClassification.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1.
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids, labels=labels)
print('End inferring: {} secs.'.format(time.time() - start_time))
loss, logits = outputs[:2]
print('{} processed.'.format(BertForSequenceClassification.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
model = BertForMultipleChoice.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
choices = ['Hello, my dog is cute', 'Hello, my cat is amazing']
input_ids0 = torch.tensor([tokenizer.encode(s, add_special_tokens=True) for s in choices]).unsqueeze(0) # Batch size 1, 2 choices.
labels = torch.tensor(1).unsqueeze(0) # Batch size 1.
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids0, labels=labels)
print('End inferring: {} secs.'.format(time.time() - start_time))
loss, classification_scores = outputs[:2]
print('{} processed.'.format(BertForMultipleChoice.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
model = BertForTokenClassification.from_pretrained(pretrained_model_name)
print('End loading a model: {} secs.'.format(time.time() - start_time))
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1.
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
outputs = model(input_ids, labels=labels)
print('End inferring: {} secs.'.format(time.time() - start_time))
loss, scores = outputs[:2]
print('{} processed.'.format(BertForTokenClassification.__name__))
if True:
print('Start loading a model...')
start_time = time.time()
# Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute 'span start logits' and 'span end logits').
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
print('End loading a model: {} secs.'.format(time.time() - start_time))
question, text = 'Who was Jim Henson?', 'Jim Henson was a nice puppet'
encoding = tokenizer.encode_plus(question, text)
input_ids0, token_type_ids = encoding['input_ids'], encoding['token_type_ids']
print('Start inferring...')
start_time = time.time()
model.eval()
with torch.no_grad():
start_scores, end_scores = model(torch.tensor([input_ids0]), token_type_ids=torch.tensor([token_type_ids]))
print('End inferring: {} secs.'.format(time.time() - start_time))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids0)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
assert answer == 'a nice puppet'
print('{} processed.'.format(BertForQuestionAnswering.__name__))
# REF [site] >> https://www.analyticsvidhya.com/blog/2019/07/pytorch-transformers-nlp-python/?utm_source=blog&utm_medium=openai-gpt2-text-generator-python
def masked_language_modeling_for_bert_example():
# Load pre-trained model tokenizer (vocabulary).
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Tokenize input.
text = '[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]'
tokenized_text = tokenizer.tokenize(text)
# Mask a token that we will try to predict back with 'BertForMaskedLM'.
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']
# Convert token to vocabulary indices.
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated to 1st and 2nd sentences (see paper).
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Convert inputs to PyTorch tensors.
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# Load pre-trained model (weights).
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()
# If you have a GPU, put everything on cuda.
tokens_tensor = tokens_tensor.to('cuda')
segments_tensors = segments_tensors.to('cuda')
model.to('cuda')
# Predict all tokens.
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
# Confirm we were able to predict 'henson'.
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
assert predicted_token == 'henson'
print('Predicted token is: {}.'.format(predicted_token))
class MyBertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config, pretrained_model_name):
super(MyBertForSequenceClassification, self).__init__(config)
#self.bert = BertModel(config)
self.bert = BertModel.from_pretrained(pretrained_model_name, config=config)
self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)
self.classifier = torch.nn.Linear(config.hidden_size, config.num_labels)
# TODO [check] >> Are weights initialized?
#self.init_weights()
def forward(self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None):
_, pooled_output = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
return logits
def sequence_classification_using_bert():
# REF [site] >> https://huggingface.co/transformers/model_doc/bert.html
pretrained_model_name = 'bert-base-multilingual-cased'
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name)
print('tokenizer.vocab_size = {}.'.format(tokenizer.vocab_size))
#print('tokenizer.get_vocab():\n{}.'.format(tokenizer.get_vocab()))
if True:
model = BertForSequenceClassification.from_pretrained(pretrained_model_name)
elif False:
model = MyBertForSequenceClassification.from_pretrained(pretrained_model_name, pretrained_model_name=pretrained_model_name) # Not good.
else:
#config = BertConfig(num_labels=10, output_attentions=False, output_hidden_states=False)
#config = BertConfig.from_pretrained(pretrained_model_name, num_labels=10, output_attentions=False, output_hidden_states=False)
config = BertConfig.from_pretrained(pretrained_model_name, output_attentions=False, output_hidden_states=False)
#model = MyBertForSequenceClassification.from_pretrained(pretrained_model_name, config=config, pretrained_model_name=pretrained_model_name) # Not good.
model = MyBertForSequenceClassification(config, pretrained_model_name=pretrained_model_name)
#--------------------
# Train a model.
#--------------------
# Test a model.
input_ids = [
tokenizer.encode('Hello, my dog is so cute.', add_special_tokens=True),
tokenizer.encode('Hi, my cat is cute', add_special_tokens=True),
tokenizer.encode('Hi, my pig is so small...', add_special_tokens=True),
]
max_input_len = len(max(input_ids, key=len))
print('Max. input len = {}.'.format(max_input_len))
def convert(x):
y = [x[-1]] * max_input_len # TODO [check] >> x[-1] is correct?
y[:len(x)] = x
return y
input_ids = list(map(convert, input_ids))
input_ids = torch.tensor(input_ids)
model.eval()
with torch.no_grad():
model_outputs = model(input_ids) # Batch size x #labels.
print('Model output losses = {}.'.format(model_outputs.loss))
print('Model output logits = {}.'.format(model_outputs.logits))
def korean_bert_example():
if False:
pretrained_model_name = 'bert-base-multilingual-uncased'
#pretrained_model_name = 'bert-base-multilingual-cased' # Not correctly working.
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name)
else:
# REF [site] >> https://github.com/monologg/KoBERT-Transformers
from tokenization_kobert import KoBertTokenizer
# REF [site] >> https://huggingface.co/monologg
pretrained_model_name = 'monologg/kobert'
#pretrained_model_name = 'monologg/distilkobert'
tokenizer = KoBertTokenizer.from_pretrained(pretrained_model_name)
tokens = tokenizer.tokenize('์ํด๋จ์ต๋๋ค')
token_ids = tokenizer.convert_tokens_to_ids(tokens)
print('Tokens = {}.'.format(tokens))
#print('Token IDs = {}.'.format(token_ids))
model = BertForSequenceClassification.from_pretrained(pretrained_model_name)
#--------------------
input_ids = [
tokenizer.encode('내 개는 무척 귀여워.', add_special_tokens=True),
tokenizer.encode('내 고양이는 귀여워.', add_special_tokens=True),
tokenizer.encode('내 돼지는 너무 작아요.', add_special_tokens=True),
]
max_input_len = len(max(input_ids, key=len))
print('Max. input len = {}.'.format(max_input_len))
def convert(x):
y = [x[-1]] * max_input_len # TODO [check] >> x[-1] is correct?
y[:len(x)] = x
return y
input_ids = list(map(convert, input_ids))
input_ids = torch.tensor(input_ids)
model.eval()
with torch.no_grad():
model_outputs = model(input_ids) # Batch size x #labels.
print('Model output losses = {}.'.format(model_outputs.loss))
print('Model output logits = {}.'.format(model_outputs.logits))
# REF [site] >> https://huggingface.co/transformers/model_doc/encoderdecoder.html
def encoder_decoder_example():
from transformers import EncoderDecoderConfig, EncoderDecoderModel
from transformers import BertConfig, GPT2Config
pretrained_model_name = 'bert-base-uncased'
#pretrained_model_name = 'gpt2'
if 'bert' in pretrained_model_name:
# Initialize a BERT bert-base-uncased style configuration.
config_encoder, config_decoder = BertConfig(), BertConfig()
elif 'gpt2' in pretrained_model_name:
config_encoder, config_decoder = GPT2Config(), GPT2Config()
else:
print('Invalid model, {}.'.format(pretrained_model_name))
return
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
if 'bert' in pretrained_model_name:
# Initialize a Bert2Bert model from the bert-base-uncased style configurations.
model = EncoderDecoderModel(config=config)
#model = EncoderDecoderModel.from_encoder_decoder_pretrained(pretrained_model_name, pretrained_model_name) # Initialize Bert2Bert from pre-trained checkpoints.
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name)
elif 'gpt2' in pretrained_model_name:
model = EncoderDecoderModel(config=config)
tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model_name)
#print('Configuration of the encoder & decoder:\n{}.\n{}.'.format(model.config.encoder, model.config.decoder))
#print('Encoder type = {}, decoder type = {}.'.format(type(model.encoder), type(model.decoder)))
if False:
# Access the model configuration.
config_encoder = model.config.encoder
config_decoder = model.config.decoder
# Set decoder config to causal LM.
config_decoder.is_decoder = True
config_decoder.add_cross_attention = True
#--------------------
input_ids = torch.tensor(tokenizer.encode('Hello, my dog is cute', add_special_tokens=True)).unsqueeze(0) # Batch size 1.
if False:
# Forward.
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
# Train.
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
loss, logits = outputs.loss, outputs.logits
# Save the model, including its configuration.
model.save_pretrained('my-model')
#--------------------
# Load model and config from pretrained folder.
encoder_decoder_config = EncoderDecoderConfig.from_pretrained('my-model')
model = EncoderDecoderModel.from_pretrained('my-model', config=encoder_decoder_config)
#--------------------
# Generate.
# REF [site] >>
# https://huggingface.co/transformers/internal/generation_utils.html
# https://huggingface.co/blog/how-to-generate
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
#generated = model.generate(input_ids, max_length=50, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, do_sample=True, top_k=0, temperature=0.7, early_stopping=True, decoder_start_token_id=model.config.decoder.pad_token_id)
print('Generated = {}.'.format(tokenizer.decode(generated[0], skip_special_tokens=True)))
# REF [site] >> https://huggingface.co/transformers/main_classes/pipelines.html
def pipeline_example():
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
# Tasks: 'feature-extraction', 'text-classification', 'sentiment-analysis', 'token-classification', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_xx_to_yy', 'text2text-generation', 'text-generation', 'zero-shot-classification', 'conversational', 'table-question-answering'.
# Sentiment analysis pipeline.
sa_pipeline = pipeline('sentiment-analysis')
# Question answering pipeline, specifying the checkpoint identifier.
qa_pipeline = pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased')
# Named entity recognition pipeline, passing in a specific model and tokenizer.
# REF [site] >> https://huggingface.co/dbmdz
model = AutoModelForTokenClassification.from_pretrained('dbmdz/bert-large-cased-finetuned-conll03-english')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
ner_pipeline = pipeline('ner', model=model, tokenizer=tokenizer)
#--------------------
if False:
"""
conversation = Conversation('Going to the movies tonight - any suggestions?')
# Steps usually performed by the model when generating a response:
# 1. Mark the user input as processed (moved to the history)
conversation.mark_processed()
# 2. Append a mode response
conversation.append_response('The Big lebowski.')
conversation.add_user_input('Is it good?')
"""
conversational_pipeline = pipeline('conversational')
conversation_1 = Conversation('Going to the movies tonight - any suggestions?')
conversation_2 = Conversation("What's the last book you have read?")
responses = conversational_pipeline([conversation_1, conversation_2])
print('Responses:\n{}.'.format(responses))
conversation_1.add_user_input('Is it an action movie?')
conversation_2.add_user_input('What is the genre of this book?')
responses = conversational_pipeline([conversation_1, conversation_2])
print('Responses:\n{}.'.format(responses))
#--------------------
if False:
if True:
# Use BART in PyTorch.
summarizer = pipeline('summarization')
else:
# Use T5 in TensorFlow.
summarizer = pipeline('summarization', model='t5-base', tokenizer='t5-base', framework='tf')
summary = summarizer('An apple a day, keeps the doctor away', min_length=5, max_length=20)
print('Summary: {}.'.format(summary))
#--------------------
# REF [site] >> https://huggingface.co/transformers/model_doc/tapas.html
if False:
import pandas as pd
data_dict = {
'actors': ['brad pitt', 'leonardo di caprio', 'george clooney'],
'age': ['56', '45', '59'],
'number of movies': ['87', '53', '69'],
'date of birth': ['7 february 1967', '10 june 1996', '28 november 1967'],
}
data_df = pd.DataFrame.from_dict(data_dict)
if False:
# Show the data frame.
from IPython.display import display, HTML
display(data_df)
#print(HTML(data_df.to_html()).data)
query = 'How old is Brad Pitt?'
#query = 'What is the age of Brad Pitt?'
#query = 'How much is Brad PItt?' # Incorrect question.
table_pipeline = pipeline('table-question-answering')
answer = table_pipeline(data_dict, query)
#answer = table_pipeline(data_df, query)
print('Answer: {}.'.format(answer))
#--------------------
if False:
text2text_generator = pipeline('text2text-generation')
generated = text2text_generator('question: What is 42 ? context: 42 is the answer to life, the universe and everything')
print('Generated text: {}.'.format(generated))
def question_answering_example():
from transformers import pipeline
# Open and read the article.
question = 'What is the capital of the Netherlands?'
context = r"The four largest cities in the Netherlands are Amsterdam, Rotterdam, The Hague and Utrecht.[17] Amsterdam is the country's most populous city and nominal capital,[18] while The Hague holds the seat of the States General, Cabinet and Supreme Court.[19] The Port of Rotterdam is the busiest seaport in Europe, and the busiest in any country outside East Asia and Southeast Asia, behind only China and Singapore."
# Generating an answer to the question in context.
qa = pipeline(task='question-answering')
answer = qa(question=question, context=context)
# Print the answer.
print(f'Question: {question}.')
print(f"Answer: '{answer['answer']}' with score {answer['score']}.")
# REF [site] >> https://huggingface.co/krevas/finance-koelectra-small-generator
def korean_fill_mask_example():
from transformers import pipeline
# REF [site] >> https://huggingface.co/krevas
fill_mask = pipeline(
'fill-mask',
model='krevas/finance-koelectra-small-generator',
tokenizer='krevas/finance-koelectra-small-generator'
)
filled = fill_mask(f'내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다.')
print(f'Filled mask: {filled}.')
def korean_table_question_answering_example():
from transformers import pipeline
from transformers import TapasConfig, TapasForQuestionAnswering, TapasTokenizer
import pandas as pd
# REF [site] >> https://github.com/monologg/KoBERT-Transformers
from tokenization_kobert import KoBertTokenizer
data_dict = {
'배우': ['송강호', '최민식', '설경구'],
'나이': ['54', '58', '53'],
'출연영화수': ['38', '32', '42'],
'생년월일': ['1967/02/25', '1962/05/30', '1967/05/14'],
}
data_df = pd.DataFrame.from_dict(data_dict)
if False:
# Show the data frame.
from IPython.display import display, HTML
display(data_df)
#print(HTML(data_df.to_html()).data)
query = '최민식씨의 나이는?'
# REF [site] >> https://huggingface.co/monologg
pretrained_model_name = 'monologg/kobert'
#pretrained_model_name = 'monologg/distilkobert'
if False:
# Not working.
table_pipeline = pipeline(
'table-question-answering',
model=pretrained_model_name,
tokenizer=KoBertTokenizer.from_pretrained(pretrained_model_name)
)
elif False:
# Not working.
#config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True, select_one_column=False)
#model = TapasForQuestionAnswering.from_pretrained(pretrained_model_name, config=config)
model = TapasForQuestionAnswering.from_pretrained(pretrained_model_name)
table_pipeline = pipeline(
'table-question-answering',
model=model,
tokenizer=KoBertTokenizer.from_pretrained(pretrained_model_name)
)
else:
# Not correctly working.
model = TapasForQuestionAnswering.from_pretrained(pretrained_model_name)
table_pipeline = pipeline(
'table-question-answering',
model=model,
tokenizer=TapasTokenizer.from_pretrained(pretrained_model_name)
)
answer = table_pipeline(data_dict, query)
#answer = table_pipeline(data_df, query)
print('Answer: {}.'.format(answer))
def main():
#quick_tour()
#--------------------
# GPT-2.
#gpt2_example()
#sentence_completion_model_using_gpt2_example()
#conditional_text_generation_using_gpt2_example() # Not yet implemented.
#--------------------
# BERT.
#bert_example()
#masked_language_modeling_for_bert_example()
#sequence_classification_using_bert()
#korean_bert_example()
#--------------------
#encoder_decoder_example()
#--------------------
# Pipeline.
pipeline_example()
#question_answering_example()
#korean_fill_mask_example()
#korean_table_question_answering_example() # Not correctly working.
#--------------------------------------------------------------------
if '__main__' == __name__:
main()
| gpl-2.0 |
xzturn/tensorflow | tensorflow/python/data/experimental/ops/distribute.py | 1 | 6810 | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Distribution Strategy-related dataset transformations."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.compat import compat
from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.util import nest
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops
class _AutoShardDataset(dataset_ops.UnaryDataset):
"""A `Dataset` that shards the `Dataset` automatically.
This dataset takes in an existing dataset and tries to automatically figure
out how to shard the dataset in a multi-worker scenario. Currently, it uses
Grappler to walk up the dataset graph until it finds a reader dataset (e.g.
CSVDataset, TFRecordDataset), then inserts a ShardDataset op before that node
so that each worker only sees some files.
Args:
num_workers: Total number of workers to shard this dataset across.
index: The current worker index (out of the total number of workers) this
dataset is for.
Raises:
NotFoundError: If we cannot find a suitable reader dataset to begin
automatically sharding the dataset.
"""
def __init__(self, input_dataset, num_workers, index):
self._input_dataset = input_dataset
self._element_spec = input_dataset.element_spec
if (compat.forward_compatible(2019, 11, 25) or
(input_dataset.options().experimental_distribute.auto_shard_policy !=
AutoShardPolicy.AUTO)):
variant_tensor = ged_ops.auto_shard_dataset(
self._input_dataset._variant_tensor, # pylint: disable=protected-access
num_workers=num_workers,
index=index,
auto_shard_policy=int(input_dataset.options().experimental_distribute
.auto_shard_policy),
**self._flat_structure)
else:
variant_tensor = ged_ops.auto_shard_dataset(
self._input_dataset._variant_tensor, # pylint: disable=protected-access
num_workers=num_workers,
index=index,
**self._flat_structure)
super(_AutoShardDataset, self).__init__(input_dataset, variant_tensor)
@property
def element_spec(self):
return self._element_spec
def _AutoShardDatasetV1(input_dataset, num_workers, index): # pylint: disable=invalid-name
return dataset_ops.DatasetV1Adapter(
_AutoShardDataset(input_dataset, num_workers, index))
class _RebatchDataset(dataset_ops.UnaryDataset):
"""A `Dataset` that divides the batch size by `num_replicas`.
For each batch in the input dataset, the resulting dataset will produce
`num_replicas` minibatches whose sizes add up to the original batch size.
"""
def __init__(self, input_dataset, num_replicas, use_fallback=True):
def recalculate_batch_size(output_shape):
"""Recalculates the output_shape after dividing it by num_replicas."""
# If the output shape is unknown, we set the batch dimension to unknown.
if output_shape.rank is None:
return None
if len(output_shape) < 1:
raise ValueError("Expected a dataset whose elements have rank >= 1 "
"but found a dataset whose elements are scalars. "
"You can fix the issue by adding the `batch` "
"transformation to the dataset.")
output_dims = [d.value for d in output_shape.dims]
if output_dims[0] is not None and output_dims[0] % num_replicas == 0:
return output_dims[0] // num_replicas
      # Set the batch dimension to unknown. If the global batch size is not
      # evenly divisible by num_replicas, the minibatches may have different sizes.
return None
def rebatch(type_spec):
# pylint: disable=protected-access
batch_size = recalculate_batch_size(type_spec._to_legacy_output_shapes())
return type_spec._unbatch()._batch(batch_size)
# pylint: enable=protected-access
self._element_spec = nest.map_structure(
rebatch, dataset_ops.get_structure(input_dataset))
input_dataset = dataset_ops.normalize_to_dense(input_dataset)
variant_tensor = ged_ops.rebatch_dataset(
input_dataset._variant_tensor, # pylint: disable=protected-access
num_replicas=num_replicas,
**self._flat_structure)
super(_RebatchDataset, self).__init__(input_dataset, variant_tensor)
@property
def element_spec(self):
return self._element_spec
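# Worked illustration (assumed numbers): _RebatchDataset keeps the per-step
# global batch size constant, e.g.
#
#   global_batch, num_replicas = 64, 8
#   per_replica_batch = global_batch // num_replicas   # == 8
#
# If global_batch were not evenly divisible by num_replicas,
# recalculate_batch_size above returns None and the static batch dimension is
# left unknown so that unevenly sized minibatches remain valid.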
class _RemoteDataset(dataset_ops.DatasetSource):
"""Creates a dataset on a given `device` given a graph def."""
def __init__(self, graph_def, device, element_spec):
self._elem_spec = element_spec
with ops.device(device):
variant_tensor = ged_ops.dataset_from_graph(graph_def)
super(_RemoteDataset, self).__init__(variant_tensor)
@property
def element_spec(self):
return self._elem_spec
def replicate(dataset, devices):
"""A transformation that replicates `dataset` onto a list of devices.
Args:
dataset: A `tf.data.Dataset` object.
devices: A list of devices to replicate the dataset on.
Returns:
A dictionary mapping device name to a dataset on that device.
"""
if not isinstance(dataset, dataset_ops.DatasetV2):
raise TypeError("`dataset` must be a `tf.data.Dataset` object.")
# pylint: disable=protected-access
dataset_device = dataset._variant_tensor.device
datasets = {}
if len(devices) == 1 and devices[0] == dataset_device:
datasets[devices[0]] = dataset
return datasets
with ops.colocate_with(dataset._variant_tensor):
dataset = dataset._apply_options()
external_state_policy = dataset.options().experimental_external_state_policy
graph_def = dataset._as_serialized_graph(
strip_device_assignment=True,
external_state_policy=external_state_policy)
for device in devices:
ds = _RemoteDataset(graph_def, device, dataset.element_spec)
datasets[device] = ds
return datasets
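# Illustrative usage sketch (hypothetical; the device strings are assumptions
# and must name devices available in the current runtime).
def _example_replicate_usage():
  import tensorflow as tf  # local import, used only by this sketch
  ds = tf.data.Dataset.range(8).batch(2)
  per_device = replicate(ds, ["/device:CPU:0", "/device:GPU:0"])
  # Each entry maps a device string to a dataset rebuilt on that device from
  # the serialized graph of `ds`.
  return per_device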
_AutoShardDatasetV1.__doc__ = _AutoShardDataset.__doc__
| apache-2.0 |
sangwook236/general-development-and-testing | sw_dev/python/rnd/test/machine_learning/keras/keras_siamese_example.py | 2 | 4298 | #!/usr/bin/env python
# coding: UTF-8
from __future__ import absolute_import
from __future__ import print_function
import random
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, Lambda
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras import backend as K
def euclidean_distance(vects):
x, y = vects
sum_square = K.sum(K.square(x - y), axis=1, keepdims=True)
return K.sqrt(K.maximum(sum_square, K.epsilon()))
def eucl_dist_output_shape(shapes):
shape1, shape2 = shapes
return (shape1[0], 1)
def contrastive_loss(y_true, y_pred):
'''Contrastive loss from Hadsell-et-al.'06
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
'''
margin = 1
square_pred = K.square(y_pred)
margin_square = K.square(K.maximum(margin - y_pred, 0))
return K.mean(y_true * square_pred + (1 - y_true) * margin_square)
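# Illustrative NumPy mirror of the loss above (for quick sanity checks only;
# it is not used by the training code below).
def contrastive_loss_np(y_true, dist, margin=1.0):
    '''NumPy version of contrastive_loss: y_true=1 pulls a pair together,
    y_true=0 pushes it at least `margin` apart.
    '''
    y_true = np.asarray(y_true, dtype=float)
    dist = np.asarray(dist, dtype=float)
    return np.mean(y_true * np.square(dist) +
                   (1 - y_true) * np.square(np.maximum(margin - dist, 0.0)))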
def create_pairs(x, digit_indices, num_classes):
'''Positive and negative pair creation.
Alternates between positive and negative pairs.
'''
pairs = []
labels = []
n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1
for d in range(num_classes):
for i in range(n):
z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
pairs += [[x[z1], x[z2]]]
inc = random.randrange(1, num_classes)
dn = (d + inc) % num_classes
z1, z2 = digit_indices[d][i], digit_indices[dn][i]
pairs += [[x[z1], x[z2]]]
labels += [1, 0]
return np.array(pairs), np.array(labels)
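# Shape note: for MNIST with num_classes=10 and n pairs per class, create_pairs
# returns pairs of shape (2 * 10 * n, 2, 28, 28) and labels of shape
# (2 * 10 * n,), alternating positive (1) and negative (0) pairs.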
def create_base_network(input_shape):
    '''Base network to be shared (eq. to feature extraction).
    '''
    inputs = Input(shape=input_shape)  # renamed from `input` to avoid shadowing the builtin
    x = Flatten()(inputs)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.1)(x)
    x = Dense(128, activation='relu')(x)
    return Model(inputs, x)
def compute_accuracy(y_true, y_pred):
'''Compute classification accuracy with a fixed threshold on distances.
'''
pred = y_pred.ravel() < 0.5
return np.mean(pred == y_true)
def accuracy(y_true, y_pred):
'''Compute classification accuracy with a fixed threshold on distances.
'''
return K.mean(K.equal(y_true, K.cast(y_pred < 0.5, y_true.dtype)))
# REF [site] >> ${KERAS_HOME}/examples/mnist_siamese.py
# REF [paper] >> "Dimensionality Reduction by Learning an Invariant Mapping", CVPR 2006.
def siamese_mnist_example():
num_classes = 10
epochs = 20
# The data, split between train and test sets.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
input_shape = x_train.shape[1:]
# Create training+test positive and negative pairs.
digit_indices = [np.where(y_train == i)[0] for i in range(num_classes)]
tr_pairs, tr_y = create_pairs(x_train, digit_indices, num_classes)
digit_indices = [np.where(y_test == i)[0] for i in range(num_classes)]
te_pairs, te_y = create_pairs(x_test, digit_indices, num_classes)
# Network definition.
base_network = create_base_network(input_shape)
input_a = Input(shape=input_shape)
input_b = Input(shape=input_shape)
# Because we re-use the same instance 'base_network', the weights of the network will be shared across the two branches.
processed_a = base_network(input_a)
processed_b = base_network(input_b)
distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])
model = Model([input_a, input_b], distance)
# Train.
rms = RMSprop()
model.compile(loss=contrastive_loss, optimizer=rms, metrics=[accuracy])
model.fit([tr_pairs[:, 0], tr_pairs[:, 1]], tr_y,
batch_size=128,
epochs=epochs,
validation_data=([te_pairs[:, 0], te_pairs[:, 1]], te_y))
# Compute final accuracy on training and test sets.
y_pred = model.predict([tr_pairs[:, 0], tr_pairs[:, 1]])
tr_acc = compute_accuracy(tr_y, y_pred)
y_pred = model.predict([te_pairs[:, 0], te_pairs[:, 1]])
te_acc = compute_accuracy(te_y, y_pred)
print('* Accuracy on training set: %0.2f%%' % (100 * tr_acc))
print('* Accuracy on test set: %0.2f%%' % (100 * te_acc))
def main():
siamese_mnist_example()
#--------------------------------------------------------------------
if '__main__' == __name__:
main()
| gpl-2.0 |
tejaram15/Event-Driven-Framework | Model.py | 1 | 1114 | import numpy as np
import pandas as pd
from separation import separation
from sklearn import svm
[complete,january,february,march,april,may,june,july,august,september,october,november,december] = separation()
x_train = []
x_test = []
y_train = []
y_test = []
## January: split the records into a training part and a 20-record test part.
for i in range(1, len(january) - 20):
    data = january[i]
    x_train.append(data[0].replace('-', ''))  # strip '-' so the date can be cast to a float
    y_train.append(data[1])
for i in range(len(january) - 20, len(january)):
    data = january[i]
    x_test.append(data[0].replace('-', ''))
    y_test.append(data[1])
x = np.asarray(x_train).astype(float)  # note: the np.float alias is removed in newer NumPy
y = np.asarray(y_train).astype(float)
# xt = np.reshape(x_test,(-1,1)).astype(np.float)
# yt = np.reshape(y_test,(-1,1)).astype(np.float)
#clf = svm.SVR(kernel='rbf', C=1, gamma=0.1)
#clf.fit(x,y)
#pred = clf.predict(xt)
#print(clf.score(xt,yt))
#print(clf.score(x,y))
import matplotlib.pyplot as plt
plt.scatter(x,y,color='orange',label='data')
lw = 2
## plt.plot(x, clf.predict(x), color='navy', lw=lw, label='RBF model')
## plt.plot(x,clf.predict(x),color='red',linewidth=2)
plt.xlabel("date")
plt.ylabel("principal")
plt.legend()
plt.show()
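## Illustrative sketch of enabling the SVR fit commented out above. sklearn
## expects a 2-D feature matrix, hence the reshape; C=1 and gamma=0.1 are the
## values already suggested in the comments, not tuned choices.
#xt = np.asarray(x_test).astype(float).reshape(-1, 1)
#yt = np.asarray(y_test).astype(float)
#clf = svm.SVR(kernel='rbf', C=1, gamma=0.1)
#clf.fit(x.reshape(-1, 1), y)
#print(clf.score(xt, yt))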
| mit |
darionyaphet/spark | python/pyspark/ml/recommendation.py | 9 | 22266 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
from pyspark import since, keyword_only
from pyspark.ml.util import *
from pyspark.ml.wrapper import JavaEstimator, JavaModel
from pyspark.ml.param.shared import *
from pyspark.ml.common import inherit_doc
__all__ = ['ALS', 'ALSModel']
@inherit_doc
class _ALSModelParams(HasPredictionCol, HasBlockSize):
"""
Params for :py:class:`ALS` and :py:class:`ALSModel`.
.. versionadded:: 3.0.0
"""
userCol = Param(Params._dummy(), "userCol", "column name for user ids. Ids must be within " +
"the integer value range.", typeConverter=TypeConverters.toString)
itemCol = Param(Params._dummy(), "itemCol", "column name for item ids. Ids must be within " +
"the integer value range.", typeConverter=TypeConverters.toString)
coldStartStrategy = Param(Params._dummy(), "coldStartStrategy", "strategy for dealing with " +
"unknown or new users/items at prediction time. This may be useful " +
"in cross-validation or production scenarios, for handling " +
"user/item ids the model has not seen in the training data. " +
"Supported values: 'nan', 'drop'.",
typeConverter=TypeConverters.toString)
@since("1.4.0")
def getUserCol(self):
"""
Gets the value of userCol or its default value.
"""
return self.getOrDefault(self.userCol)
@since("1.4.0")
def getItemCol(self):
"""
Gets the value of itemCol or its default value.
"""
return self.getOrDefault(self.itemCol)
@since("2.2.0")
def getColdStartStrategy(self):
"""
Gets the value of coldStartStrategy or its default value.
"""
return self.getOrDefault(self.coldStartStrategy)
@inherit_doc
class _ALSParams(_ALSModelParams, HasMaxIter, HasRegParam, HasCheckpointInterval, HasSeed):
"""
Params for :py:class:`ALS`.
.. versionadded:: 3.0.0
"""
rank = Param(Params._dummy(), "rank", "rank of the factorization",
typeConverter=TypeConverters.toInt)
numUserBlocks = Param(Params._dummy(), "numUserBlocks", "number of user blocks",
typeConverter=TypeConverters.toInt)
numItemBlocks = Param(Params._dummy(), "numItemBlocks", "number of item blocks",
typeConverter=TypeConverters.toInt)
implicitPrefs = Param(Params._dummy(), "implicitPrefs", "whether to use implicit preference",
typeConverter=TypeConverters.toBoolean)
alpha = Param(Params._dummy(), "alpha", "alpha for implicit preference",
typeConverter=TypeConverters.toFloat)
ratingCol = Param(Params._dummy(), "ratingCol", "column name for ratings",
typeConverter=TypeConverters.toString)
nonnegative = Param(Params._dummy(), "nonnegative",
"whether to use nonnegative constraint for least squares",
typeConverter=TypeConverters.toBoolean)
intermediateStorageLevel = Param(Params._dummy(), "intermediateStorageLevel",
"StorageLevel for intermediate datasets. Cannot be 'NONE'.",
typeConverter=TypeConverters.toString)
finalStorageLevel = Param(Params._dummy(), "finalStorageLevel",
"StorageLevel for ALS model factors.",
typeConverter=TypeConverters.toString)
@since("1.4.0")
def getRank(self):
"""
Gets the value of rank or its default value.
"""
return self.getOrDefault(self.rank)
@since("1.4.0")
def getNumUserBlocks(self):
"""
Gets the value of numUserBlocks or its default value.
"""
return self.getOrDefault(self.numUserBlocks)
@since("1.4.0")
def getNumItemBlocks(self):
"""
Gets the value of numItemBlocks or its default value.
"""
return self.getOrDefault(self.numItemBlocks)
@since("1.4.0")
def getImplicitPrefs(self):
"""
Gets the value of implicitPrefs or its default value.
"""
return self.getOrDefault(self.implicitPrefs)
@since("1.4.0")
def getAlpha(self):
"""
Gets the value of alpha or its default value.
"""
return self.getOrDefault(self.alpha)
@since("1.4.0")
def getRatingCol(self):
"""
Gets the value of ratingCol or its default value.
"""
return self.getOrDefault(self.ratingCol)
@since("1.4.0")
def getNonnegative(self):
"""
Gets the value of nonnegative or its default value.
"""
return self.getOrDefault(self.nonnegative)
@since("2.0.0")
def getIntermediateStorageLevel(self):
"""
Gets the value of intermediateStorageLevel or its default value.
"""
return self.getOrDefault(self.intermediateStorageLevel)
@since("2.0.0")
def getFinalStorageLevel(self):
"""
Gets the value of finalStorageLevel or its default value.
"""
return self.getOrDefault(self.finalStorageLevel)
@inherit_doc
class ALS(JavaEstimator, _ALSParams, JavaMLWritable, JavaMLReadable):
"""
Alternating Least Squares (ALS) matrix factorization.
ALS attempts to estimate the ratings matrix `R` as the product of
two lower-rank matrices, `X` and `Y`, i.e. `X * Yt = R`. Typically
these approximations are called 'factor' matrices. The general
approach is iterative. During each iteration, one of the factor
matrices is held constant, while the other is solved for using least
squares. The newly-solved factor matrix is then held constant while
solving for the other factor matrix.
This is a blocked implementation of the ALS factorization algorithm
that groups the two sets of factors (referred to as "users" and
"products") into blocks and reduces communication by only sending
one copy of each user vector to each product block on each
iteration, and only for the product blocks that need that user's
feature vector. This is achieved by pre-computing some information
about the ratings matrix to determine the "out-links" of each user
(which blocks of products it will contribute to) and "in-link"
information for each product (which of the feature vectors it
receives from each user block it will depend on). This allows us to
send only an array of feature vectors between each user block and
product block, and have the product block find the users' ratings
and update the products based on these messages.
For implicit preference data, the algorithm used is based on
`"Collaborative Filtering for Implicit Feedback Datasets",
<https://doi.org/10.1109/ICDM.2008.22>`_, adapted for the blocked
approach used here.
Essentially instead of finding the low-rank approximations to the
rating matrix `R`, this finds the approximations for a preference
matrix `P` where the elements of `P` are 1 if r > 0 and 0 if r <= 0.
The ratings then act as 'confidence' values related to strength of
indicated user preferences rather than explicit ratings given to
items.
.. note:: the input rating dataframe to the ALS implementation should be deterministic.
Nondeterministic data can cause failure during fitting ALS model.
For example, an order-sensitive operation like sampling after a repartition makes
dataframe output nondeterministic, like `df.repartition(2).sample(False, 0.5, 1618)`.
Checkpointing sampled dataframe or adding a sort before sampling can help make the
dataframe deterministic.
>>> df = spark.createDataFrame(
... [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
... ["user", "item", "rating"])
>>> als = ALS(rank=10, seed=0)
>>> als.setMaxIter(5)
ALS...
>>> als.getMaxIter()
5
>>> als.setRegParam(0.1)
ALS...
>>> als.getRegParam()
0.1
>>> als.clear(als.regParam)
>>> model = als.fit(df)
>>> model.getBlockSize()
4096
>>> model.getUserCol()
'user'
>>> model.setUserCol("user")
ALSModel...
>>> model.getItemCol()
'item'
>>> model.setPredictionCol("newPrediction")
ALS...
>>> model.rank
10
>>> model.userFactors.orderBy("id").collect()
[Row(id=0, features=[...]), Row(id=1, ...), Row(id=2, ...)]
>>> test = spark.createDataFrame([(0, 2), (1, 0), (2, 0)], ["user", "item"])
>>> predictions = sorted(model.transform(test).collect(), key=lambda r: r[0])
>>> predictions[0]
Row(user=0, item=2, newPrediction=0.6929101347923279)
>>> predictions[1]
Row(user=1, item=0, newPrediction=3.47356915473938)
>>> predictions[2]
Row(user=2, item=0, newPrediction=-0.8991986513137817)
>>> user_recs = model.recommendForAllUsers(3)
>>> user_recs.where(user_recs.user == 0)\
.select("recommendations.item", "recommendations.rating").collect()
[Row(item=[0, 1, 2], rating=[3.910..., 1.997..., 0.692...])]
>>> item_recs = model.recommendForAllItems(3)
>>> item_recs.where(item_recs.item == 2)\
.select("recommendations.user", "recommendations.rating").collect()
[Row(user=[2, 1, 0], rating=[4.892..., 3.991..., 0.692...])]
>>> user_subset = df.where(df.user == 2)
>>> user_subset_recs = model.recommendForUserSubset(user_subset, 3)
>>> user_subset_recs.select("recommendations.item", "recommendations.rating").first()
Row(item=[2, 1, 0], rating=[4.892..., 1.076..., -0.899...])
>>> item_subset = df.where(df.item == 0)
>>> item_subset_recs = model.recommendForItemSubset(item_subset, 3)
>>> item_subset_recs.select("recommendations.user", "recommendations.rating").first()
Row(user=[0, 1, 2], rating=[3.910..., 3.473..., -0.899...])
>>> als_path = temp_path + "/als"
>>> als.save(als_path)
>>> als2 = ALS.load(als_path)
>>> als.getMaxIter()
5
>>> model_path = temp_path + "/als_model"
>>> model.save(model_path)
>>> model2 = ALSModel.load(model_path)
>>> model.rank == model2.rank
True
>>> sorted(model.userFactors.collect()) == sorted(model2.userFactors.collect())
True
>>> sorted(model.itemFactors.collect()) == sorted(model2.itemFactors.collect())
True
.. versionadded:: 1.4.0
"""
@keyword_only
def __init__(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None,
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK", coldStartStrategy="nan", blockSize=4096):
"""
__init__(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10, \
                 implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None, \
                 ratingCol="rating", nonnegative=False, checkpointInterval=10, \
intermediateStorageLevel="MEMORY_AND_DISK", \
finalStorageLevel="MEMORY_AND_DISK", coldStartStrategy="nan", blockSize=4096)
"""
super(ALS, self).__init__()
self._java_obj = self._new_java_obj("org.apache.spark.ml.recommendation.ALS", self.uid)
self._setDefault(rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item",
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK", coldStartStrategy="nan",
blockSize=4096)
kwargs = self._input_kwargs
self.setParams(**kwargs)
@keyword_only
@since("1.4.0")
def setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10,
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None,
ratingCol="rating", nonnegative=False, checkpointInterval=10,
intermediateStorageLevel="MEMORY_AND_DISK",
finalStorageLevel="MEMORY_AND_DISK", coldStartStrategy="nan", blockSize=4096):
"""
setParams(self, rank=10, maxIter=10, regParam=0.1, numUserBlocks=10, numItemBlocks=10, \
implicitPrefs=False, alpha=1.0, userCol="user", itemCol="item", seed=None, \
ratingCol="rating", nonnegative=False, checkpointInterval=10, \
intermediateStorageLevel="MEMORY_AND_DISK", \
finalStorageLevel="MEMORY_AND_DISK", coldStartStrategy="nan", blockSize=4096)
Sets params for ALS.
"""
kwargs = self._input_kwargs
return self._set(**kwargs)
def _create_model(self, java_model):
return ALSModel(java_model)
@since("1.4.0")
def setRank(self, value):
"""
Sets the value of :py:attr:`rank`.
"""
return self._set(rank=value)
@since("1.4.0")
def setNumUserBlocks(self, value):
"""
Sets the value of :py:attr:`numUserBlocks`.
"""
return self._set(numUserBlocks=value)
@since("1.4.0")
def setNumItemBlocks(self, value):
"""
Sets the value of :py:attr:`numItemBlocks`.
"""
return self._set(numItemBlocks=value)
@since("1.4.0")
def setNumBlocks(self, value):
"""
Sets both :py:attr:`numUserBlocks` and :py:attr:`numItemBlocks` to the specific value.
"""
self._set(numUserBlocks=value)
return self._set(numItemBlocks=value)
@since("1.4.0")
def setImplicitPrefs(self, value):
"""
Sets the value of :py:attr:`implicitPrefs`.
"""
return self._set(implicitPrefs=value)
@since("1.4.0")
def setAlpha(self, value):
"""
Sets the value of :py:attr:`alpha`.
"""
return self._set(alpha=value)
@since("1.4.0")
def setUserCol(self, value):
"""
Sets the value of :py:attr:`userCol`.
"""
return self._set(userCol=value)
@since("1.4.0")
def setItemCol(self, value):
"""
Sets the value of :py:attr:`itemCol`.
"""
return self._set(itemCol=value)
@since("1.4.0")
def setRatingCol(self, value):
"""
Sets the value of :py:attr:`ratingCol`.
"""
return self._set(ratingCol=value)
@since("1.4.0")
def setNonnegative(self, value):
"""
Sets the value of :py:attr:`nonnegative`.
"""
return self._set(nonnegative=value)
@since("2.0.0")
def setIntermediateStorageLevel(self, value):
"""
Sets the value of :py:attr:`intermediateStorageLevel`.
"""
return self._set(intermediateStorageLevel=value)
@since("2.0.0")
def setFinalStorageLevel(self, value):
"""
Sets the value of :py:attr:`finalStorageLevel`.
"""
return self._set(finalStorageLevel=value)
@since("2.2.0")
def setColdStartStrategy(self, value):
"""
Sets the value of :py:attr:`coldStartStrategy`.
"""
return self._set(coldStartStrategy=value)
def setMaxIter(self, value):
"""
Sets the value of :py:attr:`maxIter`.
"""
return self._set(maxIter=value)
def setRegParam(self, value):
"""
Sets the value of :py:attr:`regParam`.
"""
return self._set(regParam=value)
def setPredictionCol(self, value):
"""
Sets the value of :py:attr:`predictionCol`.
"""
return self._set(predictionCol=value)
def setCheckpointInterval(self, value):
"""
Sets the value of :py:attr:`checkpointInterval`.
"""
return self._set(checkpointInterval=value)
def setSeed(self, value):
"""
Sets the value of :py:attr:`seed`.
"""
return self._set(seed=value)
@since("3.0.0")
def setBlockSize(self, value):
"""
Sets the value of :py:attr:`blockSize`.
"""
return self._set(blockSize=value)
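# Illustrative minimal usage sketch (hypothetical; not part of this module's
# public API). Column names are the documented defaults and the toy ratings
# DataFrame is an assumption.
def _example_als_usage(spark):
    """Fit an ALS model on a tiny ratings DataFrame and score it."""
    ratings = spark.createDataFrame(
        [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0)],
        ["user", "item", "rating"])
    als = ALS(rank=5, maxIter=5, regParam=0.1, coldStartStrategy="drop")
    model = als.fit(ratings)
    return model.transform(ratings)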
class ALSModel(JavaModel, _ALSModelParams, JavaMLWritable, JavaMLReadable):
"""
Model fitted by ALS.
.. versionadded:: 1.4.0
"""
@since("3.0.0")
def setUserCol(self, value):
"""
Sets the value of :py:attr:`userCol`.
"""
return self._set(userCol=value)
@since("3.0.0")
def setItemCol(self, value):
"""
Sets the value of :py:attr:`itemCol`.
"""
return self._set(itemCol=value)
@since("3.0.0")
def setColdStartStrategy(self, value):
"""
Sets the value of :py:attr:`coldStartStrategy`.
"""
return self._set(coldStartStrategy=value)
@since("3.0.0")
def setPredictionCol(self, value):
"""
Sets the value of :py:attr:`predictionCol`.
"""
return self._set(predictionCol=value)
@since("3.0.0")
def setBlockSize(self, value):
"""
Sets the value of :py:attr:`blockSize`.
"""
return self._set(blockSize=value)
@property
@since("1.4.0")
def rank(self):
"""rank of the matrix factorization model"""
return self._call_java("rank")
@property
@since("1.4.0")
def userFactors(self):
"""
a DataFrame that stores user factors in two columns: `id` and
`features`
"""
return self._call_java("userFactors")
@property
@since("1.4.0")
def itemFactors(self):
"""
a DataFrame that stores item factors in two columns: `id` and
`features`
"""
return self._call_java("itemFactors")
@since("2.2.0")
def recommendForAllUsers(self, numItems):
"""
Returns top `numItems` items recommended for each user, for all users.
:param numItems: max number of recommendations for each user
:return: a DataFrame of (userCol, recommendations), where recommendations are
stored as an array of (itemCol, rating) Rows.
"""
return self._call_java("recommendForAllUsers", numItems)
@since("2.2.0")
def recommendForAllItems(self, numUsers):
"""
Returns top `numUsers` users recommended for each item, for all items.
:param numUsers: max number of recommendations for each item
:return: a DataFrame of (itemCol, recommendations), where recommendations are
stored as an array of (userCol, rating) Rows.
"""
return self._call_java("recommendForAllItems", numUsers)
@since("2.3.0")
def recommendForUserSubset(self, dataset, numItems):
"""
Returns top `numItems` items recommended for each user id in the input data set. Note that
if there are duplicate ids in the input dataset, only one set of recommendations per unique
id will be returned.
:param dataset: a Dataset containing a column of user ids. The column name must match
`userCol`.
:param numItems: max number of recommendations for each user
:return: a DataFrame of (userCol, recommendations), where recommendations are
stored as an array of (itemCol, rating) Rows.
"""
return self._call_java("recommendForUserSubset", dataset, numItems)
@since("2.3.0")
def recommendForItemSubset(self, dataset, numUsers):
"""
Returns top `numUsers` users recommended for each item id in the input data set. Note that
if there are duplicate ids in the input dataset, only one set of recommendations per unique
id will be returned.
:param dataset: a Dataset containing a column of item ids. The column name must match
`itemCol`.
:param numUsers: max number of recommendations for each item
:return: a DataFrame of (itemCol, recommendations), where recommendations are
stored as an array of (userCol, rating) Rows.
"""
return self._call_java("recommendForItemSubset", dataset, numUsers)
if __name__ == "__main__":
import doctest
import pyspark.ml.recommendation
from pyspark.sql import SparkSession
globs = pyspark.ml.recommendation.__dict__.copy()
# The small batch size here ensures that we see multiple batches,
# even in these small test examples:
spark = SparkSession.builder\
.master("local[2]")\
.appName("ml.recommendation tests")\
.getOrCreate()
sc = spark.sparkContext
globs['sc'] = sc
globs['spark'] = spark
import tempfile
temp_path = tempfile.mkdtemp()
globs['temp_path'] = temp_path
try:
(failure_count, test_count) = doctest.testmod(globs=globs, optionflags=doctest.ELLIPSIS)
spark.stop()
finally:
from shutil import rmtree
try:
rmtree(temp_path)
except OSError:
pass
if failure_count:
sys.exit(-1)
| apache-2.0 |
h2oai/h2o | py/testdir_single_jvm/test_GLM2_covtype_1.py | 9 | 3810 | import unittest, time, sys, random
sys.path.extend(['.','..','../..','py'])
import h2o, h2o_cmd, h2o_glm, h2o_import as h2i, h2o_jobs, h2o_exec as h2e
DO_POLL = False
class Basic(unittest.TestCase):
def tearDown(self):
h2o.check_sandbox_for_errors()
@classmethod
def setUpClass(cls):
h2o.init(java_heap_GB=4)
@classmethod
def tearDownClass(cls):
h2o.tear_down_cloud()
def test_GLM2_covtype_1(self):
csvFilename = 'covtype.data'
csvPathname = 'standard/' + csvFilename
hex_key = "covtype.hex"
parseResult = h2i.import_parse(bucket='home-0xdiag-datasets', path=csvPathname, hex_key=hex_key, schema='local', timeoutSecs=20)
print "Gratuitous use of frame splitting. result not used"
fs = h2o.nodes[0].frame_split(source=hex_key, ratios=0.75)
split0_key = fs['split_keys'][0]
split1_key = fs['split_keys'][1]
split0_row = fs['split_rows'][0]
split1_row = fs['split_rows'][1]
split0_ratio = fs['split_ratios'][0]
split1_ratio = fs['split_ratios'][1]
print "WARNING: max_iter set to 8 for benchmark comparisons"
max_iter = 8
y = 54
modelKey = "GLMModel"
kwargs = {
# 'cols': x, # for 2
'response': 'C' + str(y+1), # for 2
'family': 'binomial',
# 'link': 'logit', # 2 doesn't support
'n_folds': 2,
'max_iter': max_iter,
'beta_epsilon': 1e-3,
'destination_key': modelKey
}
# maybe go back to simpler exec here. this was from when Exec failed unless this was used
execExpr="A.hex=%s" % parseResult['destination_key']
h2e.exec_expr(execExpr=execExpr, timeoutSecs=30)
# class 1=1, all else 0
execExpr="A.hex[,%s]=(A.hex[,%s]>%s)" % (y+1, y+1, 1)
h2e.exec_expr(execExpr=execExpr, timeoutSecs=30)
aHack = {'destination_key': 'A.hex'}
timeoutSecs = 120
# L2
start = time.time()
kwargs.update({'alpha': 0, 'lambda': 0})
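        # alpha mixes the elastic-net penalty (0 -> ridge/L2, 1 -> lasso/L1,
        # values in between -> a blend) and lambda scales the overall penalty
        # strength, so lambda=0 effectively turns regularization off.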
def completionHack(jobKey, modelKey):
if DO_POLL: # not needed
pass
else:
h2o_jobs.pollStatsWhileBusy(timeoutSecs=300, pollTimeoutSecs=300, retryDelaySecs=5)
# print "FIX! how do we get the GLM result"
params = {'_modelKey': modelKey}
a = h2o.nodes[0].completion_redirect(jsonRequest="2/GLMModelView.json", params=params)
# print "GLM result from completion_redirect:", h2o.dump_json(a)
glmFirstResult = h2o_cmd.runGLM(parseResult=aHack, timeoutSecs=timeoutSecs, noPoll=not DO_POLL, **kwargs)
completionHack(glmFirstResult['job_key'], modelKey)
print "glm (L2) end on ", csvPathname, 'took', time.time() - start, 'seconds'
## h2o_glm.simpleCheckGLM(self, glm, 13, **kwargs)
# Elastic
kwargs.update({'alpha': 0.5, 'lambda': 1e-4})
start = time.time()
glmFirstResult = h2o_cmd.runGLM(parseResult=aHack, timeoutSecs=timeoutSecs, noPoll=not DO_POLL, **kwargs)
completionHack(glmFirstResult['job_key'], modelKey)
print "glm (Elastic) end on ", csvPathname, 'took', time.time() - start, 'seconds'
## h2o_glm.simpleCheckGLM(self, glm, 13, **kwargs)
# L1
kwargs.update({'alpha': 1, 'lambda': 1e-4})
start = time.time()
glmFirstResult = h2o_cmd.runGLM(parseResult=aHack, timeoutSecs=timeoutSecs, noPoll=not DO_POLL, **kwargs)
completionHack(glmFirstResult['job_key'], modelKey)
print "glm (L1) end on ", csvPathname, 'took', time.time() - start, 'seconds'
## h2o_glm.simpleCheckGLM(self, glm, 13, **kwargs)
if __name__ == '__main__':
h2o.unit_main()
| apache-2.0 |