Dataset fields: markdown, code, path, repo_name, license, hash
To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
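The top helper used above is defined earlier in the notebook and is not shown in this excerpt; a minimal sketch of such a function, assuming it simply sorts by the given column and keeps the first n rows, would be:
def top(df, column=None, n=5):
    # sort descending by the requested column and return the n largest rows
    return df.sort_values(by=column, ascending=False)[:n]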
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
68bfdfba25fca7f93eae51d4456b53ca
Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument. Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
mb1.index[:3]
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
0dfb455f193a1fabe5c81010be4d2ba6
Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
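For illustration, here is what that lambda does to a single (hypothetical) index entry:
s = 'Archaea Crenarchaeota Thermoprotei Thermoproteales Thermoproteaceae Pyrobaculum'
' '.join(s.split(' ')[:3])  # -> 'Archaea Crenarchaeota Thermoprotei'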
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
3cc80c31e8c8ed6e3d0af0b7932abc3f
However, since there are multiple taxonomic units with the same class, our index is no longer unique:
mb_class.head()
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
b2751cb972b4fe7f4276c6562f5bac9e
We can re-establish a unique index by summing all rows with the same class, using groupby:
mb_class.groupby(level=0).sum().head(10)
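The same pattern is easier to see on a toy frame (hypothetical values):
import pandas as pd

toy = pd.DataFrame({'count': [1, 2, 3]},
                   index=['Archaea Crenarchaeota Thermoprotei',
                          'Archaea Crenarchaeota Thermoprotei',
                          'Bacteria Firmicutes Bacilli'])
toy.groupby(level=0).sum()  # the two rows sharing an index label collapse into one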
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
0bff9a150472cc89b46527a22de93361
Exercise 2 Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
0aed6a1d4e9bcb58ee3ed0c58a4e50cc
Women and children first?
1) Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
2) Use the groupby method to calculate the proportion of passengers that survived, by sex.
3) Calculate the same proportion, but by class and sex.
4) Create age categories: children (under 14 years), adolescents (14-20), adults (21-64), and seniors (65+), and calculate survival proportions by age category, class and sex.
titanic_df = pd.read_excel('Data/titanic.xls', 'titanic', index_col=None, header=0)
titanic_df

# Flag and drop duplicate passenger names
titanic_nameduplicate = titanic_df.duplicated(subset='name')
#titanic_nameduplicate
titanic_df = titanic_df.drop_duplicates(['name'])

# Encode sex as 0/1
gender_map = {'male': 0, 'female': 1}
titanic_df['sex'] = titanic_df.sex.map(gender_map)
titanic_df

# Group by sex and inspect the groups
titanic_grouped = titanic_df.groupby(titanic_df.sex)
titanic_grouped
for sex, group in titanic_grouped:
    print('sex', sex)
    print('group', group)

# Proportion of passengers that survived, by sex
titanic_grouped['survived'].mean().head()
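The remaining parts of the exercise can be sketched along the same lines; the column names (pclass, age, survived) are assumed from the titanic.xls layout, and the age bins follow the categories defined above:
# survival proportion by class and sex
titanic_df.groupby(['pclass', 'sex'])['survived'].mean()

# age categories: child (<14), adolescent (14-20), adult (21-64), senior (65+)
age_bins = [0, 14, 21, 65, 120]
age_labels = ['child', 'adolescent', 'adult', 'senior']
titanic_df['age_cat'] = pd.cut(titanic_df['age'], bins=age_bins, labels=age_labels, right=False)

# survival proportion by age category, class and sex
titanic_df.groupby(['age_cat', 'pclass', 'sex'])['survived'].mean()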
Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb
Merinorus/adaisawesome
gpl-3.0
6610dc16e9a7c16e8885ad1caf001d5f
The Heart of the Notebook: The Combinatorial Optimization class
class CombinatorialOptimisation(Disaggregator): """ A Combinatorial Optimization Algorithm based on the implementation by NILMTK This class is build upon the main Dissagregator class already implemented by NILMTK All the methods from Dissagregator are passed in here as well since we import the class as shown above. We should note howeger that Dissagregator is nothing more than a general interface class upon which all dissagregator algortihms are build. All the methods are initialized in the Dissagregator class but the specific implementation is based upon the method to be implemented. In other words, even though we pass in Dissagregator, all methods will be redefined again to work with the Combinatorial Optimization algorithm as you can see below. Attributes ---------- model : list of dicts Each dict has these keys: states : list of ints (the power (Watts) used in different states) training_metadata : ElecMeter or MeterGroup object used for training this set of states. We need this information because we need the appliance type (and perhaps some other metadata) for each model. state_combinations : 2D array Each column is an appliance. Each row is a possible combination of power demand values e.g. [[0, 0, 0, 0], [0, 0, 0, 100], [0, 0, 50, 0], [0, 0, 50, 100], ...] MIN_CHUNK_LENGTH : int """ def __init__(self): self.model = [] self.state_combinations = None self.MIN_CHUNK_LENGTH = 100 self.MODEL_NAME = 'Combinatorial Optimization' def train(self, metergroup, num_states_dict=None, **load_kwargs): """ Train using 1D CO. Places the learnt model in the `model` attribute. Parameters ---------- metergroup : a nilmtk.MeterGroup object num_states_dict : dict **load_kwargs : keyword arguments passed to `meter.power_series()` Notes ----- * only uses first chunk for each meter (TODO: handle all chunks). """ # Initializing dictionary to save the number of states if num_states_dict is None: num_states_dict = {} # The CO class is only able to train in new models. We can only train once. If model exists, raise an error if self.model: raise RuntimeError( "This implementation of Combinatorial Optimisation" " does not support multiple calls to `train`.") # How many meters do we have in the training set? num_meters = len(metergroup.meters) # If more than 20 then reduce the number of clusters to reduce the computational cost. if num_meters > 20: max_num_clusters = 2 else: max_num_clusters = 3 print('Now training...') print('Loop in all meters begins...') # We now loop in all meters passed in in the training data set # Every time, we load the data in the meter and we call the method # --> train_on_chunk. For more info about this method please see below for i, meter in enumerate(metergroup.submeters().meters): #print('We now train for submeter {}'.format(meter)) # Load the time series for the power consumption for this meter power_series = meter.power_series(**load_kwargs) # Note that we do not effectively load until we use the next() method # We load and save into chunk. Chunk will be used in training chunk = power_series.next() # Get the number of total states from the dictionary num_total_states = num_states_dict.get(meter) if num_total_states is not None: num_on_states = num_total_states - 1 else: num_on_states = None #print('i={},num_total_states={},num_on_states={}'.format(i,meter,num_total_states,num_on_states)) # The actual training happens now. 
We call train_on_chunk using the time series we loaded on chunk for this meter self.train_on_chunk(chunk, meter, max_num_clusters, num_on_states) # Check to see if there are any more chunks. try: power_series.next() except StopIteration: pass else: warn("The current implementation of CombinatorialOptimisation" " can only handle a single chunk. But there are multiple" " chunks available. So have only trained on the" " first chunk!") print("Done training!") def train_on_chunk(self, chunk, meter, max_num_clusters, num_on_states): """ Train on chunk trains the Combinatorial Optimization Model based on the time series for the power consumption passed in chunk. This method is based on the sklearn machine learning library and in particular the KMEANS algorithm. It calls the cluster function which is imported in the beginning of this notebook. Cluster, prepares the data in chunk so that its size is always compatible and the same and then calls the KMEANS algorithm to perform the clustering. Function cluster returns only the centers of the clustered data which correspond to the individual states for the given appliance/meter """ # Check if we've already trained on this meter. We only allow training once on each meter meters_in_model = [d['training_metadata'] for d in self.model] if meter in meters_in_model: raise RuntimeError( "Meter {} is already in model!" " Can't train twice on the same meter!" .format(meter)) # Do the KMEANS clustering and return the centers states = cluster(chunk, max_num_clusters, num_on_states) print('\t Now Clustering in Train on Chunk') #print('\t {}'.format(states)) # Append the clustered data to the model self.model.append({ 'states': states, 'training_metadata': meter}) def _set_state_combinations_if_necessary(self): """Get centroids""" # If we import sklearn at the top of the file then auto doc fails. if (self.state_combinations is None or self.state_combinations.shape[1] != len(self.model)): from sklearn.utils.extmath import cartesian # Saving the centroids in centroids (appliance states) centroids = [model['states'] for model in self.model] # Function cartesian returns all possible combinations # than can be performed using centroids self.state_combinations = cartesian(centroids) print() #print('Now printing the state combinations...') #print(cartesian(centroids)) def disaggregate(self, mains, output_datastore, vampire_power=None, **load_kwargs): '''Disaggregate mains according to the model learnt previously. Parameters ---------- mains : nilmtk.ElecMeter or nilmtk.MeterGroup output_datastore : instance of nilmtk.DataStore subclass For storing power predictions from disaggregation algorithm. vampire_power : None or number (watts) If None then will automatically determine vampire power from data. If you do not want to use vampire power then set vampire_power = 0. sample_period : number, optional The desired sample period in seconds. Set to 60 by default. sections : TimeFrameGroup, optional Set to mains.good_sections() by default. **load_kwargs : key word arguments Passed to `mains.power_series(**kwargs)` ''' # Performing default pre disaggregation checks. Checking meters etc.. load_kwargs = self._pre_disaggregation_checks(load_kwargs) # Disaggregation defauls. 
Sample perios and sections load_kwargs.setdefault('sample_period', 60) load_kwargs.setdefault('sections', mains.good_sections()) # Initializing time frames and fetching the meter for the aggregated data timeframes = [] building_path = '/building{}'.format(mains.building()) mains_data_location = building_path + '/elec/meter1' data_is_available = False # We now load the aggregated data for power consumption of the whole house in small chunks # Every iteration of the following loop we perform the CO step to disaggregate counter = 0 print('Disaggregation now begins...') for chunk in mains.power_series(**load_kwargs): counter += 1 # Check that chunk is sensible size if len(chunk) < self.MIN_CHUNK_LENGTH: continue print('\t Now processing chunk {}...'.format(counter)) # Record metadata timeframes.append(chunk.timeframe) measurement = chunk.name # This is where the disaggregation happens # Vampire Power is just the minimum of the power series in this chunk appliance_powers = self.disaggregate_chunk(chunk, vampire_power) # Here we save the disaggregated data for this chunk in Pandas dataframe and update the # HDF5 file we created. for i, model in enumerate(self.model): # Fetch the disag data for this appliance appliance_power = appliance_powers[i] if len(appliance_power) == 0: continue data_is_available = True # Just for saving.. Nothing major happening here cols = pd.MultiIndex.from_tuples([chunk.name]) meter_instance = model['training_metadata'].instance() df = pd.DataFrame( appliance_power.values, index=appliance_power.index, columns=cols) key = '{}/elec/meter{}'.format(building_path, meter_instance) output_datastore.append(key, df) # Copy mains data to disag output mains_df = pd.DataFrame(chunk, columns=cols) output_datastore.append(key=mains_data_location, value=mains_df) if data_is_available: self._save_metadata_for_disaggregation( output_datastore=output_datastore, sample_period=load_kwargs['sample_period'], measurement=measurement, timeframes=timeframes, building=mains.building(), meters=[d['training_metadata'] for d in self.model] ) print('Disaggregation Completed Successfully...!!!') def disaggregate_chunk(self, mains, vampire_power=None): """In-memory disaggregation. Parameters ---------- mains : pd.Series vampire_power : None or number (watts) If None then will automatically determine vampire power from data. If you do not want to use vampire power then set vampire_power = 0. Returns ------- appliance_powers : pd.DataFrame where each column represents a disaggregated appliance. Column names are the integer index into `self.model` for the appliance in question. """ if not self.model: raise RuntimeError( "The model needs to be instantiated before" " calling `disaggregate`. The model" " can be instantiated by running `train`.") if len(mains) < self.MIN_CHUNK_LENGTH: raise RuntimeError("Chunk is too short.") # sklearn produces lots of DepreciationWarnings with PyTables import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) # Because CombinatorialOptimisation could have been trained using # either train() or train_on_chunk(), we must # set state_combinations here. 
self._set_state_combinations_if_necessary() # Add vampire power to the model (Min of power series of the aggregated data) if vampire_power is None: vampire_power = get_vampire_power(mains) if vampire_power > 0: print() #print("Including vampire_power = {} watts to model...".format(vampire_power)) # How many combinations n_rows = self.state_combinations.shape[0] vampire_power_array = np.zeros((n_rows, 1)) + vampire_power state_combinations = np.hstack( (self.state_combinations, vampire_power_array)) else: state_combinations = self.state_combinations summed_power_of_each_combination = np.sum(state_combinations, axis=1) # summed_power_of_each_combination is now an array where each # value is the total power demand for each combination of states. # Start disaggregation # The following line finds the best combination from all the possible combinations # Returns the index to find the best combination as well as the residual # Uses the Find_Nearest algorithm indices_of_state_combinations, residual_power = find_nearest( summed_power_of_each_combination, mains.values) # Now update the state for each appliance with the optimal one and return the list # as Dataframe appliance_powers_dict = {} for i, model in enumerate(self.model): #print() #print("Estimating power demand for '{}'".format(model['training_metadata'])) predicted_power = state_combinations[ indices_of_state_combinations, i].flatten() column = pd.Series(predicted_power, index=mains.index, name=i) appliance_powers_dict[i] = column appliance_powers = pd.DataFrame(appliance_powers_dict) return appliance_powers # The current implementation of the CO does not make use of the following 2 functions. # # # ------------------------------------------------------------------------------------- def import_model(self, filename): imported_model = pickle.load(open(filename, 'r')) self.model = imported_model.model # recreate datastores from filenames for pair in self.model: pair['training_metadata'].store = HDFDataStore( pair['training_metadata'].store) self.state_combinations = imported_model.state_combinations self.MIN_CHUNK_LENGTH = imported_model.MIN_CHUNK_LENGTH def export_model(self, filename): # Can't pickle datastore, so convert to filenames exported_model = copy.deepcopy(self) for pair in exported_model.model: pair['training_metadata'].store = ( pair['training_metadata'].store.store.filename) pickle.dump(exported_model, open(filename, 'wb'))
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
eb789d60a67da6b8e13e837e73e4ebd1
Importing and Loading the REDD dataset
data_dir = '\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project'
we = DataSet(join(data_dir, 'REDD.h5'))
print('loaded ' + str(len(we.buildings)) + ' buildings')
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
7857028ae06f69a0ca7396abbc82a71f
We want to train the Combinatorial Optimization algorithm using the data for 5 buildings and then test it against the last building. To simplify our analysis, and also to enable comparison with other methods (Neural Nets, FHMM, MLE, etc.), we will only try to disaggregate the data associated with the fridge and the microwave. However, the REDD dataset that we are using here does not contain measurements for the fridge and microwave in all buildings. In particular, building 4 does not have measurements for the fridge. As a result, we will exclude building 4 from the dataset and only import the meters associated with the fridge from the other buildings. The training data set will consist of meters associated with the fridge and microwave from buildings 1, 2, 3 and 6. We will then test the Combinatorial Optimization algorithm against the aggregated data for building 5. We first plot the time window span for all buildings.
for i in xrange(1,7):
    print('Timeframe for building {} is {}'.format(i, we.buildings[i].elec.get_timeframe()))
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
8547406c05c768abc3eab32a0710ce80
Unfortunately, due to a bug in one of the main classes of the NILMTK package, the implementation of Combinatorial Optimization does not save the meters for the disaggregated data correctly unless the building we test on also exists in the training set. More on this issue can be found here: https://github.com/nilmtk/nilmtk/issues/194. However, for us it makes no sense to use the same building for training and testing, since we would like to compare this algorithm with the results from FHMM and Neural Networks. In order to circumvent this bug we do the following. The main issue is that the meter for the building we would like to disaggregate must be in the training set in order for the disaggregation to work correctly. That being said, we still want to train as little as possible on the meter we want to test on, since we would like to see how the algorithm performs on a completely unknown dataset. To do that, we create a metergroup comprising the following:
1) The meters for the Fridge and Microwave for all buildings but building 5, since building 5 is the building we would like to test on. Later we will see that building 4 needs to be excluded as well, because there is no meter associated with the fridge for this building.
2) The meters for the Fridge and Microwave for building 5, which is the building we would like to test on, but with the time window limited to a very small one. Doing that, we make sure that the meters are there and understood by the Combinatorial Optimization class, but at the same time, by limiting the time window to just a few hours for this building, we do not provide enough data to overtrain. In other words, we only do this in order to be able to disaggregate correctly.
After we train, we will test the algorithm against the data from building 5 that weren't fed into the training meters. After we disaggregate, we will compare with the ground truth for the same exact window.
Modifying Datasets to work with CO
# Data file directory data_dir = '\Users\Nick\Google Drive\PhD\Courses\Semester 2\AM207\Project' # Make the Data set Data = DataSet(join(data_dir, 'REDD.h5')) # Make copies of the Data Set so that local changes would not affect the global dataset Data_for_5 = DataSet(join(data_dir, 'REDD.h5')) Data_for_rest = DataSet(join(data_dir, 'REDD.h5')) # How many buildings in the data set? print(' Found {} buildings in the Data Ser.. Buildings Loaded successfully.'.format(len(Data.buildings))) # This is the point that we will break the data from building 5 so that we only include a small # portion in the training set. In fact, the line below makes sure than only a day of data is seen during training. break_point = '2011-04-19 02:00' # Changing the window for building 5 Data_for_5.set_window(end=break_point) # Making a metergroup.. e = [Data_for_5.buildings[5].elec[a] for a in ['fridge','microwave']] me = MeterGroup(e) # The data that we pass in for training for building 5 look like this... me.plot()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
ed5e59076d2d027951742cf239cb302e
Creating MeterGroups with the desired appliances from the desired buildings
Below we define a function that is able to create a metergroup that only includes meters for the appliances we are interested in, and is also able to exclude buildings that we don't want in the meter group. Also, if an appliance is requested but a meter is not found, then the meter is skipped and the metergroup is created nonetheless.
def get_all_trainings(appliance, dataset, buildings_to_exclude):
    # Filtering by appliances:
    elecs = []
    for app in appliance:
        app_l = [app]
        print ('Now loading data for ' + app + ' for all buildings in the data to create the metergroup')
        print()
        for building in dataset.buildings:
            if building not in buildings_to_exclude:
                print ('Processing Building ' + str(building) + '...')
                print()
                try:
                    elec = dataset.buildings[building].elec[app]
                    elecs.append(elec)
                except KeyError:
                    print ('Appliance '+str(app)+' does not exist in this building')
                    print ('Building skipped...')
                    print ()

    metergroup = MeterGroup(elecs)
    return metergroup
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
71ffd0d57d5b155642054bf0b9eada6c
Now we set the appliances that we want as well as the buildings to exclude and we create the metergroup
applianceName = ['fridge','microwave']
buildings_to_exclude = [4,5]

metergroup = get_all_trainings(applianceName, Data_for_rest, buildings_to_exclude)

print('Now printing the Meter Group...')
print()
print(metergroup)
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
0c1e9ac6177a7ca9da32e76abe926d2d
As we can see, the MeterGroup was successfully created and contains all the appliances we requested (Fridge and Microwave) in all buildings in which the appliances exist, apart from the ones we excluded.
Correcting the MeterGroup (Necessary for the CO to work)
Now we need to perform the trick we mentioned previously: we also need to include the meters for the Fridge and Microwave from building 5, which is the building we are going to test on, while making sure that only a very small portion of its data is seen during training. We already took care of that by changing the window for the data in building 5, so now we only have to include the meters for the Fridge and Microwave for building 5 from the reduced-time dataset.
def correct_meter(Data, building, appliance, oldmeter):
    # Unpack meters from the MeterGroup
    meters = oldmeter.all_meters()

    # Get the rest of the meters and append
    for a in appliance:
        meter_to_add = Data.buildings[building].elec[a]
        meters.append(meter_to_add)

    # Group again in a single metergroup and return
    return MeterGroup(meters)

corr_metergroup = correct_meter(Data_for_5, 5, applianceName, metergroup)

print('The Modified Meter is now..')
print()
print(corr_metergroup)
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
31166bc9df2d1035c1dedd5525d7a2b0
As we can see, the metergroup was updated successfully.
Training
We now need to train on the MeterGroup we just created. First, let us load the class for the CO.
# Train
co = CombinatorialOptimisation()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
f605e08ca9122f88cf5b53d3145c198f
Now Let's train
co.train(corr_metergroup)
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
f905ef2b2b57e84af152f03fc049d071
Preparing the Testing Data
Now that the training is done, the only thing left to do is to prepare the data for building 5, which we want to test on, and call the disaggregation. The data set is now the remaining part of building 5 that was not seen during training. After that, we only keep the mains meter, which contains information about the aggregated power consumption, and we disaggregate.
Test_Data = DataSet(join(data_dir, 'REDD.h5'))
Test_Data.set_window(start=break_point)

# The building number on which we test
building_for_testing = 5

test = Test_Data.buildings[building_for_testing].elec
mains = test.mains()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
ca4c059285bb2de02091dd8f1fd91504
Disaggregating the test data The disaggregation Begins Now
# Disaggregate
disag_filename = join(data_dir, 'COMBINATORIAL_OPTIMIZATION.h5')
mains = test.mains()

try:
    output = HDFDataStore(disag_filename, 'w')
    co.disaggregate(mains, output)
except ValueError:
    output.close()
    output = HDFDataStore(disag_filename, 'w')
    co.disaggregate(mains, output)

for meter in range(1, 2):
    df1 = output.store.get('/building5/elec/meter{}'.format(meter))
    df2 = we.store.store.get('/building5/elec/meter{}'.format(meter))

output.close()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
f418e2fddf6f347f3f6ab0efccdee414
OK.. Now we are all done. All that remains is to interpret the results and plot the scores.. Post Processing & Results
# Opening the Dataset with the Disaggregated data disag = DataSet(disag_filename) # Getting electric appliances and meters disag_elec = disag.buildings[building_for_testing].elec # We also get the electric appliances and meters for the ground truth data to compare elec = Test_Data.buildings[building_for_testing].elec e = [test[a] for a in applianceName] me = MeterGroup(e) print(me)
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
de81b6dbf76afcfcb0bda6cf2a785db3
Resampling to align meters
Before we are able to calculate and plot the metrics, we need to align the ground-truth meter with the disaggregated meters. Why so? If you look at the disaggregate method of the CO class above, you may notice that by default the sample period is changed from 3 s (the raw data) to 60 s. This has to happen in order to make the disaggregation more efficient computationally, but also because it is impossible to disaggregate using the actual time step. So, in order to compare, we now have to resample the ground-truth meter and align it.
def align_two_meters(master, slave, func='when_on'):
    """Returns the first aligned chunk of `master` and `slave` as a pair
    (this variant returns the pair directly rather than yielding a generator).
    The first element is from `master`, the second from `slave`. Takes the
    sample rate and good_sections of `master` and applies them to `slave`.

    Parameters
    ----------
    master, slave : ElecMeter or MeterGroup instances
    """
    sample_period = master.sample_period()
    period_alias = '{:d}S'.format(sample_period)
    sections = master.good_sections()
    master_generator = getattr(master, func)(sections=sections)
    for master_chunk in master_generator:
        if len(master_chunk) < 2:
            return
        chunk_timeframe = TimeFrame(master_chunk.index[0],
                                    master_chunk.index[-1])
        slave_generator = getattr(slave, func)(sections=[chunk_timeframe])
        slave_chunk = next(slave_generator)

        # TODO: do this resampling in the pipeline?
        slave_chunk = slave_chunk.resample(period_alias)
        if slave_chunk.empty:
            continue
        master_chunk = master_chunk.resample(period_alias)

        return master_chunk, slave_chunk
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
940c1846148a964c65403438f7764115
Here we just plot the disaggregated data alongside the ground truth for the Fridge
disag_elec.select(instance=18).plot()
me.select(instance=18).plot()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
6723de4becc1c978f5c885025e4bd8bd
Aligning meters, Converting to Numpy and Computing Metrics
In this part of the notebook, we call the function we previously defined to align the meters, and then convert the aligned series to pandas objects and ultimately to numpy arrays. We check whether any NaNs exist (which is possible after resampling, since resampling errors may occur) and replace them with 0's if they do. We also compute the following metrics for each appliance:
1) True Positives, False Positives, False Negatives, True Negatives
2) Precision and Recall
3) Accuracy and F1-Score
For more information about these metrics please refer to the report.
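For reference, the standard definitions used in the code below are:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-Score = 2 * Precision * Recall / (Precision + Recall)
Accuracy = (TP + TN) / (P + N)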
appliances_scores = {}

for m in me.meters:
    print('Processing {}...'.format(m.label()))

    ground_truth = m
    inst = m.instance()
    prediction = disag_elec.select(instance=inst)

    a = prediction.meters[0]
    b = a.power_series_all_data()

    pr_a, gt_a = align_two_meters(prediction.meters[0], ground_truth)
    gt = gt_a.as_matrix()
    pr = pr_a.as_matrix()

    if np.all(np.isnan(pr) == False):
        print('\t Predictions array seems to be fine...')
        print('\t No Nans detected')
        print()
    else:
        print('\t Serious error in Predictions...')
        print('\t The resampled array contains Nans')
        print()

    gt_states_on = gt > 0.1
    pr_states_on = pr > 0.1

    # Confusion-matrix counts (TP: both on, FP: predicted on but truly off,
    # FN: predicted off but truly on, TN: both off)
    TP = np.sum(np.logical_and(gt_states_on == True, pr_states_on[1:] == True))
    FP = np.sum(np.logical_and(gt_states_on == False, pr_states_on[1:] == True))
    FN = np.sum(np.logical_and(gt_states_on == True, pr_states_on[1:] == False))
    TN = np.sum(np.logical_and(gt_states_on == False, pr_states_on[1:] == False))
    P = np.sum(gt_states_on == True)
    N = np.sum(gt_states_on == False)

    recall = TP / float(TP + FN)
    precision = TP / float(TP + FP)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (TP + TN) / float(P + N)

    result = {'F1-Score': f1, 'Precision': precision,
              'Recall': recall, 'Accuracy': accuracy}
    appliances_scores[m.label()] = result

print(appliances_scores)

Names = ['Fridge', 'Microwave']
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
7c3fe8d97d40a01eda3c88eccace044a
Results Now we just plot the scores for both the Fridge and the Microwave in order to be able to visualize what is going on. We do not comment on the results in this notebook since we do this in the report. There is a separate notebook where all these results are combined along with the corresponding results from the Neural Network and the FHMM method and the total results are reported side by side to ease comparison. We plot them here as well for housekeeping although it is redundant. F1-Score
x = np.arange(2) y = np.array([appliances_scores[i]['F1-Score'] for i in Names]) y[np.isnan(y)] = 0.001 f = plt.figure(figsize=(18,8)) plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']}) plt.rc('text', usetex=True) ax = f.add_axes([0.2,0.2,0.8,0.8]) ax.bar(x,y,align='center') ax.set_xticks(x) ax.set_yticks(y) ax.set_yticklabels(y,fontsize=20) ax.set_xticklabels(Names,fontsize=20) ax.set_xlim([min(x)-0.5,max(x)+0.5]) plt.xlabel('Appliances',fontsize=20) plt.ylabel('F1-Score',fontsize=20) plt.title('Combinatorial Optimization',fontsize=22) plt.show()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
048eb798a3341e149e5c2de3552cd070
Precision
x = np.arange(2) y = np.array([appliances_scores[i]['Precision'] for i in Names]) y[np.isnan(y)] = 0.001 f = plt.figure(figsize=(18,8)) plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']}) plt.rc('text', usetex=True) ax = f.add_axes([0.2,0.2,0.8,0.8]) ax.bar(x,y,align='center') ax.set_xticks(x) ax.set_yticks(y) ax.set_yticklabels(y,fontsize=20) ax.set_xticklabels(Names,fontsize=20) ax.set_xlim([min(x)-0.5,max(x)+0.5]) plt.xlabel('Appliances',fontsize=20) plt.ylabel('Precision',fontsize=20) plt.title('Combinatorial Optimization',fontsize=22) plt.show()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
27524ddbbe8c515de0e68a4bfb1a64e3
Recall
x = np.arange(2)
y = np.array([appliances_scores[i]['Recall'] for i in Names])
y[np.isnan(y)] = 0.001

f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y, fontsize=20)
ax.set_xticklabels(Names, fontsize=20)  # label the two bars (Fridge, Microwave), as in the plots above
ax.set_xlim([min(x)-0.5, max(x)+0.5])
plt.xlabel('Appliances', fontsize=20)
plt.ylabel('Recall', fontsize=20)
plt.title('Combinatorial Optimization', fontsize=22)
plt.show()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
210143594e80ab02b443aac6bc760664
Accuracy
x = np.arange(2)
y = np.array([appliances_scores[i]['Accuracy'] for i in Names])
y[np.isnan(y)] = 0.001

f = plt.figure(figsize=(18,8))
plt.rc('font', size=20, **{'family': 'serif', 'serif': ['Computer Modern']})
plt.rc('text', usetex=True)
ax = f.add_axes([0.2,0.2,0.8,0.8])
ax.bar(x, y, align='center')
ax.set_xticks(x)
ax.set_yticks(y)
ax.set_yticklabels(y, fontsize=20)
ax.set_xticklabels(Names, fontsize=20)  # label the two bars (Fridge, Microwave), as in the plots above
ax.set_xlim([min(x)-0.5, max(x)+0.5])
plt.xlabel('Appliances', fontsize=20)
plt.ylabel('Accuracy', fontsize=20)
plt.title('Combinatorial Optimization', fontsize=22)
plt.show()
phd-thesis/benchmarkings/am207-NILM-project-master/CO.ipynb
diegocavalca/Studies
cc0-1.0
c2f277988db8ef9a3431f9982f2f0db0
Create the grid We are going to build a uniform rectilinear grid with a node spacing of 100 km in the y-direction and 10 km in the x-direction on which we will solve the flexure equation. First we need to import RasterModelGrid.
from landlab import RasterModelGrid
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
e683af9c8259e068fdc4bab070b8fc4a
Create a rectilinear grid with a spacing of 100 km between rows and 10 km between columns. The numbers of rows and columns are provided as a tuple of (n_rows, n_cols), in the same manner as similar numpy functions. The spacing is also a tuple, (dy, dx).
grid = RasterModelGrid((3, 800), xy_spacing=(100e3, 10e3))
grid.dy, grid.dx
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
5a2de3321fdadcfccbbe68ed21849bd6
Create the component Now we create the flexure component and tell it to use our newly created grid. First, though, we'll examine the Flexure component a bit.
from landlab.components import Flexure1D
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
dbd26a532dc4ec5b69f51d19ff6b7bb6
The Flexure1D component, as with most landlab components, will require our grid to have some data that it will use. We can get the names of these data fields with the input_var_names attribute of the component class.
Flexure1D.input_var_names
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
bffeb8757469b06172ce58a16f1b8c66
We see that flexure uses just one data field: the change in lithospheric loading. Landlab component classes can provide additional information about each of these fields. For instance, to see the units for a field, use the var_units method.
Flexure1D.var_units('lithosphere__increment_of_overlying_pressure')
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
e19ff7a6024e4757d161904b74515914
To print a more detailed description of a field, use var_help.
Flexure1D.var_help('lithosphere__increment_of_overlying_pressure')
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
196eab78621a8c254af6c69fb8668040
What about the data that Flexure1D provides? Use the output_var_names attribute.
Flexure1D.output_var_names
Flexure1D.var_help('lithosphere_surface__increment_of_elevation')
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
0648de3d7ec916ed36c330fe111ffd81
Now that we understand the component a little more, create it using our grid.
grid.add_zeros("lithosphere__increment_of_overlying_pressure", at="node") flex = Flexure1D(grid, method='flexure')
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
3a5324a4c9b91d3a46294976bf12aa5d
Add a point load
First we'll add just a single point load to the grid. We need to call the update method of the component to calculate the resulting deflection (if we don't run update, the deflections remain all zeros). Use the load_at_node attribute of Flexure1D to set the loads. Notice that load_at_node has the same shape as the grid. Likewise, x_at_node and dz_at_node are also reshaped to match the grid.
flex.load_at_node[1, 200] = 1e6
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
743a57ee27bd89e84d6c7744d06a370d
Before we make any changes, reset the deflections to zero.
flex.dz_at_node[:] = 0.
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
a034718ad83da13e35b6286afd2c5ab1
Now we will double the effective elastic thickness but keep the same point load. Notice that, as expected, the deflections are more spread out.
flex.eet *= 2.
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
ef0bd6f8bc9276cc547d0640b5f6c268
Add some loading
We will now add a distributed load. As we saw above, for this component, the name of the attribute that holds the applied loads is load_at_node. For this example we create a loading that increases linearly over the center portion of the grid until some maximum. This could be thought of as the water load following a sea-level rise over a (linear) continental shelf.
flex.load_at_node[1, :100] = 0.
flex.load_at_node[1, 100:300] = np.arange(200) * 1e6 / 200.
flex.load_at_node[1, 300:] = 1e6
plt.plot(flex.load_at_node[1, :400])
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
f71cf1063a25cb1e85b147e6bacfcb6b
Update the component to solve for deflection Clear the current deflections, and run update to get the new deflections.
flex.dz_at_node[:] = 0.
flex.update()
plt.plot(flex.x_at_node[1, :400] / 1000., flex.dz_at_node[1, :400])
notebooks/tutorials/flexure/flexure_1d.ipynb
amandersillinois/landlab
mit
97e62bcdf339e1fa72c94151fefad571
Init
import os %%R library(ggplot2) library(dplyr) library(tidyr) library(phyloseq) library(fitdistrplus) library(sads) %%R dir.create(workDir, showWarnings=FALSE)
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
910e92df850675665126cad5464c3316
Loading phyloseq list datasets
%%R # bulk core samples F = file.path(physeqDir, physeqBulkCore) physeq.bulk = readRDS(F) #physeq.bulk.m = physeq.bulk %>% sample_data physeq.bulk %>% names %%R # SIP core samples F = file.path(physeqDir, physeqSIP) physeq.SIP = readRDS(F) #physeq.SIP.m = physeq.SIP %>% sample_data physeq.SIP %>% names
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
b98b9894f4a0c3d4667babce5c285430
Infer abundance distribution of each bulk soil community distribution fit
%%R physeq2otu.long = function(physeq){ df.OTU = physeq %>% transform_sample_counts(function(x) x/sum(x)) %>% otu_table %>% as.matrix %>% as.data.frame df.OTU$OTU = rownames(df.OTU) df.OTU = df.OTU %>% gather('sample', 'abundance', 1:(ncol(df.OTU)-1)) return(df.OTU) } df.OTU.l = lapply(physeq.bulk, physeq2otu.long) df.OTU.l %>% names #df.OTU = do.call(rbind, lapply(physeq.bulk, physeq2otu.long)) #df.OTU$Day = gsub('.+\\.D([0-9]+)\\.R.+', '\\1', df.OTU$sample) #df.OTU %>% head(n=3) %%R -w 450 -h 400 lapply(df.OTU.l, function(x) descdist(x$abundance, boot=1000)) %%R fitdists = function(x){ fit.l = list() #fit.l[['norm']] = fitdist(x$abundance, 'norm') fit.l[['exp']] = fitdist(x$abundance, 'exp') fit.l[['logn']] = fitdist(x$abundance, 'lnorm') fit.l[['gamma']] = fitdist(x$abundance, 'gamma') fit.l[['beta']] = fitdist(x$abundance, 'beta') # plotting plot.legend = c('exponential', 'lognormal', 'gamma', 'beta') par(mfrow = c(2,1)) denscomp(fit.l, legendtext=plot.legend) qqcomp(fit.l, legendtext=plot.legend) # fit summary gofstat(fit.l, fitnames=plot.legend) %>% print return(fit.l) } fits.l = lapply(df.OTU.l, fitdists) fits.l %>% names %%R # getting summaries for lognormal fits get.summary = function(x, id='logn'){ summary(x[[id]]) } fits.s = lapply(fits.l, get.summary) fits.s %>% names %%R # listing estimates for fits df.fits = do.call(rbind, lapply(fits.s, function(x) x$estimate)) %>% as.data.frame df.fits$Sample = rownames(df.fits) df.fits$Day = gsub('.+D([0-9]+)\\.R.+', '\\1', df.fits$Sample) %>% as.numeric df.fits %%R -w 650 -h 300 ggplot(df.fits, aes(Day, meanlog, ymin=meanlog-sdlog, ymax=meanlog+sdlog)) + geom_pointrange() + geom_line() + theme_bw() + theme( text = element_text(size=16) ) %%R # mean of estimaates apply(df.fits, 2, mean)
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
ccc3246fdeb7224beb0377e003b6918b
Relative abundance of most abundant taxa
%%R -w 800 df.OTU = do.call(rbind, df.OTU.l) %>% mutate(abundance = abundance * 100) %>% group_by(sample) %>% mutate(rank = row_number(desc(abundance))) %>% ungroup() %>% filter(rank < 10) ggplot(df.OTU, aes(rank, abundance, color=sample, group=sample)) + geom_point() + geom_line() + labs(y = '% rel abund')
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
18df84d100ecfe3cdd119fab8f17dd6d
Making a community file for the simulations
%%R -w 800 -h 300 df.OTU = do.call(rbind, df.OTU.l) %>% mutate(abundance = abundance * 100) %>% group_by(sample) %>% mutate(rank = row_number(desc(abundance))) %>% group_by(rank) %>% summarize(mean_abundance = mean(abundance)) %>% ungroup() %>% mutate(library = 1, mean_abundance = mean_abundance / sum(mean_abundance) * 100) %>% rename('rel_abund_perc' = mean_abundance) %>% dplyr::select(library, rel_abund_perc, rank) %>% as.data.frame df.OTU %>% nrow %>% print ggplot(df.OTU, aes(rank, rel_abund_perc)) + geom_point() + geom_line() + labs(y = 'mean % rel abund')
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
ee7e56d48b45d23762d9230e1e099512
Adding reference genome taxon names
ret = !SIPSim KDE_info -t /home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde.pkl ret = ret[1:] ret[:5] %%R F = '/home/nick/notebook/SIPSim/dev/fullCyc_trim//ampFrags_kde_amplified.txt' ret = read.delim(F, sep='\t') ret = ret$genomeID ret %>% length %>% print ret %>% head %%R ret %>% length %>% print df.OTU %>% nrow %%R -i ret # randomize ret = ret %>% sample %>% sample %>% sample # adding to table df.OTU$taxon_name = ret[1:nrow(df.OTU)] df.OTU = df.OTU %>% dplyr::select(library, taxon_name, rel_abund_perc, rank) df.OTU %>% head %%R #-- debug -- # df.gc = read.delim('~/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_parsed_kde_info.txt', sep='\t', row.names=) top.taxa = df.gc %>% filter(KDE_ID == 1, median > 1.709, median < 1.711) %>% dplyr::select(taxon_ID) %>% mutate(taxon_ID = taxon_ID %>% sample) %>% head top.taxa = top.taxa$taxon_ID %>% as.vector top.taxa %%R #-- debug -- # p1 = df.OTU %>% filter(taxon_name %in% top.taxa) p2 = df.OTU %>% head(n=length(top.taxa)) p3 = anti_join(df.OTU, rbind(p1, p2), c('taxon_name' = 'taxon_name')) df.OTU %>% nrow %>% print p1 %>% nrow %>% print p2 %>% nrow %>% print p3 %>% nrow %>% print p1 = p2$taxon_name p2$taxon_name = top.taxa df.OTU = rbind(p2, p1, p3) df.OTU %>% nrow %>% print df.OTU %>% head
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
8d40fe753f177c2bd29833b6e977ff9e
Writing file
%%R F = file.path(workDir, 'fullCyc_12C-Con_trm_comm.txt') write.table(df.OTU, F, sep='\t', quote=FALSE, row.names=FALSE) cat('File written:', F, '\n')
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
c1a31078762ef6f653737dc6149e5d9d
parsing amp-Frag file to match comm file
!tail -n +2 /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm.txt | \ cut -f 2 > /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm_taxa.txt outFile = os.path.splitext(ampFragFile)[0] + '_parsed.pkl' !SIPSim KDE_parse \ $ampFragFile \ /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm_taxa.txt \ > $outFile print 'File written {}'.format(outFile) !SIPSim KDE_info -n $outFile
ipynb/bac_genome/fullCyc/trimDataset/dataset_info.ipynb
nick-youngblut/SIPSim
mit
08eb8d2f22760a6a5af466197a1a0640
Case 1: a = a + b
The sum is computed first, resulting in a new array, and the name a is then bound to that new array.
a = np.array(range(10000000))
b = np.array(range(9999999,-1,-1))

%%time
a = a + b
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
d5b24ebcaa22f4c26b803a7d04c3f861
Case 2: a += b
The elements of b are added directly into the elements of a (in memory); no intermediate array is created. These operators implement so-called "in-place arithmetic" (e.g., +=, *=, /=, -=).
a = np.array(range(10000000))
b = np.array(range(9999999,-1,-1))

%%time
a += b
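One quick way to confirm the difference between the two cases is to check whether the name a still refers to the same array object; a small sketch with toy arrays:
import numpy as np

a = np.arange(5)
b = np.ones(5, dtype=int)

a_id = id(a)
a = a + b             # allocates a new array and rebinds the name
print(id(a) == a_id)  # False

a_id = id(a)
a += b                # updates the existing array in place
print(id(a) == a_id)  # True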
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
9efb0fa3c19d10356124a854fff74282
2. Vectorization
# Apply a function to a complete array instead of writing a loop to iterate over
# all elements of the array. This is called vectorization. The opposite of
# vectorization (for loops) is known as the scalar implementation.
def f(x):
    return x*np.exp(4)

print(f(a))
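For comparison, a scalar (loop-based) sketch of the same function; it gives the same result but runs far slower than the vectorized call above on large arrays (toy array used here for illustration):
import numpy as np

def f_loop(x):
    out = np.empty(len(x))
    for i in range(len(x)):
        out[i] = x[i] * np.exp(4)
    return out

x = np.arange(10)
print(np.allclose(f_loop(x), x * np.exp(4)))  # True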
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
1661daf7922750c2243b88e6d9d89a51
3. Slicing and reshape
Array slicing: x[i:j:s] picks out the elements starting with index i and stepping s indices at a time up to, but not including, j.
x = np.array(range(100))

x[1:-1]    # picks out all elements except the first and the last; contrary to lists, x[1:-1] is not a copy of the data in x
x[0:-1:2]  # picks out every second element up to, but not including, the last element
x[::4]     # picks out every fourth element in the whole array
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
e92537bc971044550c50545bc0fa1f75
Array shape manipulation
a = np.linspace(-1, 1, 6)
print (a)
a.shape
a.size

# rows, columns
a.shape = (2, 3)
a = a.reshape(2, 3)  # alternative
a.shape
print (a)

# len(a) always returns the length of the first dimension of an array -> no. of rows
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
37394204ccc60212cebda62bd37adde7
Exercise 1. Create a 10x10 2d array with 1 on the border and 0 inside
Z = np.ones((10,10))
Z[1:-1,1:-1] = 0
print(Z)
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
00bd7d2e4732525111359abd4def18f3
2. Create a structured array representing a position (x,y) and a color (r,g,b)
Z = np.zeros(10, [('position', [('x', float, 1),
                                ('y', float, 1)]),
                  ('color',    [('r', float, 1),
                                ('g', float, 1),
                                ('b', float, 1)])])
print(Z)
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
6c12ce9553830e71e14d2e845295331e
3. Consider a large vector Z, compute Z to the power of 3 using 2 different methods
x = np.random.rand(int(5e7))

%timeit np.power(x,3)
%timeit x*x*x
02_NB_IntroductionNumpy.ipynb
dianafprieto/SS_2017
mit
87513d2b31a33a8465eac94b368d4654
Run the code below multiple times by repeatedly pressing Ctrl + Enter. After each run observe how the state has changed.
simulation.run(1)
simulation.show_beliefs()
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
0d0717c1480b60f5f429b165de01abe5
What do you think this call to run is doing? Look at the code in simulate.py to find out. Spend a few minutes looking at the run method and the methods it calls to get a sense for what's going on. What am I looking at? The red star shows the robot's true position. The blue circles indicate the strength of the robot's belief that it is at any particular location. Ideally we want the biggest blue circle to be at the same position as the red star.
# We will provide you with the function below to help you look # at the raw numbers. def show_rounded_beliefs(beliefs): for row in beliefs: for belief in row: print("{:0.3f}".format(belief), end=" ") print() # The {:0.3f} notation is an example of "string # formatting" in Python. You can learn more about string # formatting at https://pyformat.info/ show_rounded_beliefs(simulation.beliefs)
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
7092564caba96b0008f91a081fbdcbe0
Part 2: Implement a 2D sense function. As you can see, the robot's beliefs aren't changing. No matter how many times we call the simulation's sense method, nothing happens. The beliefs remain uniform. Instructions Open localizer.py and complete the sense function. Run the code in the cell below to import the localizer module (or reload it) and then test your sense function. If the test passes, you've successfully implemented your first feature! Keep going with the project. If your tests don't pass (they likely won't the first few times you test), keep making modifications to the sense function until they do!
from imp import reload reload(localizer) def test_sense(): R = 'r' _ = 'g' simple_grid = [ [_,_,_], [_,R,_], [_,_,_] ] p = 1.0 / 9 initial_beliefs = [ [p,p,p], [p,p,p], [p,p,p] ] observation = R expected_beliefs_after = [ [1/11, 1/11, 1/11], [1/11, 3/11, 1/11], [1/11, 1/11, 1/11] ] p_hit = 3.0 p_miss = 1.0 beliefs_after_sensing = localizer.sense( observation, simple_grid, initial_beliefs, p_hit, p_miss) if helpers.close_enough(beliefs_after_sensing, expected_beliefs_after): print("Tests pass! Your sense function is working as expected") return elif not isinstance(beliefs_after_sensing, list): print("Your sense function doesn't return a list!") return elif len(beliefs_after_sensing) != len(expected_beliefs_after): print("Dimensionality error! Incorrect height") return elif len(beliefs_after_sensing[0] ) != len(expected_beliefs_after[0]): print("Dimensionality Error! Incorrect width") return elif beliefs_after_sensing == initial_beliefs: print("Your code returns the initial beliefs.") return total_probability = 0.0 for row in beliefs_after_sensing: for p in row: total_probability += p if abs(total_probability-1.0) > 0.001: print("Your beliefs appear to not be normalized") return print("Something isn't quite right with your sense function") test_sense()
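If you get stuck, the following is not the actual solution file; it is just a minimal sketch of a sense() that would satisfy the test above, with the argument order assumed from the call in test_sense:
def sense(color, grid, beliefs, p_hit, p_miss):
    new_beliefs = []
    for i, row in enumerate(grid):
        new_row = []
        for j, cell in enumerate(row):
            hit = (cell == color)
            # weight each prior belief by p_hit where the map matches the observation
            new_row.append(beliefs[i][j] * (p_hit if hit else p_miss))
        new_beliefs.append(new_row)

    # normalize so the beliefs sum to 1
    total = sum(sum(row) for row in new_beliefs)
    return [[p / total for p in row] for row in new_beliefs]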
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
235b9cd3f069a6fac9199fb4d2bcf842
Integration Testing
Before we call this "complete" we should perform an integration test. We've verified that the sense function works on its own, but does the localizer work overall? Let's perform an integration test. First, you should execute the code in the cell below to prepare the simulation environment.
from simulate import Simulation import simulate as sim import helpers reload(localizer) reload(sim) reload(helpers) R = 'r' G = 'g' grid = [ [R,G,G,G,R,R,R], [G,G,R,G,R,G,R], [G,R,G,G,G,G,R], [R,R,G,R,G,G,G], [R,G,R,G,R,R,R], [G,R,R,R,G,R,G], [R,R,R,G,R,G,G], ] # Use small value for blur. This parameter is used to represent # the uncertainty in MOTION, not in sensing. We want this test # to focus on sensing functionality blur = 0.1 p_hit = 100.0 simulation = sim.Simulation(grid, blur, p_hit) # Use control+Enter to run this cell many times and observe how # the robot's belief that it is in each cell (represented by the # size of the corresponding circle) changes as the robot moves. # The true position of the robot is given by the red star. # Run this cell about 15-25 times and observe the results simulation.run(1) simulation.show_beliefs() # If everything is working correctly you should see the beliefs # converge to a single large circle at the same position as the # red star. # # When you are satisfied that everything is working, continue # to the next section
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
aa14f5392fde5332677cace8146fd1a3
Part 3: Identify and Reproduce a Bug Software has bugs. That's okay. A user of your robot called tech support with a complaint "So I was using your robot in a square room and everything was fine. Then I tried loading in a map for a rectangular room and it drove around for a couple seconds and then suddenly stopped working. Fix it!" Now we have to debug. We are going to use a systematic approach. Reproduce the bug Read (and understand) the error message (when one exists) Write a test that triggers the bug. Generate a hypothesis for the cause of the bug. Try a solution. If it fixes the bug, great! If not, go back to step 4. Step 1: Reproduce the bug The user said that rectangular environments seem to be causing the bug. The code below is the same as the code you were working with when you were doing integration testing of your new feature. See if you can modify it to reproduce the bug.
from simulate import Simulation import simulate as sim import helpers reload(localizer) reload(sim) reload(helpers) R = 'r' G = 'g' grid = [ [R,G,G,G,R,R,R], [G,G,R,G,R,G,R], [G,R,G,G,G,G,R], [R,R,G,R,G,G,G], ] blur = 0.001 p_hit = 100.0 simulation = sim.Simulation(grid, blur, p_hit) # remember, the user said that the robot would sometimes drive around for a bit... # It may take several calls to "simulation.run" to actually trigger the bug. simulation.run(5) simulation.show_beliefs() simulation.run(3)
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
d784da21c815ed791f4ba76541773bfe
Step 2: Read and Understand the error message
If you triggered the bug, you should see an error message directly above this cell. The end of that message should say:
IndexError: list index out of range
And just above that you should see something like:
path/to/your/directory/localizer.pyc in move(dy, dx, beliefs, blurring)
     38             new_i = (i + dy ) % width
     39             new_j = (j + dx ) % height
---> 40             new_G[int(new_i)][int(new_j)] = cell
     41     return blur(new_G, blurring)
This tells us that line 40 (in the move function) is causing an IndexError because "list index out of range". If you aren't sure what this means, use Google! Copy and paste IndexError: list index out of range into Google! When I do that, I see something like this: Browse through the top links (often these will come from Stack Overflow) and read what people have said about this error until you are satisfied you understand how it's caused.
Step 3: Write a test that reproduces the bug
This will help you know when you've fixed it and help you make sure you never reintroduce it in the future. You might have to try many potential solutions, so it will be nice to have a single function to call to confirm whether or not the bug is fixed.
# According to the user, sometimes the robot actually does run "for a while" # - How can you change the code so the robot runs "for a while"? # - How many times do you need to call simulation.run() to consistently # reproduce the bug? # Modify the code below so that when the function is called # it consistently reproduces the bug. def test_robot_works_in_rectangle_world(): from simulate import Simulation import simulate as sim import helpers reload(localizer) reload(sim) reload(helpers) R = 'r' G = 'g' grid = [ [R,G,G,G,R,R,R], [G,G,R,G,R,G,R], [G,R,G,G,G,G,R], [R,R,G,R,G,G,G], ] blur = 0.001 p_hit = 100.0 for i in range(1000): simulation = sim.Simulation(grid, blur, p_hit) simulation.run(10) test_robot_works_in_rectangle_world()
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
9a37fa27bbba570a2203377c653a9da5
Step 4: Generate a Hypothesis
In order to have a guess about what's causing the problem, it will be helpful to use some Python debugging tools. The pdb module (Python debugger) will be helpful here!
Setting up the debugger
1. Open localizer.py and uncomment the line at the top that says import pdb
2. Just before the line of code that is causing the bug, new_G[int(new_i)][int(new_j)] = cell, add a new line of code that says pdb.set_trace()
3. Run your test by calling your test function (run the cell below this one)
You should see a text entry box pop up! For now, type c into the box and hit enter to continue program execution. Keep typing c and enter until the bug is triggered again.
test_robot_works_in_rectangle_world()
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
8278bb50f65812e146339c2d95cd71bd
Using the debugger The debugger works by pausing program execution wherever you write pdb.set_trace() in your code. You also have access to any variables which are accessible from that point in your code. Try running your test again. This time, when the text entry box shows up, type new_i and hit enter. You will see the value of the new_i variable show up in the debugger window. Play around with the debugger: find the values of new_j, height, and width. Do they seem reasonable / correct? When you are done playing around, type c to continue program execution. Was the bug triggered? Keep playing until you have a guess about what is causing the bug. Step 5: Write a Fix You have a hypothesis about what's wrong. Now try to fix it. When you're done you should call your test function again. You may want to remove (or comment out) the line you added to localizer.py that says pdb.set_trace() so your test can run without you having to type c into the debugger box.
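For reference, one plausible fix, based on the traceback quoted in Step 2: new_i (a row index) is wrapped with % width and new_j (a column index) with % height, which only works when the grid is square. Below is a sketch of a corrected move, with the surrounding loop reconstructed from the traceback; the actual function in localizer.py may differ in its details:
def move(dy, dx, beliefs, blurring):
    height = len(beliefs)
    width = len(beliefs[0])
    new_G = [[0.0 for _ in range(width)] for _ in range(height)]
    for i, row in enumerate(beliefs):
        for j, cell in enumerate(row):
            new_i = (i + dy) % height   # rows wrap around the number of rows...
            new_j = (j + dx) % width    # ...and columns around the number of columns
            new_G[int(new_i)][int(new_j)] = cell
    return blur(new_G, blurring)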
test_robot_works_in_rectangle_world()
TowDHistogramFilter/TowDHistogramFilter.ipynb
jingr1/SelfDrivingCar
mit
94f2759c2935c59a5b801b7f2f61cbc5
B17001 Poverty Status by Sex by Age For the Poverty Status by Sex by Age table we'll select the columns for males and females who are below poverty and 65 and older. NOTE: if you want to get seniors of a particular race, use tables C17001a-g, the condensed race iterations. The 'C' tables have fewer age ranges, but there is no 'C' table covering all races: there is a C17001a for Whites, a condensed version of B17001a, but there is no C17001 as a condensed version of B17001.
[e for e in b17001.columns if '65 to 74' in str(e) or '75 years' in str(e) ] # Now create a subset dataframe with just the columns we need. b17001s = b17001[['geoid', 'B17001015', 'B17001016','B17001029','B17001030']] b17001s.head()
examples/Pandas Reporter Example.ipynb
CivicKnowledge/metatab-py
bsd-3-clause
0ceef94ca336b440211b807968ffa893
Senior poverty rates Creating the sums for the senior below-poverty rates at the tract level is easy, but there is a serious problem with the results: the numbers are completely unstable. The minimum RSE (relative standard error) is 22%, and the median is about 60%. These are useless results.
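For context on why the numbers blow up: the standard ACS practice (which is presumably what sum_m and add_rse wrap) is to combine the published 90% margins of error in quadrature and then convert the aggregated margin into a relative standard error, so small tract-level counts with sizeable margins inevitably yield huge RSEs. A hedged sketch of those two formulas in plain numpy, on made-up numbers:

```python
import numpy as np

def moe_of_sum(moes):
    # ACS rule of thumb: the margin of error of a sum is the
    # root-sum-of-squares of the component margins.
    return np.sqrt(np.sum(np.square(moes)))

def rse_percent(estimate, moe_90):
    # Standard error = 90% margin / 1.645; RSE = SE / estimate, in percent.
    return (moe_90 / 1.645) / estimate * 100

est = 10 + 12 + 8 + 10                 # hypothetical tract: 40 seniors below poverty
moe = moe_of_sum([15, 15, 15, 15])     # each cell carrying a margin of 15
print(moe, rse_percent(est, moe))      # 30.0  -> roughly a 46% RSE
```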
b17001_65mf = pr.CensusDataFrame() b17001_65mf['geoid'] = b17001['geoid'] b17001_65mf['poverty_65'], b17001_65mf['poverty_65_m90'] = b17001.sum_m('B17001015', 'B17001016','B17001029','B17001030') b17001_65mf.add_rse('poverty_65') b17001_65mf.poverty_65_rse.replace([np.inf, -np.inf], np.nan).dropna().describe()
examples/Pandas Reporter Example.ipynb
CivicKnowledge/metatab-py
bsd-3-clause
b81086be8af442701cdf396cc4f34bfe
Time windows <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Setup
import tensorflow as tf
courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb
tensorflow/examples
apache-2.0
ad90ae2d23818de56efcc82f0ce4f666
Time Windows First, we will train a model to forecast the next step given the previous 20 steps; therefore, we need to create a dataset of 20-step windows for training.
dataset = tf.data.Dataset.range(10) for val in dataset: print(val.numpy()) dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1) for window_dataset in dataset: for val in window_dataset: print(val.numpy(), end=" ") print() dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1, drop_remainder=True) for window_dataset in dataset: for val in window_dataset: print(val.numpy(), end=" ") print() dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(5)) for window in dataset: print(window.numpy()) dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(5)) dataset = dataset.map(lambda window: (window[:-1], window[-1:])) for x, y in dataset: print(x.numpy(), y.numpy()) dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(5)) dataset = dataset.map(lambda window: (window[:-1], window[-1:])) dataset = dataset.shuffle(buffer_size=10) for x, y in dataset: print(x.numpy(), y.numpy()) dataset = tf.data.Dataset.range(10) dataset = dataset.window(5, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(5)) dataset = dataset.map(lambda window: (window[:-1], window[-1:])) dataset = dataset.shuffle(buffer_size=10) dataset = dataset.batch(2).prefetch(1) for x, y in dataset: print("x =", x.numpy()) print("y =", y.numpy()) def window_dataset(series, window_size, batch_size=32, shuffle_buffer=1000): dataset = tf.data.Dataset.from_tensor_slices(series) dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_size + 1)) dataset = dataset.shuffle(shuffle_buffer) dataset = dataset.map(lambda window: (window[:-1], window[-1])) dataset = dataset.batch(batch_size).prefetch(1) return dataset
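A quick usage sketch of the window_dataset helper defined above, on a synthetic series (the values are made up; with window_size=20 the same call yields the 20-step training windows described earlier):

```python
import numpy as np

series = np.arange(100, dtype=np.float32)              # stand-in for a real time series
train_set = window_dataset(series, window_size=20, batch_size=4)

for x_batch, y_batch in train_set.take(1):
    print(x_batch.shape)   # (4, 20) -- 20 past steps per example
    print(y_batch.shape)   # (4,)    -- the next step to forecast
```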
courses/udacity_intro_to_tensorflow_for_deep_learning/l08c04_time_windows.ipynb
tensorflow/examples
apache-2.0
1e9cb722b3d4bff8095c0f6a0ffbd5de
I want to open each of the conf.py files and replace the name of the site with hythsc.lower. The directory /home/wcmckee/ccschol has all the school folders. Need to replace the Demo Name in conf.py with the school's folder name. School names are missing characters - e.g. ardmore
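Before the exploratory loops below, a compact, hedged sketch of the intended operation (the paths and the 'Demo Site' placeholder come from the code below; the slug rules (lowercase, spaces to hyphens, parentheses dropped) mirror those loops, and the sketch assumes the slugged name is also the on-disk folder name):

```python
import os

def school_slug(folder_name):
    # "Ardmore (School)" -> "ardmore-school"
    return folder_name.replace(' ', '-').lower().replace('(', '').replace(')', '')

def set_site_name(schools_dir='/home/wcmckee/ccschol/'):
    for folder in os.listdir(schools_dir):
        conf_path = os.path.join(schools_dir, school_slug(folder), 'conf.py')
        with open(conf_path) as f:
            conf = f.read()
        with open(conf_path, 'w') as f:
            f.write(conf.replace('"Demo Site"', '"' + school_slug(folder) + '"'))
```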
lisschol = os.listdir('/home/wcmckee/ccschol/')

findwat = ('LICENSE = """')

def replacetext(findtext, replacetext):
    # Replace findtext with replacetext in every school's conf.py.
    for lisol in lisschol:
        # Slug the folder name the same way as the loops further down.
        hybaec = lisol.replace(' ', '-').lower().replace('(', '').replace(')', '')
        filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
        f = open(filereaz,'r')
        filedata = f.read()
        f.close()
        newdata = filedata.replace(findtext, '"' + replacetext + '"')
        #print (newdata)
        f = open(filereaz,'w')
        f.write(newdata)
        f.close()

replacetext('LICENSE = """', 'LICENSE = """<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Attribution 4.0 International License" style="border-width:0; margin-bottom:12px;" src="https://i.creativecommons.org/l/by/4.0/88x31.png"></a>"')

licfil = 'LICENSE = """<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Attribution 4.0 International License" style="border-width:0; margin-bottom:12px;" src="https://i.creativecommons.org/l/by/4.0/88x31.png"></a>"'

opwcm = ('/home/wcmckee/github/wcm.com/conf.py')

# Copy the template conf.py into each school's folder, swapping in the school name.
for lisol in lisschol:
    print (lisol)
    rdwcm = open(opwcm, 'r')
    filewcm = rdwcm.read()
    newdata = filewcm.replace('wcmckee', lisol)
    rdwcm.close()
    #print (newdata)
    f = open('/home/wcmckee/ccschol/' + lisol + '/conf.py','w')
    f.write(newdata)
    f.close()

# Reopen the template and print its BLOG_TITLE line.
rdwcm = open(opwcm, 'r')
for rdlin in rdwcm.readlines():
    #print (rdlin)
    if 'BLOG_TITLE' in rdlin:
        print (rdlin)
rdwcm.close()

# Fix the licence line in each school's conf.py.
for lisol in lisschol:
    print (lisol)
    hythsc = (lisol.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '')
    hybaec = hybrac.replace(')', '')
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    f.close()
    newdata = filedata.replace('LICENCE = """', licfil)
    #print (newdata)
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()

# Replace the "Demo Site" placeholder with the school's slugged folder name.
for lisol in lisschol:
    print (lisol)
    hythsc = (lisol.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '')
    hybaec = hybrac.replace(')', '')
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    f.close()
    newdata = filedata.replace('"Demo Site"', '"' + hybaec + '"')
    #print (newdata)
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()
posts/niktrans.ipynb
wcmckee/wcmckee.com
mit
8bd2f6fb03e24b0091f22f31cf09ed89
Perform Nikola build of all the sites in ccschol folder
buildnik = input('Build school sites y/N ')

for lisol in lisschol:
    print (lisol)
    os.chdir('/home/wcmckee/ccschol/' + lisol)
    if 'y' in buildnik:
        os.system('nikola build')

# Unused file handle; the path looks truncated.
#makerst = open('/home/wcmckee/ccs')

# rssch is assumed to be defined earlier in the notebook as a dict keyed by school name.
for rs in rssch.keys():
    hythsc = (rs.replace(' ', '-'))
    hylow = hythsc.lower()
    hybrac = hylow.replace('(', '-')
    hybaec = hybrac.replace(')', '')
    #print (hylow())
    filereaz = ('/home/wcmckee/ccschol/' + hybaec + '/conf.py')
    f = open(filereaz,'r')
    filedata = f.read()
    newdata = filedata.replace("Demo Site", hybaec)
    f.close()
    f = open(filereaz,'w')
    f.write(newdata)
    f.close()
posts/niktrans.ipynb
wcmckee/wcmckee.com
mit
827d135e9703c2e4fcfb3086f70e77b1
Let's build the construction table in order to bend one of the terephthalic acid ligands.
fragment = molecule.get_fragment([(12, 17), (55, 60)]) connection = np.array([[3, 99, 1, 12], [17, 3, 99, 12], [60, 3, 17, 12]]) connection = pd.DataFrame(connection[:, 1:], index=connection[:, 0], columns=['b', 'a', 'd']) c_table = molecule.get_construction_table([(fragment, connection)]) molecule = molecule.loc[c_table.index] zmolecule = molecule.get_zmat(c_table)
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
886e340a7d69b4b2a0a3522c77aeb0ca
This gives the following movement:
zmolecule_symb = zmolecule.copy() zmolecule_symb.safe_loc[3, 'angle'] += theta cc.xyz_functions.view([zmolecule_symb.subs(theta, a).get_cartesian() for a in [-30, 0, 30]])
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
aeb069930747c1d261751df52621e3b7
Gradient for Zmat to Cartesian For the gradients it is very illustrative to compare: $$ f(x + h) \approx f(x) + f'(x) h $$ Here $f(x + h)$ will be zmolecule2 and $h$ will be dist_zmol. The boolean chain argument denotes whether the movement should be chained or not. Bond
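Written out for this case (a hedged restatement of the same first-order expansion, with $X$ the Cartesian coordinates regarded as a function of the Zmat coordinates $q$, matching how the gradient returned by get_grad_cartesian is applied to a displacement in the cells below):

$$ X(q + \Delta q) \approx X(q) + \left.\frac{\partial X}{\partial q}\right|_{q} \, \Delta q $$

Here $q + \Delta q$ corresponds to zmolecule + dist_zmol and the last term to zmolecule.get_grad_cartesian()(dist_zmol).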
dist_zmol1 = zmolecule.copy() r = 3 dist_zmol1.unsafe_loc[:, ['bond', 'angle', 'dihedral']] = 0 dist_zmol1.unsafe_loc[3, 'bond'] = r cc.xyz_functions.view([molecule, molecule + zmolecule.get_grad_cartesian(chain=False)(dist_zmol1), molecule + zmolecule.get_grad_cartesian()(dist_zmol1), (zmolecule + dist_zmol1).get_cartesian()])
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
eadb7c4abfe5663bb8fe03bd1966fd57
Angle
angle = 30 dist_zmol2 = zmolecule.copy() dist_zmol2.unsafe_loc[:, ['bond', 'angle', 'dihedral']] = 0 dist_zmol2.unsafe_loc[3, 'angle'] = angle cc.xyz_functions.view([molecule, molecule + zmolecule.get_grad_cartesian(chain=False)(dist_zmol2), molecule + zmolecule.get_grad_cartesian()(dist_zmol2), (zmolecule + dist_zmol2).get_cartesian()])
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
b7e623495437c482c50eacf73fc5f1e3
Note that the deviation between $f(x + h)$ and $f(x) + h f'(x)$ is not an error in the implementation but a visualisation of the small angle approximation (for small $\theta$, $\sin\theta \approx \theta$ and $\cos\theta \approx 1$, so the linear term captures almost all of the change). The smaller the angle, the better the linearisation. Gradient for Cartesian to Zmat
x_dist = 2 dist_mol = molecule.copy() dist_mol.loc[:, ['x', 'y', 'z']] = 0. dist_mol.loc[13, 'x'] = x_dist zmat_dist = molecule.get_grad_zmat(c_table)(dist_mol)
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
97c06d654de7e12495f2616f3119ec86
It is immediately obvious that the ['bond', 'angle', 'dihedral'] values change only for atoms that are either moved themselves in Cartesian space or that use moved atoms as references.
zmat_dist[(zmat_dist.loc[:, ['bond', 'angle', 'dihedral']] != 0).any(axis=1)]
Tutorial/Gradients.ipynb
mcocdawc/chemcoord
lgpl-3.0
248ff801fd96d3ff50311e9f74803479
2D trajectory interpolation The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time: t which has discrete values of time t[i]. x which has values of the x position at those times: x[i] = x(t[i]). y which has values of the y position at those times: y[i] = y(t[i]). Load those arrays into this notebook and save them as variables x, y and t:
with np.load('trajectory.npz') as data: t = data['t'] x = data['x'] y = data['y'] print(x) assert isinstance(x, np.ndarray) and len(x)==40 assert isinstance(y, np.ndarray) and len(y)==40 assert isinstance(t, np.ndarray) and len(t)==40
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
ddd444d994b98dedcc2926985fdc6a14
Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between ${t_{min},t_{max}}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times.
newt = np.linspace(t.min(), t.max(), 200)

# Build cubic interpolating functions x(t) and y(t), then evaluate them at the new times.
approxx = interp1d(t, x, kind='cubic')
newx = approxx(newt)

approxy = interp1d(t, y, kind='cubic')
newy = approxy(newt)

assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
839f3b1fd180834e537c536812503f9a
Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points: For the interpolated points, use a solid line. For the original points, use circles of a different color and no line. Customize your plot to make it effective and beautiful.
plt.plot(x, y, marker='o', linestyle='', color='red', label='original data')
plt.plot(newx, newy, linestyle='-', label='interpolated')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')

assert True # leave this to grade the trajectory plot
assignments/assignment08/InterpolationEx01.ipynb
LimeeZ/phys292-2015-work
mit
928863bf08273311c262956fb8ba8e2b
Pandas <img src="pandas_logo.png" width="50%" /> Pandas is a library that provides data analysis tools for the Python programming language. You can think of it as Excel on steroids, but in Python. To start off, I've used the meetup API to gather a bunch of data on members of the DataPhilly meetup group. First let's start off by looking at the events we've had over the past few years. I've loaded the data into a pandas DataFrame and stored it in the file events.pkl. A DataFrame is a table similar to an Excel spreadsheet. Let's load it and see what it looks like: DataPhilly events dataset
events_df = pd.read_pickle('events.pkl') events_df = events_df.sort_values(by='time') events_df
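As an aside, a DataFrame doesn't have to come from a file; a tiny hand-made one (toy values, not from the meetup API) shows the same rows-and-columns structure:

```python
toy_df = pd.DataFrame({
    'event_name': ['Intro to Pandas', 'Machine Learning 101'],  # made-up events
    'yes_rsvp_count': [25, 40],
})
toy_df
```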
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
b05b6aa245f53d833ade26307c265b7a
You can access values in a DataFrame column like this:
events_df['yes_rsvp_count']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
594777717e727a44e956fb2943beed21
You can access a row of a DataFrame using iloc:
events_df.iloc[4]
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
d66c1f48e9b4e2a09cf0c30f5c28edc0
We can view the first few rows using the head method:
events_df.head()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
8368f1ee27bbe383edab0b108f3c1989
And similarly the last few using tail:
events_df.tail(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
c37135af06be2d98b8276bcb73c35712
We can see that the yes_rsvp_count contains the number of people who RSVPed yes for each event. First let's look at some basic statistics:
yes_rsvp_count = events_df['yes_rsvp_count'] yes_rsvp_count.sum(), yes_rsvp_count.mean(), yes_rsvp_count.min(), yes_rsvp_count.max()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
93c0d2d5c8f83cbd11a12c56f3a4f15b
When we access a single column of the DataFrame like this we get a Series object which is just a 1-dimensional version of a DataFrame.
type(yes_rsvp_count)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
c01fca488212cccddd2e47db3b95383f
We can use the built-in describe method to print out a lot of useful stats in a nice tabular format:
yes_rsvp_count.describe()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
e88eafc39cc593558d72f82f828690c9
Next I'd like to graph the number of RSVPs over time to see if there are any interesting trends. To do this let's first sum the waitlist_count and yes_rsvp_count columns and make a new column called total_RSVP_count.
events_df['total_RSVP_count'] = events_df['waitlist_count'] + events_df['yes_rsvp_count'] events_df['total_RSVP_count']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
ba95e8ec3295740136bd8b5475940ee0
We can plot these values using the plot method
events_df['total_RSVP_count'].plot()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
c38adee1aa66e51d5e0d4e50f06de910
The plot method utilizes the matplotlib library behind the scenes to draw the plot. This is interesting, but it would be nice to have the dates of the meetups on the X-axis of the plot. To accomplish this, let's convert the time field from a unix epoch timestamp to a python datetime utilizing the apply method and a function.
events_df.head(2) import datetime def get_datetime_from_epoch(epoch): return datetime.datetime.fromtimestamp(epoch/1000.0) events_df['time'] = events_df['time'].apply(get_datetime_from_epoch) events_df['time']
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
4d75ae95d73317fd2a2a6a25c436a6b7
Next let's make the time column the index of the DataFrame using the set_index method and then re-plot our data.
events_df.set_index('time', inplace=True) events_df[['total_RSVP_count']].plot()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
6d31c3fa70cb62bcc4e984e425baaffc
We can also easily plot multiple columns on the same plot.
all_rsvps = events_df[['yes_rsvp_count', 'waitlist_count', 'total_RSVP_count']] all_rsvps.plot(title='Attendance over time')
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
78e2057f272ed06d17d8486435259ebd
DataPhilly members dataset Alright so I'm seeing some interesting trends here. Let's take a look at something different. The Meetup API also provides us access to member info. Let's have a look at the data we have available:
members_df = pd.read_pickle('members.pkl') for column in ['joined', 'visited']: members_df[column] = members_df[column].apply(get_datetime_from_epoch) members_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
46273bd5f56b1983e5caa51f8f5a88e9
You'll notice that I've anonymized the meetup member_id and the member's name. I've also used the Python module SexMachine to infer members' gender based on their first name. I ran SexMachine on the original names before I anonymized them. Let's have a closer look at the gender breakdown of our members:
gender_counts = members_df['gender'].value_counts() gender_counts
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
9c09c19eea49f55bb22a4ddeaeff3fe4
Next let's use the hist method to plot a histogram of membership_count. This is the number of groups each member is in.
members_df['membership_count'].hist(bins=20)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
4f71ad38bcb92c4a2ae0f94ab5c6ec1e
Something looks odd here, so let's check out the value_counts:
members_df['membership_count'].value_counts().head()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
fc37796c0fd6728ed51da2093475845d
Okay so most members are members of 0 meetup groups?! This seems odd! I did a little digging and came up with the answer: members can set their membership details to be private, and then this value will be zero. Let's filter out these members and recreate the histogram.
members_df_non_zero = members_df[members_df['membership_count'] != 0] members_df_non_zero['membership_count'].hist(bins=50)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
03b46edb4f84cb2eb03c8bbefa2b6df9
Okay so most members are only members of a few meetup groups. There are some outliers that are pretty hard to read, so let's try plotting this on a logarithmic scale to see if that helps:
ax = members_df_non_zero['membership_count'].hist(bins=50) ax.set_yscale('log') ax.set_xlim(0, 500)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
f85ee2926fed6d07c2444b82a32e6d82
Let's use a mask to filter out the outliers so we can dig into them a little further:
all_the_meetups = members_df[members_df['membership_count'] > 100] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
4a7988520ca769db514d7b83d90d1b15
The people from Philly might actually be legitimate members, so let's use a compound mask to filter them out as well:
all_the_meetups = members_df[ (members_df['membership_count'] > 100) & (members_df['city'] != 'Philadelphia') ] filtered = all_the_meetups[['membership_count', 'city', 'country', 'state']] filtered.sort_values(by='membership_count', ascending=False)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
509ef6863f1eb851df4148d84a1118c3
That's strange; I don't think we've ever had any members from Berlin, San Francisco, or Jerusalem in attendance :-). The RSVP dataset Moving on, we also have all the events that each member RSVPed to:
rsvps_df = pd.read_pickle('rsvps.pkl') rsvps_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
b924f8348d237a8c0e7bfd2edd6e1cb3