path | concatenated_notebook
---|---
exercises/ex1-DAR/teched2020-INT260_Data_Attribute_Recommendation.ipynb | ###Markdown
Cleaning up a service instance*Back to [table of contents](Table-of-Contents)*To clean all data on the service instance, you can run the following snippet. The code is self-contained and does not require you to execute any of the cells above. However, you will need to have the `key.json` containing a service key in place.You will need to set `CLEANUP_EVERYTHING = True` below to execute the cleanup.**NOTE: This will delete all data on the service instance!**
###Code
CLEANUP_EVERYTHING = False
def cleanup_everything():
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
import json
import os
if not os.path.exists("key.json"):
msg = "key.json is not found. Please follow instructions above to create a service key of"
msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
msg += " this notebook is saved."
print(msg)
raise ValueError(msg)
with open("key.json") as file_handle:
key = file_handle.read()
SERVICE_KEY = json.loads(key)
from sap.aibus.dar.client.model_manager_client import ModelManagerClient
model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)
for deployment in model_manager.read_deployment_collection()["deployments"]:
model_manager.delete_deployment_by_id(deployment["id"])
for model in model_manager.read_model_collection()["models"]:
model_manager.delete_model_by_name(model["name"])
for job in model_manager.read_job_collection()["jobs"]:
model_manager.delete_job_by_id(job["id"])
from sap.aibus.dar.client.data_manager_client import DataManagerClient
data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)
for dataset in data_manager.read_dataset_collection()["datasets"]:
data_manager.delete_dataset_by_id(dataset["id"])
for dataset_schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]:
data_manager.delete_dataset_schema_by_id(dataset_schema["id"])
print("Cleanup done!")
if CLEANUP_EVERYTHING:
print("Cleaning up all resources in this service instance.")
cleanup_everything()
else:
print("Not cleaning up. Set 'CLEANUP_EVERYTHING = True' above and run again.")
###Output
_____no_output_____
###Markdown
Data Attribute Recommendation - TechED 2020 INT260Getting started with the Python SDK for the Data Attribute Recommendation service. Business ScenarioWe will consider a business scenario involving product master data. The creation and maintenance of this product master data requires the careful manual selection of the correct categories for a given product from a pre-defined hierarchy of product categories.In this workshop, we will explore how to automate this tedious manual task with the Data Attribute Recommendation service. This workshop will cover: * Data Upload* Model Training and Deployment* Inference Requests We will work through a basic example of how to achieve these tasks using the [Python SDK for Data Attribute Recommendation](https://github.com/SAP/data-attribute-recommendation-python-sdk). *Note: if you are doing several runs of this notebook on a trial account, you may see errors stating 'The resource can no longer be used. Usage limit has been reached'. It can be beneficial to [clean up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources acquired by an earlier run of the notebook. [Some limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) cannot be reset this way.* Table of Contents* [Exercise 01.1](Exercise-01.1) - Installing the SDK and preparing the service key * [Creating a service instance and key on BTP Trial](Creating-a-service-instance-and-key) * [Installing the SDK](Installing-the-SDK) * [Loading the service key into your Jupyter Notebook](Loading-the-service-key-into-your-Jupyter-Notebook)* [Exercise 01.2](Exercise-01.2) - Uploading the data* [Exercise 01.3](Exercise-01.3) - Training the model* [Exercise 01.4](Exercise-01.4) - Deploying the Model and predicting labels* [Resources](Resources) - Additional reading* [Cleaning up a service instance](Cleaning-up-a-service-instance) - Clean up all resources on the service instance* [Optional Exercises](Optional-Exercises) - Optional exercises RequirementsSee the [README in the Github repository for this workshop](https://github.com/SAP-samples/teched2020-INT260/blob/master/exercises/ex1-DAR/README.md). Exercise 01.1*Back to [table of contents](Table-of-Contents)*In exercise 01.1, we will install the SDK and prepare the service key. Creating a service instance and key on BTP Trial Please log in to your trial account: https://cockpit.eu10.hana.ondemand.com/trial/In the your global account screen, go to the "Boosters" tab:![trial_booster.png](attachment:trial_booster.png)*Boosters are only available on the Trial landscape. If you are using a production environment, please follow this tutorial to manually [create a service instance and a service key](https://developers.sap.com/tutorials/cp-aibus-dar-service-instance.html)*. In the Boosters tab, enter "Data Attribute Recommendation" into the search box. Then, select theservice tile from the search results: ![trial_locate_dar_booster.png](attachment:trial_locate_dar_booster.png) The resulting screen shows details of the booster pack. 
Here, click the "Start" button and wait a few seconds.![trial_start_booster.png](attachment:trial_start_booster.png) Once the booster is finished, click the "go to Service Key" link to obtain your service key.![trial_booster_finished.png](attachment:trial_booster_finished.png) Finally, download the key and save it to disk.![trial_download_key.png](attachment:trial_download_key.png) Installing the SDK The Data Attribute Recommendation SDK is available from the Python package repository. It can be installed with the standard `pip` tool:
###Code
! pip install data-attribute-recommendation-sdk
###Output
_____no_output_____
###Markdown
*Note: If you are not using a Jupyter notebook, but instead a regular Python development environment, we recommend using a Python virtual environment to set up your development environment. Please see [the dedicated tutorial to learn how to install the SDK inside a Python virtual environment](https://developers.sap.com/tutorials/cp-aibus-dar-sdk-setup.html).* Loading the service key into your Jupyter Notebook Once you downloaded the service key from the Cockpit, upload it to your notebook environment. The service key must be uploaded to same directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` is stored.We first navigate to the file browser in Jupyter. On the top of your Jupyter notebook, right-click on the Jupyter logo and open in a new tab.![service_key_main_jupyter_page.png](attachment:service_key_main_jupyter_page.png) **In the file browser, navigate to the directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` notebook file is stored. The service key must reside next to this file.**In the Jupyter file browser, click the **Upload** button (1). In the file selection dialog that opens, select the `defaultKey_*.json` file you downloaded previously from the SAP Cloud Platform Cockpit. Rename the file to `key.json`. Confirm the upload by clicking on the second **Upload** button (2).![service_key_upload.png](attachment:service_key_upload.png) The service key contains your credentials to access the service. Please treat this as carefully as you would treat any password. We keep the service key as a separate file outside this notebook to avoid leaking the secret credentials.The service key is a JSON file. We will load this file once and use the credentials throughout this workshop.
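For orientation, the fields of the service key that this workshop relies on (the service URL and the OAuth client credentials) look roughly like the placeholder sketch below. This is an illustration only, not a real key; actual service keys contain additional fields.

```python
# Placeholder sketch of the key.json structure used in this workshop.
# All values are illustrative; your real service key will differ.
EXAMPLE_SERVICE_KEY = {
    "url": "https://<dar-service-endpoint>",  # base URL of the service API
    "uaa": {
        "url": "https://<subdomain>.authentication.<region>.hana.ondemand.com",
        "clientid": "<client-id>",
        "clientsecret": "<client-secret>",
    },
}
```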
###Code
# First, set up logging so we can see the actions performed by the SDK behind the scenes
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
from pprint import pprint # for nicer output formatting
import json
import os
if not os.path.exists("key.json"):
msg = "key.json is not found. Please follow instructions above to create a service key of"
msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
msg += " this notebook is saved."
print(msg)
raise ValueError(msg)
with open("key.json") as file_handle:
key = file_handle.read()
SERVICE_KEY = json.loads(key)
###Output
_____no_output_____
###Markdown
Summary Exercise 01.1In exercise 01.1, we have covered the following topics:* How to install the Python SDK for Data Attribute Recommendation* How to obtain a service key for the Data Attribute Recommendation service Exercise 01.2*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.2, we will upload our demo dataset to the service. The Dataset Obtaining the Data The dataset we use in this workshop is a CSV file containing product master data. The original data was released by BestBuy, a retail company, under an [open license](https://github.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sampledata-and-license). This makes it ideal for first experiments with the Data Attribute Recommendation service. The dataset can be downloaded directly from Github using the following command:
###Code
! wget -O bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
# If you receive a "command not found" error (i.e. on Windows), try curl instead of wget:
# ! curl -o bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
###Output
_____no_output_____
###Markdown
Let's inspect the data:
###Code
# if you are experiencing an import error here, run the following in a new cell:
# ! pip install pandas
import pandas as pd
df = pd.read_csv("bestBuy.csv")
df.head(5)
print()
print(f"Data has {df.shape[0]} rows and {df.shape[1]} columns.")
###Output
_____no_output_____
###Markdown
The CSV contains the several products. For each product, the description, the manufacturer and the price are given. Additionally, three levels of the products hierarchy are given.The first product, a set of AAA batteries, is located in the following place in the product hierarchy:```level1_category: Connected Home & Housewares |level2_category: Housewares |level3_category: Household Batteries``` We will use the Data Attribute Recommendation service to predict the categories for a given product based on its **description**, **manufacturer** and **price**. Creating the DatasetSchema We first have to describe the shape of our data by creating a DatasetSchema. This schema informs the service about the individual column types found in the CSV. We also describe which are the target columns used for training. These columns will be later predicted. In our case, these are the three category columns.The service currently supports three column types: **text**, **category** and **number**. For prediction, only **category** is currently supported.A DatasetSchema for the BestBuy dataset looks as follows:```json{ "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction",}```We will now upload this DatasetSchema to the Data Attribute Recommendation service. The SDK provides the[`DataManagerClient.create_dataset_schema()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset_schema) method for this purpose.
###Code
from sap.aibus.dar.client.data_manager_client import DataManagerClient
dataset_schema = {
"features": [
{"label": "manufacturer", "type": "CATEGORY"},
{"label": "description", "type": "TEXT"},
{"label": "price", "type": "NUMBER"}
],
"labels": [
{"label": "level1_category", "type": "CATEGORY"},
{"label": "level2_category", "type": "CATEGORY"},
{"label": "level3_category", "type": "CATEGORY"}
],
"name": "bestbuy-category-prediction",
}
data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)
response = data_manager.create_dataset_schema(dataset_schema)
dataset_schema_id = response["id"]
print()
print("DatasetSchema created:")
pprint(response)
print()
print(f"DatasetSchema ID: {dataset_schema_id}")
###Output
_____no_output_____
###Markdown
The API responds with the newly created DatasetSchema resource. The service assigned an ID to the schema. We save this ID in a variable, as we will need it when we upload the data. Uploading the Data to the service The [`DataManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient) class is also responsible for uploading data to the service. This data must fit to an existing DatasetSchema. After uploading the data, the service will validate the Dataset against the DataSetSchema in a background process. The data must be a CSV file which can optionally be `gzip` compressed.We will now upload our `bestBuy.csv` file, using the DatasetSchema which we created earlier.Data upload is a two-step process. We first create the Dataset using [`DataManagerClient.create_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset). Then we can upload data to the Dataset using the [`DataManagerClient.upload_data_to_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_to_dataset) method.
###Code
dataset_resource = data_manager.create_dataset("my-bestbuy-dataset", dataset_schema_id)
dataset_id = dataset_resource["id"]
print()
print("Dataset created:")
pprint(dataset_resource)
print()
print(f"Dataset ID: {dataset_id}")
# Compress file first for a faster upload
! gzip -9 -c bestBuy.csv > bestBuy.csv.gz
###Output
_____no_output_____
###Markdown
Note that the data upload can take a few minutes. Please do not restart the process while the cell is still running.
###Code
# Open in binary mode.
with open('bestBuy.csv.gz', 'rb') as file_handle:
dataset_resource = data_manager.upload_data_to_dataset(dataset_id, file_handle)
print()
print("Dataset after data upload:")
print()
pprint(dataset_resource)
###Output
_____no_output_____
###Markdown
Note that the Dataset status changed from `NO_DATA` to `VALIDATING`.Dataset validation is a background process. The status will eventually change from `VALIDATING` to `SUCCEEDED`.The SDK provides the [`DataManagerClient.wait_for_dataset_validation()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.data_manager_client.DataManagerClient.wait_for_dataset_validation) method to poll for the Dataset validation.
###Code
dataset_resource = data_manager.wait_for_dataset_validation(dataset_id)
print()
print("Dataset after validation has finished:")
print()
pprint(dataset_resource)
###Output
_____no_output_____
###Markdown
If the status is `FAILED` instead of `SUCCEEDED`, then the `validationMessage` will contain details about the validation failure. To better understand the Dataset lifecycle, refer to the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/a9b7429687a04e769dbc7955c6c44265.html). Summary Exercise 01.2In exercise 01.2, we have covered the following topics:* How to create a DatasetSchema* How to upload a Dataset to the serviceYou can find optional exercises related to exercise 01.2 [below](Optional-Exercises-for-01.2). Exercise 01.3*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.3, we will train the model. Training the Model The Dataset is now uploaded and has been validated successfully by the service.To train a machine learning model, we first need to select the correct model template. Selecting the right ModelTemplateThe Data Attribute Recommendation service currently supports two different ModelTemplates:| ID | Name | Description ||--------------------------------------|---------------------------|---------------------------------------------------------------------------|| d7810207-ca31-4d4d-9b5a-841a644fd81f | **Hierarchical template** | Recommended for the prediction of multiple classes that form a hierarchy. || 223abe0f-3b52-446f-9273-f3ca39619d2c | **Generic template** | Generic neural network for multi-label, multi-class classification. || 188df8b2-795a-48c1-8297-37f37b25ea00 | **AutoML template** | Finds the [best traditional machine learning model out of several traditional algorithms](https://blogs.sap.com/2021/04/28/how-does-automl-works-in-data-attribute-recommendation/). Single label only. |We are building a model to predict product hierarchies. The **Hierarchical Template** is correct for this scenario. In this template, the first label in the DatasetSchema is considered the top-level category. Each subsequent label is considered to be further down in the hierarchy. Coming back to our example DatasetSchema:```json{ "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ]}```The first defined label is `level1_category`, which is given more weight during training than `level3_category`.Refer to the [official documentation on ModelTemplates](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e76e8c636974a06967552c05d40e066.html) to learn more. Additional model templates may be added over time, so check back regularly. Starting the training When working with models, we use the [`ModelManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient) class.To start the training, we need the IDs of the dataset and the desired model template. We also have to provide a name for the model.The [`ModelManagerClient.create_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.create_job) method launches the training Job.*Only one model of a given name can exist. If you receive a message stating 'The model name specified is already in use', you either have to remove the job and its associated model first or you have to change the `model_name` variable name below. 
You can also [clean up the entire service instance](Cleaning-up-a-service-instance).*
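If you run into the "model name specified is already in use" message, the sketch below shows one way to free the name again using the deletion methods from the cleanup section. It assumes `model_manager` is constructed as in the next cell and that any deployment of the old model has already been deleted.

```python
# Minimal sketch: free up a model name that is already in use.
old_model_name = "bestbuy-hierarchy-model"
model_manager.delete_model_by_name(old_model_name)
# If the ID of the old training job is still known, remove the job as well:
# model_manager.delete_job_by_id(old_job_id)  # old_job_id is a placeholder
```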
###Code
from sap.aibus.dar.client.model_manager_client import ModelManagerClient
from sap.aibus.dar.client.exceptions import DARHTTPException
model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)
model_template_id = "d7810207-ca31-4d4d-9b5a-841a644fd81f" # hierarchical template
model_name = "bestbuy-hierarchy-model"
job_resource = model_manager.create_job(model_name, dataset_id, model_template_id)
job_id = job_resource['id']
print()
print("Job resource:")
print()
pprint(job_resource)
print()
print(f"ID of submitted Job: {job_id}")
###Output
_____no_output_____
###Markdown
The job is now running in the background. Similar to the DatasetValidation, we have to poll the job until it succeeds.The SDK provides the [`ModelManagerClient.wait_for_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_job) method:
###Code
job_resource = model_manager.wait_for_job(job_id)
print()
print("Job resource after training is finished:")
pprint(job_resource)
###Output
_____no_output_____
###Markdown
To better understand the Training Job lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/0fc40aa077ce4c708c1e5bfc875aa3be.html). IntermissionThe model training will take between 5 and 10 minutes.In the meantime, we can explore the available [resources](Resources) for both the service and the SDK. Inspecting the ModelOnce the training job is finished successfully, we can inspect the model using [`ModelManagerClient.read_model_by_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_by_name).
###Code
model_resource = model_manager.read_model_by_name(model_name)
print()
pprint(model_resource)
###Output
_____no_output_____
###Markdown
In the model resource, the `validationResult` key provides information about model performance. You can also use these metrics to compare performance of different [ModelTemplates](Selecting-the-right-ModelTemplate) or different datasets. Summary Exercise 01.3In exercise 01.3, we have covered the following topics:* How to select the appropriate ModelTemplate* How to train a Model from a previously uploaded DatasetYou can find optional exercises related to exercise 01.3 [below](Optional-Exercises-for-01.3). Exercise 01.4*Back to [table of contents](Table-of-Contents)**To perform this exercise, you need to execute the code in all previous exercises.*In exercise 01.4, we will deploy the model and predict labels for some unlabeled data. Deploying the Model The training job has finished and the model is ready to be deployed. By deploying the model, we create a server process in the background on the Data Attribute Recommendation service which will serve inference requests.In the SDK, the [`ModelManagerClient.create_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlmodule-sap.aibus.dar.client.model_manager_client) method lets us create a Deployment.
###Code
deployment_resource = model_manager.create_deployment(model_name)
deployment_id = deployment_resource["id"]
print()
print("Deployment resource:")
print()
pprint(deployment_resource)
print(f"Deployment ID: {deployment_id}")
###Output
_____no_output_____
###Markdown
*Note: if you are using a trial account and you see errors such as 'The resource can no longer be used. Usage limit has been reached', consider [cleaning up the service instance](Cleaning-up-a-service-instance) to free up limited trial resources.* Similar to the data upload and the training job, model deployment is an asynchronous process. We have to poll the API until the Deployment is in status `SUCCEEDED`. The SDK provides the [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) for this purposes.
###Code
deployment_resource = model_manager.wait_for_deployment(deployment_id)
print()
print("Finished deployment resource:")
print()
pprint(deployment_resource)
###Output
_____no_output_____
###Markdown
Once the Deployment is in status `SUCCEEDED`, we can run inference requests. To better understand the Deployment lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/f473b5b19a3b469e94c40eb27623b4f0.html). *For trial users: the deployment will be stopped after 8 hours. You can restart it by deleting the deployment and creating a new one for your model. The [`ModelManagerClient.ensure_deployment_exists()`](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) method will delete and re-create automatically. Then, you need to poll until the deployment is succeeded using [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) as above.* Executing Inference requests With a single inference request, we can send up to 50 objects to the service to predict the labels. The data send to the service must match the `features` section of the DatasetSchema created earlier. The `labels` defined inside of the DatasetSchema will be predicted for each object and returned as a response to the request.In the SDK, the [`InferenceClient.create_inference_request()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.htmlsap.aibus.dar.client.inference_client.InferenceClient.create_inference_request) method handles submission of inference requests.
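For trial users whose deployment was stopped, here is a minimal sketch of the delete-and-recreate flow mentioned above; it assumes `ensure_deployment_exists()` accepts the model name, as its use together with `wait_for_deployment()` suggests.

```python
# Sketch: recreate an expired deployment and wait until it can serve requests again.
deployment_resource = model_manager.ensure_deployment_exists(model_name)
deployment_resource = model_manager.wait_for_deployment(deployment_resource["id"])
```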
###Code
from sap.aibus.dar.client.inference_client import InferenceClient
inference = InferenceClient.construct_from_service_key(SERVICE_KEY)
objects_to_be_classified = [
{
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
###Output
_____no_output_____
###Markdown
*Note: For trial accounts, you only have a limited number of objects which you can classify.* You can also try to come up with your own example:
###Code
my_own_items = [
{
"features": [
{"name": "manufacturer", "value": "EDIT THIS"},
{"name": "description", "value": "EDIT THIS"},
{"name": "price", "value": "0.00"},
],
},
]
inference_response = inference.create_inference_request(model_name, my_own_items)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
###Output
_____no_output_____
###Markdown
You can also classify multiple objects at once. For each object, the `top_n` parameter determines how many predictions are returned.
###Code
objects_to_be_classified = [
{
"objectId": "optional-identifier-1",
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
{
"objectId": "optional-identifier-2",
"features": [
{"name": "manufacturer", "value": "Eidos"},
{"name": "description", "value": "Unravel a grim conspiracy at the brink of Revolution"},
{"name": "price", "value": "19.99"},
],
},
{
"objectId": "optional-identifier-3",
"features": [
{"name": "manufacturer", "value": "Cadac"},
{"name": "description", "value": "CADAC Grill Plate for Safari Chef Grills: 12\""
+ "cooking surface; designed for use with Safari Chef grills;"
+ "105 sq. in. cooking surface; PTFE nonstick coating;"
+ " 2 grill surfaces"
},
{"name": "price", "value": "39.99"},
],
}
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified, top_n=3)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
###Output
_____no_output_____
###Markdown
We can see that the service now returns the `n-best` predictions for each label as indicated by the `top_n` parameter.In some cases, the predicted category has the special value `nan`. In the `bestBuy.csv` data set, not all records have the full set of three categories. Some records only have a top-level category. The model learns this fact from the data and will occasionally suggest that a record should not have a category.
###Code
# Inspect all video games with just a top-level category entry
video_games = df[df['level1_category'] == 'Video Games']
video_games.loc[df['level2_category'].isna() & df['level3_category'].isna()].head(5)
###Output
_____no_output_____
###Markdown
To learn how to execute inference calls without the SDK just using the underlying RESTful API, see [Inference without the SDK](Inference-without-the-SDK). Summary Exercise 01.4In exercise 01.4, we have covered the following topics:* How to deploy a previously trained model* How to execute inference requests against a deployed modelYou can find optional exercises related to exercise 01.4 [below](Optional-Exercises-for-01.4). Wrapping upIn this workshop, we looked into the following topics:* Installation of the Python SDK for Data Attribute Recommendation* Modelling data with a DatasetSchema* Uploading data into a Dataset* Training a model* Predicting labels for unlabelled dataUsing these tools, we are able to solve the problem of missing Master Data attributes starting from just a CSV file containing training data.Feel free to revisit the workshop materials at any time. The [resources](Resources) section below contains additional reading.If you would like to explore the additional capabilities of the SDK, visit the [optional exercises](Optional-Exercises) below. Cleanup During the course of the workshop, we have created several resources on the Data Attribute Recommendation Service:* DatasetSchema* Dataset* Job* Model* DeploymentThe SDK provides several methods to delete these resources. Note that there are dependencies between objects: you cannot delete a Dataset without deleting the Model beforehand.You will need to set `CLEANUP_SESSION = True` below to execute the cleanup.
###Code
# Clean up all resources created earlier
CLEANUP_SESSION = False
def cleanup_session():
model_manager.delete_deployment_by_id(deployment_id) # this can take a few seconds
model_manager.delete_model_by_name(model_name)
model_manager.delete_job_by_id(job_id)
data_manager.delete_dataset_by_id(dataset_id)
data_manager.delete_dataset_schema_by_id(dataset_schema_id)
print("DONE cleaning up!")
if CLEANUP_SESSION:
print("Cleaning up resources generated in this session.")
cleanup_session()
else:
print("Not cleaning up. Set 'CLEANUP_SESSION = True' above and run again!")
###Output
_____no_output_____
###Markdown
Resources*Back to [table of contents](Table-of-Contents)* SDK Resources* [SDK source code on Github](https://github.com/SAP/data-attribute-recommendation-python-sdk)* [SDK documentation](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/)* [How to obtain support](https://github.com/SAP/data-attribute-recommendation-python-sdk/blob/master/README.mdhow-to-obtain-support)* [Tutorials: Classify Data Records with the SDK for Data Attribute Recommendation](https://developers.sap.com/group.cp-aibus-data-attribute-sdk.html) Data Attribute Recommendation* [SAP Help Portal](https://help.sap.com/viewer/product/Data_Attribute_Recommendation/SHIP/en-US)* [API Reference](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html)* [Tutorials using Postman - interact with the service RESTful API directly](https://developers.sap.com/mission.cp-aibus-data-attribute.html)* [Trial Account Limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html)* [Metering and Pricing](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e093326a2764c298759fcb92c5b0500.html) Addendum Inference without the SDK*Back to [table of contents](Table-of-Contents)* The Data Attribute Service exposes a RESTful API. The SDK we use in this workshop uses this API to interact with the DAR service.For custom integration, you can implement your own client for the API. The tutorial "[Use Machine Learning to Classify Data Records]" is a great way to explore the Data Attribute Recommendation API with the Postman REST client. Beyond the tutorial, the [API Reference] is a comprehensive documentation of the RESTful interface.[Use Machine Learning to Classify Data Records]: https://developers.sap.com/mission.cp-aibus-data-attribute.html[API Reference]: https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.htmlTo demonstrate the underlying API, the next example uses the `curl` command line tool to perform an inference request against the Inference API.The example uses the `jq` command to extract the credentials from the service. The authentication token is retrieved from the `uaa_url` and then used for the inference request.
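The same flow can also be scripted in Python. The sketch below mirrors the curl/jq commands in the next cell using the `requests` package (assumed to be installed); it is not part of the SDK, and the model name is the one trained earlier in this workshop.

```python
# Sketch: call the Data Attribute Recommendation REST API directly with 'requests',
# mirroring the curl/jq example below (token via client credentials, then POST).
import json
import requests

with open("key.json") as f:
    key = json.load(f)

token_url = key["uaa"]["url"] + "/oauth/token?grant_type=client_credentials"
token_response = requests.get(
    token_url,
    auth=(key["uaa"]["clientid"], key["uaa"]["clientsecret"]),
)
bearer_token = token_response.json()["access_token"]

inference_url = key["url"] + "/inference/api/v3/models/bestbuy-hierarchy-model/versions/1"
payload = {
    "objects": [
        {
            "features": [
                {"name": "manufacturer", "value": "Energizer"},
                {"name": "description", "value": "Alkaline batteries; 1.5V"},
                {"name": "price", "value": "5.99"},
            ]
        }
    ]
}
response = requests.post(
    inference_url,
    headers={"Authorization": "Bearer " + bearer_token},
    json=payload,
)
print(json.dumps(response.json(), indent=2))
```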
###Code
# If the following example gives you errors that the jq or curl commands cannot be found,
# you may be able to install them from conda by uncommenting one of the lines below:
#%conda install -q jq
#%conda install -q curl
%%bash -s "$model_name" # Pass the python model_name variable as the first argument to shell script
model_name=$1
echo "Model: $model_name"
key=$(cat key.json)
url=$(echo $key | jq -r .url)
uaa_url=$(echo $key | jq -r .uaa.url)
clientid=$(echo $key | jq -r .uaa.clientid)
clientsecret=$(echo $key | jq -r .uaa.clientsecret)
echo "Service URL: $url"
token_url=${uaa_url}/oauth/token?grant_type=client_credentials
echo "Obtaining token with clientid $clientid from $token_url"
bearer_token=$(curl \
--silent --show-error \
--user $clientid:$clientsecret \
$token_url \
| jq -r .access_token
)
inference_url=${url}/inference/api/v3/models/${model_name}/versions/1
echo "Running inference request against endpoint $inference_url"
echo ""
# We pass the token in the Authorization header.
# The payload for the inference request is passed as
# the body of the POST request below.
# The output of the curl command is piped through `jq`
# for pretty-printing
curl \
--silent --show-error \
--header "Authorization: Bearer ${bearer_token}" \
--header "Content-Type: application/json" \
-XPOST \
${inference_url} \
-d '{
"objects": [
{
"features": [
{
"name": "manufacturer",
"value": "Energizer"
},
{
"name": "description",
"value": "Alkaline batteries; 1.5V"
},
{
"name": "price",
"value": "5.99"
}
]
}
]
}' | jq
###Output
_____no_output_____ |
examples/Train_ppo_cnn+eval_contact-(pretrained).ipynb | ###Markdown
if you wish to set which cores to useaffinity_mask = {4, 5, 7} affinity_mask = {6, 7, 9} affinity_mask = {0, 1, 3} affinity_mask = {2, 3, 5} affinity_mask = {0, 2, 4, 6} pid = 0os.sched_setaffinity(pid, affinity_mask) print("CPU affinity mask is modified to %s for process id 0" % affinity_mask) DEFAULT 'CarRacing-v3' environment values continuos action = (steering_angle, throttle, brake)ACT = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]] discrete actions: center_steering and no gas/brake, steer left, steer right, accel, brake --> actually a good choice, because car_dynamics softens the action's diff for gas and steeringREWARDS reward given each step: step taken, distance to centerline, normalized speed [0-1], normalized steer angle [0-1] reward given on new tile touched: %proportional of advance, %advance/steps_taken reward given at episode end: all tiles touched (track finished), patience or off-raod exceeded, out of bounds, max_steps exceeded reward for obstacles: obstacle hit (each step), obstacle collided (episode end)GYM_REWARD = [ -0.1, 0.0, 0.0, 0.0, 10.0, 0.0, 0, -0, -100, -0, -0, -0 ]STD_REWARD = [ -0.1, 0.0, 0.0, 0.0, 1.0, 0.0, 100, -20, -100, -50, -0, -0 ]CONT_REWARD =[-0.11, 0.1, 0.0, 0.0, 1.0, 0.0, 100, -20, -100, -50, -5, -100 ] see docu for RETURN computation details DEFAULT Environment Parameters (not related to RL Algorithm!)game_color = 1 State (frame) color option: 0 = RGB, 1 = Grayscale, 2 = Green onlyindicators = True show or not bottom Info Panelframes_per_state = 4 stacked (rolling history) Frames on each state [1-inf], latest observation always on first Frameskip_frames = 3 number of consecutive Frames to skip between history saves [0-4]discre = ACT Action discretization function, format [[steer0, throtle0, brake0], [steer1, ...], ...]. None for continuoususe_track = 1 number of times to use the same Track, [1-100]. More than 20 high risk of overfitting!!episodes_per_track = 1 number of evenly distributed starting points on each track [1-20]. Every time you call reset(), the env automatically starts at the next pointtr_complexity = 12 generated Track geometric Complexity, [6-20]tr_width = 45 relative Track Width, [30-50]patience = 2.0 max time in secs without Progress, [0.5-20]off_track = 1.0 max time in secs Driving on Grass, [0.0-5]f_reward = CONT_REWARD Reward Funtion coefficients, refer to Docu for detailsnum_obstacles = 5 Obstacle objects placed on track [0-10]end_on_contact = False Stop Episode on contact with obstacle, not recommended for starting-phase of trainingobst_location = 0 array pre-setting obstacle Location, in %track. Negative value means tracks's left-hand side. 0 for random locationoily_patch = False use all obstacles as Low-friction road (oily patch)verbose = 2
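For reference, this is the CPU-affinity snippet embedded in the notes above as runnable code. `os.sched_setaffinity` is Linux-only, so skip this step on other platforms.

```python
import os

# Restrict the current process (pid 0) to a chosen set of CPU cores.
affinity_mask = {0, 2, 4, 6}
pid = 0
os.sched_setaffinity(pid, affinity_mask)
print("CPU affinity mask is modified to %s for process id 0" % affinity_mask)
```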
###Code
## Choose one agent, see Docu for description
#agent='CarRacing-v0'
#agent='CarRacing-v1'
agent='CarRacing-v3'
# Stop training when the model reaches the reward threshold
callback_on_best = StopTrainingOnRewardThreshold(reward_threshold = 170, verbose=1)
seed = 2000
## SIMULATION param
## Changing these makes world models incompatible!!
game_color = 2
indicators = True
fpst = 4
skip = 3
actions = [[0, 0, 0], [-0.4, 0, 0], [0.4, 0, 0], [0, 0.6, 0], [0, 0, 0.8]] #this is ACT
obst_loc = [6, -12, 25, -50, 75, -37, 62, -87, 95, -29] #track percentage, negative for obstacle to the left-hand side
## Loading drive_pretained model
import pickle
root = 'ppo_cnn_gym-mod_'
file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}'.format(game_color,fpst,skip,indicators,len(actions))
model = PPO2.load(file)
## This model param
use = 6 # number of times to use same track [1,100]
ept = 10 # different starting points on same track [1,20]
patience = 1.0
track_complexity = 12
#REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100]
if agent=='CarRacing-v3':
env = gym.make(agent, seed=seed,
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = use,
episodes_per_track = ept,
tr_complexity = track_complexity,
tr_width = 45,
patience = patience,
off_track = patience,
end_on_contact = True, #learning to avoid obstacles the-hard-way
oily_patch = False,
num_obstacles = 5, #some obstacles
obst_location = obst_loc, #passing fixed obstacle location
# f_reward = REWARD2, #passing a custom reward function
verbose = 2 )
else:
env = gym.make(agent)
env = DummyVecEnv([lambda: env])
## Training on obstacles
model.set_env(env)
batch_size = 256
updates = 700
model.learn(total_timesteps = updates*batch_size, log_interval=1) #, callback=eval_callback)
#Save last updated model
file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format(
game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size)
model.save(file, cloudpickle=True)
param_list=model.get_parameter_list()
env.close()
## This model param #2
use = 6 # number of times to use same track [1,100]
ept = 10 # different starting points on same track [1,20]
patience = 1.0
track_complexity = 12
#REWARD2 = [-0.05, 0.1, 0.0, 0.0, 2.0, 0.0, 100, -20, -100, -50, -5, -100]
seed = 25000
if agent=='CarRacing-v3':
env2 = gym.make(agent, seed=seed,
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = use,
episodes_per_track = ept,
tr_complexity = track_complexity,
tr_width = 45,
patience = patience,
off_track = patience,
end_on_contact = False, # CHANGED
oily_patch = False,
num_obstacles = 5, #some obstacles
obst_location = 0, #using random obstacle location
# f_reward = REWARD2, #passing a custom reward function
verbose = 3 )
else:
env2 = gym.make(agent)
env2 = DummyVecEnv([lambda: env2])
## Training on obstacles
model.set_env(env2)
#batch_size = 384
updates = 1500
## Separate evaluation env
test_freq = 100 #policy updates until evaluation
test_episodes_per_track = 5 #number of starting points on test_track
eval_log = './evals/'
env_test = gym.make(agent, seed=int(3.14*seed),
game_color=game_color,
indicators=indicators,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions, #passing custom actions
use_track = 1, #change test track after 1 ept round
episodes_per_track = test_episodes_per_track,
tr_complexity = 12, #test on a medium complexity track
tr_width = 45,
patience = 2.0,
off_track = 2.0,
end_on_contact = False,
oily_patch = False,
num_obstacles = 5,
obst_location = obst_loc) #passing fixed obstacle location
env_test = DummyVecEnv([lambda: env_test])
eval_callback = EvalCallback(env_test, callback_on_new_best=callback_on_best, #None,
n_eval_episodes=test_episodes_per_track*3, eval_freq=test_freq*batch_size,
best_model_save_path=eval_log, log_path=eval_log, deterministic=True,
render = False)
model.learn(total_timesteps = updates*batch_size, log_interval=1, callback=eval_callback)
#Save last updated model
#file = root+'c{:d}_f{:d}_s{:d}_{}_a{:d}__u{:d}_e{:d}_p{}_bs{:d}'.format(
# game_color,fpst,skip,indicators,len(actions),use,ept,patience,batch_size)
model.save(file+'_II', cloudpickle=True)
param_list=model.get_parameter_list()
env2.close()
env_test.close()
## Enjoy last trained policy
if agent=='CarRacing-v3': #create an independent test environment, almost everything in std/random definition
env3 = gym.make(agent, seed=None,
game_color=game_color,
indicators = True,
frames_per_state=fpst,
skip_frames=skip,
# discre=actions,
use_track = 2,
episodes_per_track = 1,
patience = 5.0,
off_track = 3.0 )
else:
env3 = gym.make(agent)
env3 = DummyVecEnv([lambda: env3])
obs = env3.reset()
print(obs.shape)
done = False
pasos = 0
_states=None
while not done: # and pasos<1500:
action, _states = model.predict(obs, deterministic=True)
obs, reward, done, info = env3.step(action)
env3.render()
pasos+=1
env3.close()
print()
print(reward, done, pasos) #, info)
## Enjoy best eval_policy
obs = env3.reset()
print(obs.shape)
## Load bestmodel from eval
#if not isinstance(model_test, PPO2):
model_test = PPO2.load(eval_log+'best_model', env3)
done = False
pasos = 0
_states=None
while not done: # and pasos<1500:
action, _states = model_test.predict(obs, deterministic=True)
obs, reward, done, info = env3.step(action)
env3.render()
pasos+=1
env3.close()
print()
print(reward, done, pasos)
print(action, _states)
model_test.save(file+'_evalbest', cloudpickle=True)
env2.close()
env3.close()
env_test.close()
print(action, _states)
obs.shape
###Output
_____no_output_____ |
courses/08_Plotly_Bokeh/Fire_Australia19.ipynb | ###Markdown
Do you want to know which fires broke out after September 15, 2019?
###Code
mes = australia_1[(australia_1["acq_date"]>= "2019-09-15")]
mes.head()
mes.describe()
map_sett = folium.Map([-25.274398,133.775136], zoom_start=4)
lat_3 = mes["latitude"].values.tolist()
long_3 = mes["longitude"].values.tolist()
australia_cluster_3 = MarkerCluster().add_to(map_sett)
for lat_3,long_3 in zip(lat_3,long_3):
folium.Marker([lat_3,long_3]).add_to(australia_cluster_3)
map_sett
###Output
_____no_output_____
###Markdown
Play with Folium
###Code
44.4807035, 11.3712528  # latitude, longitude (Bologna, Italy) used as the map center below
import folium
m1 = folium.Map(location=[44.48, 11.37], tiles='openstreetmap', zoom_start=18)
m1.save('map1.html')
m1
m1.save("filename.html")  # folium maps are saved as HTML files
###Output
_____no_output_____ |
presentations/How To - Estimate Pi.ipynb | ###Markdown
Estimating $\pi$ by Sampling Points

By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie

Notebook released under the Creative Commons Attribution 4.0 License.

---

A stochastic way to estimate the value of $\pi$ is to sample points uniformly from a square area. Some of the points will fall within the area of the circle defined by $x^2 + y^2 = 1$; we count what percentage of all points fall within this area, which allows us to estimate the area of the circle and therefore $\pi$.
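Concretely, the points are drawn from the unit square, and the quarter of the circle lying inside that square has area $\pi/4$, so the fraction of points that land inside it estimates that area:

$$\frac{\text{points inside}}{\text{total points}} \approx \frac{\text{area of quarter circle}}{\text{area of square}} = \frac{\pi/4}{1} \quad\Longrightarrow\quad \pi \approx 4\cdot\frac{\text{points inside}}{\text{total points}}$$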
###Code
# Import libraries
import math
import numpy as np
import matplotlib.pyplot as plt
in_circle = 0
outside_circle = 0
n = 10 ** 4
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
for i in range(n):
if X[i]**2 + Y[i]**2 > 1:
outside_circle += 1
else:
in_circle += 1
area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
pi_estimate = area_of_circle = area_of_quarter_circle * 4
pi_estimate
###Output
_____no_output_____
###Markdown
We can visualize the process to see how it works.
###Code
# Plot a circle for reference
circle1=plt.Circle((0,0),1,color='r', fill=False, lw=2)
fig = plt.gcf()
fig.gca().add_artist(circle1)
# Set the axis limits so the circle doesn't look skewed
plt.xlim((0, 1.8))
plt.ylim((0, 1.2))
plt.scatter(X, Y)
###Output
_____no_output_____
###Markdown
Finally, let's see how our estimate gets better as we increase $n$. We'll do this by computing the estimate for $\pi$ at each step and plotting that estimate to see how it converges.
###Code
in_circle = 0
outside_circle = 0
n = 10 ** 3
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
# Make a new array
pi = np.ndarray(n)
for i in range(n):
if X[i]**2 + Y[i]**2 > 1:
outside_circle += 1
else:
in_circle += 1
area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
pi_estimate = area_of_circle = area_of_quarter_circle * 4
pi[i] = pi_estimate
plt.plot(range(n), pi)
plt.xlabel('n')
plt.ylabel('pi estimate')
plt.plot(range(n), [math.pi] * n)
###Output
_____no_output_____ |
Concise_Chit_Chat.ipynb | ###Markdown
Concise Chit ChatGitHub Repository: Code TODO:1. create a DataLoader class for dataset preprocess. (Use tf.data.Dataset inside?)1. Create a PyPI package for easy load cornell movie curpos dataset(?)1. Use PyPI module `embeddings` to load `GLOVES`, or use tfhub to load `GLOVES`?1. How to do a `clip_norm`(or set `clip_value`) in Keras with Eager mode but without `tf.contrib`?1. Better name for variables & functions1. Code clean1. Encapsulate all layers to Model Class: 1. ChitChatEncoder 1. ChitChatDecoder 1. ChitChatModel1. Re-style to follow the book1. ...? Book Todo1. Outlines1. What's seq2seq1. What's word embedding1. 1. Split code into snips1. Write for snips1. Content cleaning and optimizing1. ...? Other1. `keras.callbacks.TensorBoard` instead of `tf.contrib.summary`? - `model.fit(callbacks=[TensorBoard(...)])`1. download url? - http://old.pep.com.cn/gzsx/jszx_1/czsxtbjxzy/qrzptgjzxjc/dzkb/dscl/ config.py
###Code
'''doc'''
# GO for start of the sentence
# DONE for end of the sentence
GO = '\b'
DONE = '\a'
# max words per sentence
MAX_LEN = 20
###Output
_____no_output_____
###Markdown
data_loader.py
###Code
'''
data loader
'''
import gzip
import re
from typing import (
# Any,
List,
Tuple,
)
import tensorflow as tf
import numpy as np
# from .config import (
# GO,
# DONE,
# MAX_LEN,
# )
DATASET_URL = 'https://github.com/huan/concise-chit-chat/releases/download/v0.0.1/dataset.txt.gz'
DATASET_FILE_NAME = 'concise-chit-chat-dataset.txt.gz'
class DataLoader():
'''data loader'''
def __init__(self) -> None:
print('DataLoader', 'downloading dataset from:', DATASET_URL)
dataset_file = tf.keras.utils.get_file(
DATASET_FILE_NAME,
origin=DATASET_URL,
)
print('DataLoader', 'loading dataset from:', dataset_file)
# dataset_file = './data/dataset.txt.gz'
# with open(path, encoding='iso-8859-1') as f:
with gzip.open(dataset_file, 'rt') as f:
self.raw_text = f.read().lower()
self.queries, self.responses \
= self.__parse_raw_text(self.raw_text)
self.size = len(self.queries)
def get_batch(
self,
batch_size=32,
) -> Tuple[List[List[str]], List[List[str]]]:
'''get batch'''
# print('corpus_list', self.corpus)
batch_indices = np.random.choice(
len(self.queries),
size=batch_size,
)
batch_queries = self.queries[batch_indices]
batch_responses = self.responses[batch_indices]
return batch_queries, batch_responses
def __parse_raw_text(
self,
raw_text: str
) -> Tuple[List[List[str]], List[List[str]]]:
'''doc'''
query_list = []
response_list = []
for line in raw_text.strip('\n').split('\n'):
query, response = line.split('\t')
query, response = self.preprocess(query), self.preprocess(response)
query_list.append('{} {} {}'.format(GO, query, DONE))
response_list.append('{} {} {}'.format(GO, response, DONE))
return np.array(query_list), np.array(response_list)
def preprocess(self, text: str) -> str:
'''doc'''
new_text = text
new_text = re.sub('[^a-zA-Z0-9 .,?!]', ' ', new_text)
new_text = re.sub(' +', ' ', new_text)
new_text = re.sub(
'([\w]+)([,;.?!#&-\'\"-]+)([\w]+)?',
r'\1 \2 \3',
new_text,
)
if len(new_text.split()) > MAX_LEN:
new_text = (' ').join(new_text.split()[:MAX_LEN])
match = re.search('[.?!]', new_text)
if match is not None:
idx = match.start()
new_text = new_text[:idx+1]
new_text = new_text.strip().lower()
return new_text
###Output
_____no_output_____
###Markdown
vocabulary.py
###Code
'''doc'''
import re
from typing import (
List,
)
import tensorflow as tf
# from .config import (
# DONE,
# GO,
# MAX_LEN,
# )
class Vocabulary:
'''voc'''
def __init__(self, text: str) -> None:
self.tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
self.tokenizer.fit_on_texts(
[GO, DONE] + re.split(
r'[\s\t\n]',
text,
)
)
# additional 1 for the index 0
self.size = 1 + len(self.tokenizer.word_index.keys())
def texts_to_padded_sequences(
self,
text_list: List[List[str]]
) -> tf.Tensor:
'''doc'''
sequence_list = self.tokenizer.texts_to_sequences(text_list)
padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(
sequence_list,
maxlen=MAX_LEN,
padding='post',
truncating='post',
)
return padded_sequences
    def padded_sequences_to_texts(self, sequence: List[int]) -> str:
        '''convert a padded sequence of word indices back to a text string'''
        index_word = self.tokenizer.index_word
        word_list = [
            index_word[indice]
            for indice in sequence
            if indice != 0
        ]
        return ' '.join(word_list)
###Output
_____no_output_____
###Markdown
model.py
###Code
'''doc'''
import tensorflow as tf
import numpy as np
from typing import (
List,
)
# from .vocabulary import Vocabulary
# from .config import (
# DONE,
# GO,
# MAX_LENGTH,
# )
EMBEDDING_DIM = 300
LATENT_UNIT_NUM = 500
class ChitEncoder(tf.keras.Model):
'''encoder'''
def __init__(
self,
) -> None:
super().__init__()
self.lstm_encoder = tf.keras.layers.CuDNNLSTM(
units=LATENT_UNIT_NUM,
return_state=True,
)
def call(
self,
inputs: tf.Tensor, # shape: [batch_size, max_len, embedding_dim]
training=None,
mask=None,
) -> tf.Tensor:
_, *state = self.lstm_encoder(inputs)
return state # shape: ([latent_unit_num], [latent_unit_num])
class ChatDecoder(tf.keras.Model):
'''decoder'''
def __init__(
self,
voc_size: int,
) -> None:
super().__init__()
self.lstm_decoder = tf.keras.layers.CuDNNLSTM(
units=LATENT_UNIT_NUM,
return_sequences=True,
return_state=True,
)
self.dense = tf.keras.layers.Dense(
units=voc_size,
)
self.time_distributed_dense = tf.keras.layers.TimeDistributed(
self.dense
)
self.initial_state = None
def set_state(self, state=None):
'''doc'''
# import pdb; pdb.set_trace()
self.initial_state = state
def call(
self,
inputs: tf.Tensor, # shape: [batch_size, None, embedding_dim]
training=False,
mask=None,
) -> tf.Tensor:
'''chat decoder call'''
# batch_size = tf.shape(inputs)[0]
# max_len = tf.shape(inputs)[0]
# outputs = tf.zeros(shape=(
# batch_size, # batch_size
# max_len, # max time step
# LATENT_UNIT_NUM, # dimention of hidden state
# ))
# import pdb; pdb.set_trace()
outputs, *states = self.lstm_decoder(inputs, initial_state=self.initial_state)
self.initial_state = states
outputs = self.time_distributed_dense(outputs)
return outputs
class ChitChat(tf.keras.Model):
'''doc'''
def __init__(
self,
vocabulary: Vocabulary,
) -> None:
super().__init__()
self.word_index = vocabulary.tokenizer.word_index
self.index_word = vocabulary.tokenizer.index_word
self.voc_size = vocabulary.size
# [batch_size, max_len] -> [batch_size, max_len, voc_size]
self.embedding = tf.keras.layers.Embedding(
input_dim=self.voc_size,
output_dim=EMBEDDING_DIM,
mask_zero=True,
)
self.encoder = ChitEncoder()
# shape: [batch_size, state]
self.decoder = ChatDecoder(self.voc_size)
# shape: [batch_size, max_len, voc_size]
def call(
self,
inputs: List[List[int]], # shape: [batch_size, max_len]
teacher_forcing_targets: List[List[int]]=None, # shape: [batch_size, max_len]
training=None,
mask=None,
) -> tf.Tensor: # shape: [batch_size, max_len, embedding_dim]
'''call'''
batch_size = tf.shape(inputs)[0]
inputs_embedding = self.embedding(tf.convert_to_tensor(inputs))
state = self.encoder(inputs_embedding)
self.decoder.set_state(state)
if training:
teacher_forcing_targets = tf.convert_to_tensor(teacher_forcing_targets)
teacher_forcing_embeddings = self.embedding(teacher_forcing_targets)
# outputs[:, 0, :].assign([self.__go_embedding()] * batch_size)
batch_go_embedding = tf.ones([batch_size, 1, 1]) * [self.__go_embedding()]
batch_go_one_hot = tf.ones([batch_size, 1, 1]) * [tf.one_hot(self.word_index[GO], self.voc_size)]
outputs = batch_go_one_hot
output = self.decoder(batch_go_embedding)
for t in range(1, MAX_LEN):
outputs = tf.concat([outputs, output], 1)
if training:
target = teacher_forcing_embeddings[:, t, :]
decoder_input = tf.expand_dims(target, axis=1)
else:
decoder_input = self.__indice_to_embedding(tf.argmax(output))
output = self.decoder(decoder_input)
return outputs
    def predict(self, inputs: List[int], temperature=1.) -> str:
        '''predict a response string (space-joined words) for a padded input sequence'''
outputs = self([inputs])
outputs = tf.squeeze(outputs)
word_list = []
for t in range(1, MAX_LEN):
output = outputs[t]
indice = self.__logit_to_indice(output, temperature=temperature)
word = self.index_word[indice]
if indice == self.word_index[DONE]:
break
word_list.append(word)
return ' '.join(word_list)
def __go_embedding(self) -> tf.Tensor:
return self.embedding(
tf.convert_to_tensor(self.word_index[GO]))
def __logit_to_indice(
self,
inputs,
temperature=1.,
) -> int:
'''
[vocabulary_size]
convert one hot encoding to indice with temperature
'''
inputs = tf.squeeze(inputs)
prob = tf.nn.softmax(inputs / temperature).numpy()
indice = np.random.choice(self.voc_size, p=prob)
return indice
def __indice_to_embedding(self, indice: int) -> tf.Tensor:
tensor = tf.convert_to_tensor([[indice]])
return self.embedding(tensor)
###Output
_____no_output_____
###Markdown
Train Tensor Board[Quick guide to run TensorBoard in Google Colab](https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/)`tensorboard` vs `tensorboard/` ?
###Code
LOG_DIR = '/content/data/tensorboard/'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
# Install
! npm install -g localtunnel
# Tunnel port 6006 (TensorBoard assumed running)
get_ipython().system_raw('lt --port 6006 >> url.txt 2>&1 &')
# Get url
! cat url.txt
'''train'''
import tensorflow as tf
# from chit_chat import (
# ChitChat,
# DataLoader,
# Vocabulary,
# )
tf.enable_eager_execution()
data_loader = DataLoader()
vocabulary = Vocabulary(data_loader.raw_text)
chitchat = ChitChat(vocabulary=vocabulary)
def loss(model, x, y) -> tf.Tensor:
'''doc'''
weights = tf.cast(
tf.not_equal(y, 0),
tf.float32,
)
prediction = model(
inputs=x,
teacher_forcing_targets=y,
training=True,
)
# implment the following contrib function in a loop ?
# https://stackoverflow.com/a/41135778/1123955
# https://stackoverflow.com/q/48025004/1123955
return tf.contrib.seq2seq.sequence_loss(
prediction,
tf.convert_to_tensor(y),
weights,
)
def grad(model, inputs, targets):
'''doc'''
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, model.variables)
def train() -> int:
'''doc'''
learning_rate = 1e-3
num_batches = 8000
batch_size = 128
print('Dataset size: {}, Vocabulary size: {}'.format(
data_loader.size,
vocabulary.size,
))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
root = tf.train.Checkpoint(
optimizer=optimizer,
model=chitchat,
optimizer_step=tf.train.get_or_create_global_step(),
)
root.restore(tf.train.latest_checkpoint('./data/save'))
print('checkpoint restored.')
writer = tf.contrib.summary.create_file_writer('./data/tensorboard')
writer.set_as_default()
global_step = tf.train.get_or_create_global_step()
for batch_index in range(num_batches):
global_step.assign_add(1)
queries, responses = data_loader.get_batch(batch_size)
encoder_inputs = vocabulary.texts_to_padded_sequences(queries)
decoder_outputs = vocabulary.texts_to_padded_sequences(responses)
grads = grad(chitchat, encoder_inputs, decoder_outputs)
optimizer.apply_gradients(
grads_and_vars=zip(grads, chitchat.variables)
)
if batch_index % 10 == 0:
print("batch %d: loss %f" % (batch_index, loss(
chitchat, encoder_inputs, decoder_outputs).numpy()))
root.save('./data/save/model.ckpt')
print('checkpoint saved.')
with tf.contrib.summary.record_summaries_every_n_global_steps(1):
# your model code goes here
tf.contrib.summary.scalar('loss', loss(
chitchat, encoder_inputs, decoder_outputs).numpy())
# print('summary had been written.')
return 0
def main() -> int:
'''doc'''
return train()
main()
#! rm -fvr data/tensorboard
# ! pwd
# ! rm -frv data/save
# ! rm -fr /content/data/tensorboard
# ! kill 2823
# ! kill -9 2823
# ! ps axf | grep lt
! cat url.txt
###Output
your url is: https://bright-fox-51.localtunnel.me
###Markdown
chat.py
###Code
'''train'''
# import tensorflow as tf
# from chit_chat import (
# ChitChat,
# DataLoader,
# Vocabulary,
# DONE,
# GO,
# )
# tf.enable_eager_execution()
def main() -> int:
'''chat main'''
data_loader = DataLoader()
vocabulary = Vocabulary(data_loader.raw_text)
print('Dataset size: {}, Vocabulary size: {}'.format(
data_loader.size,
vocabulary.size,
))
chitchat = ChitChat(vocabulary)
checkpoint = tf.train.Checkpoint(model=chitchat)
checkpoint.restore(tf.train.latest_checkpoint('./data/save'))
print('checkpoint restored.')
return cli(chitchat, vocabulary=vocabulary, data_loader=data_loader)
def cli(chitchat: ChitChat, data_loader: DataLoader, vocabulary: Vocabulary):
'''command line interface'''
index_word = vocabulary.tokenizer.index_word
word_index = vocabulary.tokenizer.word_index
query = ''
while True:
try:
# Get input sentence
query = input('> ').lower()
# Check if it is quit case
if query == 'q' or query == 'quit':
break
# Normalize sentence
query = data_loader.preprocess(query)
query = '{} {} {}'.format(GO, query, DONE)
# Evaluate sentence
query_sequence = vocabulary.texts_to_padded_sequences([query])[0]
response_sequence = chitchat.predict(query_sequence, 1)
# Format and print response sentence
response_word_list = [
index_word[indice]
for indice in response_sequence
if indice != 0 and indice != word_index[DONE]
]
print('Bot:', ' '.join(response_word_list))
except KeyError:
print("Error: Encountered unknown word.")
main()
! cat /proc/cpuinfo
###Output
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU @ 2.30GHz
stepping : 0
microcode : 0x1
cpu MHz : 2299.998
cache size : 46080 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 4599.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU @ 2.30GHz
stepping : 0
microcode : 0x1
cpu MHz : 2299.998
cache size : 46080 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 4599.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
|
community/awards/teach_me_qiskit_2018/quantum_machine_learning/1_K_Means/Quantum K-Means Algorithm.ipynb | ###Markdown
Trusted Notebook" width="500 px" align="left"> _*Quantum K-Means algorithm*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.*** Contributors Shan Jin, Xi He, Xiaokai Hou, Li Sun, Dingding Wen, Shaojun Wu and Xiaoting Wang$^{1}$1. Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China,Chengdu, China,610051*** IntroductionClustering algorithm is a typical unsupervised learning algorithm, which is mainly used to automatically classify similar samples into one category.In the clustering algorithm, according to the similarity between the samples, the samples are divided into different categories. For different similarity calculation methods, different clustering results will be obtained. The commonly used similarity calculation method is the Euclidean distance method.What we want to show is the quantum K-Means algorithm. The K-Means algorithm is a distance-based clustering algorithm that uses distance as an evaluation index for similarity, that is, the closer the distance between two objects is, the greater the similarity. The algorithm considers the cluster to be composed of objects that are close together, so the compact and independent cluster is the ultimate target. Experiment designThe implementation of the quantum K-Means algorithm mainly uses the swap test to compare the distances among the input data points. Select K points randomly from N data points as centroids, measure the distance from each point to each centroid, and assign it to the nearest centroid- class, recalculate centroids of each class that has been obtained, and iterate 2 to 3 steps until the new centroid is equal to or less than the specified threshold, and the algorithm ends. In our example, we selected 6 data points, 2 centroids, and used the swap test circuit to calculate the distance. Finally, we obtained two clusters of data points.$|0\rangle$ is an auxiliary qubit, through left $H$ gate, it will be changed to $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Then under the control of $|1\rangle$, the circuit will swap two vectors $|x\rangle$ and $|y\rangle$. Finally, we get the result at the right end of the circuit:$$|0_{anc}\rangle |x\rangle |y\rangle \rightarrow \frac{1}{2}|0_{anc}\rangle(|xy\rangle + |yx\rangle) + \frac{1}{2}|1_{anc}\rangle(|xy\rangle - |yx\rangle)$$If we measure auxiliary qubit alone, then the probability of final state in the ground state $|1\rangle$ is:$$P(|1_{anc}\rangle) = \frac{1}{2} - \frac{1}{2}|\langle x | y \rangle|^2$$If we measure auxiliary qubit alone, then the probability of final state in the ground state $|1\rangle$ is:$$Euclidean \ distance = \sqrt{(2 - 2|\langle x | y \rangle|)}$$So, we can see that the probability of measuring $|1\rangle$ has positive correlation with the Euclidean distance.The schematic diagram of quantum K-Means is as the follow picture.[[1]](cite) To make our algorithm can be run using qiskit, we design a more detailed circuit to achieve our algorithm. | Quantum K-Means circuit Data pointspoint numthetaphilamxy10.01pipi0.7106330.70356220.02pipi0.7141420.730.03pipi0.7176330.69642140.04pipi0.7211070.69282450.05pipi0.7245620.6892161.31pipi0.8868110.46213271.32pipi0.8891110.45769281.33pipi0.8913880.45324191.34pipi0.8936430.448779101.35pipi0.8958760.444305 Quantum K-Means algorithm program
###Code
# import math lib
from math import pi
# import Qiskit
from qiskit import Aer, IBMQ, execute
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
# To use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
In this section we import the packages (qiskit, math) used by the following code. We show our algorithm on the qasm simulator; if you need to run it on a real quantum computer, please remove the comment marker in front of "import Qconfig".
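Alternatively, a hedged sketch of targeting IBM Q hardware through the newer `IBMQ` provider interface (it assumes an IBM Quantum account token has already been saved with `IBMQ.save_account(...)`; the backend name is only illustrative):

```python
IBMQ.load_account()
provider = IBMQ.get_provider()
backend = provider.get_backend('ibmq_qasm_simulator')
```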
###Code
theta_list = [0.01, 0.02, 0.03, 0.04, 0.05, 1.31, 1.32, 1.33, 1.34, 1.35]
###Output
_____no_output_____
###Markdown
Here we use the number pi from the math lib, because the u3 gate needs it, and we define a list of theta parameters that will be passed to the u3 gate. As above, if you want to run on a real quantum computer, remove the comment marker and configure your local Qconfig.py file.
###Code
# create Quantum Register called "qr" with 5 qubits
qr = QuantumRegister(5, name="qr")
# create Classical Register called "cr" with 5 bits
cr = ClassicalRegister(5, name="cr")
# Creating Quantum Circuit called "qc" involving your Quantum Register "qr"
# and your Classical Register "cr"
qc = QuantumCircuit(qr, cr, name="k_means")
#Define a loop to compute the distance between each pair of points
for i in range(9):
for j in range(1,10-i):
# Set the parameter theta for the two different points
theta_1 = theta_list[i]
theta_2 = theta_list[i+j]
#Achieve the quantum circuit via qiskit
qc.h(qr[2])
qc.h(qr[1])
qc.h(qr[4])
qc.u3(theta_1, pi, pi, qr[1])
qc.u3(theta_2, pi, pi, qr[4])
qc.cswap(qr[2], qr[1], qr[4])
qc.h(qr[2])
qc.measure(qr[2], cr[2])
qc.reset(qr)
job = execute(qc, backend=backend, shots=1024)
result = job.result()
print(result)
print('theta_1:' + str(theta_1))
print('theta_2:' + str(theta_2))
# print( result.get_data(qc))
plot_histogram(result.get_counts())
###Output
COMPLETED
theta_1:0.01
theta_2:0.02
|
run/monitor-flir-service.ipynb | ###Markdown
Install and monitor the FLIR camera service

Install
###Code
! sudo cp flir-server.service /etc/systemd/system/flir-server.service
###Output
_____no_output_____
###Markdown
Start the service
###Code
! sudo systemctl start flir-server.service
###Output
_____no_output_____
###Markdown
Stop the service
###Code
! sudo systemctl stop flir-server.service
###Output
Warning: The unit file, source configuration file or drop-ins of flir-server.service changed on disk. Run 'systemctl daemon-reload' to reload units.
###Markdown
Enable it so that it starts on boot
###Code
! sudo systemctl enable flir-server.service # enable at boot
###Output
_____no_output_____
###Markdown
Disable it so that it does not start on boot
###Code
! sudo systemctl disable flir-server.service # do not start at boot
###Output
_____no_output_____
###Markdown
To show status of the service
###Code
! sudo systemctl status flir-server.service
###Output
● flir-server.service - FLIR-camera server service
   Loaded: loaded (/etc/systemd/system/flir-server.service; enabled; vendor pres
   Active: active (running) since Tue 2020-03-24 07:53:04 NZDT; 20min ago
 Main PID: 765 (python)
    Tasks: 17 (limit: 4915)
   CGroup: /system.slice/flir-server.service
           └─765 /home/rov/.virtualenvs/flir/bin/python -u run/flir-server.py

Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "Gain.SetValue(6)"
Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevelSelector.Se
Mar 24 07:53:06 rov-UP python[765]: 19444715 - executing: "BlackLevel.SetValue(0
Mar 24 07:53:07 rov-UP python[765]: 19444715 - executing: "GammaEnable.SetValue(
Mar 24 07:53:07 rov-UP python[765]: Starting : FrontLeft
Mar 24 07:53:17 rov-UP python[765]: Stopping FrontLeft due to inactivity.
Mar 24 07:54:12 rov-UP python[765]: Starting : FrontLeft
Mar 24 07:54:26 rov-UP python[765]: Stopping FrontLeft due to inactivity.
Mar 24 07:54:33 rov-UP python[765]: Starting : FrontLeft
Mar 24 07:54:48 rov-UP python[765]: Stopping FrontLeft due to inactivity.
###Markdown
Install and monitor the FLIR camera service

Install
###Code
! sudo cp flir-server.service /etc/systemd/system/flir-server.service
###Output
_____no_output_____
###Markdown
Start the service
###Code
! sudo systemctl start flir-server.service
! sudo systemctl daemon-reload
###Output
_____no_output_____
###Markdown
Stop the service
###Code
! sudo systemctl stop flir-server.service
###Output
_____no_output_____
###Markdown
Disable it so that it does not start on boot
###Code
! sudo systemctl disable flir-server.service # do not start at boot
###Output
_____no_output_____
###Markdown
Enable it so that it starts on boot
###Code
! sudo systemctl enable flir-server.service # enable at boot
###Output
_____no_output_____
###Markdown
To show status of the service
###Code
! sudo systemctl --no-pager status flir-server.service
! sudo systemctl --no-pager status jupyter.service
###Output
Unit jupyter.service could not be found.
|
Problem_3.ipynb | ###Markdown
###Code
import math
def f(x):
return(math.exp(x)) # exponential function e**x
a = -1
b = 1
n = 10
h = (b-a)/n #Width of Trapezoid
S = h * (f(a)+f(b)) #Value of summation
for i in range(1,n):
S += f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
###Output
Integral = 2.1731
###Markdown
###Code
import math
def f(x):
return(math.exp(x)) # Define the exponential function
a = -1
b = 1
n = 10
h = (b-a)/n # Width of the trapezoid
S = h * (f(a)+f(b)) # Beginning value of summation
for i in range(1,n):
S += f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
###Output
Integral = 2.1731
###Markdown
###Code
import math
def f(x):
return(math.exp(x)) # defines the exponential function
a = -1
b = 1
n = 10
h = (b-a)/n #width of the trapezoid
S = h * (f(a)+f(b)) #starting value of summation
for i in range(1,n):
S += f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
###Output
Integral = 2.1731
###Markdown
###Code
import math
def f(x):
return(math.exp(x)) # define the exponential function
a = -1
b = 1
n = 10
h = (b-a)/n # Width of the trapezoid
S = h * (f(a)+f(b)) # Beginning value of summation
for i in range(1,n):
S += f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
from math import e
def f(x): return e**x
a = -1
b = 1
n = 11
h = (b-a)/n
S = h*(f(a)+f(b))
for i in range(1,n):
S += f(a+i*h)
Integral=h*S
print('Integral = %.1f' %Integral)
###Output
_____no_output_____
###Markdown
Problem 3. Create a Python program that integrates the function f(x) = e^x over the interval from x1 = -1 to x2 = 1. Save your program to your repository and send your GitHub link here. (30 points)
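For reference, the exact value follows from the antiderivative of e^x and is the target the trapezoidal approximation should approach as n grows:

$$\int_{-1}^{1} e^x\,dx = e - e^{-1} \approx 2.3504$$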
###Code
import math
def f(x):
return(math.exp(x))
a = -1
b = 1
n = 10
h = (b-a)/n
S = h* (f(a)+f(b))
for i in range(1,n):
S+=f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
###Output
Integral = 2.1731
|
eda/hyper-parameter_tuning/random_forest-Level0.ipynb | ###Markdown
Get Training Data
###Code
# get training data
train_df = pd.read_csv(os.path.join(ROOT_DIR,DATA_DIR,FEATURE_SET,'train.csv.gz'))
X_train = train_df.drop(ID_VAR + [TARGET_VAR],axis=1)
y_train = train_df.loc[:,TARGET_VAR]
X_train.shape
y_train.shape
y_train[:10]
###Output
_____no_output_____
###Markdown
Setup pipeline for hyper-parameter tuning
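For orientation, a purely hypothetical example of the kind of grid `PARAM_GRID` could hold for the randomized search further down (the real `PARAM_GRID` and `N_ITER` are defined earlier in the notebook); keys are prefixed with the pipeline step name `this_model` used below.

```python
PARAM_GRID = {
    'this_model__n_estimators': [100, 300, 500],
    'this_model__max_depth': [None, 8, 16],
    'this_model__min_samples_leaf': [1, 5, 10],
}
N_ITER = 10
```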
###Code
# set up pipeline
pipe = Pipeline([('this_model',ThisModel(n_jobs=-1))])
###Output
_____no_output_____
###Markdown
`this_scorer = make_scorer(lambda y, y_hat: np.sqrt(mean_squared_error(y, y_hat)), greater_is_better=False)`
###Code
def kag_rmsle(y,y_hat):
return np.sqrt(mean_squared_error(y,y_hat))
this_scorer = make_scorer(kag_rmsle, greater_is_better=False)
grid_search = RandomizedSearchCV(pipe,
param_distributions=PARAM_GRID,
scoring=this_scorer,cv=5,
n_iter=N_ITER,
verbose=2,
n_jobs=1,
refit=False)
grid_search.fit(X_train,y_train)
grid_search.best_params_
grid_search.best_score_
df = pd.DataFrame(grid_search.cv_results_).sort_values('rank_test_score')
df
hyper_parameters = dict(FeatureSet=FEATURE_SET,cv_run=df)
with open(os.path.join(CONFIG['ROOT_DIR'],'eda','hyper-parameter_tuning',MODEL_ALGO),'wb') as f:
pickle.dump(hyper_parameters,f)
###Output
_____no_output_____ |
05-statistics.ipynb | ###Markdown
Statistics

**Quick intro to the following packages**

- `hepstats`.

I will not discuss here the `pyhf` package, which is very niche. Please refer to the [GitHub repository](https://github.com/scikit-hep/pyhf) or related material at https://scikit-hep.org/resources.

**`hepstats` - statistics tools and utilities**

The package contains 2 submodules:

- `hypotests`: provides tools to do hypothesis tests such as discovery test and computations of upper limits or confidence intervals.
- `modeling`: includes the Bayesian Block algorithm that can be used to improve the binning of histograms.

Note: feel free to complement the introduction below with the several tutorials available from the [GitHub repository](https://github.com/scikit-hep/hepstats).

**1. Adaptive binning determination**

The Bayesian Block algorithm produces histograms that accurately represent the underlying distribution while being robust to statistical fluctuations.
###Code
import numpy as np
import matplotlib.pyplot as plt
from hepstats.modeling import bayesian_blocks
data = np.append(np.random.laplace(size=10000), np.random.normal(5., 1., size=15000))
bblocks = bayesian_blocks(data)
plt.hist(data, bins=1000, label='Fine Binning', density=True)
plt.hist(data, bins=bblocks, label='Bayesian Blocks', histtype='step', linewidth=2, density=True)
plt.legend(loc=2);
###Output
_____no_output_____ |
wgan_experiment/WGAN_experiment.ipynb | ###Markdown
Let's look at:

- Number of labels per image (histogram)
- Quality score per image for images with multiple labels (sigmoid?)
###Code
import csv
from itertools import islice
from collections import defaultdict
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torchvision
import numpy as np
CSV_PATH = 'wgangp_data.csv'
realness = {}
# real_votes = defaultdict(int)
# fake_votes = defaultdict(int)
total_votes = defaultdict(int)
correct_votes = defaultdict(int)
with open(CSV_PATH) as f:
dictreader = csv.DictReader(f)
for line in dictreader:
img_name = line['img']
assert(line['realness'] in ('True', 'False'))
assert(line['correctness'] in ('True', 'False'))
realness[img_name] = line['realness'] == 'True'
if line['correctness'] == 'True':
correct_votes[img_name] += 1
total_votes[img_name] += 1
pdx = pd.read_csv(CSV_PATH)
pdx
pdx[pdx.groupby('img').count() > 50]
pdx
#df.img
# print(df.columns)
# print(df['img'])
# How much of the time do people guess "fake"? Slightly more than half!
pdx[pdx.correctness != pdx.realness].count()/pdx.count()
# How much of the time do people guess right? 94.4%
pdx[pdx.correctness].count()/pdx.count()
#90.3% of the time, real images are correctly labeled as real
pdx[pdx.realness][pdx.correctness].count()/pdx[pdx.realness].count()
#98.5% of the time, fake images are correctly labeled as fake
pdx[~pdx.realness][pdx.correctness].count()/pdx[~pdx.realness].count()
len(total_votes.values())
img_dict = {img: [realness[img], correct_votes[img], total_votes[img], correct_votes[img]/total_votes[img]] for img in realness }
# print(img_dict.keys())
#img_dict['celeba500/005077_crop.jpg']
plt.hist([v[3] for k,v in img_dict.items() if 'celeb' in k])
def getVotesDict(img_dict):
votes_dict = defaultdict(int)
for img in total_votes:
votes_dict[img_dict[img][2]] += 1
return votes_dict
votes_dict = getVotesDict(img_dict)
for i in sorted(votes_dict.keys()):
print(i, votes_dict[i])
selected_img_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] > 10}
less_than_50_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] < 10}
imgs_over_50 = list(selected_img_dict.keys())
# print(len(selected_img_dict))
# print(imgs_over_50)
pdx_50 = pdx[pdx.img.apply(lambda x: x in imgs_over_50)]
len(pdx_50)
pdx_under_50 = pdx[pdx.img.apply(lambda x: x not in imgs_over_50)]
len(pdx_under_50)
len(pdx_under_50[pdx_under_50.img.apply(lambda x: 'wgan' not in x)])
correctness = sorted([value[3] for key, value in selected_img_dict.items()])
print(correctness)
plt.hist(correctness)
plt.show()
correctness = sorted([value[3] for key, value in less_than_50_dict.items()])
# print(correctness)
plt.hist(correctness)
plt.show()
ct = []
# selected_img = [img in total_votes.keys() if total_votes[img] > 1 ]
discriminator = torch.load('discriminator.pt', map_location='cpu')
# torch.load_state_dict('discriminator.pt')
discriminator(torch.zeros(64,64,3))
###Output
_____no_output_____ |
Merged Jupyter Notebooks Dataset
Introduction
This dataset is a transformed version of the Jupyter Code-Text Pairs dataset. The original dataset contains markdown, code, and output pairs extracted from Jupyter notebooks. This transformation merges these components into a single, cohesive format that resembles a Jupyter notebook, making it easier to analyze and understand the flow of information.
Dataset Details
Source
The original dataset is sourced from the Hugging Face Hub, specifically the bigcode/jupyter-code-text-pairs dataset. It contains pairs of markdown, code, and output from Jupyter notebooks.
Transformation Process
Using the flexibility and efficiency of DuckDB, I processed the entire dataset without the need for heavy hardware. DuckDB's ability to handle large datasets efficiently allowed me to concatenate the markdown, code, and output for each notebook path into a single string, simulating the structure of a Jupyter notebook.
The transformation was performed using the following DuckDB query:
import duckdb
#Connect to a new DuckDB database
new_db = duckdb.connect('merged_notebooks.db')
#Query to concatenate markdown, code, and output
query = """
SELECT path,
STRING_AGG(CONCAT('###Markdown\n', markdown, '\n###Code\n', code, '\n###Output\n', output), '\n') AS concatenated_notebook
FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
GROUP BY path
"""
#Execute the query and create a new table
new_db.execute(f"CREATE TABLE concatenated_notebooks AS {query}")
Usage
To replicate the transformation or explore the original dataset, you can download it using the following command:
git clone https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs
Once downloaded, you can use the provided DuckDB query to process the data as needed.
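A minimal usage sketch for reading the merged table back out of `merged_notebooks.db` (table and column names follow the query above):

```python
import duckdb

con = duckdb.connect('merged_notebooks.db', read_only=True)
path, notebook = con.execute(
    "SELECT path, concatenated_notebook FROM concatenated_notebooks LIMIT 1"
).fetchone()
print(path)            # notebook path
print(notebook[:500])  # first 500 characters of the merged notebook text
```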
Conclusion
This dataset provides a more integrated view of Jupyter notebooks by merging markdown, code, and output into a single format. The use of DuckDB demonstrates its capability to handle large datasets efficiently, making it an excellent tool for data transformation tasks.