markdown (stringlengths 0-37k) | code (stringlengths 1-33.3k) | path (stringlengths 8-215) | repo_name (stringlengths 6-77) | license (stringclasses, 15 values) | hash (stringlengths 32)
---|---|---|---|---|---
Experiment with window size
The code below returns the length of the warmup phase, simulated across several seeds. This gives us a sense of how long the warmup phase is on average across seeds. Be mindful that using too many seeds with a lot of chains can exhaust GPU memory. The motivation is to check how stable the warmup strategy is for different window sizes. | target_rhat = 1.01
warmup_window_size = 30
max_num_steps = 1000 // warmup_window_size
iteration_after_warmup = np.array([])
for seed in jax.random.split(jax.random.PRNGKey(0), 10):
initial_state = initialize((num_super_chains,), key = seed)
initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,
axis = 0)
result_cold, final_kernel_args, rhat_forge = \
forge_chain(target_rhat = target_rhat,
warmup_window_size = warmup_window_size,
kernel_cold = kernel_cold,
initial_state = initial_state,
max_num_steps = max_num_steps,
seed = seed, monitor = False,
use_nested_rhat = True,
use_log_joint = False)
iteration_after_warmup = np.append(iteration_after_warmup,
len(rhat_forge) * warmup_window_size)
# print(iteration_after_warmup)
print(rhat_forge)
print(iteration_after_warmup.mean())
print(iteration_after_warmup.std()) | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 | 19b23de286cf0b2304f42e7942a9aa60 |
Results for the Banana problem
Applying the code above for the banana problem with
target_rhat = 1.01
use_nested_rhat = True
use_log_joint = False
we estimate the length of the warmup phase for different window sizes:
w = 10, length = 62 +/- 16.12
w = 15, length = 72 +/- 17.41
w = 20, length = 86 +/- 18
w = 30, length = 90 +/- 13.75
w = 60, length = 120 +/- 0.0
Taking into consideration the different granularities, we find the results to be fairly consistent with one another.
Let's go back to the original case where we use $\hat R$ and ESS as our stopping criterion. Given the approximate one-to-one map between $\hat R$ and ESS per chain, the two criteria are somewhat redundant, so I'll focus on $\hat R$. When picking the window size, we must contend with the following trade-off:
* if the window size is too short, we're unlikely to produce a large enough ESS per chain to hit the target $\hat R$, and this could mean a never-ending warmup phase, or one that only stops once we exceed a maximum number of steps.
* if the window size is too large, we may jump past the optimal point. It's also worth noting that the first window is unlikely to yield satisfactory results, because the initial estimates are overdispersed and biased.
The first item is largely mitigated by using nested-$\hat R$, since we're then less dependent on the ESS per chain. The second item could be addressed by using a path-finder to initialize the chains and/or by discarding some of the early iterations in a window when computing the diagnostics.
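For reference, here is a rough numpy sketch of the nested-$\hat R$ computation, mirroring the commented-out sandbox code further down; the chain-to-super-chain grouping and the exact variance estimators here are illustrative assumptions, not the library's implementation.
~~~
import numpy as np

def nested_rhat_sketch(chains, num_super_chains):
    """chains has shape (num_samples, num_chains, num_dims); consecutive chains
    are grouped into num_super_chains super chains of equal size."""
    num_samples, num_chains, num_dims = chains.shape
    grouped = chains.reshape(num_samples, num_super_chains, -1, num_dims)
    mean_chain = grouped.mean(axis=0)                 # per-chain means
    mean_super_chain = grouped.mean(axis=(0, 2))      # per-super-chain means
    var_chain = grouped.var(axis=0, ddof=1)           # within-chain variances
    # within-super-chain variance: spread of chain means + mean within-chain variance
    var_super_chain = mean_chain.var(axis=1, ddof=1) + var_chain.mean(axis=1)
    W = var_super_chain.mean(axis=0)                  # average within-super-chain variance
    B = mean_super_chain.var(axis=0, ddof=1)          # between-super-chain variance
    return np.sqrt((W + B) / W)                       # one value per dimension
~~~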
One final remark is that using $\hat R$ on the log joint distribution yielded somewhat optimistic results. As Pavel puts it: "log_joint is a pretty bad metric. Generally, for convergence, you prefer to measure the least constrained directions, and log_joint is typically not that."
Draft Code |
result_cold, _, final_kernel_args = tfp.mcmc.sample_chain(
num_results = 100,
current_state = initial_state,
kernel = kernel_cold,
previous_kernel_results = None,
seed = random.PRNGKey(1954),
return_final_kernel_results = True)
result_warm, _, final_kernel_args = tfp.mcmc.sample_chain(
num_results = 50,
current_state = result_cold[-1],
kernel = kernel_warm,
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(1954),
return_final_kernel_results = True)
nested_rhat(result_warm[1:3], 4)
warmup_window_size = 200
current_state = initial_state
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
# result_warm, (step_size_saved, num_leapfrog_steps_saved) = tfp.mcmc.sample_chain(
# warmup_window_size, current_state, kernel = kernel_warm,
# seed = random.PRNGKey(1954), trace_fn = trace_fn)
result_warm, kernel_args, final_kernel_args = tfp.mcmc.sample_chain(
warmup_window_size, current_state, kernel = kernel_warm,
seed = random.PRNGKey(1954), return_final_kernel_results = True)
# step_size = step_size_saved[warmup_window_size - 1]
# current_state = result_warm[warmup_window_size - 1, :, :]
# num_leapfrog_steps = num_leapfrog_steps_saved[warmup_window_size - 1]
tfp.mcmc.potential_scale_reduction(result_warm[:, :, :])
# kernel_warm2 = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size, num_leapfrog_steps)
# kernel_warm2 = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm2, warmup_window_size)
# kernel_warm2 = tfp.mcmc.DualAveragingStepSizeAdaptation(
# kernel_warm2, warmup_window_size, target_accept_prob = 0.75,
# reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
# result_warm2, (step_size_saved) = tfp.mcmc.sample_chain(
# warmup_window_size, current_state, kernel = kernel_warm2,
# seed = random.PRNGKey(1954), trace_fn = trace_fn)
result_warm2 = tfp.mcmc.sample_chain(
num_results = warmup_window_size,
kernel = kernel_warm,
current_state = current_state,
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(1953)
)
tfp.mcmc.potential_scale_reduction(result_warm2.all_states[:, :, :])
print(problem_name)
print(max(rhat_warmup))
print(min(ess_warmup))
# print(len(step_size))
# print(step_size[0][warmup_window_size - 1])
max(tfp.mcmc.potential_scale_reduction(result_warm))
# Define kernel for warmup windows (should be the same in the long and short regime)
warmup_window_size = 10
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_warm_init = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm_init = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm_init, warmup_window_size)
kernel_warm_init = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm_init, warmup_window_size, target_accept_prob = 0.75, #0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_warm, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(
warmup_window_size, initial_state, kernel = kernel_warm_init, seed = random.PRNGKey(1954),
trace_fn = trace_fn)
print(step_size[len(step_size) - 1])
print(max_trajectory_length[len(max_trajectory_length) - 1])
print(initial_state.shape)
print(result_warm[warmup_window_size, :, :])
# To run next window, define a new transition kernel
# REMARK: the maximum trajectory length isn't, if my understanding is correct,
# a tuning parameter; rather something that gets calculated at each step. So
# there's no need to pass it on.
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size[len(step_size) - 1], 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_warm2, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(
warmup_window_size, initial_state, kernel = kernel_warm, seed = random.PRNGKey(1954),
trace_fn = trace_fn)
print(result_warm.shape)
print(step_size.shape)
print(max_trajectory_length.shape)
step_size[len(step_size) - 1]
# nested_rhat(result_short.all_states, num_super_chains)
## Sandbox
# Pool chains into super chains
# num_super_chains = 4 # num_chains_short // num_chains_long
# num_sub_chains = num_chains_short // num_super_chains
# used_samples = num_samples # 5 # 2 * target_iter_mean # target_iter_mean
# result_state = result_short.all_states[0:used_samples, :, :]
# chain_states = result_state.reshape(used_samples, num_sub_chains,
# -1, num_dimensions)
# independent_chains_ndims = 1
# sample_ndims = 1
# sample_axis = tf.range(0, sample_ndims)
# chain_axis
# used_samples = result_state.shape[0]
# num_sub_chains = result_state.shape[1] // num_super_chains
# num_dimensions = result_state.shape[2]
# chain_states = result_state.reshape(used_samples, -1, num_sub_chains,
# num_dimensions)
# state = tf.convert_to_tensor(chain_states, name = 'state')
# mean_chain = tf.reduce_mean(state, axis = 0)
# mean_super_chain = tf.reduce_mean(state, axis = [0, 2])
# variance_chain = _reduce_variance(state, axis = 0, biased = False)
# variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \
# + tf.reduce_mean(variance_chain, axis = 1)
# W = tf.reduce_mean(variance_super_chain, axis = 0)
# B = _reduce_variance(mean_super_chain, axis = 0, biased = False)
# rhat = tf.sqrt((W + B) / W)
# print(rhat)
# print(mean_chain.shape)
# print(mean_super_chain.shape)
# print("mean_super_chain: ", mean_super_chain)
# print(variance_chain.shape)
# print(variance_super_chain.shape)
# print(state.shape) # (5, 250, 4, 2)
# print(result_state.shape) # (5, 1000, 2)
# # 'manually' compute the mean of each super chain.
# print(np.mean(result_state[:, 0:250, 0]))
# print(np.mean(result_state[:, 250:500, 0]))
# print(np.mean(result_state[:, 500:750, 0]))
# print(np.mean(result_state[:, 750:1000, 0]))
# # compute the means after reshaping the results. Get agreement!
# print(np.mean(chain_states[:, 0, :, 0]))
# print(np.mean(chain_states[:, 1, :, 0]))
# print(np.mean(chain_states[:, 2, :, 0]))
# print(np.mean(chain_states[:, 3, :, 0]))
# print(result_state[:, 250, 0])
# print(chain_states[:, 0, 1, 0])
# simple_chain = np.array([[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]])
# simple_chain.shape # (4, 6)
# chain_reshape = simple_chain.reshape(4, 2, -1)
# chain_reshape.shape # (4, 2, 3)
# np.mean(chain_reshape, axis = 0) # returns mean for each chain
# np.mean(chain_reshape[:, 0, :]) # 1
# np.mean(chain_reshape[:, 1, :]) # 4
# np.mean(simple_chain[:, 0:2]) # 1
# np.mean(simple_chain[:, 3:6]) # 4 -- but it seems index should be 3:5
# # simple_chain[:, 3:6]
# ## Sandbox
# tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x
# state = result_short.all_states[1:range_iter[index], :, :]
# n = state.shape[0]
# m = state.shape[1]
# sample_ndims = 1
# independent_chains_ndims = 1
# sample_axis = tf.range(0, sample_ndims) # CHECK
# chain_axis = 0
# sample_and_chain_axis = tf.range(0, sample_ndims + independent_chains_ndims) # CHECK
# with tf.name_scope('potential_scale_reduction_single_state'):
# state = tf.convert_to_tensor(state, name = 'state')
# # CHECK: do we need to define a tf scope?
# n_samples = tf.compat.dimension_value(state.shape[0])
# # n = _axis_size(state, sample_axis)
# # m = _axis_size(state, chain_axis)
# # NOTE: These lines prompt the error message once the session is run.
# # x = tf.reduce_mean(state, axis=sample_axis, keepdims=True)
# # x_tf = tf.convert_to_tensor(x, name = 'x')
# # n_tf = _axis_size(x_tf)
# b_div_n = _reduce_variance(
# tf.reduce_mean(state, axis = 0, keepdims = False),
# sample_and_chain_axis, # sample and chain axis
# biased = False
# )
# w = tf.reduce_mean(
# _reduce_variance(state, sample_axis, keepdims = False,
# biased = False),
# axis = sample_and_chain_axis
# )
# # TODO: work out n and m from the number of chains being passed.
# # n = target_iter_mean
# # m = num_chains
# sigma_2_plus = ((n - 1) / n) * w + b_div_n
# rhat = ((m + 1.) / m) * sigma_2_plus / w - (n - 1.) / (m * n)
# # Launch the graph in a session. (TensorFlow uses deferred execution,
# # so need to explicitly request evaluation)
# sess = tf.compat.v1.Session()
# print(sess.run(rhat)) | nested_rhat/rhat_locker.ipynb | google-research/google-research | apache-2.0 | 37bc230e0c43d73b5e87fc047906c184 |
Otherwise, set your project ID here. | if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-00-f25b80479c89" # @param {type:"string"} | courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 | c77a818fba59556a8a1e5cd00811bda3 |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI. | # Fill in your bucket name and region
BUCKET_NAME = "gs://qwiklabs-gcp-00-f25b80479c89" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://qwiklabs-gcp-00-f25b80479c89":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP | courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 | 82d2468bda242ed029c84235cd9f82df |
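If the bucket does not already exist, it can be created up front; a minimal sketch using the Cloud SDK (this assumes gsutil is available in the notebook environment and that you are authenticated):
~~~
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME  # sanity-check access to the bucket
~~~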
Train the model
Define your custom training job on Vertex AI.
Use the CustomTrainingJob class to define the job, which takes the following parameters:
display_name: The user-defined name of this training pipeline.
script_path: The local path to the training script.
container_uri: The URI of the training container image.
requirements: The list of Python package dependencies of the script.
model_serving_container_image_uri: The URI of a container that can serve predictions for your model, either a prebuilt container or a custom container.
Use the run function to start training, which takes the following parameters:
args: The command line arguments to be passed to the Python script.
replica_count: The number of worker replicas.
model_display_name: The display name of the Model if the script produces a managed Model.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
The run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object. | # TODO
# Define your custom training job and use the run function to start the training
job = aiplatform.CustomTrainingJob(
display_name=JOB_NAME,
script_path="task.py",
container_uri=TRAIN_IMAGE,
requirements=["tensorflow_datasets==1.3.0"],
model_serving_container_image_uri=DEPLOY_IMAGE,
)
MODEL_DISPLAY_NAME = "cifar10-" + TIMESTAMP
# TODO
# Start the training
if TRAIN_CPU:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_CPU.name,
accelerator_count=TRAIN_NCPU,
)
else:
model = job.run(
model_display_name=MODEL_DISPLAY_NAME,
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_count=0,
) | courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 | 39d51c60fc98f78687336a98550de69f |
Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- predictions_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for training.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes. | MIN_NODES = 1
MAX_NODES = 1
# The name of the job
BATCH_PREDICTION_JOB_NAME = "cifar10_batch-" + TIMESTAMP
# Folder in the bucket to write results to
DESTINATION_FOLDER = "batch_prediction_results"
# The Cloud Storage bucket to upload results to
BATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + "/" + DESTINATION_FOLDER
# TODO
# Make SDK batch_predict method call
batch_prediction_job = model.batch_predict(
instances_format="jsonl",
predictions_format="jsonl",
job_display_name=BATCH_PREDICTION_JOB_NAME,
gcs_source=BATCH_PREDICTION_GCS_SOURCE,
gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_CPU,
accelerator_count=DEPLOY_NCPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=True,
) | courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 | 99afdfb8e80024c3fc8bcb57bc32e037 |
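Once the batch prediction job finishes, the result files are written under the destination prefix; a quick sanity check of the output (a sketch, again assuming gsutil is available):
~~~
! gsutil ls -r $BATCH_PREDICTION_GCS_DEST_PREFIX
~~~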
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Job
Model
Cloud Storage Bucket | delete_training_job = True
delete_model = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# TODO
# Delete the training job
job.delete()
# TODO
# Delete the model
model.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME | courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 | 61031f757d0fd0d520ab1e5073376fbc |
It looks pretty cool, but it doesn't really convey any information.
A more informative alternative is to plot each word with the frequency at which it appears in job postings on the horizontal axis,
and the frequency at which it appears in resumes on the vertical axis. | def text_size(total):
"""equals 8 if total is 0, 28 if total is 200"""
return 8 + total / 200 * 20
for word, job_popularity, resume_popularity in data:
plt.text(job_popularity, resume_popularity, word,
ha='center', va='center',
size=text_size(job_popularity + resume_popularity))
plt.xlabel("Popularity on Job Postings")
plt.ylabel("Popularity on Resumes")
plt.axis([0, 100, 0, 100])
plt.show() | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | fac0b44181179f3d7c8f9cbbffd5354d |
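Note that the plotting cell above assumes a `data` variable of (word, job-posting popularity, resume popularity) triples defined earlier in the notebook; a minimal illustrative sketch, with made-up words and counts:
~~~
# hypothetical data, for illustration only
data = [("big data", 100, 15), ("Hadoop", 95, 25), ("Python", 75, 50),
        ("R", 50, 40), ("machine learning", 80, 20), ("statistics", 20, 60),
        ("data science", 60, 70), ("analytics", 90, 3), ("team player", 85, 85)]
~~~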
2) n-gram models | # convert Unicode quotation marks to plain ASCII quotes
def fix_unicode(text):
return text.replace(u"\u2019", "'")
def get_document():
url = "http://radar.oreilly.com/2010/06/what-is-data-science.html"
html = requests.get(url).text
soup = BeautifulSoup(html, 'html5lib')
#content = soup.find("div", "entry-content") # NoneType Error
content = soup.find("div", "article-body") # find article-body div
regex = r"[\w']+|[\.]" # match words or periods
document = []
for paragraph in content("p"):
words = re.findall(regex, fix_unicode(paragraph.text))
document.extend(words)
return document
document = get_document()
#document
### + to capture information about words that appear consecutively
a = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
list(zip(a,b))
bigrams = list(zip(document, document[1:]))
transitions = defaultdict(list)
for prev, current in bigrams:
transitions[prev].append(current)
#transitions
transitions
transitions['.']
# We need to pick a starting word; one option is to choose at random among the words that follow a period.
def generate_using_bigrams(transitions):
current = "." # meaning the next word will start a new sentence
result = []
while True:
next_word_candidates = transitions[current] # bigrams (current, _)
current = random.choice(next_word_candidates) # choose one at random
result.append(current) # append it to results
if current == ".": return " ".join(result) # if ".", the sentence is finished
random.seed(0)
print("bigram sentences")
for i in range(10):
print(i, generate_using_bigrams(transitions))
print()
# The sentences are gibberish, but they're the kind of thing you could use if you were trying to build a website that sounds data-science-y... | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | 41393899a46e208187ae9c828c9e2d36 |
bigram: two consecutive words
trigram: three consecutive words (we could look at higher-order n-grams, but three is usually enough) | ### + to capture information about words that appear consecutively
a = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
list(zip(a,b))
# trigrams: the next word is determined by the previous two words
trigrams = list(zip(document, document[1:], document[2:]))
trigram_transitions = defaultdict(list)
starts = []
for prev, current, next in trigrams:
if prev == ".": # if the previous "word" was a period
starts.append(current) # then this word starts a new sentence
trigram_transitions[(prev, current)].append(next)
# sentences can now be generated much like in the bigram case
def generate_using_trigrams(starts, trigram_transitions):
current = random.choice(starts) # choose a random starting word
prev = "." # and precede it with a '.'
result = [current]
while True:
next_word_candidates = trigram_transitions[(prev, current)]
next = random.choice(next_word_candidates)
prev, current = current, next
result.append(current)
if current == ".":
return " ".join(result)
print("trigram sentences")
for i in range(10):
print(i, generate_using_trigrams(starts, trigram_transitions))
print()
# somewhat better sentences... | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | ef261ea4cadbbc273da1d07a42e021e9 |
Using trigrams, at each step of generating the next word there are far fewer candidate words than with bigrams, and often there is only a single candidate.
That means we can end up reproducing, verbatim, a sentence (or long phrase) that already exists in some document.
The remedy is to gather many more essays about data science and build the n-gram model on top of them!
<p><span style="color:blue">**3) Grammar**</span></p>
A different approach is to generate sentences that make sense by relying on a grammar:
what the parts of speech are, and how they can be combined into sentences.
For example: a noun is always followed by a verb, and so on. | # Items that start with an underscore are rules that can be expanded further; call everything else a terminal.
# e.g., '_S' is the sentence rule, '_NP' a noun phrase, '_VP' a verb phrase
grammar = {
"_S" : ["_NP _VP"],
"_NP" : ["_N",
"_A _NP _P _A _N"],
"_VP" : ["_V",
"_V _NP"],
"_N" : ["data science", "Python", "regression"],
"_A" : ["big", "linear", "logistic"],
"_P" : ["about", "near"],
"_V" : ["learns", "trains", "tests", "is"]
} | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | 79b3aa27afea12a0f30f2841fd714d3e |
~~~
['_S']
['_NP','_VP']
['_N','_VP']
['Python','_VP']
['Python','_V','_NP']
['Python','trains','_NP']
['Python','trains','_A','_NP','_P','_A','_N']
['Python','trains','logistic','_NP','_P','_A','_N']
['Python','trains','logistic','_N','_P','_A','_N']
['Python','trains','logistic','data science','_P','_A','_N']
['Python','trains','logistic','data science','about','_A', '_N']
['Python','trains','logistic','data science','about','logistic','_N']
['Python','trains','logistic','data science','about','logistic','Python']
# is a given token a terminal?
def is_terminal(token):
return token[0] != "_"
# replace each token with one of its possible expansions
def expand(grammar, tokens):
for i, token in enumerate(tokens):
# skip over terminals
if is_terminal(token): continue
# if the token is not a terminal, randomly choose one of its replacements
replacement = random.choice(grammar[token])
if is_terminal(replacement):
tokens[i] = replacement
else:
tokens = tokens[:i] + replacement.split() + tokens[(i+1):]
# apply expand to the new list of tokens
return expand(grammar, tokens)
# at this point every token is a terminal, so we're done
return tokens
def generate_sentence(grammar):
return expand(grammar, ["_S"])
print("grammar sentences")
for i in range(10):
print(i, " ".join(generate_sentence(grammar)))
print() | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | 98cb6a3b6bd5377dc077b16b7b98f4c8 |
<p><span style="color:blue">**5) Topic Modeling**</span></p> | # assign a weight to each topic based on the distribution of words
def sample_from(weights):
'''returns i with probability weights[i] / sum(weights)'''
total = sum(weights)
rnd = total * random.random() # pick uniformly between 0 and total
for i, w in enumerate(weights):
rnd -= w # return the smallest i such that
if rnd <= 0: return i # sum(weights[:(i+1)]) >= rnd | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | 8b6fa16f2d15389f88fcd74208c723c9 |
~~~
In other words, if weights is [1, 1, 3], then the function returns
0 with probability 1/5,
1 with probability 1/5,
and 2 with probability 3/5.
~~~ | documents = [
["Hadoop", "Big Data", "HBase", "Java", "Spark", "Storm", "Cassandra"],
["NoSQL", "MongoDB", "Cassandra", "HBase", "Postgres"],
["Python", "scikit-learn", "scipy", "numpy", "statsmodels", "pandas"],
["R", "Python", "statistics", "regression", "probability"],
["machine learning", "regression", "decision trees", "libsvm"],
["Python", "R", "Java", "C++", "Haskell", "programming languages"],
["statistics", "probability", "mathematics", "theory"],
["machine learning", "scikit-learn", "Mahout", "neural networks"],
["neural networks", "deep learning", "Big Data", "artificial intelligence"],
["Hadoop", "Java", "MapReduce", "Big Data"],
["statistics", "R", "statsmodels"],
["C++", "deep learning", "artificial intelligence", "probability"],
["pandas", "R", "Python"],
["databases", "HBase", "Postgres", "MySQL", "MongoDB"],
["libsvm", "regression", "support vector machines"]
]
# let's try to recover a total of K = 4 topics
K = 4
# how many times each topic is assigned to each document (one Counter per document)
document_topic_counts = [Counter()
for _ in documents]
# how many times each word is assigned to each topic (one Counter per topic)
topic_word_counts = [Counter() for _ in range(K)]
# the total number of words assigned to each topic (one number per topic)
topic_counts = [0 for _ in range(K)]
# the total number of words in each document (one number per document)
document_lengths = [len(d) for d in documents]
# the number of distinct words
distinct_words = set(word for document in documents for word in document)
W = len(distinct_words)
# the total number of documents
D = len(documents)
# the number of words in documents[3] that are associated with topic 1:
document_topic_counts[3][1]
# how many times is the word 'nlp' associated with topic 2?
topic_word_counts[2]["nlp"]
def p_topic_given_document(topic, d, alpha=0.1):
"""the fraction of words in document d
that are assigned to topic (plus some smoothing)"""
return ((document_topic_counts[d][topic] + alpha) /
(document_lengths[d] + K * alpha))
def p_word_given_topic(word, topic, beta=0.1):
"""the fraction of words assigned to topic that equal word (plus some smoothing)"""
return ((topic_word_counts[topic][word] + beta) /
(topic_counts[topic] + W * beta))
def topic_weight(d, word, k):
"""given a document and a word in that document, return the weight of the k-th topic"""
return p_word_given_topic(word, k) * p_topic_given_document(k, d)
def choose_new_topic(d, word):
return sample_from([topic_weight(d, word, k)
for k in range(K)])
random.seed(0)
document_topics = [[random.randrange(K) for word in document]
for document in documents]
for d in range(D):
for word, topic in zip(documents[d], document_topics[d]):
document_topic_counts[d][topic] += 1
topic_word_counts[topic][word] += 1
topic_counts[topic] += 1
for iter in range(1000):
for d in range(D):
for i, (word, topic) in enumerate(zip(documents[d],
document_topics[d])):
# remove this word / topic from the counts
# so that it doesn't influence the weights
document_topic_counts[d][topic] -= 1
topic_word_counts[topic][word] -= 1
topic_counts[topic] -= 1
document_lengths[d] -= 1
# choose a new topic based on the weights
new_topic = choose_new_topic(d, word)
document_topics[d][i] = new_topic
# and now add it back to the counts
document_topic_counts[d][new_topic] += 1
topic_word_counts[new_topic][word] += 1
topic_counts[new_topic] += 1
document_lengths[d] += 1
# to figure out what each topic means, look at its most heavily weighted words
for k, word_counts in enumerate(topic_word_counts):
for word, count in word_counts.most_common():
if count > 0: print(k, word, count)
# based on those words, assign names to the topics as follows
topic_names = ["Big Data and programming languages",
"databases",
"machine learning",
"statistics"]
# now we can see what each user's interests are
for document, topic_counts in zip(documents, document_topic_counts):
print(document)
for topic, count in topic_counts.most_common():
if count > 0:
print(topic_names[topic], count)
print() | notebook/ch20_natural_language_processing.ipynb | rnder/data-science-from-scratch | unlicense | 5b5e1326e96aef4b39da4f2783dd494a |
We will vary the fake root we introduced to obtain the phase-mismatch diagram. That is, the phase mismatch $\delta k$ is going to be some linear function of $z$. | plt.plot(x, [abs(f(z))**2 for z in x]) | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | eccb0cc625cf29f1c6141137fcdca279 |
What happens when we change the indices of refraction for the different modes? The phase-mismatch will shift depending on where the new $\delta k = 0$ occurs. The width of the peak may also change if the indices of refraction are large. | indices_of_refraction=[3.,5.,10.]
f = lambda z: functions.make_nonlinear_interaction(roots_to_use(z),
modes_to_use,Ex.delays,0,0,0.1,plus_or_minus_arr,indices_of_refraction=indices_of_refraction)
plt.plot(x, [abs(f(z))**2 for z in x]) | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | 7555a98d6a632e7866f6cd51fa189d6a |
Generating a Hamiltonian from a model
In this section we will use example 3 to generate a Hamiltonian with the nonlinear coefficients that result from inserting a nonlinearity into a circuit. We will assume that the nonlinearity is inserted at the delay line of index 0, corresponding to $\tau_1$. | import sympy as sp
import itertools
from qnet.algebra.circuit_algebra import *
Ex = Time_Delay_Network.Example3(r1 = 0.9, r3 = 0.9, max_linewidth=35.,max_freq=25.)
Ex.run_Potapov()
E = Ex.E
roots = Ex.roots
M1 = Ex.M1
delays = Ex.delays
modes = functions.spatial_modes(roots,M1,E)
roots
## nonlinearity information
delay_index = 0
start_nonlin = 0.
duration_nonlin = .1
NONLIN_WEIGHT = 10.
m = len(roots)
indices = range(m)
chi_order = 3 ## i.e. chi-3 nonlinearity
plus_minus_combinations = list(itertools.combinations(range(chi_order + 1), 2)) ## pick which fields are annihilated
list_of_pm_arr = []
for tup in plus_minus_combinations:
ls = [1]*(chi_order+1)
for i in tup:
ls[i]=-1
list_of_pm_arr.append(ls)
a = [sp.symbols('a_'+str(i)) for i in range(m)]
a_H = [sp.symbols('a^H_'+str(i)) for i in range(m)]
A,B,C,D = Potapov.get_Potapov_ABCD(Ex.roots,Ex.vecs,Ex.T,z=0.)
#Omega = (A-A.H)/(2j) #### closed dynamics only. i.e. not damping
Omega = -1j*A ## full dynamics
H_lin_sp = 0
## with sympy only
for i in range(m):
for j in range(m):
H_lin_sp += a_H[i]*a[j]*Omega[i,j]
def make_nonlin_term_sp(combination,pm_arr):
'''
Make symbolic term
With sympy only
'''
r = 1
for index,sign in zip(combination,pm_arr):
if sign == 1:
r*= a_H[index]
else:
r *= a[index]
return r | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | 371fa942eef97f4390f46524eb7f2b02 |
Let's impose a large 'index of refraction'. In the future we will replaces this by better conditions for phase-mismatch, including realistic values. For now, this will narrow the gain versus $\Delta k$ function so that few interaction terms remain. | def weight(combination,pm_arr):
roots_to_use = np.array([roots[i].imag for i in combination])
modes_to_use = [modes[i] for i in combination]
return functions.make_nonlinear_interaction(roots_to_use, modes_to_use, delays, delay_indices,
start_nonlin,duration_nonlin,pm_arr,
indices_of_refraction = [1000.]*len(combination),
eps=1e-12,)
## TODO: add a priori check to restrict exponential growth
weights = {}
count = 0
for pm_arr in list_of_pm_arr:
field_combinations = itertools.combinations_with_replacement(range(m), chi_order+1)
for combination in field_combinations:
count += 1
weights[tuple(combination),tuple(pm_arr)] = weight(combination,pm_arr)
print count
plt.hist([abs(x) for x in [weights[key] for key in weights] ],bins=100); | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | ef0767af2f940ff17b364edb8d439d50 |
As we see above, most of the interactions are negligible. Let's drop them out. | significant_weight_keys = [key for key in weights if abs(weights[key]) > 1e-4]
significant_weights = dict((key,weights[key]) for key in significant_weight_keys)
significant_weights = {k:v for k,v in weights.iteritems() if abs(v) > 1e-4} ## more elegant
len(significant_weights)
H_nonlin_sp = 0 ## with sympy only
for combination,pm_arr in significant_weights:
H_nonlin_sp += make_nonlin_term_sp(combination,pm_arr)*significant_weights[combination,pm_arr]
H_sp = H_lin_sp + H_nonlin_sp*NONLIN_WEIGHT
def make_sp_conj(A):
'''
Returns the symbolic conjugate of A.
Args:
A (symbolic expression in symbols a[i] and a_H[i])
Returns:
The complex conjugate of A
'''
A_H = sp.conjugate(A)
for i in range(len(a)):
A_H = A_H.subs(sp.conjugate(a[i]),a_H[i])
A_H = A_H.subs(sp.conjugate(a_H[i]),a[i])
return A_H
def make_eq_motion(H_sp):
'''
Input is a tuple or list, output is a matrix vector
'''
A_H = make_sp_conj(H_sp)
diff_ls = [1j*sp.diff(H_sp,var) for var in a_H] + [-1j*sp.diff(A_H,var) for var in a]
fs = [sp.lambdify( tuple(a+a_H),expression) for expression in diff_ls ]
return lambda arr: (np.asmatrix([ f(* arr ) for f in fs])).T
A_H = sp.conjugate(H_sp)
for i in range(len(a)):
A_H = A_H.subs(sp.conjugate(a[i]),a_H[i])
A_H = A_H.subs(sp.conjugate(a_H[i]),a[i])
eq_mot = make_eq_motion(H_sp)
def double_up(M1,M2=None):
if M2 is None:
M2 = np.zeros_like(M1)
top = np.hstack([M1,M2])
bottom = np.hstack([np.conj(M2),np.conj(M1)])
return np.vstack([top,bottom])
A_d,C_d,D_d = map(double_up,(A,C,D))
B_d = -double_up(C.H)
def make_f(eq_mot,B,a_in):
'''
Nonlinear equations of motion
'''
return lambda t,a: np.asarray(eq_mot(a)+B*a_in(t)).T[0]
def make_f_lin(A,B,a_in):
'''
Linear equations of motion
'''
return lambda t,a: np.asarray(A*np.asmatrix(a).T+B*a_in(t)).T[0]
a_in = lambda t: np.asmatrix([1.]*4).T
f = make_f(eq_mot,B_d,a_in)
f_lin = make_f_lin(A_d,B_d,a_in)
eq_res = eq_mot([1.]*10)
print eq_res
mat_res = A_d*np.asmatrix([1.]*10).T
print mat_res
## compute L2 error between
np.sqrt(sum(np.asarray(abs(eq_res - mat_res))**2))
r = ode(f).set_integrator('zvode', method='bdf')
r_lin = ode(f_lin).set_integrator('zvode', method='bdf')
y0 = np.asmatrix([0.]*10).T
t0=0.
r.set_initial_value(y0, t0)
r_lin.set_initial_value(y0, t0)
t1 = 100
dt = 0.01
Y = []
while r.successful() and r.t < t1:
r.integrate(r.t+dt)
u = a_in(r.t)
Y.append(C_d*r.y+D_d*u)
Y_lin = []
while r_lin.successful() and r_lin.t < t1:
r_lin.integrate(r_lin.t+dt)
u = a_in(r_lin.t)
Y_lin.append(C_d*r_lin.y+D_d*u)
for i in range(4):
plt.plot([(y).real[i][0,0] for y in Y ])
for i in range(4):
plt.plot([(y).real[i][0,0] for y in Y_lin ]) | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | 6ca8e6e37f572e52c32b200f876c7088 |
When we set the nonlinear terms above to zero, we find agreement with the linear equations of motion.
Using the symbolic packages is kind of slow. For classical simulations maybe we can avoid that. We just need to extract the equations of motion, which should end up being sparse in the interaction terms.
TODO: implement without sympy, e.g. with Julia
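One numpy-only option, sketched here as an illustration rather than the repo's implementation: keep each nonlinear term as a (mode indices, signs, weight) triple, exactly the information stored in significant_weights, and build the right-hand side of the equations of motion by looping over that sparse list. Drive and damping terms are omitted, and the sign convention follows make_eq_motion above.
~~~
import numpy as np

def make_rhs(Omega, terms):
    """Classical da/dt from a sparse list of nonlinear terms.
    Each term is (indices, signs, weight); a sign of +1 marks a creation
    operator (conjugated amplitude), -1 an annihilation operator."""
    def rhs(a):
        a = np.asarray(a, dtype=complex)
        dadt = 1j * (np.asarray(Omega) @ a)          # linear part
        for indices, signs, w in terms:
            for pos, (i, s) in enumerate(zip(indices, signs)):
                if s != 1:                           # differentiate w.r.t. a^H_i only
                    continue
                prod = w
                for pos2, (j, s2) in enumerate(zip(indices, signs)):
                    if pos2 == pos:
                        continue
                    prod *= np.conj(a[j]) if s2 == 1 else a[j]
                dadt[i] += 1j * prod
        return dadt
    return rhs

# illustrative wiring from the dictionary built above:
# terms = [(comb, pm, w) for (comb, pm), w in significant_weights.items()]
# rhs = make_rhs(np.asarray(Omega), terms)
~~~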
Testing different cases with make_nonlinear_interaction
making sure different exceptions get caught | roots_to_use = np.array([roots[i] for i in combination])
modes_to_use = [modes[i] for i in combination]
def call_make_non_lin():
return functions.make_nonlinear_interaction(roots_to_use, modes_to_use, delays, delay_indices,
start_nonlin,duration_nonlin,pm_arr,
indices_of_refraction,
eps=1e-12,func=lambda z : z.imag)
call_make_non_lin()
indices_of_refraction = 1000.
call_make_non_lin()
start_nonlin = -1 ## this shouldn't happen
call_make_non_lin()
start_nonlin = [1]*len(roots_to_use)
call_make_non_lin()
start_nonlin = 0.00001
duration_nonlin = .099
call_make_non_lin()
start_nonlin = [0.00001]*len(roots_to_use)
duration_nonlin = .099
call_make_non_lin() | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | e0c88fcc83d522cde0452d332e40ba56 |
Unused methods below | ## consolidated weights do not take into account which modes are createad or annihilated.
consolidated_weightes = {}
for key in significant_weights:
if not key[0] in consolidated_weightes:
consolidated_weightes[key[0]] = significant_weights[key]
else:
consolidated_weightes[key[0]] += significant_weights[key]
## QNET annihilation and creation operators
a_ = [Destroy(local_space('fock', namespace = str(i))) for i in range(m)]
## Make linear Hamiltonian with QNET
H_lin = sum([a_[i].dag()*a_[i]*Omega[i,i] for i in range(m)]) ## with QNET
def make_nonlin_term(combination,pm_arr):
'''
Make symbolic term
With QNET
'''
r = 1
for index,sign in zip(combination,pm_arr):
if sign == 1:
r*= a_[index].dag()
else:
r *= a_[index]
return r
## Make nonlinear Hamiltonian in QNET
H_nonlin = 0 ## with QNET
for combination,pm_arr in significant_weights:
H_nonlin += make_nonlin_term(combination,pm_arr)*significant_weights[combination,pm_arr]
H_qnet = H_lin+H_nonlin | Testing_make_nonlinear_interaction.ipynb | tabakg/potapov_interpolation | gpl-3.0 | efb34d15ce9ec8c6ebd74e5e4d22b9b2 |
The following code is the same as before, but you can send the commands all in one go.
However, there are implicit waits so the driver can finish its AJAX requests and render the page elements;
you can also use the find_element_by_xpath method. |
# browser = webdriver.Firefox() #I only tested in firefox
# browser.get('http://costcotravel.com/Rental-Cars')
# browser.implicitly_wait(5)#wait for webpage download
# browser.find_element_by_id('pickupLocationTextWidget').send_keys("PHX");
# browser.implicitly_wait(5) #wait for the airport suggestion box to show
# browser.find_element_by_xpath('//li[@class="sayt-result"]').click()
# #click the airport suggestion box
# browser.find_element_by_xpath('//input[@id="pickupDateWidget"]').send_keys('08/27/2016')
# browser.find_element_by_xpath('//input[@id="dropoffDateWidget"]').send_keys('08/30/2016',Keys.RETURN)
# browser.find_element_by_xpath('//select[@id="pickupTimeWidget"]/option[@value="09:00 AM"]').click()
# browser.find_element_by_xpath('//select[@id="dropoffTimeWidget"]/option[@value="05:00 PM"]').click()
# browser.implicitly_wait(5) #wait for the clicks to be completed
# browser.find_element_by_link_text('SEARCH').click()
# #click the search box
# time.sleep(8) #wait for firefox to download and render the page
# n = browser.page_source #grab the html source code
type(n) # the site uses unicode
soup = BeautifulSoup(n,'lxml') #use BeautifulSoup to parse the source
print "--------------first 1000 characters:--------------\n"
print soup.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print soup.prettify()[-1000:]
table = soup.find('div',{'class':'rentalCarTableDetails'}) #find the table
print "--------------first 1000 characters:--------------\n"
print table.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print table.prettify()[-1000:]
tr = table.select('tr') # let's look at one of the rows
type(tr)
# let's look at the first three rows
for i in tr[0:3]:
print i.prettify()
print "-----------------------------------" | costco-rental.ipynb | scko823/web-scraping-selenium-example | mit | 9cdc47ff8d70154b38c5252e8efb51ec |
Let's play with one of the rows | row = tr[3]
row.find('th',{'class':'tar'}).text.encode('utf-8')
row
row.contents[4].text #1. this is unicode, 2. the dollar sign is in the way
'Car' in 'Econ Car' #use this string logic to filter out unwanted data
rows = [i for i in tr if (('Price' not in i.contents[0].text and 'Fees' not in i.contents[0].text and 'Location' not in i.contents[0].text and i.contents[0].text !='') and len(i.contents[0].text)<30)]
# use this crazy list comprehension to get the data we want
#1. don't want the text 'Price' in the first column
#2. don't want the text 'Fee' in the first column
#3. don't want the text 'Location' in the first column
#4. the text length of first column must be less than 30 characters long
rows[0].contents[0].text #just exploring here...
rows[0].contents[4].text #need to get rid of the $....
rows[3].contents[0].text #need to make it utf-8
#process the data
prices = {}
for i in rows:
#print the 1st column text
print i.contents[0].text.encode('utf-8')
prices[i.contents[0].text.encode('utf-8')] = [i.contents[1].text.encode('utf-8'),i.contents[2].text.encode('utf-8'), i.contents[3].text.encode('utf-8'),i.contents[4].text.encode('utf-8')]
prices
iteritems = prices.iteritems()
# calling .iteritems() on a dictionary gives you an iterator you can loop over
iteritems.next() #run me five times
for name, priceList in prices.iteritems():
newPriceList = []
for i in priceList:
newPriceList.append(i.replace('$',''))
prices[name] = newPriceList
prices
data = pd.DataFrame.from_dict(prices, orient='index') #get a pandas DataFrame from the prices dictionary
data
data = data.replace('Not Available', numpy.nan) #replace the 'Not Available' data point to numpy.nan
data = pd.to_numeric(data, errors='coerce') #cast to numeric data
data
data.columns= ['Alamo','Avis','Budget','Enterprise'] #set column names
data
data.notnull() #check for missing data
data.min(axis=1, skipna=True) #look at the cheapest car in each class | costco-rental.ipynb | scko823/web-scraping-selenium-example | mit | 34a1279ec4b4e8eedaf7490b4dd32c28 |
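As a small optional follow-up (not part of the original notebook), pandas can also report which company offers that cheapest rate in each class:
~~~
data.idxmin(axis=1)  # rental company with the lowest price per car class
~~~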
Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data | #####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
# Part 1
print(
len(enrollments),
len(daily_engagement),
len(project_submissions)
)
# Part 2
def get_unique_students(file_name):
"""
Returns the set of unique account keys in the given data
"""
unqiue_students = set()
for e in file_name:
unqiue_students.add(e["account_key"])
return unqiue_students
u_enrollments = get_unique_students(enrollments)
u_daily_engagement = get_unique_students(daily_engagement)
u_project_submissions = get_unique_students(project_submissions)
print(
len(u_enrollments),
len(u_daily_engagement),
len(u_project_submissions)
) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 06f1934f4213c1afd401869a6f6ab92d |
Problems in the Data | #####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
for engagement_record in daily_engagement:
engagement_record['account_key'] = engagement_record['acct']
del[engagement_record['acct']] | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | db011cb7ceb817e4cebb1aa1e35819f9 |
Missing Engagement Records | #####################################
# 4 #
#####################################
## Find any one student enrollments where the student is missing from the daily engagement table.
## Output that enrollment.
for e in enrollments:
if e["account_key"] not in u_daily_engagement:
print("\n", e) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | fb9c4b9082540e137a44990de1a0afee |
Checking for More Problem Records | #####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
for ix, e in enumerate(enrollments):
if e["account_key"] not in u_daily_engagement and e["join_date"] != e["cancel_date"]:
print("\n", "Index: %i" % ix, "\n Correspoinding record: \n %s" % e) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 8cb8fc763bbe60c592b3509f87bc195a |
Tracking Down the Remaining Problems | # Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print(
len(non_udacity_enrollments),
len(non_udacity_engagement),
len(non_udacity_submissions)) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 76221596025fcaae43afc04c18503a82 |
Refining the Question | #####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
paid_students = dict()
for e in non_udacity_enrollments:
# check whether days_to_cancel == None or days_to_cancel > 7
if e["days_to_cancel"] == None or e["days_to_cancel"] > 7:
# store account key and join date in temporary variables
temp_key = e["account_key"]
temp_date = e["join_date"]
# check whether the account key already exists in the temp variable or the join date > existing join date
if temp_key not in paid_students or temp_date > paid_students[temp_key]:
# add account_key and enrollment_date to
paid_students[temp_key] = temp_date
len(paid_students) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 1815b5e3b1e13aac55c384355d4057cf |
Getting Data from First Week | # Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days >= 0 and time_delta.days < 7
def remove_free_trial_cancels(data):
new_data = []
for data_point in data:
if data_point['account_key'] in paid_students:
new_data.append(data_point)
return new_data
paid_enrollments = remove_free_trial_cancels(non_udacity_enrollments)
paid_engagement = remove_free_trial_cancels(non_udacity_engagement)
paid_submissions = remove_free_trial_cancels(non_udacity_submissions)
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
paid_engagement_in_first_week = []
# loop over engagements
for e in non_udacity_engagement:
# check if student is in paid students and if engagement date is valid
if e["account_key"] in paid_students and within_one_week(paid_students[e["account_key"]], e["utc_date"]) == True:
paid_engagement_in_first_week.append(e)
len(paid_engagement_in_first_week) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | cfedd29bfb929e9dc1ff4faa0ce5b5a3 |
Exploring Student Engagement | from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = list(total_minutes_by_account.values())
print('Mean:', np.mean(total_minutes))
print('Standard deviation:', np.std(total_minutes))
print('Minimum:', np.min(total_minutes))
print('Maximum:', np.max(total_minutes)) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | ae955d278a9fda1811150f3ab65da063 |
Debugging Data Analysis Code | #####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
for k,v in total_minutes_by_account.items():
if v > 7200:
print("\n", "account key: ", k, "value: ", v)
print(
paid_engagement_in_first_week["account_key" == 460],
paid_engagement_in_first_week["account_key" == 140],
paid_engagement_in_first_week["account_key" == 108],
paid_engagement_in_first_week["account_key" == 78]
) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 3299d0b478b92a87c902610a37f2b2d3 |
Lessons Completed in First Week | #####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
def group_data(data, key_name):
"""
Given data in dict form and a key, the function returns a grouped data set
"""
grouped_data = defaultdict(list)
for e in data:
key = e[key_name]
grouped_data[key].append(e)
return grouped_data
engagement_by_account = group_data(paid_engagement_in_first_week, "account_key")
def sum_grouped_data(data, field_name):
"""
Given data in dict form and a field name, the function returns sum of the field name per key
"""
summed_data = {}
for key, values in data.items():
total = 0
for value in values:
total += value[field_name]
summed_data[key] = total
return summed_data
total_lessons_per_account = sum_grouped_data(engagement_by_account, "lessons_completed")
def describe_data(data):
"""
Given a dataset the function returns mean, std. deviation, min and max
"""
print(
"Mean: %f" % np.mean(data),
"Standard deviation: %f" % np.std(data),
"Min: %f" % np.min(data),
"Max: %f" % np.max(data))
plt.hist(data)
describe_data(list(total_lessons_per_account.values())) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 9b59a140b106e18119ba00763e192d58 |
Number of Visits in First Week | ######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
for el in paid_engagement_in_first_week:
if el["num_courses_visited"] > 0:
el["has_visited"] = 1
else:
el["has_visited"] = 0
engagement_by_account = group_data(paid_engagement_in_first_week, "account_key")
total_visits_per_day_per_account = sum_grouped_data(engagement_by_account, "has_visited")
describe_data(list(total_visits_per_day_per_account.values())) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 9e05dc4f98d060f30c8c314f30fb3aa3 |
Splitting out Passing Students | ######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = ['746169184', '3176718735']
passing_engagement = []
non_passing_engagement = []
# loop over project submission data
for el in paid_submissions:
# check if project submission account key is in engagement data
if el["account_key"] in paid_engagement:
print(e["account_key"])
# check if lesson key is in subway_project_lesson key
if el["lesson_key"] in subway_project_lesson_keys:
print(e["lesson_key"])
# check if assigned_rating is PASSED or DISTINCTION
if el["assigned_rating"] in ["PASSED", "DISTINCTION"]:
print(e["assigned_rating"])
# if so, add record to passing_engagement list
passing_engagement.append(el)
# else add record to non_passing_engagement list
else:
non_passing_engagement.append(el)
print("Passing: ", len(passing_engagement), "Not passing: ", len(non_passing_engagement))
subway_project_lesson_keys = ['746169184', '3176718735']
pass_subway_project = set()
for el in paid_submissions:
if ((el["lesson_key"] in subway_project_lesson_keys) and
(el["assigned_rating"] == 'PASSED' or el["assigned_rating"] == 'DISTINCTION')):
pass_subway_project.add(el['account_key'])
len(pass_subway_project)
passing_engagement = []
non_passing_engagement = []
for el in paid_engagement_in_first_week:
if el['account_key'] in pass_subway_project:
passing_engagement.append(el)
else:
non_passing_engagement.append(el)
print(len(passing_engagement))
print(len(non_passing_engagement)) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | c72849f65cef03e62429bbb6c04480fd |
Comparing the Two Student Groups | ######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
# prepare passing data
passing_engagement_grouped = group_data(passing_engagement, "account_key")
non_passing_engagement_grouped = group_data(non_passing_engagement, "account_key")
passing_minutes = sum_grouped_data(passing_engagement_grouped, "total_minutes_visited")
passing_lessons = sum_grouped_data(passing_engagement_grouped, "lessons_completed")
passing_days = sum_grouped_data(passing_engagement_grouped, "has_visited")
passing_projects = sum_grouped_data(passing_engagement_grouped, "projects_completed")
# prepare non passing data
non_passing_minutes = sum_grouped_data(non_passing_engagement_grouped, "total_minutes_visited")
non_passing_lessons = sum_grouped_data(non_passing_engagement_grouped, "lessons_completed")
non_passing_days = sum_grouped_data(non_passing_engagement_grouped, "has_visited")
non_passing_projects = sum_grouped_data(non_passing_engagement_grouped, "projects_completed")
# compare
print("Minutes", "\n")
describe_data(list(passing_minutes.values()))
describe_data(list(non_passing_minutes.values()))
print("\n", "Lessons", "\n")
describe_data(list(passing_lessons.values()))
describe_data(list(non_passing_lessons.values()))
print("\n", "Days", "\n")
describe_data(list(passing_days.values()))
describe_data(list(non_passing_days.values()))
print("\n", "Projects", "\n")
describe_data(list(passing_projects.values()))
describe_data(list(non_passing_projects.values()))
passing_engagement[0:2] | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 7e8991805d1dd8814f0e69f3826c4b48 |
Making Histograms | ######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
# setup
%matplotlib inline
import matplotlib.pyplot as plt
# minutes passing (each plot gets its own figure so the histograms do not overlap)
plt.figure()
plt.title("Passing students by minute")
plt.hist(list(passing_minutes.values()))
# minutes non-passing
plt.figure()
plt.title("_NON_ Passing students by minute")
plt.hist(list(non_passing_minutes.values()))
# lessons
plt.figure()
plt.title("Passing students by lessons")
plt.hist(list(passing_lessons.values()))
# lessons non-passing
plt.figure()
plt.title("_NON_ Passing students by lessons")
plt.hist(list(non_passing_lessons.values()))
# days
plt.figure()
plt.title("Passing students by days")
plt.hist(list(passing_days.values()))
# days non-passing
plt.figure()
plt.title("_NON_ Passing students by days")
plt.hist(list(non_passing_days.values())) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | c52e91d71b2dcb269c2691a1999bdf87
Improving Plots and Sharing Findings | ######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
import seaborn as sns
# seaborn only
plt.title("_NON_ Passing students by days with S-E-A-B-O-R-N")
plt.xlabel("days spent in the classroom")
plt.ylabel("frequency")
plt.hist(list(non_passing_days.values()), bins=8) | p2/L1_Starter_Code.ipynb | stefanbuenten/nanodegree | mit | 15be3fb1a28224cbe794134764669eb8 |
Problem 1
The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.
By adding max pooling with stride 2 and kernel size 2 and increasing the number of training steps, the accuracy improved from 89.8% to 93.3%. | batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
  def model(data):
    # Convolutions now use stride 1; max pooling (kernel 2, stride 2) does the downsampling,
    # and its output actually feeds the next layer (previously the pooled tensor was discarded).
    conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')
    hidden = tf.nn.relu(conv + layer1_biases)
    pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')
    hidden = tf.nn.relu(conv + layer2_biases)
    pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # two poolings reduce each spatial dimension by a factor of 4, matching layer3_weights
    shape = pool.get_shape().as_list()
    reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])
    hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
    return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 5001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 250 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)) | machine-learning/deep-learning/udacity/ud730/4_convolutions.ipynb | pk-ai/training | mit | c061626ff5171f630f36dc0b5458bb53 |
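A quick sanity check of what Problem 1 asks for, as a standalone sketch (assuming the TensorFlow 1.x API used in this notebook and 28x28 inputs as in the earlier notMNIST cells): max pooling with kernel 2 and stride 2 halves each spatial dimension, so two pooling layers shrink the image by a factor of 4, which is why layer3_weights is sized with image_size // 4.
check_graph = tf.Graph()
with check_graph.as_default():
    dummy = tf.placeholder(tf.float32, shape=(1, 28, 28, 1))
    pooled_once = tf.nn.max_pool(dummy, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    pooled_twice = tf.nn.max_pool(pooled_once, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    print(pooled_once.get_shape())   # (1, 14, 14, 1)
    print(pooled_twice.get_shape())  # (1, 7, 7, 1)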
Variables can be defined freely and change type. There is a very handy print function (this is very different from Python2!). The format function can be used to customize the output. More at https://pyformat.info/ | a = 42
b = 256
z = 2 + 3j
w = 5 - 6j
print("I multiply", a, "and", b, "and I get", a * b)
print("Compex numbers!", z + w)
print("Real:", z.real)
# Variables as objects (in Python everything is an object)
print("Abs:", abs(z))
almost_pi = 3.14
better_pi = 3.14159265358979323846264338327950288419716939937510
c = 299792458
print("Look at his scientific notation {:.2E} or ar this nice rounding {:.3f}".format(c, better_pi)) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 78db03b9931c9ceb20471650a98138f0 |
Note that Python does not require semicolons to terminate an instruction (they don't hurt), but it does require indentation to be respected (after for, if, while, def, class, ...). | for i in range(5):
if (not i%2 == 0 or i == 0):
print(i) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | c896bfa99eae3e3f580cb20c887178c4 |
Structured Data
It's easy to work with variables of different natures. There are three kinds of structured variables: tuples (), lists [], and dicts {}. Tuples are immutable (the output of a function is often given as a tuple). Lists are the usual (multidimensional) arrays. Dictionaries are associative arrays with keys. | a = 5
a = "Hello, World"
# Multiple assignation
b, c = "Hello", "World"
print(a)
print(b, c)
tuple_example = (1,2,3)
print("Tuple", tuple_example[0])
# tuple_example[1] = 3
list_example = [1,2,3]
print("List 1", list_example[0])
list_example[1] = 4
print("List 2", list_example[1])
dict_example = {'one' : 1,
'two' : 2,
'three' : 3
}
print("Dict", dict_example['one']) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 468fefc2f8f7f244095748173c870d85 |
Lists are very useful as most of the methods are build it, like for sorting, reversing, inserting, deleting, slicing, ... | random_numbers = [1,64,78,13,54,34, "Ravioli"]
print("Length:", len(random_numbers))
true_random = random_numbers[0:5]
print("Sliced:", true_random)
print("Sorted:", sorted(true_random))
print("Max:", max(true_random))
random_numbers.remove("Ravioli")
print("Removed:", random_numbers)
multi_list = ["A string", ["a", "list"], ("A", "Tuple"), 5]
print("Concatenated list", random_numbers + multi_list) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 2d291a2d1e3ec774ad090ffa3d7b5c97 |
CAVEAT: Lists can behave unexpectedly because assignment does not copy them; both names end up pointing to the same object in memory. | cool_numbers = [0, 11, 42]
other_numbers = cool_numbers
print(other_numbers)
cool_numbers.append(300)
print(other_numbers) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 74a61684ed2dc33b144a23b0072d6025 |
To avoid this problem, slicing is usually used to make a copy. | cool_numbers = [0, 11, 42]
other_numbers = cool_numbers[:]
print(other_numbers)
cool_numbers.append(300)
print(other_numbers) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 24f62501fa930544eaa7f8d6c189a5ba |
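Slicing makes a shallow copy, which is enough for flat lists; for nested lists the inner lists are still shared. A small sketch (not from the original notebook) using copy.deepcopy:
import copy

nested = [[1, 2], [3, 4]]
shallow = nested[:]           # copies only the outer list
deep = copy.deepcopy(nested)  # copies the inner lists too

nested[0].append(99)
print(shallow)  # [[1, 2, 99], [3, 4]] -> inner list is shared
print(deep)     # [[1, 2], [3, 4]]     -> fully independent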
Strings are treated like lists, and slicing can be applied to them, with forgiving behavior with respect to indices: | s = "GNU Emacs"
# No problem with "wrong" index
print(s[4:100])
# Backwards!
print(s[-9:-6]) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 9500f83e828363af7f9cc0acbd00584e |
With a for loop it is possible to iterate over lists. (But be careful not to modify the list you are iterating over!) | for num in cool_numbers:
print("I like the number", num) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 193bbe63e62f31ea5229b62987283c3e |
Lists can generate other lists via list comprehensions, a functional way to operate on a list or on a subset defined by if statements. | numbers = [0, 1, 2, 3, 4, 5, 6, 7]
# Numbers via list comprehension
numbers = [i for i in range(0,8)]
print("Numbers:", numbers)
even = [x for x in numbers if x%2 == 0]
odd = [x for x in numbers if not x in even]
print("Even:", even)
print("Odd:", odd) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 69a9242c0a5a3cb892a0ddf297395f90 |
Functions
Python can have user-defined functions. There are some details about passing by reference or passing by value (what Python actually does is passing by assignment, details here: https://docs.python.org/3/faq/programming.html#how-do-i-write-a-function-with-output-parameters-call-by-reference). Functions have no declared return or argument types, and there is no overloading. | def say_hello(to = "Gabriele"):
print("Hello", to)
say_hello()
say_hello("Albert")
def sum_and_difference(a, b):
return (a + b, a - b)
(sum, diff) = sum_and_difference(10, 15)
print("Sum: {}, Diff: {}".format(sum, diff))
def usless_box(a,b,c,d,e,f):
return a,b,c,d,e,f
first, _, _, _, _, _ = usless_box(100, 0, 1, 2, 3, 4)
print(first) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 23700d3015ac95a0da97088b608b0475 |
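A small illustration of what "passing by assignment" means in practice (illustrative only): mutating a mutable argument is visible to the caller, while rebinding a name inside the function is not.
def modify(a_list, a_number):
    a_list.append(4)   # mutates the caller's object
    a_number += 1      # rebinds only the local name
    return a_number

items, n = [1, 2, 3], 10
modify(items, n)
print(items)  # [1, 2, 3, 4] -> the mutable argument changed
print(n)      # 10           -> the immutable argument did not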
A very useful construct is try-except, which can be used to handle errors. | hey = "String"
ohi = 6
try:
print(hey/3)
except:
print("Error in hey!")
try:
print(ohi/3)
except:
print("Error in ohi!") | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 8886b4ef7a9161d7c92e22305c5c893c |
NOTE: Prefer this naming convention (no CamelCase) and spaces over tabs
There is full support for OOP with Inheritance, Encapsulation and Polymorphism. (https://docs.python.org/3/tutorial/classes.html)
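A minimal sketch (not part of the original notebook) of what a class with inheritance looks like:
class Particle:
    def __init__(self, mass):
        self.mass = mass                 # encapsulated state
    def energy(self):
        return self.mass * 299792458**2  # E = m c^2

class Electron(Particle):                # inheritance
    def __init__(self):
        super().__init__(9.11e-31)

print(Electron().energy())               # same interface as Particle (polymorphism)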
Shipped with batteries included
For Python there exist a huge number of modules that extend the potentiality of Python. Here are some examples:
OS
os is a module for interacting with the system and with files | # Modules have to be imported
# In this way I import the whole module
import os
# To access an object inside the module I have to prepend the name
# In this way I import only a function but I don't have to prepend the
# module's name
from os import getcwd
print(os.getcwd())
print(getcwd()) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 67a3898f78c0cfd00a2d350b420bb22a |
os, combined with Python's string-manipulation capabilities, gives a very simple way to interact with files and directories | dir = "test"
files = os.listdir(dir)
print(files)
# Sorting
files.sort()
print(files)
# I take the subset starting with d and not ending with 10 and that are not directories
dfiles = [f for f in files if f.startswith("d") and not f.endswith("10") and not os.path.isdir(f)]
print(dfiles)
for f in dfiles:
data = f.split("_")
n1 = data[1]
n2 = data[2]
print("From the name of the file {} I have extrected {} {}".format(f, n1, n2)) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 29ed325ff2910883644b97f6be1e0c0a |
Sys (and argparse)
sys is another module for interacting with the system or obtaining information about it, in particular by means of the command line.
argparse is a module for defining flags and arguments. | import sys
# sys provides the simplest way to pass command line arguments to a python script
print(sys.argv[0])
# argparse is more flexible but requires also more setup | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 20e9ffbc3aa46752a459229bb39f2540 |
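Since the cell above only mentions argparse, here is a minimal sketch of the setup it refers to (the flag names are made up for illustration):
import argparse

parser = argparse.ArgumentParser(description="Example command line interface")
parser.add_argument("filename", help="input file to process")
parser.add_argument("-n", "--num", type=int, default=10, help="number of lines to read")
parser.add_argument("-v", "--verbose", action="store_true", help="print extra output")

# parse an explicit list here so the example also runs inside a notebook
args = parser.parse_args(["data.txt", "-n", "5", "--verbose"])
print(args.filename, args.num, args.verbose)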
NumPy
NumPy is a module that provides a framework for numerical applications. It defines new, highly optimized data types (NumPy is written in C) and provides simple interfaces for importing data from files and manipulating them. It is well integrated with the other scientific libraries for Python, as it serves as the base in many cases (SciPy, Matplotlib, Pandas, ...). Its fundamental object is the numpy array.
With good (enough) documentation! | # Standard import
import numpy as np
# Array from list
num = [0,1,2]
print("List:", num)
x = np.array(num)
print("Array:", x)
y = np.random.randint(3, size = (3))
print("Random", y)
z = np.array([x,y])
print("z:", z)
print("Shape", z.shape)
zres = z.reshape(3,2)
print("z reshaped:", zres)
# Attention: numpy does not alter any object!
# Operation behave well on arrays
y3 = y + 3
print("y + 3:", y3)
print("y squared:", y**2)
# Many built-in operations
print("Scalar product:", np.dot(x,y))
# Handy way to create an equispaced array
xx = np.linspace(0, 15, 16)
print("xx:", xx)
yy = np.array([x**2 for x in xx])
print("yy:", yy)
zz = yy.reshape(4,4)
print("zz", zz)
print("Eigenvalues:", np.linalg.eigvals(zz)) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 1a2d3dc36b393b80df37a22f2d7aee72 |
NumPy offers tools for:
- Linear algebra
- Logic functions
- Datatypes
- Constants of nature
- Mathematical functions (also special ones, such as Hermite, Legendre...)
- Polynomials
- Statistics
- Sorting, searching and counting
- Fourier Transform
- Random generation
- Integration with C/C++ and Fortran code | # Example: Polynomial x^2 + 2 x + 1
p = np.poly1d([1, 2, 1])
print(p)
# Evaluate it at 1
print("p(1):", p(1))
# Find the roots
print("Roots:", p.r)
# Take derivative
print("Deriv:", np.polyder(p)) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 7306f45809ceb2e24d0397a2a0d877a3 |
Interaction with files is really simple | arr = np.random.random(10)
# Prints a single column file, for arrays print many columns
np.savetxt("array.dat", arr)
files = os.listdir(".")
print([f for f in files if f == "array.dat"])
data = np.loadtxt("array.dat")
print(data) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | a3db3613fa9c9cffded3f5bd5c5a6c89 |
It is possible to save the data gzip-compressed by appending .gz to the file name (np.savetxt compresses automatically in that case, e.g. array.dat.gz).
REMEMBER:
- To create: tar cvzf archive.tar.gz folder
- To extract: tar xvzf archive.tar.gz
Matplotlib
Matplotlib is the tool for plotting and graphics | import matplotlib.pyplot as plt
plt.plot(arr)
plt.ylabel('Some numbers')
plt.xlabel('An index')
plt.title("The title!")
plt.show() | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | afe366ccf6ce511c71107c8df2a59b2f |
Matplotlib has a seamless integration with NumPy | x = np.linspace(0,2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)
plt.plot(x, y, "r-", x, z, "g-")
plt.show() | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 77a77a5a9808ae5309242ce11467ed45 |
Matplotlib has a great library of examples (https://matplotlib.org/examples/) that in particular contains many of the most common plots (histograms, contour, scatter, pie, ...) | # Plot of the Lorenz Attractor based on Edward Lorenz's 1963 "Deterministic
# Nonperiodic Flow" publication.
# http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2
#
# Note: Because this is a simple non-linear ODE, it would be more easily
# done using SciPy's ode solver, but this approach depends only
# upon NumPy.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def lorenz(x, y, z, s=10, r=28, b=2.667):
x_dot = s*(y - x)
y_dot = r*x - y - x*z
z_dot = x*y - b*z
return x_dot, y_dot, z_dot
dt = 0.01
stepCnt = 10000
# Need one more for the initial values
xs = np.empty((stepCnt + 1,))
ys = np.empty((stepCnt + 1,))
zs = np.empty((stepCnt + 1,))
# Setting initial values
xs[0], ys[0], zs[0] = (0., 1., 1.05)
# Stepping through "time".
for i in range(stepCnt):
# Derivatives of the X, Y, Z state
x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i])
xs[i + 1] = xs[i] + (x_dot * dt)
ys[i + 1] = ys[i] + (y_dot * dt)
zs[i + 1] = zs[i] + (z_dot * dt)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(xs, ys, zs, lw=0.5)
ax.set_xlabel("X Axis")
ax.set_ylabel("Y Axis")
ax.set_zlabel("Z Axis")
ax.set_title("Lorenz Attractor")
plt.show() | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | da406a94d3493a4a32028769daa0a71b |
SciPy
SciPy is a module that relies on NumPy and provides many ready-made tools used in science. Examples:
- Optimization
- Integration
- Interpolation
- Signal processing
- Statistics
Example, minimize: $f\left(\mathbf{x}\right)=\sum_{i=1}^{N-1}100\left(x_{i}-x_{i-1}^{2}\right)^{2}+\left(1-x_{i-1}\right)^{2}.$ | import numpy as np
from scipy.optimize import minimize
def rosen(x):
"""The Rosenbrock function"""
return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead', options={'xtol': 1e-8, 'disp': True})
print(res.x) | seminar_3/Introduction to Python.ipynb | Sbozzolo/Open-Source-Tools-for-Physics | gpl-3.0 | 678f7ba22bff3563df5fbbf74862112e |
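The integration tools mentioned in the list above follow the same pattern; a small sketch using scipy.integrate.quad:
import numpy as np
from scipy.integrate import quad

# integrate sin(x) from 0 to pi; the exact result is 2
result, abs_error = quad(np.sin, 0, np.pi)
print(result, abs_error)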
True Values
The "true" values can be computed analytically in this case, so we did so.
We can also compute the distribution for weighting the errors. | def compute_value_dct(theta_lst, features):
return [{s: np.dot(theta, x) for s, x in features.items()} for theta in theta_lst]
def compute_values(theta_lst, X):
return [np.dot(X, theta) for theta in theta_lst]
def compute_errors(value_lst, error_func):
return [error_func(v) for v in value_lst]
def rmse_factory(true_values, d=None):
true_values = np.ravel(true_values)
# sensible default for weighting distribution
if d is None:
d = np.ones_like(true_values)
else:
d = np.ravel(d)
assert(len(d) == len(true_values))
# the actual root-mean square error
def func(v):
diff = true_values - v
return np.sqrt(np.mean(d*diff**2))
return func | rlbench/off_policy_comparison-short.ipynb | rldotai/rlbench | gpl-3.0 | 32310a2d640d4b6c612b422431c363c2 |
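A quick usage sketch of rmse_factory with toy numbers (illustrative only, not part of the experiment): the factory closes over the true values and the weighting distribution and returns a function of the estimated values.
true_v = np.array([1.0, 2.0, 3.0])
weights = np.array([0.2, 0.3, 0.5])
rmse = rmse_factory(true_v, d=weights)
print(rmse(np.array([1.1, 1.9, 3.2])))  # weighted root-mean-square error of the estimate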
Comparing the Errors
For each algorithm, we get the associated experiment, and calculate the errors at each timestep, averaged over the runs performed with that algorithm. | # define the experiment
num_states = 8
num_features = 6
num_active = 3
num_runs = 10
max_steps = 10000
# set up environment
env = chicken.Chicken(num_states)
# Define the target policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# Define the behavior policy
pol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})
# state-dependent gamma
gm_dct = {s: 0.9 for s in env.states}
gm_dct[0] = 0
gm_func = parametric.MapState(gm_dct)
gm_p_func = parametric.MapNextState(gm_dct)
# set up algorithm parameters
update_params = {
'alpha': 0.02,
'beta': 0.002,
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 0.0,
'lm_p': 0.0,
'interest': 1.0,
}
# Run all available algorithms
data = dict()
for name, alg in algos.algo_registry.items():
print(name)
run_lst = []
for i in range(num_runs):
print("Run: %d"%i, end="\r")
episode_data = dict()
# Want to use random features
phi = features.RandomBinary(num_features, num_active)
episode_data['features'] = {s: phi(s) for s in env.states}
# Set up the agent
_update_params = update_params.copy()
if name == 'ETD':
_update_params['alpha'] = 0.002
agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, _update_params)
# Run the experiment
episode_data['steps'] = run_contextual(agent, env, max_steps)
run_lst.append(episode_data)
data[name] = run_lst
# True values & associated stationary distribution
theta_ls = np.array([ 0.4782969, 0.531441 , 0.59049, 0.6561, 0.729, 0.81, 0.9, 1.])
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
# define the error/objective function
err_func = rmse_factory(theta_ls, d=d_pi)
baseline = err_func(np.zeros(num_states))
for name, experiment in data.items():
print(name)
errors = []
    for episode in experiment:
        feats = episode['features']
        X = np.array([feats[k] for k in sorted(feats.keys())])
        steps = episode['steps']
        thetas = list(pluck('theta', steps))
# compute the values at each step
val_lst = compute_values(thetas, X)
# compute the errors at each step
err_lst = compute_errors(val_lst, err_func)
errors.append(err_lst)
# calculate the average error
clipped_errs = np.clip(errors, 0, 100)
avg_err = np.mean(clipped_errs, axis=0)
# plot the errors
fig, ax = plt.subplots()
ax.plot(avg_err)
# format the graph
ax.set_ylim(1e-2, 2)
ax.axhline(baseline, c='red')
ax.set_yscale('log')
plt.show() | rlbench/off_policy_comparison-short.ipynb | rldotai/rlbench | gpl-3.0 | 79a765f9e6853d000696ffc8c55adc90 |
Exercise: Weighted Networks
Create an undirected network with the following weights.
(a, b) = 0.3
(a, c) = 1.0
(a, d) = 0.9
(a, e) = 1.0
(a, f) = 0.4
(c, f) = 0.2
(b, h) = 0.2
(f, j) = 0.8
(f, g) = 0.9
(j, g) = 0.6
(g, k) = 0.4
(g, h) = 0.2
(k, h) = 1.0 | # To create a weighted, undirected graph, the edges must be provided in the form: (node1, node2, weight)
edges = [('a', 'b', 0.3), ('a', 'c', 1.0), ('a', 'd', 0.9), ('a', 'e', 1.0), ('a', 'f', 0.4),
('c', 'f', 0.2), ('b', 'h', 0.2), ('f', 'j', 0.8), ('f', 'g', 0.9), ('j', 'g', 0.6),
('g', 'k', 0.4), ('g', 'h', 0.2), ('k', 'h', 1.0)]
def edges_to_weighted_graph(edges):
edges = list(edges)
graph = {}
for i in range(0,len(edges)):
if graph.get(edges[i][0], None):
graph[edges[i][0]].add((edges[i][1], edges[i][2]))
else:
if len(edges[i]) == 3:
graph[edges[i][0]] = set([(edges[i][1],edges[i][2])])
else:
graph[edges[i][0]] = set([])
if len(edges[i]) == 3:
if graph.get(edges[i][1], None):
graph[edges[i][1]].add((edges[i][0],edges[i][2]))
else:
graph[edges[i][1]] = set([(edges[i][0],edges[i][2])])
return graph
graph = edges_to_weighted_graph(edges)
print (graph)
""" With NetworkX """
FG = nx.Graph()
FG.add_weighted_edges_from(edges)
print (str(FG)) | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 1a1b65382d7676494b5571a8a796f712 |
Print the adjacency matrix | def adjacency_matrix(graph):
keys = list(graph.keys())
keys.sort()
adj_matrix = np.zeros((len(keys),len(keys)))
for node, edges in graph.items():
for edge in edges:
adj_matrix[keys.index(node)][keys.index(edge[0])] = edge[1]
return (adj_matrix, keys)
print (adjacency_matrix(graph))
""" With NetworkX """
A = nx.adjacency_matrix(FG)
print (A) | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | a23401888fc8c47cd3cb1662d1399383 |
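Note that nx.adjacency_matrix returns a SciPy sparse matrix, so printing it shows coordinate format; a short sketch to see it as a dense weighted array:
print(A.todense())   # dense weighted adjacency matrix
print(FG.nodes())    # row/column order used by the matrix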
Ejercicio Weak & Strong ties
Con la misma red anterior asuma que un link debil es inferior a 0.5, cree un cรณdigo que calcule si se cumple la propiedad "strong triadic closure" | def weighted_element_neighbours(tuple_graph, element):
for index, item in enumerate(tuple_graph):
if element[0] == item[0]:
neighbours = [i[0] for i in item[1]]
return neighbours
raise IndexNotFoundError('Error: the requested element was not found')
def weighted_graph_to_tuples(graph):
output_graph = []
for node, neighbours in graph.items():
output_graph.append((node,list(neighbours)))
return output_graph
def triadic_closure(graph):
tuple_graph = weighted_graph_to_tuples(graph)
L = np.zeros((len(tuple_graph),), dtype=np.int)
for i in range(0, len(tuple_graph)):
element_at_i = tuple_graph[i][0]
for j in range(0, len(tuple_graph[i][1])-1):
current = tuple_graph[i][1][j]
weight_current = current[1]
if weight_current >= 0.5:
for k in range(j+1, len(tuple_graph[i][1])):
comparison = tuple_graph[i][1][k]
weight_comparison = comparison[1]
if weight_comparison >= 0.5:
# Search if there is a link
if not comparison[0] in weighted_element_neighbours(tuple_graph, current):
return False
return True
print(triadic_closure(graph))
edges2 = [('a','b',0.1),('a','c',0.5),('a','d',0.9),('a','e',0.6),('c','d',0.1),('c','e',0.4),('d','e',0.9)]
graph2 = edges_to_weighted_graph(edges2)
print(triadic_closure(graph2))
""" With NetworkX """
| santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 8f60f6c8b9ceb3f37b1fde01c90cf7c6 |
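The "With NetworkX" cell above was left empty; a possible sketch of how it could be filled in, reusing the weighted graph FG built earlier (an illustrative implementation, not the original author's):
def triadic_closure_nx(G, threshold=0.5):
    for node in G.nodes():
        strong = [nbr for nbr in G[node] if G[node][nbr]['weight'] >= threshold]
        for i in range(len(strong) - 1):
            for j in range(i + 1, len(strong)):
                if not G.has_edge(strong[i], strong[j]):
                    return False
    return True

print(triadic_closure_nx(FG))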
Change the weight of one of the links above so that the property no longer holds, and check that it is indeed violated. Explain.
Write code that detects local bridges and computes the span of each local bridge | import copy
""" The following code is thought for unweighted graphs """
edges3 = [(1,2),(1,3),(1,5),(5,6),(2,6),(2,1),(2,4)]
edges4 = [('a','b'),('a','c'),('a','d'),('a','e'),('a','f'),
('b','h'),('c','d'),('c','e'),('c','f'),('d','e'),
('f','j'),('f','g'),('j','g'),('g','k'),('g','h'),
('k','h')]
""" This function was taken from Python Software Foundation.
Python Patterns - Implementing Graphs. https://www.python.org/doc/essays/graphs/
(Visited in march 2017) """
def find_shortest_path(graph, start, end, path=[]):
path = path + [start]
if start == end:
return path
if not start in graph:
return None
shortest = None
for next in graph[start]:
if next not in path:
newpath = find_shortest_path(graph, next, end, path)
if newpath:
if not shortest or len(newpath) < len(shortest):
shortest = newpath
return shortest
# Returns a tuple containing two values:
# Input: an undirected graph G in form of a dict
# (True, span) if there is a local bridge (span > 2) between two nodes
# (True, None) if there is a bridge between two nodes
# (False, None) otherwise
def bridge(graph, start, end):
if not end in graph[start]:
return (False, None)
new_graph = copy.deepcopy(graph)
new_graph[start] = graph[start] - {end}
new_graph[end] = graph[end] - {start}
span_path = find_shortest_path(new_graph, start, end)
if not span_path:
# Global bridge
return (True, None)
path_length = len(span_path) - 1
if path_length > 2:
return (True, path_length)
elif path_length == 2:
return (False, path_length)
elif path_length == 1:
raise MultiGraphNotAllowedError('Error: Multigraphs are not allowed')
else:
raise ReflexiveRelationsNotAllowedError('Error: Reflexive relations are not allowed')
graph3 = edges_to_graph(edges3)
# Return the places of the graph where there is a bridge and the
# span of each bridge as a vector of tuples in the form (start, end, span)
def local_bridges(graph):
nodes = list(graph.keys())
result = []
for i in range(0, len(nodes)-1):
node1 = nodes[i]
for j in range(i+1, len(nodes)):
node2 = nodes[j]
brd = bridge(graph, nodes[i], nodes[j])
if brd[0] and brd[1] != None:
result.append((nodes[i],nodes[j],{'span':brd[1]}))
return result
brds = local_bridges(graph3)
print(brds)
graph4 = edges_to_graph(edges4)
print(local_bridges(graph4))
def distance_matrix(graph):
keys = list(graph.keys())
keys.sort()
d_matrix = np.zeros((len(keys),len(keys)))
for i in range(0, len(keys)):
for j in range(0, len(keys)):
start = keys[i]
end = keys[j]
path = find_shortest_path(graph, start, end)
d_matrix[i][j] = len(path)-1
return (d_matrix, keys)
""" With NetworkX """
| santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 8564fdbdca41c16829b729ee32a24609 |
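The empty "With NetworkX" cell above could be filled with nx.local_bridges, which yields (u, v, span) tuples directly; note that this helper only exists in relatively recent NetworkX releases (2.2+), so this is a sketch rather than a drop-in replacement:
G4 = nx.Graph(edges4)
try:
    print(list(nx.local_bridges(G4, with_span=True)))
except AttributeError:
    print("nx.local_bridges requires NetworkX >= 2.2")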
Exercise: Random Networks
Generate 1000 random networks with N = 12, p = 1/6 and plot the distribution of the number of links | import random
import seaborn as sns
%matplotlib inline
N = 12
p = float(1)/6
def random_network_links(N, p):
edges = []
for i in range(0, N-1):
for j in range(i+1, N):
rand = random.random()
if rand <= p:
edges.append((i+1,j+1))
return edges
def random_network_links2(N, p):
edges = []
adj_matrix = np.zeros((N,N), dtype=int)
for i in range(0, N-1):
for j in range(i+1, N):
rand = random.random()
if rand <= p:
edges.append((i+1,j+1))
adj_matrix[i][j] = 1
adj_matrix[j][i] = 1
for i in range(0, N):
if sum(adj_matrix[i]) == 0:
edges.append((i+1,))
return edges
# Returns a number of random networks in the form of a list of edges
def random_networks(number_of_networks, N, p):
networks = []
for i in range(0, number_of_networks):
networks.append(random_network_links2(N,p))
return networks
def len_edges(edges_graph):
result = 0
for edge in edges_graph:
if len(edge) == 2:
result += 1
return result
networks1 = random_networks(1000,N,p)
len_edges1 = [len_edges(i) for i in networks1]
ax = sns.distplot(len_edges1)
""" With NetworkX """
def random_networks_nx(number_of_networks, N, p):
networks = []
for i in range(0, number_of_networks):
G_ran = nx.gnp_random_graph(N,p)
networks.append(G_ran)
return networks
networks2 = random_networks_nx(1000,N,p)
len_edges2 = [len(G.edges()) for G in networks2]
sns.distplot(len_edges2)
| santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | a468a8f83b536682007fc78c33c11997 |
Grafique la distribuciรณn del promedio de grados en cada una de las redes generadas del ejercicio anterior | % matplotlib inline
# Transform the list of lists of edges to a list of dicts, this is done to
# calculate the average degree distribution in the next methods
networks1_graph = [edges_to_graph(edges) for edges in networks1]
def degrees(graph):
degrees = {}
for node, links in graph.items():
degrees[node] = len(links)
return degrees
def avg_degree(graph):
dgrs = degrees(graph)
return float(sum(dgrs.values()))/len(dgrs)
avg_degrees1 = [avg_degree(network) for network in networks1_graph]
ax = sns.distplot(avg_degrees1)
""" With NetworkX """
def avg_degree_nx(graph):
graph_degrees = graph.degree()
return float(sum(graph_degrees.values()))/len(graph_degrees)
avg_degrees2 = [avg_degree_nx(network) for network in networks2]
sns.distplot(avg_degrees2) | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 48a7e5a2f09e651c158a53067f9cf269 |
Do the same for networks with 100 nodes | % matplotlib inline
networks100_1 = random_networks(1000, 100, p)
networks100_2 = random_networks_nx(1000,100,p)
len_edges100_1 = [len_edges(i) for i in networks100_1]
ax = sns.distplot(len_edges100_1)
len_edges100_2 = [len(G.edges()) for G in networks100_2]
sns.distplot(len_edges100_2)
networks100_1_graph = [edges_to_graph(edges) for edges in networks100_1]
avg_degrees100_1 = [avg_degree(network) for network in networks100_1_graph]
avg_degrees100_2 = [avg_degree_nx(network) for network in networks100_2]
ax = sns.distplot(avg_degrees100_1)
sns.distplot(avg_degrees100_2)
| santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | d5d14dc8f95adc7b33fa8d353ef76ae8 |
Exercise: Random Networks - Giant Component
Plot how the size of the largest component of a random network with N=100 nodes grows for different values of p
(plot for an average degree between 0 and 4, in steps of 0.05) | """ The following code snippet was taken from Mann, Edd. Depth-First Search and Breadth-First Search in Python.
http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/ """
graph5 = copy.deepcopy(graph4)
graph5['m'] = {'n'}
graph5['n'] = {'m'}
def bfs(graph, start):
visited, queue = set(), collections.deque([start])
while queue:
vertex = queue.popleft()
if vertex not in visited:
visited.add(vertex)
queue.extend(graph[vertex] - visited)
return visited
# return a list of lists of nodes of 'graph' each one being the nodes that
# define a specific connected component of of 'graph'
def connected_components(graph):
components = []
nodes = set(graph.keys())
while len(nodes):
root = next(iter(nodes))
visited = bfs(graph, root)
components.append(visited)
nodes = nodes - visited
return components
# Returns a set containing the nodes of a graph's biggest component
def biggest_component_nodes(graph):
components = connected_components(graph)
lengths = [len(component) for component in components]
max_component = 0
max_index = -1
for i in range(0, len(lengths)):
if lengths[i] > max_component:
max_component = lengths[i]
max_index = i
return components[max_index]
# Returns a subgraph containing the biggest connected component of 'graph'
def biggest_component(graph):
nodes = biggest_component_nodes(graph)
nodes = list(nodes)
subgraph = {k:graph[k] for k in nodes if k in graph}
return subgraph
# Plot results
import matplotlib.pyplot as plt
import plotly.plotly as py
from plotly.graph_objs import Scatter, Figure, Layout
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
def plot_giant_component_growth(N):
p_vector = []
avg_degree_vector = []
p = 0.0
while p <= 1:
p_vector.append(p)
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
avg_degree_vector.append(avg_degree(component))
p += 0.05
plt.plot(p_vector, avg_degree_vector, "o")
plot_giant_component_growth(100)
| santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 411fb66f01402cb135aa2114240d4c87 |
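The same growth curve can also be sketched with NetworkX's built-in helpers (illustrative only; it mirrors the pure-Python version above):
import numpy as np

ps = np.arange(0.0, 1.0, 0.05)
giant_sizes = []
for p in ps:
    G_ran = nx.gnp_random_graph(100, p)
    giant = max(nx.connected_components(G_ran), key=len)
    giant_sizes.append(len(giant))
plt.plot(ps, giant_sizes, "o")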
Grafique cuรกl es el porcentaje de nodos del componente mรกs grande para diferentes valores de p | def plot_giant_component_growth_nodes(N):
p_vector = []
node_percentages = []
p = 0.0
while p <= 1:
p_vector.append(p)
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
component_percentage = float(len(component))/len(network)
node_percentages.append(component_percentage)
p += 0.001
plt.plot(p_vector, node_percentages, "o")
plot_giant_component_growth_nodes(100) | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | d9f437ef4f99288d6e9b6cc1d7160dc0 |
Identify for which values of p the largest component is fully connected | def identify_p_value_for_total_connection(N):
p = 0.0
while p <= 1:
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
component_percentage = float(len(component))/len(network)
if component_percentage == 1:
return p
p += 0.001
return 1 # Default value for a totally connected component
identify_p_value_for_total_connection(100) | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | spulido99/NetworksAnalysis | mit | 389ef5ed63a3d7aba6dc439d83b43a3d |
This problem originated from a blog post I wrote for DataCamp on graph optimization here. The algorithm I sketched out there for solving the Chinese Postman Problem on the Sleeping Giant state park trail network has since been formalized into the postman_problems python library. I've also added the Rural Postman solver that is implemented here.
So the three main enhancements in this post from the original DataCamp article and my second iteration published here updating to networkx 2.0 are:
1. OpenStreetMap for graph data and visualization.
2. Implementing the Rural Postman algorithm to consider optional edges.
3. Leveraging the postman_problems library.
The code, notebook and data for this post can be found in the postman_problems_examples repo.
The motivation and background around this problem are written up more thoroughly in the previous posts and in postman_problems.
Table of Contents
Table of Contents
{:toc} | import mplleaflet
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
# can be found in https://github.com/brooksandrew/postman_problems_examples
from osm2nx import read_osm, haversine
from graph import contract_edges, create_rpp_edgelist
from postman_problems.tests.utils import create_mock_csv_from_dataframe
from postman_problems.solver import rpp, cpp
from postman_problems.stats import calculate_postman_solution_stats | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 55516ee259ab01966e0838ee5321b42e |
Create Graph from OSM | # load OSM to a directed NX
g_d = read_osm('sleepinggiant.osm')
# create an undirected graph
g = g_d.to_undirected() | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | dbf044faeff624ec1870307f6c708128 |
Adding edges that don't exist on OSM, but should | g.add_edge('2318082790', '2318082832', id='white_horseshoe_fix_1') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 0512c1d42d80309f4d1d45637a85f2b5 |
Adding distance to OSM graph
Using the haversine formula to calculate the length of each edge from its endpoint coordinates. | for e in g.edges(data=True):
e[2]['distance'] = haversine(g.node[e[0]]['lon'],
g.node[e[0]]['lat'],
g.node[e[1]]['lon'],
g.node[e[1]]['lat']) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 0b497ae12bb93fc01eb5639df94924de |
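For reference, a standalone haversine sketch in meters (the argument order lon1, lat1, lon2, lat2 is assumed from the call above; the actual implementation lives in osm2nx):
import math

def haversine_m(lon1, lat1, lon2, lat2, radius_m=6371000):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2)**2
    return 2 * radius_m * math.asin(math.sqrt(a))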
Create graph of required trails only
A simple heuristic with a couple tweaks is all we need to create the graph with required edges:
Keep any edge with 'Trail' in the name attribute.
Manually remove the handful of trails that are not part of the required Giant Master route. | g_t = g.copy()
for e in g.edges(data=True):
# remove non trails
name = e[2]['name'] if 'name' in e[2] else ''
if ('Trail' not in name.split()) or (name is None):
g_t.remove_edge(e[0], e[1])
# remove non Sleeping Giant trails
elif name in [
'Farmington Canal Linear Trail',
'Farmington Canal Heritage Trail',
'Montowese Trail',
'(white blazes)']:
g_t.remove_edge(e[0], e[1])
# cleaning up nodes left without edges
for n in nx.isolates(g_t.copy()):
g_t.remove_node(n) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | d617cd35682a3a38286cf7ab969cb5fc |
Viz Sleeping Giant Trails
All trails required for the Giant Master: | fig, ax = plt.subplots(figsize=(1,8))
pos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()}
nx.draw_networkx_edges(g_t, pos, width=2.5, edge_color='black', alpha=0.7)
mplleaflet.save_html(fig, 'maps/sleepinggiant_trails_only.html') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 17d861da69c88a4b4eed09a18d72303d |
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/sleepinggiant_trails_only.html" height="400" width="750"></iframe>
Connect Edges
In order to run the RPP algorithm from postman_problems, the required edges of the graph must form a single connected component. We're almost there with the Sleeping Giant trail map as-is, so we'll just connect a few components manually.
Here's an example of a few floating components (southwest corner of park):
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_disconnected_components.png" width="500">
OpenStreetMap makes finding these edge (way) IDs simple. After grabbing the ? cursor, you can click on any edge to retrieve its IDs and attributes.
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/osm_edge_lookup.png" width="1000">
Define OSM edges to add and remove from graph | edge_ids_to_add = [
'223082783',
'223077827',
'40636272',
'223082785',
'222868698',
'223083721',
'222947116',
'222711152',
'222711155',
'222860964',
'223083718',
'222867540',
'white_horseshoe_fix_1'
]
edge_ids_to_remove = [
'17220599'
] | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 8129e414564cf1cd4ef3f4fce16f381f |
Add attributes for supplementary edges | for e in g.edges(data=True):
way_id = e[2].get('id').split('-')[0]
if way_id in edge_ids_to_add:
g_t.add_edge(e[0], e[1], **e[2])
g_t.add_node(e[0], lat=g.node[e[0]]['lat'], lon=g.node[e[0]]['lon'])
g_t.add_node(e[1], lat=g.node[e[1]]['lat'], lon=g.node[e[1]]['lon'])
if way_id in edge_ids_to_remove:
if g_t.has_edge(e[0], e[1]):
g_t.remove_edge(e[0], e[1])
for n in nx.isolates(g_t.copy()):
g_t.remove_node(n) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 0216d8286275b1f9515159e12ead975f |
Ensuring that we're left with one single connected component: | len(list(nx.connected_components(g_t))) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 7129e297534ce837bd192254d7faa66e |
Viz Connected Component
The map below visualizes the required edges and nodes of interest (intersections and dead-ends where degree != 2): | fig, ax = plt.subplots(figsize=(1,12))
# edges
pos = {k: (g_t.node[k].get('lon'), g_t.node[k].get('lat')) for k in g_t.nodes()}
nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color='black', alpha=0.6)
# nodes (intersections and dead-ends)
pos_x = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes() if (g_t.degree(k)==1) | (g_t.degree(k)>2)}
nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=35.0, node_color='red', alpha=0.9)
mplleaflet.save_html(fig, 'maps/trails_only_intersections.html') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 49b6770a53521e60ee77a5a265b4fa68 |
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_intersections.html" height="400" width="750"></iframe>
Viz Trail Color
Because we can and it's pretty. | name2color = {
'Green Trail': 'green',
'Quinnipiac Trail': 'blue',
'Tower Trail': 'black',
'Yellow Trail': 'yellow',
'Red Square Trail': 'red',
'White/Blue Trail Link': 'lightblue',
'Orange Trail': 'orange',
'Mount Carmel Avenue': 'black',
'Violet Trail': 'violet',
'blue Trail': 'blue',
'Red Triangle Trail': 'red',
'Blue Trail': 'blue',
'Blue/Violet Trail Link': 'purple',
'Red Circle Trail': 'red',
'White Trail': 'gray',
'Red Diamond Trail': 'red',
'Yellow/Green Trail Link': 'yellowgreen',
'Nature Trail': 'forestgreen',
'Red Hexagon Trail': 'red',
None: 'black'
}
fig, ax = plt.subplots(figsize=(1,10))
pos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()}
e_color = [name2color[e[2].get('name')] for e in g_t.edges(data=True)]
nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color=e_color, alpha=0.5)
nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=30.0, node_color='black', alpha=0.9)
mplleaflet.save_html(fig, 'maps/trails_only_color.html', tiles='cartodb_positron') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | b2b1fee84c8fabd4ae79655008d1c9b9 |
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_color.html" height="400" width="750"></iframe>
Check distance
This is strikingly close (within 0.25 miles) to what I calculated manually with some guess work from the SG trail map on the first pass at this problem here, before leveraging OSM. | print('{:0.2f} miles of required trail.'.format(sum([e[2]['distance']/1609.34 for e in g_t.edges(data=True)]))) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 4c518501a047f7511bbc5a8486a77d0e |
Contract Edges
We could run the RPP algorithm on the graph as-is with >5000 edges. However, we can simplify computation by contracting edges into logical trail segments first. More details on the intuition and methodology in the 50 states post. | print('Number of edges in trail graph: {}'.format(len(g_t.edges())))
# intialize contracted graph
g_tc = nx.MultiGraph()
# add contracted edges to graph
for ce in contract_edges(g_t, 'distance'):
start_node, end_node, distance, path = ce
contracted_edge = {
'start_node': start_node,
'end_node': end_node,
'distance': distance,
'name': g[path[0]][path[1]].get('name'),
'required': 1,
'path': path
}
g_tc.add_edge(start_node, end_node, **contracted_edge)
g_tc.node[start_node]['lat'] = g.node[start_node]['lat']
g_tc.node[start_node]['lon'] = g.node[start_node]['lon']
g_tc.node[end_node]['lat'] = g.node[end_node]['lat']
g_tc.node[end_node]['lon'] = g.node[end_node]['lon'] | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 90d85e8550eb2c8eb3e2059178cecdc5 |
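For intuition, a simplified stand-alone sketch of what contract_edges does conceptually: walk through degree-2 nodes and merge their two incident edges into a single longer segment. This is not the repo's actual implementation (which keeps the full node path and uses a MultiGraph to handle parallel edges); it only illustrates the idea.
def contract_degree2_sketch(G, weight='distance'):
    H = G.copy()
    for n in list(G.nodes()):
        if n in H and H.degree(n) == 2:
            u, v = list(H.neighbors(n))
            # skip contractions that would create a parallel edge or a self-loop
            if u != v and not H.has_edge(u, v):
                d = H[u][n][weight] + H[n][v][weight]
                H.remove_node(n)
                H.add_edge(u, v, **{weight: d})
    return H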
Edge contraction reduces the number of edges fed to the RPP algorithm by a factor of ~40. | print('Number of edges in contracted trail graph: {}'.format(len(g_tc.edges()))) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 0f599daa64d1c3817ee20d4456c1df47
Solve CPP
First, let's see how well the Chinese Postman solution works.
Create CPP edgelist | # create list with edge attributes and "from" & "to" nodes
tmp = []
for e in g_tc.edges(data=True):
tmpi = e[2].copy() # so we don't mess w original graph
tmpi['start_node'] = e[0]
tmpi['end_node'] = e[1]
tmp.append(tmpi)
# create dataframe w node1 and node2 in order
eldf = pd.DataFrame(tmp)
eldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})]
# create edgelist mock CSV
elfn = create_mock_csv_from_dataframe(eldf) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | bbd17349337698fecab72c72bacfae4a |
Start node
The route is designed to start at the far east end of the park on the Blue trail (node '735393342'). While the CPP and RPP solutions will return an Eulerian circuit (looping back to the starting node), we could truncate this last long double-backing segment when actually running the route.
<img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_starting_node.png" width="600">
Solve | circuit_cpp, gcpp = cpp(elfn, start_node='735393342') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | d1484b7d33a8e5d71e2cc19b3891b3b1 |
CPP Stats
(distances in meters) | cpp_stats = calculate_postman_solution_stats(circuit_cpp)
cpp_stats
print('Miles in CPP solution: {:0.2f}'.format(cpp_stats['distance_walked']/1609.34)) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 7483a9cb3e602f3710555953505f6817 |
Solve RPP
With the CPP as benchmark, let's see how well we do when we allow for optional edges in the route. | %%time
dfrpp = create_rpp_edgelist(g_tc,
graph_full=g,
edge_weight='distance',
max_distance=2500) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 48cad7b6e4679729c704f7335274727e |
Required vs optional edge counts
(1=required and 0=optional) | Counter( dfrpp['required']) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 113129279e549a9b85078d6cf3d4107d |
Solve RPP | # create mockfilename
elfn = create_mock_csv_from_dataframe(dfrpp)
%%time
# solve
circuit_rpp, grpp = rpp(elfn, start_node='735393342') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | ea944b46c81ec33a48e8c773e6b32206 |
RPP Stats
(distances in meters) | rpp_stats = calculate_postman_solution_stats(circuit_rpp)
rpp_stats | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | bbae978f15bcb337c00d6c2277abed0f |
Leveraging the optional roads and trails, we're able to shave about 3 miles off the CPP route. Total mileage checks in at 30.71, just under a 50K (31.1 miles). | print('Miles in RPP solution: {:0.2f}'.format(rpp_stats['distance_walked']/1609.34)) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | cddfaad3ef7a4582cfb95b0104d24210
Viz RPP Solution | # hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe`
for e in circuit_rpp:
if type(e[3]['path']) == str:
exec('e[3]["path"]=' + e[3]["path"]) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | cc4f729e1c9bbb41cbf91d99a6996c05 |
Create graph from RPP solution | g_tcg = g_tc.copy()
# calc shortest path between optional nodes and add to graph
for e in circuit_rpp:
granular_type = 'trail' if e[3]['required'] else 'optional'
# add granular optional edges to g_tcg
path = e[3]['path']
for pair in list(zip(path[:-1], path[1:])):
if (g_tcg.has_edge(pair[0], pair[1])) and (g_tcg[pair[0]][pair[1]][0].get('granular_type') == 'optional'):
g_tcg[pair[0]][pair[1]][0]['granular_type'] = 'trail'
else:
g_tcg.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type)
# add granular nodes from optional edge paths to g_tcg
for n in path:
g_tcg.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon']) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | c0359d5b43bf291b3920a6640fd591e2 |
Viz: RPP optional edges
The RPP algorithm picks up some logical shortcuts using the optional trails and a couple short stretches of road.
<font color='black'>black</font>: required trails
<font color='blue'>blue</font>: optional trails and roads | fig, ax = plt.subplots(figsize=(1,8))
pos = {k: (g_tcg.node[k].get('lon'), g_tcg.node[k].get('lat')) for k in g_tcg.nodes()}
el_opt = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'optional']
nx.draw_networkx_edges(g_tcg, pos, edgelist=el_opt, width=6.0, edge_color='blue', alpha=1.0)
el_tr = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'trail']
nx.draw_networkx_edges(g_tcg, pos, edgelist=el_tr, width=3.0, edge_color='black', alpha=0.8)
mplleaflet.save_html(fig, 'maps/rpp_solution_opt_edges.html', tiles='cartodb_positron') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 636a1ee7816d749a3fe0044f9f10095e |
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_opt_edges.html" height="400" width="750"></iframe>
Viz: RPP edges counts | ## Create graph directly from rpp_circuit and original graph w lat/lon (g)
color_seq = [None, 'black', 'magenta', 'orange', 'yellow']
grppviz = nx.MultiGraph()
for e in circuit_rpp:
for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]):
if grppviz.has_edge(n1, n2):
grppviz[n1][n2][0]['linewidth'] += 2
grppviz[n1][n2][0]['cnt'] += 1
else:
grppviz.add_edge(n1, n2, linewidth=2.5)
grppviz[n1][n2][0]['color_st'] = 'black' if g_t.has_edge(n1, n2) else 'red'
grppviz[n1][n2][0]['cnt'] = 1
grppviz.add_node(n1, lat=g.node[n1]['lat'], lon=g.node[n1]['lon'])
grppviz.add_node(n2, lat=g.node[n2]['lat'], lon=g.node[n2]['lon'])
for e in grppviz.edges(data=True):
e[2]['color_cnt'] = color_seq[1] if 'cnt' not in e[2] else color_seq[e[2]['cnt'] ]
| _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 92a6b49a7ad6849ea3e96f80e8f0d736 |
Edge walks per color:
<font color='black'>black</font>: 1 <br>
<font color='magenta'>magenta</font>: 2 <br> | fig, ax = plt.subplots(figsize=(1,10))
pos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()}
e_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)]
e_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)]
nx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7)
mplleaflet.save_html(fig, 'maps/rpp_solution_edge_cnts.html', tiles='cartodb_positron') | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 8aefbf3bb5e2af231d717af88a69fc5a |
<iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_edge_cnts.html" height="400" width="750"></iframe>
Create geojson solution
Used for the forthcoming D3 route animation. | import json
geojson = {'features':[], 'type': 'FeatureCollection'}
time = 0
path = list(reversed(circuit_rpp[0][3]['path']))
for e in circuit_rpp:
if e[3]['path'][0] != path[-1]:
path = list(reversed(e[3]['path']))
else:
path = e[3]['path']
for n in path:
time += 1
doc = {'type': 'Feature',
'properties': {
'latitude': g.node[n]['lat'],
'longitude': g.node[n]['lon'],
'time': time,
'id': e[3].get('id')
},
'geometry':{
'type': 'Point',
'coordinates': [g.node[n]['lon'], g.node[n]['lat']]
}
}
geojson['features'].append(doc)
with open('circuit_rpp.geojson','w') as f:
json.dump(geojson, f) | _ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb | brooksandrew/simpleblog | mit | 897df53c0c86d908eb0d82dd4646d526 |