How to do POST using requests module with Flask server?
Question: I am having trouble uploading a file to my Flask server using the Requests
module for Python.
import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename

UPLOAD_FOLDER = '/Upload/'

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

@app.route("/", methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        file = request.files['file']
        if file:
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return redirect(url_for('index'))
    return """
    <!doctype html>
    <title>Upload new File</title>
    <h1>Upload new File</h1>
    <form action="" method=post enctype=multipart/form-data>
      <p><input type=file name=file>
         <input type=submit value=Upload>
    </form>
    <p>%s</p>
    """ % "<br>".join(os.listdir(app.config['UPLOAD_FOLDER']))

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
I am able to upload a file via the web page, but I wanted to upload a file with the
requests module like this:
import requests
r = requests.post('http://127.0.0.1:5000', files={'random.txt': open('random.txt', 'rb')})
It keeps returning 400 and saying that "The browser (or proxy) sent a request
that this server could not understand"
I feel like I am missing something simple, but I cannot figure it out.
Answer: You upload the file as the `random.txt` field:
files={'random.txt': open('random.txt', 'rb')}
#      ^^^^^^^^^^^^ this is the field name
but look for a field named `file` instead:
file = request.files['file']
#                     ^^^^^^ the field name
Make the two names match; for example, use `file` as the key in the `files` dictionary:
files={'file': open('random.txt', 'rb')}
Note that `requests` will automatically detect the filename for that open
fileobject and include it in the part headers.
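If you need to control the filename or content type that gets sent, `requests` also accepts a `(filename, fileobj, content_type)` tuple per entry; a minimal sketch:

import requests

# the key 'file' is the form field name; the tuple sets the filename and MIME type
files = {'file': ('random.txt', open('random.txt', 'rb'), 'text/plain')}
r = requests.post('http://127.0.0.1:5000', files=files)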
|
Scipy minimize a scalar with Brent method throws an Overflow 34
Question: I'd like to find a local minimum of the function `f(x) = x^3 + x^2 + x - 2`
where `x` is between `<-10; 10>`. I use Anaconda 3 on Windows 64bit.
My scipy python code throws an error:
from scipy import optimize

def f(x):
    return (x**3) + (x**2) + x - 2

x_min = optimize.minimize_scalar(f, bounds=[-10, 10], method='brent')
> OverflowError: (34, 'Result too large')
Isn't a cubic too simple a function to break a scipy optimization package?
Answer: When using local boundaries you must change `method` to `'bounded'`; the `'brent'` method does not use the `bounds` argument (it takes a `bracket` instead). Note also that `f'(x) = 3x^2 + 2x + 1` has no real roots, so `f` is strictly increasing and has no interior minimum; the unbounded Brent search therefore wanders toward `-inf` until `x**3` overflows, while the bounded search returns the boundary `x = -10`.
from scipy import optimize

def f(x):
    return (x**3) + (x**2) + x - 2

x_min = optimize.minimize_scalar(f, bounds=[-10, 10], method='bounded')
print(x_min)
|
How to insert NaN array into a numpy 2D array
Question: I'm trying to insert an arbitrary number of rows of NaN values within a 2D
array at specific places. I'm logging some data from a microcontroller in a
.csv file and parsing with python.
The data is stored in a 3 column 2D array like this
[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0) ...,
(39.0, 1.0, -47.0) (40.0, 1.0, -45.0) (41.0, 1.0, -47.0)]
The first column is a sequence counter. What I'm trying to do is iterate
through the sequence values, diff the current and previous sequence numbers, and
insert as many rows of NaN as there are missing sequences.
Basically,
[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (125.0, 1.0, -44.0)]
would become
[(122.0, 1.0, -47.0) (123.0, 1.0, -47.0) (nan, nan, nan) (125.0, 1.0, -44.0)]
However the following implementation of `np.insert` produces an error
while (i < len(list[1])):
    pid = list[i][0]
    newMissing = (pid - LastGoodId + 255) % 256
    TotalMissing = TotalMissing + newMissing
    np.insert(list, i, np.zeros(newMissing,1) + np.nan)
    i = i + newMissing
    list[i][0] = TotalMissing
    LastGoodId = pid
> ---> 28 np.insert(list,i,np.zeros(newMissing,1) + np.nan)
>      29 i = i + newMissing
>      30 list[i][0] = TotalMissing
>
> TypeError: data type not understood
Any ideas on how I can accomplish this?
Answer: From the [doc of
`np.insert()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html):
import numpy as np

a = np.array([(122.0, 1.0, -47.0), (123.0, 1.0, -47.0), (125.0, 1.0, -44.0)])
np.insert(a, 2, np.nan, axis=0)

array([[ 122.,    1.,  -47.],
       [ 123.,    1.,  -47.],
       [  nan,   nan,   nan],
       [ 125.,    1.,  -44.]])
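Since the goal is an arbitrary number of missing rows, note that `np.insert()` also accepts a 2-D block of values at a single index, and that it returns a new array rather than modifying its input (the loop in the question ignores the return value). A sketch, where `new_missing` is the computed gap size:

# insert new_missing all-NaN rows before position i; re-assign, since np.insert copies
a = np.insert(a, i, np.full((new_missing, 3), np.nan), axis=0)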
|
Extracting data from multiple files with python
Question: I'm trying to extract data from a directory with 12 .txt files. Each file
contains 3 columns of data (X, Y, Z) that I want to extract. I want to collect
all the data in one df (InfoDF), but so far I have only succeeded in creating a df
with all of the X, Y and Z data in the same column. This is my code:
import pandas as pd
import numpy as np
import os
import fnmatch

path = os.getcwd()
file_list = os.listdir(path)
InfoDF = pd.DataFrame()
for file in file_list:
    try:
        if fnmatch.fnmatch(file, '*.txt'):
            filedata = open(file, 'r')
            df = pd.read_table(filedata, delim_whitespace=True, names={'X','Y','Z'})
    except Exception as e:
        print(e)
What am I doing wrong?
Answer:
df = pd.read_table(filedata, delim_whitespace=True, names={'X','Y','Z'})

This line replaces `df` on each iteration of the loop; that's why you only have
the last one at the end of your program.

What you can do is save all your dataframes in a list and concatenate them
at the end:
df_list = []
for file in file_list:
    try:
        if fnmatch.fnmatch(file, '*.txt'):
            filedata = open(file, 'r')
            # use a list, not a set, for names so the column order is deterministic
            df_list.append(pd.read_table(filedata, delim_whitespace=True, names=['X', 'Y', 'Z']))
    except Exception as e:
        print(e)

df = pd.concat(df_list)
Alternatively, you can write it as:

df_list = [pd.read_table(open(file, 'r'), delim_whitespace=True, names=['X', 'Y', 'Z']) for file in file_list if fnmatch.fnmatch(file, '*.txt')]
df = pd.concat(df_list)
|
Calling a setuptools entry point from within the library
Question: I have a setuptools-based Python (3.5) project with multiple scripts as entry
points similar to the following:
entry_points={
    'console_scripts': [
        'main-prog=scripts.prog:main',
        'prog-viewer=scripts.prog_viewer:main'
    ]}
So there is supposed to be a main script, run as `main-prog` and an auxiliary
script `prog-viewer` (which does some Tk stuff).
The problem is that I want to be able to run `prog-viewer` in a `Popen`
subprocess from `main-prog` (or rather from my library) without having to
resort to manually figuring out the paths and then adapting to the different OS.
Also, what do I do when my PATH contains a script with the same name that does
not belong to my library? Can I tell my program to
`Popen(scripts.prog_viewer:main)`?
Answer: You could run a python command with Popen, for example:
Popen('python -c "from scripts.prog import main; main()"', shell=True)
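A slightly more robust variant (an assumption, not part of the original answer) avoids shell quoting and reuses the interpreter that is currently running, so the right environment is picked up:

import sys
from subprocess import Popen

# run the prog-viewer entry point's target with the current interpreter
Popen([sys.executable, '-c', 'from scripts.prog_viewer import main; main()'])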
|
How to Create a file at a specific path in python?
Question: I am writing the code below, which is not working:
cwd = os.getcwd()
print (cwd)
log = path.join(cwd,'log.out')
os.chdir(cwd) and Path(log.out).touch() and os.chmod(log.out, 777)
How can I create log.out in the cwd?
Answer: You can call the usual Linux `touch` command via `subprocess`:
import subprocess
subprocess.call(["touch", cwd+"/log.out"])
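If you would rather avoid spawning a process, a portable sketch using only the standard library (note that the mode must be written in octal, `0o777`, not `777`):

import os

log = os.path.join(cwd, 'log.out')
open(log, 'a').close()  # append mode creates the file if it does not exist
os.chmod(log, 0o777)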
|
how to check if date is in certain interval python?
Question: I'm importing dates from Yahoo Finance and want to transform them into a format
such that I can compare them with today's date, to check whether each date is between 3 and 9
months from now.
Here is what I have so far:
today = time.strftime("%Y-%m-%d")
today = datetime.datetime.strptime(today, '%Y-%m-%d')
int_begin = today + datetime.timedelta(days=90)
int_end = today + datetime.timedelta(days=270)

for i in opt["Expiry"]:
    transf_date = datetime.datetime.strptime(opt["Expiry"][1],'%b %d, %Y')
    transf_date = datetime.datetime.strftime(transf_date,"%Y-%m-%d")
    if int_begin <= transf_date and transf_date <= int_end:
        print "True:",i
    else:
        print "False:",i
Here is the content of opt["Expiry"]
0 Jan 20, 2017
1 Jan 20, 2017
2 Jan 20, 2017
3 Jan 20, 2017
4 Jan 20, 2017
5 Jan 20, 2017
6 Jan 20, 2017
7 Jan 20, 2017
8 Jan 20, 2017
9 Jan 20, 2017
10 Jan 20, 2017
11 Jan 20, 2017
12 Jan 20, 2017
13 Jan 20, 2017
14 Jan 20, 2017
15 Jan 20, 2017
16 Jan 20, 2017
17 Jan 19, 2018
18 Jan 20, 2017
19 Jan 19, 2018
20 Jan 20, 2017
21 Mar 17, 2017
22 Jan 20, 2017
23 Mar 17, 2017
24 Jan 20, 2017
25 Mar 17, 2017
26 Apr 21, 2017
27 Jun 16, 2017
28 Jan 19, 2018
29 Jan 20, 2017
...
432 Jan 20, 2017
433 Jan 19, 2018
434 Oct 21, 2016
435 Jan 20, 2017
436 Jan 19, 2018
437 Oct 21, 2016
438 Jan 20, 2017
439 Jan 19, 2018
440 Oct 21, 2016
441 Jan 20, 2017
442 Jan 19, 2018
443 Oct 21, 2016
444 Jan 20, 2017
445 Jan 19, 2018
446 Oct 21, 2016
447 Jan 20, 2017
448 Oct 21, 2016
449 Jan 20, 2017
450 Oct 21, 2016
451 Jan 20, 2017
452 Oct 21, 2016
453 Jan 20, 2017
454 Oct 21, 2016
455 Jan 20, 2017
456 Oct 21, 2016
457 Jan 20, 2017
458 Jan 20, 2017
459 Jan 20, 2017
460 Jan 20, 2017
461 Jan 20, 2017
It seems like I have the same date format, which is "%Y-%m-%d", but I still
get no values filtered out. All of them come out as true, being inside the
interval.
Answer: You're using `transf_date = datetime.datetime.strptime(opt["Expiry"][1],'%b
%d, %Y')` instead of `transf_date =
datetime.datetime.strptime(opt["Expiry"][i],'%b %d, %Y')`, meaning that even
though you're iterating over the entire `opt["Expiry"]`, you're always
processing the same entry.
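Note also that the code converts `transf_date` back to a string and then compares it with `datetime` objects; it is simpler to keep everything as `datetime`. A sketch of the corrected loop under that assumption (Python 2, to match the question's code):

for expiry in opt["Expiry"]:
    transf_date = datetime.datetime.strptime(expiry, '%b %d, %Y')
    if int_begin <= transf_date <= int_end:
        print "True:", expiry
    else:
        print "False:", expiry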
|
Error in tf.contrib.learn Quickstart, no attribute named load_csv
Question: I am getting started with tensorflow on OSX and installed the latest version
following the guidelines for a pip installation using:
echo $TF_BINARY_URL
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
Quick overview:
OS: OS X El Capitan version 10.11.6 (15G31)
Python: Python 2.7.12_1 installed with `brew install python`
TensorFlow: 0.11.0rc0 from `import tensorflow as tf; print(tf.__version__)`
I can run TensorFlow using:
python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
>>> Hello, TensorFlow!
So TensorFlow is installed and running the basic commands.
But when I run the code for tf.contrib.learn Quickstart from here:
<https://www.tensorflow.org/versions/r0.11/tutorials/tflearn/index.html>
I get the following issue:
Traceback (most recent call last):
File "tf_learn_quickstart.py", line 13, in <module>
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING,
AttributeError: 'module' object has no attribute 'load_csv'
I can't figure out what went wrong as everything else seems to be working
fine. Any ideas what is wrong?
Answer: This function has been deprecated:
<https://github.com/tensorflow/tensorflow/commit/2d4267507e312007a062a90df37997bca8019cfb>
The tutorial seems not to be up to date. I believe you can simply replace
`load_csv` with `load_csv_with_header` to get it to work.
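A sketch of the replacement call, with the dtype arguments that `load_csv_with_header` takes in the updated tutorial (treat the exact signature as an assumption for your TensorFlow version):

import numpy as np
import tensorflow as tf

training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)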
|
Python string has no control characters
Question: I have a proxy string:

proxy = '127.0.0.1:8080'

I need to check that it is a real proxy string:

def is_proxy(proxy):
    return not any(c.isalpha() for c in proxy)

in order to skip strings like:

fail_proxy = 'This is proxy: 127.0.0.1:8080'

But sometimes I have a string like:

fail_proxy2 = '127.0.0.1:8080\r'

Here `is_proxy(fail_proxy2)` returns `True`, but I need `False`.
Answer: Try the following specific approach using the `re` module (regexp):

import re

def is_proxy(proxy):
    return re.fullmatch(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}', proxy) is not None

proxy1 = '127.0.0.1:8080'
proxy2 = '127.0.0.1:8080\r'

print(is_proxy(proxy1))  # True
print(is_proxy(proxy2))  # False
As for the port number (`\d{1,5}`): the range **1-65535** is available for port
numbers.
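Note that `re.fullmatch()` only exists on Python 3.4+; on Python 2 an equivalent sketch anchors the pattern explicitly with `\Z`:

def is_proxy(proxy):
    return re.match(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}:\d{1,5}\Z', proxy) is not None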
|
How would I make the computer automatically assign a name to the user file, so that it can be recalled later, in Python
Question: My objective is to make the computer assign a name to the user file
automatically but that can also be recalled later.
import random

r = random.choice()  # I want this to be a random name that the computer can recall later
while True:
    with open(r, "w") as f:
        f.write(input("Write to file here -"))
    if 1 == 1:
        True
    else:
        break
Answer: If you're just trying to pick a random name, use a list and `random.choice`.
For example, `print(random.choice(["Hello","World","!"]))` will print a random
string from the list: either 'Hello', 'World', or '!'. If you want more help
with the random module, I suggest looking at the docs:
<https://docs.python.org/2/library/random.html>.
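If the name should be generated rather than picked from a fixed list, a sketch that builds a random filename once and keeps it in a variable for later recall (the length and character set are arbitrary choices):

import random
import string

# generate a random 8-character name once; reuse the variable to recall it
name = ''.join(random.choice(string.ascii_lowercase) for _ in range(8)) + '.txt'

with open(name, 'w') as f:
    f.write(input("Write to file here -"))

# later on, the same file can be reopened because `name` was kept
with open(name) as f:
    print(f.read())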
|
Python Repeat List to Max Number of Elements
Question: What is the most efficient method to repeat a list up to a max element length?
To take this:
list = ['one', 'two', 'three']
max_length = 7
And produce this:
final_list = ['one', 'two', 'three', 'one', 'two', 'three', 'one']
Answer: I'd probably use `itertools.cycle` and `itertools.islice`:
>>> from itertools import cycle, islice
>>> lst = [1, 2, 3]
>>> list(islice(cycle(lst), 7))
[1, 2, 3, 1, 2, 3, 1]
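An alternative without imports is to multiply the list enough times and slice to size:

lst = ['one', 'two', 'three']
max_length = 7

# enough whole copies to cover max_length, then trim
final_list = (lst * (max_length // len(lst) + 1))[:max_length]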
|
WxPython's ScrolledWindow element collapses to minimum size
Question: I am using a Panel within a Frame to display images (the GUI needs to switch
between multiple panels, hence the hierarchy). As images should be
displayed at native size, I used ScrolledWindow as the panel parent. The
scrollbars do appear and work, but the Panel collapses to minimum
size and needs to be resized by drag&drop every time. Is there a way
around this?

Below is a reduced version of the code, which shows the problem:
import os
import wx
from wx.lib.pubsub import pub

class Edit_Panel(wx.PyScrolledWindow):
    def __init__(self, parent):
        super(Edit_Panel, self).__init__(parent)
        # Display size
        width, height = wx.DisplaySize()
        self.photoMaxSize = height - 500
        # Loaded image
        self.loaded_image = None
        # Icons
        self.open_icon_id = 500
        # Generate panel
        self.layout()

    def layout(self):
        self.main_sizer = wx.BoxSizer(wx.VERTICAL)
        divider = wx.StaticLine(self, -1, style = wx.LI_HORIZONTAL)
        self.main_sizer.Add(divider, 0, wx.ALL | wx.EXPAND)
        self.toolbar = self.init_toolbar()
        self.main_sizer.Add(self.toolbar, 0, wx.ALL)
        img = wx.EmptyImage(self.photoMaxSize, self.photoMaxSize)
        self.image_control = wx.StaticBitmap(self, wx.ID_ANY,
                                             wx.BitmapFromImage(img))
        self.main_sizer.Add(self.image_control, 0, wx.ALL | wx.CENTER, 5)
        self.image_label = wx.StaticText(self, -1, style = wx.ALIGN_CENTRE)
        self.main_sizer.Add(self.image_label, 0, wx.ALL | wx.ALIGN_CENTRE, 5)
        self.SetSizer(self.main_sizer)
        fontsz = wx.SystemSettings.GetFont(wx.SYS_SYSTEM_FONT).GetPixelSize()
        self.SetScrollRate(fontsz.x, fontsz.y)
        self.EnableScrolling(True, True)

    def init_toolbar(self):
        toolbar = wx.ToolBar(self)
        toolbar.SetToolBitmapSize((16, 16))
        open_ico = wx.ArtProvider.GetBitmap(wx.ART_FILE_OPEN, wx.ART_TOOLBAR, (16, 16))
        open_tool = toolbar.AddSimpleTool(self.open_icon_id, open_ico, "Open", "Open an Image Directory")
        handler = self.on_open_reference
        self.Bind(event = wx.EVT_MENU, handler = handler, source = open_tool)
        toolbar.Realize()
        return toolbar

    def on_open_reference(self, event, wildcard = None):
        if wildcard is None:
            wildcard = self.get_wildcard()
        defaultDir = '~/'
        dbox = wx.FileDialog(self, "Choose an image to display", defaultDir = defaultDir, wildcard = wildcard, style = wx.OPEN)
        if dbox.ShowModal() == wx.ID_OK:
            file_name = dbox.GetPath()
            # load image
            self.load_image(image = file_name)
        dbox.Destroy()

    def get_wildcard(self):
        wildcard = 'Image files (*.jpg;*.png;*.bmp)|*.png;*.bmp;*.jpg;*.jpeg'
        return wildcard

    def load_image(self, image):
        self.loaded_image = image
        # Load image
        img = wx.Image(image, wx.BITMAP_TYPE_ANY)
        # Label image name
        image_name = os.path.basename(image)
        self.image_label.SetLabel(image_name)
        # scale the image, preserving the aspect ratio
        scale_image = True
        if scale_image:
            W = img.GetWidth()
            H = img.GetHeight()
            if W > H:
                NewW = self.photoMaxSize
                NewH = self.photoMaxSize * H / W
            else:
                NewH = self.photoMaxSize
                NewW = self.photoMaxSize * W / H
            img = img.Scale(NewW, NewH)
        self.image_control.SetBitmap(wx.BitmapFromImage(img))
        # Render
        self.main_sizer.Layout()
        self.main_sizer.Fit(self)
        self.Refresh()
        pub.sendMessage("resize", msg = "")

class Viewer_Frame(wx.Frame):
    def __init__(self, parent, id, title):
        super(Viewer_Frame, self).__init__(parent = parent, id = id, title = title)
        # Edit panel
        self.edit_panel = Edit_Panel(self)
        # Default panel
        self.main_panel = self.edit_panel
        # Render frame
        self.render_frame()
        # Subscription to re-render
        pub.subscribe(self.resize_frame, ("resize"))

    def render_frame(self):
        # Main Sizer
        self.main_sizer = wx.BoxSizer(wx.VERTICAL)
        # Add default sizer
        self.main_sizer.Add(self.main_panel, 1, wx.EXPAND)
        # Render
        self.SetSizer(self.main_sizer)
        self.Show()
        self.main_sizer.Fit(self)
        self.Center()

    def resize_frame(self, msg):
        self.main_sizer.Fit(self)

if __name__ == "__main__":
    app = wx.App(False)
    frame = Viewer_Frame(parent = None, id = -1, title = 'Toolkit')
    app.MainLoop()
Answer: You're calling `Fit()`, so you're explicitly asking the panel to fit its
contents, but you don't specify the min/best size of this contents anywhere
(AFAICS, there is a lot of code here, so I could be missing something).
If you want to use some minimal size for the panel, just set it using
`SetMinSize()`.
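For example, a minimal sketch inside `layout()` (the size values here are assumptions; pick whatever minimum suits your images):

self.SetMinSize((self.photoMaxSize, self.photoMaxSize))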
|
"while" loop - Re-execution of the program
Question: For starters I'll just say that I'm a fresh Python programmer. I started writing
a database client and I have a problem. Perhaps this question will seem silly to many,
but for me as a rookie it is a problem.

I am writing a main_module that adds data to the database. I would like the user,
after the last condition, to have the possibility of adding more data. Something like:

repeat = input("Do you wanna create new record?(Y/N)")
if repeat == "Y":
    (the program starts from the beginning)
else:
    (the program closes)

Where should such a condition go?

Below is my code
import sqlite3
import time
import datetime
import sys

conn = sqlite3.connect('template.db')
c = conn.cursor()

def create_table():
    c.execute("CREATE TABLE IF NOT EXISTS Theft(template1 TEXT, template2 TEXT, template3 TEXT, template4 TEXT, template5 TEXT)")

def data_entry():
    unix = int(time.time())
    template1 = str(datetime.datetime.fromtimestamp(unix).strftime('%Y-%m-%d %H:%M:%S'))
    c.execute("INSERT INTO Theft(template1, template2, template3, template4, template5) VALUES (?, ?, ?, ?, ?)",
              (template1, template2, template3, template4, template5))
    conn.commit()

template2 = input("ENTER template ")
template3 = input("ITEM template ")
template4 = input("INPUT template ")
template5 = input("INPUT YOUR template: ")

accept = input("Do you wanna create new record in DB? (Y/N)")
if accept == "y":
    create_table(), data_entry()
elif accept == "Y":
    create_table(), data_entry()
else:
    sys.exit(0)

c.close()
conn.close()
Regards Jmazure :)
Answer: You have to put your input procedure inside an infinite loop that is
broken only if the user inputs something other than "y" or "Y":

while True:
    accept = input("Do you wanna create new record in DB? (Y/N)")
    if accept.lower() == "y":  # covers both accept == "y" and accept == "Y"
        create_table(), data_entry()
    else:
        break
|
Python - replace a line by its column in file
Question: Sorry for posting such an easy question, but I couldn't find an answer on
Google. I wish my code to do something like this:

lines = open("Bal.txt").write
lines[1] = new_value
lines.close()

P.S. I wish to replace a line in a file with a value.
Answer: xxx.dat before:
ddddddddddddddddd
EEEEEEEEEEEEEEEEE
fffffffffffffffff
with open('xxx.txt','r') as f:
    x = f.readlines()

x[1] = "QQQQQQQQQQQQQQQQQQQ\n"

with open('xxx.txt','w') as f:
    f.writelines(x)
xxx.dat after:
ddddddddddddddddd
QQQQQQQQQQQQQQQQQQQ
fffffffffffffffff
Note: `f.read()` returns a string, whereas `f.readlines()` returns a list, enabling
you to replace an occurrence within that list.
Inclusion of the `\n` (Linux) newline character is important to separate
line[1] from line[2] when you next read the file, or you would end up with:
ddddddddddddddddd
QQQQQQQQQQQQQQQQQQQfffffffffffffffff
|
Python regex words boundary with unexpected results
Question:
import re
sstring = "ON Any ON Any"
regex1 = re.compile(r''' \bON\bANY\b''', re.VERBOSE)
regex2 = re.compile(r'''\b(ON)?\b(Any)?''', re.VERBOSE)
regex3 = re.compile(r'''\b(?:ON)?\b(?:Any)?''', re.VERBOSE)
for a in regex1.findall(sstring): print(a)
print("----------")
for a in regex2.findall(sstring): print(a)
print("----------")
for a in regex3.findall(sstring): print(a)
print("----------")
> ----------
> ('ON', '')
> ('', '')
> ('', 'Any')
> ('', '')
> ('ON', '')
> ('', '')
> ('', 'Any')
> ('', '')
> ----------
> ON
>
> Any
>
> ON
>
> Any
> ----------
Having read many articles on the internet and S.O., I think I still don't
understand the regex word boundary `\b`.

The first regex doesn't give me the expected result: I think it should match
"ON Any ON Any", but it doesn't.

The second regex gives me tuples, and I don't understand the meaning of ('', '').

The third regex prints the results on separate lines, with empty lines in between.

Could you please help me to understand this?
Answer: Note that to match `ON ANY` you need to add an escaped space (since you are using
the `re.VERBOSE` flag) between `ON` and `ANY`, as the `\b` word boundary, being
a _zero-width assertion_, does not consume any text; it just asserts a position
between specific characters. That is the reason your first
`re.compile(r''' \bON\bANY\b''', re.VERBOSE)` approach fails.
Use
rx = re.compile(r''' \bON\ ANY\b ''', re.VERBOSE|re.IGNORECASE)
See the [Python demo](https://ideone.com/cBsGLd)
The `re.compile(r'''\b(ON)?\b(Any)?''', re.VERBOSE)` returns tuples since you
defined `(...)` _capturing groups_ in the pattern.
The `re.compile(r'''\b(?:ON)?\b(?:Any)?''', re.VERBOSE)` matches optional
sequences, either `ON` or `Any`, so you get those words as values. You get
empty values as well because this regex can match just a word boundary (all
other subpatterns are optional).
More details about word boundaries:
* [Regular-Expressions.info](http://www.regular-expressions.info/wordboundaries.html)
* [SO Word Boundary Documentation](http://stackoverflow.com/documentation/regex/1539/word-boundary#t=201610051404594046741)
* [Java Regex Word Boundaries](http://stackoverflow.com/questions/21389837/java-regex-word-boundaries) (this is still a word boundary in a regex, also applicable here)
|
Can we use AWK and gsub() to process data with multiple colons ":" ? How?
Question: Here is an example of the data:
Col_01:14 .... Col_20:25 Col_21:23432 Col_22:639142
Col_01:8 .... Col_20:25 Col_22:25134 Col_23:243344
Col_01:17 .... Col_21:75 Col_23:79876 Col_25:634534 Col_22:5 Col_24:73453
Col_01:19 .... Col_20:25 Col_21:32425 Col_23:989423
Col_01:12 .... Col_20:25 Col_21:23424 Col_22:342421 Col_23:7 Col_24:13424 Col_25:67
Col_01:3 .... Col_20:95 Col_21:32121 Col_25:111231
As you can see, some of these columns are not in the correct order...
Now, I think the correct way to import this file into a dataframe is to
preprocess the data such that you can output a dataframe with `NaN` values,
e.g.
Col_01 .... Col_20 Col_21 Col22 Col23 Col24 Col25
8 .... 25 NaN 25134 243344 NaN NaN
17 .... NaN 75 2 79876 73453 634534
19 .... 25 32425 NaN 989423 NaN NaN
12 .... 25 23424 342421 7 13424 67
3 .... 95 32121 NaN NaN NaN 111231
The solution was shown by @JamesBrown here : [How to preprocess and load a
"big data" tsv file into a python
dataframe?](http://stackoverflow.com/questions/39398986/how-to-preprocess-and-
load-a-big-data-tsv-file-into-a-python-dataframe/)
Using said awk script:
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc"  # traversal order for for(i in a)
}
NR==1 {  # the header cols is in the beginning of data file
         # FORGET THIS: header cols from another file replace NR==1 with NR==FNR and see * below
    split($0,a," ")                   # mkheader a[1]=first_col ...
    for(i in a) {                     # replace with a[first_col]="" ...
        a[a[i]]
        printf "%6s%s", a[i], OFS     # output the header
        delete a[i]                   # remove a[1], a[2], ...
    }
    # next                            # FORGET THIS * next here if cols from another file UNTESTED
}
{
    gsub(/: /,"=")                    # replace key-value separator ": " with "="
    split($0,b,FS)                    # split record from ","
    for(i in b) {
        split(b[i],c,"=")             # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]                  # b[key]=value
    }
    for(i in a)                       # go thru headers in a[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}
And put the headers into a text file `cols.txt`
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
My question now: how do we use awk if we have data that is not `column: value`
but `column: value1: value2: value3`?
We would want the database entry to be `value1: value2: value3`
Here's the new data:
Col_01:14:a:47 .... Col_20:25:i:z Col_21:23432:6:b Col_22:639142:4:x
Col_01:8:z .... Col_20:25:i:4 Col_22:25134:u:0 Col_23:243344:5:6
Col_01:17:7:z .... Col_21:75:u:q Col_23:79876:u:0 Col_25:634534:8:1
We still provide the columns beforehand with `cols.txt`
How can we create a similar database structure? Is it possible to use `gsub()`
to limit to the first value before `:` which is the same as the header?
EDIT: This doesn't _have to_ be awk based. Any language will do naturally
Answer: Here is another alternative...
$ awk -v OFS='\t' '{for(i=1;i<NF;i+=2)              # iterate over name: value pairs
                      {c=$i;                        # copy name in c to modify
                       sub(/:/,"",c);               # remove colon
                       a[NR,c]=$(i+1);              # collect data by row number, name
                       cols[c]}}                    # save name
                END{n=asorti(cols,icols);           # sort names
                    for(j=1;j<=n;j++) printf "%s", icols[j] OFS;  # print header
                    print "";
                    for(i=1;i<=NR;i++)              # print data
                       {for(j=1;j<=n;j++)
                          {v=a[i,icols[j]];
                           printf "%s", (v?v:"NaN") OFS}  # replace missing data with NaN
                        print ""}}' file | column -t      # pipe to column for pretty print
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
14:a:47 25:i:z 23432:6:b 639142:4:x NaN NaN
8:z 25:i:4 NaN 25134:u:0 243344:5:6 NaN
17:7:z NaN 75:u:q NaN 79876:u:0 634534:8:1
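Since any language is acceptable, here is a pandas sketch of the same idea: split each whitespace-separated field on the first colon only, so multi-colon values stay intact (`file` is a placeholder path):

import pandas as pd

def parse_line(line):
    # 'Col_22:639142:4:x' -> ('Col_22', '639142:4:x')
    return dict(tok.split(':', 1) for tok in line.split())

with open('file') as f:
    df = pd.DataFrame([parse_line(line) for line in f])

df = df.reindex(columns=sorted(df.columns))  # cells absent from a row are already NaN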
|
How do you make two turtles draw at once in Python?
Question: How do you make two turtles draw at once? I know how to make turtles draw and
how to make two or more but I don't know how you can make them draw at the
same time. Please help!
Answer: Here's a minimalist example using timer events:
import turtle

t1 = turtle.Turtle(shape="turtle")
t2 = turtle.Turtle(shape="turtle")
t1.setheading(45)
t2.setheading(-135)

def move_t1():
    t1.forward(1)
    turtle.ontimer(move_t1, 10)

def move_t2():
    t2.forward(1)
    turtle.ontimer(move_t2, 10)

turtle.ontimer(move_t1, 10)
turtle.ontimer(move_t2, 10)

turtle.exitonclick()
|
Python Flask Sqlalchemy Subst Query
Question: I am working on an internal search engine at my company, written in Python
using Flask and SQLAlchemy (SQLite). My current problem is that I would like to:

A.) query a certain amount of information from the description field, and B.)
preferably query the 50 characters before and after the match.

Very similar to Google under the link field: if you search for something, it
returns the links with about 100 characters of words below them.

I was reading the documentation and found that there is no mid() function in
sqlalchemy. I also noticed from this post that sqlalchemy only supports max,
min, and avg: [sqlalchemy: get max/min/avg values from a
table](http://stackoverflow.com/questions/7133007/sqlalchemy-get-max-min-avg-
values-from-a-table)
SQL Documentation of functions
<http://docs.sqlalchemy.org/en/latest/core/functions.html>
I was trying to implement a query such as
links = Item.query(func.mid(Item.description, 0, 200).like('%helloworld%'))
I realized SQLite has the `substr` syntax and have tried

Item.query.filter(func.substr(Item.description, 0, 200) == '%helloworld%')
Is there a way in sqlalchemy to navigate around this issue?
My code:
from sqlalchemy.sql.functions import func

def mainSearch(searchterm):
    links = Item.query(func.mid(Item.title, 1, 3).Item.title.like('%e%'))
    return links
HTML/Jinja code:
{% for link in links.items %}
<div id="resultbox">
<div id="linkTitle"><h4><a href="{{ link.link }}">{{ link.title }}</a></h4> </div>
<div id="lastUpdated">Last Updated: {{ link.last_updated }} </div>
<div id="linkLink">{{ link.link }}</div>
<div id="linkDescription">{{ link.description | safe }}</div>
</div>
Error
TypeError: 'BaseQuery' object is not callable
My database: Sqlite
I wanted to a query in sql similar too:
SELECT MID(column_name,start,length) AS some_name FROM table_name;
**Overall I am trying to do this to the data we query in Column Description:**
**Example text:**
An article (abbreviated to ART) is a word (prefix or suffix) that is used
alongside a noun to indicate the type of reference being made by the noun.
Articles specify grammatical definiteness of the noun, in some languages
extending to volume or numerical scope. The articles in the English language
are the and a/an, and (in certain contexts) some. "An" and "a" are modern
forms of the Old English "an", which in Anglian dialects was the number "one"
(compare "on", in Saxon dialects) and survived into Modern Scots as the number
"owan". Both "on" (respelled "one" by the Normans) and "an" survived into
Modern English, with "one" used as the number and "an" ("a", before nouns that
begin with a consonant sound) as an indefinite article.
In many languages, articles are a special part of speech, which cannot easily
be combined with other parts of speech. In English, articles are frequently
considered a part of a broader speech category called determiners, which
combines articles and demonstratives (such as "this" and "that").
In languages that employ articles, every common noun, with some exceptions, is
expressed with a certain definiteness (e.g., definite or indefinite), just as
many languages express every noun with a certain grammatical number (e.g.,
singular or plural). Every noun must be accompanied by the article, if any,
corresponding to its definiteness, and the lack of an article (considered a
zero article) itself specifies a certain definiteness. This is in contrast to
other adjectives and determiners, which are typically optional. This
obligatory nature of articles makes them among the most common words in many
languages—in English, for example, the most frequent word is the.[1]
Articles are usually characterized as either definite or indefinite.[2] A few
languages with well-developed systems of articles may distinguish additional
subtypes. Within each type, languages may have various forms of each article,
according to grammatical attributes such as gender, number, or case, or
according to adjacent sounds.
**To this**
An article (abbreviated to ART) is a word (prefix or suffix) that is used
alongside a noun to indicate the type of reference being made by the noun.
Articles specify grammatical definiteness of the noun,
so it doesn't crash the database by grabbing text 100,000 words long. I only
need the first 100.
Answer: This has nothing to do with the `mid` function. The error message says
`'BaseQuery' object is not callable.` Where are you calling `BaseQuery`? Here:
Item.query(...)
The correct incantation is:
db.session.query(func.mid(...))
or
Item.query.with_entities(func.mid(...))
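For SQLite specifically, `func.substr()` works the same way; a sketch that returns a 100-character snippet of each matching description (note that SQLite's `substr` is 1-indexed):

from sqlalchemy import func

snippets = Item.query.with_entities(
    Item.title,
    func.substr(Item.description, 1, 100).label('snippet')
).filter(Item.description.like('%helloworld%')).all()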
|
GAE python: success message or add HTML class after redirect
Question: I have a website with a contact form running on Google App Engine. After
submitting, I'd like to redirect and show a message to let the user know the
message was sent; this can either be an alert message or adding a class to an
HTML tag. How can I do that?

My python file looks like this:
my python file looks like this:
import webapp2
import jinja2
import os
from google.appengine.api import mail
jinja_environment = jinja2.Environment(autoescape=True,loader=jinja2.FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates')))
class index(webapp2.RequestHandler):
def get(self):
template = jinja_environment.get_template('index.html')
self.response.write(template.render())
def post(self):
vorname=self.request.get("vorname")
...
message=mail.EmailMessage(sender="...",subject="...")
if not mail.is_email_valid(email):
self.response.out.write("Wrong email! Check again!")
message.to="..."
message.body=""" Neue Nachricht erhalten:
Vorname: %s
... %(vorname,...)
self.redirect('/#Kontakt')
app = webapp2.WSGIApplication([('/', index)], debug=True)
I already tried this in my HTML file:

<script>
function sentAlert() {
    alert("Nachricht wurde gesendet");
}
</script>

<div class="submit">
    <input type="submit" value="Senden" onsubmit="return sentAlert()" id="button-blue"/>
</div>

but the alert fires before the redirect and therefore doesn't work. Does someone
have an idea how to do this?
Answer: After the redirect a request **different** than the POST one for which the
email was sent will be served.
So you need to **persist** the information about the email being sent across
requests, saving it in the POST request handler code and retrieving it in the
subsequent GET request handler code (be it the redirected one or any other one
for that matter).
To persist the info you can, for example, use the user's session (if you
already have one, see [Passing data between pages in a redirect() function in
Google App Engine](http://stackoverflow.com/questions/37134540/passing-data-
between-pages-in-a-redirect-function-in-google-app-engine?noredirect=1)), or
GAE's memcache/datastore/GCS.
Once the info is retrieved you can use it any way you wish.
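A minimal sketch of the simplest variant, carrying a flag in the redirect URL's query string (names here are illustrative, not from the original code):

# in post(), after sending the mail:
self.redirect('/?sent=1#Kontakt')

# in get():
sent = self.request.get('sent')
template = jinja_environment.get_template('index.html')
self.response.write(template.render({'sent': sent}))

The template can then show the alert or add the CSS class whenever `sent` is set.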
|
pip error after upgrading pip & scrapy by "pip install --upgrade"
Question: Using Debian 8 (jessie) amd64 with Python 2.7.9, I tried the following commands:
pip install --upgrade pip
pip install --upgrade scrapy
After that, I am getting the following pip error:
root@debian:~# pip
Traceback (most recent call last):
File "/usr/local/bin/pip", line 11, in <module>
load_entry_point('pip==8.1.2', 'console_scripts', 'pip')()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 567, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2604, in load_entry_point
return ep.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2264, in load
return self.resolve()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2270, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 16, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/usr/local/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 39, in <module>
from pip._vendor import requests, six
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 53, in <module>
from .packages.urllib3.contrib import pyopenssl
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py", line 54, in <module>
import OpenSSL.SSL
File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/usr/lib/python2.7/dist-packages/OpenSSL/rand.py", line 11, in <module>
from OpenSSL._util import (
File "/usr/lib/python2.7/dist-packages/OpenSSL/_util.py", line 4, in <module>
binding = Binding()
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 89, in __init__
self._ensure_ffi_initialized()
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 113, in _ensure_ffi_initialized
libraries=libraries,
File "/usr/lib/python2.7/dist-packages/cryptography/hazmat/bindings/utils.py", line 80, in build_ffi
extra_link_args=extra_link_args,
File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 437, in verify
lib = self.verifier.load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 114, in load_library
return self._load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/verifier.py", line 225, in _load_library
return self._vengine.load_library()
File "/usr/local/lib/python2.7/dist-packages/cffi/vengine_cpy.py", line 174, in load_library
lst = list(map(self.ffi._get_cached_btype, lst))
File "/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 409, in _get_cached_btype
BType = type.get_cached_btype(self, finishlist)
File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 61, in get_cached_btype
BType = self.build_backend_type(ffi, finishlist)
File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 507, in build_backend_type
base_btype = self.build_baseinttype(ffi, finishlist)
File "/usr/local/lib/python2.7/dist-packages/cffi/model.py", line 525, in build_baseinttype
% self._get_c_name())
cffi.api.CDefError: 'point_conversion_form_t' has no values explicitly defined: refusing to guess which integer type it is meant to be (unsigned/signed, int/long)
I googled several similar problems; cffi or cryptography may be causing this,
but I can't find any clear way to fix it.
Answer: Got the exact same error today, but in a different situation. I suspect this
is related to the `cryptography` module.
What helped me was to install a specific version of `cffi` package:
pip install cffi==1.7.0
|
Access folder that a custom python function resides in
Question: How do I access a folder that a python function resides in?
For example, let's say that I have an N by 2 array of data. The first column is the
independent variable, and the second is the dependent variable. I need to
interpolate this data with a different array of independent variables whose range
is contained in the original independent variable. This procedure is used in
multiple different codes with varying ranges of independent variables, so
I do not want to copy this data file to multiple places. I would like to write
a single function that achieves this, with the single copy of data inside the
folder containing the function itself.
My example attempts are:
import numpy as np
from scipy.interpolate import splev, splrep

def function(some_array):
    filepath = './file_path_in_the_function_folder.txt'
    some_data = np.loadtxt(filepath)
    interpolated_data = splev(some_array, splrep(some_data[:,0], some_data[:,1]))
    return interpolated_data
However, `'./'` does not resolve to the location of the function; rather, it
points to the current working directory of the script that imports the
function. How can I circumvent this problem?
Answer: Like this:
import os
my_dir = os.path.dirname(__file__)
fname = 'file_path_in_the_function_folder.txt'
filepath = os.path.join(my_dir, fname)
As explained in the [data
model](https://docs.python.org/3/reference/datamodel.html?highlight=__file__),
you can use the `__file__` name for getting the path of the current module. In
python 3.4+ it's an absolute path, for earlier version you can't easily know
if it's absolute or relative - but you usually needn't care, either.
|
Finding the max of each continguous subarray of a given size
Question: I'm trying to solve the following problem in Python
> Given an array and an integer k, find the maximum for each and every
> contiguous subarray of size k.
The idea is to use a double ended queue. This is my code:
def diff_sliding_window(arr, win):
    # max = -inf
    Q = []
    win_maxes = []  # max of each window
    for i in range(win):
        print(Q)
        while len(Q) > 0 and arr[i] >= arr[len(Q) - 1]:
            # get rid of the index of the smaller element
            Q.pop()  # removes last element
        Q.append(i)
        # print('>>', Q)
    for i in range(win, len(arr)):
        # win_maxes.append(arr[Q[0]])
        print(arr[Q[0]])
        while len(Q) > 0 and Q[0] <= i - win:
            Q.pop()
        while len(Q) > 0 and arr[i] >= arr[len(Q)-1]:
            Q.pop(0)
        Q.append(i)
        # win_maxes.append(arr[Q[0]])
        print(arr[Q[0]])
But for the test cases:

t1 = [1, 3, -1, -3, 5, 3, 6, 7]
t2 = [12, 1, 78, 90, 57, 89, 56]

I can't figure out why I'm not getting the correct results.
* * *
**Update** :
I've made the changes that Matt Timmermans suggested, but I'm still not
obtaining the proper output. For `t2`, and `win = 3`
78
90
90
89 <--- should be 90
89
Here is my updated code:
from collections import deque

def diff_sliding_window(arr, win):
    # max = -inf
    Q = deque()
    win_maxes = []  # max of each window
    for i in range(win):
        # print(Q)
        while len(Q) > 0 and arr[i] >= arr[Q[len(Q)-1]]:
            # get rid of the index of the smaller element
            Q.pop()  # removes last element
        Q.append(i)
        # print('>>', Q)
    for i in range(win, len(arr)):
        # win_maxes.append(arr[Q[0]])
        print(arr[Q[0]])
        while len(Q) > 0 and Q[0] <= i - win:
            Q.pop()
        while len(Q) > 0 and arr[i] >= arr[Q[len(Q)-1]]:
            Q.popleft()
        Q.append(i)
        print(arr[Q[0]])
Answer: It looks like you are trying to implement the O(n) algorithm for this problem,
which would be better than the other two answers here at this time.
But, your implementation is incorrect. Where you say `arr[i] >=
arr[len(Q)-1]`, you _should_ say `arr[i] >= arr[Q[len(Q)-1]]` or `arr[i] >=
arr[Q[-1]]`. You also swapped the `pop` and `pop(0)` cases in the second loop.
It looks like it will be correct after you fix those.
Also, though, your algorithm is not O(n), because you are using `Q.pop(0)`, which
takes O(k) time. Your total running time is therefore O(kn) instead. Using a
deque for `Q` will fix this.
Here it is all fixed, with some comments to show how it works:
from collections import deque

def diff_sliding_window(arr, win):
    if win > len(arr):
        return []
    win_maxes = []  # max of each window

    # Q contains indexes of items in the window that are greater than
    # all items to the right of them. This always includes the last item
    # in the window
    Q = deque()

    # fill Q for initial window
    for i in range(win):
        # remove anything that isn't greater than the new item
        while len(Q) > 0 and arr[i] >= arr[Q[-1]]:
            Q.pop()
        Q.append(i)

    win_maxes.append(arr[Q[0]])
    for i in range(win, len(arr)):
        # remove indexes (at most 1, really) left of window
        while len(Q) > 0 and Q[0] <= (i-win):
            Q.popleft()
        # remove anything that isn't greater than the new item
        while len(Q) > 0 and arr[i] >= arr[Q[-1]]:
            Q.pop()
        Q.append(i)
        win_maxes.append(arr[Q[0]])
    return win_maxes
try it: <https://ideone.com/kQ1qsQ>
Proof that this is O(n): each iteration of the inner loops removes an item
from Q. Since only `len(arr)` items are added to Q in total, there can be at
most `len(arr)` _total_ iterations of the inner loops.
|
Most efficient way to determine overlapping timeseries in Python
Question: I am trying to determine what percentage of the time that two time series
overlap using python's pandas library. The data is nonsynchronous so the times
for each data point do not line up. Here is an example:
**Time Series 1**
2016-10-05 11:50:02.000734 0.50
2016-10-05 11:50:03.000033 0.25
2016-10-05 11:50:10.000479 0.50
2016-10-05 11:50:15.000234 0.25
2016-10-05 11:50:37.000199 0.50
2016-10-05 11:50:49.000401 0.50
2016-10-05 11:50:51.000362 0.25
2016-10-05 11:50:53.000424 0.75
2016-10-05 11:50:53.000982 0.25
2016-10-05 11:50:58.000606 0.75
**Time Series 2**
2016-10-05 11:50:07.000537 0.50
2016-10-05 11:50:11.000994 0.50
2016-10-05 11:50:19.000181 0.50
2016-10-05 11:50:35.000578 0.50
2016-10-05 11:50:46.000761 0.50
2016-10-05 11:50:49.000295 0.75
2016-10-05 11:50:51.000835 0.75
2016-10-05 11:50:55.000792 0.25
2016-10-05 11:50:55.000904 0.75
2016-10-05 11:50:57.000444 0.75
Assuming the series holds its value until the next change what is the most
efficient way to determine the percentage of time that they have the same
value?
**Example**
Let's calculate the time that these series overlap, starting at 11:50:07.000537
and ending at 11:50:57.000444, since we have data for both
series for that period. Time that there is overlap:
* 11:50:10.000479 - 11:50:15.000234 (both have a value of 0.5) **4.999755 seconds**
* 11:50:37.000199 - 11:50:49.000295 (both have a value of 0.5) **12.000096 seconds**
* 11:50:53.000424 - 11:50:53.000982 (both have a value of 0.75) **0.000558 seconds**
* 11:50:55.000792 - 11:50:55.000904 (both have a value of 0.25) **0.000112 seconds**
The result (4.999755+12.000096+0.000558+0.000112) / 49.999907 = **34%**
One of the issues is my actual timeseries has much more data such as 1000 -
10000 observations and I need to run many more pairs. I thought about forward
filling a series and then simply comparing the rows and dividing the total
number of matches over the total number of rows but I do not think this would
be very efficient.
Answer: **_setup_**
create 2 time series
from StringIO import StringIO
import pandas as pd
txt1 = """2016-10-05 11:50:02.000734 0.50
2016-10-05 11:50:03.000033 0.25
2016-10-05 11:50:10.000479 0.50
2016-10-05 11:50:15.000234 0.25
2016-10-05 11:50:37.000199 0.50
2016-10-05 11:50:49.000401 0.50
2016-10-05 11:50:51.000362 0.25
2016-10-05 11:50:53.000424 0.75
2016-10-05 11:50:53.000982 0.25
2016-10-05 11:50:58.000606 0.75"""
s1 = pd.read_csv(StringIO(txt1), sep='\s{2,}', engine='python',
                 parse_dates=[0], index_col=0, header=None,
                 squeeze=True).rename('s1').rename_axis(None)
txt2 = """2016-10-05 11:50:07.000537 0.50
2016-10-05 11:50:11.000994 0.50
2016-10-05 11:50:19.000181 0.50
2016-10-05 11:50:35.000578 0.50
2016-10-05 11:50:46.000761 0.50
2016-10-05 11:50:49.000295 0.75
2016-10-05 11:50:51.000835 0.75
2016-10-05 11:50:55.000792 0.25
2016-10-05 11:50:55.000904 0.75
2016-10-05 11:50:57.000444 0.75"""
s2 = pd.read_csv(StringIO(txt2), sep='\s{2,}', engine='python',
                 parse_dates=[0], index_col=0, header=None,
                 squeeze=True).rename('s2').rename_axis(None)
* * *
**_TL;DR_**
df = pd.concat([s1, s2], axis=1).ffill().dropna()
overlap = df.index.to_series().diff().shift(-1) \
.fillna(0).groupby(df.s1.eq(df.s2)).sum()
overlap.div(overlap.sum())
False 0.666657
True 0.333343
Name: duration, dtype: float64
* * *
**_explanation_**
**_build base`pd.DataFrame` `df`_**
* use `pd.concat` to align indexes
* use `ffill` to let values propagate forward
* use `dropna` to get rid of values of one series prior to the other starting
* * *
df = pd.concat([s1, s2], axis=1).ffill().dropna()
df
[![enter image description
here](https://i.stack.imgur.com/Kkebu.png)](https://i.stack.imgur.com/Kkebu.png)
**_calculate`'duration'`_**
from current time stamp to next
df['duration'] = df.index.to_series().diff().shift(-1).fillna(0)
df
[![enter image description
here](https://i.stack.imgur.com/1ZkA5.png)](https://i.stack.imgur.com/1ZkA5.png)
**_calculate overlap_**
* `df.s1.eq(df.s2)` gives boolean series of when `s1` overlaps with `s2`
* use `groupby` above boolean series to aggregate total duration when `True` and `False`
* * *
overlap = df.groupby(df.s1.eq(df.s2)).duration.sum()
overlap
False 00:00:33.999548
True 00:00:17.000521
Name: duration, dtype: timedelta64[ns]
**_percentage of time with same value_**
overlap.div(overlap.sum())
False 0.666657
True 0.333343
Name: duration, dtype: float64
|
Python: ImportError: No module named 'tutorial.quickstart'
Question: I am getting import error even when I am following the tutorial
<http://www.django-rest-framework.org/tutorial/quickstart/> line by line.
from tutorial.quickstart import views
> ImportError: No module named 'tutorial.quickstart'
**where my urls.py file looks like**
from django.conf.urls import url, include
from rest_framework import routers
from tutorial.quickstart import views

router = routers.DefaultRouter()
router.register(r'users', views.UserViewSet)
router.register(r'groups', views.GroupViewSet)

urlpatterns = [
    url(r'^', include(router.urls)),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]
Note: I have the project in a Rest_Tutorial folder, which consists of a virtual
environment - `env` - and the project `tutorial`. This `tutorial` project consists of
`quickstart` and `tutorial`.
Answer: Make sure your tutorial.quickstart is in the same folder as your project. Also
make sure it is unzipped! Otherwise, use an absolute path.

Hope it helps!
|
Accessing a case-sensitive path without writing it so
Question: I would like to know whether it is possible to access a Linux path like

`/home/dan/CaseSensitivE/test.txt`

written as `/home/dan/casesensitive/test.txt`, and have it go to the right
place; that is, have Python consider paths as case insensitive and allow
entering them that way, although they are case sensitive.
Answer: As Klaus said, the simple answer is no. You could, however, take a more
laborious route, and enumerate all folders/files in your top directory
(`os.path, glob`), convert to lower case (`string.lower`), test equality, step
one level down, etc.
This works for me:
import os

def match_lowercase_path(path):
    # get absolute path
    path = os.path.abspath(path)
    # try it first
    if os.path.exists(path):
        correct_path = path
    # no luck
    else:
        # works on linux, but there must be a better way
        components = path.split('/')
        # initialise answer
        correct_path = '/'
        # step through
        for c in components:
            if os.path.isdir(correct_path + c):
                correct_path += c + '/'
            elif os.path.isfile(correct_path + c):
                correct_path += c
            else:
                match = find_match(correct_path, c)
                correct_path += match
    return correct_path

def find_match(path, ext):
    for child in os.listdir(path):
        if child.lower() == ext:
            if os.path.isdir(path + child):
                return child + '/'
            else:
                return child
    else:
        raise ValueError('Could not find a match for {}.'.format(path + ext))
|
Android notificationcompat sound/vibration not working
Question: I've spent the last 2 hours trying to figure out why my notification sent from
Firebase doesn't make any sound or vibration.
I have looked on many topics about this problem and tried different
combinations with `.setDefaults` `.setVibrate(new long[] { 1000, 1000, 1000,
1000, 1000 })` `.setSound(defaultSoundUri)`
What I have right now is this:
Intent intent = new Intent(this, MainActivity.class);
intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent,
        PendingIntent.FLAG_ONE_SHOT);

NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this)
        .setSmallIcon(R.mipmap.ic_launcher)
        .setContentTitle("Firebase Push Notification")
        .setContentText(messageBody)
        .setAutoCancel(true)
        .setPriority(Notification.PRIORITY_MAX)
        .setDefaults(-1)
        .setContentIntent(pendingIntent);

NotificationManager notificationManager =
        (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
notificationManager.notify(0, notificationBuilder.build());
I don't have any errors and the notification is showing but I repeat, no
sound, no vibration, no led lights, no heads-up.
To send the notification I use a python library on my server:
# Send to single device.
from pyfcm import FCMNotification
push_service = FCMNotification(api_key="xxxxxxxxxxxxxxxx")
registration_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
message_title = "Test"
message_body = "Test notification"
result = push_service.notify_single_device(registration_id=registration_id, message_title=message_title, message_body=message_body)
print result
Note: I already have the permission defined in the manifest: `<uses-permission
android:name="android.permission.VIBRATE" />`. Any idea?
Answer: **You aren't sending the sound flag from your server**; maybe that's why you aren't
getting any notification sound. Please try adding:

sound = "default"

This will probably get you the notification sound.
Have a look at this:
<https://firebase.google.com/docs/cloud-messaging/http-server-ref>
for better understanding of fcm downstream android notification flags.
Do let me know if it changes anything for you.
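With `pyfcm`, that flag can be passed directly into the call; a sketch (treating the `sound` parameter name as an assumption for your pyfcm version):

result = push_service.notify_single_device(
    registration_id=registration_id,
    message_title=message_title,
    message_body=message_body,
    sound="Default")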
|
Python usage of regular expressions
Question: How can I extract _string1#string2_ from the line below?
The # character and the structure of the line is always the same.
Answer: I would like to refer you to this
[gem](http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-
xhtml-self-contained-tags): in short, a regex is not the appropriate tool for this job.

Also, have you tried an
[XML](https://docs.python.org/3/library/xml.etree.elementtree.html) parser
instead?
EDIT:
import xml.etree.ElementTree as ET
a = "<html><body><p style=\"margin:0;\">string1#string2</p></body></html>"
root = ET.fromstring(a)
c = root[0][0].text
OUT:
c
'string1#string2'
d = c.replace('#', ' ').split()
Out:
d
['string1', 'string2']
|
Using pandas to scrape weather data from wundergound
Question: I came across a very useful set of scripts on Shane Lynn's blog for the [analysis
of weather data](http://www.shanelynn.ie/analysis-of-weather-data-using-
pandas-python-and-seaborn/). The first script, used to scrape data from
Weather Underground, is as follows:
import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time

def getRainfallData(station, day, month, year):
    """
    Function to return a data frame of minute-level weather data for a single Wunderground PWS station.

    Args:
        station (string): Station code from the Wunderground website
        day (int): Day of month for which data is requested
        month (int): Month for which data is requested
        year (int): Year for which data is requested

    Returns:
        Pandas Dataframe with weather data for specified station and date.
    """
    url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
    full_url = url.format(station=station, day=day, month=month, year=year)
    # Request data from wunderground data
    response = requests.get(full_url, headers={'User-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'})
    data = response.text
    # remove the excess <br> from the text data
    data = data.replace('<br>', '')
    # Convert to pandas dataframe (fails if issues with weather station)
    try:
        dataframe = pd.read_csv(io.StringIO(data), index_col=False)
        dataframe['station'] = station
    except Exception as e:
        print("Issue with date: {}-{}-{} for station {}".format(day,month,year, station))
        return None
    return dataframe

# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))

# Create a list of stations here to download data for
stations = ["ILONDON28"]
# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}

# Gather data for each station in turn and save to CSV.
for station in stations:
    print("Working on {}".format(station))
    data[station] = []
    for date in dates:
        # Print period status update messages
        if date.day % 10 == 0:
            print("Working on date: {} for station {}".format(date, station))
        done = False
        while done == False:
            try:
                weather_data = getRainfallData(station, date.day, date.month, date.year)
                done = True
            except ConnectionError as e:
                # May get rate limited by Wunderground.com, backoff if so.
                print("Got connection error on {}".format(date))
                print("Will retry in {} seconds".format(backoff_time))
                time.sleep(10)
        # Add each processed date to the overall data
        data[station].append(weather_data)
    # Finally combine all of the individual days and output to CSV for analysis.
    pd.concat(data[station]).to_csv("data/{}_weather.csv".format(station))
However, I get the error:
Working on ILONDONL28
Issue with date: 1-8-2016 for station ILONDONL28
Issue with date: 2-8-2016 for station ILONDONL28
Issue with date: 3-8-2016 for station ILONDONL28
Issue with date: 4-8-2016 for station ILONDONL28
Issue with date: 5-8-2016 for station ILONDONL28
Issue with date: 6-8-2016 for station ILONDONL28
Can anyone help me with this error?
The data for the chosen station and the time period is available, as shown at
this
[link](https://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=ILONDONL28&day=01&month=08&year=2016&graphspan=day&format=1).
Answer: The output you are getting is because an exception is being raised. If you
added a `print e` you would see that this is because `import io` was missing
from the top of the script. Secondly, the station name you gave was out by one
character. Try the following:
import io
import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time
def getRainfallData(station, day, month, year):
"""
Function to return a data frame of minute-level weather data for a single Wunderground PWS station.
Args:
station (string): Station code from the Wunderground website
day (int): Day of month for which data is requested
month (int): Month for which data is requested
year (int): Year for which data is requested
Returns:
Pandas Dataframe with weather data for specified station and date.
"""
url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
full_url = url.format(station=station, day=day, month=month, year=year)
# Request data from wunderground data
response = requests.get(full_url)
data = response.text
# remove the excess <br> from the text data
data = data.replace('<br>', '')
# Convert to pandas dataframe (fails if issues with weather station)
try:
dataframe = pd.read_csv(io.StringIO(data), index_col=False)
dataframe['station'] = station
except Exception as e:
print("Issue with date: {}-{}-{} for station {}".format(day,month,year, station))
return None
return dataframe
# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))
# Create a list of stations here to download data for
stations = ["ILONDONL28"]
# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}
# Gather data for each station in turn and save to CSV.
for station in stations:
print("Working on {}".format(station))
data[station] = []
for date in dates:
# Print period status update messages
if date.day % 10 == 0:
print("Working on date: {} for station {}".format(date, station))
done = False
while done == False:
try:
weather_data = getRainfallData(station, date.day, date.month, date.year)
done = True
except ConnectionError as e:
# May get rate limited by Wunderground.com, backoff if so.
print("Got connection error on {}".format(date))
print("Will retry in {} seconds".format(backoff_time))
time.sleep(backoff_time)
# Add each processed date to the overall data
data[station].append(weather_data)
# Finally combine all of the individual days and output to CSV for analysis.
pd.concat(data[station]).to_csv(r"data/{}_weather.csv".format(station))
Giving you an output CSV file starting as follows:
,Time,TemperatureC,DewpointC,PressurehPa,WindDirection,WindDirectionDegrees,WindSpeedKMH,WindSpeedGustKMH,Humidity,HourlyPrecipMM,Conditions,Clouds,dailyrainMM,SoftwareType,DateUTC,station
0,2016-08-01 00:05:00,17.8,11.6,1017.5,ESE,120,0.0,0.0,67,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:05:00,ILONDONL28
1,2016-08-01 00:20:00,17.7,11.0,1017.5,SE,141,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:20:00,ILONDONL28
2,2016-08-01 00:35:00,17.5,10.8,1017.5,South,174,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:35:00,ILONDONL28
If you are not getting a CSV file, I suggest you add a full path to the output
filename.
|
Generate html document with images and text within python script (without servers if possible)
Question: How can I generate HTML containing images and text, using a template and css,
in python?
There are few similar questions on stackoverflow (e.g.:
[Q1](http://stackoverflow.com/questions/6748559/generating-html-documents-in-
python), [Q2](http://stackoverflow.com/questions/98135/how-do-i-use-django-
templates-without-the-rest-of-django),
[Q3](http://stackoverflow.com/questions/16523939/how-to-write-and-save-html-
file-in-python)) but they offer solutions that (to me) seem overkill, like
requiring servers (e.g. genshi).
Simple code using `django` would be as follows:
from django.template import Template, Context
from django.conf import settings
settings.configure() # We have to do this to use django templates standalone - see
# http://stackoverflow.com/questions/98135/how-do-i-use-django-templates-without-the-rest-of-django
# Our template. Could just as easily be stored in a separate file
template = """
<html>
<head>
<title>Template {{ title }}</title>
</head>
<body>
Body with {{ mystring }}.
</body>
</html>
"""
t = Template(template)
c = Context({"title": "title from code",
"mystring":"string from code"})
print t.render(c)
(From here: [Generating HTML documents in
python](http://stackoverflow.com/questions/6748559/generating-html-documents-
in-python))
This code produces an error, supposedly because I need to set up a backend:
Traceback (most recent call last):
File "<input>", line 17, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/template/base.py", line 184, in __init__
engine = Engine.get_default()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/template/engine.py", line 81, in get_default
"No DjangoTemplates backend is configured.")
django.core.exceptions.ImproperlyConfigured: No DjangoTemplates backend is configured.
Is there a simple way to have a template.html and style.css and a bunch of
images, and use data in a python script to replace placeholders in the
template.html without having to set up a server?
Answer: There are quite a few Python template engines; for a start you may want to
have a look here: <https://wiki.python.org/moin/Templating>
As far as I'm concerned I'd use jinja2, but YMMV.
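A minimal sketch with jinja2 (`pip install jinja2`); the template string, variable names and image path here are just placeholders:

    from jinja2 import Template

    template = Template("""
    <html>
    <head>
      <title>{{ title }}</title>
      <link rel="stylesheet" href="style.css">
    </head>
    <body>
      <p>{{ mystring }}</p>
      <img src="{{ image_path }}">
    </body>
    </html>
    """)

    html = template.render(title="title from code",
                           mystring="string from code",
                           image_path="images/plot.png")

    with open("output.html", "w") as f:
        f.write(html)

For a separate `template.html` file, `jinja2.Environment` with a `FileSystemLoader` does the same job; no server is needed in either case.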
|
Python Case Matching Input and Output
Question: I'm doing the pig latin question that I'm sure everyone here is familiar
with. The only thing I can't seem to get is matching the case of the input and
output. For example, when the user enters Latin, my code produces `atinLay`. I
want it to produce `Atinlay`.
import string
punct = string.punctuation
punct += ' '
vowel = 'aeiouyAEIOUY'
consonant = 'bcdfghjklmnpqrstvwxzBCDFGHJKLMNPQRSTVWXZ'
final_word = input("Please enter a single word. ")
first_letter = final_word[:1]
index = 0
if any((p in punct) for p in final_word):
print("You did not enter a single word!")
else:
while index < len(final_word) and (not final_word[index] in vowel):
index = index+1
if any((f in vowel) for f in first_letter):
print(final_word + 'yay')
elif index < len(final_word):
print(final_word[index:]+final_word[:index]+'ay')
Answer: What you need is
[`str.title()`](https://docs.python.org/3/library/stdtypes.html#str.title). Once
you have done your pig latin conversion, you can use the `title()` built-in
method to produce the desired output, like so:
>>> "atinLay".title()
'Atinlay'
To check if a string is lower case, you can use
[`str.islower()`](https://docs.python.org/3/library/stdtypes.html#str.islower).
Take a peek at the docs.
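Putting it together, a minimal sketch (the variable `pig` holding the converted word is hypothetical):

    word = input("Please enter a single word. ")
    pig = word[1:] + word[:1] + 'ay'   # naive conversion, e.g. "Latin" -> "atinLay"

    # Match the case of the original input
    if word[:1].isupper():
        pig = pig.title()              # "atinLay" -> "Atinlay"
    else:
        pig = pig.lower()
    print(pig)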
|
Float value behaviour in Python 2.6 and Python 2.7
Question: I have to convert a string to a tuple of floats. Python 2.7 gives the
conversion I expect, but Python 2.6 does not.
I want the same behaviour in Python 2.6.
Can anyone explain why the two versions differ and how to get the Python 2.7
output in Python 2.6?
**Python 2.6**
>>> a
'60.000,494.100,361.600,553.494'
>>> eval(a)
(60.0, 494.10000000000002, 361.60000000000002, 553.49400000000003)
>>> import ast
>>> ast.literal_eval(a)
(60.0, 494.10000000000002, 361.60000000000002, 553.49400000000003)
>>>
>>> for i in a.split(","):
... float(i)
...
60.0
494.10000000000002
361.60000000000002
553.49400000000003
>>>
**Python 2.7**
>>> a
'60.000,494.100,361.600,553.494'
>>> eval(a)
(60.0, 494.1, 361.6, 553.494)
>>> import ast
>>> ast.literal_eval(a)
(60.0, 494.1, 361.6, 553.494)
>>>
>>> for i in a.split(","):
... float(i)
...
60.0
494.1
361.6
553.494
It does not look good.
**[Edit 2]**
**I just print value and condition**
print fGalleyTopRightOddX, ">=", tLinetextBbox[2], fGalleyTopRightOddX>=tLinetextBbox[2]
361.6 >= 361.6 False
I calculate `tLinetextBbox` value from string and which is
`361.60000000000002` and `fGalleyTopRightOddX` value is `361.6`
I am working on **Python Django** project where **apache** is server.
1. `fGalleyTopRightOddX` i.e. `361.6` is calculated in apache environment
2. `tLinetextBbox` i.e. `361.60000000000002` is calculated on cmd means I pass `fGalleyTopRightOddX` to program which run by command `line os.system`
**[Edit 3]** Just one more information,
when I log diction in text file then i get `tLinetextBbox` vale as
`361.59999999999997`
Answer: In order to get the same result in Python 2.6, you have to explicitly do:
'%.12g' % float_variable
Better to create a custom function to do this as:
def convert_to_my_float(float_value):
return float('%.12g' % float_value)
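For display purposes the formatting alone is enough; for example, in a Python 2.6 shell:

    >>> '%.12g' % 494.10000000000002
    '494.1'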
* * *
As per [Python's Decimal
Objects](https://docs.python.org/2/library/decimal.html#decimal-objects)
Document:
> Changed in version 2.6: leading and trailing whitespace characters are
> permitted when creating a Decimal instance from a string.
>
> Changed in version 2.7: The argument to the constructor is now permitted to
> be a float instance.
They behave differently because the `float.__repr__()` and `float.__str__()`
methods changed in Python 2.7: they now produce the shortest string that
round-trips back to the same float value.
|
Python Pyramid url replacement variable restrictions
Question: I'm developing in Pyramid 1.7 and running into an interesting scenario where
some URL dispatch replacement variables match the route, while others do not.
These variables are numbers, which may not be best practice or even be allowed
from what I can tell in the documentation:
<http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/urldispatch.html>
My route is essentially defined as:
config.add_route("my_route", "/path/more_path/{num_var1}-{num_var2}-even_more_path")
The funny thing I'm seeing is that if num_var1 = 1 and num_var2 = 1, the path
resolves fine. If num_var1 = 100 and num_var_2 = 100, it also resolves fine.
Yet, if num_var1 = 1 and num_var2 = 100, it fails to resolve. Is this a
failure I should expect for some reason or should this properly resolve?
Thanks!
Answer: I made a test case which passes fine with your examples. Feel free to play
with this until it reproduces your issues, but until then I'm not sure how to
help.
from pyramid.config import Configurator
from webtest import TestApp
config = Configurator()
config.add_route('my_route', '/path/more_path/{num_var1}-{num_var2}-even_more_path')
config.add_view(lambda r: 'hello', route_name='my_route', renderer='string')
app = config.make_wsgi_app()
test = TestApp(app)
test.get('/path/more_path/1-1-even_more_path')
test.get('/path/more_path/100-100-even_more_path')
test.get('/path/more_path/1-100-even_more_path')
test.get('/path/more_path/1-100') # fails, missing extra path
|
Python subprocess not returning
Question: I want to call a Python script from Jenkins and have it build my app, FTP it
to the target, and run it.
I am trying to build and the `subprocess` command fails. I have tried this
with both `subprocess.call()` and `subprocess.popen()`, with the same result.
When I evaluate `shellCommand` and run it from the command line, the build
succeeds.
Note that I have 3 shell commands: 1) remove work directory, 2) create a
fresh, empty, work directory, then 3) build. The first two commands return
from `subprocess`, but the third hangs (although it completes when run from
the command line).
What am I doing wrong? Or what alternatives do I have for executing that
command?
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand):
try:
process = subprocess.call(shellCommand, shell=True, stdout=subprocess.PIPE)
#process.wait()
return process #.returncode
except KeyboardInterrupt, e: # Ctrl-C
raise e
except SystemExit, e: # sys.exit()
raise e
except Exception, e:
print 'Exception while executing shell command : ' + shellCommand
print str(e)
traceback.print_exc()
os._exit(1)
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def BuildApplciation(arguments):
# See http://gnuarmeclipse.github.io/advanced/headless-builds/
jenkinsWorkspaceDirectory = arguments.eclipseworkspace + '/jenkins'
shellCommand = 'rm -r ' + jenkinsWorkspaceDirectory
ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand)
shellCommand = 'mkdir ' + jenkinsWorkspaceDirectory
if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
print "Error making Jenkins work directory in Eclipse workspace : " + jenkinsWorkspaceDirectory
return False
application = 'org.eclipse.cdt.managedbuilder.core.headlessbuild'
shellCommand = 'eclipse -nosplash -application ' + application + ' -import ' + arguments.buildRoot + '/../Project/ -build myAppApp/TargetRelease -cleanBuild myAppApp/TargetRelease -data ' + jenkinsWorkspaceDirectory + ' -D DO_APPTEST'
if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
print "Error in build"
return False
Answer: OK, I got it. No comments or answers so far, but rather than deleting
the question, I will leave it here in case it helps anyone in future.
I Googled further and found [this
page](https://www.webcodegeeks.com/python/python-subprocess-example/), which,
at 1.2 says
> One way of gaining access to the output of the executed command would be to
> use PIPE in the arguments stdout or stderr, but the child process will block
> if it generates enough output to a pipe to fill up the OS pipe buffer as the
> pipes are not being read from.
Sure enough, when I deleted the `, stdout=subprocess.PIPE` from the code
above, it worked as expected.
As I only want the exit code from the subprocess, the above code is enough for
me. Read the linked page if you want the output of the command.
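If you do need the output as well, a minimal sketch of the deadlock-safe pattern uses `communicate()`, which drains the pipes for you (the function and variable names here are illustrative):

    import subprocess

    def run_shell_command(shell_command):
        process = subprocess.Popen(shell_command, shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        # communicate() reads both pipes to completion, so the child
        # can never block on a full OS pipe buffer
        stdout, stderr = process.communicate()
        return process.returncode, stdout, stderr

    returncode, out, err = run_shell_command('ls -l')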
|
How to make a python program that lists the positions and displays and error message if not found
Question: I wrote this code:
sentence = input("Type in your sentance ").lower().split()
Word = input("What word would you like to find? ")
Keyword = Word.lower().split().append(Word)
positions = []
for (S, subword) in enumerate(sentence):
if (subword == Word):
positions.append
print("The word" , Word , "is in position" , S+1)
But there are 2 problems with it: I don't know how to handle the case where the
user's word is not found, and how to print the positions together, as in "The
word is in position [1,3,6,9]". Any help? Thanks
Answer: Your code is having multiple errors. I am pasting here the sample code for
your reference:
from collections import defaultdict
sentence_string = raw_input('Enter Sentence: ')
# Enter Sentence: Here is the content I need to check for index of all words as Yes Hello Yes Yes Hello Yes
word_string = raw_input("Enter Words: ")
# Enter Words: yes hello
word_list = sentence_string.lower().split()
words = word_string.lower().split()
my_dict = defaultdict(list)
for i, word in enumerate(word_list):
my_dict[word].append(i)
for word in words:
print "The word" , word, "is in position " , my_dict[word]
# The word yes is in position [21, 23, 24, 26]
# The word hello is in position [22, 25]
The approach here is:
1. Break your sentence i.e `sentence_string` here into list of words
2. Break your word string into list of words.
3. Create a dictionary `my_dict` to store all the indexes of the words in `word_list`
4. Iterate over the `words` to get your result with index, based on the value you store in `my_dict`.
_Note:_ The commented part in above example is basically the output of the
code.
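To cover the "word not found" part of the question, one small addition (since a `defaultdict` would otherwise silently return an empty list):

    for word in words:
        if word in my_dict:
            print "The word", word, "is in position", my_dict[word]
        else:
            print "The word", word, "was not found"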
|
How to avoid encoding parameter when opening file in Python3
Question: When I am working on a .txt file on a Windows device I must save as either:
ANSI, Unicode, Unicode big endian, or UTF-8. When I run Python3 on an OSX
device and try to import and read the .txt file, I have to do something along
the lines of:
with open('ships.txt', 'r', encoding='utf-8') as f:
for line in f.readlines():
print(line)
Is there a particular format I should use to encode the .txt file on the
Windows device to avoid adding the encoding parameter when opening the file in
Python?
Answer: Call `locale.getpreferredencoding(False)` on your OSX device. That's the
default encoding used for reading a file on that device. Save in that encoding
on Windows and you won't need to specify the encoding on your OSX device.
But as the [Zen of Python](https://www.python.org/dev/peps/pep-0020/) says,
"Explicit is better than implicit." Since you know the encoding, why not
specify it?
|
Python: UserWarning: This pattern has match groups. To actually get the groups, use str.extract
Question: I have a dataframe and I am trying to get the rows where one of the
columns contains certain strings. The dataframe looks like:
member_id,event_path,event_time,event_duration
30595,"2016-03-30 12:27:33",yandex.ru/,1
30595,"2016-03-30 12:31:42",yandex.ru/,0
30595,"2016-03-30 12:31:43",yandex.ru/search/?lr=10738&msid=22901.25826.1459330364.89548&text=%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B+%D0%BE%D0%BD%D0%BB%D0%B0%D0%B9%D0%BD&suggest_reqid=168542624144922467267026838391360&csg=3381%2C3938%2C2%2C3%2C1%2C0%2C0,0
30595,"2016-03-30 12:31:44",yandex.ru/search/?lr=10738&msid=22901.25826.1459330364.89548&text=%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B+%D0%BE%D0%BD%D0%BB%D0%B0%D0%B9%D0%BD&suggest_reqid=168542624144922467267026838391360&csg=3381%2C3938%2C2%2C3%2C1%2C0%2C0,0
30595,"2016-03-30 12:31:45",yandex.ru/search/?lr=10738&msid=22901.25826.1459330364.89548&text=%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B+%D0%BE%D0%BD%D0%BB%D0%B0%D0%B9%D0%BD&suggest_reqid=168542624144922467267026838391360&csg=3381%2C3938%2C2%2C3%2C1%2C0%2C0,0
30595,"2016-03-30 12:31:46",yandex.ru/search/?lr=10738&msid=22901.25826.1459330364.89548&text=%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B+%D0%BE%D0%BD%D0%BB%D0%B0%D0%B9%D0%BD&suggest_reqid=168542624144922467267026838391360&csg=3381%2C3938%2C2%2C3%2C1%2C0%2C0,0
30595,"2016-03-30 12:31:49",kinogo.co/,1
30595,"2016-03-30 12:32:11",kinogo.co/melodramy/,0
And another df with urls
url
003\.ru\/[a-zA-Z0-9-_%$#?.:+=|()]+\/mobilnyj_telefon_bq_phoenix
003\.ru\/[a-zA-Z0-9-_%$#?.:+=|()]+\/mobilnyj_telefon_fly_
003\.ru\/sonyxperia
003\.ru\/[a-zA-Z0-9-_%$#?.:+=|()]+\/mobilnye_telefony_smartfony
003\.ru\/[a-zA-Z0-9-_%$#?.:+=|()]+\/mobilnye_telefony_smartfony\/brands5D5Bbr_23
1click\.ru\/sonyxperia
1click\.ru\/[a-zA-Z0-9-_%$#?.:+=|()]+\/chasy-motorola
I use
urls = pd.read_csv('relevant_url1.csv', error_bad_lines=False)
substr = urls.url.values.tolist()
data = pd.read_csv('data_nts2.csv', error_bad_lines=False, chunksize=50000)
result = pd.DataFrame()
for i, df in enumerate(data):
res = df[df['event_time'].str.contains('|'.join(substr), regex=True)]
but it returns:
UserWarning: This pattern has match groups. To actually get the groups, use str.extract.
How can I fix that?
Answer: At least one of the regex patterns in `urls` must use a capturing group.
`str.contains` only returns True or False for each row in `df['event_time']`
\-- it does not make use of the capturing group. Thus, the `UserWarning` is
alerting you that the regex uses a capturing group but the match is not used.
If you wish to remove the `UserWarning` you could find and remove the
capturing group from the regex pattern(s). They are not shown in the regex
patterns you posted, but they must be there in your actual file. Look for
parentheses outside of the character classes.
Alternatively, you could suppress this particular UserWarning by putting
import warnings
warnings.filterwarnings("ignore", 'This pattern has match groups')
before the call to `str.contains`.
* * *
Here is a simple example which demonstrates the problem (and solution):
# import warnings
# warnings.filterwarnings("ignore", 'This pattern has match groups') # uncomment to suppress the UserWarning
import pandas as pd
df = pd.DataFrame({ 'event_time': ['gouda', 'stilton', 'gruyere']})
urls = pd.DataFrame({'url': ['g(.*)']}) # With a capturing group, there is a UserWarning
# urls = pd.DataFrame({'url': ['g.*']}) # Without a capturing group, there is no UserWarning. Uncommenting this line avoids the UserWarning.
substr = urls.url.values.tolist()
df[df['event_time'].str.contains('|'.join(substr), regex=True)]
prints
script.py:10: UserWarning: This pattern has match groups. To actually get the groups, use str.extract.
df[df['event_time'].str.contains('|'.join(substr), regex=True)]
Removing the capturing group from the regex pattern:
urls = pd.DataFrame({'url': ['g.*']})
avoids the UserWarning.
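If you still need the grouping (for alternation or repetition), you can keep it without triggering the warning by making it non-capturing with `(?:...)`:

    urls = pd.DataFrame({'url': ['g(?:.*)']})  # non-capturing group, no UserWarning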
|
Creating a dictionary in python by combining two .csv files
Question: I am trying to create a dictionary in python by combining data from two .csv
files, by matching the first column of the two files. This is what I have so
far
import csv
with open('a.csv', 'r') as my_file1 :
rows1 = list(csv.reader(my_file1))
with open('aa.csv', 'r') as my_file2 :
rows2 = list(csv.reader(my_file2))
max_length = min(len(rows1), len(rows2))
for i in range(10):
new_dict = {}
if (rows1[i][0]== rows2[i][0]):
temp = {(rows1[i][0], (rows1[i][5], rows1[i][6], rows2[i][5], rows2[i][6] )) }
new_dict.update(temp)
print(new_dict)
The output that I get contains only the last data entry. It does not seem to
accumulate all the values. This is what the output looks like:
{'2016-09-12': ('1835400', '45.75', '21681500', '9.78')}
instead of the complete list with keys and values. How do I go about
correcting this? Thanks!
Answer: You're creating a new dictionary on every iteration of your `for`, so only the
update from the last iteration is kept, others have been thrown away.
You can solve this by moving the dictionary setup outside the `for`:
new_dict = {}
for i in range(10):
...
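For reference, a minimal corrected sketch of the whole loop, using the same rows read in the question and a plain key/value assignment instead of the intermediate set:

    new_dict = {}
    for i in range(min(len(rows1), len(rows2))):
        if rows1[i][0] == rows2[i][0]:
            new_dict[rows1[i][0]] = (rows1[i][5], rows1[i][6],
                                     rows2[i][5], rows2[i][6])
    print(new_dict)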
|
How To Change Pycharms Default Testing Skeleton From Unittest Format to Pytest?
Question: I'm trying to change from Unittest to PyTests. After changing the default test
runner from Unittests to py.test under Python integration Tools I'm still
getting the Unittest skeleton when creating a new test:
Instead of this:
from unittest import TestCase
class Test<selected function>(TestCase):
pass
I want it to be this:
import pytest
class Test< selected function >:
def test_<selected function>:
pass
I tried changing the Python Unit Test Code Template under
Preferences>Editor>File and Code templates.
No luck. Where do I change the default testing template?
Answer: In my case, the best solution I came up with was creating a new template. I
called it `Python Test` and the template is as follows.
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
import pytest
def test():
pass
This speeds up my test creation enough. At some point they will probably add
this testing functionality natively.
|
Unexpected behaviour in python multiprocessing
Question: I'm trying to understand the following odd behavior observed using the
Python `multiprocessing` module.
Sample testClass:
import os
import multiprocessing
class testClass(multiprocessing.Process):
def __del__(self):
print "__del__ PID: %d" % os.getpid()
print self.a
def __init__(self):
multiprocessing.Process.__init__(self)
print "__init__ PID: %d" % os.getpid()
self.a = 0
def run(self):
print "method1 PID: %d" % os.getpid()
self.a = 1
And a little test program:
from testClass import testClass
print "Start"
proc_list = []
proc_list.append(testClass())
proc_list[-1].start()
proc_list[-1].join()
print "End"
This produces:
Start
__init__ PID: 89578
method1 PID: 89585
End
__del__ PID: 89578
0
Why does it not print `1`?
I'm guessing that it's related to the fact that `run` is actually being
executed on a different process as can be seen. If this is the expected
behavior how is everyone using multiprocessing where processes have an
expensive `__init__` as in processes that need to open a database?
And shouldn't this behaviour be better highlighted in multiprocessing
documentation?
Answer: Your guess is right: `run()` executes in a separate child process, so the
assignment to `self.a` happens in the child's copy of the object and is never
reflected in the parent process, where `__del__` later runs. For the expensive
initialization case, you can wrap it inside a context manager:
def run(self):
with expensive_initialization() as initialized_object:
do_some_logic_here(initialized_object)
You will have a chance to properly initialize your object before calling
`do_some_logic_here`, and to properly release the resources after leaving the
context manager's block.
See
[documentation](https://docs.python.org/3.6/reference/datamodel.html#context-
managers).
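If the goal is to get a value back from the child, the usual tools are `multiprocessing.Value` or a `Queue`; a minimal sketch with `Value` (shared memory visible to both processes):

    import multiprocessing

    class TestClass(multiprocessing.Process):
        def __init__(self):
            multiprocessing.Process.__init__(self)
            # 'i' is a C int; this memory is shared with the child
            self.a = multiprocessing.Value('i', 0)

        def run(self):
            self.a.value = 1   # visible in the parent after join()

    if __name__ == '__main__':
        proc = TestClass()
        proc.start()
        proc.join()
        print proc.a.value   # prints 1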
|
How to avoid .pyc files using selenium webdriver/python while running test suites?
Question: There's no relevant answer to this question. When I run my test cases inside a
test suite using selenium webdriver with python the directory gets trashed
with .pyc files. They do not appear if I run test cases separately, only when
I run them inside one test suite. How can I avoid them?
import unittest
from FacebookLogin import FacebookLogin
from SignUp import SignUp
from SignIn import SignIn
class TestSuite(unittest.TestSuite):
def suite():
suite = unittest.TestSuite()
suite.addTest(FacebookLogin("test_FacebookLogin"))
suite.addTest(SignUp("test_SignUp"))
suite.addTest(SignIn("test_SignIn"))
return suite
if __name__ == "__main__":
unittest.main()
Answer: `pyc` files are created any time you `import` a module, but not when you run a
module directly as a script. That's why you're seeing them when you import the
modules with the test code but don't see them created when you run the modules
separately.
If you're invoking Python from the command line, you can suppress the creation
of `pyc` files by using the `-B` argument. You can also set the environment
variable `PYTHONDONTWRITEBYTECODE` with the same effect. From within Python
code, setting `sys.dont_write_bytecode = True` works too, as long as it runs
before the imports you want to affect.
In Python 3.2 and later, the `pyc` files get put into a separate `__pycache__`
folder, which might be visually nicer. It also allows multiple `pyc` files to
exist simultaneously for different interpreter versions that have incompatible
bytecode (a "tag" is added to the file name indicating which interpreter uses
each file).
But even in earlier versions of Python, I think that saying the `pyc` files
are "trashing" the directory is a bit hyperbolic. Usually you can exempt the
created files from source control (e.g. by listing `.pyc` in a `.gitignore` or
equivalent file), and otherwise ignore them. Having `pyc` files around speeds
up repeated imports of the file, since the interpreter doesn't need to
recompile the source to bytecode if the `pyc` file already has the bytecode
available.
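A minimal sketch of the in-code approach for your suite file (the flag must be set before the test modules are imported):

    import sys
    sys.dont_write_bytecode = True  # must come before the imports below

    import unittest
    from FacebookLogin import FacebookLogin  # imported without writing .pyc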
|
How to debug Cython in an IDE
Question: I am trying to debug Cython code that wraps a C++ class, and the error I am
hunting is somewhere in the C++ code.
It would be awfully convenient if I could somehow debug as if it were written
in one language, i.e. if there's an error in the C++ part, it shows me the
source code line there, and if the error is in the Python part it does the same.
Right now I always have to try to replicate the Python code using the class
in C++, and currently I have an error that only occurs when running through
Python ... I hope somebody can help me out :)
Answer: It's been a while for me and I forgot how I exactly did it, but when I was
writing my own C/C++ library and interfaced it with swig into python, I was
able to debug the C code with [DDD](https://www.gnu.org/software/ddd/). It was
important to compile with debug options. It wasn't great, but it worked for
me. I think you had to run `ddd python` and within the python terminal run my
faulty C code. You would have to make sure all linked libraries including
yours is loaded with the source code so that you could set breakpoints.
|
Output list of files from slideshow
Question: I have adapted a python script to display a slideshow of images. The original
script can be found at <https://github.com/cgoldberg/py-slideshow>
I want to be able to record the filename of each of the images that is
displayed so that I may more easily debug any errors (i.e., remove
incompatible images).
I have attempted to include a command to write the filename to a text file in
the `def get_image_paths` function. However, that has not worked. My code
appears below - any help is appreciated.
import pyglet
import os
import random
import argparse
window = pyglet.window.Window(fullscreen=True)
def get_scale(window, image):
if image.width > image.height:
scale = float(window.width) / image.width
else:
scale = float(window.height) / image.height
return scale
def update_image(dt):
img = pyglet.image.load(random.choice(image_paths))
sprite.image = img
sprite.scale = get_scale(window, img)
if img.height >= img.width:
sprite.x = ((window.width / 2) - (sprite.width / 2))
sprite.y = 0
elif img.width >= img.height:
sprite.y = ((window.height / 2) - (sprite.height / 2))
sprite.x = 0
else:
sprite.x = 0
sprite.y = 0
window.clear()
thefile=open('test.txt','w')
def get_image_paths(input_dir='.'):
paths = []
for root, dirs, files in os.walk(input_dir, topdown=True):
for file in sorted(files):
if file.endswith(('jpg', 'png', 'gif')):
path = os.path.abspath(os.path.join(root, file))
paths.append(path)
thefile.write(file)
return paths
@window.event
def on_draw():
sprite.draw()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dir', help='directory of images',
nargs='?', default=os.getcwd())
args = parser.parse_args()
image_paths = get_image_paths(args.dir)
img = pyglet.image.load(random.choice(image_paths))
sprite = pyglet.sprite.Sprite(img)
pyglet.clock.schedule_interval(update_image, 3)
pyglet.app.run()
Answer: The system doesn't have to write to the file immediately; it can keep the
text in a buffer and only save it when you close the file. So you probably
have to close the file.
Or you can use `thefile.flush()` after every `thefile.write()` to push new
text from the buffer to the file at once.
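A minimal sketch of a safer logging helper, assuming you call it with the chosen path (e.g. from `update_image`); opening in append mode inside a `with` block means the file is closed, and therefore flushed, on every write:

    def log_filename(path, logfile='test.txt'):
        # append mode; the with-block closes (and flushes) the file each time
        with open(logfile, 'a') as f:
            f.write(path + '\n')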
|
Extract the year and the month from a line in a file and use a map to print every time its found to add 1 to the value
Question:
def Stats():
file = open('mbox.txt')
d = dict()
for line in file:
if line.startswith('From'):
words = line.split()
for words in file:
key = words[3] + " " + words[6]
if key:
d[key] +=1
return d
The line reads
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
I want to pull "Jan 2008" as the key
My error message:
Traceback (most recent call last):
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 78, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 76, in <module>
File "C:\Users\Robert\Documents\PYTHON WORKSPACE\Program 1.py", line 63, in <module>
builtins.KeyError: 'u -'
Answer: Not the direct answer to the question, but a possible alternative solution -
use the [`dateutil`](https://labix.org/python-dateutil) datetime parser in a
"fuzzy" mode and simply format the extracted `datetime` object via
[`.strftime()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strftime):
In [1]: from dateutil.parser import parse
In [2]: s = "From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008"
In [3]: parse(s, fuzzy=True).strftime("%b %Y")
Out[3]: 'Jan 2008'
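For a direct fix of the original loop, a minimal sketch (assuming the relevant lines all have the seven-field `From address Day Mon D HH:MM:SS YYYY` shape shown above, so the month is `words[3]` and the year `words[6]`):

    def stats():
        d = {}
        with open('mbox.txt') as f:
            for line in f:
                words = line.split()
                # only the "From ..." lines with the full seven fields
                if line.startswith('From ') and len(words) == 7:
                    key = words[3] + " " + words[6]    # e.g. "Jan 2008"
                    d[key] = d.get(key, 0) + 1         # no KeyError on new keys
        return d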
|
How do I find the smallest number in a list of random integers on Python without using min()?
Question: I'm trying to figure out why this code isn't working! The only part not
working is the smallestNumber: it always comes back as zero. What am I doing
wrong?
import random
X = random.randint(10,15)
pickedNumber =int( input("Please enter a number: "))
print("Generating", (pickedNumber), "random Numbers between 20 and 50:")
for numberCount in range(1,pickedNumber+1):
numberCount = random.randint(20,50)
sum = 0
sum += numberCount
print(numberCount)
print('The sum = ',sum)
print('the average = ', sum/pickedNumber)
for pickedNumber in range(0,X,1):
number = random.randint(20,50)
if pickedNumber== 0 or pickedNumber < smallestNumber:
smallestNumber = pickedNumber
print('The smallest = ',smallestNumber)
Answer: There are several problems with your code that I won't address directly, but
as for finding the smallest item in a list of integers, simply sort the list
and return the first item in the list:
some_list = [5,7,1,9]
sorted_list = sorted(some_list)
smallest = sorted_list[0]
>>> smallest
1
You don't necessarily need to create a second list to store the sorted result,
it is simply there to illustrate the difference.
EDIT: You could also just store the first random int generated as
`smallestNumber` and then as each new random int is created, compare it to
`smallestNumber` and if the new int is smaller, set `smallestNumber` to it.
Example:
# stuff to generate a random int done here
smallest = new_random_int
# generate the next random int
if next_random_int < smallest:
    smallest = next_random_int
And so on, where `new_random_int` is the very first random integer you
generate, and `next_random_int` is any random int you generate after that.
Also, welcome to Stack Overflow!
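A compact runnable sketch of that second approach applied to your loop (variable names are illustrative):

    import random

    count = random.randint(10, 15)
    smallest = None
    for _ in range(count):
        number = random.randint(20, 50)
        if smallest is None or number < smallest:
            smallest = number
    print('The smallest = ', smallest)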
|
Image slideshow useing Python ttk
Question: I am looking for a way to display multiple photos in a slide show format.
I have not tried anything as I have no idea of what I'm doing to get to that
stage as there is no information anywhere that solves my problem.
thank you.
Answer: NOT MY OWN CODE, TAKEN FROM <https://www.daniweb.com/programming/software-
development/code/468841/tkinter-image-slide-show-python>
''' tk_image_slideshow3.py
create a Tkinter image repeating slide show
tested with Python27/33 by vegaseat 03dec2013
'''
from itertools import cycle
try:
# Python2
import Tkinter as tk
except ImportError:
# Python3
import tkinter as tk
class App(tk.Tk):
'''Tk window/label adjusts to size of image'''
def __init__(self, image_files, x, y, delay):
# the root will be self
tk.Tk.__init__(self)
# set x, y position only
self.geometry('+{}+{}'.format(x, y))
self.delay = delay
# allows repeat cycling through the pictures
# store as (img_object, img_name) tuple
self.pictures = cycle((tk.PhotoImage(file=image), image)
for image in image_files)
self.picture_display = tk.Label(self)
self.picture_display.pack()
def show_slides(self):
'''cycle through the images and show them'''
# next works with Python26 or higher
img_object, img_name = next(self.pictures)
self.picture_display.config(image=img_object)
# shows the image filename, but could be expanded
# to show an associated description of the image
self.title(img_name)
self.after(self.delay, self.show_slides)
def run(self):
self.mainloop()
# set milliseconds time between slides
delay = 3500
# get a series of gif images you have in the working folder
# or use full path, or set directory to where the images are
image_files = [
'Slide_Farm.gif',
'Slide_House.gif',
'Slide_Sunset.gif',
'Slide_Pond.gif',
'Slide_Python.gif'
]
# upper left corner coordinates of app window
x = 100
y = 50
app = App(image_files, x, y, delay)
app.show_slides()
app.run()
|
Using Python to Read Rows of CSV Files With Column Content containing Comma
Question: I am trying to parse this CSV and print out the various columns separately.
However my code is having difficulty doing so possibly due to the commas in
the addresses, making it hard to split them into 3 columns.
How can this be done?
**Code**
with open("city.csv") as f:
for row in f:
print row.split(',')
**Result**
['original address', 'latitude', 'longitude\n']
['"2 E Main St', ' Madison', ' WI 53703"', '43.074691', '-89.384168\n']
['"Minnesota State Capitol', ' St Paul', ' MN 55155"', '44.955143', '-93.102307\n']
['"500 E Capitol Ave', ' Pierre', ' SD 57501"', '44.36711', '-100.346342\n']
**city.csv**
original address,latitude,longitude
"2 E Main St, Madison, WI 53703",43.074691,-89.384168
"Minnesota State Capitol, St Paul, MN 55155",44.955143,-93.102307
"500 E Capitol Ave, Pierre, SD 57501",44.36711,-100.346342
Answer: If you just want to parse the file, I would recommend using [Pandas
Library](http://pandas.pydata.org/)
import pandas as pd
data_frame = pd.read_csv("city.csv")
which gives you a data frame that looks like this in iPython notebook.
[![resulting data
frame](http://i.stack.imgur.com/ZPW6j.png)](http://i.stack.imgur.com/ZPW6j.png)
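Worth noting: the standard library's `csv.reader` (which the title mentions, but the shown code bypasses with a raw `split(',')`) also handles the quoted commas correctly; a minimal sketch:

    import csv

    with open("city.csv") as f:
        reader = csv.reader(f)
        header = next(reader)          # skip the header row
        for address, lat, lon in reader:
            print address, '|', lat, '|', lon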
|
Is it possible to check for global variables in IPython when running a file?
Question: I have a file like so:
import pandas as pd
def a_func():
print 'doing stuff'
if __name__ == "__main__":
if 'data' not in globals():
print 'loading data...'
data = pd.read_csv('datafile.csv')
When I run the file in IPython with `run file.py`, it always loads the data,
but when I print `globals().keys()` in IPython, I can see the `data` variable.
Is there a way to access the global variables from IPython from within my
`file.py` script, so I don't have to load the data every time I run the script
in IPython?
Answer: Every time a Python file is executed, the interpreter gives it a fresh
`globals()` dictionary. So if you try to do something like
print globals().keys()
inside the script, you can see that 'data' is not in its globals. This
dictionary gets updated as the program runs, so I don't think you can refer
to the IPython session's globals() from within the program.
Check this [link](https://docs.python.org/2/faq/programming.html#how-can-i-
have-modules-that-mutually-import-each-other); according to it, globals are
emptied.
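One caveat worth adding: IPython's `%run` magic accepts an `-i` flag that runs the script in the interactive namespace instead of a fresh one, so the `'data' in globals()` guard in the script can then see variables from your session:

    # at the IPython prompt; subsequent runs skip the expensive load
    %run -i file.py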
|
CPython 2.7 + Java
Question: My main program is written in Python 2.7 (on Mac) and needs to leverage
some functionality written in Java 1.8. I think CPython cannot import Java
libraries directly (unlike Jython)?
If there is no way to call Java from CPython directly, could I integrate in
this way: wrap the Java function into a Java command-line application, have
Python 2.7 call this Java application (e.g. using `os.system`) passing
command-line parameters as inputs, and retrieve its console output?
regards, Lin
Answer: * If you have a lot of dependencies on Java/JVM, you can consider using `Jython`.
* If you would like to develop a scalable/maintainable application, consider using microservices and keep the Java and Python components separate.
* If your call to Java is simple and it is easy to capture the output and failures, you can go ahead with running a system command to invoke the Java parts, as sketched below.
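For that last option, `subprocess` is a cleaner choice than `os.system` because it captures the output and the exit status directly. A minimal sketch, where the jar name and arguments are placeholders:

    import subprocess

    # hypothetical jar and arguments; raises CalledProcessError on failure
    output = subprocess.check_output(
        ['java', '-jar', 'MyTool.jar', '--input', 'data.txt'])
    print output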
|
Looping over a product to compute a series in Python
Question: I'm trying to compute the result of the series below in Python:
The formula
[![enter image description
here](http://i.stack.imgur.com/vxNE0.gif)](http://i.stack.imgur.com/vxNE0.gif)
So, here is my function to compute:
def compute(limit):
pi = 1
for i in range(1,limit):
pi = pi * ((4*(i**2))//(4*(i**2)-1))
print(2*pi)
compute(10000)  # returns 2
I know it is a silly question. But could you address the problem with this
snippet?
Answer: As others have already mentioned, `//` is integer division. No matter whether
you're dividing floats or integers, the result will always be an integer. Pi
is not an integer, so you should use float division: `/` and tell Python
explicitly that you really want a float by converting one of the numbers to
float*. For example, you could do `4` -> `4.` (notice the dot).
You could do the same thing in a more clear way using
[`functools`](https://docs.python.org/3/library/functools.html) and
[`operator`](https://docs.python.org/2/library/operator.html) modules and a
generator expression.
import functools
import operator
def compute(limit):
return 2 * functools.reduce(operator.mul, (4. * i**2 / (4 * i**2 - 1) for i in range(1, limit + 1)))
* Python 3 does float division with `4/(something)` even without this, but Python 2.7 will need `from __future__ import division` to divide as freely as Python 3; otherwise dividing two integers will still truncate to an integer.
|
Elasticsearch 2.4 nodes does not form cluster with ConnectTransportException
Question: I am already running ELK stack with Elasticsearch(ES) 1.7 with docker
container with 3 nodes, each running one ES container, running behind `nginx`
server. Now I am trying to upgrade ES to 2.4.0. Root user is not allowed in ES
2.4.0 so I am using `-Des.root.insecure.allow=true` option.
#Pulling SLES12 thin base image
FROM private-registry-1
#Author
MAINTAINER xyz
# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2
RUN zypper --no-gpg-checks -n refresh
#Install required packages and dependencies
RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1
#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_CLUSTER_NAME=ccs-elasticsearch
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_DATA_PATH="//data"
ENV ES_LOGS_PATH="//var//log"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300
WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} \
&& cp ${ES_DIR}/config/elasticsearch.yml ${ES_CONFIG_PATH}/elasticsearch-default.yml
#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}
#Removing binary files which are not needed
RUN zypper -n rm wget
# Removing zypper repos
RUN zypper rr caspiancs_common
COPY query-crs-es.sh ${ES_DIR}/bin/query-crs-es.sh
RUN chmod +x ${ES_DIR}/bin/query-crs-es.sh
COPY query-crs-wrapper.py ${ES_DIR}/bin/query-crs-wrapper.py
RUN chmod +x ${ES_DIR}/bin/query-crs-wrapper.py
ENV CRS_PARSER_PYTHON_SCRIPT="${ES_DIR}//bin//query-crs-wrapper.py"
#Copy elastic search bootstrap script
COPY elasticsearch-bootstrap-and-run.sh ${ES_DIR}/
RUN chmod +x ${ES_DIR}/elasticsearch-bootstrap-and-run.sh
COPY config-es-cluster ${ES_DIR}/bin/config-es-cluster
RUN chmod +x ${ES_DIR}/bin/config-es-cluster
COPY elasticsearch-config-script ${ES_DIR}/bin/elasticsearch-config-script
RUN chmod +x ${ES_DIR}/bin/elasticsearch-config-script
#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/elasticsearch-bootstrap-and-run.sh
Configuration file will be modified by `elasticsearch-config` and `config-es-
cluster`, mentioned in Dockerfile, as follows:
#Bootstrap script to configure elasticsearch.yml file
echo "cluster.name: ${ES_CLUSTER_NAME}" > ${ES_CONFIG_PATH}/elasticsearch.yml
echo "path.data: ${ES_DATA_PATH}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "path.logs: ${ES_LOGS_PATH}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Performance optimization settings
echo "index.number_of_replicas: 1" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "index.number_of_shards: 3" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "discovery.zen.ping.multicast.enabled: false" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "bootstrap.mlockall: true" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "indices.memory.index_buffer_size: 50%" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Search thread pool
echo "threadpool.search.type: fixed" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.search.size: 20" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.search.queue_size: 100000" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Index thread pool
echo "threadpool.index.type: fixed" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.index.size: 60" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.index.queue_size: 200000" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#publish host as container host address
#echo "network.publish_host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.bind_host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.publish_host: ${CONTAINER_PRIVATE_IP}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.bind_host: ${CONTAINER_PRIVATE_IP}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "network.host: 0.0.0.0" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "htpp.port: 9200" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "transport.tcp.port: 9300-9400" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "discovery.zen.minimum_master_nodes: 1" >> ${ES_CONFIG_PATH}/elasticsearch.yml
`ELASTICSEARCH_IPS` is an array of the other nodes' IPs, obtained on every node
by running a script called `query-crs-es.sh`. Eventually the array will hold
the IPs of the other two nodes of the cluster. Please note these are the
nodes' IPs, not the containers' private IPs.
Whenever I run the containers (I use `ansible`), all nodes start up but fail
to form a cluster. I consistently get these errors:
Node1:
[2016-10-07 09:45:23,313][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:23,474][INFO ][node ] [Dragon Lord] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:23,474][INFO ][node ] [Dragon Lord] initializing ...
[2016-10-07 09:45:23,970][INFO ][plugins ] [Dragon Lord] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:23,994][INFO ][env ] [Dragon Lord] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:23,994][INFO ][env ] [Dragon Lord] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:24,028][WARN ][threadpool ] [Dragon Lord] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:45:25,540][INFO ][node ] [Dragon Lord] initialized
[2016-10-07 09:45:25,540][INFO ][node ] [Dragon Lord] starting ...
[2016-10-07 09:45:25,687][INFO ][transport ] [Dragon Lord] publish_address {172.17.0.15:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:45:25,693][INFO ][discovery ] [Dragon Lord] ccs-elasticsearch/5wNwWJRFRS-2dRY5AGqqGQ
[2016-10-07 09:45:28,721][INFO ][cluster.service ] [Dragon Lord] new_master {Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-10-07 09:45:28,765][INFO ][http ] [Dragon Lord] publish_address {172.17.0.15:9200}, bound_addresses {[::]:9200}
[2016-10-07 09:45:28,765][INFO ][node ] [Dragon Lord] started
[2016-10-07 09:45:28,856][INFO ][gateway ] [Dragon Lord] recovered [20] indices into cluster_state
Node2:
[2016-10-07 09:45:58,561][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:58,729][INFO ][node ] [Defensor] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:58,729][INFO ][node ] [Defensor] initializing ...
[2016-10-07 09:45:59,215][INFO ][plugins ] [Defensor] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:59,237][INFO ][env ] [Defensor] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:59,237][INFO ][env ] [Defensor] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:59,266][WARN ][threadpool ] [Defensor] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:46:00,733][INFO ][node ] [Defensor] initialized
[2016-10-07 09:46:00,733][INFO ][node ] [Defensor] starting ...
[2016-10-07 09:46:00,833][INFO ][transport ] [Defensor] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:46:00,837][INFO ][discovery ] [Defensor] ccs-elasticsearch/RXALMe9NQVmbCz5gg1CwHA
[2016-10-07 09:46:03,876][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:06,899][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:09,917][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
Node3:
[2016-10-07 09:45:58,624][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:58,806][INFO ][node ] [Scarlet Beetle] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:58,806][INFO ][node ] [Scarlet Beetle] initializing ...
[2016-10-07 09:45:59,341][INFO ][plugins ] [Scarlet Beetle] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:59,363][INFO ][env ] [Scarlet Beetle] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:59,363][INFO ][env ] [Scarlet Beetle] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:59,390][WARN ][threadpool ] [Scarlet Beetle] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:46:00,795][INFO ][node ] [Scarlet Beetle] initialized
[2016-10-07 09:46:00,795][INFO ][node ] [Scarlet Beetle] starting ...
[2016-10-07 09:46:00,927][INFO ][transport ] [Scarlet Beetle] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:46:00,931][INFO ][discovery ] [Scarlet Beetle] ccs-elasticsearch/SFWrVwKRSUu--4KiZK4Kfg
[2016-10-07 09:46:03,965][WARN ][discovery.zen ] [Scarlet Beetle] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:06,990][WARN ][discovery.zen ] [Scarlet Beetle] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
As you can see from the logs, nodes 2 and 3 are aware of the master, node 1,
but are unable to connect to it. I have tried most of the `network.host`
configurations you can see commented out in the configuration script above,
and none of them work. Any leads will be appreciated.
This is the state of ports:
netstat -nlp | grep 9200
tcp 0 0 10.240.135.140:9200 0.0.0.0:* LISTEN 188116/docker-proxy
tcp 0 0 10.240.137.112:9200 0.0.0.0:* LISTEN 187240/haproxy
netstat -nlp | grep 9300
tcp 0 0 :::9300 :::* LISTEN 188085/docker-proxy
Answer: I was able to form the cluster with the following settings:
`network.publish_host=CONTAINER_HOST_ADDRESS`, i.e. the address of the node
where the container is running.
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS
`transport.publish_host` is important when you are running ES behind a proxy
or load balancer such as nginx or haproxy.
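A minimal sketch of how those settings look in `elasticsearch.yml` form (the same keys the bootstrap script above echoes; `CONTAINER_HOST_ADDRESS` stands for the node's host IP):

    network.bind_host: 0.0.0.0
    network.publish_host: CONTAINER_HOST_ADDRESS
    transport.publish_host: CONTAINER_HOST_ADDRESS
    transport.publish_port: 9300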
|
Calculate moving average in numpy array with NaNs
Question: I am trying to calculate the moving average in a large numpy array that
contains NaNs. Currently I am using:
import numpy as np
def moving_average(a,n=5):
ret = np.cumsum(a,dtype=float)
ret[n:] = ret[n:]-ret[:-n]
return ret[n - 1:] / n
When calculating with a masked array:
x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx).filled(np.nan)
print y
>>> array([3.8,3.8,3.6,nan,nan,nan,2,2.4,nan,nan,nan,2.8,2.6])
The result I am looking for (below) should ideally have NaNs only in the place
where the original array, x, had NaNs and the averaging should be done over
the number of non-NaN elements in the grouping (I need some way to change the
size of n in the function.)
y = array([4.75,4.75,nan,4.4,3.75,2.33,3.33,4,nan,nan,3,3.5,nan,3.25,4,4.5,3])
I could loop over the entire array and check index by index but the array I am
using is very large and that would take a long time. Is there a numpythonic
way to do this?
Answer: I'll just add to the great answers before that you could still use cumsum to
achieve this:
import numpy as np
def moving_average(a, n=5):
    # windowed sums; masked entries contribute 0
    ret = np.cumsum(a.filled(0))
    ret[n:] = ret[n:] - ret[:-n]
    # windowed counts of valid (non-masked) entries
    counts = np.cumsum(~a.mask)
    counts[n:] = counts[n:] - counts[:-n]
    # average each valid position over its count of valid values
    ret[~a.mask] /= counts[~a.mask]
    ret[a.mask] = np.nan
    return ret
x = np.array([1.,3,np.nan,7,8,1,2,4,np.nan,np.nan,4,4,np.nan,1,3,6,3])
mx = np.ma.masked_array(x,np.isnan(x))
y = moving_average(mx)
|
Spotfire: Date filtering with action control
Question: I am working on a spotfire app and I am trying to create an action control
that filters dates. I am new to ironpython and can't figure out what is wrong
with my script:
from Spotfire.Dxp.Application.Visuals import *
import datetime as dt
visual = viz.As[VisualContent]()
visual.Data.WhereClauseExpression = '[Agreement End Date] < dt.date.today()'
When the above script is run I get "The expression is not valid after '(' on
line 1 character 34." Here Agreement End Date is the column I am trying to
filter on. I have looked around and haven't been able to find an answer (I
realize this is probably a very simple task for someone experienced in such
things).
Any help is greatly appreciated!
Answer: I figured out what was going on here, you need to use spotfire functions
inside of the WhereClauseExpression string. The following code fixes the
issue:
from Spotfire.Dxp.Application.Visuals import *
visual = viz.As[VisualContent]()
visual.Data.WhereClauseExpression = '[Agreement End Date] < DateTimeNow()'
|
Python Threading: Making the thread function return from an external signal
Question: Could anyone please point out what's wrong with this code. I am trying to
return the thread through a variable flag, which I want to control in my main
thread.
# test27.py
import threading
import time
lock = threading.Lock()
def Read(x,y):
flag = 1
while True:
lock.acquire()
try:
z = x+y; w = x-y
print z*w
time.sleep(1)
if flag == 0:
print "ABORTING"
return
finally:
print " SINGLE run of thread executed"
lock.release()
# test28.py
import time, threading
from test27 import Read
print "Hello Welcome"
a = 2; b = 5
t = threading.Thread(target = Read, name = 'Example Thread', args = (a,b))
t.start()
time.sleep(5)
t.flag = 0 # This is not updating the flag variable in Read FUNCTION
t.join() # Because of the above command I am unable to wait until the thread finishes. It is blocking.
print "PROGRAM ENDED"
Answer: Regular variables should not be shared between threads; this invites
race conditions. You must use thread-safe constructs to communicate between
threads. For a simple flag use `threading.Event`. Also, you cannot access the
local variable `flag` via the thread object: it is local, and only visible
inside the function's scope. You must either use a global variable, as in my
example below, or create an object before starting your thread and use a
member variable.
from threading import Event
flag = Event()
def Read(x,y):
global flag
flag.clear()
...
if flag.is_set():
return
main thread:
sleep(5)
flag.set()
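Putting it together, here is a minimal runnable sketch of the two files using an
`Event` (names mirror the question; importing the shared `stop_flag` from
test27 is just one way of sharing it):

# test27.py
import threading
import time

lock = threading.Lock()
stop_flag = threading.Event()  # shared, thread-safe flag

def Read(x, y):
    while not stop_flag.is_set():
        with lock:
            z = x + y; w = x - y
            print(z * w)
            print(" SINGLE run of thread executed")
        time.sleep(1)
    print("ABORTING")

# test28.py
import time, threading
from test27 import Read, stop_flag

t = threading.Thread(target=Read, name='Example Thread', args=(2, 5))
t.start()
time.sleep(5)
stop_flag.set()  # signal the thread to stop
t.join()         # returns once Read() notices the flag
print("PROGRAM ENDED")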
P.S.: I just noticed that you attempted to use the lock in the thread, but
failed to use it in the main thread. For a simple flag go with Event. For a
lock you need to lock both sides and mitigate the risk of a deadlock.
|
autoit.pixel_search returning color is not found
Question: I'm trying to grab the coordinates for a specific pixel value on the screen,
but I can't seem to get any results. The error I get is
"autoit.autoit.AutoItError: color is not found".
To verify my code I have the mouse move to the pixel that has the colour I
want. This is not necessary, it was just part of a test. I have two monitors
and my fear was that the pixel search couldn't distinguish what monitor I
wanted. So to test autoit knew where to look I did a basic "move mouse". Sure
enough it moved to my image on monitor one, so I know it has the right
monitor.
Second, I tested whether "autoit.pixel_get_color" could grab the value I wanted,
and it does (65281). I thought I might have to use the decimal value instead of the HEX
provided by the Windows Info application.
I tested with the code below, this is the code using SciTE - light (.au3 file)
and it works fine.
$coord = PixelSearch(0, 0, 1434, 899, 0x00FF02)
If Not @error Then
MsgBox(0, "X and Y are:", $coord[0] & "," & $coord[1])
EndIf
I tested grabbing the pixel with pyautogui and ultimately I can do it, but it
is not as "clean" as autoit, so I'm trying to avoid it if possible. Autoit has
that nice Window info screen that shows me the color, so it is really easy to
just plug numbers into my script.
Here is the code I have written currently in Python.
import autoit
import pyautogui
pyautogui.confirm('Press OK to start running script')
autoit.mouse_move(374,608,10) # move mouse to where the color I want is located.
pixelcolor = autoit.pixel_get_color(374,608) #get color of pixel
pixelsearch = autoit.pixel_search(0,0,1434,899,0x00FF02) # search entire screen for color
pixelsearch = autoit.pixel_search(0,0,1434,899,65281) # Tried using the value from the get_color, still same error.
Any Ideas?
Answer: So I figured out how to resolve my problem. I don't know why it works or what
caused the problem, but for now here is the solution
The correct formula for PixelSearch is PixelSearch(left, top, right, bottom).
After playing around with the numbers it appears pyautoit is using (right,
top, left, bottom). If I plug in my numbers with that formula it works
perfectly, EXCEPT on my third monitor.
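For reference, with the coordinates from the question, the working call would
look like this (a sketch based on the observed behaviour, not on documented
API argument order):

# pyautoit appeared to expect (right, top, left, bottom) here
pixelsearch = autoit.pixel_search(1434, 0, 0, 899, 0x00FF02)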
My third monitor seems to work with (left, top, right, bottom). I am wondering
if it has something to do with negative numbers (-1680, 0, -3, 1050), not 100%
sure.
I tested this on my work computer (two monitors), home computer, (three
monitors), and my laptop. In all scenarios the (right, top, left, bottom)
worked, except home computer on the third monitor.
Hope this helps someone else out in the future.
|
Python Load csv file to Oracle table
Question: I'm a Python beginner. I'm trying to insert records into an Oracle table from a
csv file. csv file format : Artist_name, Artist_type, Country . I'm getting
below error:
Error: File "artist_dim.py", line 42, in <module>
cur.execute(sqlquery)
cx_Oracle.DatabaseError: ORA-00917: missing comma
import cx_Oracle as cx
import csv
import sys
##### Step 1 : Connect to Oracle Database#########
conn_str=u'hr/hr@localhost:1521/PDBORCL'
conn= cx.connect(conn_str)
cur=conn.cursor()
#######################################
#### Step 2: FETCH LATEST ROW ID FROM ARTIST_DIM###
query="SELECT nvl(max(row_id)+1,1) from artist_dim"
cur.execute(query)
rownum=cur.fetchone()
x=rownum[0]
with open('D:\python\Artist.csv') as f:
reader=csv.DictReader(f,delimiter=',')
for row in reader:
sqlquery="INSERT INTO ARTIST_DIM VALUES (%d,%s,%s,%s)" %(x,row['Artist_name'],row['Artist_type'],row['Country'])
cur.execute(sqlquery)
x=x+1
conn.commit()
When I try to read the file it is working correctly.
##### Just to Read CSV File############################
with open('D:\python\Artist.csv') as f:
reader = csv.DictReader(f, delimiter=',')
for row in reader:
a="row_id %d Artist : %s type : %s Country : %s " %(x,row['Artist_name'],row['Artist_type'],row['Country'])
print(a)
x=x+1
print(row['Artist_name'],",",row['Artist_type'],",",row['Country'])
Also, when I try to insert using hard coded values it is working
sqlquery1="INSERT INTO ARTIST_DIM VALUES (%d,'Bob','Bob','Bob')" %x
cur.execute(sqlquery1)
Answer: Put quotes around the values:
sqlquery="INSERT INTO ARTIST_DIM VALUES (%d,'%s','%s','%s')" %(x,row['Artist_name'],row['Artist_type'],row['Country'])
Without the quotes it translates to:
sqlquery="INSERT INTO ARTIST_DIM VALUES (1, Bob, Bob, Bob)"
|
Python generating a lookup table of lambda expressions
Question: I'm building a game and in order to make it work, I need to generate a list of
"pre-built" or "ready to call" expressions. I'm trying to do this with lambda
expressions, but am running into an issue generating the lookup table. The
code I have is similar to the following:
import inspect
def test(*args):
string = "Test Function: "
for i in args:
string += str(i) + " "
print(string)
funct_list = []
# The problem is in this for loop
for i in range(20):
funct_list.append(lambda: test(i, "Hello World"))
for i in funct_list:
print(inspect.getsource(i))
The output I get is:
funct_list.append(lambda: test(i, "Hello World"))
funct_list.append(lambda: test(i, "Hello World"))
funct_list.append(lambda: test(i, "Hello World"))
funct_list.append(lambda: test(i, "Hello World"))
...
and I need it to go:
funct_list.append(lambda: test(1, "Hello World"))
funct_list.append(lambda: test(2, "Hello World"))
funct_list.append(lambda: test(3, "Hello World"))
funct_list.append(lambda: test(4, "Hello World"))
...
I tried both of the following and neither work
for i in range(20):
funct_list.append(lambda: test(i, "Hello World"))
for i in range(20):
x = (i, "Hello World")
funct_list.append(lambda: test(*x))
My question is how do you generate lists of lambda expressions with some of
the variables inside the lambda expression already set.
Answer: As others have mentioned, Python's closures are _late binding_ , which means
that variables from an outside scope referenced in a closure (in other words,
the variables _closed over_) are looked up at the moment the closure is called
and _not_ at the time of definition.
In your example, the closure in question is formed when your lambda references
the variable `i` from the outside scope. However, when your lambda is called
later on, the loop has already finished and left the variable `i` with the
value 19.
An easy but not particularly elegant fix is to use a default argument for the
lambda:
for i in range(20):
funct_list.append(lambda x=i: test(x, "Hello World"))
Unlike closure variables, default arguments are bound early and therefore
achieve the desired effect of capturing the value of the variable `i` at the
time of lambda definition.
A better way is to use `functools.partial`, which allows you to partially apply
some arguments of the function, "fixing" them to a certain value:
from functools import partial
for i in range(20):
funct_list.append(partial(lambda x: test(x, "Hello World"), i))
|
Python can't find setuptools
Question: I got the following ImportError as i tried to setup.py install a package:
Traceback (most recent call last):
File "setup.py", line 4, in <module>
from setuptools import setup, Extension
ImportError: No module named setuptools
This happens although setuptools is already installed:
amir@amir-debian:~$ sudo apt-get install python-setuptools
[sudo] password for amir:
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-setuptools is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Why can't python find the setuptools module?
Answer: It's possible you have multiple python versions installed on your system. For
example if you installed your python from source, and then again with apt-get.
Apt-get will install to the default python version. Make sure you are being
consistent.
Potentially using pip install setuptools could solve your problem.
Try these commands:
$which python
/usr/bin/python
$python --version
Python 2.7.12
Making sure that the output matches your expectations.
It may be worth removing previous installations and starting over as this
answer suggests:
[Python 3: ImportError "No Module named
Setuptools"](http://stackoverflow.com/questions/14426491/python-3-importerror-
no-module-named-setuptools)
|
Python3 requests library to submit form that disallows post request
Question: I am trying to get the police district from a given location at the [Philly
Police webpage](https://www.phillypolice.com/districts/). I have too many
locations to do this by hand, so I am trying to automate the process using Python's
requests library. The webpage's form that holds the location value is as
follows:
<form id="search-form" method="post" action="districts/searchAddress">
<fieldset>
<div class="clearfix">
<label for="search-address-box"><span>Enter Your Street Address</span></label>
<div class="input">
<input tabindex="1" class="district-street-address-input" id="search-address-box" name="name" type="text" value="">
</div>
</div>
<div class="actions" style="float: left;">
<button tabindex="3" type="submit" class="btn btn-success">Search</button>
</div>
<a id="use-location" href="https://www.phillypolice.com/districts/index.html?_ID=7&_ClassName=DistrictsHomePage#" style="float: left; margin: 7px 0 0 12px;"><i class="icon-location-arrow"></i>Use Current Location</a>
<div id="current-location-display" style="display: none;"><p>Where I am right now.</p></div>
</fieldset>
</form>
However when I try to post or put to the webpage using the following:
r = requests.post('http://www.phillypolice.com/districts',data={'search-address-box':'425 E. Roosevelt Blvd'})
I get error 405, POST is not allowed. I then turned off Javascript and tried
to find the district on the webpage, and when I hit submit I received the same
405 error message. Therefore the form is definitely not submitted and the
district is found using JavaScript.
Is there a way to simulate 'clicking' the submit button to trigger the
JavaScript using the requests library?
Answer: The data is retrieved after first querying google maps for the coordinates;
the final request is a GET like the following:
[![enter image description
here](http://i.stack.imgur.com/gBwqd.png)](http://i.stack.imgur.com/gBwqd.png)
You can setup a free account with the [bing maps
api](https://www.bingmapsportal.com/) and get the coords you need to make the
get request:
import requests
key = "my_key"
coord_params = {"output": "json",
"key": key}
# This provides the coordinates.
coords_url = "https://dev.virtualearth.net/REST/v1/Locations"
# Template to pass each address to in your actual loop.
template = "{add},US"
url = "https://api.phillypolice.com/jsonservice/Map/searchAddress.json"
with requests.Session() as s:
    # Add the query param, passing in the address
    coord_params["query"] = template.format(add="425 E. Roosevelt Blvd")
    js = s.get(coords_url, params=coord_params).json()
    # Parse latitude and longitude from the returned json.
    # Call str to make it into `(lat, lon)`
    latitude_longitude = str(js[u'resourceSets'][0][u'resources'][0]["point"][u'coordinates'])
    data = s.get(url, params={"latlng": latitude_longitude})
    print(data.json())
If we run it minus my key:
In [2]: import requests
...:
...: key = "my_key..."
...:
...: coord_params = {"output": "json",
...: "key": key}
...: coords_url = "https://dev.virtualearth.net/REST/v1/Locations"
...: template = "{add},US"
...: url = "https://api.phillypolice.com/jsonservice/Map/searchAddress.json"
...: with requests.Session() as s:
...: coord_params["query"] = template.format(add="425 E. Roosevelt Blvd")
...:
...: js = s.get(coords_url, params=coord_params).json()
...: latitude_longitude = str(js[u'resourceSets'][0][u'resources'][0]["po
...: int"][u'coordinates'])
...: print(latitude_longitude)
...: data = s.get(url, params={"latlng": latitude_longitude})
...: print(data.json())
...:
[40.02735900878906, -75.1153564453125]
{'response': ['35', '2', 'Marques Newsome', 'PPD.35_PSA2@phila.gov ', '267-357-1436']}
You can see it matches the response you see if you look at the request in your
browser.
|
pexpect not executing command by steps
Question: I have this Python 3 code which uses Pexpect.
import pexpect
import getpass
import sys
def ssh(username,password,host,port,command,writeline):
child = pexpect.spawn("ssh -p {} {}@{} '{}'".format(port,username,host,command))
child.expect("password: ")
child.sendline(password)
if(writeline):
print(child.read())
def scp(username,password,host,port,file,dest):
child = pexpect.spawn("scp -P {} {} {}@{}:{}".format(port,file,username,host,dest))
child.expect("password: ")
child.sendline(password)
try:
filename = sys.argv[1]
print("=== sendhw remote commander ===")
username = input("Username: ")
password = getpass.getpass("Password: ")
ssh(username,password,"some.host.net","22","mkdir ~/srakrnSRV",False)
scp(username,password,"some.host.net","22",filename,"~/srakrnSRV")
ssh(username,password,"some.host.net","22","cd srakrnSRV && sendhw {}".format(filename),True)
except IndexError:
print("No homework name specified.")
My aim is to:
* SSH into the host with the `ssh` function, create the directory `srakrnSRV`, then
* upload a file into the `srakrnSRV` directory, which is previously created
* `cd` into `srakrnSRV`, and execute the `sendhw <filename>` command. The `filename` variable is defined by command line parameteres, and print the result out.
After running the entire code, Python prints out
b'\r\nbash: line 0: cd: srakrnSRV: No such file or directory\r\n'
which is not expected, as the directory should be previously created.
Also, I tried manually creating the `srakrnSRV` folder in my remote host.
After running the command again, it appears that the `scp` function is also not
running. The only running pexpect command was the last `ssh` function.
How to make it execute in order? Thanks in advance!
Answer: You may lack permission to execute commands through ssh. There is also
a possibility that your program sends the scp before the prompt occurs, or
moves on before the previous command has finished.
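One way to rule out the second issue (a sketch): make each helper block until
the remote command finishes before returning, e.g. by reading or expecting
end-of-file on the spawned child:

def ssh(username, password, host, port, command, writeline):
    child = pexpect.spawn("ssh -p {} {}@{} '{}'".format(port, username, host, command))
    child.expect("password: ")
    child.sendline(password)
    output = child.read()  # read() blocks until the command finishes (EOF)
    if writeline:
        print(output)

def scp(username, password, host, port, file, dest):
    child = pexpect.spawn("scp -P {} {} {}@{}:{}".format(port, file, username, host, dest))
    child.expect("password: ")
    child.sendline(password)
    child.expect(pexpect.EOF)  # wait for the transfer to complete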
|
Python: How to convert google location timestaMps in a year-month-day-hour-minute-seconds format?
Question: I am playing around with my google location data (which one can download here
<https://takeout.google.com/settings/takeout>).
The location data is a json file, of which one variable is 'timestaMps' (e.g.
one observation is "1475146082971"). How do I convert this into a datetime?
Thanks!
Answer: Use the
[fromtimestamp](https://docs.python.org/2/library/datetime.html#datetime.date.fromtimestamp)
method from the datetime module. To convert your 'timestaMps' value you need to
pass it through int(); note that Google's location timestamps are in
milliseconds, so divide by 1000 to get seconds. Formatting of the result is done with
[strftime()](https://docs.python.org/2/library/time.html#time.strftime).

from datetime import datetime
datetime.fromtimestamp(int("1475146082971") / 1000).strftime('%Y-%m-%d %H:%M:%S')
|
JSON sub for loop produces KeyError, but key exists
Question: I'm trying to add the JSON output below into a dictionary, to be saved into a
SQL database.
{'Parkirisca': [
{
'ID_Parkirisca': 2,
'zasedenost': {
'Cas': '2016-10-08 13:17:00',
'Cas_timestamp': 1475925420,
'ID_ParkiriscaNC': 9,
'P_kratkotrajniki': 350
}
}
]}
I am currently using the following code to add the value to a dictionary:
import scraperwiki
import json
import requests
import datetime
import time
from pprint import pprint
html = requests.get("http://opendata.si/promet/parkirisca/lpt/")
data = json.loads(html.text)
for carpark in data['Parkirisca']:
zas = carpark['zasedenost']
free_spaces = zas.get('P_kratkotrajniki')
last_updated = zas.get('Cas_timestamp')
parking_type = carpark.get('ID_Parkirisca')
if parking_type == "Avtomatizirano":
is_automatic = "Yes"
else:
is_automatic = "No"
scraped = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
savetodb = {
'scraped': scraped,
'id': carpark.get("ID_Parkirisca"),
'total_spaces': carpark.get("St_mest"),
'free_spaces': free_spaces,
'last_updated': last_updated,
'is_automatic': is_automatic,
'lon': carpark.get("KoordinataX_wgs"),
'lat': carpark.get("KoordinataY_wgs")
}
unique_keys = ['id']
pprint savetodb
However when I run this, it gets stuck at `for zas in carpark["zasedenost"]`
and outputs the following error:
Traceback (most recent call last):
File "./code/scraper", line 17, in <module>
for zas in carpark["zasedenost"]:
KeyError: 'zasedenost'
I've been led to believe that `zas` is in fact now a string, rather than a
dictionary, but I'm new to Python and JSON, so don't know what to search for
to get a solution. I've also searched here on Stack Overflow for `KeyError
when key exists` questions, but they didn't help, and I believe that this might
be due to the fact that it's a nested for loop.
Update: Now, when I swapped the double quotes for single quotes, I get the
following error:
Traceback (most recent call last):
File "./code/scraper", line 17, in <module>
free_spaces = zas.get('P_kratkotrajniki')
AttributeError: 'unicode' object has no attribute 'get'
Answer: I fixed up your code:
1. Added required imports.
2. Fixed the `pprint savetodb` line which isn't valid Python.
3. Didn't try to iterate over `carpark['zasedenost']`.
I then added another `pprint` statement in the `for` loop to see what's in
`carpark` when the `KeyError` occurs. From there, the error is clear. (Not all
the elements in the array in your JSON contain the `'zasedenost'` key.)
Here's the code I used:
import datetime
import json
from pprint import pprint
import time
import requests
html = requests.get("http://opendata.si/promet/parkirisca/lpt/")
data = json.loads(html.text)
for carpark in data['Parkirisca']:
pprint(carpark)
zas = carpark['zasedenost']
free_spaces = zas.get('P_kratkotrajniki')
last_updated = zas.get('Cas_timestamp')
parking_type = carpark.get('ID_Parkirisca')
if parking_type == "Avtomatizirano":
is_automatic = "Yes"
else:
is_automatic = "No"
scraped = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
savetodb = {
'scraped': scraped,
'id': carpark.get("ID_Parkirisca"),
'total_spaces': carpark.get("St_mest"),
'free_spaces': free_spaces,
'last_updated': last_updated,
'is_automatic': is_automatic,
'lon': carpark.get("KoordinataX_wgs"),
'lat': carpark.get("KoordinataY_wgs")
}
unique_keys = ['id']
pprint(savetodb)
And here's the output on the iteration where the `KeyError` occurs:
{u'A_St_Mest': None,
u'Cena_dan_Eur': None,
u'Cena_mesecna_Eur': None,
u'Cena_splosno': None,
u'Cena_ura_Eur': None,
u'ID_Parkirisca': 7,
u'ID_ParkiriscaNC': 72,
u'Ime': u'P+R Studenec',
u'Invalidi_St_mest': 9,
u'KoordinataX': 466947,
u'KoordinataX_wgs': 14.567929171694901,
u'KoordinataY': 101247,
u'KoordinataY_wgs': 46.05457609543313,
u'Opis': u'2,40 \u20ac /dan',
u'St_mest': 187,
u'Tip_parkirisca': None,
u'U_delovnik': u'24 ur (ponedeljek - petek)',
u'U_sobota': None,
u'U_splosno': None,
u'Upravljalec': u'JP LPT d.o.o.'}
Traceback (most recent call last):
File "test.py", line 14, in <module>
zas = carpark['zasedenost']
KeyError: 'zasedenost'
As you can see, the error is quite accurate. There's no key `'zasedenost'` in
the dictionary. If you look through your JSON, you'll see that's true for a
number of the elements in that array.
I'd suggest a fix, but I don't know what you want to do in the case where this
dictionary key is absent. Perhaps you want something like this:
zas = carpark.get('zasedenost')
if zas is not None:
free_spaces = zas.get('P_kratkotrajniki')
last_updated = zas.get('Cas_timestamp')
else:
free_spaces = None
last_updated = None
|
How modules know each other
Question: I can plot data from a CSV file with the following code:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('test0.csv',delimiter='; ', engine='python')
df.plot(x='Column1', y='Column3')
plt.show()
But I don't understand one thing. How does `plt.show()` know about `df`? It would make
more sense to me to see, somewhere, an expression like:
plt = something(df)
I have to mention I'm just learning Python.
Answer: Matplotlib has two "interfaces": a [Matlab-style
interface](http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut1.html) and
an [object-oriented
interface](http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut2.html).
Plotting with the Matlab-style interface looks like this:
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.show()
The call to `plt.plot` implicitly creates a figure and an axes on which to
draw. The call to `plt.show` displays all figures.
Pandas supports the Matlab-style interface by implicitly creating a
figure and axes for you when `df.plot(x='Column1', y='Column3')` is called.
Pandas can also use the more flexible object-oriented interface, in which case
your code would look like this:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('test0.csv',delimiter='; ', engine='python')
fig, ax = plt.subplots()
df.plot(ax=ax, x='Column1', y='Column3')
plt.show()
Here the axes, `ax`, is explicitly created and passed to `df.plot`, which then
calls `ax.plot` under the hood.
One case where the object-oriented interface is useful is when you wish to use
`df.plot` more than once while still drawing on the same axes:
fig, ax = plt.subplots()
df.plot(ax=ax, x='Column1', y='Column3')
df2.plot(ax=ax, x='Column2', y='Column4')
plt.show()
|
Mine Tweets between two dates in Python
Question: I would like to mine tweets for two keywords for a specific period of time. I
currently have the code below, but how do I change it so it only mines tweets between
two dates? (10/03/2016 - 10/07/2016) Thank you!
#Import the necessary methods from tweepy library
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
#Variables that contains the user credentials to access Twitter API
access_token = "ENTER YOUR ACCESS TOKEN"
access_token_secret = "ENTER YOUR ACCESS TOKEN SECRET"
consumer_key = "ENTER YOUR API KEY"
consumer_secret = "ENTER YOUR API SECRET"
#This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):
def on_data(self, data):
print data
return True
def on_error(self, status):
print status
if __name__ == '__main__':
#This handles Twitter authetification and the connection to Twitter Streaming API
l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)
#This line filter Twitter Streams to capture data by the keywords: 'python', 'javascript', 'ruby'
stream.filter(track=['python', 'javascript', 'ruby'])
Answer: You can't. Have a look at [this
question](http://stackoverflow.com/questions/26205102/making-very-specific-
time-requests-to-the-second-on-twitter-api-using-python), which is the closest
you can get.
The Twitter API does not allow searching by time. Trivially, what you can do
is fetch tweets and look at their timestamps afterwards in Python, but that
is highly inefficient.
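If you do go the fetch-and-filter route, the check itself is simple (a sketch;
tweets carry a `created_at` field in the format shown in the comment):

import json
from datetime import datetime

def in_range(raw_data, start, end):
    # Twitter timestamps look like "Mon Oct 03 12:34:56 +0000 2016"
    created = datetime.strptime(json.loads(raw_data)['created_at'],
                                '%a %b %d %H:%M:%S +0000 %Y')
    return start <= created <= end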
|
python multiprocessing, cpu-s and cpu cores
Question: I was trying out `python3` `multiprocessing` on a machine that has 8 cpu-s and
each cpu has four cores (information is from `/proc/cpuinfo`). I wrote a
little script with a useless function and I use `time` to see how long it
takes for it to finish.
from multiprocessing import Pool,cpu_count
def f(x):
for i in range(100000000):
x*x
return x*x
with Pool(8) as p:
a = p.map(f,[x for x in range(8)])
#~ f(5)
Calling `f()` without multiprocessing takes about 7s (`time`'s "real" output).
Calling `f()` 8 times with a pool of 8 as seen above, takes around 7s again.
If I call it 8 times with a pool of 4 I get around 13.5s, so there's some
overhead in starting the script, but it runs twice as long. So far so good.
Now here comes the part that I do not understand. If there are 8 cpu-s each
with 4 cores, if I call it 32 times with a pool of 32, as far as I see it
should run for around 7s again, but it takes 32s which is actually slightly
longer than running `f()` 32 times on a pool of 8.
So my question is: is `multiprocessing` not able to make use of the cores, or do I not
understand something about cores, or is it something else?
Answer: Simplified and short: CPUs and cores are the hardware your computer has.
On this hardware sits an operating system, the middleman between the hardware
and the programs running on the computer. Running programs are allotted CPU
time by the OS; the Python interpreter running your .py script is just one of
those programs. So of the CPU time on your computer, some is allotted to
python3.*, which spends it on the work your program does. The resulting speed
depends on what hardware you have, what operation you are running, and how
CPU time is divided between all these instances.
**How is CPU time allotted?** Roughly like a loop: the OS hands out time
slices to programs incrementally, and each of your worker processes gets its
share of those slices. This is also why the entire computer slows down when a
program misbehaves.
**More processes does not equal** more access to hardware. It only means the
available CPU time is split between more processes doing work for your
application.
**More processes does equal** more workhorses.
* * *
You see this in practice in your code: you increase the number of workhorses
to the point where the available CPU time is divided up between so many
processes that all of them slow down.
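As a quick sanity check (a sketch), compare your pool size against the number
of logical CPUs the OS actually exposes; a pool larger than that just
time-slices the same hardware:

from multiprocessing import Pool, cpu_count

print(cpu_count())  # logical CPUs visible to the OS

with Pool(cpu_count()) as p:  # size the pool to the hardware
    print(p.map(abs, range(-4, 4)))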
|
Seaborn boxplot: TypeError: unsupported operand type(s) for /: 'str' and 'int'
Question: I am trying to make a vertical seaborn boxplot like this
import pandas as pd
df = pd.DataFrame({'a' : ['a', 'b' , 'b', 'a'], 'b' : [5, 6, 4, 3] })
import seaborn as sns
import matplotlib.pylab as plt
%matplotlib inline
sns.boxplot( x= "b",y="a",data=df )
I get
[![enter image description
here](http://i.stack.imgur.com/7mzYW.png)](http://i.stack.imgur.com/7mzYW.png)
I write orient
sns.boxplot( x= "c",y="a",data=df , orient = "v")
and get
TypeError: unsupported operand type(s) for /: 'str' and 'int'
but
sns.boxplot( x= "c",y="a",data=df , orient = "h")
works correctly! What's wrong?
TypeError Traceback (most recent call last)
<ipython-input-16-5291a1613328> in <module>()
----> 1 sns.boxplot( x= "b",y="a",data=df , orient = "v")
C:\Program Files\Anaconda3\lib\site-packages\seaborn\categorical.py in boxplot(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, fliersize, linewidth, whis, notch, ax, **kwargs)
2179 kwargs.update(dict(whis=whis, notch=notch))
2180
-> 2181 plotter.plot(ax, kwargs)
2182 return ax
2183
C:\Program Files\Anaconda3\lib\site-packages\seaborn\categorical.py in plot(self, ax, boxplot_kws)
526 def plot(self, ax, boxplot_kws):
527 """Make the plot."""
--> 528 self.draw_boxplot(ax, boxplot_kws)
529 self.annotate_axes(ax)
530 if self.orient == "h":
C:\Program Files\Anaconda3\lib\site-packages\seaborn\categorical.py in draw_boxplot(self, ax, kws)
463 positions=[i],
464 widths=self.width,
--> 465 **kws)
466 color = self.colors[i]
467 self.restyle_boxplot(artist_dict, color, props)
C:\Program Files\Anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
1816 warnings.warn(msg % (label_namer, func.__name__),
1817 RuntimeWarning, stacklevel=2)
-> 1818 return func(ax, *args, **kwargs)
1819 pre_doc = inner.__doc__
1820 if pre_doc is None:
C:\Program Files\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in boxplot(self, x, notch, sym, vert, whis, positions, widths, patch_artist, bootstrap, usermedians, conf_intervals, meanline, showmeans, showcaps, showbox, showfliers, boxprops, labels, flierprops, medianprops, meanprops, capprops, whiskerprops, manage_xticks, autorange)
3172 bootstrap = rcParams['boxplot.bootstrap']
3173 bxpstats = cbook.boxplot_stats(x, whis=whis, bootstrap=bootstrap,
-> 3174 labels=labels, autorange=autorange)
3175 if notch is None:
3176 notch = rcParams['boxplot.notch']
C:\Program Files\Anaconda3\lib\site-packages\matplotlib\cbook.py in boxplot_stats(X, whis, bootstrap, labels, autorange)
2036
2037 # arithmetic mean
-> 2038 stats['mean'] = np.mean(x)
2039
2040 # medians and quartiles
C:\Program Files\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in mean(a, axis, dtype, out, keepdims)
2883
2884 return _methods._mean(a, axis=axis, dtype=dtype,
-> 2885 out=out, keepdims=keepdims)
2886
2887
C:\Program Files\Anaconda3\lib\site-packages\numpy\core\_methods.py in _mean(a, axis, dtype, out, keepdims)
70 ret = ret.dtype.type(ret / rcount)
71 else:
---> 72 ret = ret / rcount
73
74 return ret
TypeError: unsupported operand type(s) for /: 'str' and 'int'
Answer: For seaborn's boxplots it is important to keep an eye on the x-axis and y-axis
assignments, when switching between horizontal and vertical alignment:
%matplotlib inline
import pandas as pd
import seaborn as sns
df = pd.DataFrame({'a' : ['a', 'b' , 'b', 'a'], 'b' : [5, 6, 4, 3] })
# horizontal boxplots
sns.boxplot(x="b", y="a", data=df, orient='h')
# vertical boxplots
sns.boxplot(x="a", y="b", data=df, orient='v')
Mixing up the columns will cause seaborn to try to calculate the summary
statistics of the boxes on categorical data, which is bound to fail.
|
Pandas plot without specifying index
Question: Given the data:
Column1; Column2; Column3
1; 4; 6
2; 2; 6
3; 3; 8
4; 1; 1
5; 4; 2
I can plot it via:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('test0.csv',delimiter='; ', engine='python')
titles = list(df)
for title in titles:
if title == titles[0]:
continue
df.plot(titles[0],title, linestyle='--', marker='o')
plt.savefig(title+'.png')
But if, instead, data was missing `Column1` like:
Column2; Column3
4; 6
2; 6
3; 8
1; 1
4; 2
How do I plot it?
May be, something like `df.plot(title, linestyle='--', marker='o')`?
Answer: I am not sure what you are trying to achieve, but you could reset the
index and set it as you would like:
In[11]: df
Out[11]:
Column1 Column2 Column3
0 1 4 6
1 2 2 6
2 3 3 8
3 4 1 1
4 5 4 2
So if you want to plot Column2 as the X axis and Column3 as the Y axis, you
could do something like:
df.set_index('Column2')['Column3'].plot()
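Alternatively, keeping the style options from the question, `df.plot` accepts
the column names directly:

df.plot(x='Column2', y='Column3', linestyle='--', marker='o')
plt.show()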
|
My PyQt app runs fine inside Idle but throws an error when trying to run from cmd
Question: So I'm learning PyQt development and I typed this into a new file inside IDLE:
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
def window():
app = QApplication(sys.argv)
win = QDialog()
b1 = QPushButton(win)
b1.setText("Button1")
b1.move(50,20)
b1.clicked.connect(b1_clicked)
b2=QPushButton(win)
b2.setText("Button2")
b2.move(50,50)
QObject.connect(b2,SIGNAL("clicked()"),b2_clicked)
win.setGeometry(100,100,200,100)
win.setWindowTitle("PyQt")
win.show()
sys.exit(app.exec_())
def b1_clicked():
print("Button 1 clicked")
def b2_clicked():
print("Button 2 clicked")
if __name__ == '__main__':
window()
The app does what it is supposed to, which is to open a dialog box with two
buttons on it, when run inside IDLE. When I try to run the same program from
cmd I get this message:
Traceback (most recent call last): File "C:\Python34\Basic2buttonapp.py", line
2, in from PyQt4.QtCore import * ImportError: No module named 'PyQt4'
I've already tried typing python.exe inside cmd to see if I'm running the
correct version of python from within the cmd, but this does not seem to be
the problem. I know it has to do with the communication between python 3.4 and
the module, but it seems weird to me that it only happens when trying to run
it from cmd.
If anyone has the solution I'll be very thankful.
Answer: This is because when running from the command line you're using a different
version of Python to the one in IDLE (with different installed packages). You
can find which Python is being used by running the following from the command
line:
python -c "import sys;print(sys.executable)"
...or within IDLE:
import sys
print(sys.executable)
If those two don't match, there is your problem. To fix it, you need to update
your `PATH` variable to put the _parent folder_ of the Python executable
referred to by IDLE at the front. You can get instructions of how to do this
on Windows [here](http://stackoverflow.com/questions/9546324/adding-directory-
to-path-environment-variable-in-windows).
|
How to generate random number with a large number of decimals in Python?
Question: How's it going?
I need to generate a random number with a large number of decimals to use in
advanced calculations.
I've tried to use this code:
round(random.uniform(min_time, max_time), 1)
But it doesn't work for more than 15 decimals.
If I use, e.g.:
round(random.uniform(0, 0.5), 100)
It returns 0.586422176354875, but I need code that returns a number with 100
decimals.
**Can you help me?**
Thanks!
Answer: ## 100 decimals
The first problem is how to create a number with 100 decimals at all.
This won't do:
>>> 1.23456789012345678901234567890
1.2345678901234567
Those are floating point numbers which have limitations far from 100 decimals.
Luckily, in Python, there is the [`decimal` built-in
module](https://docs.python.org/2/library/decimal.html) which can help:
>>> from decimal import Decimal
>>> Decimal('1.2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901')
Decimal('1.2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901')
Decimal can have any precision you need and it won't introduce [floating point
errors](http://stackoverflow.com/q/19473770/389289), but it will be much
slower.
## random
Now you just have to create a string with 100 decmals and give it to
`Decimal`.
This will create one random digit:
random.choice('0123456789')
This will create 100 random digits and concatenate them:
''.join(random.choice('0123456789') for i in range(100))
Now just create a `Decimal`:
Decimal('0.' + ''.join(random.choice('0123456789') for i in range(100)))
This creates a number between 0 and 1. Multiply or divide it to get a
different range.
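One caveat for the "advanced calculation" part: the `decimal` context defaults
to 28 significant digits, so raise the precision before doing arithmetic on
these values, e.g.:

from decimal import getcontext
getcontext().prec = 110  # headroom beyond the 100 generated digits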
|
python class import issue
Question: I am new to Python and doing some programming for school. I have written code
for a roster system and I am supposed to use dictionaries. I keep getting the
error "No module named 'players_Class'". Can someone tell me what I am doing
wrong?
class Players:
def __init__(self, name, number, jersey):
self.__name = name
self.__number = number
self.__jersey = jersey
def setname(self, name):
self.__name = name
def setnumber(self, number):
self.__number = number
def setjersey(self, jersey):
self.__jersey = jersey
def getname(self):
return self.__name
def getnumber(self):
return self.__number
def getjersey(self):
return self.__jersey
def displayData(self):
print("")
print("Player Information ")
print("------------------------")
print("Name:", self.__name)
print("Phone Number:", self.__number)
print("Jersey Number:", self.__jersey)
import players_Class
def displayMenu():
print("1. Display Players")
print("2. Add Player")
print("3. Remove Player")
print("4. Edit Player")
print("9. Exit Program")
print("")
return int(input("Selection> "))
def printPlayers(players):
if len(players) == 0:
print("Player not in List.")
else:
for x in players.keys():
players[x].displayData(self)
def addplayers(players):
newName = input("Enter new Players Name: ")
newNumber = int(input("Players Phone Number: "))
newJersey = input("Players Jersey Number: ")
players[newName] = (newName, newNumber, newJersey)
return players
def removeplayers(players):
removeName = input("Enter Player Name to be removed: ")
if removeName in players:
del players[removeName]
else:
print("Player not found in list.")
return players
def editplayers(players):
oldName = input("Enter the Name of the Player you want to edit: ")
if oldName in players:
newName = input("Enter the player new name: ")
newNumber = int(input("Players New Number: "))
newJersey = input("Players New Jersey Number: ")
players[oldName] = petClass.Pet(newName, newNumber, newJersey)
else:
print("Player Not Found")
return players
print("Welcome to the Team Manager")
players = {}
menuSelection = displayMenu()
while menuSelection != 9:
if menuSelection == 1:
printPlayers(players)
elif menuSelection == 2:
players = addplayers(players)
elif menuSelection == 3:
players = removeplayers(players)
elif menuSelection == 4:
players = editplayers(players)
menuSelection = displayMenu()
print ("Exiting Program...")
Answer: You have an unused
import players_Class
statement in your code; just remove it!
|
Python: Ending line every N characters when writing to text file
Question: I am reading the webpage at "<https://google.com>" and writing as a string to
a notepad file. In the notepad file, I want to break and make a newline every
N characters while writing, so that I don't have to scroll horizontally in
notepad. I have looked up a number of solutions but none of them do this so
far. Thanks for any suggestions.
import urllib.request
page = urllib.request.urlopen("http://www.google.com")
webfile = page.readlines()
with open("file01.txt", 'w') as f:
for line in webfile:
f.write(str(line))
f.close()
Answer: Better yet, use the
[textwrap](https://docs.python.org/2.7/library/textwrap.html) library. Then
you can use
textwrap.fill(str(line))
and get breaks on whitespace and other useful additions.
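A minimal sketch applying this to the code from the question (assuming the
page is UTF-8 and replacing undecodable bytes):

import textwrap
import urllib.request

page = urllib.request.urlopen("http://www.google.com")
with open("file01.txt", 'w') as f:
    for line in page.readlines():
        text = line.decode('utf-8', errors='replace')
        # fill() wraps at 70 characters by default; pass width=N to change it
        f.write(textwrap.fill(text, width=80) + '\n')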
|
How do I create an animated gif in Python using Wand?
Question: The instructions are simple enough in the [Wand docs](http://docs.wand-
py.org/en/0.4.1/guide/sequence.html) for _reading_ a sequenced image (e.g.
animated gif, icon file, etc.):
>>> from wand.image import Image
>>> with Image(filename='sequence-animation.gif') as image:
... len(image.sequence)
...but I'm not sure how to _create_ one.
In Ruby this is easy using **RMagick** , since you have `ImageList`s. (see [my
gist](https://gist.github.com/dguzzo/99bbca3df827df475f768383a1b04102) for an
example.)
I tried creating an `Image` (as the "container") and instantiating each
`SingleImage` with an image path, but I'm pretty sure that's wrong, especially
since the constructor documentation for `SingleImage` doesn't look for use by
the end-user.
I also tried creating a `wand.sequence.Sequence` and going from that angle,
but hit a dead-end as well. I feel very lost.
Answer: The best examples are located in the unit-tests shipped with the code.
[`wand/tests/sequence_test.py`](https://github.com/dahlia/wand/blob/master/tests/sequence_test.py)
for example.
For creating an animated gif with wand, remember to load the image into the
sequence, and then set the additional delay/optimize handling after all frames
are loaded.
from wand.image import Image
with Image() as wand:
# Add new frames into sequance
with Image(filename='1.png') as one:
wand.sequence.append(one)
with Image(filename='2.png') as two:
wand.sequence.append(two)
with Image(filename='3.png') as three:
wand.sequence.append(three)
# Create progressive delay for each frame
for cursor in range(3):
with wand.sequence[cursor] as frame:
frame.delay = 10 * (cursor + 1)
# Set layer type
wand.type = 'optimize'
wand.save(filename='animated.gif')
[![output
animated.gif](https://i.stack.imgur.com/do522.gif)](https://i.stack.imgur.com/do522.gif)
|
Python Regex for ignoring a sentence with two consecutive upper case letters
Question: I have a simple problem at hand: ignore the sentences that contain two or
more consecutive capital letters, among several other grammar rules.
**Issue:** By the definition the regex should not match the string `'This is
something with two CAPS.'` , but it does match.
**Code:**
''' Check if a given sentence conforms to given grammar rules
$ Rules
* Sentence must start with a Uppercase character (e.g. Noun/ I/ We/ He etc.)
* Then lowercase character follows.
* There must be spaces between words.
* Then the sentence must end with a full stop(.) after a word.
* Two continuous spaces are not allowed.
* Two continuous upper case characters are not allowed.
* However the sentence can end after an upper case character.
'''
import re
# Returns true if sentence follows these rules else returns false
def check_sentence(sentence):
checker = re.compile(r"^((^(?![A-Z][A-Z]+))([A-Z][a-z]+)(\s\w+)+\.$)")
return checker.match(sentence)
print(check_sentence('This is something with two CAPS.'))
**Output:**
<_sre.SRE_Match object; span=(0, 32), match='This is something with two CAPS.'>
Answer: It’s probably easier to write your regex in the negative (find all sentences
that are bad sentences) than it is in the positive.
checker = re.compile(r'([A-Z][A-Z]|[ ][ ]|^[a-z])')
check2 = re.compile(r'^[A-Z][a-z].* .*\.$')
return not checker.findall(sentence) and check2.findall(sentence)
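Wrapped into the function from the question, a quick check:

import re

def check_sentence(sentence):
    # patterns that make a sentence bad: double capitals, double spaces, lowercase start
    checker = re.compile(r'([A-Z][A-Z]|[ ][ ]|^[a-z])')
    # overall shape: uppercase then lowercase start, at least one space, ends with '.'
    check2 = re.compile(r'^[A-Z][a-z].* .*\.$')
    return not checker.findall(sentence) and bool(check2.findall(sentence))

print(check_sentence('This is something with two CAPS.'))  # False
print(check_sentence('This is something without caps.'))   # True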
|
How to do custom python imports?
Question: Is there a way to have custom behaviour for import statements in Python? How?
E.g.:
import "https://github.com/kennethreitz/requests"
import requests@2.11.1
import requests@7322a09379565bbeba9bb40000b41eab8856352e
Alternatively, in case this isn't possible... Can this be achieved in a
standard way with function calls? How? E.g.:
import gitloader
repo = gitloader.repo("https://github.com/kennethreitz/requests")
requests = repo.from_commit("7322a09379565bbeba9bb40000b41eab8856352e")
There are two reasons for why I would like to do this. The first reason is
convenience (Golang style imports). The second reason is that I need
cryptographic verification of plugin modules for a project. I'm using Python
3.x
Answer: What you're essentially asking is customization of the import steps. This was
made possible with [PEP 302](https://www.python.org/dev/peps/pep-0302/) which
brought about certain hooks for for customization.
That PEP is currently not the source from which you should learn how importing
works; rather, look at the [documentation for the `import`
statement](https://docs.python.org/3/reference/import.html) for reference.
There [it states](https://docs.python.org/3/reference/import.html#finders-and-
loaders):
> Python includes a number of default finders and importers. The first one
> knows how to locate built-in modules, and the second knows how to locate
> frozen modules. A third default finder searches an import path for modules.
> The import path is a list of locations that may name file system paths or
> zip files. _It can also be extended to search for any locatable resource,
> such as those identified by URLs._
>
> _The import machinery is extensible, so new finders can be added to extend
> the range and scope of module searching_.
In short, you'll need to define the appropriate finder (to find the modules
you're looking for) and an appropriate loader for loading these.
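A minimal sketch of such a finder/loader pair registered on `sys.meta_path`
(the module name `myplugin` and the inline source are placeholders; real
fetching and cryptographic verification would go in `exec_module`):

import sys
import importlib.abc
import importlib.util

class VerifyingFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path, target=None):
        if fullname != 'myplugin':  # hypothetical module name
            return None
        return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return None  # use the default module object

    def exec_module(self, module):
        source = 'x = 42'  # fetch and verify the real source here
        exec(source, module.__dict__)

sys.meta_path.insert(0, VerifyingFinder())

import myplugin
print(myplugin.x)  # 42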
|
Maximum recursion depth exceeded in python
Question: I am trying to make a power function by recursion, but I get a runtime error:
maximum recursion depth exceeded. I would appreciate any help! Here is my
code.
def fast_power(a,n):
if(n==0):
return 1
else:
if(n%2==0):
return fast_power(fast_power(a,n/2),2)
else:
return fast_power(fast_power(a,n/2),2)*a
Answer: You should use `n // 2` instead of `n / 2`:
>>> 5 // 2
2
>>> 5 / 2
2.5
(At least in python3)
The problem is that once you end up with floats it takes quite a while before
you end up at `0` by dividing by `2`:
>>> from itertools import count
>>> n = 5
>>> for i in count():
... n /= 2
... if n == 0:
... break
...
>>> i
1076
So as you can see you would need more than 1000 recursive calls to reach `0`
from `5`, and that's above the default recursion limit. Besides: that
algorithm is meant to be run with integer numbers.
* * *
This said I'd write that function as something like:
def fast_power(a, n):
if n == 0:
return 1
tmp = fast_power(a, n//2)
tmp *= tmp
return a*tmp if n%2 else tmp
Which produces:
>>> fast_power(2, 7)
128
>>> fast_power(3, 7)
2187
>>> fast_power(13, 793)
22755080661651301134628922146701289018723006552429644877562239367125245900453849234455323305726135714456994505688015462580473825073733493280791059868764599730367896428134533515091867511617127882942739592792838327544860344501784014930389049910558877662640122357152582905314163703803827192606896583114428235695115603966134132126414026659477774724471137498587452807465366378927445362356200526278861707511302663034996964296170951925219431414726359869227380059895627848341129113432175217372073248096983111394024987891966713095153672274972773169033889294808595643958156933979639791684384157282173718024930353085371267915606772545626201802945545406048262062221518066352534122215300640672237064641040065334712571485001684857748001990405649808379706945473443683240715198330842716984731885709953720968428395490414067791229792734370523603401019458798402338043728152982948501103056283713360751853
|
Removing punctuation/symbols from a list with Python except periods, commas
Question: In Python, I need to remove almost all punctuation from a list but save
periods and commas. Should I create a function to do this or a variable?
Basically I want to delete all symbols except letters (I've already converted
uppercase letters to lowercase) and periods and commas (and maybe
apostrophes).
#Clean tokens up (remove symbols except ',' and '.')
def depunctuate()
clean_tokens = []
for i in lc_tokens:
if (i not in [a-z.,])
...
Answer: You can build a set of unwanted punctuation from
[`string.punctuation`](https://docs.python.org/2/library/string.html#string.punctuation)
\- which provides a string containing punctuation, and then use a _list
comprehension_ to filter out the letters contained in the set:
import string
to_delete = set(string.punctuation) - {'.', ','}  # keep comma and full stop
clean_tokens = [x for x in lc_tokens if x not in to_delete]
|
Falcon parsing json error
Question: I'm trying out Falcon for a small API project. Unfortunately I'm stuck on the
JSON parsing: code from the documentation examples does not work.
I have tried many things I've found on Stack Overflow and Google, with no change.
I've tried the following, which results in the errors below
import json
import falcon
class JSON_Middleware(object):
def process_request(self, req, resp):
raw_json = json.loads(req.stream.read().decode('UTF-8'))
"""Exception: AttributeError: 'str' object has no attribute 'read'"""
raw_json = json.loads(req.stream.read(), 'UTF-8')
"""Exception: TypeError: the JSON object must be str, not 'bytes'"""
raw_json = json.loads(req.stream, 'UTF-8')
"""TypeError: the JSON object must be str, not 'Body'"""
I'm about to give up, but if somebody can tell me why this is happening
and how to parse JSON in Falcon I would be extremely thankful.
Thanks
Environment: OSX Sierra Python 3.5.2 Falcon and other is the latest version
from Pip
Answer: Your code should work if the other pieces of code are in place. A quick
test (filename app.py):
import falcon
import json
class JSON_Middleware(object):
    def process_request(self, req, resp):
        # decode the raw bytes for Python 3's json module
        raw_json = json.loads(req.stream.read().decode('utf-8'))
        print(raw_json)
class Test:
def on_post(self,req,resp):
pass
app = application = falcon.API(middleware=JSON_Middleware())
t = Test()
app.add_route('/test',t)
run with: `gunicorn app`
`$ curl -XPOST 'localhost:8000' -d '{"Hello":"wold"}'`
|
How to avoid re-importing modules and re-defining large object every time a script runs
Question: This must have an answer but I can't find it. I am using a quite large Python
module called quippy. With this module one can define an intermolecular
potential to use as a calculator in ASE like so:
from quippy import *
from ase import atoms
pot=Potential("Potential xml_label=gap_h2o_2b_ccsdt_3b_ccsdt",param_filename="gp.xml")
some_structure.set_calculator(pot)
This is the beginning of a script. The problem is that the `import` takes
about 3 seconds and `pot=Potential...` takes about 30 seconds with 100% cpu
load. (I believe it is due to parsing a large ascii xml-file.) If I were
typing interactively I could keep the module imported and the potential
defined, but when running a script it is redone on each run.
Can I save the module and the potential object in memory/disk between runs?
Maybe keep a python process idling and keeping those things in memory? Or run
these lines in the interpreter and somehow call the rest of the script from
there?
Any approach is fine, and any help would be appreciated!
Answer: You can either use raw files or modules such as `pickle` to store data easily.
import cPickle as pickle
from quippy import Potential

try:  # try a previously pickled value
    with open('/tmp/pot_store.pkl', 'rb') as store:
        pot = pickle.load(store)
except (IOError, OSError):  # fall back to calculating it from scratch
    pot = Potential("Potential xml_label=gap_h2o_2b_ccsdt_3b_ccsdt",
                    param_filename="gp.xml")
    with open('/tmp/pot_store.pkl', 'wb') as store:
        pickle.dump(pot, store)
There are various optimizations to this, e.g. checking whether your pickle
file is older than the file generating its value.
|
python XML get text inside <p>...</p> tag
Question: Hi guys, I have an XML structure which looks somewhat like this.
<abstract>
<p id = "p-0001" num = "0000">
blah blah blah
</p>
</abstract>
I would like to extract the `<p>` tag inside the `<abstract>` tag only.
I tried:
import xml.etree.ElementTree as ET
xroot = ET.parse('100/A/US07640598-20100105.XML').getroot()
for row in xroot.iter('p'):
print row.text
This gets all the `<p>` tags in my XML, which is not what I want.
Is there any way I can extract only the text inside the abstract?
My desired output would be "blah blah blah".
Answer: You can use an _XPath expression_ to search for `p` elements specifically
inside the `abstract`; with `xml.etree.ElementTree` this is done via `findall()`,
which supports a limited XPath subset:

for p in xroot.findall('.//abstract//p'):
    print(p.text.strip())
Or, if using `iter()` you may have a nested loop:
for abstract in xroot.iter('abstract'):
for p in abstract.iter('p'):
print(p.text.strip())
|
SQLAlchemy not finding Postgres table connected with postgres_fdw
Question: Please excuse any terminology typos, don't have a lot of experience with
databases other than SQLite. I'm trying to replicate what I would do in SQLite
where I could ATTACH a database to a second database and query across all the
tables. I wasn't using SQLAlchemy with SQLite
I'm working with SQLAlchemy 1.0.13, Postgres 9.5 and Python 3.5.2 (using
Anaconda) on Win7/54. I have connected two databases (on localhost) using
postgres_fdw and imported a few of the tables from the secondary database. I
can successfully manually query the connected table with SQL in PgAdminIII and
from Python using psycopg2. With SQLAlchemy I've tried:
# Same connection string info that psycopg2 used
engine = create_engine(conn_str, echo=True)
class TestTable(Base):
__table__ = Table('test_table', Base.metadata,
autoload=True, autoload_with=engine)
# Added this when I got the error the first time
# test_id is a primary key in the secondary table
Column('test_id', Integer, primary_key=True)
and get the error:
sqlalchemy.exc.ArgumentError: Mapper Mapper|TestTable|test_table could not
assemble any primary key columns for mapped table 'test_table'
Then I tried:
insp = reflection.Inspector.from_engine(engine)
print(insp.get_table_names())
and the attached tables aren't listed (the tables from the primary database do
show up). Is there a way to do what I am trying to accomplish?
Answer: In order to map a table [SQLAlchemy needs there to be at least one column
denoted as a primary key
column](http://docs.sqlalchemy.org/en/latest/faq/ormconfiguration.html#how-do-
i-map-a-table-that-has-no-primary-key). This does not mean that the column
need actually be a primary key column in the eyes of the database, though it
is a good idea. Depending on how you've imported the table from your foreign
schema it may not have a representation of a primary key constraint, or any
other constraints for that matter. You can work around this by either
[overriding the reflected primary key
column](http://docs.sqlalchemy.org/en/latest/core/reflection.html#overriding-
reflected-columns) in the **`Table` instance** (not in the mapped classes
body), or better yet tell the mapper what columns comprise the candidate key:
engine = create_engine(conn_str, echo=True)
test_table = Table('test_table', Base.metadata,
autoload=True, autoload_with=engine)
class TestTable(Base):
__table__ = test_table
__mapper_args__ = {
'primary_key': (test_table.c.test_id, ) # candidate key columns
}
To inspect foreign table names use the
[`PGInspector.get_foreign_table_names()`](http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.base.PGInspector.get_foreign_table_names)
method:
print(insp.get_foreign_table_names())
|
NLTK AssertionError when taking sentences from PlaintextCorpusReader
Question: I'm using a PlaintextCorpusReader to work with some files from Project
Gutenberg. It seems to handle word tokenization without issue, but chokes when
I request sentences or paragraphs.
I start by downloading [a Gutenberg book (in UTF-8
plaintext)](http://www.gutenberg.org/cache/epub/345/pg345.txt) to the current
directory. Then:
>>> from nltk.corpus import PlaintextCorpusReader
>>> r = PlaintextCorpusReader('.','Dracula.txt')
>>> r.words()
['DRACULA', 'CHAPTER', 'I', 'JONATHAN', 'HARKER', "'", ...]
>>> r.sents()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/nltk/util.py", line 765, in __repr__
for elt in self:
File "/usr/local/lib/python3.5/dist-packages/nltk/corpus/reader/util.py", line 296, in iterate_from
new_filepos = self._stream.tell()
File "/usr/local/lib/python3.5/dist-packages/nltk/data.py", line 1333, in tell
assert check1.startswith(check2) or check2.startswith(check1)
AssertionError
I've tried modifying the book in various ways: stripping off the header,
removing newlines, adding a period to the end to finish the last "sentence".
The error remains. Am I doing something wrong? Or am I running up against some
limitation in NLTK?
(Running Python 3.5.0, NLTK 3.2.1, on Ubuntu. Problem appears in other Python
3.x versions as well.)
EDIT: Introspection shows the following locals at the point of exception.
>>> pprint.pprint(inspect.trace()[-1][0].f_locals)
{'buf_size': 63,
'bytes_read': 75,
'check1': "\n\n\n CHAPTER I\n\nJONATHAN HARKER'S JOURNAL\n\n(_Kept i",
'check2': '\n'
'\n'
' CHAPTER I\n'
'\n'
"JONATHAN HARKER'S JOURNAL\n"
'\n'
'(_Kept in shorthand._)',
'est_bytes': 9,
'filepos': 11,
'orig_filepos': 75,
'self': <nltk.data.SeekableUnicodeStreamReader object at 0x7fd2694b90f0>}
In other words, check1 is losing an initial newline somehow.
Answer: That particular file has a UTF-8 Byte Order Mark (EF BB BF) at the start,
which is confusing NLTK. Removing those bytes manually, or copy-pasting the
entire text into a new file, fixes the problem.
I'm not sure why NLTK can't handle BOMs, but at least there's a solution.
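If you'd rather fix the file programmatically, a minimal sketch (assuming the file
really is UTF-8 with a BOM) is to re-read it with the `utf-8-sig` codec, which
strips the BOM, and write it back out:

    with open('Dracula.txt', 'r', encoding='utf-8-sig') as src:
        text = src.read()  # 'utf-8-sig' silently drops a leading BOM
    with open('Dracula.txt', 'w', encoding='utf-8') as dst:
        dst.write(text)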
|
How to display image from current working directory in Python
Question: I would like to display an image using multiple labels in a GUI (Qt Designer).
The image file should be grabbed from the current working directory and displayed on its
own label when the user presses a Push Button.
The image can be displayed in label_2 when I hardcode the image directory path,
but not in label_1.
def capture_image(self):
cam = cv2.VideoCapture(0)
print('Execute_captureImage')
i = 1
while i <= int(self.qty_selected):
# while i < 2:
ret, frame = cam.read()
cv2.imshow('Please review image before capture', frame)
if not ret:
break
k = cv2.waitKey(1)
if k % 256 == 27:
# ESC pressed
print("Escape hit, closing...")
break
if k % 256 == 32:
# SPACE pressed
self.img_name = self.lotId + '_{}.jpeg'.format(i)
path = 'c:\\Users\\Desktop\\Python\\Testing' + '\\' + self.lotId + '\\' + self.img_name
print('CurrentImage = ' + path)
if not os.path.exists(path):
print('Not yet exist')
cv2.imwrite(
os.path.join('c:\\Users\\Desktop\\Python\\Testing' + '\\' + self.lotId,
self.img_name),
frame)
print("{}".format(self.img_name))
# i += 1
break
else:
print('Image already exist')
i += 1
cam.release()
cv2.destroyAllWindows()
def display_image(self):
label_vid01 = 'c:\\Users\\Desktop\\Python\\Testing' + '\\' + self.lotId + '\\' + self.img_name
label_vid02 = 'c:\\Users\\Desktop\\Python\\Testing' + '\\' + self.lotId + '\\' + self.img_name
# label_vid03 = 'c:/Users/Desktop/Python/Image/image3.jpg'
self.label_vid01.setScaledContents(True)
self.label_vid02.setScaledContents(True)
self.label_vid03.setScaledContents(True)
self.label_vid01.setPixmap(QtGui.QPixmap(label_vid01))
self.label_vid02.setPixmap(QtGui.QPixmap(label_vid02))
print(repr(label_vid01))
print(os.path.exists(label_vid01))
May I know where the mistake is?
`self.lotId` is text based on user input.
[label snapshot](http://i.stack.imgur.com/qXEtw.jpg)
Answer: The demo script below works for me on Windows XP. If this also works for you,
the problem must be in the `capture_image` function in your example (which I
cannot test at the moment).
import sys, os
from PyQt4 import QtCore, QtGui
class Window(QtGui.QWidget):
def __init__(self):
super(Window, self).__init__()
layout = QtGui.QVBoxLayout(self)
self.viewer = QtGui.QListWidget(self)
self.viewer.setViewMode(QtGui.QListView.IconMode)
self.viewer.setIconSize(QtCore.QSize(256, 256))
self.viewer.setResizeMode(QtGui.QListView.Adjust)
self.viewer.setSpacing(10)
self.button = QtGui.QPushButton('Test', self)
self.button.clicked.connect(self.handleButton)
self.edit = QtGui.QLineEdit(self)
layout.addWidget(self.viewer)
layout.addWidget(self.edit)
layout.addWidget(self.button)
def handleButton(self):
self.viewer.clear()
name = self.edit.text()
for index in range(3):
pixmap = QtGui.QPixmap()
path = r'E:\Python\Testing\%s\%s_%d.jpg' % (name, name, index + 1)
print('load (%s) %r' % (pixmap.load(path), path))
item = QtGui.QListWidgetItem(os.path.basename(path))
item.setIcon(QtGui.QIcon(path))
self.viewer.addItem(item)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = Window()
window.setGeometry(800, 150, 650, 500)
window.show()
sys.exit(app.exec_())
|
Specific background color for Tk in Python
Question: How do I set a specific color such as #B0BF1A instead of black, white, or grey?
window.configure(background='white')
browse_label = gui.Label(window, text="Image path :", bg="white").place(x=20, y=20)
Answer: I'm not sure whether this is compatible in python 2.7, but try this: [Default
window colour Tkinter and hex colour
codes](http://stackoverflow.com/questions/11340765/default-window-colour-
tkinter-and-hex-colour-codes)
The code of the accepted answer is as follows (NOT MINE):
import Tkinter
mycolor = '#%02x%02x%02x' % (64, 204, 208) # set your favourite rgb color
mycolor2 = '#40E0D0' # or use hex if you prefer
root = Tkinter.Tk()
root.configure(bg=mycolor)
Tkinter.Button(root, text="Press me!", bg=mycolor, fg='black',
activebackground='black', activeforeground=mycolor2).pack()
root.mainloop()
|
Python google query with requests module, get response in HTML format
Question: I want to execute a Google query with the requests module in Python. Here is my
script:
import requests
searchfor = 'test'
payload = {'q': searchfor, 'key': API_KEY, 'cx': SEARCH_ENGINE_ID}
link = 'https://www.googleapis.com/customsearch/v1'
r = requests.get(link, params=payload)
print r.content
and then run the script as: ./googler.py > out
and the out is in json format.
How can I get the response in HTML format?
Answer: The google api only supports [atom/Json](https://developers.google.com/custom-
search/json-api/v1/overview).
So you have to build the HTML from the JSON yourself. You may want to check the [json
package](https://docs.python.org/2/library/json.html).
Append something like this to your file:
import json
items = r.json()['items']  # json.loads(r) fails; r is a Response object
print "<html><body>"
for item in items:
    print '<a href="' + item['link'] + '">' + item['title'] + '</a>'  # the API stores the result URL under 'link'
print "</body></html>"
|
Plotting decision tree, graphviz, pydotplus
Question: I'm following the decision tree tutorial in the [scikit-learn](http://scikit-
learn.org/stable/modules/tree.html) documentation. I have `pydotplus 2.0.2`,
but it tells me that there is no `write` method - error below. I've
been struggling with it for a while now; any ideas, please? Many thanks!
from sklearn import tree
from sklearn.datasets import load_iris
iris = load_iris()
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
from IPython.display import Image
dot_data = tree.export_graphviz(clf, out_file=None)
import pydotplus
graph = pydotplus.graphviz.graph_from_dot_data(dot_data)
Image(graph.create_png())
and my error is
/Users/air/anaconda/bin/python /Users/air/PycharmProjects/kiwi/hemr.py
Traceback (most recent call last):
File "/Users/air/PycharmProjects/kiwi/hemr.py", line 10, in <module>
dot_data = tree.export_graphviz(clf, out_file=None)
File "/Users/air/anaconda/lib/python2.7/site-packages/sklearn/tree/export.py", line 375, in export_graphviz
out_file.write('digraph Tree {\n')
AttributeError: 'NoneType' object has no attribute 'write'
Process finished with exit code 1
----- UPDATE -----
Using the fix with `out_file`, it throws another error:
Traceback (most recent call last):
File "/Users/air/PycharmProjects/kiwi/hemr.py", line 13, in <module>
graph = pydotplus.graphviz.graph_from_dot_data(dot_data)
File "/Users/air/anaconda/lib/python2.7/site-packages/pydotplus/graphviz.py", line 302, in graph_from_dot_data
return parser.parse_dot_data(data)
File "/Users/air/anaconda/lib/python2.7/site-packages/pydotplus/parser.py", line 548, in parse_dot_data
if data.startswith(codecs.BOM_UTF8):
AttributeError: 'NoneType' object has no attribute 'startswith'
Answer: The problem is the `out_file` parameter. In your version of scikit-learn,
`export_graphviz` does not accept `out_file=None`: it tries to call `.write()` on
whatever it is given, hence the `'NoneType' object has no attribute 'write'` error.
Calling it with no arguments does not help either, since it then writes to the
default `tree.dot` file on disk and returns `None`, which is what produces the
second traceback in your update. Instead, pass a file-like buffer and hand its
contents to pydotplus (the `StringIO` approach used in the scikit-learn docs of
that era):

    from sklearn.externals.six import StringIO

    dot_data = StringIO()
    tree.export_graphviz(clf, out_file=dot_data)
    graph = pydotplus.graphviz.graph_from_dot_data(dot_data.getvalue())
|
Searching big files using a list in Python - How can I improve the speed?
Question: I have a folder with 300+ .txt files with total size of 15GB+. These files
contain tweets. Each line is a different tweet. I have a list of keywords I'd
like to search the tweets for. I have created a script that searches each line
of every file for every item on my list. If the tweet contains the keyword,
then it writes the line into another file. This is my code:
# Search each file for every item in keywords
print("Searching the files of " + filename + " for the appropriate keywords...")
for file in os.listdir(file_path):
f = open(file_path + file, 'r')
for line in f:
for key in keywords:
if re.search(key, line, re.IGNORECASE):
db.write(line)
This is the format each line has:
{"created_at":"Wed Feb 03 06:53:42 +0000 2016","id":694775753754316801,"id_str":"694775753754316801","text":"me with Dibyabhumi Multiple College students https:\/\/t.co\/MqmDwbCDAF","source":"\u003ca href=\"http:\/\/www.facebook.com\/twitter\" rel=\"nofollow\"\u003eFacebook\u003c\/a\u003e","truncated":false,"in_reply_to_status_id":null,"in_reply_to_status_id_str":null,"in_reply_to_user_id":null,"in_reply_to_user_id_str":null,"in_reply_to_screen_name":null,"user":{"id":5981342,"id_str":"5981342","name":"Lava Kafle","screen_name":"lkafle","location":"Kathmandu, Nepal","url":"http:\/\/about.me\/lavakafle","description":"@deerwalkinc 24000+ tweeps bigdata #Team #Genomics http:\/\/deerwalk.com #Genetic #Testing #population #health #management #BigData #Analytics #java #hadoop","protected":false,"verified":false,"followers_count":24742,"friends_count":23169,"listed_count":1481,"favourites_count":147252,"statuses_count":171880,"created_at":"Sat May 12 04:49:14 +0000 2007","utc_offset":20700,"time_zone":"Kathmandu","geo_enabled":true,"lang":"en","contributors_enabled":false,"is_translator":false,"profile_background_color":"EDECE9","profile_background_image_url":"http:\/\/abs.twimg.com\/images\/themes\/theme3\/bg.gif","profile_background_image_url_https":"https:\/\/abs.twimg.com\/images\/themes\/theme3\/bg.gif","profile_background_tile":false,"profile_link_color":"088253","profile_sidebar_border_color":"FFFFFF","profile_sidebar_fill_color":"E3E2DE","profile_text_color":"634047","profile_use_background_image":true,"profile_image_url":"http:\/\/pbs.twimg.com\/profile_images\/677805092859420672\/kzoS-GZ__normal.jpg","profile_image_url_https":"https:\/\/pbs.twimg.com\/profile_images\/677805092859420672\/kzoS-GZ__normal.jpg","profile_banner_url":"https:\/\/pbs.twimg.com\/profile_banners\/5981342\/1416802075","default_profile":false,"default_profile_image":false,"following":null,"follow_request_sent":null,"notifications":null},"geo":null,"coordinates":null,"place":null,"contributors":null,"is_quote_status":false,"retweet_count":0,"favorite_count":0,"entities":{"hashtags":[],"urls":[{"url":"https:\/\/t.co\/MqmDwbCDAF","expanded_url":"http:\/\/fb.me\/Yj1JW9bJ","display_url":"fb.me\/Yj1JW9bJ","indices":[45,68]}],"user_mentions":[],"symbols":[]},"favorited":false,"retweeted":false,"possibly_sensitive":false,"filter_level":"low","lang":"en","timestamp_ms":"1454482422661"}
The script works but **it takes a lot of time**. For ~40 keywords it needs
more than 2 hours. Obviously my code is not optimized. What can I do to
improve the speed?
p.s. I have read some relevant questions regarding searching and speed but I
suspect that the problem in my script lies in the fact that I'm using a list
for the keywords. I've tried some of the suggested solutions but to no avail.
Answer: ## 1) External library
If you're willing to lean on external libraries (and time to execute is more
important than the one-off time cost to install), you might be able to gain
some speed by loading each file into a simple Pandas Series and performing
the keyword search as a vectorized operation. To get the matching tweets, you
would do something like:

    import pandas as pd

    # Each tweet is one JSON line, so read whole lines rather than letting
    # read_csv split them on commas.
    with open("/path/to/file.txt") as fh:
        lines_from_text = pd.Series(fh.readlines())
    matched_tweets_index = lines_from_text.str.contains("keyword_a|keyword_b", case=False)
    matched_tweets = lines_from_text[matched_tweets_index]  # boolean mask keeps only matching tweets
    # You'd then have a mini Series of matching tweets in `matched_tweets`,
    # which you could loop through to write out to a file.

Vectorized string operations within Pandas can be really quick, so this might be
worth investigating.
## 2) Group your regex
Looks like you're not logging which keyword you matched against. If this is
true, you could group your keywords into a single regex query like so:
    # Build and compile the combined pattern once, outside the loop
    keywords_combined = re.compile("|".join(re.escape(key) for key in keywords), re.IGNORECASE)
    for line in f:
        if keywords_combined.search(line):
            db.write(line)
I've not tested this but by reducing the number of loops per line, that could
trim some time off.
|
Migrating from AMPL to Pyomo
Question: I am trying to use the open source Pyomo library instead of AMPL, so I am
migrating the AMPL car problem that comes in the Ipopt source code tarball as an
example, but I am having problems with the end condition (reach a place
with zero speed at the final iteration) and with the cost function (minimize final
time).
The code below states the dae system:
# min tf
# dx/dt = 0
# dv/dt = a - R*v^2
# x(0) = 0; x(tf) = 100
# v(0) = 0; v(tf) = 0
# -3 <= a <= 1 (a is the control variable)
#!Python3.5
from pyomo.environ import *
from pyomo.dae import *
N = 20;
T = 10;
L = 100;
m = ConcreteModel()
# Parameters
m.R = Param(initialize=0.001)
# Variables
def x_init(m, i):
return i*L/N
m.t = ContinuousSet(bounds=(0,1000))
m.x = Var(m.t, bounds=(0,None), initialize=x_init)
m.v = Var(m.t, bounds=(0,None), initialize=L/T)
m.a = Var(m.t, bounds=(-3.0,1.0), initialize=0)
# Derivatives
m.dxdt = DerivativeVar(m.x, wrt=m.t)
m.dvdt = DerivativeVar(m.v, wrt=m.t)
# Objectives
m.obj = Objective(expr=m.t[N])
# DAE
def _ode1(m, i):
if i==0:
return Constraint.Skip
return m.dxdt[i] == m.v[i]
m.ode1 = Constraint(m.t, rule=_ode1)
def _ode2(m, i):
if i==0:
return Constraint.Skip
return m.dvdt[i] == m.a[i] - m.R*m.v[i]**2
m.ode2 = Constraint(m.t, rule=_ode2)
# Constraints
def _init(m):
yield m.x[0] == 0
yield m.v[0] == 0
yield ConstraintList.End
m.init = ConstraintList(rule=_init)
'''
def _end(m, i):
if i==N:
        return m.x[i] == L and m.v[i] == 0
return Constraint.Skip
m.end = ConstraintList(rule=_end)
'''
# Discretize
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=N, wrt=m.t, scheme='BACKWARD')
# Solve
solver = SolverFactory('ipopt', executable='C:\\EXTERNOS\\COIN-OR\\win32-msvc12\\bin\\ipopt')
results = solver.solve(m, tee=True)
Answer: Currently, a ContinuousSet in Pyomo has to be bounded. This means that in
order to solve a minimum time optimal control problem using this tool, the
problem must be reformulated to remove the time scaling from the
ContinuousSet. In addition, you have to introduce an extra variable to
represent the final time. I've added an example to the [Pyomo github
repository](https://github.com/Pyomo/pyomo/blob/master/examples/dae/car_example.py)
showing how this can be done for your problem.
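As a rough illustration of that reformulation (a sketch only; the linked repository
example is the authoritative version): introduce a variable for the final time, let
the ContinuousSet run over a normalized time `tau` in [0, 1], and multiply the
right-hand sides of the ODEs by the new variable.

    m.tau = ContinuousSet(bounds=(0, 1))        # normalized time
    m.tf = Var(bounds=(0, None), initialize=T)  # final time becomes a decision variable
    m.x = Var(m.tau, bounds=(0, None))
    m.v = Var(m.tau, bounds=(0, None))
    m.a = Var(m.tau, bounds=(-3.0, 1.0), initialize=0)
    m.dxdt = DerivativeVar(m.x, wrt=m.tau)
    m.dvdt = DerivativeVar(m.v, wrt=m.tau)

    def _ode1(m, i):                            # dx/dtau = tf * v
        if i == 0:
            return Constraint.Skip
        return m.dxdt[i] == m.tf * m.v[i]
    m.ode1 = Constraint(m.tau, rule=_ode1)

    def _ode2(m, i):                            # dv/dtau = tf * (a - R*v^2)
        if i == 0:
            return Constraint.Skip
        return m.dvdt[i] == m.tf * (m.a[i] - m.R * m.v[i]**2)
    m.ode2 = Constraint(m.tau, rule=_ode2)

    m.obj = Objective(expr=m.tf)                # minimize the final time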
|
how to complex manage shell processes with asyncio?
Question: I want to track the reboot process of a daemon with Python's asyncio module. So I
need to run the shell command `tail -f -n 0 /var/log/daemon.log` and analyze its
output while, let's say, `service daemon restart` is executing in the background.
The daemon continues to write to the log after the service restart command has
finished, reporting its internal checks. The tracking process reads the check info
and reports whether the reboot was successful based on its internal logic.
import asyncio
from asyncio.subprocess import PIPE, STDOUT
async def track():
output = []
process = await asyncio.create_subprocess_shell(
'tail -f -n0 ~/daemon.log',
stdin=PIPE, stdout=PIPE, stderr=STDOUT
)
while True:
line = await process.stdout.readline()
if line.decode() == 'reboot starts\n':
output.append(line)
break
while True:
line = await process.stdout.readline()
if line.decode() == '1st check completed\n':
output.append(line)
break
return output
async def reboot():
lines = [
'...',
'...',
'reboot starts',
'...',
'1st check completed',
'...',
]
p = await asyncio.create_subprocess_shell(
(
'echo "rebooting"; '
'for line in {}; '
'do echo $line >> ~/daemon.log; sleep 1; '
'done; '
'echo "rebooted";'
).format(' '.join('"{}"'.format(l) for l in lines)),
stdin=PIPE, stdout=PIPE, stderr=STDOUT
)
return (await p.communicate())[0].splitlines()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(
asyncio.ensure_future(track()),
asyncio.ensure_future(reboot())
))
loop.close()
This code is the only way I've found to run two coroutines in parallel. But how do I
run `track()` strictly before `reboot()` so as not to miss any possible output in the
log? And how do I retrieve the return values of both coroutines?
Answer: > But how to run track() strictly before reboot to not miss any possible
> output in log?
You could `await` the first subprocess creation before running the second one.
> And how to retrieve return values of both coroutines?
[`asyncio.gather`](https://docs.python.org/3.5/library/asyncio-
task.html#asyncio.gather) returns the aggregated results.
Example:
async def main():
process_a = await asyncio.create_subprocess_shell([...])
process_b = await asyncio.create_subprocess_shell([...])
return await asyncio.gather(monitor_a(process_a), monitor_b(process_b))
loop = asyncio.get_event_loop()
result_a, result_b = loop.run_until_complete(main())
|
Python SQLITE3 Inserting Backwards
Question: I have a small piece of code which inserts some data into a database. However,
the data is being inserted in reverse order.
If I "commit" after the for loop has run through, it inserts backwards; if I
"commit" as part of the for loop, it inserts in the correct order, however it
is much slower.
How can I commit after the for loop but still retain the correct order?
import subprocess, sqlite3
output4 = subprocess.Popen(['laZagne.exe', 'all'], stdout=subprocess.PIPE).communicate()[0]
lines4 = output4.splitlines()
conn = sqlite3.connect('DBNAME')
cur = conn.cursor()
for j in lines4:
print j
cur.execute('insert into Passwords (PassString) VALUES (?)',(j,))
conn.commit()
conn.close()
Answer: You can't rely on _any_ ordering in SQL database tables. Insertion takes place
in an implementation-dependent manner, and where rows end up depends entirely
on the storage implementation used and the data that is already there.
As such, no reversing takes place; if you are selecting data from the table
again and these rows come back in a reverse order, then that's a coincidence
and not a choice the database made.
If rows must come back in a specific order, use `ORDER BY` when selecting. You
could order by `ROWID` for example, which _may_ be increasing monotonically
for new rows and thus give you an approximation for insertion order. See
[_ROWIDs and the INTEGER PRIMARY
KEY_](http://sqlite.org/lang_createtable.html#rowid).
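Applied to the table from the question, that would look something like this (a
sketch; as noted above, `ROWID` only approximates insertion order):

    cur.execute('SELECT PassString FROM Passwords ORDER BY ROWID')
    for (password,) in cur:
        print password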
|
Installing a package from a requirements txt fails
Question: I read the RNN tutorial at <https://github.com/dennybritz/rnn-tutorial-rnnlm>
and followed the installation steps to set up the environment. But I got an error
that I cannot make sense of. I set it up in a `virtualenv` on Ubuntu 14. I
have searched for similar problems and tried their solutions, but they did not work.
The methods I have tried:

1. update `gcc`
2. reinstall `python-dev`
3. install `libxxx` (sorry about the inaccurate name, but there are a mess of such files so I cannot remember)

Note: 1. I am not an expert in Ubuntu, so if you can help me I hope you can
provide a detailed explanation or solution. 2. I have tried
to reinstall Ubuntu and it does not work.
creating build/temp.linux-x86_64-2.7/Modules/2.x
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DHAVE_RL_CALLBACK -DHAVE_RL_CATCH_SIGNAL -DHAVE_RL_COMPLETION_APPEND_CHARACTER -DHAVE_RL_COMPLETION_DISPLAY_MATCHES_HOOK -DHAVE_RL_COMPLETION_MATCHES -DHAVE_RL_COMPLETION_SUPPRESS_APPEND -DHAVE_RL_PRE_INPUT_HOOK -I. -I/usr/include/python2.7 -c Modules/2.x/readline.c -o build/temp.linux-x86_64-2.7/Modules/2.x/readline.o
In file included from Modules/2.x/readline.c:31:0:
./readline/readline.h:385:1: warning: function declaration isn’t a prototype [-Wstrict-prototypes]
extern int rl_message ();
^
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/Modules/2.x/readline.o readline/libreadline.a readline/libhistory.a -lncurses -o build/lib.linux-x86_64-2.7/gnureadline.so
/usr/bin/ld: cannot find -lncurses
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /home/shuyu-lyu/venv/bin/python -c "import setuptools, tokenize;__file__='/home/shuyu-lyu/venv/build/gnureadline/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-xF2y3k-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/shuyu-lyu/venv/include/site/python2.7 failed with error code 1 in /home/shuyu-lyu/venv/build/gnureadline
Traceback (most recent call last):
File "/home/shuyu-lyu/venv/bin/pip", line 11, in <module>
sys.exit(main())
File "/home/shuyu-lyu/venv/local/lib/python2.7/site-packages/pip/__init__.py", line 185, in main
return command.main(cmd_args)
File "/home/shuyu-lyu/venv/local/lib/python2.7/site-packages/pip/basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 66: ordinal not in range(128)
Answer: It looks like the ncurses library is not installed. If you are on Ubuntu, you should
first install it with
sudo apt-get install libncurses5-dev libncursesw5-dev
|
Python, scipy : minimize multivariable function in integral expression
Question: How can I minimize a function (unconstrained) with respect to a[0] and a[1]? Example
(this is a simple example to help me understand scipy, numpy and Python):
import numpy as np
from scipy.integrate import *
from scipy.optimize import *
def function(a):
return(quad(lambda t: ((np.cos(a[0]))*(np.sin(a[1]))*t),0,3))
I tried:
l=np.array([0.1,0.2])
res=minimize(function,l, method='nelder-mead',options={'xtol': 1e-8, 'disp': True})
but I get errors. (I do get the results in MATLAB.)
Any idea?
Thanks in advance
Answer: This is just a guess, because you haven't included enough information in the
question for anyone to really know what the problem is. Whenever you ask a
question about code that generates an error, always include the complete error
message in the question. Ideally, you should include a [minimal, complete and
verifiable example](http://stackoverflow.com/help/mcve) that we can run to
reproduce the problem. Currently, you define `function`, but later you use the
undefined function `chirplet`. That makes it a little bit harder for anyone to
understand your problem.
Having said that...
[`scipy.integrate.quad`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html)
returns two values: the estimate of the integral, and an estimate of the
absolute error of the integral. It looks like you haven't taken this into
account in `function`. Try something like this:
def function(a):
intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
return intgrl
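With `function` returning just the integral estimate, the `minimize` call from the
question should then work as written:

    l = np.array([0.1, 0.2])
    res = minimize(function, l, method='nelder-mead',
                   options={'xtol': 1e-8, 'disp': True})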
|
Why is BeautifulSoup not extracting all of HTML from a webpage?
Question: I am trying to extract text from this website:
[searchgurbani](https://www.searchgurbani.com/guru_granth_sahib/ang_by_ang).
This website has some old scripture translated in English and Punjabi (an
Indian Language) line-by-line. It makes a very good parallel corpus. I have
successfully extracted all the English translations in a separate text file.
But when I go for Punjabi, It returns nothing.
This is the Inspect element screenshot: (Highlighted text is the translated
Punjabi language)
[Screenshot 1](http://i.stack.imgur.com/Jzai7.png)
In Screenshot 1, the highlighted text, which belongs to _class=lang_16_, is not
present in the soup object _beautiful_, which should contain all of the HTML.
Here is the Python code:
from urllib.request import urlopen  # imports needed for this snippet (Python 3)
from bs4 import BeautifulSoup

outputFilePunjabi = open("1.txt", "w", newline="", encoding="utf-16")
r = urlopen("https://www.searchgurbani.com/guru_granth_sahib/ang_by_ang")
beautiful = BeautifulSoup(r.read().decode('utf-8'), "html5lib")
#beautiful = BeautifulSoup(r.read().decode('utf-8'),"lxml")
punjabi_text = beautiful.find_all(class_="lang_16")
for i in punjabi_text:
outputFilePunjabi.write(i.get_text())
outputFilePunjabi.write('\n')
If I run the same code with _class_=lang_4_ it does the work.
Please do the following to see lang_16 in inspect element:
_Please do the following on that web page: Go to preferences --> Tick
"translation of Sri Guru Granth Sahib ji (by S. Manmohan Singh) - Punjabi"
under Additional Translations available on Guru Granth Shahib: --> scroll down
- submit changes -> reopen page_
Please guide me where I am going wrong.
(python version = 3.5)
_PS: I have very little experience in web scraping._
Answer: Remember that you've suggested doing the following:
> Please do the following on that web page: Go to preferences -> Tick
> "Translation of Sri Guru Granth Sahib ji (by S. Manmohan Singh) - Punjabi"
> under Additional Translations available on Guru Granth Shahib: -> scroll
> down - submit changes
Now, this is also required when you download the page in Python. In other
words, use [`requests`](http://docs.python-requests.org/en/master/) and **set
the `lang_16="yes"` cookie** to enable the Punjabi translation:
import requests
from bs4 import BeautifulSoup
with requests.Session() as session:
response = session.get("https://www.searchgurbani.com/guru_granth_sahib/ang_by_ang", cookies={
"lang_16": "yes"
})
soup = BeautifulSoup(response.content, "html5lib")
for item in soup.select(".lang_16"):
print(item.get_text())
Prints:
ਵਾਹਿਗੁਰੂ ਕੇਵਲ ਇਕ ਹੈ। ਸੱਚਾ ਹੈ ਉਸ ਦਾ ਨਾਮ, ਰਚਨਹਾਰ ਉਸ ਦੀ ਵਿਅਕਤੀ ਅਤੇ ਅਮਰ ਉਸ ਦਾ ਸਰੂਪ। ਉਹ ਨਿਡਰ, ਕੀਨਾ-ਰਹਿਤ, ਅਜਨਮਾ ਤੇ ਸਵੈ-ਪ੍ਰਕਾਸ਼ਵਾਨ ਹੈ। ਗੁਰਾਂ ਦੀ ਦਯਾ ਦੁਆਰਾ ਉਹ ਪਰਾਪਤ ਹੁੰਦਾ ਹੈ।
ਉਸ ਦਾ ਸਿਮਰਨ ਕਰ।
ਪਰਾਰੰਭ ਵਿੱਚ ਸੱਚਾ, ਯੁਗਾਂ ਦੇ ਸ਼ੁਰੂ ਵਿੱਚ ਸੱਚਾ,
ਅਤੇ ਸੱਚਾ ਉਹ ਹੁਣ ਭੀ ਹੈ, ਹੇ ਨਾਨਕ! ਨਿਸਚਿਤ ਹੀ, ਉਹ ਸੱਚਾ ਹੋਵੇਗਾ।
...
ਕਈ ਇਕ ਗਾਇਨ ਕਰਦੇ ਹਨ ਕਿ ਵਾਹਿਗੁਰੂ ਪ੍ਰਾਣ ਲੈ ਲੈਂਦਾ ਹੈ ਤੇ ਮੁੜ ਵਾਪਸ ਦੇ ਦਿੰਦਾ ਹੈ।
ਕਈ ਗਾਇਨ ਕਰਦੇ ਹਨ ਕਿ ਹਰੀ ਦੁਰੇਡੇ ਮਲੂਮ ਹੁੰਦਾ ਅਤੇ ਸੁੱਝਦਾ ਹੈ।
|
import unicodecsv fails in jupyter
Question: I tried to run `import unicodecsv` within Jupyter by running a .ipynb file. It
failed. Then I installed the unicodecsv package through the Python install
command and found it within the c:\python27 dir. But still the import did not
work. How should it be installed? Does it need to be placed within the
Anaconda installation?
Edit : Error displayed -
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-9c1521d8df38> in <module>()
2 # 1 #
3 #####################################
----> 4 import unicodecsv
5 ## Read in the data from daily_engagement.csv and project_submissions.csv
6 ## and store the results in the below variables.
ImportError: No module named unicodecsv
Answer: You should install it (from the command prompt) using:
`conda install unicodecsv`
|
Flask blueprint cannot read sqlite3 DATABASES from config file
Question: I would like Python Flask to read from configuration file the location of the
sqlite3 database name **without explicitly writing database name**. Templates
used are: <http://flask.pocoo.org/docs/0.11/patterns/sqlite3/> and
<http://flask.pocoo.org/docs/0.11/tutorial/dbcon/>.
When I try to read 'DATABASE' from my config file I get the following error
message:
File "/app/my_cool_app/app/__init__.py", line 42, in before_request g.db =
connect_db()
File "/app/my_cool_app/app/__init__.py", line 36, in connect_db return
sqlite3.connect(my_cool_app.config['DATABASE'])
AttributeError: 'Blueprint' object has no attribute 'config'
Here is my `__init__.py` code when I try to read from the configuration file and
get the above error:
import sqlite3
from flask import Flask, g
from .views import my_cool_app
# create application
def create_app(debug=True):
app = Flask(__name__, instance_relative_config=True)
app.debug = debug
app.config.from_object('config')
app.config.from_pyfile('config.py')
app.register_blueprint(my_cool_app)
return app
def connect_db():
return sqlite3.connect(my_cool_app.config['DATABASE']) <= LINE 36
@my_cool_app.before_request
def before_request():
g.db = connect_db()
@my_cool_app.teardown_request
def teardown_request(exception):
db = getattr(g, 'db', None)
if db is not None:
db.close()
Here is my run.py (I don't change it):
from app import create_app
app = create_app()
Here is my `__init__.py` code that works when I explicitly write the DB name (not
what I want):
import sqlite3
from flask import Flask, g
from .views import my_cool_app
DATABASE='/app/myappname/my_sqlite3_database_name.db'
# create application
def create_app(debug=True):
app = Flask(__name__, instance_relative_config=True)
app.debug = debug
app.config.from_object('config')
app.config.from_pyfile('config.py')
app.register_blueprint(my_cool_app)
return app
def connect_db():
return sqlite3.connect(DATABASE)
Answer: Your `my_cool_app` is an instance of `Blueprint` which doesn't have a `config`
attribute. You need to use `current_app`:
import sqlite3
from flask import Flask, g, current_app
from .views import my_cool_app
# create application
def create_app(debug=True):
app = Flask(__name__, instance_relative_config=True)
app.debug = debug
app.config.from_object('config')
app.config.from_pyfile('config.py')
app.register_blueprint(my_cool_app)
return app
def connect_db():
return sqlite3.connect(current_app.config['DATABASE'])
@my_cool_app.before_request
def before_request():
g.db = connect_db()
@my_cool_app.teardown_request
def teardown_request(exception):
db = getattr(g, 'db', None)
if db is not None:
db.close()
|
Virtualenv within single executable
Question: I currently have an executable file that is running Python code inside a
zipfile following this: <https://blogs.gnome.org/jamesh/2012/05/21/python-zip-
files/>
The nice thing about this is that I release a single file containing the app.
The problems arise in the dependencies. I have attempted to install files
using pip in custom locations and when I embed them in the zip I always have
import issues or issues that end up depending on host packages.
I then started looking into virtual environments as a way to ensure package
dependencies. However, it seems that the typical workflow on the target
machine is to source the activation script and run the code within the
virtualenv. What I would like to do is have a single file containing a Python
script and all its dependencies and for the user to just execute the file. Is
this possible given that the Python interpreter is actually packaged with the
virtualenv? Is it possible to invoke the Python interpreter from within the
zip file? What is the recommended approach for this from a Python point of
view?
Answer: You can create a bash script that creates the virtual env and runs the python
script as well.
#!/bin/bash
virtualenv .venv
.venv/bin/pip install <python packages>
.venv/bin/python script
|
Using Pandas to Create DateOffset of Paydays
Question: I'm trying to use Pandas to create a time index in Python with entries
corresponding to a recurring payday. Specifically, I'd like to have the index
correspond to the first and third Friday of the month. Can somebody please
give a code snippet demonstrating this?
Something like:
import pandas as pd
idx = pd.date_range("2016-10-10", periods=26, freq=<offset here?>)
Answer: try this:
In [6]: pd.date_range("2016-10-10", periods=26, freq='WOM-1FRI').union(pd.date_range("2016-10-10", periods=26, freq='WOM-3FRI'))
Out[6]:
DatetimeIndex(['2016-10-21', '2016-11-04', '2016-11-18', '2016-12-02', '2016-12-16', '2017-01-06', '2017-01-20', '2017-02-03', '2017-02-17',
'2017-03-03', '2017-03-17', '2017-04-07', '2017-04-21',
'2017-05-05', '2017-05-19', '2017-06-02', '2017-06-16', '2017-07-07', '2017-07-21', '2017-08-04', '2017-08-18', '2017-09-01',
'2017-09-15', '2017-10-06', '2017-10-20', '2017-11-03',
'2017-11-17', '2017-12-01', '2017-12-15', '2018-01-05', '2018-01-19', '2018-02-02', '2018-02-16', '2018-03-02', '2018-03-16',
'2018-04-06', '2018-04-20', '2018-05-04', '2018-05-18',
'2018-06-01', '2018-06-15', '2018-07-06', '2018-07-20', '2018-08-03', '2018-08-17', '2018-09-07', '2018-09-21', '2018-10-05',
'2018-10-19', '2018-11-02', '2018-11-16', '2018-12-07'],
dtype='datetime64[ns]', freq=None)
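If you need exactly 26 paydays, you can slice the union (a sketch reusing the
expression above):

    paydays = pd.date_range("2016-10-10", periods=26, freq='WOM-1FRI').union(
        pd.date_range("2016-10-10", periods=26, freq='WOM-3FRI'))
    idx = paydays[:26]  # the first 26 first/third Fridays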
|
Two select query at a time python
Question: I want to calculate the distance between two points. For that, for each point in
one table I have to calculate the distance to every point in another
table in the same database. I am using Python for that, but I am not able to
execute two queries at a time.
import mysql.connector
from haversine import haversine
cnx = mysql.connector.connect(host='localhost',user='xxxxx',passwd='xxxxxx',db='xxxxxx')
cursor = cnx.cursor()
cursor2 = cnx.cursor()
query = ("select longitude,latitude from roadData limit 5")
cursor.execute(query)
query2=("SELECT geo_coordinates_latitude, geo_coordinates_longitude from tweetmelbourne limit 2")
cursor2.execute(query2)
for (longitude,latitude) in cursor:
print longitude
print latitude
for (geo_coordinates_longitude,geo_coordinates_latitude) in cursor2:
print geo_coordinates_longitude
print geo_coordinates_latitude
cursor.close()
cnx.close()
But I am getting an error on the second query execution:
> InternalError: Unread result found.
I tried `buffered=True` for the cursor but still got the same error. Also, is it
efficient to do it like this, or is there a better way?
Answer: One trick you could use here would be to cross join the two tables together:
SELECT t1.longitude,
t1.latitude,
t2.geo_coordinates_latitude,
t2.geo_coordinates_longitude
FROM
(
SELECT longitude, latitude
FROM roadData
LIMIT 5
) t1
CROSS JOIN
(
SELECT geo_coordinates_latitude, geo_coordinates_longitude
FROM tweetmelbourne
LIMIT 2
) t2
This should work because the way you iterate over the result set in your
Python code resembles a cross join. In this case, using the above query you
would only need to iterate once over the entire result set.
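You could then compute all the distances in a single pass over the combined result
(a sketch using the `haversine` import from the question, whose function expects
`(lat, lon)` tuples):

    cursor.execute(query)  # query = the CROSS JOIN statement above
    for (longitude, latitude, geo_lat, geo_lon) in cursor:
        print haversine((latitude, longitude), (geo_lat, geo_lon))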
|
ImportError: No module named 'bs4' in django only
Question: The same question has been asked a number of times but I couldn't find the
solution.
After I install a package using pip, I am able to import it in the Python console
or a Python file and it works as expected. But when I try to import the same
package in Django, it gives an `ImportError`. Do I need to modify the `settings.py`
file, or is there any other requirement I need to add? I am running Django inside
a virtualenv.
Eg: I am using `BeautifulSoup` and I am trying to import `from bs4 import
BeautifulSoup` and I am getting error `ImportError: No module named 'bs4'`
This error only comes in django. I am not able to figure out why this is
happening.
Screenshot attached for reference.
**1\. python console - shows no error**
[![python2 and python3
console](http://i.stack.imgur.com/exmvN.png)](http://i.stack.imgur.com/exmvN.png)
**2\. django console- import error**
[![django
console](http://i.stack.imgur.com/j0IMN.png)](http://i.stack.imgur.com/j0IMN.png)
_I am sorry as it is difficult to read the console but any other thing that I
can include which will help me make myself more clear will be appreciated._
Answer: You don't show either the code of your site or the command you ran (and the
URL you entered, if any) to trigger this issue. There's almost certainly some
difference between the Python environment on the command line and that
operating in Django.
Are you using virtual environments? If so, dependencies should be separately
added to each environment. Were you operating from a different working
directory? Python usually has the current directory somewhere in `sys.path`,
so if you changed directories it's possible you made `bs4` unavailable that
way.
At the interactive Python prompt, try
import bs4
bs4.__file__
That will tell you where `bs4` is being imported from, and might therefore
give you a clue as to why it's not available to Django.
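A quick way to compare the two environments is to print the interpreter and search
path in both the plain console and in `python manage.py shell`; any difference
points at the culprit:

    import sys
    print(sys.executable)  # which interpreter is running
    print(sys.path)        # where it looks for packages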
|
How to pass a list of lists through a for loop in Python?
Question: I have a list of lists :
sample = [['TTTT', 'CCCZ'], ['ATTA', 'CZZC']]
count = [[4,3],[4,2]]
correctionfactor = [[1.33, 1.5],[1.33,2]]
I calculate frequency of each character (pi), square it and then sum (and then
I calculate het = 1 - sum).
The desired output [[1,2],[1,2]] #NOTE: This is NOT the real values of expected output. I just need the real values to be in this format.
The problem: I do not know how to pass the list of lists (sample, count) into this
loop to extract the values needed. I previously passed only a flat list (e.g.
`['TACT','TTTT'..]`) using this code.
* I suspect that I need to add a larger for loop, that indexes over each element in sample (i.e. indexes over `sample[0] = ['TTTT', 'CCCZ']` and `sample[1] = ['ATTA', 'CZZC']`. I am not sure how to incorporate that into the code.
**Code**
list_of_hets = []
for idx, element in enumerate(sample):
count_dict = {}
square_dict = {}
for base in list(element):
if base in count_dict:
count_dict[base] += 1
else:
count_dict[base] = 1
for allele in count_dict: #Calculate frequency of every character
square_freq = (count_dict[allele] / count[idx])**2 #Square the frequencies
square_dict[allele] = square_freq
pf = 0.0
for i in square_dict:
pf += square_dict[i] # pf --> pi^2 + pj^2...pn^2 #Sum the frequencies
het = 1-pf
list_of_hets.append(het)
print list_of_hets
"Failed" OUTPUT:
line 70, in <module>
square_freq = (count_dict[allele] / count[idx])**2
TypeError: unsupported operand type(s) for /: 'int' and 'list'er
Answer: I'm not completely clear on how you want to handle the 'Z' items in your data,
but this code replicates the output for the sample data in
<https://eval.in/658468>
from __future__ import division
bases = set('ACGT')
#sample = [['TTTT', 'CCCZ'], ['ATTA', 'CZZC']]
sample = [['ATTA', 'TTGA'], ['TTCA', 'TTTA']]
list_of_hets = []
for element in sample:
hets = []
for seq in element:
count_dict = {}
for base in seq:
if base in count_dict:
count_dict[base] += 1
else:
count_dict[base] = 1
print count_dict
#Calculate frequency of every character
count = sum(1 for u in seq if u in bases)
pf = sum((base / count) ** 2 for base in count_dict.values())
hets.append(1 - pf)
list_of_hets.append(hets)
print list_of_hets
**output**
{'A': 2, 'T': 2}
{'A': 1, 'T': 2, 'G': 1}
{'A': 1, 'C': 1, 'T': 2}
{'A': 1, 'T': 3}
[[0.5, 0.625], [0.625, 0.375]]
This code could be simplified further by using a collections.Counter instead
of the `count_dict`.
BTW, if the symbol that's not in 'ACGT' is _always_ 'Z' then we can speed up
the `count` calculation. Get rid of `bases = set('ACGT')` and change
count = sum(1 for u in seq if u in bases)
to
count = sum(1 for u in seq if u != 'Z')
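Putting both suggestions together, the body of the inner `for seq in element:` loop
shrinks to something like this (a sketch):

    from collections import Counter

    count_dict = Counter(seq)                # replaces the manual counting loop
    count = sum(1 for u in seq if u != 'Z')  # assumes 'Z' is the only non-base symbol
    pf = sum((n / count) ** 2 for n in count_dict.values())
    hets.append(1 - pf)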
|
Find out the letter frequency in from a list in Python
Question: I'm trying to code a program that will count the occurrence of different chars
in a list. I want to find the 7 most common ones and also want to count the %
of occurrences of each letter out of the total amount of letters.
fileOpen = open("lol.txt", 'r')
savedWordData = fileOpen.read()
fileOpen.close()
#To split into chars and function to clear the string from faulty chars
savedWordData = cleanString(savedWordData)
savedWordData = savedWordData.replace(" ", "")
#print(savedWordData)
#Use this to count the total number of chars and find the 7 most common ones
from collections import Counter
data = Counter(savedWordData)
print("The 7 most common letters: " + str(data.most_common(7)))
sumOfAll = sum(data.values())
But I'm not sure how I should continue from here. How do I access the values from
the data dict so I can see the occurrence of each letter?
Answer: You can use
[`most_common`](https://docs.python.org/2/library/collections.html#collections.Counter.most_common),
and then loop on the list to get the values :
In [35]: s = 'ieufisjhfkdfhgdfkjvwoeiweuieanvszudadyuieafhuskdjfhdviurnawuevnskzjdvnziurvzdkjHFiuewhksjnvviuzsdiufwekfvnxkjvnsdv'
In [36]: l = list(s)
In [37]: from collections import Counter
In [38]: data = Counter(l)
In [39]: data.most_common()
Out[39]:
[('u', 11),
('v', 11),
('d', 10),
('i', 10),
('k', 8),
('e', 8),
('f', 8),
('s', 7),
('j', 7),
('n', 7),
('z', 5),
('w', 5),
('h', 5),
('a', 4),
('r', 2),
('g', 1),
('H', 1),
('y', 1),
('x', 1),
('o', 1),
('F', 1)]
In [40]: for pair in data.most_common(7):  # avoids recomputing most_common() each iteration
    ...:     print(pair)
    ...:
('u', 11)
('v', 11)
('d', 10)
('i', 10)
('k', 8)
('e', 8)
('f', 8)
The first value is the letter, the second one is the number of occurrences.
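For the percentage part of the question, divide each count by the total, much like
the `sumOfAll` line in the question already does (a sketch):

    total = sum(data.values())
    for letter, count in data.most_common(7):
        print('%s: %d (%.1f%%)' % (letter, count, 100.0 * count / total))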
|
python pyquery import not working on Mac OS Sierra
Question: I'm trying to import pyquery as I have done hundreds of times before, and it's not
working. It looks like it's related to macOS Sierra. (The module was installed with
pip and is up to date.)
from pyquery import PyQuery as pq
And got an error on the namespacing
ImportError: cannot import name PyQuery
Any idea ? Thx !
Answer: Finally found out why: my file had the same name as my import, so the library
import was shadowed by my own .py file.
|