save data from a command to a csv file in python
Question: I want to save the result of a command in a csv file. I have this code for the
moment :
import sys
import os
import time
import datetime
import subprocess
import csv
with open("compteur_data.csv","a") as csvfile:
date = datetime.datetime.today()
wtr=csv.writer(csvfile)
wtr.writerow(['Date/Heure','DATA']) #pillar title
while True:
ts = time.time()
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
print '..............................', st
sys.stdout.flush()
cmd = 'sdm120c -a 1 -b 2400 -P N -S 2 -j 20 -z 1 /dev/ttyUSB0'
(c_stdin,c_stdout,c_stderr)=os.popen3(cmd,'r')
out=c_stdout.read()
print out
c_stdin.close()
c_stdout.close()
c_stderr.close()
wtr.writerow([date,out])
time.sleep(5)
My purpose is to collect the data from an sdm120c meter and then save the data in a
CSV file. In my Python shell I can see all the data I want every 5 seconds, but when
I open the target CSV file, nothing is written to it. Can someone tell me where my
mistake is, please?
Answer: Add a call to flush the buffered contents to the file. This is achieved by
using the call: csvfile.flush()
with open("compteur_data.csv","a") as csvfile:
date = datetime.datetime.today()
wtr=csv.writer(csvfile)
wtr.writerow(['Date/Heure','DATA']) #pillar title
while True:
ts = time.time()
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
print '..............................', st
sys.stdout.flush()
cmd = 'sdm120c -a 1 -b 2400 -P N -S 2 -j 20 -z 1 /dev/ttyUSB0'
(c_stdin,c_stdout,c_stderr)=os.popen3(cmd,'r')
out=c_stdout.read()
print out
c_stdin.close()
c_stdout.close()
c_stderr.close()
wtr.writerow([date,out])
#New call to flush added below
csvfile.flush()
time.sleep(5)
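As a side note, `os.popen3` is deprecated; here is a minimal sketch of the same loop body using the `subprocess` module the question already imports (a sketch under the assumption that the sdm120c command behaves as above; untested against real hardware):

p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # read stdout/stderr and wait for the command to exit
print out
wtr.writerow([datetime.datetime.today(), out])  # timestamp recomputed per row
csvfile.flush()  # push the buffered row to disk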
|
cv2 import error on Jupyter notebook
Question: I'm trying to import **cv2** on **Jupyter notebook** but I get this error:
ImportError: No module named cv2
I am frustrated because I've been working on this simple issue for hours now. It
works in PyCharm but not in Jupyter notebook. I've already installed cv2 into
Python 2.7's site-packages, configured Jupyter's kernel to Python 2, and browsed the
documentation, but I still don't see what I am missing.
_(I'm using Windows 10 and working with the Microsoft Cognitive Services API; that's
why I need to import this package.)_
Here is the traceback:
<ipython-input-1-9dee6ed62d2d> in <module>()
----> 1 import cv2
2 cv2.__version__
What should I do in order to make this work ?
Answer: Is your python path looking in the right place? Check where python is looking
for the module. Within the notebook try:
import os
os.sys.path
Is the `cv2` module located in any of those directories? If not, your path is
looking in the wrong place. If it is overlooking the install location, append
it to your Python path. You can follow the instructions
[here](http://stackoverflow.com/questions/8663076/python-best-way-to-add-to-sys-path-relative-to-the-current-running-script).
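For example, a quick check from inside the notebook (the site-packages path below is only a hypothetical example; point it at the directory that actually contains cv2):

import os, sys
print(sys.executable)  # the interpreter this kernel is running
print(os.sys.path)     # where it looks for modules
sys.path.append(r"C:\Python27\Lib\site-packages")  # hypothetical install location
import cv2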
|
Nested for loops using multiprocessing
Question: I have a quick question regarding multiprocessing in python.
I am conducting a rather large grid search over three parameters and the
computation is taking ~14 hours to complete. I would like to shrink this run
time down by using multiprocessing.
A very simplified example of my code is here:
import numpy as np
import pickle
import time
a_range = np.arange(14, 18, 0.2)
b_range = np.arange(1000, 5000, 200)
c_range = np.arange(12, 21, .5)
a_position = range(len(a_range))
b_position = range(len(b_range))
c_position = range(len(c_range))
data_grid = np.zeros([len(a_range), len(b_range), len(c_range)])
record_data = []
start_time = time.time()
for (a, apos) in zip(a_range, a_position):
    for (b, bpos) in zip(b_range, b_position):
        for (c, cpos) in zip(c_range, c_position):
            example = a + b + c  # The math in my model is much more complex
                                 # and takes about 7-8 seconds to process
            data_grid[apos, bpos, cpos] = example
            record_data.append([a, b, c, example])
with open('Test_File', 'wb') as f:
    pickle.dump(record_data, f)
np.save('example_values', data_grid)
print 'Code ran for ', round(time.time()-start_time, 2), ' seconds'
Now, I have absolutely zero experience in multiprocessing so my first attempt
at this was changing the for loops into a function and then calling the
multiprocessing function like this:
def run_model(a, b, c, apos, bpos, cpos):
    example = a + b + c
    data_grid[apos, bpos, cpos] = example
    record_data.append([a, b, c, example])

from multiprocessing import Pool

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.map(run_model, [a_range, b_range, c_range, a_position, b_position, c_position])
    pool.close()
    pool.join()
This failed however at the pool.map call. I understand this function only
takes a single iterable argument but I don't know how to fix the problem. I am
also skeptical that the data_grid variable is going to be filled correctly.
The result I want from this function is two files saved: one an array of
values whose indexes correspond to the a, b, and c values, and the other a list of
lists containing the a, b, c values and the resulting value (`example` in the
code above).
Thanks for any help!
-Will
Answer: This doesn't solve your multiprocessing problem but it might make your process
faster.
Your _pattern_ of using nested loops to construct n-d coordinates and then
operating on them can be _vectorized_ using
[`numpy.meshgrid`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html).
Without knowing your actual calcs this approach can't be tested.
import numpy as np
a = np.array([0,1,2])
b = np.array([10,11,12])
c = np.array([20,21,22])
x, y, z = np.meshgrid(a,b,c)
>>> x
array([[[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]],

       [[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]],

       [[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]]])
>>> y
array([[[10, 10, 10],
        [10, 10, 10],
        [10, 10, 10]],

       [[11, 11, 11],
        [11, 11, 11],
        [11, 11, 11]],

       [[12, 12, 12],
        [12, 12, 12],
        [12, 12, 12]]])
>>> z
array([[[20, 21, 22],
        [20, 21, 22],
        [20, 21, 22]],

       [[20, 21, 22],
        [20, 21, 22],
        [20, 21, 22]],

       [[20, 21, 22],
        [20, 21, 22],
        [20, 21, 22]]])
>>> f = x + y + z
>>> f
array([[[30, 31, 32],
        [31, 32, 33],
        [32, 33, 34]],

       [[31, 32, 33],
        [32, 33, 34],
        [33, 34, 35]],

       [[32, 33, 34],
        [33, 34, 35],
        [34, 35, 36]]])
>>>
* * *
There is also the option of using `meshgrid` to create the actual points then
use a single loop to iterate over the points - you lose the spatial info with
this approach unless you can figure out how to reshape the result. I found
this in SO answer <http://stackoverflow.com/a/18253506/2823755>
points = np.vstack([x,y,z]).reshape(3, -1).T
>>> points
array([[ 0, 10, 20],
       [ 0, 10, 21],
       [ 0, 10, 22],
       [ 1, 10, 20],
       [ 1, 10, 21],
       [ 1, 10, 22],
       [ 2, 10, 20],
       [ 2, 10, 21],
       [ 2, 10, 22],
       [ 0, 11, 20],
       [ 0, 11, 21],
       [ 0, 11, 22],
       [ 1, 11, 20],
       [ 1, 11, 21],
       [ 1, 11, 22],
       [ 2, 11, 20],
       [ 2, 11, 21],
       [ 2, 11, 22],
       [ 0, 12, 20],
       [ 0, 12, 21],
       [ 0, 12, 22],
       [ 1, 12, 20],
       [ 1, 12, 21],
       [ 1, 12, 22],
       [ 2, 12, 20],
       [ 2, 12, 21],
       [ 2, 12, 22]])
>>>
You can create a function and apply it to `points`
def g(point):
    x, y, z = point
    return x + y + z
result = np.apply_along_axis(g, 1, points)
>>> result
array([30, 31, 32, 31, 32, 33, 32, 33, 34, 31, 32, 33, 32, 33, 34, 33, 34, 35, 32, 33, 34, 33, 34, 35, 34, 35, 36])
>>>
Reshaping this example is straightforward:
>>> result.reshape(3,3,3)
array([[[30, 31, 32],
        [31, 32, 33],
        [32, 33, 34]],

       [[31, 32, 33],
        [32, 33, 34],
        [33, 34, 35]],

       [[32, 33, 34],
        [33, 34, 35],
        [34, 35, 36]]])
>>>
Test to make sure they are both the same:
>>> np.all(result.reshape(3,3,3) == f)
True
>>>
For more complicated maths, just iterate over points:
result = []
for point in points:
    example = some_maths
    result.append(example)
result = np.array(result).reshape(shape_of_the_3d_data)
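As for the original `pool.map` error: `map` takes a function of one argument and a single iterable of work items. One hedged way to fix it (a sketch, with your real model substituted for the toy sum) is to enumerate the parameter combinations with `itertools.product` and have the workers return values instead of mutating shared state, since each worker process gets its own copy of `data_grid`:

import numpy as np
from itertools import product
from multiprocessing import Pool

a_range = np.arange(14, 18, 0.2)
b_range = np.arange(1000, 5000, 200)
c_range = np.arange(12, 21, .5)

def run_model(point):
    a, b, c = point
    return a + b + c  # replace with the real 7-8 second computation

if __name__ == '__main__':
    points = list(product(a_range, b_range, c_range))  # same order as the nested loops
    pool = Pool(processes=4)
    results = pool.map(run_model, points)  # one work item per call
    pool.close()
    pool.join()
    # product() varies c fastest, so a C-order reshape recovers the 3-D grid
    data_grid = np.array(results).reshape(len(a_range), len(b_range), len(c_range))
    record_data = [[a, b, c, r] for (a, b, c), r in zip(points, results)]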
|
OpenCV Python - cv2 Module not found
Question: Even though I believe I have installed OpenCV correctly, I cannot overcome the
following problem. When I start a new python project from IDLE (2.7), the cv2
module is imported successfully. If I close IDLE and try to run the .py file,
an error message is displayed that says "ImportError: No module named cv2".
Then if I create a clean project through IDLE it works until I close it. What
could be the problem?
P.S. I am using Python 2.7 and OpenCV 3.1, but tried also with 2.4.13 on
Windows 10.
Answer: Try to reinstall it with `sudo apt-get install python-opencv` (on Linux), but first
check something you might be skipping: make sure the script you are running in the
terminal is on the same Python version/location as IDLE. Your IDLE may be running
on a different interpreter (a different location).
Open IDLE and check the path of the cv2 module with `cv2.__file__`, or check
the search path with `sys.path`. Then check the Python path by
running the script from the terminal; it must be the same, or else you need to
explicitly set `PYTHONPATH` to the path shown in IDLE.
Edit: According to the comments, the problem you are facing is with the
execution path. On Windows, add the IDLE execution path to the `path`
environment variable. You can do it on the fly in cmd with
`SET PATH=%PATH%;c:\python27` (change the path to match your IDLE's location).
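A quick sanity check (a sketch; run it both inside IDLE and from the terminal and compare the output) makes any mismatch visible:

import sys
print(sys.executable)  # the interpreter actually running this script
import cv2
print(cv2.__file__)    # where that interpreter found cv2 (raises ImportError if it can't)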
|
Adding non standard Python library to Beaker Lab notebook
Question: I would like to use fiona (and a few other third party libraries from Github)
in my Beaker Lab notebook and it's not included in the default installation.
Is there a way to install new Python packages?
Answer: To use Python packages in a Python 2 notebook, you should first use My
Cloud Resources to create a directory 'p2packages' in your scratch space.
Then you can append that path to your `sys.path` variable. This must be done in
every notebook that requires your custom packages.
sys.path.append('/mnt/scratch/p2packages/')
The next step is to use the embedded pip to install the required package and its
dependencies.
Usually it is enough to specify a `--target` option, but in other cases you might
want to check [Installing python module within code](http://stackoverflow.com/questions/12332975/installing-python-module-within-code)
for other options that might be required by other packages.
import pip
pip.main(['install', '--target=/mnt/scratch/p2packages/', 'pattern' ])
Now you can use this new package in your notebook.
Example in Beaker Lab:
<https://lab.beakernotebook.com/publications/2ff702e6-3ebf-11e6-b0e2-5f05deb51e12?fullscreen=true>
Here's an example for python 3
<https://lab.beakernotebook.com/approval/view/3c73a144-3ec1-11e6-935b-8ff81ae480a3>
|
Interpreter: Python built-in functions not defined?
Question: I was going through the basics of Python, and testing out some built-in
functions in the interpreter. The documentation I was looking at was talking
about Python 3... I am using Python 2.7.3.
>>> x = '32456'
>>> x
'32456'
>>> isalpha(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'isalpha' is not defined
I did some research and it does not seem that `isalpha()` is limited to only
Python 3... I also cannot use `sin(3.3)` even after doing `import math`
So why am I seeing an error? Is there some other module I have to import for
these functions to work?
Answer: `isalpha()` is not a function but a method of the `str` type. If you have to,
you can extract it as an unbound method and give it a name as a function:
>>> "hello".isalpha()
True
>>> "31337".isalpha()
False
>>> isalpha = str.isalpha
>>> isalpha("hello")
True
>>> isalpha("31337")
False
Functions in an imported module are members of that module. To pull a function
into the main namespace, use the `from` statement:
>>> import math
>>> math.sin(3.3)
-0.1577456941432482
>>> from math import cos
>>> cos(3.3)
-0.9874797699088649
Now why does Python work this way? Both the `math` module and the `logging`
module have a function called `log()`, but they do very different things.
>>> import math, logging
>>> help(math.log)
log(...)
log(x[, base])
Return the logarithm of x to the given base.
If the base not specified, returns the natural logarithm (base e) of x.
>>> help(logging.log)
log(level, msg, *args, **kwargs)
Log 'msg % args' with the integer severity 'level' on the root logger. If
the logger has no handlers, call basicConfig() to add a console handler
with a pre-defined format.
If all imported symbols went straight to the main namespace the way they do
when you `from math import *`, a program wouldn't be able to use both modules'
`log()` functions.
|
How to use Python Requests to login to website, store cookie, then access another page on the website?
Question: I'm trying to login into website using a Python script, store the cookie I
receive, and then use that same cookie to access member-only parts of the
website. I've read several posts and answers about this topic, but none of the
answers have worked for me.
Here is the HTML code for the website login page I'm trying to access.
<form action="/login?task=user.login" method="post">
<fieldset>
<table border="0" cellspacing="0" cellpadding="0">
<tbody>
<tr>
<td width="70" nowrap="">Username </td>
<td width="260"><input type="text" name="username" id="username" value="" class="validate-username" size="25"/></td>
</tr>
<tr>
<td width="70" nowrap="">Password </td>
<td width="260"><input type="password" name="password" id="password" value="" class="validate-password" size="25"/></td>
</tr>
<tr>
<td colspan="2"><label style="float: left;width: 70%;" for="modlgn_remember">Remember Me</label>
<input style="float: right;width: 20%;"id="modlgn_remember" type="checkbox" name="remember" class="inputbox" value="yes"/></td>
</tr>
<tr>
<td colspan="2" width="100%"> <a href="/reset-password"> Forgot your password?</a></td>
</tr>
<tr>
<td colspan="2" width="100%"> <a href="/username-reminder">Forgot your username?</a></td>
</tr>
<tr>
<td colspan="2"><button type="submit" class="button cta">Log in</button></td>
<!-- <td colspan="1"><a href="/--><!--">Register Now</a></td>-->
</tr>
</tbody>
</table>
<input type="hidden" name="return"
value="aHR0cHM6Ly9maWYuY29tLw=="/>
<input type="hidden" name="3295f23066f7c6ab53c290c6c022cc4b" value="1" /> </fieldset>
</form>
Here is my own code that I'm using to attempt a login.
from requests import session

payload = {
    'username': 'MY_USERNAME',
    'password': 'MY_PASSWORD'
}

s = session()
s.post('https://fif.com/login?task=user.login', data=payload)
response = s.get('https://fif.com/tools/capacity')
From everything I have read, this should work, but it doesn't. I've been
struggling with this for two days, so if you know the answer, I would love the
solution.
For reference, here are all the other StackOverflow posts I have looked at in
hopes of finding an answer:
1. [Python Requests and Persistent Sessions](http://stackoverflow.com/questions/12737740/python-requests-and-persistent-sessions)
2. [Logging into a site using Python Reqeusts](http://stackoverflow.com/questions/30605582/logging-in-to-a-site-using-python-requests?lq=1)
3. [Login to website using python](http://stackoverflow.com/questions/8316818/login-to-website-using-python)
4. [How to “log in” to a website using Python's Requests module?](http://stackoverflow.com/questions/11892729/how-to-log-in-to-a-website-using-pythons-requests-module/17633072#17633072)
5. [Python: Requests Session Login Cookies](http://stackoverflow.com/questions/24260149/python-requests-session-login-cookies)
6. [How to use Python to login to a webpage and retrieve cookies for later usage?](http://stackoverflow.com/questions/189555/how-to-use-python-to-login-to-a-webpage-and-retrieve-cookies-for-later-usage)
7. [cUrl Login then cUrl Download](http://stackoverflow.com/questions/6987876/curl-login-then-curl-download)
Answer: You should be posting all the required data; you can use _bs4_ to parse the
login page to get the values you need:
from requests import session
from bs4 import BeautifulSoup

data = {
    'username': 'MY_USERNAME',
    'password': 'MY_PASSWORD'
}
head = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}

with session() as s:
    soup = BeautifulSoup(s.get("https://fif.com/login").content)
    form_data = soup.select("form[action^=/login?task] input")
    data.update({inp["name"]: inp["value"] for inp in form_data if inp["name"] not in data})
    s.post('https://fif.com/login?task=user.login', data=data, headers=head)
    resp = s.get('https://fif.com/tools/capacity')
If you make a request and look in Chrome dev tools or Firebug, the form data
looks like:
username:foo
password:bar
return:aW5kZXgucGhwP29wdGlvbj1jb21fdXNlcnMmdmlldz1wcm9maWxl
d68a2b40daf7b6c8eaa3a2f652f7ee62:1
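To verify the login actually succeeded, one hedged check (a sketch; `'Log out'` is a hypothetical marker, so substitute text that only appears for logged-in users on this site):

print(resp.status_code)        # 200 alone does not prove you are logged in
print('Log out' in resp.text)  # hypothetical logged-in-only marker text
print(s.cookies.get_dict())    # the session cookies the site handed back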
|
python itertools with islice error
Question: I'm still learning python and I have the code below but it is not working:
from itertools import *

startword = ["start",]
stopword = ["stop",]
text = "this is a text that starts with some test stuff and then after that it stop right here!"
for i in islice(text.split(" "), startword, stopword):
    print i
I'm trying to print the words between the start and stop without knowing how
many words are in between. From the error I'm getting, it looks like islice
needs an integer for its start and stop parameters. Here is the error:
`ValueError: Indices for islice() must be None or an integer: 0 <= x <= maxint.`
Is there another itertool that I can use?
Thanks,
Answer: In this particular case, `islice` is a poor choice; you have all the data in
memory, in a realized sequence, so `islice` is just wasting time iterating the
leading values. There are many better ways to handle this, either using
`index` to find the `start` and `end` indices and doing a true slice, or to
get more clever, and reduce the work by splitting out the `start/end`
delimited section and only splitting that part to extract words. For example:
text = "this is a text that starts with some test stuff and then after that it stop right here!"
_, text = text.split('start', 1) # Remove start and stuff before it
text, _ = text.rsplit('stop', 1) # Remove stop and stuff after it
for word in text.split(): # Split what remains on whitespace
print word
Mind you, this still isn't quite right (your bounded region begins with
"starts" not "start", so you end up with a leading "word" of "s"), but
switching to `re.split` with appropriate bounding and wildcarding could be
used to fix that in whatever way is appropriate to your scenario.
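For example, a hedged sketch of that `re`-based fix (the pattern is an assumption about what should count as a delimiter: here any word beginning with "start" opens the region and the exact word "stop" closes it):

import re

text = "this is a text that starts with some test stuff and then after that it stop right here!"
m = re.search(r'\bstart\w*\b(.*?)\bstop\b', text)  # 'starts' opens, 'stop' closes
if m:
    for word in m.group(1).split():
        print word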
|
python-docx - replacing characters
Question: I am trying to build a small program in which I open a docx document and
replace characters by others, to do some old-school Caesar-style encrypting.
After checking the documentation (<https://python-docx.readthedocs.io>), I am
afraid I can't find the object methods and attributes; the documentation just
kind of explains how to do certain things, like creating paragraphs and
sections, but I can't find anything on retrieving document data and parsing it.
I would like to find a list of the objects in the document so I can parse
through them.
I would like to do something like this:
from docx import Document

document = Document('essay.docx')
paragraph = []
for i in document:
    paragraph.append(i)
for i in paragraph:
    for y in i:
        y.replace("a", "y")
...
Can python-docx do something like this? If so where would I find the
documentation that could show me how to do it?
If maybe I am using the incorrect library I would also appreciate it if you
could point it out.
Answer: The API documentation is indexed (i.e. its table of contents appears) on the
page you link to, and it describes all the objects and methods:
<https://python-docx.readthedocs.io/en/latest/#api-documentation>
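For the character replacement itself, a minimal sketch (assuming a simple document; replacing text run-by-run preserves formatting, but a word split across runs would need extra handling):

from docx import Document

document = Document('essay.docx')
for paragraph in document.paragraphs:  # every body paragraph
    for run in paragraph.runs:         # formatted chunks within the paragraph
        run.text = run.text.replace("a", "y")  # the Caesar-style substitution goes here
document.save('essay_encrypted.docx')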
|
Can't import Pyperclip
Question: I am having trouble importing Pyperclip in IDLE.
I am running windows 7 (64-bit).
I have Python 3.5.2 Installed on: C:\Python\Python35.
I opened a command prompt and initiated the install by typing `pip install
pyperclip` after changing directory to C:\Python\Python35\Scripts. It
successfully installed Pyperclip-1.5.27. I then went to IDLE and typed in
`import pyperclip`, but the following error shows up:

Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    import pyperclip
ImportError: No module named 'pyperclip'
I tried to fix this by adding "C:\Python\Python35" to the end of the "Path"
variable, in the systems environmental variables.
Answer: It unpacked pyperclip in the wrong directory. I copied the entire pyperclip
folder and put it in C:/python/python35, and now it works as it should. Seems like
a noob mistake on my part, but it took me a long time to figure this out. I
hope this helps someone in the future.
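A cleaner fix than copying folders is to let the exact interpreter that IDLE uses run pip itself, so the package lands in that interpreter's own site-packages (the install path is taken from the question):

C:\> C:\Python\Python35\python.exe -m pip install pyperclip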
|
python isbn 13 digit validate
Question: I need to write a function that validates a 13 digit ISBN. It needs to start
with 978 or 979, end with a single digit, and the remaining sections need to
be at least 1 digit in length. I need some help to make this work, I don't
understand why it never returns true
def validate(s):
    lst = s.split("-")
    isbn = False
    if lst[0] == "978" or lst[0] == "979":
        if len(lst[1]) >= 1 and len(lst[2]) >= 1:
            if len(lst[3]) == 1:
                isbn = True
    return isbn
Answer: You should use a regular expression; this is exactly what they are for:
>>> import re
>>> def validate(isbn):
        isbn_regex = re.compile('^(978|979)-\d+-\d+-\d$')
        return isbn_regex.search(isbn) is not None

>>> print validate('978-12-12-2')
True
Note: this works as per the logic in your code above (except that yours didn't
check whether each section actually consists of digits).
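If the "13 digit" part of the title should also be enforced, a hedged extension of the same idea (an assumption about the requirement, since the question only constrains the sections):

import re

def validate13(isbn):
    # same section pattern as above, plus a total count of 13 digits
    if re.search('^(978|979)-\d+-\d+-\d$', isbn) is None:
        return False
    return sum(c.isdigit() for c in isbn) == 13

print validate13('978-12-12-2')       # False -- only 8 digits
print validate13('978-0306-40615-7')  # True  -- 13 digits in total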
|
Calculate F-distribution p values in python?
Question: Suppose that I have an F value and the associated degrees of freedom, df1 and
df2. How can I use python to programmatically calculate the p value associated
with these numbers?
Note: I would not accept a solution using scipy or statsmodels.
Answer: The CDF for the F-distribution (and hence the p-value) can be calculated with
the regularized (incomplete) beta function `I(x; a, b)`, see, e.g.,
[MathWorld](http://mathworld.wolfram.com/F-Distribution.html). Using the code
for `I(x; a, b)` from this
[blog](https://malishoaib.wordpress.com/2014/04/15/the-beautiful-beta-functions-in-raw-python/),
which uses only `math`, the p-value is
1 - incompbeta(.5*df1, .5*df2, float(df1)*F/(df1*F+df2))
Here is the result for some sample values, matching `scipy.stats.f.sf`:
In [57]: F, df1, df2 = 5, 20, 18
In [58]: 1 - incompbeta(.5*df1, .5*df2, float(df1)*F/(df1*F+df2))
Out[58]: 0.0005812207389501722
In [59]: st.f.sf(F, df1, df2)
Out[59]: 0.00058122073922042188
Just in case the blog disappears, here is the code:
import math

def incompbeta(a, b, x):
    ''' incompbeta(a,b,x) evaluates incomplete beta function, here a, b > 0 and 0 <= x <= 1. This function requires contfractbeta(a,b,x, ITMAX = 200)
    (Code translated from: Numerical Recipes in C.)'''
    if (x == 0):
        return 0;
    elif (x == 1):
        return 1;
    else:
        lbeta = math.lgamma(a+b) - math.lgamma(a) - math.lgamma(b) + a * math.log(x) + b * math.log(1-x)
        if (x < (a+1) / (a+b+2)):
            return math.exp(lbeta) * contfractbeta(a, b, x) / a;
        else:
            return 1 - math.exp(lbeta) * contfractbeta(b, a, 1-x) / b;
def contfractbeta(a, b, x, ITMAX = 200):
    """ contfractbeta() evaluates the continued fraction form of the incomplete Beta function; incompbeta().
    (Code translated from: Numerical Recipes in C.)"""
    EPS = 3.0e-7
    bm = az = am = 1.0
    qab = a+b
    qap = a+1.0
    qam = a-1.0
    bz = 1.0-qab*x/qap
    for i in range(ITMAX+1):
        em = float(i+1)
        tem = em + em
        d = em*(b-em)*x/((qam+tem)*(a+tem))
        ap = az + d*am
        bp = bz+d*bm
        d = -(a+em)*(qab+em)*x/((qap+tem)*(a+tem))
        app = ap+d*az
        bpp = bp+d*bz
        aold = az
        am = ap/bpp
        bm = bp/bpp
        az = app/bpp
        bz = 1.0
        if (abs(az-aold)<(EPS*abs(az))):
            return az
    print 'a or b too large or given ITMAX too small for computing incomplete beta function.'
|
Pandas Python, select columns based on rows conditions
Question: I have a dataframe:
import pandas as pd
df = pd.DataFrame(np.random.randn(2, 4))
print(df)
          0         1         2         3
0  1.489198  1.329603  1.590124  1.123505
1  0.024017  0.581033  2.500397  0.156280
I want to select the columns for which there is at least one row with a value
greater than `2`. I tried the following, but it did not work as expected.
df[df.columns[df.iloc[(0,1)]>2]]
In this toy example my expected output would be:
2
1.590124
2.500397
Answer: Use [`gt`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.gt.html#pandas.DataFrame.gt)
and `any` to filter the df:
In [287]:
df.ix[:,df.gt(2).any()]
Out[287]:
          2
0  1.590124
1  2.500397
Here we use `ix` to select all rows (the first `:`), and the next arg is a
boolean mask of the columns that meet the condition:
In [288]:
df.gt(2)
Out[288]:
       0      1      2      3
0  False  False  False  False
1  False  False   True  False
In [289]:
df.gt(2).any()
Out[289]:
0 False
1 False
2 True
3 False
dtype: bool
In your example, what you did was select the cell value at the first row and
second column; you then tried to use this to mask the columns, but this just
returned the first column, hence why it didn't work:
In [291]:
df.iloc[(0,1)]
Out[291]:
1.3296030000000001
In [293]:
df.columns[df.iloc[(0,1)]>2]
Out[293]:
'0'
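Note that `ix` was later deprecated in pandas; the same boolean column mask works with `loc` (a sketch of the equivalent call):

df.loc[:, df.gt(2).any()]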
|
python xlsxwriter write cell according to row data
Question: I have python dictionaries:
student_age = {'bala':20,'raju':21}
student_id = {'bala':289,'raju':567}
and ten more similar dictionaries, each keyed by student name with a different
field as the value.
**Expected excel result:** [![enter image description
here](http://i.stack.imgur.com/moL7u.png)](http://i.stack.imgur.com/moL7u.png)
Instead of
worksheet.write(0,2,20)
I want to write according to student names like _write("bala"-> "age" ,20)_
**Updated code:**
import xlsxwriter

workbook = xlsxwriter.Workbook('student_data.xlsx')
worksheet = workbook.add_worksheet()
student_age = {'bala':20,'raju':21,'ram':22}
student_id = {'bala':289,'ram':567,'raju':654}
students = student_id.keys()
print(len(student_age.keys()))
fields = [student_age, student_id]  # add other dicts here...
for row, student in enumerate(students):
    worksheet.write(row, 0, student)  # name column
    for col, student_data in enumerate(fields):
        col = col + 1
        worksheet.write(row, col, student_data[student])
workbook.close()
The above code is working fine, but with the following issues:
1. The output starts from row 0, column 0, but I want the headings on row 1 and the data output starting from row 1.
2. If student_id has 20 keys and one dictionary has just 15 students, how do I leave those cells blank and keep traversing without getting an error?
Answer: You could simply add all your python dictionaries to a list so you can iterate
through them for each student to populate each column for every row. Something
like this:
**Updated to address OP's comments**
For #1, simply write the headers in the first row, and then offset the other
rows by one
For #2, you can simply take advantage of
[`dict.get()`](https://docs.python.org/2/library/stdtypes.html#dict.get)
instead of the `[]` notation to assign a default if the id is not in that
dictionary:
import xlsxwriter

workbook = xlsxwriter.Workbook('student_data.xlsx')
worksheet = workbook.add_worksheet()
bold = workbook.add_format({'bold': 1})

students = student_id.keys()
fields = [student_age, student_id]  # add other dicts here...
headers = ['name', 'age', 'id']  # ...

# write out a header row
for i, header in enumerate(headers):
    worksheet.write(0, i, header, bold)

for row, student in enumerate(students):
    row = row + 1
    worksheet.write(row, 0, student)  # name column
    for col, student_data in enumerate(fields):
        col = col + 1
        worksheet.write(row, col, student_data.get(student, ''))

workbook.close()  # without this the file is never written
|
Compress command fails on fresh installation of WireCloud
Question: I cannot set up a basic WireCloud instance anymore. I tried to create a
minimal WireCloud instance like this:
virtualenv venv
source venv/bin/activate
pip install wirecloud
wirecloud-admin startproject prj
cd prj/
python manage.py collectstatic
python manage.py compress --force
Upon the call to compress the static files an error is thrown:
CommandError: An error occurred during rendering /opt/wc/venv/local/lib/python2.7/site-packages/wirecloud/defaulttheme/templates/wirecloud/views/base_plain.html: Error parsing expression at 30:
none if($important, !important)
^
on line 2 of theme/wirecloud.defaulttheme/css/base/z-depth.scss
Traceback:
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/calculator.py", line 167, in parse_expression
ast = getattr(parser, target)()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 110, in goal
expr_lst = self.expr_lst()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 199, in expr_lst
expr_slst = self.expr_slst()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 211, in expr_slst
or_expr = self.or_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 216, in or_expr
and_expr = self.and_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 225, in and_expr
not_expr = self.not_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 236, in not_expr
comparison = self.comparison()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 244, in comparison
a_expr = self.a_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 275, in a_expr
m_expr = self.m_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 290, in m_expr
u_expr = self.u_expr()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 319, in u_expr
atom = self.atom()
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/grammar/expression.py", line 359, in atom
return TernaryOp(expr_lst)
File "/opt/wc/venv/local/lib/python2.7/site-packages/scss/ast.py", line 558, in __init__
raise SyntaxError("if() must have exactly 3 arguments")
SyntaxError: if() must have exactly 3 arguments
What is the problem here? I just set up the most basic WireCloud instance
possible. If I do not compress, the same error gets thrown in the browser.
Is the latest release of WireCloud broken? Or is it something else?
What do I need to do to get a running version again? Help would be much
appreciated, since the docker image does not seem to be able to restart, and
configurations such as other databases are thus not very easy to set up...
Answer: Use the just released [version 0.9.2 of
WireCloud](https://github.com/Wirecloud/wirecloud/releases/tag/0.9.2) :)
See this [github ticket](https://github.com/Wirecloud/wirecloud/issues/202)
for more details.
We have also updated the docker images, but if you continue to have problems
using them, please create a [GitHub ticket](https://github.com/Wirecloud/docker-wirecloud/issues)
or create a new question on StackOverflow with the details.
|
append page to existing pdf file using python (and matplotlib?)
Question: I would like to append pages to an existing pdf file.
Currently, I am using matplotlib's PdfPages. However, once the file is closed,
saving another figure into it overwrites the existing file rather than
appending.
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt

class plotClass(object):

    def __init__(self):
        self.PdfFile = 'c:/test.pdf'
        self.foo1()
        self.foo2()

    def foo1(self):
        plt.bar(1, 1)
        pdf = PdfPages(self.PdfFile)
        pdf.savefig()
        pdf.close()

    def foo2(self):
        plt.bar(1, 2)
        pdf = PdfPages(self.PdfFile)
        pdf.savefig()
        pdf.close()

test = plotClass()
I know appending is possible via multiple calls to pdf.savefig() before
calling pdf.close(), but I would like to append to a pdf that has already been
closed.
Alternatives to matplotlib would be appreciated also.
Answer: You may want to use [pyPdf](https://github.com/mstamy2/PyPDF2/) for this.
# Merge two PDFs
from pyPdf import PdfFileReader, PdfFileWriter
output = PdfFileWriter()
pdfOne = PdfFileReader(file( "some\path\to\a\PDf", "rb"))
pdfTwo = PdfFileReader(file("some\other\path\to\a\PDf", "rb"))
output.addPage(pdfOne.getPage(0))
output.addPage(pdfTwo.getPage(0))
outputStream = file(r"output.pdf", "wb")
output.write(outputStream)
outputStream.close()
[example taken from here](http://www.blog.pythonlibrary.org/2010/05/15/manipulating-pdfs-with-python-and-pypdf/)
Thereby you detach the plotting from the pdf-merging.
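Putting the two together for the original goal (appending a new figure to a PDF that has already been closed), a hedged sketch using PyPDF2, the maintained fork of pyPdf (the file names are arbitrary placeholders):

import os
from PyPDF2 import PdfFileMerger
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt

def append_current_figure(pdf_path, tmp_path='_new_page.pdf'):
    with PdfPages(tmp_path) as pdf:  # save only the current figure
        pdf.savefig()
    merger = PdfFileMerger()
    merger.append(pdf_path)          # pages of the existing, closed PDF
    merger.append(tmp_path)          # the freshly plotted page
    merger.write('merged.pdf')       # write to a new file, then swap names as needed
    merger.close()
    os.remove(tmp_path)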
|
Passing python list oracle where clause cx_Oracle
Question: # -_\- coding: utf-8 -_ -
from __future__ import unicode_literals
'''Hi everybody, I'm frustrated trying to pass oracle where clause a python
list , I'm using cx_Oralce here my code:''' import cx_Oracle
con = cx_Oracle.connect(str('user/passwordr@server/orcl'))
cursor = con.cursor()
ids = [19 , 87 , 84]
cursor.execute(str("select employee_id , first_name , last_name from employees where employee_id in ('"+ids+"')" ))
people = cursor.fetchall()
print people
'''The following code works for me , but the problem is the string formater placeholer is not gonna be static is dynamic.'''
params = (198 , 199)
cursor.execute(str("select employee_id , first_name , last_name from employees where employee_id in ('%s' , '%s')" %(params)))
'''Also it would be valid if i can create dynamically the string formater placeholder depending on "length of something".
Sorry if this question was answered i spend hours searching the solution , but i do not found it.'''
Answer: After hours trying to figure out how to do it, I finally got the solution.
Here is the code:

# -*- coding: utf-8 -*-
#from __future__ import unicode_literals
import cx_Oracle

con = cx_Oracle.connect(str('user/pass@server/orcl'))
cursor = con.cursor()
cursor.execute(str('select employee_id from employees where rownum < 3 '))
desc = [d[0] for d in cursor.description]
result = [dict(zip(desc, line)) for line in cursor]
ids = []
for i in range(len(result)):
    ids.append(result[i]['EMPLOYEE_ID'])
placeholders = ','.join(":x%d" % i for i, _ in enumerate(ids))
sql = """SELECT job_id
         FROM job_history
         WHERE employee_id IN (%s)""" % placeholders
cursor.execute(sql, ids)
rs = cursor.fetchall()
print rs
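Stripped down to just the technique: build one bind-variable placeholder per list element, then pass the list itself as the parameters (a trimmed sketch of the code above, reusing its connection and the question's original ids):

ids = [19, 87, 84]
placeholders = ','.join(':x%d' % i for i in range(len(ids)))
sql = ('select employee_id, first_name, last_name '
       'from employees where employee_id in (%s)' % placeholders)
cursor.execute(sql, ids)  # cx_Oracle binds the list values positionally
print cursor.fetchall()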
|
crawling web data using python html error
Question: I want to crawl data using Python. I tried again and again but it didn't work,
and I can't find the error in my code. I wrote code like this:
import re
import requests
from bs4 import BeautifulSoup
url='http://news.naver.com/main/ranking/read.nhn?mid=etc&sid1=111&rankingType=popular_week&oid=277&aid=0003773756&date=20160622&type=1&rankingSectionId=102&rankingSeq=1'
html=requests.get(url)
#print(html.text)
a=html.text
bs=BeautifulSoup(a,'html.parser')
print(bs)
print(bs.find('span',attrs={"class" : "u_cbox_contents"}))
I want to crawl the reply (comment) data in the news page.
[![enter image description
here](http://i.stack.imgur.com/9hzGp.png)](http://i.stack.imgur.com/9hzGp.png)
As you can see, I tried searching for this:
span, class="u_cbox_contents" in bs
but Python only says "None":
> None
So I checked bs using print(bs) and looked through the variable's contents,
but there is no span with class="u_cbox_contents".
Why is this happening? I really don't know why. Please help me, and thanks for
reading.
Answer: Requests will fetch the URL's contents, but will not execute any JavaScript.
I performed the same fetch with cURL, and I can't find any occurrence of
`u_cbox_contents` in the HTML code. Most likely, it's injected using
JavaScript, which explains why BeautifulSoup can't find it.
If you need the page's code as it would be rendered in a "normal" browser, you
could try [Selenium](http://docs.seleniumhq.org). Also have a look at
[this](http://stackoverflow.com/questions/1916711/programmatic-python-browser-with-javascript)
SO question.
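A hedged Selenium sketch (it assumes Firefox and its driver are installed, reuses the question's url and selector, and may still need an explicit wait for the comment widget to finish loading):

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get(url)  # the JavaScript now runs in a real browser
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find('span', attrs={'class': 'u_cbox_contents'}))
driver.quit()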
|
Python: reflect positions in a 2D grid graph
Question: In a 2D graph with 10x10 nodes, I realized I want the nodes to be labelled
starting from the upper left corner, downwards and column-wise:
1st column -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
2nd column -> [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
and so forth until I reach the 10th column. Instead, the code I have labels
them starting from the lower left corner, upwards and column-wise. **I guess
the culprit is in the `pos2` line, but I don't know how to change it.** I've
tried to `reverse` the `inds` and `vals` lists, but the result was a reflection
of the graph with respect to the y or vertical axis. Instead, I am looking for
a reflection with respect to the horizontal axis.
import networkx as nx
from pylab import *
import matplotlib.pyplot as plt
%pylab inline
#n=100 Number of nodes
ncols=10 #Number of columns in a 10x10 grid of positions
N=10 #Nodes per side
G=nx.grid_2d_graph(N,N)
labels = dict( ((i,j), i + (N-1-j) * N ) for i, j in G.nodes() )
nx.relabel_nodes(G,labels,False)
inds=labels.keys()
vals=labels.values()
inds=sorted(inds,reverse=False)
vals=sorted(vals, reverse=False)
pos2=dict(zip(vals,inds))
nx.draw_networkx(G, pos=pos2, with_labels=True, node_size = 250, node_color='lightblue')
plt.axis('off')
plt.show()
[![enter image description
here](http://i.stack.imgur.com/fIFQd.png)](http://i.stack.imgur.com/fIFQd.png)
Answer: You could just change the labels when you draw the graph like this
import networkx as nx
import matplotlib.pyplot as plt
#n=100 Number of nodes
ncols=10 #Number of columns in a 10x10 grid of positions
N=10 #Nodes per side
G=nx.grid_2d_graph(N,N)
pos = dict(zip(G.nodes(),G.nodes()))
ordering = [(y,N-1-x) for y in range(N) for x in range(N)]
labels = dict(zip(ordering, range(len(ordering))))
nx.draw_networkx(G, pos=pos, with_labels=False, node_size = 250, node_color='lightblue')
nx.draw_networkx_labels(G, pos=pos, labels=labels)
plt.axis('off')
plt.show()
|
SQL Server Error converting data type nvarchar to date python
Question: I am getting an error while calling a SQL Server stored procedure in python.
> Error converting data type nvarchar to date.
My code is as below.
from datetime import datetime
OnlyDate=datetime.now().date()
# I get the date in OnlyDate in this format: 2016-06-30
self.con.execute("exec dbo.ScrapeStatistics_SP @Op='6',@EndTime=now,@Site='testing',@ScrapeType='Category',@Date=OnlyDate")
self.con.commit()
Answer: `@Date=OnlyDate`
You are trying to insert the string `OnlyDate` into the `Date` column,
therefore ending up with an error that nvarchar (`"OnlyDate"`) can't be
converted to date.
You should use a parameterized query:
query = "exec dbo.ScrapeStatistics_SP @Op='6',@EndTime=now,@Site='testing',
@ScrapeType='Category',@Date=?"
self.con.execute(query , (OnlyDate,)) # note the comma! this needs to be a tuple
|
Heroku migrate app from cedar-10 to cedar-14
Question: I am having an issue pushing my changes after I followed the steps to upgrade
my Heroku instance from cedar-10 to cedar-14. Although it works if I create a
new app and push the existing code, it doesn't work on the production app.
> Error
-----> Python app detected
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha512
Traceback (most recent call last):
  File "/app/.heroku/python/bin/pip", line 9, in <module>
    load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 343, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 2309, in load_entry_point
    return ep.load()
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 2015, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/__init__.py", line 10, in <module>
    from pip.util import get_installed_distributions, get_prog
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 18, in <module>
    from pip._vendor.distlib import version
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/version.py", line 14, in <module>
    from .compat import string_types
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/compat.py", line 31, in <module>
    from urllib2 import (Request, urlopen, URLError, HTTPError,
ImportError: cannot import name HTTPSHandler
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha512
Traceback (most recent call last):
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip-diff", line 15, in <module>
    from pip.req import parse_requirements
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/__init__.py", line 15, in <module>
    from pip.vcs import git, mercurial, subversion, bazaar  # noqa
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/vcs/mercurial.py", line 10, in <module>
    from pip.download import path_to_url
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/download.py", line 38, in <module>
    from pip._vendor import requests, six
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/utils.py", line 26, in <module>
    from .compat import parse_http_list as parse_list_header
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/compat.py", line 7, in <module>
    from .packages import chardet
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/__init__.py", line 3, in <module>
    from . import urllib3
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/urllib3/__init__.py", line 10, in <module>
    from .connectionpool import (
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 31, in <module>
    from .connection import (
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/urllib3/connection.py", line 45, in <module>
    from .util.ssl_ import (
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/urllib3/util/__init__.py", line 5, in <module>
    from .ssl_ import (
  File "/app/tmp/buildpacks/python/vendor/pip-pop/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 2, in <module>
    from hashlib import md5, sha1, sha256
ImportError: cannot import name md5
$ pip install -r requirements.txt
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/app/.heroku/python/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha512
Traceback (most recent call last):
  File "/app/.heroku/python/bin/pip", line 9, in <module>
    load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 343, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 2309, in load_entry_point
    return ep.load()
  File "/app/.heroku/python/lib/python2.7/site-packages/distribute-0.6.36-py2.7.egg/pkg_resources.py", line 2015, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/__init__.py", line 10, in <module>
    from pip.util import get_installed_distributions, get_prog
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 18, in <module>
    from pip._vendor.distlib import version
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/version.py", line 14, in <module>
    from .compat import string_types
  File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/compat.py", line 31, in <module>
    from urllib2 import (Request, urlopen, URLError, HTTPError,
ImportError: cannot import name HTTPSHandler
! Push rejected, failed to compile Python app
* * *
Any idea what it could be?
Answer: Heroku support told me:
> This is not exactly a bug; moving between cedar-10 and cedar-14 is an entire
> operating system upgrade, and creating a disk image with complete parity is
> nearly impossible. You can purge the build cache on your own with the
> `repo:purge_cache` command via the [heroku-repo
> plugin](https://github.com/heroku/heroku-repo).
Following those steps worked for me:
$ heroku plugins:install heroku-repo
$ heroku repo:purge_cache
|
Connecting to the Tor network with Python without getting "Proxy server is refusing connection"
Question: I've been trying to use Tor via Python, only to come across the "Proxy server
is refusing connection" error.
I've tried this method using the Stem library:
<http://www.thedurkweb.com/automated-anonymous-interactions-with-websites-using-python-and-tor/>
Any help with fixing this error?
Here is the code:
import stem.process
from stem import Signal
from stem.control import Controller
from splinter import Browser

proxyIP = "127.0.0.1"
proxyPort = 9150

proxy_settings = {"network.proxy.type": 1,
                  "network.proxy.ssl": proxyIP,
                  "network.proxy.ssl_port": proxyPort,
                  "network.proxy.socks": proxyIP,
                  "network.proxy.socks_port": proxyPort,
                  "network.proxy.socks_remote_dns": True,
                  "network.proxy.ftp": proxyIP,
                  "network.proxy.ftp_port": proxyPort
                  }

browser = Browser('firefox', profile_preferences=proxy_settings)

def interactWithSite(browser):
    browser.visit("http://dogdogfish.com/python-2/generating-b2b-sales-data-in-python/")
    browser.fill("comment", "But the thing is... Why would anyone ever want to do this? I must have thought that times...")
    browser.fill("author", "Pebblor El Munchy")
    browser.fill("email", "barack@tehwhitehouz.gov")
    browser.fill("url", "https://upload.wikimedia.org/wikipedia/en/1/16/Drevil_million_dollars.jpg")
    button = browser.find_by_name("submit")
    button.click()

interactWithSite(browser)
Answer: I deleted the SSL and FTP proxy and port settings and it worked. I also used
port 9150.
Here is the working code:
import stem.process
from stem import Signal
from stem.control import Controller
from splinter import Browser

proxyIP = "127.0.0.1"
proxyPort = 9150

proxy_settings = {"network.proxy.type": 1,
                  "network.proxy.socks": proxyIP,
                  "network.proxy.socks_port": proxyPort,
                  "network.proxy.socks_remote_dns": True,
                  }

browser = Browser('firefox', profile_preferences=proxy_settings)

def interactWithSite(browser):
    browser.visit("http://dogdogfish.com/python-2/generating-b2b-sales-data-in-python/")
    browser.fill("comment", "But the thing is... Why would anyone ever want to do this? I must have thought that times...")
    browser.fill("author", "Pebblor El Munchy")
    browser.fill("email", "barack@tehwhitehouz.gov")
    browser.fill("url", "https://upload.wikimedia.org/wikipedia/en/1/16/Drevil_million_dollars.jpg")
    button = browser.find_by_name("submit")
    button.click()

interactWithSite(browser)
|
Python - Using Pandas to eliminated curly brackets and output floats
Question: I have this large CSV data set, which essentially has x and y values in each
column.
"{733.15, 179.5}",
"{565.5, 642.5}",
"{172.5, 375.5}",
"{223.5, 554.5}",....
....,
"{213.5, 666.5}",
"{851.5, 323.5}",
"{498.5, 638.5}",
"{763.5, 102.5}"
or by a table,
[![enter image description
here](http://i.stack.imgur.com/b0rsh.png)](http://i.stack.imgur.com/b0rsh.png)
A column is essentially this set and I can call each pair by indexing.
import numpy as np
import pandas as pd
import csv
brown = pd.read_csv('BrownM.csv',delimiter=',', header=None)
print brown[0]
this essentially calls the row above
print brown[0][0]
returns `{733.15, 179.5}`
but when wanting to select a value in this set,
print brown[0][0][1]
returns `7`
It's treating this data set as strings, when I want it to return floats when
called upon.
Also, is there a way to parse the file so that the curly brackets are
eliminated?
Answer: Or you can `extract` then `split`.
df.col1.str.extract(r'{(.*)}', expand=False).str.split(', ', expand=True)
* * *
### Timing
MaxU's solution is quicker, as it does everything in one step as opposed to mine,
which takes two steps.
[![enter image description
here](http://i.stack.imgur.com/Ekenu.png)](http://i.stack.imgur.com/Ekenu.png)
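Putting it together for the question's file, a minimal sketch (assuming the single-column layout from the `read_csv` call above):

import pandas as pd

brown = pd.read_csv('BrownM.csv', header=None)
xy = brown[0].str.extract(r'{(.*)}', expand=False).str.split(', ', expand=True).astype(float)
print xy[0][0]  # 733.15, now a float rather than a character of a string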
|
nvcc fatal : Value 'sm_61' is not defined for option 'gpu-architecture' error with theano
Question: I was setting up Python and Theano for use with the GPU on Ubuntu 14.04 with a
GeForce GTX 1080. I had already installed the NVIDIA driver (367.27) and CUDA
toolkit (7.5) successfully on the system, but on testing Theano's GPU
implementation I get the above error (for example, when importing theano with
the GPU enabled). I have tried to look for possible solutions but didn't
succeed. I'm a little new to Ubuntu and GPU programming, so I would appreciate
any insight into how I can solve this problem. Thanks
Answer: As Robert Crovella said, SM 6.1 (sm_61) is only supported in CUDA 8.0 and
above, and thus you should download CUDA 8.0 Release Candidate from
<https://developer.nvidia.com/cuda-toolkit>
Ubuntu 14.04 is supported, and the instructions on the website on how to setup
should be straightforward (copy and paste lines to the console).
I would also recommend downloading CUDA 8.0 when it comes out, since the RC is
not the final version.
|
I can't import sklearn
Question: I try to import scikit-learn, but there is an error. I installed sklearn and
scipy with Anaconda. I am using Windows 10 and Python 3.5.
>>> import sklearn
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import sklearn
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python35-32\lib\site-packages\sklearn\__init__.py", line 57, in <module>
from .base import clone
File "C:\Users\lenovo\AppData\Local\Programs\Python\Python35-32\lib\site-packages\sklearn\base.py", line 9, in <module>
from scipy import sparse
ImportError: No module named 'scipy'
Answer: On Linux there is `pip install <module>` to install a module, and if you are
using Anaconda then `conda install <module>`; I believe there is something
similar on Windows.
If you are sure that you have installed the `scipy` module, then probably the
Python path is not looking in those directories.
You can try the [environment variable `PYTHONPATH`, which holds a list of
directories to prepend before launching the Python prompt](http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath),
or you can test it for a session by [adding it to
`sys.path`](http://stackoverflow.com/questions/12257747/adding-a-file-path-to-sys-path-in-python).
|
Use both matplotlib inline and qt in jupyter notebook
Question: I am using Jupyter (with IPython) to analyze research data, as well as export
figures. I really like the notebook approach offered by Jupyter: when I
revisit an experiment after a long time, I can easily see how the figures
correspond to the data. This is of course using the inline backend.
However, when I want to explore new data, I prefer to use the QT backend. It
is faster than the inline one, and allows to easily scale, zoom in and out,
and nicely displays the X and Y coordinates in the bottom left corner.
Moreover, I can use the QT backend to determine good x and y limits to use in
the inline backend.
I have tried using the `%matplotlib notebook` magic, but it is simply too
slow. For some experiments I am plotting ~500 spectra (each consists of ~1000
data points), which is already slow in the inline backend. Even with less data
points, the notebook backend is just too slow to use.
Therefore, I would like to use both the QT backend, and the inline backend
whenever I plot something. (So, whenever I execute a cell which plots data, it
should both display the inline image, and pop up a QT backend window). This
way, I still have a nice overview of plots in my notebook, while also allowing
me to easily explore my data. Is there a way to achieve this?
Answer: This allows you to run QtConsole, plotting with the plotChirp function below,
both inline and through the QtConsole.
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
..
def plotChirp(Type, Exp, Rand):
# Original chirp function from:
# http://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline
x = np.linspace(0, 3*np.pi, Rand)
plt.plot(x, np.sin(x**int(Exp)))
plt.title('A simple chirp ' + Type)
plt.show()
..
plotChirp("A", 5, 200) # Plots inline if you choose
[![enter image description
here](http://i.stack.imgur.com/9A4Xa.jpg)](http://i.stack.imgur.com/9A4Xa.jpg)
%connect_info # For your own connection
%qtconsole
QtConsole opens and now you can call your function to plot externally..
[![enter image description
here](http://i.stack.imgur.com/73Zr4.jpg)](http://i.stack.imgur.com/73Zr4.jpg)
Using `%matplotlib qt` allows for printing in a loop, but unfortunately it
seems to overlap the plots. Looking into subplots as a possible solution.
%matplotlib qt
for i in range(0,2):
if i == 0:
plotChirp("B",1, 400)
else:
plotChirp("c",6, 1000)
[![enter image description
here](http://i.stack.imgur.com/2cgAG.jpg)](http://i.stack.imgur.com/2cgAG.jpg)
|
Defining a complex-valued, piecewise function
Question: I'm trying to define a function f(x) which yields 1.0 for x = 0 and 1.0/(2j pi
x) otherwise. Here is a 'test' script I'm using:
import numpy as np
def f(x):
return np.piecewise(x,[x==0],[1.0, lambda x: 1.0/(2j*np.pi*x)])
x = np.linspace(-1,1,21)
print(x)
print(f(x))
If the lambda function is not complex-valued, this works as I would expect.
However, with the "2j" term I get the following warning and output:
/usr/local/lib/python2.7/dist-packages/numpy/lib/function_base.py:1144: ComplexWarning: Casting complex values to real discards the imaginary part
y[condlist[k]] = item(vals, *args, **kw)
[-1. -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0. 0.1 0.2 0.3 0.4
0.5 0.6 0.7 0.8 0.9 1. ]
[-0. -0. -0. -0. -0. -0. -0. -0. -0. -0. 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0.]
Apparently numpy's `piecewise` automatically takes the real part. Is this not
quite a limitation of `piecewise`? Is there another way to define piecewise
complex-valued functions (without having to define the real and imaginary
parts separately)?
Answer: From the
[`piecewise`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.piecewise.html)
docstring:
The output is the same shape and type as x...
For complex output, pass in a complex array.
Continuing your example with `f` and `x` defined:
In [91]: y = x + 0j
In [92]: f(y)
Out[92]:
array([-0.+0.15915494j, -0.+0.17683883j, -0.+0.19894368j, -0.+0.2273642j ,
-0.+0.26525824j, -0.+0.31830989j, -0.+0.39788736j, -0.+0.53051648j,
-0.+0.79577472j, -0.+1.59154943j, 1.+0.j , 0.-1.59154943j,
0.-0.79577472j, 0.-0.53051648j, 0.-0.39788736j, 0.-0.31830989j,
0.-0.26525824j, 0.-0.2273642j , 0.-0.19894368j, 0.-0.17683883j,
0.-0.15915494j])
|
Python dispy - keep package names for dependencies
Question: Is there a way to keep the package names for python modules that are
transmitted via dispy's depends feature? That would allow using
packages/modules in the same way when called with and without a dispy context.
Simple Example:
Module mypackage.dispytestDepends:
def myFun():
return "Foo"
The main module as I'd like to write it (doesn't work):
def dependsFunTask(): #works only when called without dispy
import mypackage.dispytestDepends
ret = mypackage.dispytestDepends.myFun()
return ret
import dispy
if __name__ == '__main__':
cluster = dispy.JobCluster(dependsFunTask,depends = mypackage.dispytestDepends) # doesn't work
job = cluster.submit()
output = job()
print output ### output is None
mypackage.dependsFunTask() # works
Working, yet ugly version, since I have lots of code that would have to be
rewritten:
def dependsFunTask(): #only works when called through dispy
import dispytestDepends
ret = dispytestDepends.myFun()
return ret
import dispy
if __name__ == '__main__':
cluster = dispy.JobCluster(dependsFunTask,depends = mypackage.dispytestDepends) # works
job = cluster.submit()
output = job()
print output ### output is "Foo"
dependsFunTask() # doesn't work
Answer: This should work now with the current version of dispy. See also
<https://github.com/pgiri/dispy/issues/43>.
|
In Python, is it possible to expose modules from subpackages at package level?
Question: I have the following conundrum. I'm trying to expose some modules from a
subpackage of a package at the parent package level.
The folder structure is the essentially like this:
script.py
package/
__init__.py
module1.py
subpackage/
__init__.py
submodule1.py
submodule2.py
In the `script.py` file I currently have to write
from package.subpackage.submodule1 import foo
if I want to import something from the `submodule1.py` file, but I would like
to be able to expose the files `submodule1.py` and `submodule2.py` at package
level, so that all my imports can look like
from package.module1 import bar
from package.submodule1 import foo
from package.submodule2 import goo
Note that I don't want to expose `bar`, `foo` and `goo` at `package` level,
i.e. **not**
from package import bar
from package import foo
because the separation between modules is still important in my case.
Is this even possible? Is there a trick in the `__init__.py` file to do so?
Thanks!
Answer: For your purpose, python modules are just namespaces. That is, everything in
the globals of a module can be imported and used as `module.thingy`. You may
have noticed that in many modules, you also find some builtins. For example,
`logging.os` is just the regular `os` module.
So, in your package (`package/__init__.py`) import whatever you wish, and bind
it to the name you want to expose it as.
# package/__init__.py
# import package.module1 as module1 # one *can* do this, but it's at best without benefit (see comments)
import package.subpackage.submodule1 as submodule1
import package.subpackage.submodule2 as submodule2
This allows you to do `import package.submodule1` and `import package.submodule1
as foo`.
Note that this simple way will _not_ allow you to do `from package.submodule1
import bar`. For this, you need an actual dummy module.
# package/submodule1/__init__.py
from package.subpackage.submodule1 import *
|
Python 3.5 - configure a single cell to expand instead of the entire row or column
Question: Picture a 4x4 grid in a tkinter window. I want to expand the cell at row 2,
column 2, but not everything else in row 2 or column 2. I'm designing a text
window with selectable options on the left side in rows 1-15. Giving row 2
weight 1 and column 2 weight 1 allows my Text widget to expand, but
so does everything else in row 2 and column 2. Any way around this?
from tkinter import *
root = Tk()
lbl1 = Label(root, text="label1")
lbl1.grid(row=0, column=1)
lbl2 = Label(root, text="label2")
lbl2.grid(row=1, column=0)
lbl3 = Label(root, text="label3")
lbl3.grid(row=3, column=0)
lbl4 = Label(root, text="label4")
lbl4.grid(row=5, column=0)
txt = Text(root, state='disabled', bg='#E8E8E8')
txt.grid(row=1, column=1, padx=10, pady=10, sticky="NSEW", columnspan=2, rowspan=2)
root.rowconfigure(2, weight=1)
root.columnconfigure(2, weight=1)
root.mainloop()
Example 2:
from tkinter import *
root = Tk()
frame1 = Frame(root)
frame1.grid(row=0, column=1)
frame2 = Frame(root)
frame2.grid(row=1, column=0)
frame3 = Frame(root)
frame3.grid(row=1, column=1, rowspan=2, columnspan=2)
lbl1 = Label(frame1, text="label1")
lbl2 = Label(frame2, text="label2")
lbl3 = Label(frame2, text="label3")
lbl4 = Label(frame2, text="label4")
lbl1.grid(row=0, column=1, sticky=N)
lbl2.grid(row=3, column=0, sticky=N)
lbl3.grid(row=5, column=0, sticky=N)
lbl4.grid(row=7, column=0, sticky=N)
txt = Text(frame3, state='disabled', bg='#E8E8E8')
txt.grid(row=0, column=0, padx=10, pady=10, sticky="NSEW", columnspan=2, rowspan=2)
root.rowconfigure(2, weight=1)
root.columnconfigure(2, weight=1)
frame3.rowconfigure(0, weight=1)
frame3.columnconfigure(0, weight=1)
root.mainloop()
Example 2 has everything in the position I want it in but the Text widget does
not expand. Is it possible to set a frame to expand when using grid?
Answer: Your question asks about a 4x4 grid, but your example shows only two columns.
That makes it hard to understand what you want. In the comments you say you
simply want the text area of the example to grow and shrink and all the labels
together, so that's what I'll address.
The simplest solution is to have an extra row and column to the right and
below the text area. Have the text widget span into those areas, and give
those areas a weight of 1. That means that, as the window changes size, any
extra space is allocated to areas not occupied by buttons.
pro tip: I find layout problems much easier to visualize and solve when all of
the layout code is together.
It would look something like this:
lbl1.grid(row=0, column=1)
lbl2.grid(row=1, column=0)
lbl3.grid(row=2, column=0)
lbl4.grid(row=3, column=0)
txt.grid(row=1, column=1, padx=10, pady=10, sticky="NSEW", columnspan=2, rowspan=4)
root.rowconfigure(4, weight=1)
root.columnconfigure(2, weight=1)
* * *
I think your layout problems might be better solved by using `pack` instead of
`grid` for part of the layout. For example, you might start with three areas:
a toolbar, a side panel, and then main area with the text widget:
toolbar = Frame(root, ...)
side = Frame(root, ...)
main = Frame(root, ...)
toolbar.pack(side="top", fill="x")
side.pack(side="left", fill="y")
main.pack(side="right", fill="both", expand=True)
With that you now have three relatively independent areas. You can use `pack`
or `grid` in each of these frames independently, making it much easier to keep
track of rows and columns.
|
Python Selenium WebDriver Loop - send_keys working and then not working
Question: I'm using Python 2.7, Chrome 47 (with the chrome driver), and Selenium 2.53.5.
I'm practicing by creating a bot to gather info from a cvs file and then use
that info to open up ebay, type in quantity, and purchase as guest. So far it
works - kinda. The script will run, open ebay to the specific item, clear the
quantity, type in new quantity, 'buy now', proceed as guest, then it chooses
the country, first/last name, etc. When the script finishes filling out the
personal info ending with confirming the email address, it goes onto the next
order and opens up the ebay link again, and repeats. The problem is when it
repeats, sometimes it doesn't work and will skip the first name or type it in
super quick then go blank resulting in a crash. Running the same script can
sometimes crash on the second iteration, and sometimes it will make it to the
tenth iteration, but it hasn't gotten past the 10th so far. Here's part of my
code (it always works the first time perfectly):
for o in orders:
if (o[1] == "Batteries"):
driver.get("http://www.ebay.com/itm/20-Piece-Combo-Pack-Duracell-Duralock-10-AA-and-10-AAA-Size-Batteries-EXP-2025/272003416650?_trksid=p2045573.c100505.m3226&_trkparms=aid%3D555014%26algo%3DPL.DEFAULT%26ao%3D1%26asc%3D36866%26meid%3D1ff8810857444e0aa548e1aecb205110%26pid%3D100505%26rk%3D1%26rkt%3D1%26")
wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'qtyInput')))
input_field = driver.find_element_by_name('quantity')
input_field.clear()
input_field.send_keys(o[0])
driver.find_element_by_id('binBtn_btn').click()
#GUESTCHECKOUT
wait.until(EC.presence_of_element_located((By.ID, 'gtChk')))
driver.find_element_by_id('gtChk').click()
driver.switch_to_frame(driver.find_element_by_tag_name("iframe"))
wait.until(EC.presence_of_element_located((By.ID, 'confirmEmail')))
t = list(pycountry.countries)
for country in t:
if(o[7] == country.alpha2):
o[7] = country.name
Select(driver.find_element_by_name('country')).select_by_visible_text(o[7])
driver.find_element_by_name('firstName').send_keys(name[0])
driver.find_element_by_name('lastName').send_keys(o[2])
driver.find_element_by_name('address1').send_keys(o[3])
driver.find_element_by_name('city').send_keys(o[4])
Select(driver.find_element_by_name('state')).select_by_value(o[6])
driver.find_element_by_name('zip').send_keys(o[5])
driver.find_element_by_name('dayphone1').send_keys("123")
driver.find_element_by_name('dayphone2').send_keys("456")
driver.find_element_by_name('dayphone3').send_keys("7890")
driver.find_element_by_name('email').send_keys("d@gmail.com")
driver.find_element_by_name('confirmEmail').send_keys("d@gmail.com")
Traceback (most recent call last):
File "autoBuyer.py", line 267, in <module>
driver.find_element_by_name('lastName').send_keys(name[1])
File "/Library/Python/2.7/site-packages/selenium-2.53.5- py2.7.egg/selenium/webdriver/remote/webelement.py", line 321, in send_keys
self._execute(Command.SEND_KEYS_TO_ELEMENT, {'value': keys_to_typing(value)})
File "/Library/Python/2.7/site-packages/selenium-2.53.5- py2.7.egg/selenium/webdriver/remote/webelement.py", line 456, in _execute
return self._parent.execute(command, params)
File "/Library/Python/2.7/site-packages/selenium-2.53.5- py2.7.egg/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/Library/Python/2.7/site-packages/selenium-2.53.5- py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=51.0.2704.106)
(Driver info: chromedriver=2.22.397929 (fb72fb249a903a0b1041ea71eb4c8b3fa0d9be5a),platform=Mac OS X 10.11.5 x86_64)
Answer: `StaleElementReferenceException`s generally indicate the the DOM refreshed
while you were interacting with the element. I commonly see them on highly
dynamic webpages.
If the calls similar to
`driver.find_element_by_name('dayphone1').send_keys("123")` are what seem to
be throwing the exception the most, I would create a function to handle that
specific type of call, and then decorate it with any of the `@retry` packages
out there.
I can't run your code as written, but here is a demonstration of the changes I
would make: (be sure to `pip install retry` first)
from retry import retry
from selenium.common.exceptions import StaleElementReferenceException
@retry(StaleElementReferenceException, tries=3)
def find_by_name_send_keys(driver, name_str, keys, clear=False):
elem = driver.find_element_by_name(name_str)
if clear:
elem.clear()
elem.send_keys(keys)
for o in orders:
if (o[1] == "Batteries"):
driver.get("http://www.ebay.com/itm/20-Piece-Combo-Pack-Duracell-Duralock-10-AA-and-10-AAA-Size-Batteries-EXP-2025/272003416650?_trksid=p2045573.c100505.m3226&_trkparms=aid%3D555014%26algo%3DPL.DEFAULT%26ao%3D1%26asc%3D36866%26meid%3D1ff8810857444e0aa548e1aecb205110%26pid%3D100505%26rk%3D1%26rkt%3D1%26")
wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'qtyInput')))
find_by_name_send_keys(driver, 'quantity', o[0], clear=True)
driver.find_element_by_id('binBtn_btn').click()
#GUESTCHECKOUT
wait.until(EC.presence_of_element_located((By.ID, 'gtChk')))
driver.find_element_by_id('gtChk').click()
driver.switch_to_frame(driver.find_element_by_tag_name("iframe"))
wait.until(EC.presence_of_element_located((By.ID, 'confirmEmail')))
t = list(pycountry.countries)
for country in t:
if(o[7] == country.alpha2):
o[7] = country.name
Select(driver.find_element_by_name('country')).select_by_visible_text(o[7])
find_by_name_send_keys(driver, 'firstName', name[0])
find_by_name_send_keys(driver, 'lastName', o[2])
find_by_name_send_keys(driver, 'address1', o[3])
find_by_name_send_keys(driver, 'city', o[4])
Select(driver.find_element_by_name('state')).select_by_value(o[6])
find_by_name_send_keys(driver, 'zip', o[5])
find_by_name_send_keys(driver, 'dayphone1', "123")
find_by_name_send_keys(driver, 'dayphone2', "456")
find_by_name_send_keys(driver, 'dayphone3', "7890")
find_by_name_send_keys(driver, 'email', "d@gmail.com")
find_by_name_send_keys(driver, 'confirmEmail', "d@gmail.com")
|
How to build a sparse matrix in PySpark?
Question: I am new to Spark. I would like to make a sparse matrix, specifically a
user-id by item-id matrix, for a recommendation engine. I know how I would do this in
Python. How does one do this in PySpark? Here is how I would have done it with
numpy. The table looks like this now.
Session ID| Item ID | Rating
1 2 1
1 3 5
import numpy as np
data=df[['session_id','item_id','rating']].values
data
rows, row_pos = np.unique(data[:, 0], return_inverse=True)
cols, col_pos = np.unique(data[:, 1], return_inverse=True)
pivot_table = np.zeros((len(rows), len(cols)), dtype=data.dtype)
pivot_table[row_pos, col_pos] = data[:, 2]
Answer: Like this:
>>> from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry
>>> table = sqlContext.createDataFrame(
... sc.parallelize([[1, 2, 1], [1, 3, 5]])
... )
>>> mat = CoordinateMatrix(table.rdd.map(lambda row: MatrixEntry(*row)))
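As a hedged follow-up, a few calls available on the resulting distributed matrix (the sizes below follow from the two sample entries, since dimensions default to the maximum index plus one):

>>> mat.numRows(), mat.numCols()
(2, 4)
>>> mat.entries.collect()
[MatrixEntry(1, 2, 1.0), MatrixEntry(1, 3, 5.0)]
>>> rowmat = mat.toRowMatrix()  # convert when a row-oriented view is needed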
|
How to let pytest rewrite assert in non-test modules
Question: We defined all our custom assertions in a separate python file which is not a
test module.
For example: `custom_asserts.py`
class CustomAsserts(object):
def silly_assert(self, foo, bar):
assert foo == bar , 'some error message'
If we use `assert` directly in tests, we will get extra info about the
AssertionError, which is very useful.
Output when using assert directly in tests:
> assert 'foo' == 'bar', 'some error message'
E AssertionError: some error message
E assert 'foo' == 'bar'
E - foo
E + bar
But we found that if we call the assertion method we defined in a separate
module, the extra info won't show.
from custom_asserts import CustomAsserts
asserts = CustomAsserts()
def test_silly():
asserts.silly_assert('foo', 'bar')
Output after running the test:
> assert 'foo' == 'bar', 'some error message'
E AssertionError: some error message
And we also found this in pytest docs: [Advanced assertion
introspection](http://pytest.org/latest/assert.html#advanced-assertion-
introspection)
> pytest only rewrites test modules directly discovered by its test collection
> process, so asserts in supporting modules which are not themselves test
> modules will not be rewritten.
So my question is, is there a way to let pytest do the same assert rewriting
to other modules just like test modules? Or is there any hacky way to achieve
that?
Answer: It's kind of weird to answer my own question, but I think I found the solution
and want to share.
The trick is in how pytest collects test modules. We can define `python_files`
in `pytest.ini` so that pytest will consider more modules as test modules.
For example, in my case, all my custom asserts module ends with 'asserts', so
my `pytest.ini` is:
[pytest]
python_files = *asserts.py test_*.py *_test.py
Another tricky thing is in `conftest.py`. It seems we have to avoid importing the
asserts module in `conftest.py`. My assumption is that the mechanism pytest
uses to rewrite asserts actually rewrites the `.pyc` file, and
since `conftest.py` is loaded before collection, importing the asserts
module there would generate the module's `.pyc` before collection, which may
make pytest unable to rewrite the `.pyc` file again.
So in my conftest.py, I have to do thing like:
@pytest.fixture(autouse=True)
def setup_asserts(request):
from custom_asserts import CustomAsserts
request.instance.asserts = CustomAsserts()
And I will get the extra error info just like when using the `assert` keyword
directly in a test script.
|
Using LLDB Commands in Python Script
Question: I'm writing a Python script to use in Xcode's LLDB. I have this simple script
up and running:
import lldb
def say_hello(debugger, command, result, dict):
print command
def __lldb_init_module (debugger, dict):
debugger.HandleCommand('command script add -f sayhello.say_hello hello')
What I'd like to do is be able to use the output of LLDB's
XCUIApplication().debugDescription function in the Python script. So is there
a way to either:
a) Access XCUIApplication() within the python script.
b) Pass the XCUIApplication().debugDescription as an input to the say_hello
function in the Python script.
Answer: IIRC XCUIApplication is a function provided by the XCTest framework, so it is
a function in the program you are debugging. So you would call it the same way
you call any other function, using the "EvaluateExpression" API either on
SBTarget or on SBFrame. The result of evaluating the expression will be
returned to you in an SBValue, and you can print that or whatever you need
with it.
Note, unless you need to support a very old Xcode (6.x) it is more convenient
to use the new form of the python command:
def command_function(debugger, command, exe_ctx, result, internal_dict):
The exe_ctx is the SBExecutionContext in which the command is running. If you
do it this way, then you can just do:
def command_function(debugger, command, exe_ctx, result, internal_dict):
options = lldb.SBExpressionOptions()
thread = exe_ctx.GetThread()
if thread.IsValid():
value = thread.GetFrameAtIndex(0).EvaluateExpression("XCUIApplication().debugDescription", options)
if value.GetError().Success():
# Do whatever you want with the result of the expression
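As one hedged possibility for that last step, you could append the value's textual description to the command's result object (GetObjectDescription, GetSummary, and AppendMessage are standard SB API methods):

description = value.GetObjectDescription() or value.GetSummary() or str(value)
result.AppendMessage(description)  # shows up as the command's output in lldb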
|
PySpark (Python 2.7): How to flatten values after reduce
Question: I'm reading a multiline-record file using SparkContext.newAPIHadoopFile with a
customized delimiter. Anyway, I have already prepared and reduced my data. But now I
want to add the key to every line (entry) again and then write it to an Apache
Parquet file, which is then stored in HDFS.
This figure should explain my problem. What I'm looking for is the red arrow,
i.e. the last transformation before writing the file. Any idea? I tried
flatMap, but then the timestamp and float value ended up in different records.
[![PySpark
chain](http://i.stack.imgur.com/Vz9dp.png)](http://i.stack.imgur.com/Vz9dp.png)
The Python-Script can be [downloaded here](http://matthias-
heise.eu/stackoverflow/sample.py) and the sample [text file
here](http://matthias-heise.eu/stackoverflow/sample.txt). I'm using the
Python-Code within a Jupyter Notebook.
Answer: Simple list comprehension should be more than enough:
from datetime import datetime
def flatten(kvs):
"""
>>> kvs = ("852-YF-008", [
... (datetime(2016, 5, 10, 0, 0), 0.0),
... (datetime(2016, 5, 9, 23, 59), 0.0)])
>>> flat = flatten(kvs)
>>> len(flat)
2
>>> flat[0]
('852-YF-008', datetime.datetime(2016, 5, 10, 0, 0), 0.0)
"""
k, vs = kvs
return [(k, v1, v2) for v1, v2 in vs]
In Python 2.7 you could also use a lambda expression with tuple argument
unpacking, but this is not portable and generally discouraged:
lambda (k, vs): [(k, v1, v2) for v1, v2 in vs]
Version independent:
lambda kvs: [(kvs[0], v1, v2) for v1, v2 in kvs[1]]
**Edit** :
If all you need is writing partitioned data then convert to Parquet directly
without `reduceByKey`:
(sheet
.flatMap(process)
.map(lambda x: (x[0], ) + x[1])
.toDF(["key", "datettime", "value"])
.write
.partitionBy("key")
.parquet(output_path))
|
must be string or read-only buffer, not long
Question: I am making a website using flask and MySQLdb; below is my python file
**init.py**
from flask import Flask,render_template,request,url_for,flash,session
from flask_session import Session
from dbconnect import connection
from wtforms import Form, BooleanField, StringField, TextField, PasswordField, validators, IntegerField
from passlib.hash import sha256_crypt
from MySQLdb import escape_string as thwart
from flask_wtf import Form
from wtforms.validators import InputRequired
import gc
sess=Session()
SESSION_TYPE = 'memcache'
app=Flask(__name__)
@app.route('/')
def test():
return render_template("home.html")
@app.route('/about/')
def about():
return render_template("about.html")
@app.route('/dashbord/')
def dashbord():
return('hello')
@app.route('/contact/')
def contact():
return render_template("contact.html")
@app.route('/login/',methods=['GET','POST'])
def login():
return render_template("login.html")
class RegistrationForm(Form):
username=TextField('username',[validators.Length(min=4,max=20),validators.Required()])
email=TextField('email',[validators.Length(min=6,max=50),validators.Required()])
password=PasswordField('password',[validators.EqualTo('confirm',message="Password must match"),validators.Required()])
confirm=PasswordField("repeat password")
phone_no=IntegerField('phone_no',[validators.Required()])
@app.route("/sign_up",methods=['GET','POST'])
def sign():
try:
form=RegistrationForm(request.form)
if request.method == 'POST':
username=form.username.data
email=form.email.data
password=sha256_crypt.encrypt((str(form.password.data)))
phone_no=form.phone_no.data
c,conn =connection()
x = c.execute("SELECT * FROM customer WHERE username =(%s)",
(username,))
if int(x)>0:
flash("That username is taken")
return render_template('sign.html',form=form)
else:
args="INSERT INTO customer (username,email,password,phone_no) VALUES (%s,%s,%s,%s)",
(thwart(username),thwart(email),thwart(password),thwart(phone_no))
c.execute(*args)
conn.commit()
flash("Thanks for registering")
c.close()
conn.close()
gc.collect()
session['logged_in']=True
session['username']=username
return redirect(url_for('dashbord'))
return render_template("sign.html",form=form)
except Exception as e:
return(str(e))
if __name__=="__main__":
app.secret_key = 'super secret key'
app.config['SESSION_TYPE'] = 'filesystem'
sess.init_app(app)
app.run(debug=True)
I am getting the following error after clicking the submit button
> Must be string or read-only buffer, not long
Answer: Your `cursor.execute` (in your case `c.execute`) does not look correct to me:
the line that builds `args` has an extra comma in it. Please correct me if I am
wrong.
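For what it's worth, here is a minimal sketch of that insert with the query and its parameters passed as two separate arguments to `c.execute`. The `str(phone_no)` conversion is an assumption based on the "must be string or read-only buffer, not long" message, since `escape_string` only accepts strings:

query = ("INSERT INTO customer (username, email, password, phone_no) "
         "VALUES (%s, %s, %s, %s)")
params = (thwart(username), thwart(email), thwart(password), thwart(str(phone_no)))
c.execute(query, params)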
|
python read date and time from csv
Question: My data looks like this:
GIdx,Date,num,Time
1,11/28/2012,20,10:05:50
1,11/28/2012,20,10:05:50
2,11/28/2012,20,10:09:24
2,11/28/2012,20,10:09:24
2,11/28/2012,20,10:09:25
2,11/28/2012,20,10:09:25
2,11/28/2012,20,10:09:26
3,11/28/2012,20,10:09:34
3,11/28/2012,20,10:09:34
I try to read column **Date** as `datetime` and column **Time** as `time`, but
when I check their type I get `Series`:
type(df['Date'])
class pandas.core.series.Series
type(df_original['Time'])
class pandas.core.series.Series
I did something like:
df=pd.read_csv(filename,sep=",", header = 0, na_values=['NA'])
Answer: You can pass [`read_csv`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.read_csv.html) the parameter `parse_dates` with
the columns that contain `dates` and `times`:
import pandas as pd
import io
temp=u"""GIdx,Date,num,Time
1,11/28/2012,20,10:05:50
1,11/28/2012,20,10:05:50
2,11/28/2012,20,10:09:24
2,11/28/2012,20,10:09:24
2,11/28/2012,20,10:09:25
2,11/28/2012,20,10:09:25
2,11/28/2012,20,10:09:26
3,11/28/2012,20,10:09:34
3,11/28/2012,20,10:09:34"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), parse_dates=[['Date','Time']])
print (df)
Date_Time GIdx num
0 2012-11-28 10:05:50 1 20
1 2012-11-28 10:05:50 1 20
2 2012-11-28 10:09:24 2 20
3 2012-11-28 10:09:24 2 20
4 2012-11-28 10:09:25 2 20
5 2012-11-28 10:09:25 2 20
6 2012-11-28 10:09:26 2 20
7 2012-11-28 10:09:34 3 20
8 2012-11-28 10:09:34 3 20
print (df.dtypes)
Date_Time datetime64[ns]
GIdx int64
num int64
dtype: object
You can omit the parameters `sep=","`, `header = 0` and `na_values=['NA']`,
because they are the defaults:
df=pd.read_csv(filename,sep=",", header = 0, na_values=['NA'])
df=pd.read_csv(filename)
|
Ctypes - Passing a Void Pointer from Python
Question: I am accessing a C++ DLL using Python Ctypes on Windows 7. I have the
documentation for the DLL, but I can't actually open it. I'm trying to use a
C++ function that takes in a function, which in turn takes in an unsigned int
and a void pointer. Here is a short code sample that fails:
import ctypes
import os
root = os.path.dirname(__file__)
lib = ctypes.WinDLL(os.path.join(root, 'x86', 'toupcam.dll')) #works
cam = lib.Toupcam_Open(None) #works
def f(event, ctx): #Python version of function to pass in
pass
#converting Python function to C function:
#CFUNCTYPE params: return type, parameter types
func = ctypes.CFUNCTYPE(None, ctypes.c_uint, ctypes.c_void_p)(f)
res = lib.Toupcam_StartPullModeWithCallback(cam, func) #fails
Whenever I run this code I get this error on the last line:
OSError: exception: access violation writing 0x002CF330.
I don't really know how to approach this issue, since it's a C++ error not a
Python error. I think it has to do with my void pointer, since similar errors
I found online for C++ were pointer-related. Is there something wrong with the
Ctypes void pointer, or am I doing something wrong?
Answer: You need to declare the argument types of the functions you call using
`argtypes`. Since I don't know your exact API, here's an example:
Windows C DLL code with a callback:
typedef void (*CB)(int a);
__declspec(dllexport) void do_callback(CB func)
{
int i;
for(i=0;i<10;++i)
func(i);
}
Python code:
from ctypes import *
# You can use as a Python decorator.
@CFUNCTYPE(None,c_int)
def callback(a):
print(a)
# Use CDLL for __cdecl calling convention...WinDLL for __stdcall.
do_callback = CDLL('test').do_callback
do_callback.restype = None
do_callback.argtypes = [CFUNCTYPE(None,c_int)]
do_callback(callback)
Output:
0
1
2
3
4
5
6
7
8
9
|
How to solve error raised : No module named 'django.contrib.customuser'
Question: I am new to Django, trying to create a custom user for my project. When I am
running the server, it raises "No module named 'django.contrib.customuser'" and
sometimes "Manager isn't available; auth.User has been swapped for
Mysite.CustomUser". I even changed my settings from django.contrib.auth to
django.contrib.customuser. Please help me solve this. Here's my
code:
### models.py:
from datetime import datetime
from django.db import models
from django.contrib.auth.models import User, BaseUserManager, AbstractUser, AbstractBaseUser
from django.utils.translation import ugettext_lazy as _
class CustomUserManager(BaseUserManager):
def _create_user(self, username, email, u, password, is_staff, is_active, **extra_fields):
now = datetime.now()
if not email:
raise ValueError('Users must have an email address')
email = self.normalize_email(email)
user = self.model(username=username, email=email, u=u, password=password,
is_staff=is_staff, is_active=False, last_login=now, date_joined=now, **extra_fields)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, username, email, u, password = None, **extra_fields):
return self._create_user(username, email, u, False, False, **extra_fields)
def create_superuser(self, username, email, u, password = None):
user = self._create_user(username, email, u, password, True, True)
user.set_password(password)
user.is_active=True
user.is_admin = True
user.is_superuser = True
user.save(using=self._db)
return user
class CustomUser(AbstractBaseUser):
username = models.CharField(max_length=30)
email = models.EmailField(max_length=30, unique=True, db_index=True)
password1 = models.CharField(max_length=30)
password2 = models.CharField(max_length=30)
CHOICES= (('LinkedinUser', 'LinkedinUser'),('FacebookUser', 'FacebookUser'),)
u = models.CharField(choices=CHOICES, max_length=20, default=0)
date_joined = models.DateTimeField(_('date joined'), default=datetime.now)
is_active = models.BooleanField(default=True)
is_admin = models.BooleanField(default=False)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
REQUIRED_FIELDS = ('username', 'u')
USERNAME_FIELD = 'email'
objects = CustomUserManager()
class Meta:
verbose_name = _('user')
verbose_name_plural = _('users')
def get_full_name(self):
# The user is identified by their email address
return self.email
def get_short_name(self):
# The user is identified by their email address
return self.email
def __str__(self): # __unicode__ on Python 2
return self.email
def has_perm(self, perm, obj=None):
"Does the user have a specific permission?"
# Simplest possible answer: Yes, always
return True
def has_module_perms(self, app_label):
"Does the user have permissions to view the app `app_label`?"
# Simplest possible answer: Yes, always
return True
@property
def is_staff(self):
return self.is_admin
### forms.py
from django import forms
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from .models import CustomUser#, LinkedInUser, FacebookUser
import re
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth import get_user_model
class CustomUserForm(forms.ModelForm):
username = forms.RegexField(regex=r'^\w+$', widget=forms.TextInput(attrs=dict(required=True, max_length=30)), label=_("username"), error_messages={ 'invalid': _("This value must contain only letters, numbers and underscores.") })
email = forms.EmailField(widget=forms.TextInput(attrs=dict(required=True, max_length=30)), label=_("Email address"))
password1 = forms.CharField(widget=forms.PasswordInput(attrs=dict(required=True, max_length=30, render_value=False)), label=_("Password"))
password2 = forms.CharField(widget=forms.PasswordInput(attrs=dict(required=True, max_length=30, render_value=False)), label=_("Password (again)"))
CHOICES= (('LinkedinUser', 'LinkedinUser'),('FacebookUser', 'FacebookUser'),)
u = forms.ChoiceField(choices=CHOICES, label='ID', widget=forms.RadioSelect())
class Meta :
model = CustomUser
fields = [ 'username', 'email', 'password1', 'password2', 'u' ]
User = get_user_model()
def clean_name(self):
try:
user = User.objects.get(username__iexact=self.cleaned_data['username'])
except User.DoesNotExist:
return self.cleaned_data['username']
raise forms.ValidationError(_("The username already exists. Please try another one."))
def clean(self):
if 'password1' in self.cleaned_data and 'password2' in self.cleaned_data:
if self.cleaned_data['password1'] != self.cleaned_data['password2']:
raise forms.ValidationError(_("The two password fields did not match."))
return self.cleaned_data
class CustomUserCreationForm(UserCreationForm):
"""
A form that creates a user, with no privileges, from the given email and
password.
"""
def __init__(self, *args, **kargs):
super(CustomUserCreationForm, self).__init__(*args, **kargs)
del self.fields['username']
class Meta:
model = CustomUser
fields = ("email",)
### admin.py
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth import get_user_model
from .models import CustomUser
from .forms import CustomUserCreationForm
class CustomUserAdmin(admin.ModelAdmin):
form = CustomUserCreationForm
admin.site.register(CustomUser, CustomUserAdmin)
### backends.py:
from models import CustomUser
class CustomUserAuth(object):
def authenticate(self, username=None, password=None):
try:
user = CustomUser.objects.get(email=username)
if user.check_password(password):
return user
except CustomUser.DoesNotExist:
return None
def get_user(self, user_id):
try:
user = CustomUser.objects.get(pk=user_id)
if user.is_active:
return user
return None
except CustomUser.DoesNotExist:
return None
Answer: Remove `django.contrib.customuser` and `django.contrib.auth` from your
`INSTALLED_APPS`. There is no `customuser` application under the `django.contrib`
package, and `auth` can be omitted (to avoid a potential name collision).
Furthermore, I suggest you re-read [the Django docs on auth
customization](https://docs.djangoproject.com/en/1.9/topics/auth/customizing/).
Most of the changes are optional, and your code should be simplified by re-
using the base classes, unless your methods vary of course. The docs also
mentions that for swapping User models, you are required to update settings to
`AUTH_USER_MODEL = 'customuser.CustomUser'`.
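A minimal sketch of the relevant settings, assuming the app that defines `CustomUser` is registered under the name `customuser`:

# settings.py
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    # ... your other apps; note there is no 'django.contrib.customuser'
    'customuser',  # the app containing the CustomUser model
]

AUTH_USER_MODEL = 'customuser.CustomUser'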
|
Python - SkLearn Imputer usage
Question: I have the following question: I have a pandas dataframe, in which missing
values are marked by the string `na`. I want to run an Imputer on it to
replace the missing values with the mean in the column. According to the
sklearn documentation, the parameter `missing_values` should help me with
this:
> missing_values : integer or “NaN”, optional (default=”NaN”) The placeholder
> for the missing values. All occurrences of missing_values will be imputed.
> For missing values encoded as np.nan, use the string value “NaN”.
In my understanding, this means that if I write
df = pd.read_csv(filename)
imp = Imputer(missing_values='na')
imp.fit_transform(df)
the imputer should replace every `na` value in the dataframe with the mean of
its column. However, instead, I get an error:
ValueError: could not convert string to float: na
What am I misinterpreting? Is this not how the imputer should work? How can I
replace the `na` strings with the mean, then? Should I just use a lambda for
it?
Thank you!
Answer: Since you say you want to replace these `'na'` by a the mean of the column,
I'm guessing the non-missing values are indeed floats. The problem is that
pandas does not recognize the string `'na'` as a missing value, and so reads
the column with dtype `object` instead of some flavor of `float`.
Case in point, consider the following `.csv` file:
test.csv
col1,col2
1.0,1.0
2.0,2.0
3.0,3.0
na,4.0
5.0,5.0
With the naive import `df = pd.read_csv('test.csv')`, `df.dtypes` tells us
that `col1` is of dtype `object` and `col2` is of dtype `float64`. But how do
you take the mean of a bunch of objects?
The solution is to tell `pd.read_csv()` to interpret the string `'na'` as a
missing value:
df = pd.read_csv('test.csv', na_values='na')
The resulting dataframe has both columns of dtype `float64`, and you can now
use your imputer.
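Putting the two steps together, a minimal sketch (using the `test.csv` above, and the pre-0.20 `sklearn.preprocessing.Imputer` class from the question):

import pandas as pd
from sklearn.preprocessing import Imputer

df = pd.read_csv('test.csv', na_values='na')   # 'na' becomes NaN, columns become float64
imp = Imputer(strategy='mean')                 # default missing_values='NaN'
filled = imp.fit_transform(df)                 # the NaN in col1 is replaced by 2.75
print(filled)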
|
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 7601: ordinal not in range(128)
Question: I am currently running: Python 3.5.1 :: Anaconda 4.0.0 (x86_64).
ERROR: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
7601: ordinal not in range(128)
When running the code below I get the above error. When I save and try to open
the txt file from a local directory I experience the same error; however, when I
save and run a duplicate that I shorten to ~25 lines, it runs as expected --
any guidance would be very much appreciated.
import numpy as np
import matplotlib.pyplot as pp
import seaborn
import urllib.request
urllib.request.urlretrieve('ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt','stations.txt')
print(open('stations.txt','r').readlines()[:10])
Answer: Unfortunately, the [documentation for that
directory](http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) does not
specify what codec is used for the files, so I opened the file in binary mode
instead and found the bytes that caused 'offense'.
The data is encoded as UTF-8; the 'offending' bytes you encounter spell out
ESPAñOLA:
>>> line
b'US1NMRA0022 36.0456 -106.1517 1955.0 NM ESPA\xc3\xb1OLA 5.4 WNW \n'
>>> line.decode('utf8')
'US1NMRA0022 36.0456 -106.1517 1955.0 NM ESPAñOLA 5.4 WNW \n'
That's the 63815th line in the file, if you are curious, which is why you
don't see this issue when you truncate the file.
Open the file with that codec:
open('stations.txt', 'r', encoding='utf8')
Don't rely on the default, which depends on your locale (which easily differs
from environment to environment).
|
Developer Inactive error when calling UnderArmour api
Question: A 403 "Developer Inactive" error is received when trying to post to the
[access_token endpoint](https://developer.underarmour.com/docs/v71_OAuth_2) in
the UnderArmour Connected Fitness api. The **client_id** being used is active.
The url used in the call is: <https://api.ua.com/v7.1/oauth2/access_token/>
This is a snippet of the call using python, after having obtained the
authorization code:
import requests
access_token_url = 'https://api.ua.com/v7.1/oauth2/access_token/'
access_token_data = {'grant_type': 'authorization_code',
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET,
'code': authorize_code}
response = requests.post(url=access_token_url, data=access_token_data)
In [24]: response
Out[24]: <Response [403]>
In [25]: response.content
Out[25]: '<h1>Developer Inactive</h1>'
where CLIENT_ID and CLIENT_SECRET are my registered values on the developer's
portal.
Answer: All calls made to api.ua.com must include an 'api-key' header value,
otherwise, you'll get the 403 Developer Inactive error.
This snippet shows how to do it, in python:
import requests
access_token_url = 'https://api.ua.com/v7.1/oauth2/access_token/'
access_token_data = {'grant_type': 'authorization_code',
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET,
'code': authorize_code}
headers = {'api-key': CLIENT_ID}
response = requests.post(url=access_token_url, data=access_token_data, headers=headers)
In [30]: response
Out[30]: <Response [200]>
In [31]: response.content
Out[31]: '{"user_id": "<user_id>", "access_token": "<access token>", "expires_in": 2591999, "token_type": "Bearer", "scope": "read", "user_href": "/v7.1/user/<user id>/", "refresh_token": "<refresh token>"}'
|
Python: Pass arguments to function which are stored in a file
Question: We have a database from which we run queries. Some of these queries take a
(too) long time, and since there seems to be no easy optimization (this has
to do with h5py not supporting fast fancy indexing), we decided to make a
cache, so that once-run queries are fast. This works fine so far.
What I would like to do now is to make a text file in which each row contains
one complete argument list for the query function. I want to run all these
queries (that is, the query function with the arguments from a line for each
line in the file) from the constructor of the query engine, thus filling up
the cache with important/time consuming queries.
Now, how do I go about this? With pandas, I got almost what I wanted (it gives
you a list of lists, and each of these lists should be passable as *args).
The problem with that approach was that some of the arguments can be lists, and
in that case pandas interprets them just as strings, making additional
and tedious work necessary in order to achieve something passable.
Is there a more "professional" way to do this?
Answer: I don't know Python very well, but I think that you simply want to serialize a
list of arguments and deserialize it later. If this is the case, something
like [pickle](https://docs.python.org/2/library/pickle.html) should help.
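A minimal sketch of that idea (the file name, the sample argument lists, and `run_query` are all hypothetical):

import pickle

# Once: store one complete argument list per query.
arg_lists = [[3, 'flux', [0, 1, 2]],
             ['temp', None, [5]]]          # hypothetical query arguments
with open('queries.pkl', 'wb') as f:
    pickle.dump(arg_lists, f)

# Later, e.g. in the query engine's constructor: replay every stored query.
with open('queries.pkl', 'rb') as f:
    for args in pickle.load(f):
        run_query(*args)                   # hypothetical query function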
|
Save Dataframe to csv directly to s3 Python
Question: I have a pandas DataFrame that I want to upload to a new CSV file. The problem
is that I don't want to save the file locally before transferring it to s3. Is
there a method like to_csv for writing the dataframe to s3 directly? I am
using boto3.
Here is what I have so far:
import boto3
s3 = boto3.client('s3', aws_access_key_id='key', aws_secret_access_key='secret_key')
read_file = s3.get_object(Bucket, Key)
df = pd.read_csv(read_file['Body'])
# Make alterations to DataFrame
# Then export DataFrame to CSV through direct transfer to s3
Answer: If you pass `None` as the first argument to
[`to_csv()`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.to_csv.html) the data will be returned
as a string. From there it's an easy step to upload that to S3 in one go.
It should also be possible to pass a `StringIO` object to `to_csv()`, but
using a string will be easier.
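A hedged sketch of that flow with boto3, reusing the `s3` client from the question (bucket and key names are placeholders):

csv_string = df.to_csv(None)  # to_csv(None) returns the CSV contents as a string
s3.put_object(Bucket='my-bucket', Key='processed/output.csv', Body=csv_string)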
|
datastax opscenter "SchedulesNotLoaded" error
Question: I am using Datastax OpsCenter v5.2.4 with DSE v4.8.4-1. Since few days ago,
I've been not able to retrieve the result of "Best Practice Service" both from
API and opscenter UI. When I try to get it, I get errors like below.
GUI:
> Could not retrieve best practice rules: Scheduled jobs have not been loaded
> yet. There may be a connectivity problem with Cassandra.
opscenterd.log
2016-07-01 19:47:50+0000 [] ERROR: Problem while calling decorator (SchedulesNotLoaded): Scheduled jobs have not been loaded yet. There may be a connectivity problem with Cassandra.
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python2.7/dist-packages/opscenterd/WebServer.py", line 1939, in SchedulesGetController
File "/usr/lib/python2.7/dist-packages/opscenterd/Schedule.py", line 213, in getAllSchedules
File "/usr/lib/python2.7/dist-packages/opscenterd/Schedule.py", line 175, in _assert_loaded
I've tried to restart opscenterd service as well as rebooting the opscenter
machine itself but it didn't make any difference. The error says there might
be some connectivity issue, but what port/protocol is opscenter using to load
these scheduled jobs? (there is no firewall between opscenter and cassandra
nodes) There is no alert in the cluster, and agents are all connected
according to opscenter's GUI.
I couldn't find any relevant troubleshooting documentation ... how can we
recover opscenter from this situation?
Answer: The issue arises when schedule settings get into a messed-up state. If you
schedule a one-time-only run, and opscenterd is down when it is scheduled to
run, then on startup opscenterd dies loading that schedule.
If you don't have anything particularly important stored, there is an easy fix:
shut down opscenter, use cqlsh to `drop keyspace "OpsCenter";`, and restart
opscenter.
Otherwise you have to clean up by hand the schedules in the
`OpsCenter.settings` table that got messed up. This is fixed in 6.0, so if you
upgrade it won't happen again.
|
Firebase get request: TypeError: __init__() got an unexpected keyword argument 'strict'
Question: Trying to get Firebase set up and this code produces the error. I've also
tried making the restful call simply using `requests` and I'm getting the
exact same error. I'm using python 3.4. What's going on here?
from firebase import firebase
firebase = firebase.FirebaseApplication('https://testDB-72927.firebaseio.com/', authentication=None)
result = firebase.get('/test', None)
print(result)
Traceback (most recent call last):
File "/Users/Parthenon/Desktop/TestProject/Test.py", line 19, in <module>
firstFunc(mylist)
File "/Users/Parthenon/Desktop/TestProject/Test.py", line 14, in firstFunc
result = firebase.get('/test', None)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/firebase/decorators.py", line 19, in wrapped
return f(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/firebase/firebase.py", line 274, in get
return make_get_request(endpoint, params, headers, connection=connection)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/firebase/decorators.py", line 19, in wrapped
return f(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/firebase/firebase.py", line 38, in make_get_request
response = connection.get(url, params=params, headers=headers, timeout=timeout)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py", line 310, in get
return self.request('GET', url, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py", line 174, in send
timeout=timeout
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 417, in urlopen
conn = self._get_conn(timeout=pool_timeout)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 232, in _get_conn
return conn or self._new_conn()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 547, in _new_conn
strict=self.strict)
TypeError: __init__() got an unexpected keyword argument 'strict'
Answer: My requests module was not up to date.
sudo pip install requests --upgrade
|
First of two blocks of code of reading from a file not executing in python
Question: As part of an assignment I'm writing an assembler in python that takes
simplified assembly language and outputs binary machine language. Part of my
code is below, where I'm reading from the assembly code in two passes. The
first pass (the first line of with open(filename, "r") as asm_file) in the
first block of reading from the file asm_file doesn't seem to be executing.
The second one is executing fine, well it's not outputting the correct binary
because the first block doesn't seem to be running correctly or at all. Am I
using the "with open(filename. "r") as file:" correctly? What am I missing?
Thanks in advance.
For completeness an input file is given below the code:
if __name__ == "__main__":
#fill Symbol Table and C instruction Tables
symbol_table = symbolTable()
symbol_table.initialiseTable()
comp_table = compTable()
comp_table.fillTable()
dest_table = destTable()
dest_table.fillTable()
jump_table = jumpTable()
jump_table.fillTable()
#import the file given in the command line
filename = sys.argv[-1]
#open output_file
output_file = open('output.hack', 'w')
#open said file and work on contents line by line
with open(filename, "r") as asm_file: ##### This one doesn't seem to run because
#1st pass of input file ##### The print command below doesn't output anything
num_instructions = -1
for line in asm_file:
#ignoring whitespace and comments
if line != '\n' and not line.startswith('//'):
num_instructions += 1
#remove in-line comments
if '//' in line:
marker, line = '//', line
line = line[:line.index(marker)].strip()
#search for beginning of pseudocommand
if line.startswith('('):
num_instructions -= 1
label = line.strip('()')
address = num_instructions + 1
symbol_table.addLabelAddresses(label, address)
print(num_instructions) ###### This print command doesn't output anything
with open(filename, "r") as asm_file:
#2nd pass of input file
for line in asm_file:
#ignoring whitespace and comments
if line != '\n' and not line.startswith('//') and not line.startswith('('):
#remove in-line comments
if '//' in line:
marker, line = '//', line
line = line[:line.index(marker)].strip()
#send each line to parse function to unpack into its underlying fields
instruction = parseLine(line.strip(' \n'))
inst = Instruction(instruction)
binary_string = inst.convertToBin()
#write to output file
output_file.write(binary_string +'\n')
output_file.close()
An input file example:
// This file is part of www.nand2tetris.org
// and the book "The Elements of Computing Systems"
// by Nisan and Schocken, MIT Press.
// File name: projects/06/max/Max.asm
// Computes R2 = max(R0, R1) (R0,R1,R2 refer to RAM[0],RAM[1],RAM[2])
@R0
D=M // D = first number
@R1
D=D-M // D = first number - second number
@OUTPUT_FIRST
D;JGT // if D>0 (first is greater) goto output_first
@R1
D=M // D = second number
@OUTPUT_D
0;JMP // goto output_d
(OUTPUT_FIRST)
@R0
D=M // D = first number
(OUTPUT_D)
@R2
M=D // M[2] = D (greatest number)
(INFINITE_LOOP)
@INFINITE_LOOP
0;JMP // infinite loop
Answer: Your problem seems to be that your code checks if a line starts with a `(`,
but in the assembly file each instruction is preceded by a tab, so the check
fails. You should probably strip the line after your first if statement,
like so:
with open(filename, "r") as asm_file:
num_of_instructions = -1
for line in asm_file
if line != "\n":
line.strip()
#rest of code
Incidentally, is the print statement supposed to execute every time it finds a
line? Because if it doesn't, you should put it after the for loop. That is why
it is not outputting anything.
Edit: As @TimPeters says, the print statement will also only execute if it
starts with an open bracket and has a comment in it
|
must be str, not bytes | 'str' has no attribute 'decode'
Question: I have this simple code to extract from a mongo database:
import sys
import codecs
import datetime
from pymongo import MongoClient
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
mongo_db = "database"
collectionId = "coll name"
def main(argv):
client = MongoClient("mongodb://localhost:27017")
db = client[mongo_db]
collection = db[collectionId]
cursor = collection.find({})
for document in cursor:
# if "content" in document:
# sys.stdout.write(
# "|"+(document['content'] if document['content'] is not None else "")+"\n")
for key, value in document.items() :
sys.stdout.write(key.decode('utf-8'))
if __name__ == "__main__":
main(sys.argv)
Running it like this gets me
> AttributeError: 'str' object has no attribute 'decode'
So... it's a str object, then? but if I remove the decode, I get
> TypeError: must be str, not bytes
and, it's not like it's printing anything, so, it must be failing at the first
key? but... can the first key be neither str nor bytes??? how can I print
this?
EDIT: testing with flush:
for key, value in document.items() :
sys.stdout.write("1")
sys.stdout.flush()
sys.stdout.write(key.decode('utf-8'))
sys.stdout.flush()
I changed the for to that, getting the error
~/Desktop$ python3 sentimongo.py
Traceback (most recent call last):
File "sentimongo.py", line 30, in <module>
main(sys.argv)
File "sentimongo.py", line 24, in main
sys.stdout.write("1")
File "/usr/lib/python3.4/codecs.py", line 374, in write
self.stream.write(data)
TypeError: must be str, not bytes
Answer:
sys.stdout = codecs.getwriter('utf8')(sys.stdout)
This line changes the standard output, so it does different things than you
normally see. In Python 3 you don’t really need to care about converting
things to utf8 since everything already is a unicode string.
If you remove that line, writing a normal string (or even printing one) should
work fine.
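With that line removed, a minimal sketch of the loop (PyMongo document keys are already `str` in Python 3, so no decoding is needed):

for key, value in document.items():
    sys.stdout.write(key + '\n')   # no .decode() needed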
|
Iterating list with given index list in python
Question: I have a list `A = [-1, 2, 1, 2, 0, 2, 1, -3, 4, 3, 0, -1]` and `B = [0, 7,
11]`. List `B` holds the indices of the negative numbers.
How can I return the sum of each slice of the list, for example the sums of
`A[0+1:7]` and `A[7+1:11]`?
Answer: Using [`zip`](https://docs.python.org/2/library/functions.html#zip), you can
convert `[0, 7, 11]` to the desired slice index pairs (`1:7` / `8:11`):
>>> zip(B, B[1:]) # In Python 3.x, this will return an iterator.
[(0, 7), (7, 11)]
>>> [(i+1, j) for i, j in zip(B, B[1:])]
[(1, 7), (8, 11)]
* * *
>>> [A[i+1:j] for i, j in zip(B, B[1:])]
[[2, 1, 2, 0, 2, 1], [4, 3, 0]]
>>> [sum(A[i+1:j]) for i, j in zip(B, B[1:])]
[8, 7]
**UPDATE**
Another way to accomplish what you want without defining `B` using
[`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby):
>>> A = [-8, 3, 0, 5, -3, 12]
>>> import itertools
>>> [sum(grp) for positive, grp in itertools.groupby(A, lambda x: x >= 0) if positive]
[8, 12]
`key` function was used to split _0 and positive numbers_ and _negative_
numbers.
|
How to copy a file in a zipfile into a certain directory?
Question: I need only one subfile in each of 500 zipfiles, the paths are the same, like:
120132.zip/A/B/C/target_file
212332.zip/A/B/C/target_file
....
How can I copy all these target files into one directory? Keeping the entire
paths in the new directory would be best, by which I mean:
target_dir/
120132/A/B/C/target_file
212332/A/B/C/target_file
......
I tried it with the Python modules _zipfile_ and _shutil_.
However, **copyfile** from _shutil_ takes the entire path as an argument, and
when I tried to directly copy the target file it raised a file-not-found error.
Once the archive is opened with zipfile.ZipFile, the target file is accessible,
but _copyfile_ no longer applies.
How can I do this correctly and efficiently?
Answer: [`ZipFile.extract`](https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile.extract)
accepts optional `path` specifying into which directory it will extract file:
import os
import zipfile
zip_filepath = ['120132.zip', '212332.zip', ...] # or glob.glob('...zip')
target_dir = '/path/to/target_dir'
for path in zip_filepath:
with zipfile.ZipFile(path) as zf:
dirname = os.path.join(
target_dir, os.path.splitext(os.path.basename(path))[0]
)
zf.extract('A/B/C/target_file', path=dirname)
|
Python: Communicate between two files
Question: I have a defined function in one program:
def start():
QuadraticButton = Button(left_frame, text = "Quadratic Equation Solver", command = calculateQuad)
QuadraticButton.pack()
and then in a separate script I have the "calculate quad" function defined.
How can I link these two together, so when that button is pressed, the
calculate quad function is called, from that separate file?
Answer: Import the function from the module that defines it. Assuming `calculateQuad` lives in a file called `my_utils.py` in the same directory:

    from my_utils import calculateQuad

    def start():
        QuadraticButton = Button(left_frame, text = "Quadratic Equation Solver", command = calculateQuad)
        QuadraticButton.pack()
|
Python-Rounding the value of PI according to the number specified by user
Question: My Code :
import math
def CalPI(precision):
answer = round((math.pi),precision)
return answer
precision=raw_input('Enter number of digits you want after decimal:')
try:
roundTo=int(precision)
print CalPI(roundTo)
except:
print 'Error'
When I run this code I get at most 11 decimal places in the output. However
I want to generate the output according to the input given by the user.
![This is my output snip:](http://i.stack.imgur.com/PTc7l.jpg)
Can anyone tell me where I am going wrong?
Thank you in advance!
Answer: If you use `repr` in your print you'll have 15 digits: 3.141592653589793.
If you want more digits (until 50) use
nb_digits = 40
print(format(math.pi, '.%dg' % nb_digits))
(thanks Stefan for the precision :) but, as he stated, don't trust digits
beyond the 15th; the 1000-digit program is best for that).
For even more digits, compute pi yourself just like here:
[1000 digits of pi in
python](http://stackoverflow.com/questions/9004789/1000-digits-of-pi-in-
python)
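If the standard float precision is not enough, a quick sketch using the third-party `mpmath` library (an assumption: it must be installed separately via pip) gives arbitrary precision:

    from mpmath import mp

    mp.dps = 1000   # decimal places of working precision
    print(mp.pi)    # pi to 1000 digits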
|
The Python "Requests" module cannot detect certain HTML link tags
Question: I'm sure this is an easy question for someone who has experience with webpage
programming and basic Web Scraping (which I do not).
My goal is to obtain information about the many tutors that Chegg hires, by
scraping their "bio" paragraphs. Although I am a novice at web-scraping, I
imagine that this will involve coding a scaper that recursively clicks through
the tutors' links:
![List of Tutors](http://i.stack.imgur.com/ZUWN3.png)
And scrapes the tutors' bios
Using the Microsoft Edge DOM Explorer, I can detect the tutor's link tag in
the page's HTML:
![Tutor's HTML link tag](http://i.stack.imgur.com/m6tlS.png)
However, when I use Python's "Requests" module to obtain the HTML of the web
page, the tutor's link is not there! Strangely, other links on the web page
are detected, but none of the tutors' links. The Python code looks like this:
import requests
r = requests.get('www.chegg.com/tutors/online-tutors/')
print r.content
Can someone advise me on this problem, and what I should go about learning
(e.g. HTML programming, HTTP Theory, etc) so I will be equipped to handle this
project?
Answer: All the data for each expert is inside the div with the `expert-list-content`
class:
from bs4 import BeautifulSoup
import requests
soup = BeautifulSoup(requests.get("https://www.chegg.com/tutors/online-tutors/").content)
for ex in soup.select("div.expert-list-content"):
print(ex.select_one("div.expert-description").text)
That gives you:
"Tutoring gives me great pleasure because I not only get to feel good about helping others, but my students also gain..."
"I was a teaching assistant as a graduate student in mathematics, and taught several classes as a postdoc. I have been a tutor..."
"I have always been the go-to student for notes, essay proofreading, and math instruction. I have tutored at the Latino..."
"In my senior year of high school, I worked as a Physics Teaching Assistant and through that, I honed skills necessary to..."
"Throughout the past eight years, I have had the incredible opportunity to work closely with over 200 students in..."
"I have worked as a teaching assistant in my college for core disciplinary courses. I have also conducted training sessions on..."
"Scott here. Originally from Tennessee and educated in Cornell University, I've been tutoring/teaching math for 10 years and..."
"I am currently pursuing dual BE Mechanical Engineering and M.Sc Mathematics degrees from BITS Pilani. I have had ample..."
"I am a specialist in language and linguistics, with a particular interest in the history and grammar of the English language..."
"I graduated 7 years before and since then have taught many students on a regular basis in Finance and Mathematics. I have..."
To get the profile links and name:
for ex in soup.select("div.expert-list-content"):
info = ex.select_one("div.expert-info a")
print(info.text, info["href"])
Which gives you:
(u'Aleria S.', '/tutors/online-tutors/Aleria-S-371573/')
(u'Douglas Z.', '/tutors/online-tutors/Douglas-Z-568826/')
(u'Carla S.', '/tutors/online-tutors/Carla-S-864918/')
(u'Vinit R.', '/tutors/online-tutors/Vinit-R-2031766/')
(u'Anastasia G.', '/tutors/online-tutors/Anastasia-G-65278/')
(u'Vinay S.', '/tutors/online-tutors/Vinay-S-85533/')
(u'Gunjan G.', '/tutors/online-tutors/Gunjan-G-2695711/')
(u'Scott M.', '/tutors/online-tutors/Scott-M-277743/')
(u'Saumya U.', '/tutors/online-tutors/Saumya-U-890305/')
(u'Ed M.', '/tutors/online-tutors/Ed-M-2895636/')
There is no Javascript involved, if you right click in your browser and choose
view source you can see it is all there. If it were dynamically created you
would not see it in the source outside _Microsoft Edge DOM Explorer_. In
general, it is always good to add a user-agent.
head = {"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}
soup = BeautifulSoup(requests.get("https://www.chegg.com/tutors/online-tutors/", headers=head).content)
|
Recursive loops with increasing size/arguments (Hamiltonian Paths?) Python
Question: I have a code that takes `n` inputs and computes the shortest distance between
them without ever revisiting the same point twice. I think this is the same as
the Hamiltonian Path problem.
My code takes `n` addresses as inputs and iterates over all possible
combinations without repeating. Right now I have the 'brute force' method
where each loop grabs the start/end location, calcs distance, excludes
replicated locations, then adds paths that only visit every point to my Df.
Since there are 5 locations, the 5th nested for loop block writes the sequence
and distance to the DF.
DF with values:
Index Type start_point
0 Start (38.9028613352942, -121.339977998194)
1 A (38.8882610961556, -121.297759)
2 B (38.9017768701178, -121.328815149117)
3 C (38.902337877551, -121.273244306122)
4 D (38.8627754142291, -121.313577618114)
5 E (38.882338375, -121.277366625)
My code goes like:
from geopy.distance import vincenty
import pandas as pd
master=pd.DataFrame()
master['locations']=''
master['distance']=''
n=0
df1a=source[source.Type != source.loc[0,'Type']]
df1a=df1a.reset_index(drop=True)
for i1a in df1a.index:
i1_master=vincenty(source.loc[0,'start_point'],df1a.loc[i1a,'start_point']).miles
for i2 in df1a.index:
df2a=df1a[df1a.Type != df1a.loc[i2,'Type']]
df2a=df2a.reset_index(drop=True)
for i2a in df2a.index:
if df1a.loc[i1a,'Type']==df2a.loc[i2a,'Type']:
break
else:
i2_master=i1_master+vincenty(df1a.loc[i1a,'start_point'],df2a.loc[i2a,'start_point']).miles
for i3 in df2a.index:
df3a=df2a[df2a.Type != df2a.loc[i3,'Type']]
df3a=df3a.reset_index(drop=True)
for i3a in df3a.index:
if df1a.loc[i1a,'Type']==df3a.loc[i3a,'Type']:
break
if df2a.loc[i2a,'Type']==df3a.loc[i3a,'Type']:
break
else:
i3_master=i2_master+vincenty(df2a.loc[i2a,'start_point'],df3a.loc[i3a,'start_point']).miles
for i4 in df3a.index:
df4a=df3a[df3a.Type != df3a.loc[i4,'Type']]
df4a=df4a.reset_index(drop=True)
for i4a in df4a.index:
if df1a.loc[i1a,'Type']==df4a.loc[i4a,'Type']:
break
if df2a.loc[i2a,'Type']==df4a.loc[i4a,'Type']:
break
if df3a.loc[i3a,'Type']==df4a.loc[i4a,'Type']:
break
else:
i4_master=i3_master+vincenty(df3a.loc[i3a,'start_point'],df4a.loc[i4a,'start_point']).miles
for i5 in df4a.index:
df5a=df4a[df4a.Type != df4a.loc[i5,'Type']]
df5a=df5a.reset_index(drop=True)
for i5a in df5a.index:
if df1a.loc[i1a,'Type']==df5a.loc[i5a,'Type']:
break
if df2a.loc[i2a,'Type']==df5a.loc[i5a,'Type']:
break
if df3a.loc[i3a,'Type']==df5a.loc[i5a,'Type']:
break
if df4a.loc[i4a,'Type']==df5a.loc[i5a,'Type']:
break
else:
i5_master=i4_master+vincenty(df4a.loc[i4a,'start_point'],df5a.loc[i5a,'start_point']).miles
#This loop is special, it calculates distance back to the start.
for i5 in df4a.index:
df5a=df4a[df4a.Type != df4a.loc[i5,'Type']]
df5a=df5a.reset_index(drop=True)
for i5a in df5a.index:
master.loc[n,'locations']=source.loc[0,'Type']+'_'+df1a.loc[i1a,'Type']+'_'+df2a.loc[i2a,'Type']+'_'+df3a.loc[i3a,'Type']+'_'+df4a.loc[i4a,'Type']+'_'+df5a.loc[i5a,'Type']+'_'+source.loc[0,'Type']
master.loc[n,'distance']=i5_master+vincenty(df5a.loc[i5a,'start_point'],df1a.loc[0,'start_point']).miles
n=n+1
Is there a way to use recursive code to build this structure? As a Chemical
Engineer I am out of my league ;)
For example: the number of if statements (to check for sequentially repeated
`start_points`) increases in each section and changes in terms of arguments.
Any other pointers are appreciated.
Answer: This is a special case of the _Travelling Salesman Problem_ , which is perhaps
the most famous example of an _intractable_ problem - one which cannot be
solved in sensible time for any sizable input. Brute-force enumeration, recursive
or otherwise, takes O(N!) time, which is only viable (even on modern systems)
for small numbers of inputs (< 10 maybe).
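For inputs that small, the nested loops can be collapsed into a single pass over all permutations. A sketch, assuming a hypothetical list of `(name, (lat, lon))` pairs and the same `vincenty` distance used in the question:

    from itertools import permutations
    from geopy.distance import vincenty

    def shortest_round_trip(start, stops):
        # start: (name, point); stops: list of (name, point) pairs.
        best_path, best_dist = None, float('inf')
        for order in permutations(stops):
            path = [start] + list(order) + [start]
            dist = sum(vincenty(a[1], b[1]).miles
                       for a, b in zip(path, path[1:]))
            if dist < best_dist:
                best_path, best_dist = path, dist
        return best_path, best_dist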
If you are willing to sacrifice _perfect_ solutions for the sake of resources,
check out some sub-optimal heuristic solutions here:
<http://www.math.tamu.edu/~mpilant/math167/Notes/Chapter2.pdf>
|
Can't install datasets package via pip
Question: I'm trying to run a script that requires the datasets python package. I've
tried installing this unsuccessfully using pip by calling:
`pip install datasets`
I know this hasn't worked because when I run the script I get the message:
Traceback (most recent call last):
File "lda.py", line 2, in <module>
import lda
File "/Users/deepthought/lda.py", line 3, in <module>
import datasets
ImportError: No module named datasets
I've installed python via homebrew.
When I run `pip install datasets` I get the error:
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/ch/84cpkwc52zx0rsh4k5v4_7h40000gn/T/pip-build-gZWyT3/datasets/
I'm fairly new to scripting python or going under the hood of an OS X, so
there's a risk I've missed something elementary.
I've been researching & trying to overcome this for about a week now including
looking at similar questions on stackoverflow.com and haven't gotten past this
stage for the duration. One of the tutorials I was working through told me to
edit ~/.profile
This has been left like so:
# The orginal version is saved in .profile.pysave
#PATH="/Library/Frameworks/Python.framework/Versions/3.5/bin:${PATH}"
#export PATH
export PATH=/usr/local/bin:/usr/local/sbin:$PATH
/etc/paths contains:
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
I'm running OS X El Capitan - 10.11.5 (15F34) Python 2.7.11
Brew doctor flagged multiple items, but I've no idea whether it is worth
fixing none/all of them:
Warning: Your XQuartz (2.7.7) is outdated
Please install XQuartz 2.7.9:
https://xquartz.macosforge.org
Warning: Python is installed at /Library/Frameworks/Python.framework
Homebrew only supports building against the System-provided Python or a
brewed Python. In particular, Pythons installed to /Library can interfere
with other software installs.
Warning: Unbrewed dylibs were found in /usr/local/lib.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.
Unexpected dylibs:
/usr/local/lib/libtcl8.6.dylib
/usr/local/lib/libtk8.6.dylib
Warning: Unbrewed header files were found in /usr/local/include.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.
Unexpected header files:
/usr/local/include/fakemysql.h
/usr/local/include/fakepq.h
/usr/local/include/fakesql.h
/usr/local/include/itcl.h
/usr/local/include/itcl2TclOO.h
/usr/local/include/itclDecls.h
/usr/local/include/itclInt.h
/usr/local/include/itclIntDecls.h
/usr/local/include/itclMigrate2TclCore.h
/usr/local/include/itclTclIntStubsFcn.h
/usr/local/include/mysqlStubs.h
/usr/local/include/node/ares.h
/usr/local/include/node/ares_version.h
/usr/local/include/node/nameser.h
/usr/local/include/node/node.h
/usr/local/include/node/node_buffer.h
/usr/local/include/node/node_internals.h
/usr/local/include/node/node_object_wrap.h
/usr/local/include/node/node_version.h
/usr/local/include/node/openssl/opensslconf.h
/usr/local/include/node/uv-private/ngx-queue.h
/usr/local/include/node/uv-private/stdint-msvc2008.h
/usr/local/include/node/uv-private/tree.h
/usr/local/include/node/uv-private/uv-bsd.h
/usr/local/include/node/uv-private/uv-darwin.h
/usr/local/include/node/uv-private/uv-linux.h
/usr/local/include/node/uv-private/uv-sunos.h
/usr/local/include/node/uv-private/uv-unix.h
/usr/local/include/node/uv-private/uv-win.h
/usr/local/include/node/uv.h
/usr/local/include/node/v8-debug.h
/usr/local/include/node/v8-preparser.h
/usr/local/include/node/v8-profiler.h
/usr/local/include/node/v8-testing.h
/usr/local/include/node/v8.h
/usr/local/include/node/v8stdint.h
/usr/local/include/node/zconf.h
/usr/local/include/node/zlib.h
/usr/local/include/odbcStubs.h
/usr/local/include/pqStubs.h
/usr/local/include/tcl.h
/usr/local/include/tclDecls.h
/usr/local/include/tclOO.h
/usr/local/include/tclOODecls.h
/usr/local/include/tclPlatDecls.h
/usr/local/include/tclThread.h
/usr/local/include/tclTomMath.h
/usr/local/include/tclTomMathDecls.h
/usr/local/include/tdbc.h
/usr/local/include/tdbcDecls.h
/usr/local/include/tdbcInt.h
/usr/local/include/tk.h
/usr/local/include/tkDecls.h
/usr/local/include/tkPlatDecls.h
Warning: Unbrewed .pc files were found in /usr/local/lib/pkgconfig.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.
Unexpected .pc files:
/usr/local/lib/pkgconfig/tcl.pc
/usr/local/lib/pkgconfig/tk.pc
Warning: Unbrewed static libraries were found in /usr/local/lib.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.
Unexpected static libraries:
/usr/local/lib/libtclstub8.6.a
/usr/local/lib/libtkstub8.6.a
Warning: You have unlinked kegs in your Cellar
Leaving kegs unlinked can lead to build-trouble and cause brews that depend on
those kegs to fail to run properly once built. Run `brew link` on these:
git
python3
Warning: Broken symlinks were found. Remove them with `brew prune`:
/usr/local/bin/github
/usr/local/lib/perl5/site_perl/Git/I18N.pm
/usr/local/lib/perl5/site_perl/Git/IndexInfo.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Editor.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Fetcher.pm
/usr/local/lib/perl5/site_perl/Git/SVN/GlobSpec.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Log.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Memoize/YAML.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Migration.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Prompt.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Ra.pm
/usr/local/lib/perl5/site_perl/Git/SVN/Utils.pm
/usr/local/lib/perl5/site_perl/Git/SVN.pm
/usr/local/lib/perl5/site_perl/Git.pm
/usr/local/share/git-core/templates/description
/usr/local/share/git-core/templates/hooks/applypatch-msg.sample
/usr/local/share/git-core/templates/hooks/commit-msg.sample
/usr/local/share/git-core/templates/hooks/post-update.sample
/usr/local/share/git-core/templates/hooks/pre-applypatch.sample
/usr/local/share/git-core/templates/hooks/pre-commit.sample
/usr/local/share/git-core/templates/hooks/pre-push.sample
/usr/local/share/git-core/templates/hooks/pre-rebase.sample
/usr/local/share/git-core/templates/hooks/prepare-commit-msg.sample
/usr/local/share/git-core/templates/hooks/update.sample
/usr/local/share/git-core/templates/info/exclude
/usr/local/share/man/man1/git-add.1
/usr/local/share/man/man1/git-am.1
/usr/local/share/man/man1/git-annotate.1
/usr/local/share/man/man1/git-apply.1
/usr/local/share/man/man1/git-archimport.1
/usr/local/share/man/man1/git-archive.1
/usr/local/share/man/man1/git-bisect.1
/usr/local/share/man/man1/git-blame.1
/usr/local/share/man/man1/git-branch.1
/usr/local/share/man/man1/git-bundle.1
/usr/local/share/man/man1/git-cat-file.1
/usr/local/share/man/man1/git-check-attr.1
/usr/local/share/man/man1/git-check-ignore.1
/usr/local/share/man/man1/git-check-mailmap.1
/usr/local/share/man/man1/git-check-ref-format.1
/usr/local/share/man/man1/git-checkout-index.1
/usr/local/share/man/man1/git-checkout.1
/usr/local/share/man/man1/git-cherry-pick.1
/usr/local/share/man/man1/git-cherry.1
/usr/local/share/man/man1/git-citool.1
/usr/local/share/man/man1/git-clean.1
/usr/local/share/man/man1/git-clone.1
/usr/local/share/man/man1/git-column.1
/usr/local/share/man/man1/git-commit-tree.1
/usr/local/share/man/man1/git-commit.1
/usr/local/share/man/man1/git-config.1
/usr/local/share/man/man1/git-count-objects.1
/usr/local/share/man/man1/git-credential-cache--daemon.1
/usr/local/share/man/man1/git-credential-cache.1
/usr/local/share/man/man1/git-credential-store.1
/usr/local/share/man/man1/git-credential.1
/usr/local/share/man/man1/git-cvsexportcommit.1
/usr/local/share/man/man1/git-cvsimport.1
/usr/local/share/man/man1/git-cvsserver.1
/usr/local/share/man/man1/git-daemon.1
/usr/local/share/man/man1/git-describe.1
/usr/local/share/man/man1/git-diff-files.1
/usr/local/share/man/man1/git-diff-index.1
/usr/local/share/man/man1/git-diff-tree.1
/usr/local/share/man/man1/git-diff.1
/usr/local/share/man/man1/git-difftool.1
/usr/local/share/man/man1/git-fast-export.1
/usr/local/share/man/man1/git-fast-import.1
/usr/local/share/man/man1/git-fetch-pack.1
/usr/local/share/man/man1/git-fetch.1
/usr/local/share/man/man1/git-filter-branch.1
/usr/local/share/man/man1/git-fmt-merge-msg.1
/usr/local/share/man/man1/git-for-each-ref.1
/usr/local/share/man/man1/git-format-patch.1
/usr/local/share/man/man1/git-fsck-objects.1
/usr/local/share/man/man1/git-fsck.1
/usr/local/share/man/man1/git-gc.1
/usr/local/share/man/man1/git-get-tar-commit-id.1
/usr/local/share/man/man1/git-grep.1
/usr/local/share/man/man1/git-gui.1
/usr/local/share/man/man1/git-hash-object.1
/usr/local/share/man/man1/git-help.1
/usr/local/share/man/man1/git-http-backend.1
/usr/local/share/man/man1/git-http-fetch.1
/usr/local/share/man/man1/git-http-push.1
/usr/local/share/man/man1/git-imap-send.1
/usr/local/share/man/man1/git-index-pack.1
/usr/local/share/man/man1/git-init-db.1
/usr/local/share/man/man1/git-init.1
/usr/local/share/man/man1/git-instaweb.1
/usr/local/share/man/man1/git-log.1
/usr/local/share/man/man1/git-lost-found.1
/usr/local/share/man/man1/git-ls-files.1
/usr/local/share/man/man1/git-ls-remote.1
/usr/local/share/man/man1/git-ls-tree.1
/usr/local/share/man/man1/git-mailinfo.1
/usr/local/share/man/man1/git-mailsplit.1
/usr/local/share/man/man1/git-merge-base.1
/usr/local/share/man/man1/git-merge-file.1
/usr/local/share/man/man1/git-merge-index.1
/usr/local/share/man/man1/git-merge-one-file.1
/usr/local/share/man/man1/git-merge-tree.1
/usr/local/share/man/man1/git-merge.1
/usr/local/share/man/man1/git-mergetool--lib.1
/usr/local/share/man/man1/git-mergetool.1
/usr/local/share/man/man1/git-mktag.1
/usr/local/share/man/man1/git-mktree.1
/usr/local/share/man/man1/git-mv.1
/usr/local/share/man/man1/git-name-rev.1
/usr/local/share/man/man1/git-notes.1
/usr/local/share/man/man1/git-p4.1
/usr/local/share/man/man1/git-pack-objects.1
/usr/local/share/man/man1/git-pack-redundant.1
/usr/local/share/man/man1/git-pack-refs.1
/usr/local/share/man/man1/git-parse-remote.1
/usr/local/share/man/man1/git-patch-id.1
/usr/local/share/man/man1/git-peek-remote.1
/usr/local/share/man/man1/git-prune-packed.1
/usr/local/share/man/man1/git-prune.1
/usr/local/share/man/man1/git-pull.1
/usr/local/share/man/man1/git-push.1
/usr/local/share/man/man1/git-quiltimport.1
/usr/local/share/man/man1/git-read-tree.1
/usr/local/share/man/man1/git-rebase.1
/usr/local/share/man/man1/git-receive-pack.1
/usr/local/share/man/man1/git-reflog.1
/usr/local/share/man/man1/git-relink.1
/usr/local/share/man/man1/git-remote-ext.1
/usr/local/share/man/man1/git-remote-fd.1
/usr/local/share/man/man1/git-remote-testgit.1
/usr/local/share/man/man1/git-remote.1
/usr/local/share/man/man1/git-repack.1
/usr/local/share/man/man1/git-replace.1
/usr/local/share/man/man1/git-repo-config.1
/usr/local/share/man/man1/git-request-pull.1
/usr/local/share/man/man1/git-rerere.1
/usr/local/share/man/man1/git-reset.1
/usr/local/share/man/man1/git-rev-list.1
/usr/local/share/man/man1/git-rev-parse.1
/usr/local/share/man/man1/git-revert.1
/usr/local/share/man/man1/git-rm.1
/usr/local/share/man/man1/git-send-email.1
/usr/local/share/man/man1/git-send-pack.1
/usr/local/share/man/man1/git-sh-i18n--envsubst.1
/usr/local/share/man/man1/git-sh-i18n.1
/usr/local/share/man/man1/git-sh-setup.1
/usr/local/share/man/man1/git-shell.1
/usr/local/share/man/man1/git-shortlog.1
/usr/local/share/man/man1/git-show-branch.1
/usr/local/share/man/man1/git-show-index.1
/usr/local/share/man/man1/git-show-ref.1
/usr/local/share/man/man1/git-show.1
/usr/local/share/man/man1/git-stage.1
/usr/local/share/man/man1/git-stash.1
/usr/local/share/man/man1/git-status.1
/usr/local/share/man/man1/git-stripspace.1
/usr/local/share/man/man1/git-submodule.1
/usr/local/share/man/man1/git-svn.1
/usr/local/share/man/man1/git-symbolic-ref.1
/usr/local/share/man/man1/git-tag.1
/usr/local/share/man/man1/git-tar-tree.1
/usr/local/share/man/man1/git-unpack-file.1
/usr/local/share/man/man1/git-unpack-objects.1
/usr/local/share/man/man1/git-update-index.1
/usr/local/share/man/man1/git-update-ref.1
/usr/local/share/man/man1/git-update-server-info.1
/usr/local/share/man/man1/git-upload-archive.1
/usr/local/share/man/man1/git-upload-pack.1
/usr/local/share/man/man1/git-var.1
/usr/local/share/man/man1/git-verify-pack.1
/usr/local/share/man/man1/git-verify-tag.1
/usr/local/share/man/man1/git-web--browse.1
/usr/local/share/man/man1/git-whatchanged.1
/usr/local/share/man/man1/git-write-tree.1
/usr/local/share/man/man1/git.1
/usr/local/share/man/man1/gitk.1
/usr/local/share/man/man1/gitremote-helpers.1
/usr/local/share/man/man1/gitweb.1
/usr/local/share/man/man3/Git.3pm
/usr/local/share/man/man3/Git::I18N.3pm
/usr/local/share/man/man3/Git::SVN::Editor.3pm
/usr/local/share/man/man3/Git::SVN::Fetcher.3pm
/usr/local/share/man/man3/Git::SVN::Memoize::YAML.3pm
/usr/local/share/man/man3/Git::SVN::Prompt.3pm
/usr/local/share/man/man3/Git::SVN::Ra.3pm
/usr/local/share/man/man3/Git::SVN::Utils.3pm
/usr/local/share/man/man5/gitattributes.5
/usr/local/share/man/man5/githooks.5
/usr/local/share/man/man5/gitignore.5
/usr/local/share/man/man5/gitmodules.5
/usr/local/share/man/man5/gitrepository-layout.5
/usr/local/share/man/man5/gitweb.conf.5
/usr/local/share/man/man7/gitcli.7
/usr/local/share/man/man7/gitcore-tutorial.7
/usr/local/share/man/man7/gitcredentials.7
/usr/local/share/man/man7/gitcvs-migration.7
/usr/local/share/man/man7/gitdiffcore.7
/usr/local/share/man/man7/gitglossary.7
/usr/local/share/man/man7/gitnamespaces.7
/usr/local/share/man/man7/gitrevisions.7
/usr/local/share/man/man7/gittutorial-2.7
/usr/local/share/man/man7/gittutorial.7
/usr/local/share/man/man7/gitworkflows.7
Warning: Your Homebrew is outdated.
You haven't updated for at least 24 hours. This is a long time in brewland!
To update Homebrew, run `brew update`.
How do I make progress in diagnosing the issue with the installation of the
datasets package?
**Update**
Here is the script I'm trying to run:
import sys
egg_path = '/usr/local/lib/python2.7/site-packages/datasets-0.0.9-py2.7.egg'
sys.path.append(egg_path)
import numpy as np
import lda
import datasets
X = lda.datasets.load_reuters()
vocab = lda.datasets.load_reuters_vocab()
titles = lda.datasets.load_reuters_titles()
X.shape
(395, 4258)
X.sum()
84010
model = lda.LDA(n_topics=20, n_iter=1500, random_state=1)
model.fit(X) # model.fit_transform(X) is also available
topic_word = model.topic_word_ # model.components_ also works
n_top_words = 8
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
print('Topic {}: {}'.format(i, ' '.join(topic_words)))
Answer: Using `pip install datasets` I was also not able to properly install this
package. It seems like there is a **bug in this particular package.**
The **DESCRIBE.rst** file is simply missing. To fix this just download the
plain package from PyPi. <https://pypi.python.org/pypi/datasets/0.0.9>
Then **adjust the setup.py** file (remove the description).
Afterwards you need to install using `python setup.py install`. Don't forget
to add the installed package to your Python path!
To do so, I would recommend that you add the following to your script.
import sys
egg_path = '__MODULE_PATH__/datasets-0.0.9-py3.5.egg'
sys.path.append(egg_path)
import datasets
Otherwise, you can also put the module directory on Python's import path (note: for imports Python reads `PYTHONPATH`, not `PATH`):

    export PYTHONPATH=__MODULE_PATH__:$PYTHONPATH
Alternatively, you could also simply pull the source code from the Github
repository and just include it in your project.
<https://github.com/realtimeweb/datasets>
Hope this was kind of helpful to your problem. If you got any further
questions just let me know.
|
Can't connect to mysql using python's mysql.connector
Question: I'm using a Mac (OS 10.10.5), PyCharm, Python 3.5 and MySQL. MySQL has been
working with PHP on the same machine. I'm trying to connect to it using Python
and getting the error message:

    2003: Can't connect to MySQL server on 'localhost::3306' (8 nodename nor servname provided, or not known)
Can someone list the diagnostic steps so I can correct the problem? Thanks,
Doug
Below is the connection code:

    import mysql.connector
    from mysql.connector import errorcode
try:
cnn = mysql.connector.connect(
host="localhost:", # your host, usually localhost
user="root", # your username
password="root", # your password
database="bb_cards") # name of the data base
print("It Works!!")
except mysql.connector.Error as e:
if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with username or Password")
elif e.errno == errorcode.ER_BAD_DB_ERROR:
print("Database Does not exist")
else:
print(e)
Answer: You have a colon where there shouldn't be one:
host="localhost:" # remove the : -> host="localhost"
`127.0.0.1::3306` is not the same as `127.0.0.1:3306`
|
ImportError: cannot import name pubnub
Question: i have problem with pubhub module in python 2.7.6.
I've installed it with `sudo pip install pubnub`.
Output:
>>> import pubnub
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pubnub.py", line 3, in <module>
from pubnub import Pubnub
ImportError: cannot import name Pubnub
>>> from pubnub import Pubnub
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pubnub.py", line 3, in <module>
from pubnub import Pubnub
ImportError: cannot import name Pubnub
I reinstalled requests, python-dev and pubnub. Nothing helped
In help('modules') there is module pubnub
Answer: You have a file in your project called `pubnub.py`, which the importer is
finding before your installation of the pubnub module, I think. Rename your
file to something else, and everything should work.
|
PyQt Application Frozen Screen on Linux, fine on Windows
Question: I have been writing a PyQt Application on Windows for awhile, and I wanted to
see if it would run properly on Linux. The gui application is basically a
shell for a scientific toolkit meant to be used on the qtconsole/notebook.
Both Linux and Windows systems are based on Anaconda 3.5, and running PyQt5.
It works fine on Windows, but when I run on Linux, the Qt window pops up but
simply freezes in place the graphics that were underneath the window (see
image below). There are no errors at all shown on the command line, the window
just pops up and doesn't show widgets at all. Furthermore, when I try to close
with the x-button in the top of the window, it shows an "Application not
responsive" dialog and I have to force-quite.
The code-base is quite large (too large for this post) so I wouldn't really be
able to put in a code example for this problem (I haven't been able to
reproduce the error yet outside of this program). I can say that I did get
some examples from this site working for simple gui programs:
<https://github.com/Deusdies/pythonbo> . My code has a a lot of widget
subclassing, uses pyqtgraph (most recent version from github) for plots, and
has a lot of MDI subwindows.
Other packages for the underlying toolkit include:
* numpy, numba, scipy, matplotlib
* lmfit
* pyexcel, pyexcel-xlsx
* pyvisa, pyserial
* pyperclip
When I try to run any of the MDI subwindow widgets as individual programs, I
get the same problem as running the main program.
I guess I'm curious if anyone has run into this situation before and can
suggest likely things I should probe to see what the problem might be.
[![Frozen PyQt
Application](http://i.stack.imgur.com/A8Xcl.png)](http://i.stack.imgur.com/A8Xcl.png)
Answer: The general answer turned out to be that 2 QApplication event loops were being
initialized.
The problem code turned out to be in a little module I have been using called
[pyperclip](https://github.com/asweigart/pyperclip) which was being imported
first, and then I was instantiating my QApplication at the bottom of my gui
code.
This was not a problem on Windows because pyperclip can access the Windows
clipboard somewhat natively, but on Linux it uses the QtClipboard. My solution
was to edit the pyperclip code to detect if an application was already
running, and then to instantiate my gui application before importing pyperclip.
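A minimal sketch of that guard, assuming PyQt5 (the same pattern applies to any module that might create its own QApplication):

    from PyQt5 import QtWidgets

    # Reuse the running QApplication if one exists; otherwise create one.
    app = QtWidgets.QApplication.instance()
    if app is None:
        app = QtWidgets.QApplication([])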
|
Motion detection + Contours (Python)
Question: What's wrong with my python code. It says syntax error at line 5 "Mat frame",
line 13 "std". This code was originally from C++, I converted it to Python.
import numpy as np
import cv2
def run_main():
cv2.Mat frame
cv2.Mat back
cv2.Mat fore
cv2.VideoCapture cap(0)
cv2.BackgroundSubtractorMOG2 bg
bg.nmixtures = 3
bg.bShadowDetection = false
std::vector<std::vector<cv::Point> > contours;
cv2.namedWindow("Frame")
cv2.namedWindow("Background")
while True:
cap >> frame;
bg.operator ()(frame,fore)
bg.getBackgroundImage(back)
cv2.erode(fore,fore,cv2.Mat())
cv2.dilate(fore,fore,cv2.Mat())
cv2.findContours(fore,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_NONE)
cv2.drawContours(frame,contours,-1,cv2.Scalar(0,0,255),2)
cv2.imshow("Frame",frame)
cv2.imshow("Background",back)
if cv2.waitKey(1) & 0xFF == ord('q')
break
cap.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
run_main()
Answer: Your code is not valid Python; it's a syntactic mix of C++ and Python.
Some hints on what needs to be changed:
cv2.Mat frame
cv2.Mat back
cv2.Mat fore
cv2.VideoCapture cap(0)
cv2.BackgroundSubtractorMOG2 bg
There is no **Mat** type in OpenCV Python. It uses [numpy
arrays](http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html)
to handle data. Also, in Python variables are not declared this way.
cap = cv2.VideoCapture(0)
is the right way to do it.
The method you're trying to call to create the background subractor does not
exist.
cv2.BackgroundSubtractorMOG2 bg
has to be
bg = cv2.createBackgroundSubtractorMOG2()
Also see
[this](http://docs.opencv.org/3.1.0/db/d5c/tutorial_py_bg_subtraction.html#gsc.tab=0)
tutorial.
This line
std::vector<std::vector<cv::Point> > contours;
is just plain C++.
cap >> frame;
That's the C++ way to read data. In Python `read()` returns a success flag and the frame:

    ret, frame = cap.read()
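Putting those pieces together, a rough sketch of the whole loop in Python (note: exact `findContours` return values and subtractor options vary between OpenCV versions):

    import cv2

    cap = cv2.VideoCapture(0)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        fore = bg.apply(frame)            # foreground mask
        back = bg.getBackgroundImage()    # current background estimate
        fore = cv2.erode(fore, None)      # None -> default 3x3 kernel
        fore = cv2.dilate(fore, None)
        # OpenCV 3.x returns (image, contours, hierarchy); 2.4 omits the image.
        _, contours, _ = cv2.findContours(fore, cv2.RETR_EXTERNAL,
                                          cv2.CHAIN_APPROX_NONE)
        cv2.drawContours(frame, contours, -1, (0, 0, 255), 2)
        cv2.imshow('Frame', frame)
        if back is not None:
            cv2.imshow('Background', back)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()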
|
How am I getting two different results from same Python print command?
Question: For the first `print tag` I am getting a large list of hundreds of `<a` tags.
For the second `print tag` I am getting a list with four `<a` tags, not
including the ones that I want.
One of the tags that I want is at the end of `tags`. After printing
all several hundred tags, I am printing the last tag, and that is printing the
correct end tag as it should. But then by running another for loop over the
same (unchanged) list `tags` I am not just getting a different result, but
significantly different.
With or without the `print '\n\n\n'` the phenomenon is happening; it's just there to
make the split between the two prints easier for me to see.
What is happening to this list in between the first and second `for` loop to
cause this problem?
(This code is exactly as I have it in my script. Originally I didn't have the
lines from the first `for` loop until the empty line, and am doing this to
debug the lack of the correct URL from the end result.)
EDIT: Also, here is what is being printed for all the `print` statements (only
the last section of the first `print` within the `for` loop):
import urllib
from bs4 import BeautifulSoup
startingList = ['http://www.stowefamilylaw.co.uk/']
for url in startingList:
try:
html = urllib.urlopen(url)
soup = BeautifulSoup(html,'lxml')
tags = soup('a')
for tag in tags:
print tag
print tags[-1]
print '\n\n\n'
for tag in tags:
print tag
if not tag.get('href', None).startswith('..'):
continue
except:
continue
....
<a class="shiftnav-target" href="http://www.stowefamilylaw.co.uk/faq-category/decrees-orders-forms/" itemprop="url">Decrees, Orders & Forms</a>
<a class="shiftnav-target" href="http://www.stowefamilylaw.co.uk/faq-category/international-divorce/" itemprop="url">International Divorce</a>
<a class="shiftnav-target"><i class="fa fa-chevron-left"></i> Back</a>
<a class="shiftnav-target" href="http://www.stowefamilylaw.co.uk/contact/" itemprop="url"><i class="fa fa-phone"></i> Contact</a>
<a class="shiftnav-target" href="http://www.stowefamilylaw.co.uk/contact/" itemprop="url"><i class="fa fa-phone"></i> Contact</a>
<a href="http://www.stowefamilylaw.co.uk/">Stowe Family Law</a>
<a href="#spu-5086" style="color: #fff"><div class="callbackbutton"><i class="fa fa-phone" style="font-size: 16px"></i> Request Callback </div></a>
<a href="#spu-5084" style="color: #fff"><div class="callbackbutton"><i class="fa fa-envelope-o" style="font-size: 16px"></i> Quick Enquiry </div></a>
<a class="ubermenu-responsive-toggle ubermenu-responsive-toggle-main ubermenu-skin-black-white-2 ubermenu-loc-primary" data-ubermenu-target="ubermenu-main-3-primary"><i class="fa fa-bars"></i>Main Menu</a>
Answer: You have a blanket `except:`:
try:
# ...
except:
continue
so _any error_ in the block will be masked and your loop will be skipped.
Don't use blanket except handlers without raising again, ever, see [Why is
"except: pass" a bad programming
practice?](https://stackoverflow.com/questions/21553327/why-is-except-pass-a-
bad-programming-practice). At the very least catch only `Exception` and
_print_ that error:
except Exception as e:
print 'Encountered:', e
Without proper diagnostics all we can do is guess.
One error you definitely have is an attribute error here when there is no
`href` attribute; the `None` object doesn't have an attribute `startswith`:
if not tag.get('href', None).startswith('..'):
Instead of `None` return an empty string:
if not tag.get('href', '').startswith('..'):
or better yet, select only `a` tags with an `href` attribute:
tags = soup.select('a[href]')
|
Python tkinter, restart the program
Question: this is simple example of my code:
from tkinter import *
import random
class A:
def __init__(self, master):
n = random.randrange(1, 10, 1)
self.frame_a = Frame(master)
self.frame_a.pack()
self.label_a = Label(self.frame_a, text=n)
self.label_a.pack()
def clean(self):
self.frame_a.destroy()
A(root)
B(root)
class B:
def __init__(self, master):
self.frame_b = Frame(master)
self.frame_b.pack()
self.button_b = Button(self.frame_b, text='again', command=self.do_again)
self.button_b.pack()
def do_again(self):
self.frame_b.destroy()
hello.clean()
root = Tk()
hello = A(root)
world = B(root)
root.mainloop()
I want to delete all frames and labels and restart the program, but it doesn't
work correctly. The first time I use the `again` button, it works. However, if I
click the button again, the frame won't be destroyed.
EDIT: This is sample of my code:
class GamePick:
def __init__(self):
#there are also some labels and frames but they relate to my problem
def do_black_jack(self):
global bj
global bj_play
''' You pick Black Jack, open new window and close actual window'''
bj = Toplevel(self.master)
bj_play = BjGui(bj)
bj['bg'] = 'springgreen4'
bj.wm_geometry("1500x900")
Choices.close(my)
class BlackJack:
#There are some function for pick new card for player and dealer
def check_limit(self):#
if sum(player_cards_val) > 21:
if 11 in player_cards_val:
player_cards_val.remove(11)
player_cards_val.append(1)
self.check_limit()
else:
bj_play.show_result(res='Dealer win')
elif sum(bot_cards_val) > 21:
if 11 in bot_cards_val:
bot_cards_val.remove(11)
bot_cards_val.append(1)
self.check_sixteen()
else:
bj_play.show_result(res='Player win')
else:
if sum(player_cards_val) > sum(bot_cards_val):
bj_play.show_result(res='Player win')
else:
bj_play.show_result(res='Dealer win')
class BjGui:
def __init__(self, master):
self.master = master
self.bot_frame = Frame(self.master, height=False, width=False, bg='springgreen4')
bj.wm_geometry("1500x900")
self.bot_frame.pack(side='top', pady=20)
self.player_frame = Frame(self.master, height=False, width=False, bg='springgreen4')
bj.wm_geometry("1500x900")
self.player_frame.pack(side='bottom', pady=20)
self.buttons_frame = Frame(self.master, bg='springgreen4')
self.buttons_frame.pack(side=BOTTOM)
# There are function for this class, they show players cards and bots cards from class Black Jack
def show_result(self, res):#This function, create new label and button.
# New label show result of the game, after I click button, it should will restart
self.info_label = Label(self.master, text=res, font=('aharoni', 60), bg='springgreen4', pady=35)
self.info_label.pack()
self.again_frame = Frame(self.master)
self.again_frame.pack()
self.again_bt = Button(self.again_frame, text='play again', font=('times', 12), command=self.do_again, bg='blue')
self.again_bt.pack()
def do_again(self):# This should delete all frames and clear list with cards and restart game, but it doesn't work
self.info_label.destroy()
self.again_frame.destroy()
self.player_frame.destroy()
self.bot_frame.destroy()
self.buttons_frame.destroy()
bot_cards_key.clear()
bot_cards_val.clear()
player_cards_key.clear()
player_cards_val.clear()
no_repeat.clear()
BlackJack()
BjGui(bj)
I would like to create a Black Jack game. `class BlackJack` picks new cards for
the player and dealer, compares the values of these cards, and determines the winner.
`class BjGui` shows the player's and dealer's cards; there are some buttons, frames and
labels. When the game ends, I use the function `check_limit`, which determines the
winner, and the function `show_result` creates a new label with the result and a new
button. This button should be able to restart the game.
Answer: I don't think you want the object recursion that you have. You're making new
`A`s and `B`s inside the current `A` that you're working with - the top of the
stack. `hello` and `world` only refer to the bottom of the stack, not to all
the other `A`s and `B`s in your object recursion stack.
Also, I'm not sure why you want two different classes or two different frames.
I would have to understand what you want to give a suggestion on how to do it
with different frames and classes.
from tkinter import *
import random
class A:
def __init__(self):
self.master = Tk()
self.frame_a = None
self.label_a = None # not necessary, but common styling preference
self.button_b = None # not necessary, but common styling preference
self.clean()
self.master.mainloop()
def clean(self):
if self.frame_a is not None:
self.frame_a.destroy()
n = random.randrange(1, 10, 1)
self.frame_a = Frame(self.master)
self.frame_a.pack()
self.label_a = Label(self.frame_a, text=n)
self.label_a.pack()
self.button_b = Button(self.frame_a, text='again', command=self.clean)
self.button_b.pack()
hello = A()
|
Letsencrypt ImportError: No module named interface on amazon linux while renewing
Question: **Today when I tried to renew my certificates using this command I'm facing an
error**
/opt/letsencrypt/letsencrypt-auto renew --config /etc/letsencrypt/config.ini --agree-tos && apachectl graceful
**also tried this command**
/opt/letsencrypt/letsencrypt-auto renew
**Error :**
Traceback (most recent call last):
File "/root/.local/share/letsencrypt/bin/letsencrypt", line 7, in <module>
from certbot.main import main
File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/certbot/main.py", line 12, in <module>
import zope.component
File "/root/.local/share/letsencrypt/local/lib/python2.7/dist-packages/zope/component/__init__.py", line 16, in <module>
from zope.interface import Interface
ImportError: No module named interface
**I did a lot of research but found no solution.**
Answer: **After doing lots of research I found it.**
You have to unset Python install layout
unset PYTHON_INSTALL_LAYOUT
then update letsencrypt
/opt/letsencrypt/letsencrypt-auto -v
For more, refer to this blog: <https://o-mkar.com/facing-problem-while-renewing-
letsencrypt-certificates-importerror-no-module-named-interface-amazon-linux>
|
editing a specific line in python 3.5.1
Question: Having trouble with overwriting a specific line in Python 3.5.1. I know there
are solutions out there, but they use external modules or are lengthy and
specific to that person's problem.
Is there a line of code that could do that? Here is what I'm looking for:
File.write("insert text here","line to overwrite")
here is my code:
from time import sleep
import os
#functions
def pause():
pause = input("Paused press <ENTER> to continue")
print("attempting to open workspace......")
#
Fo = open("work1.txt", 'r+')#opens the file
print("workspace opened")
#mainloop :D
def Mainloop():
global Fo
Fo.close()
Fo = open("work1.txt", 'r+')
os.system('cls')
print('''
''')
print (" ======")
print (" Read")
print (" Write")
print (" ======")
print("")
INP = input(' >')
if INP == "Read":
Hm = input("how much?")
print("")
if Hm == 'All':
print(Fo.read())
else:
print(Fo.read(int(Hm)))
sleep(2)
pause()
os.system('cls')
Mainloop()
if INP == "Write":
TextToAdd = input("Text to Write: ")
Fo.write(TextToAdd)
os.system('cls')
Mainloop()
else:
print('Not avaliable')
sleep(2)
os.system('cls')
Mainloop()
Mainloop()
pause()
Answer: You want to replace a single line in the textfile with a different line?
    import sys

    for line in open(filename):
        if line.rstrip('\n') == search_pattern:
            sys.stdout.write('%s\n' % replacement)
        else:
            sys.stdout.write('%s' % line)
Instead of the `sys.stdout.write()`, you could e.g. populate a list, or write
into a (temporary) file, etc.
The important idea is to first get the required input (search pattern,
replacement string), and then iterate over the file line by line.
Solution properties:
* low memory usage (as you only keep a single line in memory)
* O(n) runtime - the more lines the file has, the longer this approach takes
Alternatively, if the file is guaranteed to be small, you can read the
contents into a string and use `str.replace()` to replace the line you are
searching for.
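A sketch of that whole-file variant, assuming the same `filename`, `search_pattern` and `replacement` names as above:

    with open(filename) as f:
        lines = f.read().splitlines()
    # Swap in the replacement wherever the pattern matches, then write back.
    lines = [replacement if line == search_pattern else line for line in lines]
    with open(filename, 'w') as f:
        f.write('\n'.join(lines) + '\n')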
|
Python scraping XHR returns ValueError: Too many values to unpack
Question: So for educational purposes I've got this piece of code written to scrape the
'detailed' tab of this webpage :
<https://www.whoscored.com/Regions/252/Tournaments/2/Seasons/5826/Stages/12496/TeamStatistics/England-
Premier-League-2015-2016>
However it gives me a ValueError: too many values to unpack.
I'm still at quite a beginner level with Python, so I can't figure out why
this shows up.
import requests
url = 'https://www.whoscored.com/Regions/252/Tournaments/2/Seasons/5826/Stages/12496/TeamStatistics/England-Premier-League-2015-2016'
params = {
'category:shots',
'subcategory:zones',
'statsAccumulationType:0',
'timeOfTheGameStart:0',
'timeOfTheGameEnd:5',
'stageId:12496',
'sortBy: Rating',
'page: 1',
'isCurrent: True'
}
headers = {
'User-Agent: Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'X-Requested-With: XMLHttpRequest',
'Host: www.whoscored.com'
'Referer: https://www.whoscored.com/Regions/252/Tournaments/2/Seasons/5826/Stages/12496/TeamStatistics/England-Premier-League-2015-2016',}
response = requests.get(url, params=params, headers=headers)
stats = response.json()
print stats
Answer: You are incorrectly defining the `params` - currently it is actually a set of
strings. Break down the strings to produce key-value pairs and create a
dictionary instead:
params = {
'category': 'shots',
'subcategory': 'zones',
'statsAccumulationType': '0',
'timeOfTheGameStart': '0',
'timeOfTheGameEnd': '5',
'stageId': '12496',
'sortBy': 'Rating',
'page': '1',
'isCurrent': 'True'
}
Same goes for `headers`:
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'X-Requested-With': 'XMLHttpRequest',
'Host': 'www.whoscored.com',
'Referer': 'https://www.whoscored.com/Regions/252/Tournaments/2/Seasons/5826/Stages/12496/TeamStatistics/England-Premier-League-2015-2016'
}
This would help to fix the error, but you would not get the JSON response as
is. I suspect the `url` you are using is not the right one, recheck that.
Also, if you want to simulate the request that you observe in Chrome Developer
Tools - do the following:
* right click on a particular request
* select "Copy as cURL"
* go to <http://curl.trillworks.com/> and convert it to Python+Requests code
|
python urlopen 403 inspite of URL being accessible via browsers
Question: Hi folks, the following URL
[https://www.nseindia.com/products/dynaContent/equities/indices/historicalindices.jsp?toDate=30-06-2016&fromDate=29-06-2016&indexType=NIFTY%2050](https://www.nseindia.com/products/dynaContent/equities/indices/historicalindices.jsp?toDate=30-06-2016&fromDate=29-06-2016&indexType=NIFTY%2050)
works perfectly fine when accessed via a browser, but my python code below
keeps throwing a 403. I trapped the error message, which says "access denied",
but the funny thing is the explanation, which reads - You don't have permission
to access
"http://www.nseindia.com/products/dynaContent/equities/indices/historicalindices.jsp"
on this server
Any pointers will be greatly appreciated !
Code inserted below (contents of INDEX file are just 2 lines "NIFTY 50" and
"NIFTY MIDCAP 50")
from urllib import urlencode
import urllib2
from bs4 import BeautifulSoup
from datetime import datetime
import csv
import time
import datetime
arr = [1,3,5,10]
url = "http://www.nseindia.com/products/dynaContent/equities/indices/historicalindices.jsp"
fo = open("options/multiyearreturn/INDEX_DATA.txt", "wb")
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
with open('options/multiyearreturn/INDEX', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',')
for row in spamreader:
print row
# cant take today since it will load with a days lag
ToDt = datetime.datetime.now() - datetime.timedelta(days=1)
if datetime.datetime.now().weekday() == 5:
ToDt = ToDt - datetime.timedelta(days=1)
elif datetime.datetime.now().weekday() == 6:
ToDt = ToDt - datetime.timedelta(days=2)
for x in range(len(arr)):
frmDt1 = ToDt - datetime.timedelta(days=1)
if frmDt1.weekday() == 5:
frmDt1 = frmDt1 - datetime.timedelta(days=1)
elif frmDt1.weekday() == 6:
frmDt1 = frmDt1 - datetime.timedelta(days=2)
values = {'indexType' : row[0], 'fromDate' : frmDt1.strftime("%d-%m-%Y"), 'toDate' : ToDt.strftime("%d-%m-%Y"), "User-Agent" : "Magic Browser" }
data = urlencode(values).replace('+','%20')
req = urllib2.Request(url, data)
print data
try:
response = urllib2.urlopen(req)
except urllib2.HTTPError, e:
print e.fp.read()
the_page = response.read()
soup = BeautifulSoup( the_page )
MFDTLS = soup.findAll('td', {'class': 'number'})
Answer: I just added this additional "Accept" entry to the headers:
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'}
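Presumably the dict is then passed as the headers argument when building the request, e.g. (variable names matching the question's code):

    req = urllib2.Request(url, data, hdr)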
the "Accept" header did the trick for me. Appreciate help!
|
Create FlowMap in Python OpenCV
Question: Updated question:
Would anyone be able to point me in the direction of any material that could
help me to plot an optical flow map in python? Ideally i want to find
something that provides a similar output to the video shown here:
<http://study.marearts.com/2014/04/opencv-study-calcopticalflowfarneback.html>
. Or something with a similar functional output
I have implemented the dense optical flow algorithm
(cv2.calcOpticalFlowFarneback). And from this i have been able to sample the
magnitudes at specified points of the image. The video feed that is being
input is 640x480, and i have set sample points to be at every fifth pixel
vertically and horizontally.
import cv2
import numpy as np
import matplotlib.pyplot as plt
cap = cv2.VideoCapture("T5.avi")
ret, frame1 = cap.read()
prvs = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame1)
hsv[..., 1] = 255
[R,C]=prvs.shape
count=0
while (1):
ret, frame2 = cap.read()
next = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
flow = cv2.calcOpticalFlowFarneback(prvs, next, None, 0.5, 3, 15, 2, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
RV=np.arange(5,480,5)
CV=np.arange(5,640,5)
# These give arrays of points to sample at increments of 5
if count==0:
count =1 #so that the following creation is only done once
[Y,X]=np.meshgrid(CV,RV)
# makes an x and y array of the points specified at sample increments
temp =mag[np.ix_(RV,CV)]
# this makes a temp array that stores the magnitude of flow at each of the sample points
motionvectors=np.array((Y[:],X[:],Y[:]+temp.real[:],X[:]+temp.imag[:]))
Ydist=motionvectors[0,:,:]- motionvectors[2,:,:]
Xdist=motionvectors[1,:,:]- motionvectors[3,:,:]
Xoriginal=X-Xdist
Yoriginal=Y-Ydist
plot2 = plt.figure()
plt.quiver(Xoriginal, Yoriginal, X, Y,
color='Teal',
headlength=7)
plt.title('Quiver Plot, Single Colour')
plt.show(plot2)
hsv[..., 0] = ang * 180 / np.pi / 2
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('frame2', bgr)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
prvs = next
cap.release()
cv2.destroyAllWindows()
I think I have calculated the original and final X,Y positions of the pixels
and the distances they moved, and have put these into a matplotlib quiver plot.
The result I get does not coincide with the hsv plot of the dense optical flow
(which I know to be correct, as it was taken from the OpenCV tutorials), and the
quiver plot also only shows one frame at a time; the plot must be exited
before the next one displays.
Can anyone see where I have gone wrong in my calculations and how I can make
the plot update automatically with each frame?
Answer: I do not know how to change the behaviour of matplotlib quiver plots, but I'm
sure it is possible.
An alternative is to create a function to draw lines on top of the original
image, based on the calculated optical flow. The following code should achieve
this:
def dispOpticalFlow( Image,Flow,Divisor,name ):
"Display image with a visualisation of a flow over the top. A divisor controls the density of the quiver plot."
PictureShape = np.shape(Image)
#determine number of quiver points there will be
Imax = int(PictureShape[0]/Divisor)
Jmax = int(PictureShape[1]/Divisor)
#create a blank mask, on which lines will be drawn.
mask = np.zeros_like(Image)
for i in range(1, Imax):
for j in range(1, Jmax):
X1 = (i)*Divisor
Y1 = (j)*Divisor
X2 = int(X1 + Flow[X1,Y1,1])
Y2 = int(Y1 + Flow[X1,Y1,0])
X2 = np.clip(X2, 0, PictureShape[0])
Y2 = np.clip(Y2, 0, PictureShape[1])
#add all the lines to the mask
mask = cv2.line(mask, (Y1,X1),(Y2,X2), [255, 255, 255], 1)
#superpose lines onto image
img = cv2.add(Image,mask)
#print image
cv2.imshow(name,img)
return []
This code only creates lines rather than arrows, but with some effort it could
be modified to display arrows.
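For example, it could be called from inside the question's main loop right after `flow` is computed (the divisor of 10 here just controls the quiver density):

    dispOpticalFlow(frame2, flow, 10, 'flow map')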
|
Browser not displaying POST data PHP?
Question: I have written a program to send data from python to PHP. The python code is
as follows:
import urllib2, urllib
mydata=[('one','Check'),('two','Mate')] #The first is the var name the second is the value
mydata=urllib.urlencode(mydata)
path='http://localhost/index.php' #the url you want to POST to
req=urllib2.Request(path, mydata)
req.add_header("Content-type", "application/x-www-form-urlencoded")
page=urllib2.urlopen(req).read()
print page
The corresponding index.php PHP script is as follows:
<?php
echo $_POST['one'];
echo $_POST['two'];
?>
However, when I run index.php from localhost, it is not displaying anything. I
think it should be displaying CheckMate. But when running the python script I
get the response back; `print page` prints `CheckMate`.
Where could be the problem?
Answer: Your Python script is already sending a `POST`; passing a `data` argument to
`urllib2.Request` makes the request a POST, which is why `print page` shows
`CheckMate`. Opening `index.php` directly in the browser issues a plain `GET`
with no form data, so `$_POST['one']` and `$_POST['two']` are empty and the page
displays nothing. To see output in the browser, submit a form that POSTs those
fields to the page.
|
Python function replacing part of variable
Question: I am writing a code for a project in particle physics (using pyroot).
In my first draft, I use the following line
for i in MyTree:
pion.SetXYZM(K_plus_PX, K_plus_PY, K_plus_PZ,K_plus_MM)
This basically assigns to the pion the values of variables in the parenthesis,
ie momenta and inv. mass of the kaon.
Physics aside, I would like to write a function "of the form":
def myfunc(particle):
return %s_PX % particle
I know this is wrong. What I would like to achieve is to write a function that
allows, for a given particle, to set particle_PX, particle_PY etc to be the
arguments of SetXYZM.
Thank you for your help,
B
Answer: To access class attributes from string variables you can use python's
`getattr`:
import ROOT
inputfile = ROOT.TFile.Open("somefile.root","read")
inputtree = inputfile.Get("NameOfTTree")
inputtree.Print()
# observe that there are branches
# K_plus_PX
# K_plus_PY
# K_plus_PZ
# K_plus_MM
# K_minus_PX
# K_minus_PY
# K_minus_PZ
# K_minus_MM
# pi_minus_PX
# pi_minus_PY
# pi_minus_PZ
# pi_minus_MM
def getx(ttree,particlename):
return getattr(ttree,particlename+"_PX")
def gety(ttree,particlename):
return getattr(ttree,particlename+"_PY")
def getz(ttree,particlename):
return getattr(ttree,particlename+"_PZ")
def getm(ttree,particlename):
return getattr(ttree,particlename+"_MM")
def getallfour(ttree,particlename):
x = getattr(ttree,particlename+"_PX")
y = getattr(ttree,particlename+"_PY")
z = getattr(ttree,particlename+"_PZ")
m = getattr(ttree,particlename+"_MM")
return x,y,z,m
for entry in xrange(inputtree.GetEntries()):
inputtree.GetEntry(entry)
pion1 = ROOT.TLorentzVector()
x = getx(inputtree,"K_plus")
y = gety(inputtree,"K_plus")
z = getz(inputtree,"K_plus")
m = getm(inputtree,"K_plus")
pion1.SetXYZM(x,y,z,m)
x,y,z,m = getallfour(inputtree,"pi_minus")
pion2 = ROOT.TLorentzVector()
pion2.SetXYZM(x,y,z,m)
As linked by Josh Caswell, you can similarly access variable names:
def getx(particlename):
return globals()[particlename+"_PX"]
though that might get nasty quickly as of whether your variables are global or
local and for local, in which context.
|
curl works fine, except if I call it with subprocess
Question: I have a curl that looks a bit like this:
curl -1 -X POST --user "xxx:yyy" -d "status=new&content=issue+details+at%3A+http%3A%2F%2Flocalhost%3A6543%2Ftest%2Fsubmit%2F16-07-03-H-20-18-&kind=bug&title=QA+Fail&responsible=xxx&priority=critical" "https://api.bitbucket.org/1.0/repositories/my/repo/issues"
If I open up a terminal and just execute it it works fine (an issue is created
in bitbucket)
if I try to execute the same curl via subprocess it just fails:
sCmd = "curl....etc"
lCmd = [s for s in sCmd.split() if s]
subprocess.call(lCmd)
I get the error message:
curl: (1) Protocol "https not supported or disabled in libcurl
I don't get why the exact same command works so differently in Python. Any
ideas?
This is without use of a virtualenv, by the way. And I know for a fact that
the contents of `lCmd` are valid
PS: yes, I know I should be using requests. Unfortunately requests was giving
me similar problems.
Answer: I can't answer your question about using the `curl` command in `subprocess`,
but it might work if you just call `os.system`, like this:
import os
os.system ("curl -1 -X POST --user \"xxx:yyy\" -d \"status=new&content=issue+details+at%3A+http%3A%2F%2Flocalhost%3A6543%2Ftest%2Fsubmit%2F16-07-03-H-20-18-&kind=bug&title=QA+Fail&responsible=xxx&priority=critical\" \"https://api.bitbucket.org/1.0/repositories/my/repo/issues\"")
This most likely works knowing that `curl` runs via the terminal directly.
> This might help with your question about `curl` not running in subprocess
> (probably has something to do with the environment of subprocess being
> different then the environment that `curl` was configured):
> <https://curl.haxx.se/docs/faq.html#curl_1_SSL_is_disabled_https>
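If `os.system` works, the closest `subprocess` equivalent is usually to hand the
original command string to the shell instead of splitting it yourself (a sketch,
reusing the `sCmd` string from the question; note `shell=True` has the usual
injection caveats if the string ever contains untrusted input):

    import subprocess

    # Run through the same shell a terminal would use, so quoting and
    # environment behave as they did interactively.
    subprocess.call(sCmd, shell=True)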
|
GAE datastore restore stops with The API call urlfetch.Fetch() took too long to respond and was cancelled
Question: I am following this guide
<https://cloud.google.com/appengine/docs/python/console/datastore-backing-up-
restoring#restoring_data_to_another_app> on how to backup data in one GAE app
and restore it in another.
But every time I restore the backup on the target application I get the error:
The API call urlfetch.Fetch() took too long to respond and was cancelled.
Any ideas what I am doing wrong?
Answer: Your urlfetch.Fetch() is taking too long (greater than 60 seconds) to respond,
and so it is timing out. Here is an article about it:
<https://cloud.google.com/appengine/articles/deadlineexceedederrors>
One solution is to use task queues. Task queues have a longer timeout or, more
appropriately, let you chop the job up into smaller parts.
<https://cloud.google.com/appengine/docs/python/taskqueue/>
Here is a simple example of how to do this with "push" task queues. I realize
going from one datastore model to another might not be the redundancy you are
looking for. You may want to backup the datastore entities to another app
entirely or another type of database or cloud service. You also probably have
multiple models you are backing up. This is just a simple example of setting
up and schedule a "push" task queue using a cron job every 24 hours:
first you have to add "deferred" to the builtins in your app.yaml:
builtins:
- deferred: on
Next you need to create a second datastore model we will call "Backup". Just
copy-paste your old model and rename it Backup. It helps to use an identical
copy of the model for backups, rather than reusing the same model, because you
can then give the backup the same key as the primary:
class Backup(db.Model): # example
prop1 = db.StringProperty()
prop2 = db.StringListProperty()
prop3 = db.StringProperty()
Next setup a cron job in your cron.yaml:
- description: Creates a backup of the target db every 24 hours at 10:45 GMT
url: /backup
schedule: every day 10:45
Add /backup to your app.yaml handlers:
- url: /backup
script: mybackup.py
login: admin
Finally, create mybackup.py
import logging

from google.appengine.ext import deferred
from google.appengine.ext import db
#from google.appengine.ext import ndb
def backup_my_model(model_name):
"""
Takes all enities in the model_name model and copies it to Backup model
"""
logging.info("Backing up %s" % model_name)
query = db.GqlQuery('SELECT * From %s ' % model_name)
for primary_db in query:
backup = Backup(key_name = primary_db.key_name)
backup.prop1 = primary_db.prop1
backup.prop2 = primary_db.prop2
...
backup.put()
deferred.defer(backup_my_model, 'MyModel') # where 'MyModel' is the name of the model you want to back up
deferred.defer(backup_my_model, 'MyOtherModel')
...
deferred.defer(backup_my_model, 'MyFinalModel')
I hope that helps.
|
How can I collect this data from a div using Selenium and Python
Question: I have been using Selenium and Python to scrape a webpage and I am having
difficulty collecting data that I want out of a div that has the following
structure:
<div class="col span_6" style="margin-left: 12px;width: 47% !important;">
<div class="MainGridRow">
<span class="MainGridcolumn1">Heading1</span>
<span class="MainGridcolumn2">Text that I want</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Another heading</span>
<span class="MainGridcolumn2">More text that I want</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Next heading</span>
<span class="MainGridcolumn2">Even more text</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Yet another heading</span>
<span class="MainGridcolumn2">Piece of text</span>
</div>
</div>
The div has a number of rows, each with 2 columns containing the data/text
inside of span tags. There are no CSS ids.
I'm only interested in collecting the text contained within the
'MainGridcolumn2' span classes.
I've tried the below to navigate to the first heading, with the intention of
then trying to use 'following_sibling' to move down to the next span tag
containing the text, but I can't even get this to work as it isn't returning
any text when I try to print it to the console:
driver.find_element_by_xpath("//span['@class=MainGridcolumn1'][contains(text(), 'Heading1')]").text
and
driver.find_element_by_xpath("//span[contains(text(), 'Heading1')]").text
Would appreciate some help!
Thanks
Answer: One way would be to get the enclosing div, i.e. the grandparent, and pull the
spans from that:
h = """<div class="col span_6" style="margin-left: 12px;width: 47% !important;">
<div class="MainGridRow">
<span class="MainGridcolumn1">Heading1</span>
<span class="MainGridcolumn2">Text that I want</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Another heading</span>
<span class="MainGridcolumn2">More text that I want</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Next heading</span>
<span class="MainGridcolumn2">Even more text</span>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Yet another heading</span>
<span class="MainGridcolumn2">Piece of text</span>
</div>
</div>
<div class="MainGridRow">
<span class="MainGridcolumn1">Yet another heading</span>
<span class="MainGridcolumn2">Piece of text I don't want</span>
</div>"""
from lxml import html
xm = html.fromstring(h)
div = xm.xpath("//span[@class='MainGridcolumn1'][contains(text(), 'Heading1')]/../..")[0]
print(div.xpath(".//span[@class='MainGridcolumn2']/text()"))
Which would give you:
['Text that I want', 'More text that I want', 'Even more text', 'Piece of text']
You could also just select the parent and get the parents siblings
from lxml import html
xm = html.fromstring(h)
div = xm.xpath("//span[@class='MainGridcolumn1'][contains(text(), 'Heading1')]/..")[0]
print(div.xpath(".//span[@class='MainGridcolumn2']/text() | .//following-sibling::div/span[@class='MainGridcolumn2']/text()"))
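Since the question was about Selenium, the same XPath idea can be used there
directly; a sketch (assuming `driver` has already loaded the page, and fixing
the quoting of the `@class` predicate from the question's attempt):

    spans = driver.find_elements_by_xpath(
        "//span[@class='MainGridcolumn1'][contains(text(), 'Heading1')]"
        "/../..//span[@class='MainGridcolumn2']")
    texts = [s.text for s in spans]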
|
TkMessageBox - No Module
Question:
import TkMessageBox
When I import TkMessageBox it displays the message _'ImportError: No module
named 'TkMessageBox'_.
As far as I know I'm using Python 3.3.2 and Tk 8.5.
Am I using the wrong version of Python, or am I importing it wrong?
Any answers would be extremely useful. Alternatively, is there something
similar in the version I am using?
Answer: In Python3.x things have changed a little bit:
>>> import tkinter
>>> import tkinter.messagebox
>>>
I mean what we call `tkMessageBox` in Python2.x becomes `tkinter.messagebox`
in Python3.x
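For example, a minimal usage sketch:

    import tkinter
    from tkinter import messagebox

    root = tkinter.Tk()
    root.withdraw()  # hide the empty root window
    messagebox.showinfo("Title", "Hello from tkinter.messagebox")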
|
python sklearn KDTree with haversine distance
Question: I try to create a KD tree of WGS84 coordinates and find neighbors within a
certain radius
from sklearn.neighbors.dist_metrics import DistanceMetric
from sklearn.neighbors.kd_tree import KDTree
T = KDTree([[47.8665, 8.90123]], metric=DistanceMetric.get_metric('haversine'))
But get the following error:
ValueError: metric HaversineDistance is not valid for KDTree
How can I use haversine distance in a KD-Tree?
Answer: `KDTree.valid_metrics`
Output -
['p',
'l1',
'chebyshev',
'manhattan',
'minkowski',
'cityblock',
'l2',
'euclidean',
'infinity']
Which tells you that you can't use `haversine` with a KDTree. The reason is
that the `haversine` distance gives you the
[orthodromic](https://en.wikipedia.org/wiki/Great-circle_distance)
(great-circle) distance, the distance measure used when your points lie on a
sphere, whereas a k-d tree partitions the points with axis-aligned splits,
which is not valid for such a metric.
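A common workaround is `BallTree`, which does list `haversine` among its valid
metrics; a sketch (note that haversine expects coordinates in radians, and the
query radius is a fraction of the Earth's radius, both assumptions about your
units):

    import numpy as np
    from sklearn.neighbors import BallTree

    points = np.radians([[47.8665, 8.90123]])  # lat/lon in radians
    tree = BallTree(points, metric='haversine')
    # Neighbours within 5 km: radius = distance / Earth radius (~6371 km)
    ind = tree.query_radius(np.radians([[47.87, 8.90]]), r=5.0 / 6371.0)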
|
Adding a .txt to an already zipped file
Question: I have created this small python program to automate some processes I want to
run. Long story short I use python to pass some information and parameters to
a outside program. The outside program does its thing and zips up the results.
What I am trying to do is add a "licence.txt" file to the already zipped
results. This is what I have:
import zipfile
#List of files for the licence to be inserted into
data = [ loc + r"\FME_Data\FME_GENERIC_OUTPUT\BASE\BASE_NS_CivicAddress_File_UT83_GDB.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\BASE\BASE_NS_CivicAddress_File_UT83_MIF.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\BASE\BASE_NS_CivicAddress_File_UT83_SHP.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\BND\BND_NS_Community_Bndys_UT83_GDB.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\BND\BND_NS_Community_Bndys_UT83_MIF.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\BND\BND_NS_Community_Bndys_UT83_SHP.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\TRNS\TRNS_NSRN_Addressed_Roads_UT83_GDB.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\TRNS\TRNS_NSRN_Addressed_Roads_UT83_MIF.zip",
loc + r"\FME_Data\FME_GENERIC_OUTPUT\TRNS\TRNS_NSRN_Addressed_Roads_UT83_SHP.zip"]
count, tot = 0,8
while count <= tot:
z = zipfile.ZipFile(data[count] , "a")
z.write ("Licence.txt")
z.close()
count = count + 1
This runs and it inserts the .txt into the zip files, but it is also copying
all the other individual .zip files into each zipped directory. So I end up
with the 9 separate zipped files zipped into each directory. There should only
be one file type in each .zip. I feel as though I made a real simple mistake
but I can't find it... Any ideas?
Answer: I don't see why yours would do that but try this, it is a bit simpler:
import zipfile
for filepath in data:
with zipfile.ZipFile(filepath, 'a') as myzip:
        myzip.write('Licence.txt')
|
How can I update a specific value on a custom configuration file?
Question: Assuming I have a configuration txt file with this content:
{"Mode":"Classic","Encoding":"UTF-8","Colors":3,"Blue":80,"Red":90,"Green":160,"Shortcuts":[],"protocol":"2.1"}
How can i change a specific value like `"Red":90` to `"Red":110` in the file
without changing its original format?
I have tried with configparser and configobj but as they are designed for .INI
files I couldn't figure out how to make it work with this custom config file.
I also tried splitting the lines, searching for the keywords whose values I
wanted to change, but couldn't save the file the same way it was before. Any
ideas how to solve this? (I'm very new to Python)
Answer: this looks like json so you could:
import json

with open("/path/to/jsonfile", "r") as f:
    obj = json.load(f)
obj["Red"] = 110
with open("/path/to/mynewfile", "w") as f:
    json.dump(obj, f)
but be aware that a json dict does not have an order. So the order of the
elements is not guaranteed (and normally it's not needed) json lists have an
order though.
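If keeping the original key order and compact layout matters, one sketch is to
load into an `OrderedDict` and dump with compact separators:

    import json
    from collections import OrderedDict

    with open("/path/to/jsonfile") as f:
        obj = json.load(f, object_pairs_hook=OrderedDict)
    obj["Red"] = 110
    with open("/path/to/jsonfile", "w") as f:
        json.dump(obj, f, separators=(",", ":"))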
|
Finding the self-consistent solution to an equation
Question: At the bottom of this question are a set of functions transcribed from a
published neural-network model. When I call `R`, I get the following error:
> RuntimeError: maximum recursion depth exceeded while calling a Python object
Note that within each call to `R`, a recursive call to `R` is made for every
other neuron in the network. This is what causes the recursion depth to be
exceeded. Each return value for `R` depends on all the others (with the
network involving `N = 512` total values.) Does anyone have any idea what
method should be used to compute the self-consistent solution for `R`? Note
that `R` itself is a smooth function. I've tried treating this as a vector
root-solving problem -- but in this case the 512 dimensions are not
independent. With so many degrees of freedom, the roots are never found (using
the `scipy.optimize` functions). Does Python have any tools that can help with
this? Maybe it would be more natural to solve `R` using something like
Mathematica? I don't know how this is normally done.
"""Recurrent model with strong excitatory recurrence."""
import numpy as np
l = 3.14
def R(x_i):
"""Steady-state firing rate of neuron at location x_i.
Parameters
----------
x_i : number
Location of this neuron.
Returns
-------
rate : float
Firing rate.
"""
N = 512
T = 1
x = np.linspace(-2, 2, N)
sum_term = 0
for x_j in x:
sum_term += J(x_i - x_j) * R(x_j)
rate = I_S(x_i) + I_A(x_i) + 1.0 / N * sum_term - T
if rate < 0:
return 0
return rate
def I_S(x):
"""Sensory input.
Parameters
----------
x : number
Location of this neuron.
Returns
-------
float
Sensory input to neuron at x.
"""
S_0 = 0.46
S_1 = 0.66
x_S = 0
sigma_S = 1.31
return S_0 + S_1 * np.exp(-0.5 * (x - x_S) ** 2 / sigma_S ** 2)
def I_A(x):
"""Attentional additive bias.
Parameters
----------
x : number
Location of this neuron.
Returns
-------
number
Additive bias for neuron at x.
"""
x_A = 0
A_1 = 0.089
sigma_A = 0.35
A_0 = 0
sigma_A_prime = 0.87
if np.abs(x - x_A) < l:
return (A_1 * np.exp(-0.5 * (x - x_A) ** 2 / sigma_A ** 2) +
A_0 * np.exp(-0.5 * (x - x_A) ** 2 / sigma_A_prime ** 2))
return 0
def J(dx):
"""Connection strength.
Parameters
----------
dx : number
Neuron i's distance from neuron j.
Returns
-------
number
Connection strength.
"""
J_0 = -2.5
J_1 = 8.5
sigma_J = 1.31
if np.abs(dx) < l:
return J_0 + J_1 * np.exp(-0.5 * dx ** 2 / sigma_J ** 2)
return 0
if __name__ == '__main__':
pass
Answer: This recursion never ends, since there is no termination condition before the
recursive call; adjusting the maximum recursion depth does not help:
def R(x_i):
...
for x_j in x:
sum_term += J(x_i - x_j) * R(x_j)
Perhaps you should be doing something like
# some suitable initial guess
state = guess
while True: # or a fixed number of iterations
next_state = compute_next_state(state)
if some_condition_check(state, next_state):
# return answer
return state
if some_other_check(state, next_state):
# something wrong, terminate
raise ...
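Concretely, for the model above a damped fixed-point iteration over the whole
rate vector might look like this (a sketch: the tolerance, damping factor,
iteration cap and zero initial guess are assumptions, while `J`, `I_S` and
`I_A` are the functions defined in the question):

    import numpy as np

    def solve_rates(N=512, T=1.0, tol=1e-8, max_iter=10000, damping=0.5):
        x = np.linspace(-2, 2, N)
        # Precompute the N x N connection matrix J(x_i - x_j) and the inputs
        Jmat = np.array([[J(xi - xj) for xj in x] for xi in x])
        I = np.array([I_S(xi) + I_A(xi) for xi in x])
        R = np.zeros(N)  # initial guess
        for _ in range(max_iter):
            R_new = np.maximum(I + Jmat.dot(R) / N - T, 0.0)
            if np.max(np.abs(R_new - R)) < tol:
                return R_new
            R = (1 - damping) * R + damping * R_new
        raise RuntimeError("fixed-point iteration did not converge")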
|
DRF Create and Retrieve m2m with through model
Question: I want to save m2m relationship with through model
# models.py
from django.db import models
class Student(utils.PersonalDetailsMixin, utils.ContactDetailsMixin, TimeStampedModel):
guardians = models.ManyToManyField('core.Guardian', through='Relation')
class Relation(models.Model):
student = models.ForeignKey('core.Student')
guardian = models.ForeignKey('core.Guardian')
relation_type = models.CharField(max_length=1, choices=utils.RELATION_CHOICES)
# Only used when relation_type is Other
relation_name = models.CharField(max_length=25, null=True, blank=True)
class Meta:
unique_together = (
('student', 'guardian')
)
class Guardian(utils.PersonalDetailsMixin, utils.ContactDetailsMixin, TimeStampedModel):
pass
# serializers.py
class StudentSerializer(serializers.ModelSerializer):
guardians = GuardianSerializer(many=True)
class Meta:
model = Student
class GuardianSerializer(serializers.ModelSerializer):
class Meta:
model = Guardian
class RelationSerializer(serializers.ModelSerializer):
class Meta:
model = Relation
First:
relation_type = serializers.ChoiceField(choices=RELATION_CHOICES)
relation_name = serializers.CharField(max_length=25, required=False, allow_null=True, allow_blank=True)
I tried adding `relation_type` and `relation_name` to `GuardianSerializer` but
got this error
Got AttributeError when attempting to get a value for field `relation_type` on serializer `GuardianSerializer`.
The serializer field might be named incorrectly and not match any attribute or key on the `Guardian` instance.
Original exception text was: 'Guardian' object has no attribute 'relation_type'.
Which is fine DRF should give this error.
Second:
class StudentSerializer(serializers.ModelSerializer):
school = serializers.HiddenField(default='')
guardians = RelationSerializer(source='relation_set', many=True)
class GuardianSerializer(serializers.ModelSerializer):
class Meta:
model = Guardian
class RelationSerializer(serializers.ModelSerializer):
guardian = GuardianSerializer(many=True)
class Meta:
model = Relation
Added `RelationSerializer` to `StudentSerializer` and `GuardianSerializer` to
`RelationSerializer`
Now I am getting `'Guardian' object is not iterable`
Traceback:
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/viewsets.py" in view
87. return self.dispatch(request, *args, **kwargs)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
466. response = self.handle_exception(exc)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
463. response = handler(request, *args, **kwargs)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/mixins.py" in list
48. return Response(serializer.data)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in data
674. ret = super(ListSerializer, self).data
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in data
239. self._data = self.to_representation(self.instance)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation
614. self.child.to_representation(item) for item in iterable
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation
472. ret[field.field_name] = field.to_representation(attribute)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation
614. self.child.to_representation(item) for item in iterable
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation
472. ret[field.field_name] = field.to_representation(attribute)
File "/home/prime/.virtualenvs/omapi/local/lib/python2.7/site-packages/rest_framework/serializers.py" in to_representation
614. self.child.to_representation(item) for item in iterable
Exception Type: TypeError at /v1/students/
Exception Value: 'Guardian' object is not iterable
How do I save a `Student` instance together with `Guardian` and `Relation`?
How can I achieve this? I know a custom create method is required, but how do
I get past this error? Or is there any other way to `serialize and
deserialize` this kind of relation?
Answer: You'll need to have the `MembershipSerializer` nested in the `GroupSerializer`
(replacing the `PersonSerializer`), and have the `MembershipSerializer` itself
nest a `PersonSerializer`, in order for this to work. (This follows the
Group/Person/Membership example from the DRF docs; in your models, `Student`
plays the role of `Group`, `Guardian` of `Person`, and `Relation` of
`Membership`.)
Alternatively, you could replace the `PersonSerializer` with a regular
`Serializer` and build the dataset explicitly yourself instead of relying on
automated DRF.
Edit following the question edition:
class RelationSerializer(serializers.ModelSerializer):
guardian = GuardianSerializer(many=True)
Since the relation between `Relation` and `Guardian` is a `ForeignKey` you
need to remove the `many=True` here.
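For the saving part, a minimal sketch of a writable nested setup with a custom
`create` (field names follow the models above; updates and error handling are
left out, and the payload shape is an assumption):

    class RelationSerializer(serializers.ModelSerializer):
        guardian = GuardianSerializer()  # single object, no many=True

        class Meta:
            model = Relation

    class StudentSerializer(serializers.ModelSerializer):
        guardians = RelationSerializer(source='relation_set', many=True)

        class Meta:
            model = Student

        def create(self, validated_data):
            relations_data = validated_data.pop('relation_set', [])
            student = Student.objects.create(**validated_data)
            for relation_data in relations_data:
                guardian_data = relation_data.pop('guardian')
                guardian = Guardian.objects.create(**guardian_data)
                Relation.objects.create(student=student, guardian=guardian,
                                        **relation_data)
            return student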
|
Plain Text export from Google Docs with PYdrive has a centered dot in Github
Question: I wrote a Pydrive script which downloads all the files in a specific folder.
The docs get downloaded as 'sampleTitle.md' with the mimetype of 'text/plain'.
Then they simply get committed and pushed to my repo.
Here is my python code for pydrive:
def checkFile(arg):
if arg['mimeType'] in mimetypes:
downloadFile(arg)
print('The file ' + str(arg['title']) + ' has a mimetype of ' + arg['mimeType'] + ' and will be downloaded')
return
if arg['mimeType'] in folder:
enterFolder(arg['id'])
print('The file ' + str(arg['title']) + ' has a mimetype of ' + arg['mimeType'] + ' and will be entered')
return
def enterFolder(query):
file_list = drive.ListFile({'q': '\'' + query + '\' in parents and trashed=false'}).GetList()
for file1 in file_list:
checkFile(file1)
return
def downloadFile(arg):
download_mimetype = None
download_mimetype = mimetypes[arg['mimeType']]
arg.GetContentFile(arg['title'], mimetype=download_mimetype)
print(arg['title'] + 'got downloaded')
return
import sys
sys.path.insert(1, '/Library/Python/2.7/site-packages')
from pydrive.auth import GoogleAuth
gauth = GoogleAuth()
gauth.LocalWebserverAuth() # Creates local webserver and auto handles authentication.
from pydrive.drive import GoogleDrive
mimetypes = {
# Drive Document files as plain text.
'application/vnd.google-apps.document': 'text/plain'
# etc.
}
folder = {
# Comparing for folder.
'application/vnd.google-apps.folder': 'true'
# etc.
}
# Create GoogleDrive instance with authenticated GoogleAuth instance.
drive = GoogleDrive(gauth)
# Auto-iterate through all files that matches this query
enterFolder('starfolder')
The code works and the files are downloaded.
In Google Docs the start of a file looks like this:
---
layout: post
title: title
---
It's a YAML front matter which I need for Jekyll and GitHub Pages.
When I download the file and push it to my repo it looks like this:
·---
layout: post
title: title
---
I really don't know where that centered dot comes from. It only appears on
GitHub and is hidden in all of my editors (Atom, TextWrangler, Brackets,
TextEdit, Visual Studio Code). When I hit backspace where the dot should be
in the editor, the hidden character is removed. In Nano it is shown as
whitespace.
I have to remove this whitespace somehow because it disrupts my markdown
format. Is there an effective solution?
# Edit
I found the culprit: it's a BOM which gets set at the start of the document. I
am now trying to remove it using a shell command, but I can't find one that
works. I tried the following, for example:
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' text.md > text.md
sed '1 s/\xEF\xBB\xBF//' < text.md > text.md
They remove the complete content of the files instead of only the BOM.
So, does anyone know what I am doing wrong with the command line? Everyone
else seems to get the command working.
Answer: When a file with mimetype "application/vnd.google-apps.document" is downloaded
as "text/plain", a BOM gets inserted.
This BOM is interpreted as whitespace in nano and shown as · on GitHub.
The command for removing BOMs works only when the output goes to a different
file: the shell truncates the redirection target before `awk` reads it, so
redirecting back to the same file empties it.
not working:
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' text.md > text.md
working for me:
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' text > text.md
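The BOM could also be stripped on the Python side right after the download,
avoiding the shell step entirely (a sketch):

    import codecs

    def strip_bom(path):
        # Rewrite the downloaded file without a leading UTF-8 BOM
        with open(path, 'rb') as f:
            content = f.read()
        if content.startswith(codecs.BOM_UTF8):
            with open(path, 'wb') as f:
                f.write(content[len(codecs.BOM_UTF8):])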
|
How do I get only those lines that has highest value if they are inside a timewindow?
Question: I am new to Python and scripting in general, so I would really appreciate
some guidance in writing a Python script. So, to the point:
I have a big number of files in a directory. Some files are empty, others
contain rows like this:
16 2009-09-30T20:07:59.659Z 0.05 0.27 13.559 6
16 2009-09-30T20:08:49.409Z 0.22 0.312 15.691 7
16 2009-09-30T20:12:17.409Z -0.09 0.235 11.826 4
16 2009-09-30T20:12:51.159Z 0.15 0.249 12.513 6
16 2009-09-30T20:15:57.209Z 0.16 0.234 11.776 4
16 2009-09-30T20:21:17.109Z 0.38 0.303 15.201 6
16 2009-09-30T20:23:47.959Z 0.07 0.259 13.008 5
16 2009-09-30T20:32:10.109Z 0.0 0.283 14.195 5
16 2009-09-30T20:32:10.309Z 0.0 0.239 12.009 5
16 2009-09-30T20:37:48.609Z -0.02 0.256 12.861 4
16 2009-09-30T20:44:19.359Z 0.14 0.251 12.597 4
16 2009-09-30T20:48:39.759Z 0.03 0.284 14.244 5
16 2009-09-30T20:49:36.159Z -0.07 0.278 13.98 4
16 2009-09-30T20:57:54.609Z 0.01 0.304 15.294 4
16 2009-09-30T20:59:47.759Z 0.27 0.265 13.333 4
16 2009-09-30T21:02:56.209Z 0.28 0.272 13.645 6
and so on.
I want to get these lines out of the files into a new file. But there are some
conditions! If two or more successive lines are inside a time window of 6
seconds, then only the line with the highest threshold should be written into
the new file.
So, something like that:
Original:
16 2009-09-30T20:32:10.109Z 0.0 0.283 14.195 5
16 2009-09-30T20:32:10.309Z 0.0 0.239 12.009 5
in output file:
16 2009-09-30T20:32:10.109Z 0.0 0.283 14.195 5
Keep in mind that lines from different files can have times inside the same
6 s window, so the line that ends up in the output is the one with the highest
threshold across all files.
The code that explains what is what in the lines is here:
import glob
from datetime import datetime
path = './*.cat'
files=glob.glob(path)
for file in files:
in_file=open(file, 'r')
out_file = open("times_final", "w")
for line in in_file.readlines():
split_line = line.strip().split(' ')
template_number = split_line[0]
t = datetime.strptime(split_line[1], '%Y-%m-%dT%H:%M:%S.%fZ')
mag = split_line[2]
num = split_line[3]
threshold = float(split_line[4])
no_detections = split_line[5]
in_file.close()
out_file.close()
Thank you very much for hints, guidelines, ...
Answer: You said in the comments you know how to merge multiple files into one sorted
by `t`, and that the 6-second windows start with the first row and are based
on actual data.
So, you need a way to remember the maximum threshold per window and write only
after you are sure you have processed all rows in a window. Sample
implementation:
from datetime import datetime, timedelta
from csv import DictReader, DictWriter
fieldnames=("template_number", "t", "mag","num", "threshold", "no_detections")
with open('master_data') as f_in, open("times_final", "w") as f_out:
reader = DictReader(f_in, delimiter=" ", fieldnames=fieldnames)
writer = DictWriter(f_out, delimiter=" ", fieldnames=fieldnames,
lineterminator="\n")
window_start = datetime(1900, 1, 1)
window_timedelta = timedelta(seconds=6)
window_max = 0
window_row = None
for row in reader:
try:
t = datetime.strptime(row["t"], "%Y-%m-%dT%H:%M:%S.%fZ")
threshold = float(row["threshold"])
        except ValueError:
            # replace by actual error handling
            print("Problem with: {}".format(row))
            continue  # skip malformed rows; t and threshold are undefined here
# switch to new window after 6 seconds
if t - window_start > window_timedelta:
# write out previous window before switching
if window_row:
writer.writerow(window_row)
window_start = t
window_max = threshold
window_row = row
# remember max threshold inside a single window
elif threshold > window_max:
window_max = threshold
window_row = row
# don't forget the last window
if window_row:
writer.writerow(window_row)
|
Can not infer schema for type: <type 'str'>
Question: I have the following Python code that uses Spark:
from pyspark.sql import Row
def simulate(a, b, c):
dict = Row(a=a, b=b, c=c)
df = sqlContext.createDataFrame(dict)
return df
df = simulate("a","b",10)
df.collect()
I am creating a `Row` object and I want to save it as a `DataFrame`.
However, I am getting this error:
TypeError: Can not infer schema for type: <type 'str'>
It occurs on this line:
df = sqlContext.createDataFrame(dict)
What am I doing wrong?
Answer: It is pointless to create a single-element data frame. If you want to make it
work anyway, wrap the `Row` in a list: `df = sqlContext.createDataFrame([dict])`
|
Python SimpleCookie and JSON Value
Question: I've run into some problems using Python's `SimpleCookie` when using a JSON
string as a value.
In [1]: from http.cookies import SimpleCookie
In [2]: cookie = SimpleCookie('x=1; json={"myVal":1}; y=2')
In [3]: cookie.keys()
Out[3]: dict_keys(['x'])
In [4]: cookie = SimpleCookie('x=1; y=2')
In [5]: cookie.keys()
Out[5]: dict_keys(['y', 'x'])
Not only the JSON string is missing, but every other value coming afterwards
as well. I'm wondering now if this is a Python-related bug as the characters
should all be fine to be used in a cookie?
Answer: Currently you're trying to use an object as the value, not a JSON string
representation of that object. You need to escape the JSON string within your
argument string, as follows:
>>> from http.cookies import SimpleCookie
>>> cookie = SimpleCookie(r'x=1; json="{\"myVal\":1}"; y=2')
# ^ note raw string ^ and single backslashes
>>> cookie.keys()
dict_keys(['json', 'x', 'y'])
It does seem odd that a malformed string is quietly consumed as far as it can
go and the rest is ditched, though; I'd expect a `ValueError` or something for
your input. [The
parser](https://hg.python.org/cpython/file/3.5/Lib/http/cookies.py#l563) just
stops and returns the result so far when it's run out of things that match the
regex.
* * *
Given [Marius's answer](http://stackoverflow.com/a/38206698/3001761), which
seemed to work on 3.3.2 but doesn't work in 3.5.2 (and, to me, doesn't look
like it should work _anywhere_ ; an implicit JSON package import?!) I went
searching for when it changed. Using this script:
from http.cookies import SimpleCookie
import json
from sys import version
print(version)
cookie1 = SimpleCookie('x=1; json=json.dumps({"myVal":1}); y=2')
print('Marius ', cookie1.keys(), repr(cookie1.get('json')), sep='\t')
cookie2 = SimpleCookie(r'x=1; json="{\"myVal\":1}"; y=2')
print('Jonathan', cookie2.keys(), repr(cookie2.get('json')), sep='\t')
cookie3 = SimpleCookie('x=1; json={"myVal":1}; y=2')
print('Bernhard', cookie3.keys(), repr(cookie3.get('json')), sep='\t')
and `pyenv` on Mac OS X gives the following results for 3.3:
3.3.0 (default, Jul 7 2016, 10:47:41)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['x', 'y', 'json']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['x', 'y', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['x', 'y', 'json']) <Morsel: json='{'>
3.3.1 (default, Jul 7 2016, 10:53:06)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['json', 'x', 'y']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['json', 'x', 'y']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['json', 'x', 'y']) <Morsel: json='{'>
3.3.2 (default, Jul 6 2016, 22:02:23)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['json', 'y', 'x']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['json', 'y', 'x']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['json', 'y', 'x']) <Morsel: json='{'>
# ...loses 'json'
3.3.3 (default, Jul 7 2016, 10:57:02)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['x', 'y']) None
Jonathan dict_keys(['x', 'y', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['x', 'y']) None
3.3.4 (default, Jul 7 2016, 10:59:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['y', 'x']) None
Jonathan dict_keys(['y', 'x', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['y', 'x']) None
3.3.5 (default, Jul 7 2016, 11:01:45)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['y', 'x']) None
Jonathan dict_keys(['json', 'y', 'x']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['y', 'x']) None
# ...and now 'y'!
3.3.6 (default, Jul 7 2016, 11:03:40)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['x']) None
Jonathan dict_keys(['json', 'x', 'y']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['x']) None
From there it seems to be stable:
3.4.4 (default, Jul 7 2016, 11:13:43)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['x']) None
Jonathan dict_keys(['json', 'y', 'x']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['x']) None
3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
Marius dict_keys(['x']) None
Jonathan dict_keys(['x', 'json', 'y']) <Morsel: json="{\"myVal\":1}">
Bernhard dict_keys(['x']) None
The version from 3.3.6 onwards was apparently a security fix; see [this bug
report](http://bugs.python.org/issue25228). This was also applied in 3.2.6,
so:
3.2.4 (default, Jul 7 2016, 11:05:33)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['y', 'x', 'json']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['y', 'x', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['y', 'x', 'json']) <Morsel: json='{'>
3.2.5 (default, Jul 7 2016, 11:07:15)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['y', 'x', 'json']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['y', 'x', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['y', 'x', 'json']) <Morsel: json='{'>
# ...loses 'y'?!
3.2.6 (default, Jul 7 2016, 11:09:00)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
Marius dict_keys(['x', 'json']) <Morsel: json='json.dumps({'>
Jonathan dict_keys(['y', 'x', 'json']) <Morsel: json='{"myVal":1}'>
Bernhard dict_keys(['x', 'json']) <Morsel: json='{'>
This behaviour is _different_ to 3.3.6 with the same bugfix! In fact, it sets
the value of `json` in the cookie to `'{'`, discarding the rest of that value
and `y`.
I conclude that:
1. my version is more widely applicable;
2. Marius's version won't actually help the OP, as it has the same keys as their original version; and
3. something weird was happening with this functionality in 3.2 and 3.3!
|
Python code structure with flask integrated
Question: [Newbie] I have written a Python program that does some data manipulations on
imported xlsx files and saves them as CSV. It looks kinda like this:
#!/usr/bin/env python2.7
def main():
imported_files = import_files_from_input_folder('/input/*.xlsx')
data_handling_functions(imported_files)
save_processed_files_to_output_folder('/output/')
if __name__ == '__main__':
main()
I want to create a web app (using flask) for the users to use the program, by
uploading their files to 'input' folder, and then download the results from
'output' folder. Thanks to [this
topic](http://stackoverflow.com/questions/27628053/uploading-and-downloading-
files-with-flask) and some others, I know how to upload multiple files into
'input' folder.
Now, my first question is: how to list all the files in 'output' folder and
let users download them?
My second question is: how to wrap/integrate the Flask part into the existing
program?
Answer: This is what a basic Flask web server looks like:
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
if __name__ == "__main__":
app.run()
so it is quite straightforward to integrate your functionality into the app.
Just add a routing function (like `hello`) and call the functions you would
normally call inside `main` in your code, and after that render the results
on a web page, along with some download buttons. For this you could use the
function `render_template` and pass the output (shaped so that it can be
iterated over to get each file) as a parameter, like so:
@app.route("/")
def hello():
return render_template('name_of_template', output = your_output)
You can find the Flask documentation [here](http://flask.pocoo.org); it will
show you how to do all that. I recommend that you use the Jinja2 API to add
Python code to your HTML templates, e.g. you could do something like this:
{% for file in output %}
<!-- Output file inside html tag -->
<!-- Add a download button -->
{% endfor %}
You can find more about Jinja [here](http://jinja.pocoo.org/docs/dev/)
And of course, you would store the CSVs on your server.
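For the first question specifically, a minimal sketch (the folder location and
route names here are assumptions) could list the output folder and serve
downloads with `send_from_directory`:

    import os
    from flask import Flask, render_template, send_from_directory

    app = Flask(__name__)
    OUTPUT_FOLDER = os.path.abspath('output')  # assumed location

    @app.route("/results")
    def list_results():
        files = os.listdir(OUTPUT_FOLDER)
        return render_template('results.html', files=files)

    @app.route("/download/<path:filename>")
    def download(filename):
        # send_from_directory guards against path traversal
        return send_from_directory(OUTPUT_FOLDER, filename, as_attachment=True)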
|
While looping issue in Python 3.5.1
Question: Using Python 3.5.1.
I am trying to build a while loop, which iterates a function until a certain
number of prime numbers has been appended into a list. I have previously
written a function which takes in a number, evaluates whether or not it is a
prime and adds it to a list if it is a prime:
def primelister(n):
if n < 10:
return
else:
l1=[]
l2=[]
ts1=np.arange(1,(n+1),1)
for i in ts1:
if n%i==0:
l1.append(i)
continue
else:
continue
if len(l1) < 3:
l2.append(i)
print(l2)
This function works OK and seems to give correct results. I would like to
implement the function into a while loop, where the value of n starts out at
10 and is incremented by 1 on each pass. The loop would go on until a certain
number of primes has been reached (i.e. stop when 1000 primes have been
listed).
This is what I've tried so far:
n=10
l1=[]
l2=[]
while numberofprimes < 100:
ts1=np.arange(1,(n+1),1)
for i in ts1:
if n%i==0:
l1.append(i)
continue
if len(l1) < 3:
l2.append(i)
numofprimes=len(l2)
print("Number of primes so far:", numberofprimes)
n = n + 1
The loop is obviously broken. The output is just 1 at all times, and the loop
seems to be infinite. All help will be appreciated.
Answer: The problem is that you are not resetting `l1` after each while loop
iteration. Furthermore, you are using `numberofprimes` as your while loop
condition while assigning the number of primes value to `numofprimes`
import numpy as NP
n=10
l2=[]
numberofprimes = 0
while numberofprimes < 100:
l1 = []
ts1=NP.arange(1,(n+1),1)
for i in ts1:
if n%i==0:
l1.append(i)
if len(l1) < 3:
l2.append(i)
numberofprimes=len(l2)
print("Number of primes so far:", numberofprimes)
n = n + 1
|
how to avoid removing 0 from msb in python panda dataframe
Question: I have data in a column like `0123456789`; after reading it from a file it
becomes `123456789`. The column name is `msisdn`.
How can I fix this issue?
I am using the following pandas script:
#!/usr/bin/env python
import gc
import pandas
csv1 = pandas.read_csv('/home/subin/Desktop/a.txt')
csv2 = pandas.read_csv('/home/subin/Desktop/b.txt')
merged = pandas.merge(csv1, csv2,left_on=['MSISDN'],right_on=['MSISDN'],how='left',suffixes=('#x', '#y'), sort=True).fillna('0')
merged.to_csv("/home/subin/Desktop/amergeb_out.txt", index=False, float_format='%.0f')
Answer: You can cast column `msisdn` to `string` by parameter `dtype` in
[`read_csv`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.read_csv.html):
import io
import pandas as pd

temp = u"""msisdn
0123456789
0123456789"""
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), dtype={'msisdn': str})
print (df)
msisdn
0 0123456789
1 0123456789
|
How to add a custom flag to IPython's magic commands? (.ipy files)
Question: Is it possible to add a custom flag to IPython's magic command? To be more
specific, I want to use the %run command with a homemade flag:
%run script.ipy --flag "option"
and be able to use "option" inside the script.
For .py files, the answer is provided here: [Command Line Arguments In
Python](http://stackoverflow.com/questions/1009860/command-line-arguments-in-
python)
Answer: As indicated in the comments, this question isn't just about handling
commandline arguments in a Python script. It's about handling them in an
`.ipy` file run via `%run`.
If I create `test.ipy` as
import sys
print(sys.argv)
and run it from shell, I see the commandline arguments:
1223:~/mypy$ python3 test.ipy -test foo
['test.ipy', '-test', 'foo']
but from a `ipython` session, I don't
In [464]: %run test.ipy --flag test
['/usr/bin/ipython3']
If I make a copy with a `py` name
In [468]: %run testipy.py --flag test
['testipy.py', '--flag', 'test']
So the `%run ...ipy` behaves differently. This is an `ipython` issue, not a
general Python commandline one.
================
The `%run` doc has this point:
> There is one special usage for which the text above doesn't apply: if the
> filename ends with .ipy[nb], the file is run as ipython script, just as if
> the commands were written on IPython prompt.
In that case, the `test.ipy` script is seeing the same `sys.argv` as I do when
I type:
In [475]: sys.argv
Out[475]: ['/usr/bin/ipython3']
So if I modify `sys.argv` in the current session, such as by appending a
couple of strings:
In [476]: sys.argv += ['--flag','test']
In [477]: sys.argv
Out[477]: ['/usr/bin/ipython3', '--flag', 'test']
In [479]: %run test.ipy
['/usr/bin/ipython3', '--flag', 'test']
my `ipy` script now sees them.
So that's the answer - put the commandline arguments in the `sys.argv` prior
to using `%run ...ipy`.
(I've done this sort of fiddling with the `sys.argv` when doing advanced
things with `argparse`.)
More `ipython` magic
In [480]: %run??
shows me both its doc and its code. I can thus see how it is treating a `.ipy`
file special. Since it's so easy to find, I'll refrain from copying it here.
There's another solution - don't try to use the commandline style of coding
with `ipy` files.
If I add a
print(x)
line to that test file, and have an `x` defined in my IPython session, I see
that print. But if I put that same print in a `.py`, I'll get a `NameError`. When
they say the `ipy` is run as though it were typed in, they mean it. Running an
`ipy` is, in effect, an alternative to `%paste` from the clipboard.
|
how to automatically install dependent modules used in a python app
Question: I just started learning Python and am a bit confused about how packages are
distributed and installed. I am aware of the helper scripts `easy_install` and
`pip` which can be used to install dependent modules; however, I am not clear
on how to do this programmatically. Can someone help me with this?
How do I install dependent modules _automatically_ when running Python
applications? I have a dependency on `subprocess32` and other modules; I want
to automatically install them if they are not present....
File "script.py", line 6, in <module>
import subprocess32 as subprocess
ImportError: No module named subprocess32
I have looked at some posts online below but not clear...really appreciate
guidance here
locallyoptimal.com/blog/2014/03/14/executable-python-scripts-via-entry-points/
[Python packages installation in
Windows](http://stackoverflow.com/questions/3155128/python-packages-
installation-in-windows?rq=1)
Answer: The easiest way to handle module installs within a program is with `pip`:
    import pip
    pip.main(["install", "numpy"])
This, for example, will install `numpy` and its dependencies. You can also pin
versions using `numpy==x.y.z`, or upgrade by passing `"--upgrade"` between the
two list items above.
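To install a module only when its import fails, a sketch (calling pip through
`subprocess` is the more robust route, since `pip.main` was removed in later
pip releases; the helper name is just an illustration):

    import importlib
    import subprocess
    import sys

    def ensure_module(name, package=None):
        # Try the import first; fall back to installing into this interpreter
        try:
            return importlib.import_module(name)
        except ImportError:
            subprocess.check_call(
                [sys.executable, "-m", "pip", "install", package or name])
            return importlib.import_module(name)

    subprocess32 = ensure_module("subprocess32")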
|
Python: Random without repetition, but in def
Question: I don't know how, inside a function, I can randomize sentences without repetition.
def yellowJeden(x,m):
if m <= 25:
zd1 = "Juz na samym poczatku meczu "+ x.strip() + " dostal"
zd2 = "Juz w " + str(m) + ". minucie meczu zawodnik druzyny "+ druzyna1 + " dostal zolta kartke. "+ x.strip() + " sfaulowal swojego przeciwnika."
zd3 = x.strip() + " juz w poczatkowych minutach meczu otrzymal zolta kartke za nieczyste zagranie. Sfaulowal on zowodnika druzyny przeciwnej. "
zolta1 = [zd1, zd2, zd3]
zolta1Index = random.randint(0, len(zolta1)-1)
print zolta1[zolta1Index]
zolta1.remove(zolta1[zolta1Index])
def akcja():
a = line.split(',')
min = int(a[1])
if a[2] == ' yellow1':
yellow1 = a[3]
yellowJeden(yellow1, min)
elif a[2] == ' yellow2':
yellow2 = a[3]
yellowDwa(yellow2, min)
for line in fh:
if line.startswith('1.'):
akcja()
elif line.startswith('2.'):
akcja()
elif line.startswith('3.'):
akcja()
It's more complicated. I don't want to write every sentence.
Answer: Are you looking for something like this?
>>> import random
>>> def ShuffleSentences():
... sentences = [ "First sentence", "Second sentence", "Third sentence", "last sentence"]
... random.shuffle(sentences)
... for s in sentences:
... print s
...
>>> ShuffleSentences()
Second sentence
last sentence
Third sentence
First sentence
>>> ShuffleSentences()
Second sentence
First sentence
Third sentence
last sentence
>>> ShuffleSentences()
Second sentence
Third sentence
First sentence
last sentence
>>>
|
Python issue with linecache reading a line from an external .txt file
Question: I'm hacking together my first Python project. I'm running the Python code
below and having trouble with `linecache.getline`. I want it to open the
storage.txt file and check whether the number is 0. If it is, change it to 1,
play the music and turn on the LED. Then switch it to 2 so it doesn't play the
music again every time I power off and start up my Raspberry Pi. Any
suggestions would be super amazing.
My storage.txt is ending up like this:
0
0
0
I thought it should look like this:
2
0
0
My python code looks like this:
import requests
import pygame
import RPi.GPIO as GPIO
from BeautifulSoup import BeautifulSoup
import linecache
pygame.mixer.init()
pygame.mixer.music.load('chaching.wav')
GPIO.setmode(GPIO.BCM)
#Im using pin 23 for the led.
GPIO.setup(21, GPIO.OUT)
#Im using pin 4 as a basic switch in replacement for a motion sensor.
GPIO.setup(4, GPIO.IN)
#Cleans up the number taken from my php file.
Soup = BeautifulSoup
#Subscriber number
sub_num = 0
#goal one value
goal1 = 93
#goal1 celebration flag pulled from the txt file. Line 1 of the doc.
goal1cel = linecache.getline('storage.txt', 1)
goal2 = 100
goal2cel = linecache.getline('storage.txt', 2)
goal3 = 150
goal3cel= linecache.getline('storage.txt', 3)
detectDaniel = 0
while True:
response = requests.get('http://www.bringyourownlaptop.com/php-test')
html= response.content
soup = BeautifulSoup(html)
num = soup.find('section')
sub_num = int(num.contents[0].strip())
# figure out if goal have been reached and set the appropriate goal celebration flag
if sub_num == goal1 and goal1cel == 0:
goal1cel = 1
if sub_num == goal2 and goal2cel == 0:
goal2cel = 1
if sub_num == goal3 and goal3cel == 0:
goal3cel = 1
# This passed the current status of the goal1cel e.g. 0 not done, 1 current, 2 finished.
text_file = open("storage.txt", "w")
#This added it to the text document with slash n used as a line break. Weve also had to change the value to a string instead of a integer.
text_file.write(str(goal1cel)+"\n"+str(goal2cel)+"\n"+str(goal3cel))
#closes the file.
text_file.close()
detectDaniel = 1
# checks if celebrations need to happen from goal celebration flags
if goal1cel == 1 and detectDaniel == 1:
GPIO.output(21, True)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy() == True:
continue
print("Goal 1 reached!")
goal1cel = 2
if goal2cel == 1 and detectDaniel == 1:
GPIO.output(21, True)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy() == True:
continue
print("Goal 2 reached!")
goal2cel = 2
if goal3cel == 1 and detectDaniel == 1:
print("Goal 3 reached!")
goal3cel = 2
print(sub_num)
print(detectDaniel)
Answer: Your mistake lies in assuming that `linecache.getline` will simply return an
integer. If you run it in an interpreter, you'll see it returns the string
`'0\n'`.
After you get the line, you need some way to extract the number you want (if
they're only ever going to be 1-character values, you might as well just take
the first character and compare characters, e.g. `if goal1cel == '1'`).
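A sketch of the conversion:

    import linecache

    # strip the trailing newline, then convert; default to 0 for an empty line
    goal1cel = int(linecache.getline('storage.txt', 1).strip() or 0)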
|
load Python Pickle (.pkl) file
Question: I am trying to load .pkl files that are in the same directory where my .py
file is located. The following is my code:
import os
import pickle
def load_var(var_name):
fid = open(os.path.join((var_name, '.pkl')))
data = pickle.load(fid)
fid.close()
return data
def main():
data = load_var('myfilename')
if __name__ == '__main__':
main()
I keep on running into the error:
fid = open(os.path.join((var_name, '.pkl')))
TypeError: coercing to Unicode: need string or buffer, tuple found
Is there an easy way to resolve this error?
Answer: The input to `os.path.join` should not be a tuple. Note also that
`os.path.join` inserts a path separator between its arguments (giving
`myfilename/.pkl`), so to append a file extension you want plain
concatenation:
    fid = open(var_name + '.pkl')
|
Can't get Python class code to work
Question: I'm new to Python. Anyway, I was trying to make a 21 questions game, but my
code won't work.
The error is: `Name 'plux' is not defined.`
Here is the code:
from random import randint
class Game(object):
num = randint(1, 3)
def plux(x):
x += 1
return x
def minu(x):
x -= 1
def iff(i):
apple = 0
num = randint(1, 3)
if num == 1:
x = input('Can you eat it? ').lower()
if x == "yes" or "yeah":
print("test num 1")
apple = plux(apple)
elif num == 2:
print('test num 2')
elif num == 3:
print("test num 3")
a = Game()
print(a.iff())
Answer: When you call the function `plux` in the line `apple = plux(apple)`, add
`self.` in front of the function call. In Python, `self` refers to the object
itself and is available within the class scope (any place inside of the `Game`
class). In every method of a class, `self` has to be the first parameter,
because Python passes the instance explicitly as the first argument on every
method call; so `plux`, `minu` and `iff` should each take `self` before their
other parameters.
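A minimal sketch of the corrected methods (only the parts that change):

    class Game(object):
        def plux(self, x):  # self must come first
            x += 1
            return x

        def iff(self):
            apple = 0
            apple = self.plux(apple)  # call the method via self
            return apple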
|
DLL load failed: The specified module could not be found for pygpu/libgpuarray
Question: I'm using libgpuarray (openCL) but can't seem to get the GPU working with
Theano in anaconda 2. When I try to run the
[test](http://deeplearning.net/software/theano/tutorial/using_gpu.html) I get:
> ERROR (theano.gpuarray): pygpu was configured but could not be imported
> Traceback (most recent call last): File
> "C:\Users\username\Anaconda2\lib\site-
> packages\theano-0.9.0.dev1-py2.7.egg\theano\gpuarray__init__.py", line 21,
> in import pygpu File "C:\Users\username\Anaconda2\lib\site-
> packages\pygpu-0.2.1-py2.7-win-amd64.egg\pygpu__init__.py", line 7, in from
> . import gpuarray, elemwise, reduction ImportError: DLL load failed: The
> specified module could not be found.
Theano works fine with the CPU. I followed
[this](http://deeplearning.net/software/libgpuarray/installation.html)
documentation. I ran the commands "python setup.py build" and "python
setup.py install" on setup.py in the libgpuarray folder (after I used cmake to
install libgpuarray) to get pygpu, and it ran successfully. After running cmake
I get the gpuarray.dll file, yet this error still occurs. Do I need to do
something with it, or is there something else happening?
Answer: I solved this problem by copying `gpuarray.dll` generated in your build dir
into C:\Windows\System32.
|
Bisection or Hashtable Method in Python
Question: I'd like to speed up the execution time of my function in Python. I read that
a good way to do this is to use a bisection or hash-table method. Do you know
how I can do this with this function?
from time import time
import csv
f = open('file.csv')
reader = csv.reader(f, delimiter=';')
def old(abi):
first = True
for row in reader:
if first:
first = False
first_row = row
else:
if row[0] == abi:
res = row
res = dict(zip(first_row, res))
break
@timing
def test2():
for x in xrange(3000, 800000):
old(str(x))
test2()
Thank you very much for help me ;)
Answer: I suspect that your problem is I/O (rather than CPU) bound.
If it is indeed CPU bound, there is one thing you could try to improve
performance: replace the for loop with a generator expression. This way the
iteration happens on the C side in CPython.
def old(path, abi):
with open(path) as handle:
r = csv.reader(handle, delimiter=";")
header = next(r)
try:
result = next(row for row in r if row[0] == abi)
return dict(zip(header, result))
        except StopIteration:
return None # Not found.
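For the hash-table approach the question mentions, one sketch is to read the
file once into a dict keyed on the first column, so every later lookup is O(1)
instead of a fresh scan:

    import csv

    def build_index(path):
        with open(path) as handle:
            r = csv.reader(handle, delimiter=";")
            header = next(r)
            return {row[0]: dict(zip(header, row)) for row in r}

    index = build_index('file.csv')
    result = index.get('3000')  # None if absent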
|
How to correctly use python libraries under Windows?
Question: I am using Windows 10 and I would like to import a library from some place
`P:\_Testing\Tools\Selenium\Basic` (which I added to `PYTHONPATH`). I have the
following script:
print(os.environ['PYTHONPATH'])
from Basic import basic
and the path `P:\_Testing\Tools\Selenium\Basic` contains two files: a non-
empty file `basic.py` and an empty file `__init__.py`. However, when running
the script, I get the following output:
> Y:\BFH\Selenium\BFH_ARoeffnen.py;P:_Testing\Tools\Selenium\Basic;C:\Program
> Files (x86)\JetBrains\PyCharm Community Edition 5.0.4\helpers\pycharm
>
>
> Error
> Traceback (most recent call last):
> File "Y:\BFH\Selenium\BFH_ARoeffnen.py", line 25, in
> test_b_f_h_a_roeffnen
> from Basic import basic
> ImportError: No module named 'Basic'
>
So why does this not work? How to include the library in the given path
correctly?
Answer: You can `import basic` directly, because the `Basic` folder itself is on your
`PYTHONPATH`. Python searches for modules inside that folder, so it can find
`basic.py` and `__init__.py`, but it cannot find the `Basic` package itself.
For `from Basic import basic` to work, the parent directory
`P:\_Testing\Tools\Selenium` would have to be on the path instead.
|
How To Access The Request Object in Django's GenericStackedInline Admin
Question: Using **GenericStackedInline** in Django 1.9 _(Python 3.4)_ I want to access
the **request** object before **saving my model** in the Django **Admin**.
When using `MediaItemAdmin` I can intercept the save function before
`obj.save()` is run, as in this example:
**admin.py**
class StuffAdmin(admin.ModelAdmin):
def save_model(self, request, obj, form, change):
# Do some stuff here like obj.user = request.user before saving.
obj.save()
However, the same behaviour or 'hook' isn't available using a
`GenericStackedInline`. It appears to call the model save method directly:
**admin.py**
class StuffAdmin(GenericStackedInline):
model = StuffModel
def save_model(self, request, obj, form, change):
print("I'm never run :(")
obj.save()
As I understand `GenericStackedInline` inherits from a `form` so I have also
tried using a form and overriding that as in this example:
**admin.py**
class StuffAdmin(GenericStackedInline):
model = StuffModel
form = StuffForm
class StuffForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(StuffForm, self).__init__(*args, **kwargs)
def save_model(self, request, obj, form, change):
print("Still not run!(")
obj.save()
def save_form(self, request, obj, form, change):
print("Work already!")
obj.save()
I have searched stackoverflow, but most are unanswered, as seen here
[accessing request object within a django admin inline
model](http://stackoverflow.com/questions/7107498/accessing-request-object-
within-a-django-admin-inline-model) or say to use `__init__` to do something
like `self.request = kwargs.pop('request')`; however, `request` is never passed
here, right?
Anyhow, any idea how I can call the request object and update my instance
before the model save() is called?
Answer: The method that saves the "inlines" is part of `ModelAdmin`, not
`InlineModelAdmin`.
class BarInline(GenericStackedInline):
model = Bar
class FooModelAdmin(ModelAdmin):
model = Foo
inlines = [BarInline]
def save_formset(self, request, form, formset, change):
"""
`form` is the base Foo form
`formset` is the ("Bar") formset to save
`change` is True if you are editing an existing Foo,
False if you are creating a new Foo
"""
if formset_matches_your_inline_or_some_requirement(formset):
do_something_with(request)
super().save_formset(request, form, formset, change)
If you want to check whether the formset is the `BarInline`'s formset, you can
do something like this:
class BarInline(GenericStackedInline):
model = Bar
def get_formset(self, *args, **kwargs):
formset = super().get_formset(*args, **kwargs)
formset.i_come_from_bar_inline = True
return formset
class FooModelAdmin(ModelAdmin):
model = Foo
inlines = [BarInline]
def save_formset(self, request, form, formset, change):
if getattr(formset, 'i_come_from_bar_inline', False):
do_something_with(request)
super().save_formset(request, form, formset, change)
Or even better, make it generic:
class BarInline(GenericStackedInline):
model = Bar
def pre_save_formset(self, request, form, model_admin, change):
"""Do something here with `request`."""
class FooModelAdmin(ModelAdmin):
model = Foo
inlines = [BarInline]
def save_formset(self, request, form, formset, change):
if hasattr(formset, 'pre_save_formset'):
formset.pre_save_formset(request, form, self, change)
super().save_formset(request, form, formset, change)
if hasattr(formset, 'post_save_formset'):
formset.post_save_formset(request, form, self, change)
* * *
If you need to do something with the request before each form save, rather
than before each formset, you will have to use your own Form and FormSet and
propagate the request through the formset to the form:
from django.forms import ModelForm
from django.forms.models import BaseInlineFormSet
class BarForm(ModelForm):
model = Bar
def __init__(self, *args, **kwargs):
request = kwargs.pop('request', None)
super().__init__(*args, **kwargs)
self.request = request
def save(self, commit=True):
print(self.request)
print(self.instance)
obj = super().save(False) # Get object but don't save it
do_something_with(self.request, obj)
if commit:
obj.save()
self.save_m2m()
return obj
class BarFormSet(BaseInlineFormSet):
@property
def request(self):
return self._request
@request.setter
def request(self, request):
self._request = request
for form in self.forms:
form.request = request
class BarInline(GenericStackedInline):
    model = Bar
form = BarForm
formset = BarFormSet
class FooModelAdmin(ModelAdmin):
inlines = [BarInline]
def _create_formsets(self, request, obj, change):
formsets, inline_instances = super()._create_formsets(request, obj, change)
for formset in formsets:
formset.request = request
return formsets, inline_instances
Depending on your use case, the save method might also simply look like
this:
    class BarForm(ModelForm):
        class Meta:
            model = Bar
            fields = '__all__'

        def save(self, commit=True):
            do_something_with(self.request, self.instance)
            return super().save(commit)
|
How to fit parametric equations to data points in Python
Question: I am looking for a way to fit [parametric
equations](https://en.wikipedia.org/wiki/Parametric_equation) to a set of data
points, using Python.
As a simple example, given is the following set of data points:
import numpy as np
x_data = np.array([1, 2, 3, 4, 5])
y_data = np.array([2, 0, 3, 7, 13])
Using `t` as the parameter, I want to fit the following parametric equation to
the data points,
t = np.arange(0, 5, 0.1)
x = a1*t + b1
y = a2*t**2 + b2*t + c2
that is, have Python find the values for the coefficients `a1`, `b1`, `a2`,
`b2`, `c2` that fit `(x,y)` best to the data points `(x_data, y_data)`.
Note that the `y(t)` and `x(t)` functions above only serve as examples of
parametric equations. The actual functions I want to fit my data to are much
more complex, and in those functions, it is not trivial to express `y` as a
function of `x`.
Help will be appreciated - thank you!
Answer: Since the relation between `x` and `y` is a quadratic one, you can use
`np.polyfit` to get the coefficients. According to your equations, your `x`
and `y` relation is:
`y = a2*((x-b1)/a1)**2 + b2*((x-b1)/a1) + c2`
Using polyfit, we get:

    x_data = np.array([1, 2, 3, 4, 5])
    y_data = np.array([2, 0, 3, 7, 13])
    p2, p1, p0 = np.polyfit(x_data, y_data, 2)
    # p2, p1, p0 == 1.21428571, -4.38571429, 4.8
The values of `a1, b1, a2, b2, c2` can be obtained by solving the following
equations:

    p2 = a2/a1**2
    p1 = -2*b1*a2/a1**2 + b2/a1
    p0 = a2*b1**2/a1**2 - b2*b1/a1 + c2
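Note that this system is underdetermined: five unknowns but only three
equations, because the parametrization itself is not unique. A minimal sketch
that picks one valid solution is to fix `a1 = 1` and `b1 = 0` (so `t = x`) and
solve for the rest:

    import numpy as np

    x_data = np.array([1, 2, 3, 4, 5])
    y_data = np.array([2, 0, 3, 7, 13])
    p2, p1, p0 = np.polyfit(x_data, y_data, 2)

    # Fix the free parameters of x(t) = a1*t + b1
    a1, b1 = 1.0, 0.0
    a2 = p2 * a1**2
    b2 = (p1 + 2 * b1 * a2 / a1**2) * a1
    c2 = p0 - a2 * b1**2 / a1**2 + b2 * b1 / a1

    # Sanity check: y along the curve reproduces the direct polynomial fit
    t = (x_data - b1) / a1
    print(np.allclose(a2 * t**2 + b2 * t + c2, np.polyval([p2, p1, p0], x_data)))  # True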
|
Python's multiprocessing returns more results than tasks where given
Question: I'm currently trying to use multiprocessing for my simulation run, to evaluate
different input values at the same time.
Therefore, I googled a lot over the last few weeks and put something together
which is probably not very pretty, but it (somehow) works. My problem now is
that it returns more output than the number of tasks I gave it, and I don't
understand why.
Sometimes each simulation run returns only one value as expected, but as in the
example below, I would expect the result of e.g. simulation run 5 to be only
[23]. It also varies which simulation run produces more output than expected.
When I increase the number of periods to e.g. 2, it generates 4 output values,
and I cannot figure out why that is.
Could somebody please give me a hint how I could change that? I cannot find an
answer and I'm getting quite frustrated :( Also, any suggestions on how I could
improve my code would be really appreciated, as I'm quite new to Python and I
love it so far :)
This is the simplified code I use:
    import numpy as np
    from multiprocessing import Process, Queue
    import multiprocessing
    from itertools import repeat

    class Simulation(Process):
        Nr = 1
        Mean = 5
        StdDev = 3
        Periods = 10
        Result = []

        def Generate_Value(self):
            GeneratedValue = max(int(round(np.random.normal(self.Mean, self.StdDev), 0)), 0)
            return GeneratedValue

        def runSimulation(self):
            for i in range(self.Periods):
                self.Result.append(self.Generate_Value())
            return self.Result

    def worker(Mean, stdDev, Periods, Nr, queue):
        Sim = Simulation()
        Sim.Nr = Nr
        Sim.Periods = Periods
        Sim.Mean = Mean
        Sim.StdDev = stdDev
        Results = Sim.runSimulation()
        queue.put(Results)
        print("Simulation run " + str(Nr) + " done with a result of " + str(Results)
              + " (Input: mean: " + str(Mean) + ", std. dev.: " + str(stdDev) + ")")

    if __name__ == '__main__':
        m = multiprocessing.Manager()
        queue = m.Queue()
        CPUS = multiprocessing.cpu_count()  # CPUS = 8
        WORKERS = multiprocessing.Pool(processes=CPUS)
        Mean = [50, 60, 70, 80, 90]
        StdDev = [10, 10, 10, 10, 10]
        Periods = 1
        Nr = list(range(1, len(Mean) + 1))
        WORKERS.starmap(worker, zip(Mean, StdDev, repeat(Periods), Nr, repeat(queue)))
        WORKERS.close()
        WORKERS.join()
        FinalSimulationResults = []
        for i in range(len(Mean)):
            FinalSimulationResults.append(queue.get())
        print(FinalSimulationResults)
Which results in e.g. this:
Simulation run 1 done with a result of [23] (Input: mean: 50, std. dev.: 10)
Simulation run 2 done with a result of [55] (Input: mean: 60, std. dev.: 10)
Simulation run 3 done with a result of [64] (Input: mean: 70, std. dev.: 10)
Simulation run 5 done with a result of [23, 89] (Input: mean: 90, std. dev.: 10)
Simulation run 4 done with a result of [78] (Input: mean: 80, std. dev.: 10)
[[23], [55], [64], [23, 89], [78]]
**It works now :)** Not as fast as I expected (only about 2 times faster with 8
cores), but for everyone who might have the same problem, here's my working
code:
    import time  # needed for time.time() below
    import numpy as np
    from multiprocessing import Process, Queue
    import multiprocessing
    from itertools import repeat

    class Simulation():
        def __init__(self, Nr, Mean, Std_dev, Periods):
            self.Result = []
            self.Nr = Nr
            self.Mean = Mean
            self.StdDev = Std_dev
            self.Periods = Periods

        def Generate_Value(self):
            GeneratedValue = max(int(round(np.random.normal(self.Mean, self.StdDev), 0)), 0)
            return GeneratedValue

        def runSimulation(self):
            for i in range(self.Periods):
                self.Result.append(self.Generate_Value())
            return self.Result

    def worker(Mean, stdDev, Periods, Nr, queue):
        Sim = Simulation(Nr=Nr, Mean=Mean, Std_dev=stdDev, Periods=Periods)
        Results = Sim.runSimulation()
        queue.put(Results)
        print("Simulation run " + str(Nr) + " done with a result of " + str(Results)
              + " (Input: mean: " + str(Mean) + ", std. dev.: " + str(stdDev) + ")")

    if __name__ == '__main__':
        start = time.time()
        m = multiprocessing.Manager()
        queue = m.Queue()
        CPUS = multiprocessing.cpu_count()
        WORKERS = multiprocessing.Pool(processes=CPUS)
        Mean = [50, 60, 70, 80, 90]
        StdDev = [10, 10, 10, 10, 10]
        Periods = 100
        Nr = list(range(1, len(Mean) + 1))
        WORKERS.starmap(worker, zip(Mean, StdDev, repeat(Periods), Nr, repeat(queue)))
        WORKERS.close()
        WORKERS.join()
        FinalSimulationResults = []
        for i in range(len(Mean)):
            FinalSimulationResults.append(queue.get())
        print(FinalSimulationResults)
Answer: The way you assign the attributes to the class makes them class
attributes, so they are shared between every instance of the class. In your
case this doesn't show up immediately, because each process has only one
instance of the class and the class object itself is not shared between
processes. But if a worker finishes early enough to pick up another task, the
class object is reused and the class attributes no longer behave "as
expected".
To circumvent this, you should always assign instance attributes (i.e.
attributes that should differ from instance to instance) in the `__init__`
method:
    class Simulation(Process):
        def __init__(self, nr, mean, std_dev, periods):
            super().__init__()
            self.nr = nr
            self.mean = mean
            self.std_dev = std_dev
            self.periods = periods
            self.result = []

        def Generate_Value(self):
            return max(int(round(np.random.normal(self.mean, self.std_dev), 0)), 0)

        def runSimulation(self):
            for i in range(self.periods):
                self.result.append(self.Generate_Value())
            return self.result
For further information see [the
documentation](https://docs.python.org/3.5/tutorial/classes.html#class-
objects)
That said, I don't think you should use the `Process` class the way you are
using it. `Pool` handles process creation for you; you only need to tell it
what to do. So, rewriting your code:
    def task(mean, std_dev, periods, nr, queue):
        results = []
        for i in range(periods):
            results.append(max(int(round(np.random.normal(mean, std_dev), 0)), 0))
        queue.put(results)

    m = multiprocessing.Manager()
    queue = m.Queue()
    cpu_count = multiprocessing.cpu_count()  # e.g. 8
    pool = multiprocessing.Pool(processes=cpu_count)
    Mean = [50, 60, 70, 80, 90]
    StdDev = [10, 10, 10, 10, 10]
    Periods = 1
    Nr = list(range(1, len(Mean) + 1))
    pool.starmap(task, zip(Mean, StdDev, repeat(Periods), Nr, repeat(queue)))
    pool.close()
    pool.join()

should work (not tested).
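As a further simplification (my sketch, not part of the original answer):
`starmap` already collects each task's return value in input order, so the
managed queue can often be dropped entirely:

    import multiprocessing
    import numpy as np

    def task(mean, std_dev, periods):
        # Return the results; starmap gathers them in input order
        return [max(int(round(np.random.normal(mean, std_dev))), 0)
                for _ in range(periods)]

    if __name__ == '__main__':
        means = [50, 60, 70, 80, 90]
        std_devs = [10] * len(means)
        periods = 1
        with multiprocessing.Pool() as pool:
            results = pool.starmap(task, zip(means, std_devs, [periods] * len(means)))
        print(results)  # e.g. [[52], [61], [73], [79], [88]]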
|
Unable to load patterns in AIML via Python
Question: I've installed AIML via pip and wrote the files _startup.py_ ,
_std-startup.xml_ and _basic.aiml_ , with _bot_brain.brn_ in the **core**
folder. When I try to run _startup.py_ , I get this warning:
Loading std-startup.xml... done (0.06 seconds)
WARNING: No match found for input: load aiml b
Kernel bootstrap completed in 0.10 seconds
Saving brain to core/bot_brain.brn... done (0.00 seconds)
This is the content of _std-startup.xml_ :
    <aiml version="1.0.1" encoding="UTF-8">
    <!-- std-startup.xml -->
        <category>
            <pattern>load aiml b</pattern>
            <template>
                <learn>basic.aiml</learn>
            </template>
        </category>
    </aiml>
This is the Python script:
    import aiml
    import os

    kernel = aiml.Kernel()

    if os.path.isfile("core/bot_brain.brn"):
        kernel.bootstrap(brainFile = "core/bot_brain.brn")
    else:
        kernel.bootstrap(learnFiles = "std-startup.xml", commands = "load aiml b")
        kernel.saveBrain("core/bot_brain.brn")

    while True:
        msg = raw_input(">")
        if msg == "exit":
            exit(0)
        elif msg == "save":
            kernel.saveBrain("core/bot_brain.brn")
        else:
            bot_response = kernel.respond(msg)
            print("bot: " + bot_response)
For every input I get the error `No match found for input`. What am I doing
wrong? Everything is in the same directory, except _bot_brain.brn_.
Answer: Problem is solved; I had to write the pattern in uppercase. (The kernel
normalizes input to uppercase before matching, so lowercase patterns never
match.)

    <category>
        <pattern>LOAD AIML B</pattern>
        <template>
            <learn>basic.aiml</learn>
        </template>
    </category>
|
Remove decimal points and commas using regex in python
Question: I have the following string:
st='19.000\n20,000'
I want to remove the commas and points **ONLY FOR NUMBERS**. I am using the
following code:
re.sub(r'[^\d\.]','',st)
The result is:
> '19.00020000'
I am a newbie in regex. How do I preserve the newline and remove the dots? Can
anyone help?
Answer: In your pattern, `[^\d\.]` matches every character that is _not_ a
digit or a dot, which is why the newline is removed while the dots survive.
Instead, you should use `(?<=\d)[,\.]`.
The lookbehind `(?<=\d)` ensures that a digit comes right before the comma or
the dot.
import re
st = '19.000\n20,000\na.a,a'
print(re.sub(r'(?<=\d)[,\.]','',st))
>> 19000
20000
a.a,a
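If you also want to require a digit _after_ the separator (so that a dot or
comma next to a letter, e.g. `3.x`, is left alone), a small variant adds a
lookahead:

    import re
    st = '19.000\n20,000\na.a,a\n3.x'
    print(re.sub(r'(?<=\d)[,.](?=\d)', '', st))
    # 19000
    # 20000
    # a.a,a
    # 3.x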
|
python SOMETIMES os.environ has no pythonpath
Question: If I run the following script in Aptana Studio 3:
import os
from pprint import pprint
pprint(os.environ['PYTHONPATH'].split(os.pathsep))
I get the following output:
['C:\\Users\\Phocas_Tommy\\plugins\\org.python.pydev_3.0.0.1388187472\\pysrc\\pydev_sitecustomize',
'D:\\Phocas\\Phocas-Automation',
'D:\\Phocas\\Phocas-Automation\\analytics',
'C:\\Users\\Phocas_Tommy\\plugins\\org.python.pydev_2.7.0.2013032300\\pysrc',
'C:\\Windows\\system32\\python27.zip',
'C:\\Python27\\DLLs',
'C:\\Python27\\lib',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk',
'C:\\Python27',
'C:\\Python27\\lib\\site-packages']
If I run the same script in Sublime Text 3, I get this error:
    Traceback (most recent call last):
      File "D:\Phocas\Phocas-Automation\scrapbook.py", line 3, in <module>
        pprint(os.environ['PYTHONPATH'].split(os.pathsep))
      File "C:\Python27\lib\os.py", line 423, in __getitem__
        return self.data[key.upper()]
    KeyError: 'PYTHONPATH'
Totally confused. Could it be something to do with a project file being read
by Aptana, telling the interpreter where to look for the PYTHONPATH? I don't
understand why os.environ['PYTHONPATH'] is completely absent when I run it in
Sublime.
I'm using python 2.7 on Windows Server 2008 R2 Standard
Answer: PYTHONPATH is an environment variable, and IDEs such as Aptana/PyDev
typically inject it into the environment of the scripts they launch, which is
why you see it there. How are you starting Sublime Text 3? Why do you think the
environment in which it runs should have a PYTHONPATH setting? Can you run
Sublime Text from the command line?
The bottom line is that Sublime Text is not running your script with the same
environment that Aptana Studio provides.
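A defensive sketch for the script itself: read the variable with
`os.environ.get()` so a missing PYTHONPATH doesn't raise a KeyError, and fall
back to `sys.path`, which shows the search path the interpreter is actually
using:

    import os
    import sys
    from pprint import pprint

    pythonpath = os.environ.get('PYTHONPATH')
    if pythonpath is None:
        print('PYTHONPATH is not set in this environment')
        pprint(sys.path)  # the search path actually in effect
    else:
        pprint(pythonpath.split(os.pathsep))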
|
Python No module named
Question: I have a custom module that I am trying to read from a folder under a
hierarchy:
    project-source
        /tests
            /provider
                my_provider.py
            settings_mock.py
            __init__.py
I am trying to call, from my_provider.py
import tests.settings_mock as settings
Example from command line:
project-source> python tests/provider/my_provider.py
Error:
... ImportError: No module named settings_mock
I keep getting `No module named settings_mock` as the error. I have already
exported the `project_source` path to PYTHONPATH and made tests into a package
by creating an `__init__.py` file in its root, but the error doesn't change. I
can print the `settings_mock.py` attributes after cd'ing into project-source:
>>> import tests.settings_mock as settings
>>> print settings.storage_provider
correct storage provider value
Is anyone able to point out my mistake here? Thanks!
Answer: You only have one small mistake. To make the folders importable as
packages, each folder in the chain needs an `__init__.py` file; note the double
underscores, a plain `init.py` will not work. `__init__` is a special name in
Python, whereas `init` is not. Having this file in each subfolder (here both
`tests/` and `tests/provider/`) tells the Python interpreter that the folder is
a package that can be imported.
**UPDATED:** It should be noted that when you run a script directly, Python
puts the script's own directory, not your current working directory, at the
front of `sys.path`. If your executable main script is `my_provider.py`, the
interpreter searches from `tests/provider`, which sits below the package root,
so it will not know where to find `tests.settings_mock`. Think of it as a
hierarchy: a directly-run script can only import things beneath it unless the
package root is on the path. Try separating the executable from everything else
in that file, or run the file as a module as shown below, if there are things
that `settings_mock` needs to import.
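One invocation that often resolves this, assuming `project-source` is your
working directory and both `tests/` and `tests/provider/` contain an
`__init__.py`: run the file as a module, so the package root stays on
`sys.path`:

    cd project-source
    python -m tests.provider.my_provider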
|