Can't draw function using python
Question: I have made a function RC(n) that, given any n, changes the digits of n
according to a rule. The function is the following:
def cfr(n):
    return len(str(n))-1

def n_cfr(k,n):
    J=str(k)
    if "." in J:
        J2=J.replace(".", "")
        return J2[n-1]
    else:
        return J[n]

def RC(n):
    if "." not in str(n):
        return n+1
    sum=0
    val=0
    for a in range(1,cfr(n)+1):
        O=(int(n_cfr(n,a)))*10**(-a+1)
        if int(n_cfr(n,a))==9:
            val=0
        else:
            val=O+10**(-a+1)
        sum=sum+val
    return sum
I would like to draw this function for non-integers values of n. A friend gave
me this code that he used in other functions but it doesn't seem to work for
me:
def draw(f,a,b,res):
    import numpy as np
    import matplotlib.pyplot as plt
    x=[a+(b-a)*i/res for i in range(0,res)]
    y=[f(elm) for elm in x]
    plt.plot(np.asarray(x), np.asarray(y))
    plt.show()
I'm not familiar with plotting functions using python so could anyone give me
some help? Thanks in advance
Answer: The line in your function should be `x = list(range(a, b, res))`; the first two
arguments of `range` are `start` and `stop`. Here is a better version of draw:

def draw(f, a, b, res):
    import numpy as np
    import matplotlib.pyplot as plt
    x = list(range(a, b, res))
    plt.plot(x, list(map(f, x)))  # list() so this also works on Python 3
    plt.show()
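Note that `range` only yields integers; for sampling non-integer values (the original
goal), here is a minimal sketch that keeps the question's float-based sampling, where
`res` is the number of sample points rather than a step size (the call at the bottom is
an illustrative assumption):

import numpy as np
import matplotlib.pyplot as plt

def draw(f, a, b, res):
    x = np.linspace(a, b, res)   # res evenly spaced, non-integer values in [a, b]
    y = [f(v) for v in x]        # RC() is not vectorized, so evaluate point by point
    plt.plot(x, y)
    plt.show()

draw(RC, 0.1, 0.9, 200)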
|
How to find the diameter of objects using image processing in Python?
Question: Given an image with some irregular objects in it, I want to find their
individual diameter.
[Thanks to this answer](http://stackoverflow.com/questions/33707095/how-to-
locate-a-particular-region-of-values-in-a-2d-numpy-array?answertab=active#tab-
top), I know how to identify the objects. **However, is it possible to measure
the maximum diameter of the objects shown in the image?**
I have looked into the `scipy.ndimage` documentation and haven't found a
dedicated function.
Code for object identification:
import numpy as np
from scipy import ndimage
from matplotlib import pyplot as plt
# generate some lowpass-filtered noise as a test image
gen = np.random.RandomState(0)
img = gen.poisson(2, size=(512, 512))
img = ndimage.gaussian_filter(img.astype(np.double), (30, 30))
img -= img.min()
img /= img.max()
# use a boolean condition to find where pixel values are > 0.75
blobs = img > 0.75
# label connected regions that satisfy this condition
labels, nlabels = ndimage.label(blobs)
# find their centres of mass. in this case I'm weighting by the pixel values in
# `img`, but you could also pass the boolean values in `blobs` to compute the
# unweighted centroids.
r, c = np.vstack(ndimage.center_of_mass(img, labels, np.arange(nlabels) + 1)).T
# find their distances from the top-left corner
d = np.sqrt(r*r + c*c)
# plot
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(10, 5))
ax[0].imshow(img)
ax[1].hold(True)
ax[1].imshow(np.ma.masked_array(labels, ~blobs), cmap=plt.cm.rainbow)
for ri, ci, di in zip(r, c, d):
    ax[1].annotate('', xy=(0, 0), xytext=(ci, ri),
                   arrowprops={'arrowstyle':'<-', 'shrinkA':0})
    ax[1].annotate('d=%.1f' % di, xy=(ci, ri), xytext=(0, -5),
                   textcoords='offset points', ha='center', va='top',
                   fontsize='x-large')

for aa in ax.flat:
    aa.set_axis_off()

fig.tight_layout()
plt.show()
Image: [![enter image description
here](http://i.stack.imgur.com/yOznb.png)](http://i.stack.imgur.com/yOznb.png)
Answer: You could use `skimage.measure.regionprops` to determine the bounding box of
all the regions in your image. For roughly circular blobs the diameter of the
minimum enclosing circle can be approximated by the **largest side of the
bounding box**. To do so you just need to add the following snippet at the end
of your script:
from skimage.measure import regionprops

N = 20
img_dig = np.digitize(img, np.linspace(0, 1, N))
properties = regionprops(img_dig)

print 'Label \tLargest side'
for p in properties:
    min_row, min_col, max_row, max_col = p.bbox
    print '%5d %14.3f' % (p.label, max(max_row - min_row, max_col - min_col))
It is important to note that it is necessary to digitize `img` since
`regionprops` does not accept arrays of float values. In the example above
`img` was quantized into `N = 20` bins (each bin is uniquely identified by an
integer index). You may want to test other values of `N` to better fit your
needs.
And this is the output you get:
Label Largest side
1 251.000
2 270.000
3 368.000
4 512.000
5 512.000
6 512.000
7 512.000
8 512.000
9 512.000
10 512.000
11 512.000
12 512.000
13 512.000
14 512.000
15 512.000
16 512.000
17 457.000
18 419.000
19 58.000
20 1.000
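As a side note, `regionprops` also accepts the integer label array that the question's
own `ndimage.label` call already produces, and it exposes per-region measures such as
`equivalent_diameter` (the diameter of a circle with the same area as the region). A
sketch, assuming scikit-image is installed:

from skimage.measure import regionprops

# Reuse the `labels` array from the question's ndimage.label() call;
# background pixels (label 0) are ignored automatically.
for p in regionprops(labels):
    min_row, min_col, max_row, max_col = p.bbox
    largest_side = max(max_row - min_row, max_col - min_col)
    print('%5d %14.3f %14.3f' % (p.label, largest_side, p.equivalent_diameter))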
|
How do I install modules on qpython3 (Android port of python)
Question: I found this great module online and downloaded it as a zip file. Once I
extracted the zip file, I put the two modules inside it (setup.py and the main one)
into the module folder, along with a read-me file I needed. I tried installing the
setup file, but I couldn't install it because the console couldn't find it. So I did
some research and tried using pip to install it as well, but that didn't work. So I
was wondering if any of you could give me the steps to install it manually and with
pip (keep in mind that the setup.py file needs to be run in order for the main module
to work). Thanks!
Answer: Extract the zip file to the site-packages folder: find the qpyplus folder,
go to Lib/python3.2/site-packages under it, and extract there. That's it. Now you can
use your module directly from the REPL terminal by importing it.
|
historical stock price ten days before holidays for the past twenty years
Question: Even though I'm still a noob, I have been enthusiastically learning Python for
a while, and here's a project I'm working on. I need to collect historical
stock prices for the ten days before each US public holiday in the past twenty years,
and here's what I've done (I used pandas_datareader and holidays here):
start = datetime.datetime(1995,1,1)
end = datetime.datetime(2015,12,31)
history_price = web.get_data_yahoo('SPY', start, end)
us_holidays = holidays.UnitedStates()

test = []
for i in dates:
    if i in us_holidays:
        test.append(history_price['Adj Close'].ix[pd.date_range(end=i, periods=11, freq='B')])
test
And the result is like this:
Freq: B, Name: Adj Close, dtype: float64, 1995-02-06 32.707565
1995-02-07 32.749946
1995-02-08 32.749946
1995-02-09 32.749946
1995-02-10 32.792328
1995-02-13 32.802975
1995-02-14 32.845356
1995-02-15 33.025457
1995-02-16 32.983076
1995-02-17 32.855933
1995-02-20 NaN
The length of the list "test" is 233. My question is: how can I convert this
list into a dictionary with the holidays being the keys and the stock prices
being values under each key.
Thank you in advance for your guidance.
Answer: This uses a dictionary and list comprehension to generate a set of ten U.S.
workdays preceding each holiday. The stock prices for those days are then
stored in a dictionary (keyed on holiday) as a list of prices, most recent
first (h-1) and oldest last (h-10).
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

holidays = USFederalHolidayCalendar().holidays(start='1995-1-1', end='2015-12-31')
bday_us = CustomBusinessDay(calendar=USFederalHolidayCalendar())

start = '1995-01-01'
end = '2015-12-31'
days = 10

dates = {holiday: [holiday - bday_us * n for n in range(1, days + 1)]
         for holiday in USFederalHolidayCalendar().holidays(start=start, end=end)}
>>> dates
{...
Timestamp('2015-12-25 00:00:00'): [
Timestamp('2015-12-24 00:00:00'),
Timestamp('2015-12-23 00:00:00'),
Timestamp('2015-12-22 00:00:00'),
Timestamp('2015-12-21 00:00:00'),
Timestamp('2015-12-18 00:00:00'),
Timestamp('2015-12-17 00:00:00'),
Timestamp('2015-12-16 00:00:00'),
Timestamp('2015-12-15 00:00:00'),
Timestamp('2015-12-14 00:00:00'),
Timestamp('2015-12-11 00:00:00')]}
result = {holiday: history_price.ix[dates[holiday]].values for holiday in dates}
>>> result
{...
Timestamp('2015-12-25 00:00:00'):
array([ 203.56598 , 203.902497, 201.408393, 199.597201, 197.964166,
201.55487 , 204.673725, 201.722125, 199.626485, 198.622952])}
|
PyInstaller/Py2exe - include os.system call with third party scripts in single file compilation
Question: I'm using tkinter and pyinstaller/py2exe (either one would be fine), to create
an executable as a single file from my python script. I can create the
executable, and it runs as desired when not using the bundle option with
py2exe or -F option with pyinstaller. I'm running third party python scripts
within my code with os.system(), and can simply place these scripts in the
'dist' dir after it is created in order for it to work. The command has
several parameters: input file, output file, number of threads..etc, so I'm
unsure how to add this into my code using import. Unfortunately, this is on
Windows, so some colleagues can use the GUI, and would like to have the single
executable to distribute.
**EDIT:** I can get it to bundle into a single executable, and provide the
scripts along with the exe. The issue, however, is still with
`os.system("python script.py -1 inputfile -n numbthreads -o outputfile..")`
when running the third party scripts within my code. I had a colleague test
the executable with the scripts provided with it, however at this point they
need to have python installed, which is unacceptable since there will be
multiple users.
Answer: After a couple of days of tests, I was able to figure out how to work
around this problem. Instead of `os.system`, I am using
`subprocess.call("script.py arg1 arg2 ...", shell=True)` for each script I need
to run. Also, I used `chmod +x` (in Linux) before transferring the scripts to
Windows to ensure they're executable (someone can hopefully tell me if this
was really necessary). Then, without having to install Python, a colleague was
able to run the program after I compiled it as a single file with
PyInstaller. I was also able to do the same thing with BLAST executables
(where the user did not have to install BLAST locally, if the exe also
accompanied the distribution of the script). This avoided having to call
Biopython's NcbiblastnCommandline and the install.
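For reference, PyInstaller's one-file mode unpacks bundled data files into a temporary
directory exposed as `sys._MEIPASS`. A sketch of how a bundled script is commonly located
at run time, assuming the scripts were added as data files in the .spec file (the
`script.py` name comes from the question; the rest is an assumption):

import os
import subprocess
import sys

# Under a PyInstaller -F build, bundled data files live in sys._MEIPASS;
# during normal (unfrozen) runs they sit next to this file.
base_dir = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
script = os.path.join(base_dir, 'script.py')
subprocess.call('%s arg1 arg2' % script, shell=True)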
|
Looping through HTML tags using BeautifulSoup
Question: As mentioned in the previous questions, I am using Beautiful soup with python
to retrieve weather data from a website.
Here's what the website looks like:
<channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
..
..
<area forecast="PC" lat="1.41800000" lon="103.83900000" name="Yishun"/>
</channel>
I managed to retrieve the information I need using this code:

import requests
from bs4 import BeautifulSoup
import urllib3
import csv
import sys
import json

#getting the Validtime
area_attrs_li = []
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time

#getting the date
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "date: " + element['date']

#getting the time
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "time: " + element['time']

#print area
for area in soup.select('area'):
    area_attrs_li.append(area)
    print area

#print area name
areas = soup.select('area')
for data in areas:
    name = (data.get('name'))
    print name

f = open("C:\\scripts\\testing\\testingnea.csv" , 'wt')
try:
    for area in area_attrs_li:
        #print str(area) + "\n"
        writer = csv.writer(f)
        writer.writerow( (time, element['date'], element['time'], area, name))
finally:
    f.close()

print open("C:/scripts/testing/testingnea.csv", 'rt').read()
I managed to get the data into a CSV; however, when I run this part of the code:

#print area name
areas = soup.select('area')
for data in areas:
    name = (data.get('name'))
    print name
This is the result:
[![This is what I
got](http://i.stack.imgur.com/8T1gg.png)](http://i.stack.imgur.com/8T1gg.png)
Apparently, my loop is not working: it keeps printing the last area of the
last record over and over again.
**EDIT**: I tried looping through the data for each area in the list:

for area in area_attrs_li:
    name = (area.get('name'))
    print name

However, it's still not looping.
I'm not sure where the code goes wrong :/
Answer: The problem is in the line `writer.writerow( (time, element['date'],
element['time'], area, name))`: the `name` never changes.
A way to fix it:
try:
    for index, area in enumerate(area_attrs_li):
        # print str(area) + "\n"
        writer = csv.writer(f)
        writer.writerow((time, element['date'], element['time'], area, areas[index].get('name')))
finally:
    f.close()
|
Regex not working as required
Question: Here is my HTML code:
<ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="../searchbyaddress.aspx">Search by address</a></li>
<li><a href="../searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
Here is my Python code:
import re, os
from urllib.parse import urlparse
url = "http://www.phonebook.com.pk/dynamic/search.aspx?searchtype=cat&class_id=2566"
path = urlparse(url)
lpath = os.path.dirname(path.path)
html = u"<ul class=\"hide menuSearchType\">\n <li><a href=\"../../dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"../../searchbyphone.aspx\">Search by phone</a></li>\n <li><a href=\"../searchbyaddress.aspx\">Search by address</a></li>\n <li><a href=\"../searchbybrand.aspx\">Search by brand</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"/advertisement-center/\">Advertise with us</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"//fonts.googleapis.com/css?family=Open+Sans\">Find a Person</a></li>\n <li><a href=\"dynamic/city_select.aspx\">Search by city</a></li>\n <li><a href=\"searchbybrand.aspx\">Search by brand</a></li>\n</ul>"
linkList1 = re.findall(re.compile(u'(?<=href=")../.*?(?=")'), str(html))
for link1 in linkList1:
    html = re.sub(link1, path.scheme + "://" + os.path.normpath(path.netloc + os.path.abspath(lpath + "/" + link1)), str(html))
print (html)
The problem is that it detects the links with "../" as intended, but "../../" links are
changed too. Is there any way I can restrict my regex to pick only the URLs with a
single "../"?
Expected output:
<ul class="hide menuSearchType">
<li><a href="../../dynamic/city_select.aspx">Search by city</a></li>
<li><a href="../../searchbyphone.aspx">Search by phone</a></li>
<li><a href="http://www.phonebook.com.pk/searchbyaddress.aspx">Search by address</a></li>
<li><a href="http://www.phonebook.com.pk/searchbybrand.aspx">Search by brand</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="/advertisement-center/">Advertise with us</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="//fonts.googleapis.com/css?family=Open+Sans">Find a Person</a></li>
<li><a href="dynamic/city_select.aspx">Search by city</a></li>
<li><a href="searchbybrand.aspx">Search by brand</a></li>
</ul>
Answer: Using BeautifulSoup, as requested:

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(html)
all = soup.select('li')
for i in all:
    try:
        output = re.sub(r'(?is)(href="../)([^.])', r'href="http://www.phonebook.com.pk/\2', str(i))
    except:
        output = i
    print(output)
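If you'd rather keep the original regex approach, a negative lookahead can exclude the
double "../" case; a sketch (this pattern is my suggestion, not part of the answer above):

import re

# Match href values that start with exactly one "../": the lookbehind anchors
# the match right after href=", and the lookahead rejects a second "../".
pattern = re.compile(r'(?<=href=")\.\./(?!\.\./).*?(?=")')
linkList1 = pattern.findall(str(html))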
|
I need to figure out how to make my program repeat. (Python coding class)
Question: I am a beginner student in a Python coding class. I have the majority of the
program done and the program itself works; however, I need to figure out a way to make
the program ask whether the user wants a subtraction or an addition problem, and whether
the user would like another question. I asked my teacher for assistance and he hasn't
gotten back to me, so I'm simply trying to figure out and understand what
exactly I need to do.
import random

x = int(input("Please enter an integer: "))
if x < 0:
    x = 0
    print('Negative changed to zero')
elif x == 0:
    print('Zero')
elif x == 1:
    print('Single')
else:
    print('More')

maximum = 10 ** x;
maximum += 1

firstnum = random.randrange(1,maximum) # return an int from 1 to 100
secondnum = random.randrange(1, maximum)
compsum = firstnum + secondnum # adds the 2 random numbers together
# print (compsum) # print for troubleshooting

print("What is the sum of", firstnum, " +", secondnum, "?") # presents problem to user
added = int(input("Your answer is: ")) # gets user input

if added == compsum: # compares user input to real answer
    print("You are correct!!!")
else:
    print ("Sorry, you are incorrect")
Answer: You'll want to do something like this:

def foo():
    print("Doing good work...")

while True:
    foo()
    if input("Want to do more good work? [y/n] ").strip().lower() == 'n':
        break

I've seen this construct (i.e., using a `break`) used more often than using a
[sentinel](https://en.wikipedia.org/wiki/Sentinel_value) in Python, but either
will work. The sentinel version looks like this:

do_good_work = True
while do_good_work:
    foo()
    do_good_work = input("Want to do more good work? [y/n] ").strip().lower() != 'n'

You'll want to do more error checking than me in your code, too.
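Applied to the question's quiz, the same pattern also covers asking for addition or
subtraction. A sketch, under the assumption that the problem-generating code is moved
into a function:

import random

def ask_question():
    op = input("Addition or subtraction? [a/s] ").strip().lower()
    firstnum = random.randrange(1, 11)
    secondnum = random.randrange(1, 11)
    if op == 's':
        answer, word = firstnum - secondnum, "difference"
    else:
        answer, word = firstnum + secondnum, "sum"
    guess = int(input("What is the %s of %d and %d? " % (word, firstnum, secondnum)))
    print("You are correct!!!" if guess == answer else "Sorry, you are incorrect")

while True:
    ask_question()
    if input("Want another question? [y/n] ").strip().lower() == 'n':
        break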
|
Python searching multiple directories and reading select files
Question: I am looking for some help performing actions on a set of files in two
different directories using Python.
I am attempting to:
1. Search two different directories
2. Find the 15 last modified files (comparing files in both directories)
3. Read all 15 recently modified files line by line
I can accomplish reading through one file directory using glob. However, I
cannot specify multiple directories. Is there another way I can accomplish
this?
Below is my code which accomplishes grabbing the latest 15 files in dir1 but
not dir2.
dir1 = glob.iglob("/dir1/data_log.*")
dir2 = glob.iglob("/dir2/message_log.*")
latest = heapq.nlargest(10, dir1, key=os.path.getmtime)

for fn in latest:
    with open(fn) as f:
        for line in f:
            print(line)
Answer: I'm not sure this is what you are after but if you were to use `glob.glob`
instead of `glob.iglob`, you could do
dir1 = glob.glob("/dir1/data_log.*")
dir2 = glob.glob("/dir2/message_log.*")
latest=heapq.nlargest(10, dir1+dir2, key=os.path.getmtime)
And actually, if you don't like the idea of using lists (`glob.glob`) instead
of generators (`glob.iglob`), you can do
from itertools import chain
dir1 = glob.iglob("/dir1/data_log.*")
dir2 = glob.iglob("/dir2/message_log.*")
latest=heapq.nlargest(10, chain(dir1, dir2), key=os.path.getmtime)
|
Running a Python script in a Makefile
Question: I have a Python script (scr1.py) that calls another Python script (scr2.py);
they are both on the same path. When I open CMD and run scr1.py, everything
works perfectly.
I want to run scr1.py inside a Makefile that is NOT on the same path as the
scripts. scr1.py executes but fails when calling scr2.py. I think the
problem is that scr1.py searches the Makefile directory instead of the scripts
directory. How can I fix it?
The code:
import os
import scr2

fileinfo = os.stat('scr2.py')
if os.path.isfile("infofile.txt"):
    file = open("infofile.txt",'r')
    lm = file.read()
    file.close()
    if lm == str(fileinfo.st_mtime):
        pass  # Do_Something
    else:
        scr2
else:
    scr2
    file = open("infofile.txt",'w')
OK, I just found another problem. When you import another file, it runs that
file AT THE IMPORT LINE! This means that this isn't the right way to import a
file, unless you use the import line where you want to run the script, but that
is so ugly.
Answer: Since both files are in the same directory, how about just prepending the
current working directory to the second file name?
import os
import scr2

fileinfo = os.stat(os.getcwd() + '/scr2.py')
if os.path.isfile("infofile.txt"):
    file = open("infofile.txt",'r')
    lm = file.read()
    file.close()
    if lm == str(fileinfo.st_mtime):
        pass  # Do_Something
    else:
        scr2
else:
    scr2
    file = open("infofile.txt",'w')
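One caveat on the answer above (my observation): when make runs the script from a
different directory, `os.getcwd()` returns the Makefile's directory, not the scripts'.
A path derived from the script file itself is more robust; a sketch:

import os

# Directory containing scr1.py itself, regardless of where make was invoked.
script_dir = os.path.dirname(os.path.abspath(__file__))
fileinfo = os.stat(os.path.join(script_dir, 'scr2.py'))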
|
Caffe to Tensorflow (Kaffe by Ethereon) : TypeError: Descriptors should not be created directly, but only retrieved from their parent
Question: I wanted to use the wonderful package caffe-tensorflow by ethereon and I ran
into the same problem described in [this closed
issue](https://github.com/ethereon/caffe-tensorflow/issues/10):
When I run the example or try to `import caffepb` I got the error message:
>>> import caffepb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "caffepb.py", line 28, in <module>
type=None),
File "/home/me/anaconda/python2.7/site-packages/google/protobuf/descriptor.py", line 652, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors should not be created directly, but only retrieved from their parent.
I am using TensorFlow 0.7.0 on a 64-bit Ubuntu 14.04 Linux machine with
protobuf 3.0.0b2.post (but it also happened with 3.0.0a4 and 3.0.0b2), with
Python 2.7 and Anaconda.
I tried to reinstall protobuf and tensorflow numerous times as I figured it
was quite possibly a conflict between different protobuf installs (or at least
that was the conclusion of the github issue) but I couldn't make it work even
after doing a combination of pip install protobuf, pip uninstall protobuf or
directly installing protobuf .whl.
What would you advise ?
**EDIT:** Using a virtual environment may be a solution but I would like to
avoid it if possible
Answer: I met the same problem. My solution (workaround) was the same as one of
the comments in the issue: **install/run tf and protobuf3 (and anything else) in a
virtualenv**.
I have no further idea what exactly the problem is. This is just one
workaround that you can give a try.
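For completeness, a minimal sequence for trying this in a virtualenv (package versions
and names here are assumptions; adjust to the wheel that matches your platform):

$ pip install virtualenv
$ virtualenv caffe-tf
$ source caffe-tf/bin/activate
(caffe-tf)$ pip install protobuf==3.0.0b2
(caffe-tf)$ pip install tensorflow   # or the TF 0.7.0 wheel URL for your platform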
|
need to package jinja2 template for python
Question: (UPDATE: I've made a better question with a better answer
[here](http://stackoverflow.com/questions/38642557/how-to-load-jinja-template-
directly-from-filesystem). I was going to delete this question, but some of
the answers might prove useful to future searchers.)
My question is just about identical to
[this](http://stackoverflow.com/questions/29150156/how-to-make-a-python-
package-containing-only-jinja-templates), but that answer is ugly (requires a
dir structure including `sharedtemplates/templates/templates/`), incomplete as
posted (user "answered" his own question), and assumes some knowledge I don't
have.
I'm working on my first python-backed web application. The javascript
component is well under development using a static HTML page. Now I want a
server-side python component to handle AJAX calls and render an HTML template
using jinja2.
I've had success with python before, creating GUI apps using tkinter/requests.
Love the language, but the python environment (environments?) is confusing.
I'm not working in a `virtualenv`.
According to [jinja2 docs](http://jinja.pocoo.org/docs/dev/api/#high-level-
api), HTML templates have to be in something called a package. Then you create
an `Environment` with a `PackageLoader` that knows the name of the package and
the template dir:
from jinja2 import Environment, PackageLoader
env = Environment(loader=PackageLoader('yourapplication', 'templates'))
So, here's my index.py (it's just a stub and doesn't even try to render
anything, but you can at least tell if it crashes).
#!/usr/bin/python
from jinja2 import Environment, PackageLoader # no prob, jinja2 correctly installed using pip
env = Environment(loader=PackageLoader('mypkg', 'template')) # causes server error
# if it doesn't crash, just put up a basic html page for now
print ("Content-type: text/html\r\n\r\n")
print("<html><head><title>hello</title></head><body>hello wuld</body></html>")
Here's the directory structure:
index.py
mypkg/
mypkg/template/index.html
mypkg/__init__.py # empty
Relevant line from error log:
ImportError: No module named mypkg
Maybe I need to structure this differently, and I'm pretty sure I'll need to
create and invoke a `setup.py` to install the module. That's part of what the
other answer left out: what's in `setup.py` and how does it work in this case?
I've looked at dozens of resources on `setup.py` and none of them seems to
pertain to the question of installing HTML templates.
Thanks in advance for your help!
UPDATE: fragilewindows pointed to a resource that tells about "developer
mode", which is probably part of the answer. The difficulty here is, I'm
looking to package this template for local deployment, not for distribution.
But 99% of the online documentation is about packaging projects for PyPi. I
don't need to package a "project", just a dinky HTML template. Indeed, the
only reason I need to package the template is because that's the default way
for `jinja2` to access templates (and I do want to go native in python).
I just need to convince the environment that "mypkg" is installed, and that
"template" is a directory within the install. You can see that my efforts so
far are naive; I expect the right answer will be correspondingly lightweight.
Answer: I don't know the process involved with packaging but I figure since Jinja2 is
written in Python, the process would be the same as packaging any other
application in Python.
Here are a few links that may be useful to you:
* The Hitchhiker's Guide to Python (great resource): [Packaging Explained](http://docs.python-guide.org/en/latest/shipping/packaging/)
* Alternatives to Packaging: [freeze your application](http://docs.python-guide.org/en/latest/shipping/freezing/#freezing-your-code-ref)
* [Python Packaging User Guide](https://packaging.python.org/distributing/) (probably the most useful to you)
I hope this helps.
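That said, if packaging is needed only to satisfy `PackageLoader`, jinja2 also ships a
`FileSystemLoader` that reads templates straight from a directory, with no package at
all; a minimal sketch using the question's own layout:

from jinja2 import Environment, FileSystemLoader

# Load templates from the mypkg/template directory next to index.py;
# no installed package is required.
env = Environment(loader=FileSystemLoader('mypkg/template'))
template = env.get_template('index.html')
print(template.render())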
|
Python NLTK Word Tokenize UnicodeDecode Error
Question: I get the error when trying the below code. I try to read from a text file and
tokenize the words using nltk. Any ideas? The text file can be found
[here](https://pythonprogramming.net/static/downloads/short_reviews/positive.txt)
from nltk.tokenize import word_tokenize
short_pos = open("./positive.txt","r").read()
#short_pos = short_pos.decode('utf-8').lower()
short_pos_words = word_tokenize(short_pos)
Error:
Traceback (most recent call last):
File "sentimentAnalysis.py", line 19, in <module>
short_pos_words = word_tokenize(short_pos)
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
return [token for sent in sent_tokenize(text, language)
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 91, in sent_tokenize
return tokenizer.tokenize(text)
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1226, in tokenize
return list(self.sentences_from_text(text, realign_boundaries))
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1274, in sentences_from_text
return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1265, in span_tokenize
return [(sl.start, sl.stop) for sl in slices]
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1304, in _realign_boundaries
for sl1, sl2 in _pair_iter(slices):
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 311, in _pair_iter
for el in it:
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1280, in _slices_from_text
if self.text_contains_sentbreak(context):
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1325, in text_contains_sentbreak
for t in self._annotate_tokens(self._tokenize_words(text)):
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1460, in _annotate_second_pass
for t1, t2 in _pair_iter(tokens):
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 310, in _pair_iter
prev = next(it)
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 577, in _annotate_first_pass
for aug_tok in tokens:
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 542, in _tokenize_words
for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xed in position 6: ordinal not in range(128)
Thanks for your support.
Answer: Looks like this text is encoded in Latin-1. So this works for me:

import codecs

with codecs.open("positive.txt", "r", "latin-1") as inputfile:
    text = inputfile.read()
short_pos_words = word_tokenize(text)
print len(short_pos_words)
You can test for different encodings by e.g. looking at the file in a good
editor like TextWrangler. You can
1) open the file in different encodings to see which one looks good, and
2) look at the character that caused the issue. In your case, that is the
character in _position 4645_, which happens to be an accented word from a
Spanish review. That is not part of ASCII, so that doesn't work; it's also not
a valid codepoint in UTF-8.
|
Tensorflow : how to insert custom input to existing graph?
Question: I have downloaded a tensorflow GraphDef that implements a VGG16 ConvNet, which
I use doing this :
Pl['images'] = tf.placeholder(tf.float32,
                              [None, 448, 448, 3],
                              name="images") #batch x width x height x channels

with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
tf.import_graph_def(graph_def, input_map={"images": Pl['images']})
Besides, I have image features that are homogeneous to the output of
`"import/pool5/"`.
How can I tell my graph that I don't want to use its input `"images"`, but the
tensor `"import/pool5/"` as input?
Thanks!
**EDIT**
OK I realize I haven't been very clear. Here is the situation:
I am trying to use [this implementation](https://github.com/yuxng/tensorflow/)
of ROI pooling, using a pre-trained VGG16, which I have in the GraphDef
format. So here is what I do:
First of all, I load the model:
tf.reset_default_graph()

with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
graph = tf.get_default_graph()
Then, I create my placeholders
images = tf.placeholder(tf.float32,
[None, 448, 448, 3],
name="images") #batch x width x height x channels
boxes = tf.placeholder(tf.float32,
[None,5], # 5 = [batch_id,x1,y1,x2,y2]
name = "boxes")
And I define the output of the first part of the graph to be conv5_3/Relu
tf.import_graph_def(graph_def,
input_map={'images':images})
out_tensor = graph.get_tensor_by_name("import/conv5_3/Relu:0")
So, `out_tensor` is of shape `[None,14,14,512]`
Then, I do the ROI pooling:
[out_pool,argmax] = module.roi_pool(out_tensor,
boxes,
7,7,1.0/1)
With `out_pool.shape = N_Boxes_in_batch x 7 x 7 x 512`, which is homogeneous
to `pool5`. I would then like to feed `out_pool` as an input to the op that
comes just after `pool5`, so it would look like
tf.import_graph_def(graph.as_graph_def(),
input_map={'import/pool5':out_pool})
But it doesn't work, I have this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-89-527398d7344b> in <module>()
5
6 tf.import_graph_def(graph.as_graph_def(),
----> 7 input_map={'import/pool5':out_pool})
8
9 final_out = graph.get_tensor_by_name("import/Relu_1:0")
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict)
333 # NOTE(mrry): If the graph contains a cycle, the full shape information
334 # may not be available for this op's inputs.
--> 335 ops.set_shapes_for_outputs(op)
336
337 # Apply device functions for this op.
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
1610 raise RuntimeError("No shape function registered for standard op: %s"
1611 % op.type)
-> 1612 shapes = shape_func(op)
1613 if len(op.outputs) != len(shapes):
1614 raise RuntimeError(
/home/hbenyounes/vqa/roi_pooling_op_grad.py in _roi_pool_shape(op)
13 channels = dims_data[3]
14 print(op.inputs[1].name, op.inputs[1].get_shape())
---> 15 dims_rois = op.inputs[1].get_shape().as_list()
16 num_rois = dims_rois[0]
17
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py in as_list(self)
745 A list of integers or None for each dimension.
746 """
--> 747 return [dim.value for dim in self._dims]
748
749 def as_proto(self):
TypeError: 'NoneType' object is not iterable
Any clue ?
Answer: What I would do is something along those lines:

- First retrieve the names of the tensors representing the weights and biases of the 3 fully connected layers coming after pool5 in VGG16. To do that I would inspect `[n.name for n in graph.as_graph_def().node]`. (They probably look something like import/locali/weight:0, import/locali/bias:0, etc.)
- Put them in a python list:

weights_names = ["import/local1/weight:0", "import/local2/weight:0", "import/local3/weight:0"]
biases_names = ["import/local1/bias:0", "import/local2/bias:0", "import/local3/bias:0"]

- Define a function that looks something like:

def pool5_tofcX(input_tensor, layer_number=3):
    flatten = tf.reshape(input_tensor, (-1, 7*7*512))
    tmp = flatten
    for i in xrange(layer_number):
        tmp = tf.matmul(tmp, graph.get_tensor_by_name(weights_names[i]))
        tmp = tf.nn.bias_add(tmp, graph.get_tensor_by_name(biases_names[i]))
        tmp = tf.nn.relu(tmp)
    return tmp

Then define the tensor using the function:

wanted_output = pool5_tofcX(out_pool)

Then you are done!
|
Download images automatically
Question: I have written this piece of Python code, which downloads a number of images
from a repository and saves them in a specified folder. The code looks
like this:
import urllib.request
import cv2
import numpy as np
import os

def store_raw_images():
    neg_images_link = 'http://image-net.org/api/text/imagenet.synset.geturls?wnid=n00464651'
    neg_images_urls = urllib.request.urlopen(neg_images_link).read().decode()
    if not os.path.exists('neg'):
        os.makedirs('neg')
    pic_num = 1
    for i in neg_images_urls.split('\n'):
        try:
            print(i)
            urllib.request.urlretrieve(i, "neg/{}.jpg".format(pic_num))
            img = cv2.imread("neg/{}.jpg".format(pic_num) + cv2.IMREAD_GRAYSCALE)
            resized_image = cv2.resize(img, (100, 100))
            cv2.imwrite("neg/{}.jpg".format(pic_num), resized_image)
            pic_num = pic_num + 1
            print(pic_num)
        except Exception as e:
            print(str(e))

store_raw_images()
For some reason the images are replaced and I do NOT see all images. I keep
seeing one image `1.jpg` and all the images seem to replaced, though I expect
the name of the images to go `1.jpg`, `2.jpg` , ... .
I also see this warning/error but I am not sure if it is relevant to this
problem or not.
Can't convert 'int' object to str
http://www.azjeugd.nl/site/modules/xcgal/albums/20082009seizoen/a1/groningen_thuis/IMG_7798.jpg
HTTP Error 403: Forbidden
http://www.ga-eagles.nl/images/duels1e0809/gaetel6.jpg
Where do you think the problem lies?
Note that I am incrementing the image number:
pic_num = pic_num + 1
Answer: You have everything in one `try/except` block. Assuming `cv2.imwrite` fails
but all the other lines are executed without any problems, your code will
never reach `pic_num = pic_num + 1`. Try rearranging your code so that you first
increase `pic_num`, and check which line actually gives you the error.
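As an aside (my observation, not part of the original answer): the "Can't convert 'int'
object to str" message comes from the `imread` line, where the integer flag
`cv2.IMREAD_GRAYSCALE` is concatenated onto the filename with `+`. It should be passed
as `imread`'s second argument:

# ',' passes IMREAD_GRAYSCALE as imread's flag argument instead of
# adding an int to the filename string.
img = cv2.imread("neg/{}.jpg".format(pic_num), cv2.IMREAD_GRAYSCALE)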
|
Execute IPython notebook cell from python script
Question: In the IPython notebook, you can execute an outside script, say `test.py`,
using the run magic:
%run test.py
Is there a way to do the opposite, i.e. given an IPython notebook, accessing
and then running a particular cell inside it from a python script?
Answer: The "ipynb" file of Jupyter (or IPython) is a JSON file, and the cells are
under the key "cells" (["cells"]). Then you choose the number of the cell ([0]) and,
to get the source, choose "source" (["source"]). In return you get an array with one
element, so you need to take the first element ([0]).
>>> import json
>>> from pprint import pprint
>>> with open('so1.ipynb', 'r') as content_file:
... content = content_file.read()
...
>>> data=json.loads(content)
>>> data["cells"][0]["source"][0]
'1+1'
>>> eval(data["cells"][0]["source"][0])
2
>>> data["cells"][1]["source"][0]
'2+2'
>>> eval(data["cells"][1]["source"][0])
4
**EDIT:**
To run other python scripts in cells that have %run:
os.system(data["cells"][2]["source"][0].replace("%run ",""))
Or replace it with the following if you have -i option:
execfile(data["cells"][2]["source"][0].replace("%run -i ",""))
See [Run a python script from another python script, passing in
args](http://stackoverflow.com/questions/3781851/run-a-python-script-from-
another-python-script-passing-in-args) for more info.
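If the `nbformat` package is available, the same JSON can be read through its API
instead of by hand; a sketch (an alternative I'm suggesting, not part of the answer
above):

import nbformat

# Read the notebook and evaluate the source of the first code cell.
nb = nbformat.read('so1.ipynb', as_version=4)
code_cells = [c for c in nb.cells if c.cell_type == 'code']
print(eval(code_cells[0].source))   # e.g. '1+1' -> 2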
|
Python binding of functions within a c++ program
Question: I have a program written in c++ that functions on its own; however, we want to
make it accessible to Python. Specifically, we have several functions that are
more efficient in c++, but we do a lot of other things with the output using
Python scripts. I don't want to rewrite the whole of main() in Python as we
make use of Boost's root finding algorithms and other functionalities that'd
be a pain to do in Python.
Is it possible to add Python binding to these functions while keeping the c++
main()? I've never done Python binding before, but I've looked at
[Boost.python](http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/tutorial/index.html)
since we're already using Boost. Most of the examples use c++
functions/classes in a hpp file and embed them into a python program, which
isn't exactly what we want.
What we want is to keep our c++ program as a standalone so it can run as it is
if users want, and also allow users to call these functions from a Python
program. Being able to use the same Makefile and exe would be great. We don't
really want to make a separate c++ library containing the bound functions;
we're not interested in making a pythonic version of the code, merely allowing
access to these useful functions.
Thanks
Answer: We have an extensive c++ library which we made available to python through use
of a python wrapper class which calls an interface that we defined in boost
python.
One python class handles all the queries in a pythonic manner, by calling a
python extension module written in c++ with boost python. The python extension
executes c++ code, so it can link and use anything from the original library.
You said your c++ is an executable, though. Why can't you use system calls to
launch a shell process? You can do that in any language, including python.
What I thought was you want to access individual functions, which means you
need all your functions in a static library.
You build your c++ exe normally, linking the common code. You make a "boost
python extension module" which links the common code, and can be imported by a
python script. And of course a unit test executable, which links and tests the
common code. My preference is that the common code be a stand-alone static lib
(use -fPIC if there's a POSIX gcc build).
|
Custom gradient for a chain of ops
Question: I've got a chain of standard TensorFlow operations, and I need to specify a
custom gradient for this chain as a whole.
Say that, in the example below, these operations are grouped in a single
Python function: 'my_op'. What I'm trying to do is to specify a custom
gradient for 'my_op'. I had a look at RegisterGradient, gradient_override_map,
and tf.Graph.create_op, but I couldn't find any simple example about how to
use them to define a custom gradient for a group of ops without rewriting the
full operation chain in C++.
import numpy as np
import tensorflow as tf

n = 2
m = 3

x = np.random.normal(size=(1, n))

A = tf.Variable(tf.truncated_normal(shape=(n, m), dtype=tf.float32))
b = tf.Variable(tf.zeros(shape=(1, m), dtype=tf.float32))

def my_op(a):
    return tf.add(tf.matmul(a, A), b)

x_placeholder = tf.placeholder(tf.float32, shape=[1, n])
t = my_op(tf.stop_gradient(x_placeholder))

grad = tf.gradients(t, [A])

sess = tf.Session()
sess.run(tf.initialize_all_variables())
result = sess.run(grad, feed_dict={x_placeholder: x})
print(result)
sess.close()
sess.close()
Answer: As far as I can see, the best way to define a custom gradient (i.e.,
apply some modification to the plain gradients) is to add a new custom op in
tensorflow following
[this](https://www.tensorflow.org/versions/master/how_tos/adding_an_op/index.html#adding-
a-new-op). As you can see, for a custom op outputting the input, you can define
its gradients in python by making use of
`@ops.RegisterGradient("MyOp")`.
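A commonly cited workaround for a chain of standard ops (a sketch, not guaranteed
against the question's exact TF 0.x version) is to append a `tf.identity` to the chain
and swap in a registered gradient for `Identity` via `gradient_override_map`; this
modifies the gradient flowing through that point rather than rewriting the chain's
internal gradients:

import tensorflow as tf
from tensorflow.python.framework import ops

@ops.RegisterGradient("MyOpGrad")
def _my_op_grad(op, grad):
    # Custom backward pass applied where the identity sits; here, just scale it.
    return grad * 0.5

g = tf.get_default_graph()
t = my_op(x_placeholder)
with g.gradient_override_map({"Identity": "MyOpGrad"}):
    t = tf.identity(t)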
|
Nested `ImportError` on Py3 but not on Py2
Question: I'm having trouble understanding how nested imports work in a python project.
For example:
test.py
package/
    __init__.py
    package.py
    subpackage/
        __init__.py
`test.py`:
import package
`package/__init__.py`:
from .package import functionA
`package/package.py`:
import subpackage

def functionA():
    pass
In Python 3.5 when I run `test.py` I get the following error, but no error in
Python 2.7:
C:\Users\Patrick\Anaconda3\python.exe C:/Users/Patrick/Desktop/importtest/test.py
Traceback (most recent call last):
File "C:/Users/Patrick/Desktop/importtest/test.py", line 1, in <module>
import package
File "C:\Users\Patrick\Desktop\importtest\package\__init__.py", line 1, in <module>
from .package import functionA
File "C:\Users\Patrick\Desktop\importtest\package\package.py", line 1, in <module>
import subpackage
ImportError: No module named 'subpackage'
However if I run `package.py` with Python 3.5. I get no error at all.
This seems strange to me: when `package.py` is run on its own, the line
`import subpackage` works, but when it is being 'run' (I don't know if this is
the right terminology here) through the nested import, the same line cannot
find `subpackage`.
Why are there differences between Python 2.7 and 3.5 in this case, and how can
this be resolved in a way that works for both 2.7.x and 3.x?
I think this might be because `import subpackage` counts as an implicit
relative import in the nested case but not when `package.py` is run directly;
but if I do `import .subpackage` instead, I get this error on both 2.7 and 3.5:
C:\Users\Patrick\Anaconda3\python.exe C:/Users/Patrick/Desktop/importtest/test.py
Traceback (most recent call last):
File "C:/Users/Patrick/Desktop/importtest/test.py", line 1, in <module>
import package
File "C:\Users\Patrick\Desktop\importtest\package\__init__.py", line 1, in <module>
from .package import functionA
File "C:\Users\Patrick\Desktop\importtest\package\package.py", line 1
import .subpackage
^
SyntaxError: invalid syntax
Answer: You should use:

from . import subpackage

in `package/package.py`. Python 3 removed implicit relative imports (PEP 328): a bare
`import subpackage` only searches `sys.path`, whereas Python 2 also looked inside the
current package, which is why the same code worked under 2.7.
|
shell multipipe broken with multiple python scripts
Question: I am trying to get the stdout of a python script to be shell-piped in as stdin
to another python script like so:
find ~/test -name "*.txt" | python my_multitail.py | python line_parser.py
It should print an output but nothing comes out of it.
Please note that this works:
find ~/test -name "*.txt" | python my_multitail.py | cat
And this works too:
echo "bla" | python line_parser.py
my_multitail.py prints out the new content of the .txt files:
from multitail import multitail
import sys

filenames = sys.stdin.readlines()

# we get rid of the trailing '\n'
for index, filename in enumerate(filenames):
    filenames[index] = filename.rstrip('\n')

for fn, line in multitail(filenames):
    print '%s: %s' % (fn, line),
    sys.stdout.flush()
When a new line is added to the .txt file ("hehe") then my_multitail.py
prints:
> /home/me/test2.txt: hehe
line_parser.py simply prints out what it gets on stdin:
import sys

for line in sys.stdin:
    print "line=", line
There is something I must be missing. Please community help me :)
Answer: There's a hint if you run your `line_parser.py` interactively:
$ python line_parser.py
a
b
c
line= a
line= b
line= c
Note that I hit ctrl+D to provoke an EOF after entering the 'c'. You can see
that it's slurping up all the input before it starts iterating over the lines.
Since this is a pipeline and you're continuously sending output through to it,
this doesn't happen and it never starts processing. You'll need to choose a
different way of iterating over `stdin`, for example:
import sys

line = sys.stdin.readline()
while line:
    print "line=", line
    line = sys.stdin.readline()
|
Python 3.x tkinter Place Entries on top of eachother
Question:
import tkinter
from tkinter import *

root = tkinter.Tk()
root.title("Gmail App")

def login():
    L1 = Label(root, text="Email")
    L1.pack( side = LEFT)
    E1 = Entry(root, bd =5)
    E1.pack(side = LEFT)
    L1 = Label(root, text="Password")
    L1.pack( side = RIGHT)
    E1 = Entry(root, bd =5)
    E1.pack(side = RIGHT)

login()
root.mainloop()
I have this code, and I'd like to place the 'email' entry above the 'password'
entry. How might I do this? Thanks.
I'm very new to tkinter... where might I learn more?
Answer: I suggest using the grid layout manager rather than pack.

from tkinter import *

root = Tk()
root.title("Gmail App")

def login():
    L1 = Label(root, text="Email")
    E1 = Entry(root, bd=5)
    L2 = Label(root, text="Password")
    E2 = Entry(root, bd=5)
    L1.grid(row=0, column=0)
    L2.grid(row=3, column=0)
    E1.grid(row=2, column=0)
    E2.grid(row=4, column=0)

login()
root.mainloop()
And this tutorial will be helpful for newcomers:
<https://pythonprogramming.net/python-3-tkinter-basics-tutorial/>
|
Calling another method to another class
Question: I'm a newbie at Python programming and I'm confused: why can't I call a
method from another class? This is my source file: 8_turunan lanjut.py
class Karyawan(object):
'untuk kelas karyawan'
jml_karyawan = 0 # Class variable
# constructor
def __init__(self, kid, nama, jabatan):
self.kid = kid
self.nama = nama
self.jabatan = jabatan
Karyawan.jml_karyawan += 1
# method
def infoKaryawan(self):
print "Karyawan baru masuk"
print "==================="
print "ID : %s " % self.kid
print "Nama : %s " % self.nama
print "Jabatan : %s " % self.jabatan
Second source file: 9_turunan advance.py

# how to access/use the class/create an object
class cobaa():
    obj = Karyawan("K001", "Ganjar", "Teknisi")
    obj.infoKaryawan()

    # add a new employee
    obj2 = Karyawan("K002", "Nadya", "Akunting")
    obj2.infoKaryawan()

    # show total employees
    print "-----------------------------"
    print "Total Karyawan : %d " % Karyawan.jml_karyawan
How can I call the method `__init__` and infoKaryawan in class cobaa in file
9_turunan advance.py? I already put `from percobaan.Karyawan import __init__` in file
9_turunan advance and it's wrong; I don't know where the problem in my source is.
Here is my directory structure: [directory
structure](http://i.stack.imgur.com/anIXs.png)
Answer: Your indentation is off in your class. It should read as follows:
class Karyawan(object):
    'untuk kelas karyawan'
    jml_karyawan = 0 # Class variable

    def __init__(self, kid, nama, jabatan):
        self.kid = kid
        self.nama = nama
        self.jabatan = jabatan
        Karyawan.jml_karyawan += 1

    def infoKaryawan(self):
        print "Karyawan baru masuk"
        print "==================="
        print "ID : %s " % self.kid
        print "Nama : %s " % self.nama
        print "Jabatan : %s " % self.jabatan
Then, in your other file, just import it as such: `from filename import
Karyawan`
Good luck!
|
Returning to the main menu in my game - Python
Question: I am creating a Rock, Paper, Scissors game. The game has a main menu which I
need to be able to return to from each sub menu. I've tried a few different methods
I could think of, as well as looked here and elsewhere online, to
determine a way of solving my problem.
I want the user to be able to select an option from the main menu, go to the
selected sub menu, then be prompted with an option to return to the main menu.
For example, Select the rules sub menu, then return to the main menu. Or,
select to play a round of Rock, Paper, Scissors, then select to play again or
return back to the main menu.
# random integer
from random import randint

# list for weapon
WEAPON = ["Rock", "Paper", "Scissors"]

# module to run the program
#def main():
#    menu()

def main():
    menuSelect = ""
    print("\tRock, Paper, Scissors!")

    # main menu
    print("\n\t\tMain Menu")
    print("\t1. See the rules")
    print("\t2. Play against the computer")
    print("\t3. Play a two player game")
    print("\t4. Quit")

    menuSelect = int(input("\nPlease select one of the four options "))
    while menuSelect < 1 or menuSelect > 4:
        print("The selection provided is invalid.")
        menuSelect = int(input("\nPlease select one of the four options "))

    if menuSelect == 1:
        rules()
    elif menuSelect == 2:
        onePlayer()
    elif menuSelect == 3:
        twoPlayer()
    elif menuSelect == 4:
        endGame()

# display the rules to the user
def rules():
    print("\n\t\tRules")
    print("\tThe game is simple:")
    print("\tPaper Covers Rock")
    print("\tRock Smashes Scissors")
    print("\tScissors Cut Paper")
    print("")

# one player mode
def onePlayer():
    again = ""
    player = False
    print("\n\tPlayer VS Computer")
    while player == False:
        player = input("\nSelect your weapon: Rock, Paper, or Scissors\n")
        player = player.lower()
        computer = WEAPON[randint(0,2)]
        computer = computer.lower()
        if player == computer:
            print(player," vs ",computer)
            print("It's a tie!\n")
        elif player == "rock":
            if computer == "paper":
                print(player," vs ",computer)
                print("Paper covers rock! You lose!\n")
            else:
                print("Rock smashes",computer,". You win!\n")
        elif player == "paper":
            if computer == "scissors":
                print(player," vs ",computer)
                print("Scissors cut paper! You lose!\n")
            else:
                print("Paper covers",computer,". You win!\n")
        elif player == "scissors":
            if computer == "rock":
                print(player," vs ",computer)
                print("Rock smashes scissors! You lose!\n")
            else:
                print("Scissors cut",computer,". You win!\n")
        else:
            print("invalid input")

        again = input("Would you like to play again? Yes or no\n")
        again = again.lower()
        if again == "yes" or "y":
            player = False
        elif again == "no" or "n":
            main()

# two player mode
def twoPlayer():
    fight = False
    player1 = ""
    player2 = ""
    print("\n\tPlayer VS Player")
    while fight == False:
        player1 = input("\nSelect your weapon: Rock, Paper, or Scissors\n")
        player1 = player1.lower()
        player2 = input("\nSelect your weapon: Rock, Paper, or Scissors\n")
        player2 = player2.lower()
        if player1 == player2:
            print(player1," vs ",player2)
            print("It's a tie!\n")
        elif player1 == "rock":
            if player2 == "paper":
                print(player1," vs ",player2)
                print("Paper covers rock! Player 2 wins!\n")
            else:
                print("Rock smashes",player2,". Player 1 wins!\n")
        elif player1 == "paper":
            if player2 == "scissors":
                print(player1," vs ",player2)
                print("Scissors cut paper! Player 2 wins!\n")
            else:
                print("Paper covers",player2,". Player 1 wins!\n")
        elif player1 == "scissors":
            if player2 == "rock":
                print(player1," vs ",player2)
                print("Rock smashes scissors! Player 2 wins!\n")
            else:
                print("Scissors cut",player2,". Player 1 wins!\n")
        else:
            print("invalid input")

        again = input("Would you like to play again? Yes or no\n")
        again = again.lower()
        if again == "yes" or "y":
            player = False
        elif again == "no" or "n":
            main()

def endGame():
    print("Thank you for playing!")

main()
Currently my only test is within the onePlayer() function. The idea behind my
code is to ask the user if they want to continue playing. If they don't, then I
want the program to bring them back to the main menu.
Answer: Use a try and except block. If they say no, your code should call quit(). If
they say yes, use a continue statement and it will restart the whole thing.
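A loop-based alternative (my sketch, not the answer above): wrap the menu in a `while`
loop and have the sub menus `return` instead of calling `main()` recursively, so
finishing any sub menu naturally falls back to the menu:

def main():
    while True:
        print("\n\t\tMain Menu")
        print("\t1. See the rules")
        print("\t2. Play against the computer")
        print("\t3. Play a two player game")
        print("\t4. Quit")
        menuSelect = int(input("\nPlease select one of the four options "))
        if menuSelect == 1:
            rules()        # when rules() returns, the loop shows the menu again
        elif menuSelect == 2:
            onePlayer()    # have onePlayer() return on "no" instead of calling main()
        elif menuSelect == 3:
            twoPlayer()
        elif menuSelect == 4:
            print("Thank you for playing!")
            break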
|
How to paste a PNG image with transparency to another image in PIL without white pixels?
Question: I have two images, a background and a PNG image with transparent pixels. I am
trying to paste the PNG onto the background using Python-PIL but when I paste
the two images I get white pixels around the PNG image where there were
transparent pixels.
My code:
import os
from PIL import Image, ImageDraw, ImageFont
filename='pikachu.png'
ironman = Image.open(filename, 'r')
filename1='bg.png'
bg = Image.open(filename1, 'r')
text_img = Image.new('RGBA', (600,320), (0, 0, 0, 0))
text_img.paste(bg, (0,0))
text_img.paste(ironman, (0,0))
text_img.save("ball.png", format="png")
My images:
[![enter image description
here](http://i.stack.imgur.com/BcUf3m.png)](http://i.stack.imgur.com/BcUf3m.png)
[![enter image description
here](http://i.stack.imgur.com/tC7som.png)](http://i.stack.imgur.com/tC7som.png)
**my output image:**
[![enter image description
here](http://i.stack.imgur.com/IZtSP.png)](http://i.stack.imgur.com/IZtSP.png)
How can I have transparent pixels instead of white?
Answer: You just need to specify the image as the mask as follows in the paste
function:
import os
from PIL import Image, ImageDraw, ImageFont
filename='pikachu.png'
ironman = Image.open(filename, 'r')
filename1='bg.png'
bg = Image.open(filename1, 'r')
text_img = Image.new('RGBA', (600,320), (0, 0, 0, 0))
text_img.paste(bg, (0,0))
text_img.paste(ironman, (0,0), mask=ironman)
text_img.save("ball.png", format="png")
Giving you:
[![paste with
transparency](http://i.stack.imgur.com/bwoXS.png)](http://i.stack.imgur.com/bwoXS.png)
|
Add elements of two list of dictionaries based on a key value pair match
Question: Given n lists with m dictionaries as their elements, I would like to produce a
new list, with a joined set of dictionaries.
l1 = [{"index":'a', "b":2,'c':9}, {"index":'b', "b":3,"c":5}, {"index":'c', "b":8,"c":8}]
l2 = [{"index":'a', "b":4,'c':8}, {"index":'b', "b":9,"c":10},{"index":None, "b":11,"c":10}]
I would like to produce a joined list:
l3 = [{"index":'a', "b":6, "c":17},
{"index":'b', "b":12, "c":15},
{"index":'c', "b":8, "c":8},
{"index":None, "b":11,"c":10}]
I have a method that can merge the two lists. But as you can see above, I also
wish to add the elements.
def merge_lists(l1, l2, key):
    merged = {}
    for item in l1+l2:
        if item[key] in merged:
            merged[item[key]].update(item)
        else:
            merged[item[key]] = item
    return [val for (_, val) in merged.items()]

l3 = merge_lists(l1,l2,'index')
What is the most efficient way to do this in Python?
Answer: You can use a `Counter` for something like this pretty easily...
from collections import defaultdict, Counter

def merge_lists(l1, l2):
    d = defaultdict(Counter)
    for sdict in l1 + l2:
        counter = Counter(sdict)
        d[counter.pop('index')] += counter
    lists = []
    for k, v in d.items():
        result = dict(v)
        result['index'] = k
        lists.append(result)
    return lists
l1 = [{"index":'a', "b":2,'c':9}, {"index":'b', "b":3,"c":5}, {"index":'c', "b":8,"c":8}]
l2 = [{"index":'a', "b":4,'c':8}, {"index":'b', "b":9,"c":10},{"index":None, "b":11,"c":10}]
print(merge_lists(l1, l2))
The great thing about adding `Counter` instances is that it pretty much just
does what you expect. If one counter doesn't have the key, it adds nothing to
the sum, but if both counters have the given key, then their values are added
and used as the resultant value at that key.
* * *
Note that the order of the merged lists is arbitrary (based on the ordering of
the `defaultdict`). If you need to preserve order in some way, you can either
`sort` after the fact or create a default ordered dict which will preserve the
order based on when the `index` was first seen in `l1` or `l2`:
class DefaultOrderedDict(collections.OrderedDict):
    def __init__(self, default_factory, *args, **kwargs):
        self.default_factory = default_factory
        super(DefaultOrderedDict, self).__init__(*args, **kwargs)

    def __missing__(self, key):
        self[key] = self.default_factory()
        return self[key]
(There are more "complete" default ordered dicts floating around on
ActiveState and StackOverflow, but this simple one _should_ work for your
problem at hand)
|
Python, how to move dict under itself with a new attribute? Eg dict['key'] = dict
Question: I have an array of very large dictionaries, and I need to put each dict
itself under a new key.
I know `dict['key'] = dict` won't work and will result in a recursive dict in
Python. Currently, I'm doing something like:

new_dict['key'] = old_dict

and it will waste memory. Is there a better way of doing it?
Answer: A `dict` _can_ hold a reference to itself:
>>> d = {'foo': 'bar'}
>>> d['self'] = d
>>> d
{'self': {...}, 'foo': 'bar'}
>>> d['self']['self']['self']['self']['foo']
'bar'
Of course, there are _some_ things that you can't do (e.g. dump it to `json`):
>>> import json
>>> json.dumps(d)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
ValueError: Circular reference detected
If you actually need to persist this in some way, then you probably don't have
another option other than to copy it when you add it to itself:
d = {'foo': 'bar'}
d['self'] = d.copy()
Or writing some sort of custom logic so that when deserializing, you replace
certain sentinel values with the `dict` itself (which may or may not work
depending on why you need this particular feature)
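A minimal sketch of that sentinel idea (the sentinel string here is made up):

    import json

    SELF = '__self__'

    d = {'foo': 'bar'}
    d['self'] = SELF             # placeholder instead of a real circular reference
    serialized = json.dumps(d)   # now serializes fine

    restored = json.loads(serialized)
    restored['self'] = restored  # re-link the cycle after deserializing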
|
How to turn off autoscaling in matplotlib.pyplot
Question: I am using matplotlib.pyplot in python to plot my data. The problem is the
image it generates seems to be autoscaled. How can I turn this off so that
when I plot something at (0,0) it will be placed fixed in the center?
Answer: You want the
[`autoscale`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.autoscale)
function:
from matplotlib import pyplot as plt
# Set the limits of the plot
plt.xlim(-1, 1)
plt.ylim(-1, 1)
# Don't mess with the limits!
plt.autoscale(False)
# Plot anything you want
plt.plot([0, 1])
|
Python issue gave up around 4am
Question: program that requests four number (integer or floating-point) from the user.
your program should compute the average the first three numbers and compare
the average to the fourth. if they are equal, your program should print
'Equal' on the screen.
import math
x1 = input('Enter first number: ')
x2 = input('Enter second number: ')
x3 = input('Enter third number: ')
x4 = input('Enter fourth number: ')
if ( x4 == (x1 + x2 + x3) / 3):
print('equal')
Error message:
if ( x4 == (x1 + x2 + x3) / 3):
TypeError: unsupported operand type(s) for /: 'str' and 'int'"
second error message after trying to convert to int:
x1 = int(input('Enter first number: ')) ValueError: invalid literal for int()
with base 10:
Answer: You are using arithmetic operators on strings -- `input()` returns a
string. Here is your fix. Note that your second error shows `int()` choking on
a non-integer literal, and the task allows floating-point numbers anyway, so
cast with `float` instead of `int`:

    x1 = float(input('Enter first number: '))
    x2 = float(input('Enter second number: '))
    x3 = float(input('Enter third number: '))
    x4 = float(input('Enter fourth number: '))
    if x4 == (x1 + x2 + x3) / 3:
        print('equal')

You simply have to cast the strings to numbers.
|
Python import error no module named bz2
Question: I have libbz2-dev installed; however, I am still getting the following import
error while importing gensim:
>>> import gensim
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/krishna/gensimenv/lib/python2.7/site-packages/gensim/__init__.py", line 6, in <module>
from gensim import parsing, matutils, interfaces, corpora, models, similarities, summarization
File "/home/krishna/gensimenv/lib/python2.7/site-packages/gensim/corpora/__init__.py", line 14, in <module>
from .wikicorpus import WikiCorpus
File "/home/krishna/gensimenv/lib/python2.7/site-packages/gensim/corpora/wikicorpus.py", line 21, in <module>
import bz2
ImportError: No module named bz2
Answer: you can try to do
pip install bz2file
|
Encode IP address using all printable characters in Python 2.7.x
Question: I would like to encode an IP address in as short a string as possible using
all the printable characters. According to
<https://en.wikipedia.org/wiki/ASCII#Printable_characters> these are codes
20hex to 7Ehex.
For example:
shorten("172.45.1.33") --> "^.1 9" maybe.
In order to make decoding easy I also need the length of the encoding always
to be the same. I also would like to avoid using the space character in order
to make parsing easier in the future.
> How can one do this?
I am looking for a solution that works in Python 2.7.x.
* * *
My attempt so far to modify Eloims's answer to work in Python 2:
First I installed the ipaddress backport for Python 2
(<https://pypi.python.org/pypi/ipaddress>) .
#This is needed because ipaddress expects character strings and not byte strings for textual IP address representations
from __future__ import unicode_literals
import ipaddress
import base64
#Taken from http://stackoverflow.com/a/20793663/2179021
def to_bytes(n, length, endianess='big'):
h = '%x' % n
s = ('0'*(len(h) % 2) + h).zfill(length*2).decode('hex')
return s if endianess == 'big' else s[::-1]
def encode(ip):
ip_as_integer = int(ipaddress.IPv4Address(ip))
ip_as_bytes = to_bytes(ip_as_integer, 4, endianess="big")
ip_base85 = base64.a85encode(ip_as_bytes)
return ip_base85
print(encode("192.168.0.1"))
This now fails because base64 doesn't have an attribute 'a85encode'.
Answer: An IP stored in binary is 4 bytes.
You can encode it in 5 printable ASCII characters using Base85.
Using more printable characters won't be able to shorten the resulting string
more than that.
import ipaddress
import base64
def encode(ip):
ip_as_integer = int(ipaddress.IPv4Address(ip))
ip_as_bytes = ip_as_integer.to_bytes(4, byteorder="big")
ip_base85 = base64.a85encode(ip_as_bytes)
return ip_base85
print(encode("192.168.0.1"))
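Note that `base64.a85encode` only exists on Python 3.4+, which is exactly why
the Python 2 attempt in the question fails. On 2.7 you can pack the address
with `struct` and do the base-85 digits by hand -- a minimal sketch, assuming
the standard Ascii85 alphabet (ASCII 33-117, which conveniently excludes the
space character):

    import struct

    def encode_py2(ip):
        # pack the dotted quad into a single 32-bit big-endian integer
        n = struct.unpack('>I', struct.pack('>BBBB', *map(int, ip.split('.'))))[0]
        # emit exactly 5 base-85 digits, most significant first
        chars = []
        for _ in range(5):
            n, r = divmod(n, 85)
            chars.append(chr(r + 33))
        return ''.join(reversed(chars))

    print(encode_py2('192.168.0.1'))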
|
Failing to import itertools in Python 3.5.2
Question: I am new to Python. I am trying to import izip_longest from itertools. But I
am not able to find the import "itertools" in the preferences in Python
interpreter. I am using Python 3.5.2. It gives me the below error-
from itertools import izip_longest
ImportError: cannot import name 'izip_longest'
Please let me know what is the right course of action. I have tried Python 2.7
too and ended up with same problem. Do I need to use lower version Python.
Answer: `izip_longest` was _renamed_ to
[`zip_longest`](https://docs.python.org/3/library/itertools.html#itertools.zip_longest)
in Python 3 (note, no `i` at the start), import that instead:
from itertools import zip_longest
and use that name in your code.
If you need to write code that works both on Python 2 and 3, catch the
`ImportError` to try the other name, then rename:
try:
# Python 3
from itertools import zip_longest
except ImportError:
# Python 2
from itertools import izip_longest as zip_longest
# use the name zip_longest
|
Python Pandas self join for merge cartesian product to produce all combinations and sum
Question: I am brand new to Python; it seems to have a lot of flexibility and to be
faster than traditional RDBMS systems.
Working on a very simple process to create random fantasy teams. I come from
an RDBMS background (Oracle SQL) and that does not seem to be optimal for this
data processing.
I made a dataframe using pandas read from csv file and now have a simple
dataframe with two columns -- Player, Salary:
 Name Salary
0 Jason Day 11700
1 Dustin Johnson 11600
2 Rory McIlroy 11400
3 Jordan Spieth 11100
4 Henrik Stenson 10500
5 Phil Mickelson 10200
6 Justin Rose 9800
7 Adam Scott 9600
8 Sergio Garcia 9400
9 Rickie Fowler 9200
What I am trying to do via python (pandas) is produce all combinations of 6
players which salary is between a certain amount 45000 -- 50000.
In looking up python options, I found the itertools combination interesting,
but it would result a massive list of combinations without filtering the sum
of salary.
In traditional SQL, I would do a massive merge cartesian join w/ SUM, but then
I get the players in different spots..
Such as A, B, C then, C, B, A..
My traditional SQL which doesn't work well enough is something like this:
 SELECT distinct
ONE.name AS "1",
TWO.name AS "2",
THREE.name AS "3",
FOUR.name AS "4",
FIVE.name AS "5",
SIX.name AS "6",
sum(one.salary + two.salary + three.salary + four.salary + five.salary + six.salary) as salary
FROM
nl.pgachamp2 ONE,nl.pgachamp2 TWO,nl.pgachamp2 THREE, nl.pgachamp2 FOUR,nl.pgachamp2 FIVE,nl.pgachamp2 SIX
where ONE.name != TWO.name
and ONE.name != THREE.name
and one.name != four.name
and one.name != five.name
and TWO.name != THREE.name
and TWO.name != four.name
and two.name != five.name
and TWO.name != six.name
and THREE.name != four.name
and THREE.name != five.name
and three.name != six.name
and five.name != six.name
and four.name != six.name
and four.name != five.name
and one.name != six.name
group by ONE.name, TWO.name, THREE.name, FOUR.name, FIVE.name, SIX.name
Is there a way to do this in Pandas/Python?
Any documentation that can be pointed to would be great!
Answer: I ran this for combinations of 6 and found no teams that satisfied. I used 5
instead.
This should get you there:
from itertools import combinations
import pandas as pd
s = df.set_index('Name').squeeze()
combos = pd.DataFrame([c for c in combinations(s.index, 5)])
combo_salary = combos.apply(lambda x: s.ix[x].sum(), axis=1)
combos[(combo_salary >= 45000) & (combo_salary <= 50000)]
[![enter image description
here](http://i.stack.imgur.com/yzx6k.png)](http://i.stack.imgur.com/yzx6k.png)
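If you'd rather skip pandas for the filtering step, the same idea also works
with plain `itertools` -- a sketch, assuming the same `df` with its Name and
Salary columns:

    from itertools import combinations

    salaries = dict(zip(df['Name'], df['Salary']))
    teams = [team for team in combinations(salaries, 5)
             if 45000 <= sum(salaries[name] for name in team) <= 50000]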
|
Parse GenCAD file to python lists
Question: I am new to python. In my first little project I want to **parse** `GenCAD`
output file and assign the `$PARTS$` content to a python list of lists data
structure for further procesing.
The file to import:
$HEADER$
BOARD_TYPE PCB_DESIGN
UNITS MM
$END HEADER$
$PARTS$
CONN1 CONN003081 25.00 22.70 TOP 3
1 25.00 20.70 SIGNALA
2 25.00 21.70 SIGNALB
3 25.00 22.70 SIGNALC
CONN2 CONN003081 31.50 45.00 TOP 3
1 31.50 43.00 F-
2 31.50 44.00 S-
3 31.50 45.00 (Net0)
R1 RESI100161 29.89 46.50 TOP 2
2 29.89 47.00 F+
1 29.89 46.00 S+
$END PARTS$
...
I want something like that:
    print(parts[0])
    ['CONN1', 'CONN003081', '25.00', '22.70', 'TOP', '3', ['1', '25.00', '20.70', 'SIGNALA'], ['2', '25.00', '21.70', 'SIGNALB'], ['3', '25.00', '22.70', 'SIGNALC']]

    print(parts[1])
    ['CONN2', 'CONN003081', '31.50', '45.00', 'TOP', '3', ['1', '31.50', '43.00', 'F-'], ['2', '31.50', '44.00', 'S-'], ['3', '31.50', '45.00', '(Net0)']]
Answer: pyparsing is just right for this kind of task. Here is a pyparsing parser for
your `$PARTS$` section - see the embedded comments:
from pyparsing import *
# define little expressions for the pieces in your input text
real = pyparsing_common.real
integer = pyparsing_common.integer
word = Word(alphas, alphanums+'_+-') | QuotedString('(', endQuoteChar=')', unquoteResults=False)
# combine the small expressions into more complex ones - use Group to retain structure
part_head = Group(word + word + real + real + word + integer)
part_detail = Group(integer + real + real + word)
# define an overall part defn expression, and add results names to access the substructures
part_defn = Group(part_head('head') + OneOrMore(part_detail)('details'))
# finally, define an expression for the input data section
parts_list = '$PARTS$' + Group(OneOrMore(part_defn)) + '$END PARTS'
And here is how you put the parser to use, and access the returned results:
# use searchString to find the $PARTS$ section, and parse its contents
# (searchString returns a list of matches, we just want the first one)
parts = parts_list.searchString(sample)[0]
# output using pretty-printing
parts.pprint()
# output, including results names
print(parts.dump())
# iterate over parts and access substructures
for part in parts[1]:
print(part.head)
Gives:
['$PARTS$',
[[['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3],
[1, 25.0, 20.7, 'SIGNALA'],
[2, 25.0, 21.7, 'SIGNALB'],
[3, 25.0, 22.7, 'SIGNALC']],
[['CONN2', 'CONN003081', 31.5, 45.0, 'TOP', 3],
[1, 31.5, 43.0, 'F-'],
[2, 31.5, 44.0, 'S-'],
[3, 31.5, 45.0, '(Net0)']],
[['R1', 'RESI100161', 29.89, 46.5, 'TOP', 2],
[2, 29.89, 47.0, 'F+'],
[1, 29.89, 46.0, 'S+']]],
'$END PARTS']
['$PARTS$', [[['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3], [1, 25.0, ...
[0]:
$PARTS$
[1]:
[[['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3], [1, 25.0, 20.7, ...
[0]:
[['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3], [1, 25.0, 20.7, ...
- details: [[1, 25.0, 20.7, 'SIGNALA'], [2, 25.0, 21.7, 'SIGNALB'],...
[0]:
[1, 25.0, 20.7, 'SIGNALA']
[1]:
[2, 25.0, 21.7, 'SIGNALB']
[2]:
[3, 25.0, 22.7, 'SIGNALC']
- head: ['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3]
[1]:
[['CONN2', 'CONN003081', 31.5, 45.0, 'TOP', 3], [1, 31.5, 43.0, 'F-'], ...
- details: [[1, 31.5, 43.0, 'F-'], [2, 31.5, 44.0, 'S-'], [3, 31.5, ...
[0]:
[1, 31.5, 43.0, 'F-']
[1]:
[2, 31.5, 44.0, 'S-']
[2]:
[3, 31.5, 45.0, '(Net0)']
- head: ['CONN2', 'CONN003081', 31.5, 45.0, 'TOP', 3]
[2]:
[['R1', 'RESI100161', 29.89, 46.5, 'TOP', 2], [2, 29.89, 47.0, 'F+'], ...
- details: [[2, 29.89, 47.0, 'F+'], [1, 29.89, 46.0, 'S+']]
[0]:
[2, 29.89, 47.0, 'F+']
[1]:
[1, 29.89, 46.0, 'S+']
- head: ['R1', 'RESI100161', 29.89, 46.5, 'TOP', 2]
[2]:
$END PARTS
['CONN1', 'CONN003081', 25.0, 22.7, 'TOP', 3]
['CONN2', 'CONN003081', 31.5, 45.0, 'TOP', 3]
['R1', 'RESI100161', 29.89, 46.5, 'TOP', 2]
|
ImportError: No module named _markerlib when trying to install via pip
Question: Did somebody experience the same problem? I tried to run a solution from SO:
pip install --upgrade distribute
and
pip install --upgrade setuptools
And I got the same result, every time:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-JC9mq_/distribute/setup.py", line 58, in <module>
setuptools.setup(**setup_params)
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "setuptools/command/egg_info.py", line 177, in run
writer = ep.load(installer=installer)
File "pkg_resources.py", line 2241, in load
if require: self.require(env, installer)
File "pkg_resources.py", line 2254, in require
working_set.resolve(self.dist.requires(self.extras),env,installer)))
File "pkg_resources.py", line 2471, in requires
dm = self._dep_map
File "pkg_resources.py", line 2682, in _dep_map
self.__dep_map = self._compute_dependencies()
File "pkg_resources.py", line 2699, in _compute_dependencies
from _markerlib import compile as compile_marker
ImportError: No module named _markerlib
python 2.7, pip 8.1.2
[EDIT] The solution of creating a new env. with `virtualenv myenv
--distribute` worked for the local environment, but when I try to push to the
heroku, it gives me exactly the same error: No module named _markerlib. So,
the problem is not just in the local env.
Answer: I fixed it this way, I think.
pip uninstall setuptools
download https://bitbucket.org/pypa/setuptools/raw/0.7.3/ez_setup.py
Run that, then
pip install %HOME%\Downloads\wheel-0.25.0.tar.gz
pip install Distribute
I did all this so that the following would work:
pip install django-validated-file
|
Finding the format of my timestamp in Python
Question: My time format is screwy, but it seemed workable, as a string with the
following format:
'47:37:00'
I tried to set a variable where:
DT = '%H:%M:%S'
So I could find the difference between two times, but it's given me the
following error:
ValueError: time data '47:37:00' does not match format '%H:%M:%S'
Is it possible there are more elements to my time stamps than I thought? Or
that it's formatted in minutes/seconds/milliseconds? I can't seem to find
documentation that would help me determine my time format so I could set DT
and do arithmetic on it.
Answer: It's because you passed 47 for `%H`, which only accepts hours 00-23, so a
value like '47:37:00' can never match `'%H:%M:%S'`. Here is an example with a
valid timestamp:
import datetime
dt = datetime.datetime.strptime('2016/07/28 12:37:00','%Y/%m/%d %H:%M:%S')
print dt
Output: 2016-07-28 12:37:00
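If '47:37:00' is really an elapsed duration (hours can exceed 23) rather than a
clock time, `strptime` is the wrong tool entirely -- parsing it into a
`timedelta` makes the arithmetic straightforward (a sketch):

    from datetime import timedelta

    def parse_duration(s):
        h, m, sec = map(int, s.split(':'))
        return timedelta(hours=h, minutes=m, seconds=sec)

    diff = parse_duration('47:37:00') - parse_duration('12:00:00')
    print diff  # 1 day, 11:37:00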
|
Python - Speech Recognition time offsets
Question: I am trying to do speech recognition using python. In addition to this, I need
to get the times of beginning and end of each word.
I would rather use a free library that can deal with this. I've heard that
Sphinx is able to do this but I couldn't find any examples (for python
anyway).
I would appreciate any help or suggestions.
Answer: Something like this:
from os import environ, path
from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *
MODELDIR = "../../../model"
DATADIR = "../../../test/data"
config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us/en-us'))
config.set_string('-lm', path.join(MODELDIR, 'en-us/en-us.lm.bin'))
config.set_string('-dict', path.join(MODELDIR, 'en-us/cmudict-en-us.dict'))
config.set_string('-logfn', '/dev/null')
decoder = Decoder(config)
stream = open(path.join(DATADIR, 'goforward.raw'), 'rb')
in_speech_bf = False
decoder.start_utt()
while True:
buf = stream.read(1024)
if buf:
decoder.process_raw(buf, False, False)
if decoder.get_in_speech() != in_speech_bf:
in_speech_bf = decoder.get_in_speech()
if not in_speech_bf:
decoder.end_utt()
print ('Result:', decoder.hyp().hypstr)
print ([(seg.word, seg.prob, seg.start_frame, seg.end_frame) for seg in decoder.seg()])
decoder.start_utt()
else:
break
decoder.end_utt()
More examples
[here](https://github.com/cmusphinx/pocketsphinx/blob/master/swig/python/test/).
|
Python: Can an exception class identify the object that raised it?
Question: When a Python program raises an exception, is there a way the exception
handler can identify the object in which the exception was raised?
If not, I believe I can find out by defining the exception class like this...
class FoobarException(Exception) :
def __init__(self,message,context) :
...
...and using it like this:
raise FoobarException("Something bad happened!", self)
I'd rather not have to pass "self" to every exception, though.
Answer: It quickly gets messy if you want the exception itself to figure out where in
the stack it is. You can do something like this:
import inspect
frameinfo = inspect.getframeinfo(inspect.stack()[1][0])
caller_name = frameinfo[2]
file_name = frameinfo[0]
This, however, will only really work if you are looking for the function or
method where the exception was raised, not if you are looking for the class
that owns it.
You are probably better off doing something like this:
class MyException(Exception):
pass
# ... somewhere inside a class
raise MyException("Something bad happened in {}".format(self.__class__))
That way you don't have to write any handling code for your `Exception`
subclass either.
|
Create a bar graph using datetimes
Question: I am using matplotlib and pyplot to create some graphs from a CSV file. I can
create line graphs no problem, but I am having a lot of trouble creating a bar
graph.
I referred to this post [matplotlib bar chart with
dates](http://stackoverflow.com/questions/5902371/matplotlib-bar-chart-with-
dates) among several others that seemed like they should easily accomplish my
task, but I can't get it to work with my list of datetimes.
Running the exact code from the above post generates the expected graph, but
when I swap our their x and y values for my own from my CSV file:
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
from datetime import datetime
import csv
columns="YEAR,MONTH,DAY,HOUR,PREC,PET,Q,UZTWC,UZFWC,LZTWC,LZFPC,LZFSC,ADIMC,AET"
data_file="FFANA_000.csv"
list_of_datetimes = []
skipped_header = False
with open(data_file, 'rt') as f:
reader = csv.reader(f, delimiter=',', quoting=csv.QUOTE_NONE)
for row in reader:
if skipped_header:
date_string = "%s/%s/%s %s" % (row[0].strip(), row[1].strip(), row[2].strip(), row[3].strip())
dt = datetime.strptime(date_string, "%Y/%m/%d %H")
list_of_datetimes.append(dt)
skipped_header = True
UZTWC = np.genfromtxt(data_file, delimiter=',', names=columns, usecols=("UZTWC"))
x = list_of_datetimes
y = UZTWC
ax = plt.subplot(111)
ax.bar(x, y, width=10)
ax.xaxis_date()
plt.show()
Running this gives the error:
Traceback (most recent call last):
File "graph.py", line 151, in <module>
ax.bar(x, y, width=10)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\__init__.py", line 1812, in inner
return func(ax, *args, **kwargs)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\axes\_axes.py", line 2118, in bar
if h < 0:
TypeError: unorderable types: numpy.ndarray() < int()
When I run the datetime numpy conversion that is necessary for plotting my
line graphs:
list_of_datetimes = matplotlib.dates.date2num(list_of_datetimes)
I get the same error.
Could anyone offer some insight?
excerpt from FFANA_000.csv:
%YEAR,MO,DAY,HR,PREC(MM/DT),ET(MM/DT),Q(CMS), UZTWC(MM),UZFWC(MM),LZTWC(MM),LZFPC(MM),LZFSC(MM),ADIMC(MM), ET(MM/DT)
2012, 5, 1, 0, 0.000, 1.250, 0.003, 2.928, 0.000, 3.335, 4.806, 0.000, 6.669, 1.042
2012, 5, 1, 6, 0.000, 1.250, 0.003, 2.449, 0.000, 3.156, 4.798, 0.000, 6.312, 0.987
2012, 5, 1, 12, 0.000, 1.250, 0.003, 2.048, 0.000, 2.970, 4.789, 0.000, 5.940, 0.929
2012, 5, 1, 18, 0.000, 1.250, 0.003, 1.713, 0.000, 2.782, 4.781, 0.000, 5.564, 0.869
2012, 5, 2, 0, 0.000, 1.250, 0.003, 1.433, 0.000, 2.596, 4.772, 0.000, 5.192, 0.809
2012, 5, 2, 6, 0.000, 1.250, 0.003, 1.199, 0.000, 2.414, 4.764, 0.000, 4.829, 0.750
2012, 5, 2, 12, 0.000, 1.250, 0.003, 1.003, 0.000, 2.239, 4.756, 0.000, 4.478, 0.693
2012, 5, 2, 18, 0.000, 1.250, 0.003, 0.839, 0.000, 2.072, 4.747, 0.000, 4.144, 0.638
2012, 5, 3, 0, 0.000, 1.250, 0.003, 0.702,
Answer: I could not fully reproduce your problem with your data and code. I get
> UZTWC = np.genfromtxt(data_file, delimiter=';', names=columns,
> usecols=("UZTWC")) File
> "C:\Python34-64bit\lib\site-packages\numpy\lib\npyio.py", line 1870,
> in genfromtxt
> output = np.array(data, dtype) ValueError: could not convert string to float: b'UZTWC(MM)'
But try changing `UZTWC = np.genfromtxt(...)` to
UZTWC = np.genfromtxt(data_file, delimiter=',', usecols=(7), skip_header=1)
and you should get a graph. The problem is that your numpy array is made of
strings rather than floats -- most likely because passing `names=columns` means
the file's real header line is no longer consumed as a header, so it has to be
skipped explicitly with `skip_header=1`.
|
Reading JSON file with Python 3
Question: I'm using Python 3.5.2 on Windows 10 x64. The `JSON` file I'm reading is
[this](http://pastebin.com/Yjs6FAfm "this") which is a `JSON` array containing
2 more arrays.
I'm trying to parse this `JSON` file using the `json` module. As described in
the [docs](https://docs.python.org/3/library/json.html "docs") the `JSON` file
must be compliant to `RFC 7159`. I checked my file
[here](https://jsonformatter.curiousconcept.com/ "here") and it tells me it's
perfectly fine with the `RFC 7159` format, but when trying to read it using
this simple python code:
with open(absolute_json_file_path, encoding='utf-8-sig') as json_file:
text = json_file.read()
json_data = json.load(json_file)
print(json_data)
I'm getting this exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 2217, in <module>
globals = debugger.run(setup['file'], None, None)
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd.py", line 1643, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Andres Torti/Git-Repos/MCF/Sur3D.App/shapes-json-checker.py", line 14, in <module>
json_data = json.load(json_file)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Andres Torti\AppData\Local\Programs\Python\Python35-32\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I can read this exact file perfectly fine on Javascript but I can't get Python
to parse it. Is there anything wrong with my file or is any problem with the
Python parser?
Answer: The file itself is fine. The problem is in your code: `json_file.read()`
consumes the whole file, so by the time `json.load(json_file)` runs the file
pointer is already at the end and the parser sees empty input -- hence
`Expecting value: line 1 column 1 (char 0)`. Either parse the text you already
read with `json.loads(text)`, or drop the `read()` call entirely:
import json
with open('filename.txt', 'r') as f:
array = json.load(f)
print (array)
|
What is the equivalent of Serial.available() in pyserial?
Question: When I am trying to read multiple lines of serial data on an Arduino, I use
the following idiom:
String message = "";
while (Serial.available()){
message = message + serial.read()
}
In Arduino C, `Serial.available()` returns the number of bytes available to be
read from the serial buffer (See
[Docs](https://www.arduino.cc/en/Serial/Available)). _What is the equivalent
of`Serial.available()` in python?_
For example, if I need to read multiple lines of serial data I would expect to
ues the following code:
import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=0.050)
...
while ser.available():
print ser.readline()
Answer: The property
[`Serial.in_waiting`](https://pythonhosted.org/pyserial/pyserial_api.html#serial.Serial.in_waiting)
returns the "the number of bytes in the receive buffer".
This seems to be the equivalent of
[`Serial.available()`](https://www.arduino.cc/en/Serial/Available)'s
description: "the number of bytes ... that's already arrived and stored in the
serial receive buffer."
For versions prior to pyserial 3.0, use `.inWaiting()`.
Try:
import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=0.050)
...
while ser.in_waiting: # Or: while ser.inWaiting():
print ser.readline()
|
AttributeError: LinearRegression object has no attribute 'coef_'
Question: I've been attempting to fit this data by a Linear Regression, following a
tutorial on bigdataexaminer. Everything was working fine up until this point.
I imported LinearRegression from sklearn, and printed the number of
coefficients just fine. This was the code before I attempted to grab the
coefficients from the console.
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
boston = load_boston()
bos = pd.DataFrame(boston.data)
bos.columns = boston.feature_names
bos['PRICE'] = boston.target
X = bos.drop('PRICE', axis = 1)
lm = LinearRegression()
After I had all this set up I ran the following command, and it returned the
proper output:
In [68]: print('Number of coefficients:', len(lm.coef_)
Number of coefficients: 13
However, now if I ever try to print this same line again, or use 'lm.coef_',
it tells me coef_ isn't an attribute of LinearRegression, right after I JUST
used it successfully, and I didn't touch any of the code before I tried it
again.
In [70]: print('Number of coefficients:', len(lm.coef_))
Traceback (most recent call last):
File "<ipython-input-70-5ad192630df3>", line 1, in <module>
print('Number of coefficients:', len(lm.coef_))
AttributeError: 'LinearRegression' object has no attribute 'coef_'
Answer: The `coef_` attribute is created when the `fit()` method is called. Before
that, it will be undefined:
>>> import numpy as np
>>> import pandas as pd
>>> from sklearn.datasets import load_boston
>>> from sklearn.linear_model import LinearRegression
>>> boston = load_boston()
>>> lm = LinearRegression()
>>> lm.coef_
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-975676802622> in <module>()
7
8 lm = LinearRegression()
----> 9 lm.coef_
AttributeError: 'LinearRegression' object has no attribute 'coef_'
If we call `fit()`, the coefficients will be defined:
>>> lm.fit(boston.data, boston.target)
>>> lm.coef_
array([ -1.07170557e-01, 4.63952195e-02, 2.08602395e-02,
2.68856140e+00, -1.77957587e+01, 3.80475246e+00,
7.51061703e-04, -1.47575880e+00, 3.05655038e-01,
-1.23293463e-02, -9.53463555e-01, 9.39251272e-03,
-5.25466633e-01])
My guess is that somehow you forgot to call `fit()` when you ran the
problematic line.
|
Python: Including only the last 7 values in each key
Question: I have a dictionary where each key has multiple values. Would it be possible
to include only the last 7 values for each key, and then do basic arithmetic
with it (ex: addition, subtraction, multiplication, division)?
The end objective is to be able to upload date-specific data and be able to
include only the past week, month, or year.
Any nudges in the right direction are very much appreciated.
Answer: Depending on how the incoming data is organized (already sorted vs. random
order), I'd take a look at
[`collections.deque`](https://docs.python.org/3/library/collections.html#collections.deque)
(which can set a maximum length so newly added items seamlessly push out older
items once it reaches the specified limit) for the already sorted case or
rolling your own solution with the [`heapq`
module](https://docs.python.org/3/library/heapq.html) primitives (initially
using `heapq.heappush`, then switching to `heappushpop` when you reach
capacity) for the unordered input case.
Using a
[`collections.defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict)
with either approach as the underlying storage type would simplify code.
Example with bounded `deque`:
from collections import defaultdict, deque
recentdata = defaultdict(lambda: deque(maxlen=7))
for k, v in mydata:
recentdata[k].append(v) # If deque already size 7, first entry added is bumped out
or with `heapq`:
from collections import defaultdict
from heapq import heappush, heappushpop
recentdata = defaultdict(list)
for k, v in mydata:
kdata = recentdata[k]
if len(kdata) < 7:
heappush(kdata, v) # Grow to max size maintaining heap invariant
else:
heappushpop(kdata, v) # Remain at max size, discarding smallest value (old or new)
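Either way, the basic arithmetic afterwards just runs over the stored values,
e.g. (assuming a key named 'price' exists in `recentdata`):

    recent = recentdata['price']
    weekly_avg = sum(recent) / float(len(recent))  # average of the last 7 values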
|
Authenticating a Controller with a Tor subprocess using Stem
Question: I am trying to launch a new tor process (no tor processes currently running on
the system) using a 'custom' config by using stems `launch_tor_with_config`.
I wrote a function that will successfully generate and capture a new hashed
password. I then use that new password in the config, launch tor and try to
authenticate using the same exact passhash and it fails.
Here is the code:
from stem.process import launch_tor_with_config
from stem.control import Controller
from subprocess import Popen, PIPE
import logging
def genTorPassHash(password):
""" Launches a subprocess of tor to generate a hashed <password>"""
logging.info("Generating a hashed password")
torP = Popen(['tor', '--hush', '--hash-password', str(password)], stdout=PIPE, bufsize=1)
try:
with torP.stdout:
for line in iter(torP.stdout.readline, b''):
line = line.strip('\n')
if not "16:" in line:
logging.debug(line)
else:
passhash = line
torP.wait()
logging.info("Got hashed password")
logging.debug(passhash)
return passhash
except Exception as e:
logging.exception(e)
def startTor(config):
""" Starts a tor subprocess using a custom <config>
returns Popen and controller
"""
try:
# start tor
logging.info("Starting tor")
torProcess = launch_tor_with_config(
config=config, # use our custom config
tor_cmd='tor', # start tor normally
completion_percent=100, # blocks until tor is 100%
timeout=90, # wait 90 sec for tor to start
take_ownership=True # subprocess will close with parent
)
# connect a controller
logging.info("Connecting controller")
torControl = Controller.from_port(address="127.0.0.1", port=int(config['ControlPort']))
# auth controller
torControl.authenticate(password=config['HashedControlPassword'])
logging.info("Connected to tor process")
return torProcess, torControl
except Exception as e:
logging.exception(e)
if __name__ == "__main__":
logging.basicConfig(format='[%(asctime)s] %(message)s', datefmt="%H:%M:%S", level=logging.DEBUG)
password = genTorPassHash(raw_input("Type something: "))
config = {
'ClientOnly': '1',
'ControlPort': '9051',
'DataDirectory': '~/.tor/temp',
'Log': ['DEBUG stdout', 'ERR stderr' ],
'HashedControlPassword' : password }
torProcess, torControl = startTor(config)
This is what happens when I run the above code:
s4w3d0ff@FooManChoo ~ $ python stackOverflowTest.py
Type something: foo
[13:33:55] Generating a hashed password
[13:33:55] Got hashed password
[13:33:55] 16:84DE3F93CAFD3B0660BD6EC303A8A7C65B6BD0AC7E9454B3B130881A57
[13:33:55] Starting tor
[13:33:56] System call: tor --version (runtime: 0.01)
[13:33:56] Received from system (tor --version), stdout:
Tor version 0.2.4.27 (git-412e3f7dc9c6c01a).
[13:34:00] Connecting controller
[13:34:00] Sent to tor:
PROTOCOLINFO 1
[13:34:00] Received from tor:
250-PROTOCOLINFO 1
250-AUTH METHODS=HASHEDPASSWORD
250-VERSION Tor="0.2.4.27"
250 OK
[13:34:00] Sent to tor:
AUTHENTICATE "16:84DE3F93CAFD3B0660BD6EC303A8A7C65B6BD0AC7E9454B3B130881A57"
[13:34:00] Received from tor:
515 Authentication failed: Password did not match HashedControlPassword value from configuration
[13:34:00] Error while receiving a control message (SocketClosed): empty socket content
[13:34:00] Sent to tor:
SETEVENTS SIGNAL CONF_CHANGED
[13:34:00] Error while receiving a control message (SocketClosed): empty socket content
[13:34:00] Failed to send message: [Errno 32] Broken pipe
[13:34:00] Error while receiving a control message (SocketClosed): empty socket content
[13:34:00] Received empty socket content.
Traceback (most recent call last):
File "stackOverflowTest.py", line 46, in startTor
torControl.authenticate(password=config['HashedControlPassword'])
File "/usr/local/lib/python2.7/dist-packages/stem/control.py", line 991, in authenticate
stem.connection.authenticate(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/stem/connection.py", line 608, in authenticate
raise auth_exc
AuthenticationFailure: Received empty socket content.
Traceback (most recent call last):
File "stackOverflowTest.py", line 65, in <module>
torProcess, torControl = startTor(config)
TypeError: 'NoneType' object is not iterable
Am I missing something?
Answer: The trouble is that you're authenticating with the password hash rather than
the password itself. Try...
password = raw_input('password: ')
password_hash = genTorPassHash(password)
... then use the password_hash in the config and password for authentication
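Applied to the code in the question, the hash belongs in the config while the
plain password goes to the controller -- a sketch (here `startTor` would need to
take the plain password as an extra argument):

    password = raw_input('Type something: ')
    password_hash = genTorPassHash(password)
    config['HashedControlPassword'] = password_hash   # the hash goes in the config
    # ... after launch_tor_with_config(config=config, ...):
    torControl.authenticate(password=password)        # the plain password goes here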
|
tarfile compressionerror bz2 module is not available
Question: I'm trying to install twisted with

    pip install https://pypi.python.org/packages/18/85/eb7af503356e933061bf1220033c3a85bad0dbc5035dfd9a97f1e900dfcb/Twisted-16.2.0.tar.bz2#md5=8b35a88d5f1a4bfd762a008968fddabf

This is for a `django-channels` project and I'm running into the following
problem:
Exception:
Traceback (most recent call last):
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1655, in bz2open
import bz2
File "/usr/local/lib/python3.5/bz2.py", line 22, in <module>
from _bz2 import BZ2Compressor, BZ2Decompressor
ImportError: No module named '_bz2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/petarp/.virtualenvs/CloneFromGitHub/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/commands/install.py", line 310, in run
wb.build(autobuilding=True)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/wheel.py", line 750, in build
self.requirement_set.prepare_files(self.finder)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 370, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 587, in _prepare_file
session=self.session, hashes=hashes)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 810, in unpack_url
hashes=hashes
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 653, in unpack_http_url
unpack_file(from_path, location, content_type, link)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 605, in unpack_file
untar_file(filename, location)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 538, in untar_file
tar = tarfile.open(filename, mode)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1580, in open
return func(name, filemode, fileobj, **kwargs)
File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1657, in bz2open
raise CompressionError("bz2 module is not available")
tarfile.CompressionError: bz2 module is not available
Clearly I'm missing the `bz2` module, so I tried to install it manually, but
that didn't work out for `python 3.5`, so how can I solve this?
I did what @e4c5 suggested, but for `python3.5.1`; the output is
➜ ~ python3.5
Python 3.5.1 (default, Apr 19 2016, 22:45:11)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import bz2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/bz2.py", line 22, in <module>
from _bz2 import BZ2Compressor, BZ2Decompressor
ImportError: No module named '_bz2'
>>>
[3] + 18945 suspended python3.5
➜ ~ dpkg -S /usr/local/lib/python3.5/bz2.py
dpkg-query: no path found matching pattern /usr/local/lib/python3.5/bz2.py
I am on Ubuntu 14.04 LTS and I have installed python 3.5 from source.
Answer: I don't seem to have any problem with `import bz2` on my python 3.4
installation. So I did
import bz2
print (bz2.__file__)
And found that it's located at `/usr/lib/python3.4/bz2.py` then I did
dpkg -S /usr/lib/python3.4/bz2.py
This reveals:
> libpython3.4-stdlib:amd64: /usr/lib/python3.4/bz2.py
Thus the following command should hopefully fix this:
apt-get install libpython3.4-stdlib
**Update:**
If you have compiled python 3.5 from sources, it's very likely that bz2 support
hasn't been compiled in. Please reinstall by first doing

    ./configure --with-libs='bzip'

Note that this will probably complain about other missing dependencies.
Installing something as complex as this from sources isn't going to be easy.
|
Implement Cost Function of Neural Network (Week #5 Coursera) using Python
Question: Based on the Coursera Course for Machine Learning, I'm trying to implement the
cost function for a neural network in python. There is a
[question](http://stackoverflow.com/questions/21441457/neural-network-cost-
function-in-matlab) similar to this one -- with an accepted answer -- but the
code in that answers is written in octave. Not to be lazy, I have tried to
adapt the relevant concepts of the answer to my case, and as far as I can
tell, I'm implementing the function correctly. The cost I output differs from
the expected cost, however, so I'm doing something wrong.
Here's a small reproducible example:
The following link leads to an `.npz` file which can be loaded (as below) to
obtain relevant data. Rename the file `"arrays.npz"` please, if you use it.
<http://www.filedropper.com/arrays_1>
import numpy as np

if __name__ == "__main__":
with np.load("arrays.npz") as data:
thrLayer = data['thrLayer'] # The final layer post activation; you
# can derive this final layer, if verification needed, using weights below
thetaO = data['thetaO'] # The weight array between layers 1 and 2
thetaT = data['thetaT'] # The weight array between layers 2 and 3
Ynew = data['Ynew'] # The output array with a 1 in position i and 0s elsewhere
#class i is the class that the data described by X[i,:] belongs to
X = data['X'] #Raw data with 1s appended to the first column
Y = data['Y'] #One dimensional column vector; entry i contains the class of entry i
m = len(thrLayer)
k = thrLayer.shape[1]
cost = 0
for i in range(m):
for j in range(k):
cost += -Ynew[i,j]*np.log(thrLayer[i,j]) - (1 - Ynew[i,j])*np.log(1 - thrLayer[i,j])
print(cost)
cost /= m
'''
Regularized Cost Component
'''
lam = 1  # regularization strength (assumed; missing from the snippet as posted)
regCost = 0
for i in range(len(thetaO)):
for j in range(1,len(thetaO[0])):
regCost += thetaO[i,j]**2
for i in range(len(thetaT)):
for j in range(1,len(thetaT[0])):
regCost += thetaT[i,j]**2
regCost *= lam/(2*m)
print(cost)
print(regCost)
In actuality, `cost` should be 0.287629 and `cost + regCost` should be
0.383770.
This is the cost function posted in the question above, for reference:
* * *
[![enter image description
here](http://i.stack.imgur.com/WvX7X.png)](http://i.stack.imgur.com/WvX7X.png)
Answer: The problem is that you are using the **wrong class labels**. When computing
the cost function, you need to use the **ground truth** , or the true class
labels.
I'm not sure what your Ynew array was, but it wasn't the training outputs.
So, I changed your code to use Y for the class labels in the place of Ynew,
and got the correct cost.
import numpy as np
with np.load("arrays.npz") as data:
thrLayer = data['thrLayer'] # The final layer post activation; you
# can derive this final layer, if verification needed, using weights below
thetaO = data['thetaO'] # The weight array between layers 1 and 2
thetaT = data['thetaT'] # The weight array between layers 2 and 3
Ynew = data['Ynew'] # The output array with a 1 in position i and 0s elsewhere
#class i is the class that the data described by X[i,:] belongs to
X = data['X'] #Raw data with 1s appended to the first column
Y = data['Y'] #One dimensional column vector; entry i contains the class of entry i
m = len(thrLayer)
k = thrLayer.shape[1]
cost = 0
Y_arr = np.zeros(Ynew.shape)
for i in xrange(m):
Y_arr[i,int(Y[i,0])-1] = 1
for i in range(m):
for j in range(k):
cost += -Y_arr[i,j]*np.log(thrLayer[i,j]) - (1 - Y_arr[i,j])*np.log(1 - thrLayer[i,j])
cost /= m
'''
Regularized Cost Component
'''
regCost = 0
for i in range(len(thetaO)):
for j in range(1,len(thetaO[0])):
regCost += thetaO[i,j]**2
for i in range(len(thetaT)):
for j in range(1,len(thetaT[0])):
regCost += thetaT[i,j]**2
lam=1
regCost *= lam/(2.*m)
print(cost)
print(cost + regCost)
This outputs:
0.287629165161
0.383769859091
**Edit:** Fixed an integer division error with `regCost *= lam/(2*m)` that was
zeroing out the regCost.
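As a side note, the double loop over `m` and `k` can be collapsed into one
vectorized expression over the same arrays, which is both shorter and much
faster (a sketch):

    cost = -(Y_arr * np.log(thrLayer) + (1 - Y_arr) * np.log(1 - thrLayer)).sum() / m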
|
csv file compression without using existing libraries in Python
Question: I'm trying to compress a .csv file without using any 3rd party or framework
provided compression libraries.
I have tried, what I wish to think, everything. I looked at Huffman, but since
I'm not allowed to use that solution I tried to do my own.
An example:
6NH8,F,A,0,60541567,60541567,78.78,20
6NH8,F,A,0,60541569,60541569,78.78,25
6AH8,F,B,0,60541765,60541765,90.52,1
QMH8,F,B,0,60437395,60437395,950.5,1
I made an algorithm that counts every char, giving me the number of times each
has been used, and then assigns each char a number depending on its frequency.
',' --- 28
'5' --- 18
'6' --- 17
'0' --- 15
'7' --- 10
'8' --- 8
'4' --- 8
'1' --- 8
'9' --- 6
'.' --- 4
'3' --- 4
'\n'--- 4
'H' --- 4
'F' --- 4
'2' --- 3
'A' --- 3
'N' --- 2
'B' --- 2
'M' --- 1
'Q' --- 1
[(',', 0), ('5', 1), ('6', 2), ('0', 3), ('7', 4), ('8', 5),
('4', 6), ('1', 7), ('9', 8), ('.', 9), ('3', 10), ('\n', 11),
('H', 12), ('F', 13), ('2', 14), ('A', 15), ('N', 16), ('B', 17),
('M', 18), ('Q', 19)]
So instead of storing for example ord('H') = 72, I give H the value 12, and so
on.
But when I change all the chars to my values, my generated csv (>40MB) is
still larger than the original (19MB).
I even tried the alternative of dividing the list in two, i.e. making one row
into two rows.
[6NH8,F,A,0,]
[60541567,60541567,78.78,20]
But still larger, even larger than my "huffman" version.
**QUESTION** : Does anybody have suggestions on how to 1. read a .csv file,
2. compress its contents, whether hand-rolled, via a lib, or 3rd party, and
3. generate and write a smaller .csv file?
For step 2 I'm not asking for a fully worked solution, just suggestions on how
to minimize the file, e.g. by writing each value as one list, etc.
Thank you
Answer: Try running your algorithm on the contents of each cell instead of individual
characters and then creating a new CSV file with the compressed cell values.
If the data you have provided is an example of the larger file you may want to
run the compression algorithm on each column separately. For example it may
only help to compress columns 0,4 and 5.
For reading and writing CSV files check out the
[csv](https://docs.python.org/2/library/csv.html "csv") module where you can
do things like:
import csv
with open('eggs.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
print ', '.join(row)
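A minimal sketch of that per-column idea -- dictionary-encoding a single column
so that the most frequent values get the shortest codes (the filenames and
column index are made up):

    import csv
    from collections import Counter

    with open('input.csv', 'rb') as f:
        rows = list(csv.reader(f))

    col = 0  # e.g. the first column: '6NH8', 'QMH8', ...
    freq = Counter(row[col] for row in rows)
    # most frequent value gets code 0, the next code 1, and so on
    codebook = {value: str(i) for i, (value, _) in enumerate(freq.most_common())}
    for row in rows:
        row[col] = codebook[row[col]]

    with open('output.csv', 'wb') as f:
        csv.writer(f).writerows(rows)

You would also need to store the codebook alongside the output in order to
decompress later.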
|
Python Class Self Object
Question: This may seem like a weird question. I have a variable which holds the name
of a function on another class which I have already imported.
Now I have to call that function using self.variable_name(argument).
Is it possible?
name = analyser['name']
analyser_execution = self.name(page_source)
This looks up a function literally called `name`, but I want it to call the
function whose name is stored in that variable...
Answer: It can be done like this:
name = analyser['name']
analyser_execution = getattr(self, name)(page_source)
Or like this:
func = getattr(self, analyser['name'])
analyser_execution = func(page_source)
|
Open .exe file through .bat file in Flask
Question: I am trying to open an .exe file (e.g. Paint) with a html button using Flask,
so I wrote a small .bat file that runs it properly when I run it through
Python, but does not seem to work when I open it through Flask.
The Python is:
@app.route('/assemblies', methods=['GET', 'POST'])
def assemblies():
if request.method == 'POST':
if request.form['submit'] == 'runFile':
#os.startfile("/static/run.bat")
text = "... running ..."
filepath="/static/run.bat"
p = subprocess.Popen(filepath, shell=True, stdout = subprocess.PIPE)
stdout, stderr = p.communicate()
return render_template('assemblies.html', text=text)
        elif request.form['submit'] == 'process':
            [do other stuff]
elif request.method == 'GET':
return render_template('assemblies.html')
(Part of) the html file is:
<div class="container">
<form action="/assemblies" method="post"> ASSEMBLY <br>
<select name="Layer">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
</select><br>
<button type="submit" name="submit" value="process"> Process </button>
<button type="submit" name="submit" value="runFile"> Run </button>
</form>
</div>
And the bat is:
start /d "static\" myFile.exe
The bat file works outside Flask, but I have tried with the exe and bat both
in the 'static' folder and on C:/ and seems to be completely unresponsive (no
console feedback), so I assume I might be missing something important?
Answer: Found a solution after changing:
filepath="/static/run.bat"
p = subprocess.Popen(filepath, shell=True, stdout = subprocess.PIPE)
stdout, stderr = p.communicate()
for:
subprocess.call(["static/myFile.exe"])
and now it works. I am bypassing the bat file, but I'm not sure what the problem
with the original script was, so any insights are still welcome. (One likely
culprit: `"/static/run.bat"` starts with a `/`, so it is treated as an absolute
filesystem path rather than a path relative to the app -- note that the working
`subprocess.call` uses the relative `"static/myFile.exe"`.)
|
Color between the x axis and the graph in PyPlot
Question: I have a graph that was plotted using datetime objects for the x axis and I
want to be able to color beneath the graph itself (the y-values) and the x
axis. I found this post [Matplotlib's fill_between doesnt work with plot_date,
any alternatives?](http://stackoverflow.com/questions/28091290/matplotlibs-
fill-between-doesnt-work-with-plot-date-any-alternatives) describing a similar
problem, but the proposed solutions didn't help me at all.
My code:
import matplotlib.patches as mpatches
import matplotlib.dates
import matplotlib.pyplot as plt
import numpy as np
import csv
from tkinter import *
from datetime import datetime
from tkinter import ttk
columns="YEAR,MONTH,DAY,HOUR,PREC,PET,Q,UZTWC,UZFWC,LZTWC,LZFPC,LZFSC,ADIMC,AET"
data_file="FFANA_000.csv"
UZTWC = np.genfromtxt(data_file,
delimiter=',',
names=columns,
skip_header=1,
usecols=("UZTWC"))
list_of_datetimes = []
skipped_header = False;
with open(data_file, 'rt') as f:
reader = csv.reader(f, delimiter=',', quoting=csv.QUOTE_NONE)
for row in reader:
if skipped_header:
date_string = "%s/%s/%s %s" % (row[0].strip(), row[1].strip(), row[2].strip(), row[3].strip())
dt = datetime.strptime(date_string, "%Y/%m/%d %H")
list_of_datetimes.append(dt)
skipped_header = True
dates = matplotlib.dates.date2num(list_of_datetimes)
fig = plt.figure(1)
#UZTWC
ax1 = fig.add_subplot(111)
plt.plot(dates, UZTWC, '-', color='b', lw=2)
ax1.fill_between(dates, 0, UZTWC)
fig.autofmt_xdate()
plt.title('UZTWC', fontsize=15)
plt.ylabel('MM', fontsize=10)
plt.tick_params(axis='both', which='major', labelsize=10)
plt.tick_params(axis='both', which='minor', labelsize=10)
plt.grid()
plt.show()
This yields:
Traceback (most recent call last):
File "test_color.py", line 36, in <module>
ax1.fill_between(dates, 0, UZTWC)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\__init__.py", line 1812, in inner
return func(ax, *args, **kwargs)
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\axes\_axes.py", line 4608, in fill_between
y2 = ma.masked_invalid(self.convert_yunits(y2))
File "C:\Users\rbanks\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\ma\core.py", line 2300, in masked_invalid
condition = ~(np.isfinite(a))
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
It seems the issue comes with fill_between not being able to handle my dates
being of type 'numpy.ndarray'. Do I need to just convert this to another data
type for this to work?
EDIT: After more testing, I've found that I still get this exact error even
after trying to use list_of_datetimes, and after converting all of my
datetimes to timestamps, so I'm starting to wonder if it is a type issue after
all.
Sample data:
%YEAR,MO,DAY,HR,PREC(MM/DT),ET(MM/DT),Q(CMS), UZTWC(MM),UZFWC(MM),LZTWC(MM),LZFPC(MM),LZFSC(MM),ADIMC(MM), ET(MM/DT)
2012, 5, 1, 0, 0.000, 1.250, 0.003, 2.928, 0.000, 3.335, 4.806, 0.000, 6.669, 1.042
2012, 5, 1, 6, 0.000, 1.250, 0.003, 2.449, 0.000, 3.156, 4.798, 0.000, 6.312, 0.987
2012, 5, 1, 12, 0.000, 1.250, 0.003, 2.048, 0.000, 2.970, 4.789, 0.000, 5.940, 0.929
2012, 5, 1, 18, 0.000, 1.250, 0.003, 1.713, 0.000, 2.782, 4.781, 0.000, 5.564, 0.869
2012, 5, 2, 0, 0.000, 1.250, 0.003, 1.433, 0.000, 2.596, 4.772, 0.000, 5.192, 0.809
2012, 5, 2, 6, 0.000, 1.250, 0.003, 1.199, 0.000, 2.414, 4.764, 0.000, 4.829, 0.750
I am using Python 3.5.0 and matplotlib 1.5.1 on Windows 10 and I gained all of
my dependencies through WinPython
<https://sourceforge.net/projects/winpython/>
Answer: I've yet to determine what went wrong in your original code but I got it
working with [pandas](http://pandas.pydata.org/):
import pandas as pd, matplotlib.pyplot as plt, matplotlib.dates as mdates
df = pd.read_csv('/path/to/yourfile.csv')
df['date'] = df['%YEAR'].astype(str)+'/'+df['MO'].astype(str)+'/'+df['DAY'].astype(str)
df['date'] = pd.to_datetime(df['date'])
dates = [date.to_pydatetime() for date in df['date']]
yyyy_mm_dd_format = mdates.DateFormatter('%Y-%m-%d')
plt.clf()
fig = plt.figure(1)
ax = fig.add_subplot(111)
ax.plot_date(dates,df[' UZTWC(MM)'],'-',color='b',lw=2)
ax.fill_between(dates,0,df[' UZTWC(MM)'])
ax.xaxis.set_major_formatter(yyyy_mm_dd_format)
ax.set_xlim(min(dates), max(dates))
fig.autofmt_xdate()
plt.show()
[![enter image description
here](http://i.stack.imgur.com/CHMAf.png)](http://i.stack.imgur.com/CHMAf.png)
|
Python/Pyomo with glpk Solver - Error
Question: I am trying to run some simle example with Pyomo + glpk Solver (Anaconda2
64bit Spyder):
from pyomo.environ import *
model = ConcreteModel()
model.x_1 = Var(within=NonNegativeReals)
model.x_2 = Var(within=NonNegativeReals)
model.obj = Objective(expr=model.x_1 + 2*model.x_2)
model.con1 = Constraint(expr=3*model.x_1 + 4*model.x_2 >= 1)
model.con2 = Constraint(expr=2*model.x_1 + 5*model.x_2 >= 2)
opt = SolverFactory("glpk")
instance = model.create()
#results = opt.solve(instance)
#results.write()
But i get the following error message:
invalid literal for int() with base 10: 'c'
Traceback (most recent call last):
File "<ipython-input-5-e074641da66d>", line 1, in <module>
runfile('D:/..../Exampe.py', wdir='D:.../exercises/pyomo')
File "C:\...\Continuum\Anaconda21\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "C:\....\Continuum\Anaconda21\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "D:/...pyomo/Exampe.py", line 34, in <module>
results = opt.solve(instance)
File "C:\....\Continuum\Anaconda21\lib\site-packages\pyomo\opt\base\solvers.py", line 580, in solve
result = self._postsolve()
File "C:\...Continuum\Anaconda21\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 267, in _postsolve
results = self.process_output(self._rc)
File "C:\...\Continuum\Anaconda21\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 329, in process_output
self.process_soln_file(results)
File "C:\....\Continuum\Anaconda21\lib\site-packages\pyomo\solvers\plugins\solvers\GLPK.py", line 454, in process_soln_file
raise ValueError(msg)
ValueError: Error parsing solution data file, line 1
I downloaded glpk from <http://winglpk.sourceforge.net/> \--> unzipped it + added
the path to the "path" environment variable.
Hope someone can help me - thank you!
Answer: This is a known problem with GLPK 4.60 (glpsol changed the format of their
output which broke Pyomo 4.3's parser). You can either download an older
release of GLPK, or upgrade Pyomo to 4.4.1 (which contains an updated parser).
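For the upgrade route, something like the following should work (assuming pip
manages the packages in your environment):

    pip install --upgrade pyomo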
|
pandas read_csv raises ValueError
Question: I want to read txt data separated by ',' and '\t', and I use the code below:
`io_df = pd.read_csv('input_output.txt',sep='\D|\t',engine = 'python')`
This triggered error information below:
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-38-5ab0138d93ac> in <module>()
    ----> 1 io_df = pd.read_csv('input_output.txt',sep='\D|\t',engine = 'python')
How to solve this?
Answer: For me works `sep=",|\t"`:
pd.read_csv('test.csv', sep=",|\t", engine = 'python')
Sample:
import pandas as pd
df = pd.read_csv('https://dl.dropboxusercontent.com/u/84444599/test.csv',
sep=",|\t",
engine = 'python')
print (df)
col col1 col2
0 a d t
1 d u l
|
Getting Turtle in Python to recognize click events
Question: I'm trying to make Connect 4 in python, but I can't figure out how to get the
coordinates of the screen click so I can use them. Right now, I want to draw
the board, then have someone click, draw a dot, then go back to the top of the
while loop, wipe the screen and try again. I've tried a couple different
options but none have seemed to work for me.
def play_game():
"""
When this function runs, allows the user to play a game of Connect 4
against another person
"""
turn = 1
is_winner = False
while is_winner == False:
# Clears screen
clear()
# Draws empty board
centers = draw_board()
# Decides whose turn it is, change color appropriately
if turn % 2 == 0:
color = RED
else:
color = BLACK
# Gets coordinates of click
penup()
onscreenclick(goto)
dot(HOLE_SIZE, color)
turn += 1
Answer: As well intentioned as the other answers are, I don't believe either addresses
the actual problem. You've locked out events by introducing an infinite loop
in your code:
is_winner = False
while is_winner == False:
You can't do this with turtle graphics -- you set up the event handlers and
initialization code but turn control over to the main loop event handler. My
following rework show how you might do so:
import turtle
colors = ["red", "black"]
HOLE_SIZE = 2
turn = 0
is_winner = False
def draw_board():
pass
return (0, 0)
def dot(color):
turtle.color(color, color)
turtle.stamp()
def goto(x, y):
global turn, is_winner
# add code to determine if we have a winner
if not is_winner:
# Clears screen
turtle.clear()
turtle.penup()
# Draws empty board
centers = draw_board()
turtle.goto(x, y)
# Decides whose turn it is, change color appropriately
color = colors[turn % 2 == 0]
dot(color)
turn += 1
else:
pass
def start_game():
"""
When this function runs, sets up a new
game of Connect 4 against another person
"""
global turn, is_winner
turn = 1
is_winner = False
turtle.shape("circle")
turtle.shapesize(HOLE_SIZE)
# Gets coordinates of click
turtle.onscreenclick(goto)
start_game()
turtle.mainloop()
Run it and you'll see the desired behavior you described.
|
wxpython wx.Slider: how to fire an event only if a user pauses for some predetermined time
Question: I have a `wx.Slider` widget that is bound to an event handler. As a user moves
the slider, some process will run. However, because the process can take up to
3 seconds to run, I don't want the event to fire continuously as the user
moves the slider. Instead, I want the event to fire only if the user stops
moving the slider for some amount of time (say, 2 seconds). I tried using
`time.time()` with a `while`-loop (see code below), but it didn't work because
the event would still fire repeatedly -- it's just that the firing got
delayed. Any idea/pointer/suggestion would be greatly appreciated.
import wx
import time
class Example(wx.Frame):
def __init__(self, *args, **kw):
super(Example, self).__init__(*args, **kw)
self.InitUI()
def InitUI(self):
pnl = wx.Panel(self)
sld = wx.Slider(pnl, value=200, minValue=150, maxValue=500, pos=(20, 20),
size=(250, -1), style=wx.SL_HORIZONTAL)
self.counter = 0
sld.Bind(wx.EVT_SCROLL, self.OnSliderScroll)
self.txt = wx.StaticText(pnl, label='200', pos=(20, 90))
self.SetSize((290, 200))
self.SetTitle('wx.Slider')
self.Centre()
self.Show(True)
def OnSliderScroll(self, e):
now = time.time()
future = now + 2
while time.time() < future:
pass
#substitute for the actual process.
self.counter += 1
print self.counter
def main():
ex = wx.App()
Example(None)
ex.MainLoop()
if __name__ == '__main__':
main()
Answer: Delaying with `time.sleep` (or a busy loop) will block your GUI. Use
`wx.CallLater` instead; in the sample below, the delayed event only fires once
the timer has gone 2 seconds without being restarted by another scroll event.
def InitUi(self):
# ...
# Add a delay timer, set it up and stop it
self.delay_slider_evt = wx.CallLater(2000, self.delayed_event)
self.delay_slider_evt.Stop()
def OnSliderScroll(self, e):
        # if the delay timer is not running, start it; otherwise restart it
if not self.delay_slider_evt.IsRunning():
self.delay_slider_evt.Start(2000)
else:
self.delay_slider_evt.Restart(2000)
def delayed_event(self):
#substitute for the actual delayed process.
self.counter += 1
print self.counter
|
Setting values for a numpy ndarray using mask
Question: I want to calculate business days between two times, both of which contain
null values, following [this
question](http://stackoverflow.com/questions/37576552/dealing-with-none-
values-when-using-pandas-groupby-and-apply-with-a-function) related to
calculating business days. I've identified that the way I'm setting values
using a mask does not behave as expected.
I'm using python 2.7.11, pandas 0.18.1 and numpy 1.11.0. My slightly modified
code:
import datetime
import numpy as np
import pandas as pd
def business_date_diff(start, end):
mask = pd.notnull(start) & pd.notnull(end)
start = start[mask]
end = end[mask]
start = start.values.astype('datetime64[D]')
end = end.values.astype('datetime64[D]')
result = np.empty(len(mask), dtype=float)
result[mask] = np.busday_count(start, end)
result[~mask] = np.nan
return result
Unfortunately, this doesn't return the expected business day differences
(instead I get a number of floats very near 0). When I check
`np.busday_count(start, end)` the results look correct.
print start[0:5]
print end[0:5]
print np.busday_count(start, end)[0:5]
# ['2016-07-04' '2016-07-04' '2016-07-04' '2016-07-04' '2016-07-04']
# ['2016-07-05' '2016-07-05' '2016-07-05' '2016-07-06' '2016-07-06']
# [1 1 1 2 2]
But when I check the values for `results` the results do not make sense:
...
result = np.empty(len(mask), dtype=float)
result[mask] = np.busday_count(start, end)
result[~mask] = np.nan
print result
# [ nan nan 1.43700866e-210 1.45159738e-210
# 1.45159738e-210 1.45159738e-210 1.45159738e-210 1.46618609e-210
# 1.45159738e-210 1.64491834e-210 1.45159738e-210 1.43700866e-210
# 1.43700866e-210 1.43700866e-210 1.43700866e-210 1.45159738e-210
# 1.43700866e-210 1.43700866e-210 1.43700866e-210 1.43700866e-210
What am I doing wrong?
Answer: Your problem is that, with your version of numpy, you can't use a boolean array
as an index into an array. Just use `np.where(mask==True)` instead of mask and
`np.where(mask==False)` instead of ~mask, and it will work as desired.
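A minimal sketch of that fix, using the same variables as in the question:

    import numpy as np

    result = np.empty(len(mask), dtype=float)
    result[np.where(mask == True)] = np.busday_count(start, end)  # fill positions where mask is True
    result[np.where(mask == False)] = np.nan                      # NaN everywhere else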
|
return inverse string selection in python
Question: I have a python snippet that returns the contents within two strings using
regex.
res = re.search(r'Presets = {(.*)Version = 1,', data, re.DOTALL)
What I now want to do is return the two strings surrounding this inner part.
Keep in mind this is a multiline string. How can I get the bordering strings?
The beginning and end parts in a two-part list would be ideal.
data = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
presets = {
location = "italy",
size = 10,
travelers = False,
},
version = 1,
},
},
stuff = {
this = "great",
},
}"""
import re
res = re.search(r'presets = {(.*)version = 1,', data, re.DOTALL)
print res.groups(1)
In this case I would want to return the beginning string:
data = """{
data = {
friends = {
max = 0 0,
min = 0 0,
},
family = {
cars = {
van = "honda",
car = "ford",
bike = "trek",
},
And the end string:
},
},
stuff = {
this = "great",
},
}"""
Answer: Regex is really not a good tool for parsing these strings, but you can use
`re.split` to achieve what you wanted. It can even combine the 2 tasks into
one:
begin, middle, end = re.split(r'presets = \{(.*)version = 1,', data,
flags=re.DOTALL)
`re.split` splits the string at matching positions; ordinarily the separators
are not included in the resulting list. However, if the regular expression
contains capturing groups, then the matched contents of the groups are also
returned, in place of the delimiter.
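A minimal illustration of that behaviour, unrelated to the data above:

    >>> import re
    >>> re.split(r'(\d+)', 'ab12cd34ef')
    ['ab', '12', 'cd', '34', 'ef']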
|
Python win32api get "stack" of windows
Question: I'm looking for a way to find out what order windows are open on my desktop in
order to tell what parts of what windows are visible to the user.
Say, in order, I open up a maximized chrome window, a maximized notepad++
window, and then a command prompt that only covers a small portion of the
screen. Is there a way using the win32api (or possibly other library) that can
tell me the stack of windows open so I can take the window dimensions and find
out what is visible? I already know how to get which window has focus and the
top-level window, but I'm looking for more info than that.
In the example I mentioned above, I'd return that the full command prompt is
visible, and that in the places it isn't, the notepad++ window is visible. No
part of the chrome window would be visible.
Answer: This does not yet have any logic for deciding whether windows are overlaid, but
it does return a dictionary of existing windows with info on their title,
visibility, minimization state, size and the next window handle.
import win32gui
import win32con
def enum_handler(hwnd, results):
results[hwnd] = {
"title":win32gui.GetWindowText(hwnd),
"visible":win32gui.IsWindowVisible(hwnd),
"minimized":win32gui.IsIconic(hwnd),
"rectangle":win32gui.GetWindowRect(hwnd), #(left, top, right, bottom)
"next":win32gui.GetWindow(hwnd, win32con.GW_HWNDNEXT) # Window handle to below window
}
def get_windows():
enumerated_windows = {}
win32gui.EnumWindows(enum_handler, enumerated_windows)
return enumerated_windows
if __name__ == "__main__":
windows = get_windows()
for window_handle in windows:
        if windows[window_handle]["title"] != "":
print "{}, {}, {}, {}".format(windows[window_handle]["minimized"],
windows[window_handle]["rectangle"],
windows[window_handle]["next"],
windows[window_handle]["title"])
Microsoft MSDN has a good article on z-order info with GetWindow() and GW_HWNDNEXT:
<https://msdn.microsoft.com/en-
us/library/windows/desktop/ms633515(v=vs.85).aspx>
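If you want the actual top-to-bottom stacking order, one possible approach (a
sketch, not tested against every edge case) is to start at the top-most window
and follow the `GW_HWNDNEXT` chain:

    import win32gui
    import win32con

    def get_z_order():
        # walk windows from top-most to bottom-most
        order = []
        hwnd = win32gui.GetTopWindow(win32gui.GetDesktopWindow())
        while hwnd:
            if win32gui.IsWindowVisible(hwnd) and win32gui.GetWindowText(hwnd):
                order.append((hwnd, win32gui.GetWindowText(hwnd)))
            hwnd = win32gui.GetWindow(hwnd, win32con.GW_HWNDNEXT)
        return order

Combined with `GetWindowRect`, the first window in this list whose rectangle
covers a given point is the one visible at that point.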
|
Custom logger with time stamp in python
Question: I have lots of code on a project with print statements and wanted to make a
quick and dirty logger of these print statements and decided to go the custom
route. I managed to put together a logger that prints both to the terminal and
to a file (with the help of this site), but now I want to add a simple time
stamp to each statement and I am running into a weird issue.
Here is my logging class.
class Logger(object):
def __init__(self, stream):
self.terminal = stream
self.log = open("test.log", 'a')
def write(self, message):
self.terminal.flush()
self.terminal.write(self.stamp() + message)
self.log.write(self.stamp() + message)
def stamp(self):
d = datetime.today()
string = d.strftime("[%H:%M:%S] ")
return string
Notice the stamp method that I then attempt to use in the write method.
When running the following two lines I get an unexpected output:
sys.stdout = Logger(sys.stdout)
print("Hello World!")
Output:
[11:10:47] Hello World![11:10:47]
This is what the output looks like in the log file as well; however, I see no
reason why the stamp I am adding gets appended to the end. Can someone help me here?
**UPDATE** See answer below. However, for quicker reference the issue is using
"print()" in general; replace it with sys.stdout.write after assigning the
variable.
Also use "logging" for long-term/larger projects right off the bat.
Answer: It calls the `.write()` method of your stream twice because in cpython `print`
calls the stream `.write()` method twice. The first time is with the object,
and the second time it writes a newline character. For example look at [line
138 in the `pprint` module in cpython
v3.5.2](https://hg.python.org/cpython/file/v3.5.2/Lib/pprint.py#l138)
def pprint(self, object):
self._format(object, self._stream, 0, 0, {}, 0)
self._stream.write("\n") # <- write() called again!
You can test this out:
>>> from my_logger import Logger # my_logger.py has your Logger class
>>> import sys
>>> sys.stdout = Logger(stream=sys.stdout)
>>> sys.stdout.write('hi\n')
[14:05:32] hi
You can replace `print(<blah>)` everywhere in your code using
[`sed`](https://www.gnu.org/software/sed/manual/sed.html).
$ for mymodule in *.py; do
> sed -i -E "s/print\((.+)\)/LOGGER.debug(\1)/" $mymodule
> done
Check out [Python's Logging builtin
module](https://docs.python.org/2/library/logging.html). It has pretty
comprehensive logging including inclusion of a timestamp in all messages
format.
import logging
FORMAT = '%(asctime)-15s %(message)s'
DATEFMT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=FORMAT, datefmt=DATEFMT)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.debug('message: %s', 'message')
This outputs `2016-07-29 11:44:20 message: message` to `stdout`. There are
also handlers to send output to files. There is a [basic
tutorial](https://docs.python.org/2/howto/logging.html#logging-basic-
tutorial), an [advanced
tutorial](https://docs.python.org/2/howto/logging.html#logging-advanced-
tutorial) and a [cookbook of common logging
recipes](https://docs.python.org/2/howto/logging-cookbook.html#logging-
cookbook).
There is an example of using [simultaneous file and console
loggers](https://docs.python.org/2.7/howto/logging-cookbook.html#using-
logging-in-multiple-modules) in the cookbook.
import logging
LOGGER = logging.getLogger(__name__) # get logger named for this module
LOGGER.setLevel(logging.DEBUG) # set logger level to debug
# create formatter
LOG_DATEFMT = '%Y-%m-%d %H:%M:%S'
LOG_FORMAT = ('\n[%(levelname)s/%(name)s:%(lineno)d] %(asctime)s ' +
'(%(processName)s/%(threadName)s)\n> %(message)s')
FORMATTER = logging.Formatter(LOG_FORMAT, datefmt=LOG_DATEFMT)
CH = logging.StreamHandler() # create console handler
CH.setLevel(logging.DEBUG) # set handler level to debug
CH.setFormatter(FORMATTER) # add formatter to ch
LOGGER.addHandler(CH) # add console handler to logger
FH = logging.FileHandler('myapp.log') # create file handler
FH.setLevel(logging.DEBUG) # set handler level to debug
FH.setFormatter(FORMATTER) # add formatter to fh
LOGGER.addHandler(FH) # add file handler to logger
LOGGER.debug('test: %s', 'hi')
This outputs:
[DEBUG/__main__:22] 2016-07-29 12:20:45 (MainProcess/MainThread)
> test: hi
to both console and file `myapp.log` simultaneously.
|
TensorFlow: AttributeError: 'Tensor' object has no attribute 'shape'
Question: I have the following code which uses TensorFlow. After I reshape a list, it
says
> AttributeError: 'Tensor' object has no attribute 'shape'
when I try to print its shape.
# Get the shape of the training data.
print "train_data.shape: " + str(train_data.shape)
train_data = tf.reshape(train_data, [400, 1])
print "train_data.shape: " + str(train_data.shape)
train_size,num_features = train_data.shape
Output:
    train_data.shape: (400,)
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/shehab/Downloads/tools/python/pycharm-edu-2.0.4/helpers/pydev/pydev_import_hook.py", line 21, in do_import
        module = self._system_import(name, *args, **kwargs)
      File "/home/shehab/Dropbox/py-projects/try-tf/logistic_regression.py", line 77, in
        print "train_data.shape: " + str(train_data.shape)
    AttributeError: 'Tensor' object has no attribute 'shape'
Could anyone please tell me what I am missing?
Answer: Indeed, `tf.Tensor` doesn't have a `.shape` property. You should use the
`Tensor.get_shape()` method instead:
train_data = tf.reshape(train_data, [400, 1])
print "train_data.shape: " + str(train_data.get_shape())
Note that in general you might not be able to get the actual shape of the
result of a TensorFlow operation. In some cases, the shape will be a computed
value that depends on running the computation to find its value; and it may
even vary from one run to the next (e.g. the shape of
[`tf.unique()`](https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html#unique)).
In that case, the result of `get_shape()` for some dimensions may be `None`
(or `"?"`).
|
restarted computer and got: ImportError: No module named django.core.management
Question: I have been having some issues with gulp serving my files, so I restarted my
computer. Upon going back to my project and starting the server, I suddenly got
the error: `ImportError: No module named django.core.management`.

I am working locally and in my files I can see the django folder - its path
is: `MAMP/Library/lib/python2.7/site-packages/mysql/connector/django`
The full error looks like this:
Message:
Command failed: /bin/sh -c ./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Details:
killed: false
code: 1
signal: null
cmd: /bin/sh -c ./manage.py runserver
stdout:
stderr: Traceback (most recent call last):
File "./manage.py", line 11, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
My manage.py looks like this:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "tckt.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
running which python gives me this: `/usr/bin/python`
I am not sure if I am running in a virtual environment or not. I am doing the
front-end of this project; the environment was set up and installed by someone
else for me - but running `python -c 'import sys; print sys.real_prefix'
2>/dev/null && INVENV=1 || INVENV=0` (as another post suggested to check if I
was in a virtual environment) returned nothing.
I have looked through some of the other posts and see that some people have
reinstalled, others have modified paths, and others say NOT to edit the manage.py
file - but since I am not really sure if the problem is the path or the
install, I am not sure how to proceed. If you need more info please let me know.
Answer: You're missing python packages, which means your
[VirtualEnv](https://virtualenv.pypa.io/en/stable/) isn't activated.
[VirtualEnv](https://virtualenv.pypa.io/en/stable/) creates a folder named
`env` by default (though the name can be changed), which is where it stores the
specific python installation and all its packages. Search for the `activate`
bash script in your project folder. Once you locate it, you can source it.
source ./env/bin/activate
In the interest of completeness, on Windows it would be a batch file under `Scripts` rather than `bin`.

    env\Scripts\activate.bat
You'll know you're in a virtualenv when your command prompt is prefixed by the
env name, for example `(env) Macbook user$`.
You can now start your django test server.
python manage.py runserver
To deactivate, simply type `deactivate` at any time in your command prompt.
The `(env)` prefix on the prompt should disappear.
|
Using Concurrent.Futures.ProcessPoolExecutor to run simultaneous & independents ABAQUS models
Question: I wish to run a total of **_nAnalysis=25_** Abaqus models, each using X number
of Cores, and I can run concurrently **_nParallelLoops=5_** of these models.
If one of the current 5 analysis finishes, then another analysis should start
until all **_nAnalysis_** are completed.
I implemented the code below based on the solutions posted in **1** and **2**.
However, I am missing something because all **_nAnalysis_** try to start at
once, the code deadlocks, and no analysis ever completes, since many of them
may want to use the same cores as an already started analysis.
1. [Using Python's Multiprocessing module to execute simultaneous and separate SEAWAT/MODFLOW model runs](http://stackoverflow.com/questions/9874042/using-pythons-multiprocessing-module-to-execute-simultaneous-and-separate-seawa)
2. [How to parallelize this nested loop in Python that calls Abaqus](http://stackoverflow.com/questions/37169336/how-to-parallelize-this-nested-loop-in-python-that-calls-abaqus)
def runABQfile(*args):
import subprocess
import os
inpFile,path,jobVars = args
prcStr1 = (path+'/runJob.sh')
process = subprocess.check_call(prcStr1, stdin=None, stdout=None, stderr=None, shell=True, cwd=path)
def safeABQrun(*args):
import os
try:
runABQfile(*args)
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,nAnalysis)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
The only way I have been able to run it so far is by modifying the `errFunction`
to use exactly 5 analyses at a time, as below. However, this approach sometimes
results in one of the analyses taking much longer than the other 4 in every
group (every `ProcessPoolExecutor` call), and therefore the next group of 5
won't start despite the availability of resources (cores). Ultimately this
results in more time to complete all 25 models.
def errFunction(ppos, *args):
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures import as_completed
from concurrent.futures import wait
# Group 1
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(0,5)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 2
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(5,10)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 3
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(10,15)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 4
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(15,20)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
# Group 5
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
future_to_file = dict((executor.submit(safeABQrun, inpFiles[k], aPath[k], jobVars), k) for k in range(20,25)) # 5Nodes
wait(future_to_file,timeout=None,return_when='ALL_COMPLETED')
I tried using the `as_completed` function but it seems not to work either.
Can you please help me figure out the proper parallelization so I can run all
**_nAnalysis_**, with **_nParallelLoops_** always running concurrently? Your
help is appreciated. I am using Python 2.7.
Best, David P.
* * *
**UPDATE JULY 30/2016** :
I introduced a loop in `safeABQrun` so that it manages the 5 different
"queues". The loop is necessary to avoid the case of an analysis trying to run
on a node while another one is still running. The analyses are pre-configured
to run on one of the requested nodes before any actual analysis starts.
def safeABQrun(*list_args):
import os
inpFiles,paths,jobVars = list_args
nA = len(inpFiles)
for k in range(0,nA):
args = (inpFiles[k],paths[k],jobVars[k])
try:
runABQfile(*args) # Actual Run Function
except Exception as e:
print("Tread Error: %s runABQfile(*%r)" % (e, args))
def errFunction(ppos, *args):
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(safeABQrun, inpF, aPth, jVrs), k) for inpF, aPth, jVrs, k in list_args) # 5Nodes
for f in as_completed(futures):
print("|=== Finish Process Train %d ===|" % futures[f])
if f.exception() is not None:
print('%r generated an exception: %s' % (futures[f], f.exception()))
Answer: It looks OK to me, but I can't run your code as-is. How about trying something
vastly simpler, then _add_ things to it until "a problem" appears? For
example, does the following show the kind of behavior you want? It does on my
machine, but I'm running Python 3.5.2. You say you're running 2.7, but
`concurrent.futures` didn't exist in Python 2 - so if you are using 2.7, you
must be running someone's backport of the library, and perhaps the problem is
in that. Trying the following should help to answer whether that's the case:
from concurrent.futures import ProcessPoolExecutor, wait, as_completed
def worker(i):
from time import sleep
from random import randrange
s = randrange(1, 10)
print("%d started and sleeping for %d" % (i, s))
sleep(s)
if __name__ == "__main__":
nAnalysis = 25
nParallelLoops = 5
with ProcessPoolExecutor(max_workers=nParallelLoops) as executor:
futures = dict((executor.submit(worker, k), k) for k in range(nAnalysis))
for f in as_completed(futures):
print("got %d" % futures[f])
Typical output:
0 started and sleeping for 4
1 started and sleeping for 1
2 started and sleeping for 1
3 started and sleeping for 6
4 started and sleeping for 5
5 started and sleeping for 9
got 1
6 started and sleeping for 5
got 2
7 started and sleeping for 6
got 0
8 started and sleeping for 6
got 4
9 started and sleeping for 8
got 6
10 started and sleeping for 9
got 3
11 started and sleeping for 6
got 7
12 started and sleeping for 9
got 5
...
|
Return all keys along with value in nested dictionary
Question: I am working on getting all text that exists in several `.yaml` files placed
into a new singular YAML file that will contain the English translations that
someone can then translate into Spanish.
Each YAML file has a lot of nested text. I want to print the full 'path', aka
all the keys, along with the value, for each value in the YAML file. Here's an
example input for a `.yaml` file that lives in the
myproject.section.more_information file:
default:
heading: Here’s A Title
learn_more:
title: Title of Thing
url: www.url.com
description: description
opens_new_window: true
and here's the desired output:
myproject.section.more_information.default.heading: Here’s a Title
myproject.section.more_information.default.learn_more.title: Title of Thing
mproject.section.more_information.default.learn_more.url: www.url.com
myproject.section.more_information.default.learn_more.description: description
myproject.section.more_information.default.learn_more.opens_new_window: true
This seems like a good candidate for recursion, so I've looked at examples
such as [this answer](http://stackoverflow.com/questions/36808260/python-
recursive-search-of-dict-with-nested-keys?rq=1)
However, I want to preserve all of the keys that lead to a given value, not
just the last key in a value. I'm currently using PyYAML to read/write YAML.
Any tips on how to save each key as I continue to check if the item is a
dictionary and then return all the keys associated with each value?
Answer: What you're wanting to do is flatten nested dictionaries. This would be a good
place to start: [Flatten nested Python dictionaries, compressing
keys](http://stackoverflow.com/questions/6027558/flatten-nested-python-
dictionaries-compressing-keys)
In fact, I think the code snippet in the top answer would work for you if you
just changed the sep argument to `.`.
edit:
Check this for a working example based on the linked SO answer
<http://ideone.com/Sx625B>
import collections
some_dict = {
'default': {
'heading': 'Here’s A Title',
'learn_more': {
'title': 'Title of Thing',
'url': 'www.url.com',
'description': 'description',
'opens_new_window': 'true'
}
}
}
def flatten(d, parent_key='', sep='_'):
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
if isinstance(v, collections.MutableMapping):
items.extend(flatten(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
results = flatten(some_dict, parent_key='', sep='.')
for item in results:
print(item + ': ' + results[item])
If you want it in order, you'll need an OrderedDict though.
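To tie this back to YAML, a sketch of the full round trip with PyYAML (the file
names and the `myproject.section.more_information` prefix are placeholders for
your own):

    import yaml  # PyYAML, which the question already uses

    with open('more_information.yml') as f:
        nested = yaml.safe_load(f)

    flat = flatten(nested, parent_key='myproject.section.more_information', sep='.')

    with open('translations_en.yml', 'w') as f:
        yaml.safe_dump(flat, f, default_flow_style=False)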
|
Authorizing a python script to access the GData API without the OAuth2 user flow
Question: I'm writing a small python script that will retrieve a list of my Google
Contacts (using the [Google Contacts
API](https://developers.google.com/google-apps/contacts/v3/)) and will
randomly suggest one person for me to contact (good way to automate keeping in
touch with friends!)
This is just a standalone script that I plan to schedule on a cron job. The
problem is that Google seems to require OAuth2 style authentication, where the
user (me) has to approve the access and then the app receives an authorization
token I can then use to query the user's (my) contacts.
Since I'm only accessing my own data, is there a way to "pre-authorize"
myself? Ideally I'd love to be able to retrieve some authorization token and
then I'd run the script and pass that token as an environment variable
AUTH_TOKEN=12345 python my_script.py
That way it doesn't require user input/interaction to authorize it one time.
Answer: The implementation you're describing invokes the full "three-legged" OAuth
handshake, which requires explicit user consent. If you don't need user
consent, you can instead utilize "two-legged" OAuth via a [Google service
account](https://cloud.google.com/compute/docs/access/create-enable-service-
accounts-for-instances), which is tied to an _application_ , rather than a
_user_. Once you've [granted
permission](https://console.developers.google.com/permissions/serviceaccounts)
to your service account to access your contacts, you can use the
[`oauth2client`](https://github.com/google/oauth2client)
[`ServiceAccountCredentials`
class](https://github.com/google/oauth2client/blob/bb2386ea51b330765b7c44461465bdceb0be09b4/oauth2client/service_account.py#L43-L542)
to directly access GData without requiring user consent.
Here's the two-legged authentication example from the [Google service account
documentation](https://developers.google.com/api-client-
library/python/auth/service-accounts):
import json
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
scopes = ['https://www.googleapis.com/auth/sqlservice.admin']
credentials = ServiceAccountCredentials.from_json_keyfile_name(
'service-account.json', scopes)
sqladmin = build('sqladmin', 'v1beta3', credentials=credentials)
response = sqladmin.instances().list(project='examinable-example-123').execute()
print response
|
initialization of multiarray raised unreported exception python
Question: I am a new programmer who is picking up Python. I am currently trying to learn
about importing csv files using numpy. Here is my code:
import numpy as np
x = np.loadtxt("abcd.py", delimiter = True, unpack = True)
print(x)
IDLE returns this:
>> True
>> Traceback (most recent call last):
>> File "C:/Python34/Scripts/a.py", line 1, in <module>
import numpy as np
>> File "C:\Python34\lib\site-packages\numpy\__init__.py", line 180, in <module>
from . import add_newdocs
>> File "C:\Python34\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
>> File "C:\Python34\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
>> File "C:\Python34\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
>> File "C:\Python34\lib\site-packages\numpy\core\__init__.py", line 14, in <module>
from . import multiarray
>> SystemError: initialization of multiarray raised unreported exception
Why do I get the this system error and how can I remedy it?
Answer: As there is an error at the import line, your installation of numpy is broken
in some way. My guess is that you have installed numpy for python2 but are
using python3. You should remove numpy and attempt a complete re-install,
taking care to pick the correct version.
There are a few oddities in the code: You are apparently reading a python
file, `abcd.py`, not a csv file. Typically you want to have your data in a csv
file.
The delimiter is a string, not a boolean, typically `delimiter=","`
([Documentation](http://docs.scipy.org/doc/numpy-
dev/reference/generated/numpy.loadtxt.html))
import numpy as np
x = np.loadtxt("abcd.csv", delimiter = ",", unpack = True)
|
Missing dll files when using pyinstaller
Question: Good day!
I'm using Python 3.5.2 with Qt5, PyQt5 and sip14.8. I'm also using the latest
pyinstaller branch (3.3.dev0+g501ad40).
I'm trying to create an exe file for a basic hello world program.
from PyQt5 import QtWidgets
import sys
class newPingDialog(QtWidgets.QMainWindow):
def __init__(self):
super(newPingDialog, self).__init__()
self.setGeometry(50, 50, 500, 300)
self.setWindowTitle("hello!")
self.show()
app = QtWidgets.QApplication(sys.argv)
GUI = newPingDialog()
sys.exit(app.exec_())
At first, I used to get some errors regarding crt-msi. So I've reinstalled SDK
and c++ runtime and added them to my environment. But now I keep getting
errors about missing dlls (qsvg, Qt5PrintSupport)
6296 WARNING: lib not found: Qt5Svg.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib\site-pac
kages\PyQt5\Qt\plugins\imageformats\qsvg.dll
6584 WARNING: lib not found: Qt5Svg.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib\site-pac
kages\PyQt5\Qt\plugins\iconengines\qsvgicon.dll
6992 WARNING: lib not found: Qt5PrintSupport.dll dependency of C:\users\me\appdata\local\programs\python\python35\lib
\site-packages\PyQt5\Qt\plugins\printsupport\windowsprintersupport.dll
7535 WARNING: lib not found: Qt5PrintSupport.dll dependency of c:\users\me\appdata\local\programs\python\python35\lib
\site-packages\PyQt5\QtPrintSupport.pyd
8245 INFO: Looking for eggs
8245 INFO: Using Python library c:\users\me\appdata\local\programs\python\python35\python35.dll
8246 INFO: Found binding redirects:
I've checked and both dlls exist and have their PATH set. I also tried to
manually add them to my dist folder, but it didn't help.
I'd highly appreciate any advice you might have!
Answer: This may be more like a workaround and Pyinstaller might need fixing.
I found out that `--paths` argument pointing to the directory containing
_Qt5Core.dll_ , _Qt5Gui.dll_ , etc. helped
pyinstaller --paths C:\Python35\Lib\site-packages\PyQt5\Qt\bin hello.py
|
python division result not true and different results
Question: I am trying to solve the fractional knapsack problem.
I have to find items with maximum calories per weight. I will fill my bag up
to defined/limited weight with maximum calories.
Though the algorithm is correct, I can't get the correct result because of
Python division weirdness.
When I try to find items with max calories per weight (python3)
print ((calories_list[i]/weight_list[i])*10)
# calories[i] 500 and weight[i] 30 (they're integers)
166.66666666666669
on the other hand, I opened terminal and typed python3
>>> 500/30
16.666666666666668
#when multiply with 10, it must be 16.666666666666668 not
#166.66666666666669
as you see, it gives different results
Most of all, the important thing is that the real answer is:

    500/30=16.6666666667

I got stuck here two days ago; please help me. Thank you.
Answer: As explained in the [Python
FAQ](https://docs.python.org/3/faq/design.html#why-are-floating-point-
calculations-so-inaccurate):
> The float type in CPython uses a C double for storage. A float object’s
> value is stored in binary floating-point with a fixed precision (typically
> 53 bits) and Python uses C operations, which in turn rely on the hardware
> implementation in the processor, to perform floating-point operations. This
> means that as far as floating-point operations are concerned, Python behaves
> like many popular languages including C and Java.
You could use the [`decimal`](https://docs.python.org/3/library/decimal.html)
module as an alternative:
>>> from decimal import Decimal
>>> Decimal(500)/Decimal(30)
Decimal('16.66666666666666666666666667')
|
'module' object has no attribute 'question'. Class name considered an attribute?
Question: I'm trying to make a quiz for my project and I'm getting this error:
`AttributeError: 'module' object has no attribute 'question'`. I don't
understand why it thinks my class is an attribute.
* questionbf.py is where I made the binary file.
* quizbf.py is where I'm trying to make the quiz scoring right.
I'm lacking experience in Python so anything at all would be helpful. Thank
you.
**questionbf.py**
import pickle
class question:
def __init__(self,a,b,c):
self.q=a
self.an=b
self.o=c
f1=open("Question.DAT","wb")
n=input("Enter no. of Questions ")
for i in range(n):
a=raw_input("Enter Question ")
b=raw_input("Enter Answer ")
c=raw_input("Enter Options ")
s=question(a,b,c)
pickle.dump(s,f1)
f1.close()
**quizbf.py**
import pickle
print '''Welcome to the revision quiz.'''
print
score=0
w=0
c=0
f1=open("Question.DAT","rb")
try:
while True:
s=pickle.load(f1)
print s.q
print s.o
guess=input("Enter Choice ")
if guess==s.a:
print "Correct!!"
print
score=score+1
c=c+1
elif guess=="exit" or guess=="Exit":
break
else:
w=w+1
print "Incorrect. Better luck next time!!"
print
except EOFError:
f1.close()
print s
print w
**Error:**
Traceback (most recent call last):
File "C:\Users\RUBY\Desktop\questionbf.py", line 32, in <module>
s=pickle.load(f1)
File "C:\Python27\lib\pickle.py", line 1378, in load
return Unpickler(file).load()
File "C:\Python27\lib\pickle.py", line 858, in load
dispatch[key](self)
File "C:\Python27\lib\pickle.py", line 1069, in load_inst
klass = self.find_class(module, name)
File "C:\Python27\lib\pickle.py", line 1126, in find_class
klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'question'
Answer: When you pickle an instance of a class, the class name is saved in the pickle
to allow the reading program to import the necessary module and gain access to
the required class. Unfortunately the class whose instances you are pickling is
defined in the `__main__` module, which is the name Python gives to the module
that is being executed.

When your second program reads the pickle, it therefore looks for the
`question` class in the `__main__` module, which this time is the second
program. So `pickle` complains that the given module (`__main__`) does not
contain the required class (a defined class is an attribute of its module just
like a method of a class is an attribute of the class).
The necessary fix is to move the `question` class to a separate module, which
your first program explicitly imports (using something like `from new_module
import question`). Your second program will then know it needs to import
`new_module` in order to access the `question` class, which it will do
automatically (_i.e._ with no need to explicitly import it).
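A minimal sketch of that fix (the module name `question_model` is just an example):

    # question_model.py
    class question:
        def __init__(self, a, b, c):
            self.q = a
            self.an = b
            self.o = c

    # questionbf.py and quizbf.py then both do:
    from question_model import question

With the class living in `question_model`, the pickle records
`question_model.question`, and both programs can unpickle it.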
|
In Python, bool(a.append(3)) is False. Why?
Question: It seems `bool(a.append)` and `bool(a)` are both `True`, so why is
`bool(a.append(3))` `False`?
My question is from the code here:
class MovingAverage(object):
def __init__(self, size):
self.next = lambda v, q=collections.deque((), size): q.append(v) or 1.*sum(q)/len(q)
Answer: In Python, the following things are considered `False`. In other words, if you
call `bool` with them as arguments, you get `False` back:
* `False` itself.
* `0`, in integer or floating point form.
* empty sequence types; strings, lists, sets, dictionaries, and anything that's a subclass of them
* `None`
The last one is the most important to us here. The reason that
`bool(a_list.append(3))` is `False` has to do with how `append` works. The
`append` method updates the existing list. It doesn't return anything in
particular.
In Python, any function which does not explicitly return anything implicitly
returns the `None` value. That means that something like this won't work.
my_list = []
for i in range(10):
if i % 2 == 0:
my_list = my_list.append(i)
That code I just made up will actually throw an exception (`AttributeError`)
the second time the `if` block gets executed because `my_list` gets set to
`None`, and `None` doesn't have an `append` method, because it's not a list.
And just to be clear, _anything_ not on that list gets considered `True`. You
can change this for custom objects by overriding the `__bool__` special method
(it's called `__nonzero__` in Python 2).
So let's just finish up by clarifying why `bool(a.append)` is `True`. If you
leave the parentheses (and argument list) off a method call, the method
doesn't get called. So `bool(a.append)` is just passing the method `a.append`
to `bool` without calling it.
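For example, a minimal sketch of such an override:

    class Bucket(object):
        def __init__(self, items):
            self.items = items
        def __bool__(self):              # hook used by bool() in Python 3
            return len(self.items) > 0
        __nonzero__ = __bool__           # same hook under Python 2

    print(bool(Bucket([])))      # False
    print(bool(Bucket([1, 2])))  # True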
|
How to nest numba jitclass
Question: I'm trying to understand how the @jitclass decorator works with nested
classes. I have written two dummy classes: fifi and toto. fifi has a toto
attribute. Both classes have the @jitclass decorator, but compilation fails.
Here's the code:
fifi.py
from numba import jitclass, float64
from toto import toto
spec = [('a',float64),('b',float64),('c',toto)]
@jitclass(spec)
class fifi(object):
def __init__(self, combis):
self.a = combis
self.b = 2
self.c = toto(combis)
def mySqrt(self,x):
s = x
for i in xrange(self.a):
s = (s + x/s) / 2.0
return s
toto.py:
from numba import jitclass,int32
spec = [('n',int32)]
@jitclass(spec)
class toto(object):
def __init__(self,n):
self.n = 42 + n
def work(self,y):
return y + self.n
The script that launches the code:
from datetime import datetime
from fifi import fifi
from numba import jit
@jit(nopython = True)
def run(n,results):
for i in xrange(n):
q = fifi(200)
results[i+1] = q.mySqrt(i + 1)
if __name__ == '__main__':
n = int(1e6)
results = [0.0] * (n+1)
starttime = datetime.now()
run(n,results)
endtime = datetime.now()
print("Script running time: %s"%str(endtime-starttime))
print("Sqrt of 144 is %f"%results[145])
When I run the script, I get [...]
> TypingError: Untyped global name 'toto' File "fifi.py", line 11
Note that if I remove any reference to 'toto' in 'fifi', the code works fine
and I get a 16x speed-up thanks to numba.
Answer: It is possible to use a jitclass as a member of another jitclass, although the
way of doing this isn't well documented. You need to use a `deferred_type`
instance. This works in Numba 0.27 and possibly earlier. Change `fifi.py` to:
from numba import jitclass, float64, deferred_type
from toto import toto
toto_type = deferred_type()
toto_type.define(toto.class_type.instance_type)
spec = [('a',float64),('b',float64),('c',toto_type)]
@jitclass(spec)
class fifi(object):
def __init__(self, combis):
self.a = combis
self.b = 2
self.c = toto(combis)
def mySqrt(self,x):
s = x
for i in xrange(self.a):
s = (s + x/s) / 2.0
return s
I then get as output:
$ python test.py
Script running time: 0:00:01.991600
Sqrt of 144 is 12.041595
This functionality can be seen in some of the more advanced jitclass examples
of data structures, for example:
* [stack.py](https://github.com/numba/numba/blob/a4237562b78e9c4183173983051e5383dfab901c/examples/stack.py)
* [linkedlist.py](https://github.com/numba/numba/blob/44aca4325d3a0f1ad4b8f8f9ebf8af3572b59321/examples/linkedlist.py)
* [binarytree.py](https://github.com/numba/numba/blob/a4237562b78e9c4183173983051e5383dfab901c/examples/binarytree.py)
|
Python IF ELSE statement not working
Question: Simple question for some. I have my first program I am trying to make using
python. The IF ELSE statement is not working. The output remains "Incorrect"
even if the correct number is inputted by the user. I'm curious if it's that
the random number and the user input are different data types. That said, I
have tried converting both to int, to no avail.
Code below:
#START
from random import randrange
#Display Welcome
print("--------------------")
print("Number guessing game")
print("--------------------")
#Initialize variables
randNum = 0
userNum = 0
#Computer select a random number
randNum = randrange(10)
#Ask user to enter a number
print("The computer has chosen a number between 0 and 9, you have to guess the number!")
print("Please type in a number between 0 and 9, then press enter")
userNum = input('Number: ')
#Check if the user entered the correct number
if userNum == randNum:
print("You have selected the correct number")
else:
print("Incorrect")
Answer: If you are using Python 3, change the following line:
userNum = input('Number: ')
to
userNum = int(input('Number: '))
For an explanation, refer to [this
document](http://stackoverflow.com/documentation/python/193/introduction-to-
python/2642/input#t=201607311149485686857).
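Assuming Python 3, the root cause is a type mismatch: `input()` returns a
string, and a string never compares equal to an integer, so the `else` branch
always runs:

    >>> '5' == 5
    False
    >>> int('5') == 5
    True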
|
Print dataframe after grouping H2o python
Question: **Data:** "<https://github.com/estimate/pandas-exercises/blob/master/baby-
names2.csv>"
In pandas:
df=pd.read_csv("baby-names2.csv")
df_group=df.groupby("year")
print df_group.head()
It prints the dataframe grouped by year.
**How do I do the same thing in H2o Python?**
In H2o:
df=h2o.upload_file("baby-names2.csv")
df_group=df.group_by("year")
print df_group.head() ==> gives Error
Expected output:
<http://i.imgur.com/VTbMX9w.png>
Answer: To get an h2o frame after you've used `group_by()`, use `.get_frame()`, which
returns the result of the group-by. For example, if you wanted to get the
count for each year you could do:
df=h2o.import_file("baby-names2.csv")
df_group=df.group_by("year").count()
df_group.get_frame()
[which prints the year and count columns](http://i.stack.imgur.com/H3iCX.png).
|
How to properly display image in Python?
Question: I found this code:
from PIL import Image, ImageTk
import tkinter as tk
root = tk.Tk()
img = Image.open(r"Sample.jpg")
canvas = tk.Canvas(root, width=500, height=500)
canvas.pack()
tk_img = ImageTk.PhotoImage(img)
canvas.create_image(250, 250, image=tk_img)
root.mainloop()
It displays any picture in 500x500px resolution. I tried to change it and
display it in its original size:
from PIL import Image, ImageTk
import tkinter as tk
root = tk.Tk()
img = Image.open(r"D:/1.jpg")
canvas = tk.Canvas(root, width=img.width, height=img.height)
canvas.pack()
tk_img = ImageTk.PhotoImage(img)
canvas.create_image(img.width/2, img.height/2, image=tk_img)
root.mainloop()
But something went wrong: the 604x604px picture is shown in a 602x602px area,
even though the window size is correct. [Take a
look](http://i.stack.imgur.com/I0YCx.png) [(full
image)](http://i.stack.imgur.com/h2NJj.jpg). What am I doing wrong?
P.S. Sorry for bad English.
Answer: Well, no, your first example still cuts off by a few pixels. All top level
windows will have padding around their absolute borders to prevent other
elements from 'bleeding' into the borders and looking unprofessional.
You are still being given a canvas of 604 pixels by 604 pixels, but the root
window itself is asserting its padding. I could not find any way of removing
this top level padding, and it may very well appear differently in other
operating systems. A work-around could be to request a canvas size that is
slightly larger than the image you would like to display.
Another issue, if you're aiming for precision, would be the line...
canvas.create_image(img.width/2, img.height/2, image=tk_img)
Which could create off-by-one errors if your width or height is an odd number.
You can anchor images to the north-west corner and place them at the co-
ordinates `(0, 0)` like such:
canvas.create_image(0, 0, anchor=tk.NW, image=tk_img)
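That said, part of the offset usually comes from the canvas's own default
border and focus-highlight ring; a possible workaround worth trying (behaviour
can vary by platform) is to zero them out when creating the canvas:

    canvas = tk.Canvas(root, width=img.width, height=img.height,
                       bd=0, highlightthickness=0)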
|
python basic show and return values
Question: I'm using Python to input data into my script, then trying to return it back
on demand to show the results. I tried to write it as simply as possible,
since I'm only practicing and trying to get the hang of Python.
Here's what my script looks like:
#!/usr/python
## imports #####
##################
import os
import sys
## functions
##################
# GET INSERT DATA
def getdata():
clientname = raw_input(" *** Enter Client Name > ")
phone = raw_input(" *** Enter Client Phone > ")
location = raw_input(" *** Enter Client Location > ")
email = raw_input(" *** Enter Client email > ")
website = raw_input(" *** Enter Client Website > ")
return clientname, phone, location, email, website
# VIEW DATA
def showdata():
print "==================="
print ""
print clientname
print ""
print phone
print ""
print location
print ""
print email
print ""
print website
print ""
print "==================="
# CLEAR
def clear():
os.system("clear") #linux
os.system("cls") #windows
# SHOW INSTRUCTIONS
def welcome():
clear()
while True:
choice = raw_input(" Select Option > ")
# INSERT DATA
if choice == "1":
getdata()
# VIEW DATA
elif choice == "2":
showdata()
else:
print "Invalid Selection.. "
print "Terminating... "
#exit()
welcome()
What am I doing wrong? What am I missing?
Answer: You're absolutely misusing globals. Please go back and read a good Python
tutorial, for example from python.org.
Python is a programming language that allows you to define _functions_, i.e.
things that _return_ values. You should definitely use that, instead of
`global`izing your input. I don't know where you've learned that – every
Python resource that I know of will first introduce how to deal properly
with functions and their return values before even _mentioning_ `global`.
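For example, a minimal sketch of `welcome()` rewritten to keep the returned
data in a variable (this assumes `showdata` is changed to accept the five
fields as parameters instead of reading globals):

    def welcome():
        clear()
        client = None
        while True:
            choice = raw_input(" Select Option > ")
            if choice == "1":
                client = getdata()       # keep the returned tuple
            elif choice == "2":
                if client is None:
                    print "No data entered yet."
                else:
                    showdata(*client)    # unpack the tuple into showdata's parameters
            else:
                print "Terminating... "
                break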
|
Add current element in array + next element in array while iterating through array in Python
Question: What's the best way to add the first element in an array to the next element
in the same array, then add the result to the next element of the array, and
so on? For example, I have an array:
s=[50, 1.2658, 1.2345, 1.2405, 1.2282, 1.2158, 100]
I would like the end array to look like the following:
new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 100]
Thus leaving the minimum and maximum elements of the array unchanged.
I started going this route:
arr_length=len(s)
new_s=[50]
for i, item in enumerate(s):
if i == 0:
new_s.append(new_s[i]+s[i+1])
elif 0<i<=(arr_length-2):
new_s.append(new_s[i]+s[i+1])
Currently I get the following list:
new_s=[50, 51.2658, 52.5003, 53.7408, 54.969, 56.1848, 156.1848]
What am I doing wrong that isn't leaving the last item unchanged?
Answer: The best way is to use `numpy.cumsum()` on all of your items except the last
one, then append the last one to the result of `cumsum()`:
>>> import numpy as np
>>> s=[50, 1.2658, 1.2345, 1.2405, 1.2282, 1.2158, 100]
>>>
>>> np.append(np.cumsum(s[:-1]), s[-1])
array([ 50. , 51.2658, 52.5003, 53.7408, 54.969 , 56.1848,
100. ])
Or, with Python 3.x, use `itertools.accumulate()`:
>>> import itertools as it
>>>
>>> list(it.accumulate(s[:-1])) + s[-1:]
[50, 51.2658, 52.500299999999996, 53.74079999999999, 54.968999999999994, 56.184799999999996, 100]
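Since `itertools.accumulate()` is unavailable on Python 2, a plain-loop
equivalent:

    new_s = [s[0]]
    for x in s[1:-1]:
        new_s.append(new_s[-1] + x)   # running total
    new_s.append(s[-1])               # keep the last element unchanged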
|
Serialize a string without changes in Django Rest Framework?
Question: I'm using Python's json.dumps() to convert an array to a string and then store
it in a Django Model. I'm trying to figure out how I can get Django's REST
framework to ignore this field and send it 'as is' without serializing it a
second time.
For example, if the model looks like this (both fields are CharFields):
> name = "E:\"
>
> path_with_ids= "[{"name": "E:\", "id": 525}]"
I want the REST framework to ignore 'path_with_ids' when serializing so the
JSON output will look like this:
> { "name": "E:\", "path_with_ids": [ {"name": "E:\", "id": 525} ] }
and not like this:
> { "name": "E:\", "path_with_ids": "[{\"name\": \"E:\\\\\", \"id\": 525}]" }
I've tried to make another serializer class that spits out the input it gets
'as is' without success:
**Serializers.py:**
class PathWithIds(serializers.CharField):
def to_representation(self, value):
return value.path_with_ids
class FolderSerializer(serializers.ModelSerializer):
field_to_ignore = PathWithIds(source='path_with_ids')
class Meta:
model = Folder
fields = ['id', 'field_to_ignore']
Please help!
Answer: I ended up using a wasteful and sickening method of deserializing the array
before serializing it again with the REST framework:
**Serializers.py:**
import json
class PathWithIds(serializers.CharField):
def to_representation(self, value):
x = json.loads(value)
return x
class FolderSerializer(serializers.ModelSerializer):
array_output = PathWithIds(source='field_to_ignore')
class Meta:
model = Folder
fields = ['id', 'array_output']
**Output in the rest API:**
> { "name": "E:\", "array_output": [ { "name": "E:\", "id": 525 } ] }
|
Script to replace characters in file
Question: I'm having trouble trying to replace characters in a file.
#!/usr/bin/env python
with open("crypto.txt","r") as arquivo:
data = arquivo.read()
for caracter in data:
if "a" in data:
data = data.replace("a","c")
elif "b" in data:
data = data.replace("b","d")
elif "c" in data:
data = data.replace("c","e")
elif "d" in data:
data = data.replace("d","f")
elif "e" in data:
data = data.replace("e","g")
elif "f" in data:
data = data.replace("f","h")
elif "g" in data:
data = data.replace("g","i")
elif "h" in data:
data = data.replace("h","j")
elif "i" in data:
data = data.replace("i","k")
elif "j" in data:
data = data.replace("j","l")
elif "k" in data:
data = data.replace("k","m")
elif "l" in data:
data = data.replace("l","n")
elif "m" in data:
data = data.replace("m","o")
elif "n" in data:
data = data.replace("n","p")
elif "o" in data:
data = data.replace("o","q")
elif "p" in data:
data = data.replace("p","r")
elif "q" in data:
data = data.replace("q","s")
elif "r" in data:
data = data.replace("r","t")
elif "s" in data:
data = data.replace("s","u")
elif "t" in data:
data = data.replace("t","v")
elif "u" in data:
data = data.replace("u","w")
elif "v" in data:
data = data.replace("v","x")
elif "w" in data:
data = data.replace("w","y")
elif "x" in data:
data = data.replace("x","z")
print data
The script reads a txt file called crypto.txt and starts to replace the
characters based on the statements above. The word aloha is written inside the
file. This is the result I get every time I run the script:
clohc
elohe
glohg
ilohi
iloji
klojk
How can I fix it?
Answer: What about Python's [string
translate](https://docs.python.org/3/library/stdtypes.html#str.translate)?
Your loop gives a cascading result because each `replace` pass re-replaces
characters produced by an earlier pass (once an `a` has become `c`, a later
iteration turns that `c` into `e`, and so on). `translate` maps every
character in a single pass, so that cannot happen:

    import string

    with open("crypto.txt","r") as arquivo:
        data = arquivo.read()

    # shift every letter by two places: a->c, b->d, ..., x->z
    out = data.translate(string.maketrans("abcdefghijklmnopqrstuvwx","cdefghijklmnopqrstuvwxyz"))
    print out
It is directly equivalent to Perl's `tr` function, which works as shown below:
the image shows `A` converted to `T`, `C` converted to `G`, `G` converted to
`C` and `T` converted to `A`.
[![](http://i.stack.imgur.com/Btr3b.png)](http://i.stack.imgur.com/Btr3b.png)
> **Don't confuse string translate with string replace**

string replace substitutes whole substrings; string translate maps each
individual character.
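Note that `string.maketrans` was removed in Python 3; the equivalent there is
the `str.maketrans` built-in:

    # Python 3 equivalent of the snippet above
    table = str.maketrans("abcdefghijklmnopqrstuvwx", "cdefghijklmnopqrstuvwxyz")
    print("aloha".translate(table))  # -> cnqjc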
|
MySQL get multiple items from a table as a input for a single field of another table
Question: I have two tables, one for teachers and another for subjects, and I need to
link the subjects to the teachers. That's the easy part, but the problem is
that a single teacher can have multiple subjects. Thus I need a kind of array
for that, so that when I do my queries with Python it returns an array, or
should I say a 'tuple of tuple of tuples'. Can someone help me solve that?
Thanks
Answer: Database architecture is the most important thing to decide. There are two
approaches here: the comma-separated list you proposed, or a mapping table.
For your approach:-
id teacher_name Subject
1 XYZ 1,5,6,7
Query:-
SELECT teacher_name, subject_name
FROM subject s
INNER JOIN teacher t on FIND_IN_SET(s.id,t.subject)
Other is make a mapping table:-
teacher_id subject_id
1 1
1 5
1 7
Query:-
SELECT teacher_name, subject_name
FROM mapping m
INNER JOIN subject s on m.subject_id = s.id
INNER JOIN teacher t on m.teacher_id = t.id
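To get those rows back in Python as the 'tuple of tuples' you describe, a
sketch using the `mysql.connector` driver (the credentials and database name
are placeholders):

    import mysql.connector

    conn = mysql.connector.connect(user='user', password='secret', database='school')
    cur = conn.cursor()
    cur.execute(
        "SELECT t.teacher_name, s.subject_name "
        "FROM mapping m "
        "INNER JOIN subject s ON m.subject_id = s.id "
        "INNER JOIN teacher t ON m.teacher_id = t.id"
    )
    rows = cur.fetchall()   # e.g. (('XYZ', 'Maths'), ('XYZ', 'Physics'), ...)
    conn.close()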
|
How to display an error message in mpl_connect() callback function
Question: My understanding is that, normally, when an error happens, it propagates up
through all the calling functions and is then displayed in the console.
However, some packages do their own error handling; GUI-related packages in
particular often don't show errors at all but just continue execution.
How can we override such behaviour in general? When I write GUI functions, I
would like to see the errors! I found [this
post](http://stackoverflow.com/questions/15246523/handling-exception-in-
python-tkinter) where it's explained how to do it for the case of Tkinter. How
can this be done in Matplotlib?
Example code:
import matplotlib.pyplot as plt
def onclick(event):
print(event.x, event.y)
raise ValueError('SomeError') # this error is thrown but isn't displayed
fig = plt.figure(5)
fig.clf()
try: # if figure was open before, try to disconnect the button
fig.canvas.mpl_disconnect(cid_button)
except:
pass
cid_button = fig.canvas.mpl_connect('button_press_event', onclick)
Answer: Indeed, when the python interpreter encounters an exception that is never
caught, it will print a so-called traceback to stdout before exiting.
However, GUI packages usually catch and swallow all exceptions, in order to
prevent the python interpreter from exiting. You would like to display that
traceback somewhere, but in the case of GUI applications, you will have to
decide where to show it.
working with such traceback, aptly named
[`traceback`](https://docs.python.org/3.5/library/traceback.html). Then, you
will have to catch the exception before the GUI toolkit does it. I do not know
a general way to insert a callback error handler, but you can manually add
error handling to each of your callbacks. The best way to do this is to write
a function decorator that you then apply to your callback.
import traceback, functools
def print_errors_to_stdout(fun):
@functools.wraps(fun)
def wrapper(*args,**kw):
try:
return fun(*args,**kw)
except Exception:
traceback.print_exc()
raise
return wrapper
@print_errors_to_stdout
def onclick(event):
print(event.x, event.y)
raise ValueError('SomeError')
The decorator `print_errors_to_stdout` takes a function and returns a new
function that embeds the original function in a `try ... except` block and in
case of an exception prints the traceback to stdout with the help of
[`traceback.print_exc()`](https://docs.python.org/3.5/library/traceback.html#traceback.print_exc).
(The wrapper itself is decorated with
[`functools.wraps`](https://docs.python.org/3.5/library/functools.html#functools.wraps)
such that the generated wrapper function, among other things, keeps the
docstring of the original function). If you would like to show the traceback
somewhere else
[`traceback.format_exc()`](https://docs.python.org/3.5/library/traceback.html#traceback.format_exc)
would give you a string that you could then show/store somewhere. The decorator
also reraises the exception, such that the GUI toolkit still gets a chance to
take its own actions, typically just swallowing the exception.
|
Difficulty parsing text file Python 2.7
Question: Using Python 2.7, I want to take a file as input, remove some characters from
it, and write that to another file. I'm not entirely succeeding with the below
code:
print 'processing .ujc file for transmit'
infile, outfile = open('app_code.ujc','r'), open('app_code_transmit.ujc','w')
data = infile.read()
data = data.replace("#include <avr/pgmspace.h> const unsigned char uj_code[] PROGMEM = {", "")
data = data.replace("0x", "")
data = data.replace(", ", "")
data = data.replace("};", "")
outfile.write(data)
The input file (example) is:
#include <avr/pgmspace.h>
const unsigned char uj_code[] PROGMEM = {
0x00, 0x03, 0xB1, 0x4B, 0xEC, 0x00, 0x1D, 0x00, 0x1E, 0x00, 0x21, 0x00, 0x02, 0x6A, 0x00, 0x02,
0x6A, 0x00, 0x02, 0xE3, 0x3F, 0x00, 0x1F, 0x00, 0x02, 0x2C, 0x00, 0x01, 0x3B, 0x00, 0x02, 0x36, 0x00, 0x00
};
And this should become (the etc is a continuation of the above and not
actually present):
0003B14BEC001D001E002100026A00(...etc...)02360000
What I get with the above code is:
#include <avr/pgmspace.h>
const unsigned char uj_code[] PROGMEM = {
0003B14BEC001D001E002100026A00(...etc...)
02360000
In other words, I want to remove all characters, empty lines, and the 0x
prefixes, keeping only the actual bytes on a single continuous line, but I'm
tripping a little on the nuances. Any help?
Answer: @MKesper is right. When you read the file, there are \n or \r\n (line
separators) depending on your OS. Looking at the expected output, I feel the
better way would be to extract the data you need rather than delete the
unwanted data. I would take some help from regular expression and here is my
attempt:
import re
print 'processing .ujc file for transmit'
infile, outfile = open('app_code.ujc','r'), open('app_code_transmit.ujc','w')
data = infile.read()
# Expect 0003B14BEC001D001E002100026A00026A0002E33F001F00022C00013B0002360000 to be the output
outfile.write(''.join(re.findall('0x([0-9a-fA-F][0-9a-fA-F])', data)))
Update 1: This is based on the assumption that there are no other 0x sequences
in the file; otherwise the regular expression needs to be tightened.
|
Set default value for cut off date and ballot date using value of date field in the same model in django
Question: I have created a model for entering sitting date of a session along with cut
off date and ballot date. My model is:
from datetime import datetime, timedelta
class Sitting(models.Model):
sit_date = models.DateField(blank=False)
cut_off_date = models.DateField(default=get_cut_off_date)
ballot_date = models.DateField(default=ballot_date)
genre = TreeForeignKey('Genre', null=True, blank=True, db_index=True)
sess_no = models.ForeignKey(Session,
on_delete=models.CASCADE)
def get_cut_off_date(self):
return self.sit_date - timedelta(days=16)
def ballot_date(self):
return self.sit_date - timedelta(days=12)
def __str__(self): # __unicode__ on Python 2
return self.sit_date
I want to set cut off date and ballot date default values from the values of
sitting date of the same model. But my model not work. How to set the default
values of cut off date and ballot date from the input of sitting date?
Answer: Well you have created the function to get cut_off_date so just override the
save method to update the `cut_off_date`
class Sitting(models.Model):
sit_date = models.DateField(blank=False)
cut_off_date = models.DateField(null=True, blank=True)
def get_cut_off_date(self):
....
    def save(self, *args, **kwargs):
        self.cut_off_date = self.get_cut_off_date()
        super(Sitting, self).save(*args, **kwargs)
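For completeness, a fuller sketch that derives both defaults on save, reusing the 16- and 12-day offsets from the question (field list trimmed for brevity):

    from datetime import timedelta
    from django.db import models

    class Sitting(models.Model):
        sit_date = models.DateField()
        cut_off_date = models.DateField(null=True, blank=True)
        ballot_date = models.DateField(null=True, blank=True)

        def save(self, *args, **kwargs):
            # derive both dates from sit_date before persisting
            self.cut_off_date = self.sit_date - timedelta(days=16)
            self.ballot_date = self.sit_date - timedelta(days=12)
            super(Sitting, self).save(*args, **kwargs)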
|
Slicing a graph
Question: I have created a graph in python but I now need to take a section of the graph
and expand this by using a small range of the original data, but I don't know
how to find the row number of the results that form the range or how I can
create a graph using just these results form the file. This is the code I have
for the graph:
import numpy as np
import matplotlib.pyplot as plt
#variable for data to plot
spec_to_plot = "SN2012fr_20121129.42_wifes_BR.dat"
#tells python where to look for the file
spec_directory = '/home/fh1u16/Documents/spectra/'
data = np.loadtxt(spec_directory + spec_to_plot, dtype=np.float)
x = data[:,0]
y = data[:,1]
plt.plot(x, y)
plt.xlabel("Wavelength")
plt.ylabel("Flux")
plt.title(spec_to_plot)
plt.show()
edit: data is between 3.5e+3 and 9.9e+3 in the first column, I need to use
just the data between 5.5e+3 and 6e+3 to plot another graph, but this only
applies to the first column. Hope this makes a bit more sense? Python version
2.7
Answer: If I understand you correctly, you could do it this way:
    my_slice = slice(np.argwhere(x > 5.5e3)[0, 0], np.argwhere(x > 6e3)[0, 0])
x = data[my_slice,0]
y = data[my_slice,1]
`np.argwhere(x > 5.5e3)[0, 0]` is the index of the first occurrence of `x > 5.5e3`,
and likewise for the end of the slice (assuming your data is sorted).
A more general way working even if your data is not sorted:
mask = (x>5.5e3) & (x<6e3)
x = data[mask, 0]
y = data[mask, 1]
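Applied to the plotting code from the question, a minimal sketch might be:

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.loadtxt('SN2012fr_20121129.42_wifes_BR.dat', dtype=np.float)
    # keep only the rows whose wavelength lies in the requested window
    mask = (data[:, 0] > 5.5e3) & (data[:, 0] < 6e3)
    plt.plot(data[mask, 0], data[mask, 1])
    plt.xlabel("Wavelength")
    plt.ylabel("Flux")
    plt.show()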
|
how to convert string into dictionary in python 3.*?
Question: I want to convert the following string into dictionary without using eval()
function in python 3.5 .
d="{'Age': 7, 'Name': 'Manni'}";
Can anybody tell me a better way than using the eval() function? (Actually I
want to know about a function that can directly convert a dictionary to a
string.)
Answer: 1. `literal_eval`, a somewhat safer version of `eval` (will only evaluate literals ie strings, lists etc):
from ast import literal_eval
python_dict = literal_eval("{'a': 1}")
2. `json.loads` but it would require your string to use double quotes:
import json
python_dict = json.loads('{"a": 1}')
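For the reverse direction mentioned in the question (dictionary to string), the standard library covers that too, e.g.:

    import json
    s = json.dumps({'Age': 7, 'Name': 'Manni'})  # '{"Age": 7, "Name": "Manni"}'
    # repr() also works; its output round-trips through literal_eval
    s2 = repr({'Age': 7, 'Name': 'Manni'})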
|
Python pause thread, do manually and reset time
Question: I need to call function every x seconds but with option to call it manually
and in this case reset time. I have sth like this:
import time
import threading
def printer():
print("do it in thread")
def do_sth(event):
while not event.is_set():
printer()
time.sleep(10)
event = threading.Event()
print("t - terminate, m - manual")
t = threading.Thread(target=do_sth, args=(event,))
t.daemon = True
t.start()
a = input()
if a == 't':
event.set()
elif a == 'm':
event.wait()
printer()
event.clear()
UPDATE: I have found something that helped me a lot: [Python - Thread that I
can pause and resume](http://stackoverflow.com/questions/33640283/python-
thread-that-i-can-pause-and-resume) Now my code look like this:
import threading, time, sys
class ThreadClass(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.can_run = threading.Event()
self.thing_done = threading.Event()
self.thing_done.set()
self.can_run.set()
def run(self):
while True:
self.can_run.wait()
try:
self.thing_done.clear()
self.do_in_thread()
finally:
self.thing_done.set()
time.sleep(5)
def pause(self):
self.can_run.clear()
self.thing_done.wait()
def resume(self):
self.can_run.set()
def do_in_thread(self):
print("Thread...1")
time.sleep(2)
print("Thread...2")
time.sleep(2)
print("Thread...3")
def do_in_main():
print("Main...1")
time.sleep(2)
print("Main...2")
time.sleep(2)
print("Main...3")
if __name__ == '__main__':
t = ThreadClass()
t.daemon = True
t.start()
while True:
i = input()
if i == 'm':
t.pause()
do_in_main()
t.resume()
elif i == 't':
sys.exit()
# t.join()
The only problem is that when I terminate, I would like the thread to finish
its job before it exits.
Answer: It may be that _buffered outputs_ are the culprit, and you are
actually getting your expected behaviour.
I changed your code to the following, and it seems to do something (if that's
what you wanted, is up to you):
import time
import threading
def printer():
print("do it in thread")
def do_sth(event):
print("event.is_set:", event.is_set())
while not event.is_set():
printer()
time.sleep(10)
event = threading.Event()
print("t - terminate, m - manual")
t = threading.Thread(target=do_sth, args=(event,))
print("t:",t)
t.daemon = True
t.start()
a = input()
if a == 't':
event.set()
elif a == 'm':
event.wait()
printer()
event.clear()
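As for the follow-up about finishing the current job before exit: since `pause()` in the updated `ThreadClass` waits on `thing_done`, one sketch is to pause before exiting (the function name is hypothetical; it assumes the `ThreadClass` instance from the question):

    import sys

    def terminate_cleanly(t):
        # pause() blocks until the current do_in_thread() run has completed
        t.pause()
        sys.exit()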
|
ElasticSearch AND query in python
Question: I am trying to query elastic search for logs which have one field with some
value and another field with another value. My logs look like this in Kibana:
{
"_index": "logstash-2016.08.01",
"_type": "logstash",
"_id": "6345634653456",
"_score": null,
"_source": {
"@timestamp": "2016-08-01T09:03:50.372Z",
"session_id": "value_1",
"host": "local",
"message": "some message here with error",
"exception": null,
"level": "ERROR",
},
"fields": {
"@timestamp": [
1470042230372
]
}
}
I would like to receive all logs which have the value of "ERROR" in the level
field (inside _source) and the value of value_1 in the session_id field
(also inside _source).
I am managing to query for one of them but not both together:
from elasticsearch import Elasticsearch
host = "localhost"
es = Elasticsearch([{'host': host, 'port': 9200}])
query = 'session_id:"{}"'.format("value_1")
result = es.search(index=INDEX, q=query)
Answer: Since you need to match exact values, I would recommend using filters, not
queries. Filter for your case would look somewhat like this:
    search_filter = {
"filter": {
"and": [
{
"term": {
"level": "ERROR"
}
},
{
"term": {
"session_id": "value_1"
}
}
]
}
}
And you can pass it in as the request body using `es.search(index=INDEX, body=search_filter)`
EDIT: reason to use filters instead of queries: "In filter context, a query
clause answers the question “Does this document match this query clause?” The
answer is a simple Yes or No — no scores are calculated. Filter context is
mostly used for filtering structured data, e.g."
Source: <https://www.elastic.co/guide/en/elasticsearch/reference/2.0/query-
filter-context.html>
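Note that the `and` filter is deprecated as of Elasticsearch 2.0; an equivalent `bool` filter (a sketch with the same terms) would be:

    search_body = {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"level": "ERROR"}},
                    {"term": {"session_id": "value_1"}}
                ]
            }
        }
    }
    result = es.search(index=INDEX, body=search_body)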
|
Optimising the generation of a large number of random numbers using python 3
Question: I am wanting to generate eight random numbers within a range (0 to pi/8), add
them together, take the sine of this sum, and after doing this N times, take
the mean result. After scaling this up I get the correct answer, but it is too
slow for `N > 10^6`, especially when I am averaging over N trials `n_t = 25`
more times! I am currently getting this code to run in around _12 seconds_ for
`N = 10^5`, meaning that it will take _20 minutes_ for `N = 10^7`, which
doesn't seem optimal (it may be, I don't know!).
My code is as follows:
import random
import datetime
from numpy import pi
from numpy import sin
import numpy
t1 = datetime.datetime.now()
def trial(N):
total = []
uniform = numpy.random.uniform
append = total.append
for j in range(N):
sum = 0
for i in range (8):
sum+= uniform(0, pi/8)
append(sin(sum))
return total
N = 1000000
n_t = 25
total_squared = 0
ans = []
for k in range (n_t):
total = trial(N)
f_mean = (numpy.sum(total))/N
ans.append(f_mean*((pi/8)**8)*1000000)
sum_square = 0
for e in ans:
sum_square += e**2
sum = numpy.sum(ans)
mean = sum/n_t
variance = sum_square/n_t - mean**2
s_d = variance**0.5
print (mean, " ± ", s_d)
t2 = datetime.datetime.now()
print ("Execution time: %s" % (t2-t1))
If anyone can help me optimise this it would be much appreciated!
Thank you :)
Answer: Given your requirement of obtaining the result with this method,
`np.sin(np.random.uniform(0,np.pi/8,size=(8,10**6,25)).sum(axis=0)).mean(axis=0)`
gets you your 25 trials pretty quickly... This is fully vectorised (and
concise which is always a bonus!) so I doubt you could do any better...
Explanation:
You generate a big random 3d array of size `(8 x 10**6 x 25)`. `.sum(axis=0)`
will get you the sum over the first dimension (`8`). `np.sin(...)` applies
elementwise. `.mean(axis=0)` will get you the mean over the first remaining
dimension (`10**6`) and leave you with a 1d array of length (`25`)
corresponding to your trials.
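Put together with the scaling and statistics from the question, a full sketch might be (note the array holds 8 × 10**6 × 25 doubles, roughly 1.6 GB, so reduce N or loop over trials if memory is tight):

    import numpy as np

    N, n_t = 10**6, 25
    # sum the 8 uniforms, take sin, average over the N samples per trial
    sums = np.random.uniform(0, np.pi/8, size=(8, N, n_t)).sum(axis=0)
    ans = np.sin(sums).mean(axis=0) * ((np.pi/8)**8) * 10**6
    print(ans.mean(), "±", ans.std())  # ans.std() matches the question's s_d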
|
Windows path with spaces in python
Question: I have a problem with passing windows path to a function in python. Now, if I
hard code the path everything actually works. So, my code is:
from pymatbridge import Matlab
lab = Matlab(executable=r'"c:\Program Files \MATLAB\bin\matlab.exe"')
lab.start()
This works fine as I am using the raw string formatting to the hard-coded
string. Now, the issue is that the string is passed as a variable. So, imagine
I have a variable like:
path="c:\Program Files \MATLAB\bin\matlab.exe"
Now, I am unable to figure out how to get the equivalent raw string from this.
I tried many things like `shlex.quote(path)`, and this makes an issue with the
`\b`. Without conversion to the raw string, the space in `Program Files`
causes a problem, I think.
Answer: In a normal (non-raw) string literal, sequences such as `\b` are interpreted as
escape characters, so the backslashes have to be doubled:
def testpath(path):
print path
testpath(path='c:\\Program Files \\MATLAB\\bin\\matlab.exe')
output is:
c:\Program Files \MATLAB\bin\matlab.exe
If you are facing issues with the space in `Program Files`, you can use the
short name `Progra~1` instead.
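If you control the literal, a raw string sidesteps the escape problem entirely; combined with the quoting already used in the question, a sketch:

    from pymatbridge import Matlab

    path = r'c:\Program Files \MATLAB\bin\matlab.exe'  # raw string: backslashes stay literal
    lab = Matlab(executable='"{}"'.format(path))       # quotes protect the space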
|
Run sqoop in python script
Question: I'm trying to run a sqoop command inside a Python script. I had no problem
doing that through a shell command, but when I try to execute the Python script:
#!/usr/bin/python
sqoopcom="sqoop import --direct --connect abcd --username abc --P --query "queryname" "
exec (sqoopcom)
I get an error, invalid syntax. How do I solve it?
Answer: You need to escape the `"` characters in the `--query` param:
sqoopcom="sqoop import --direct --connect abcd --username abc --P --query \"queryname\" --target-dir /pwd/dir --m 1 --fetch-size 1000 --verbose --fields-terminated-by , --escaped-by \\ --enclosed-by '\"'/dir/part-m-00000"
|
How to parse logs and extract lines containing specific text strings?
Question: I've got several hundred log files that I need to parse searching for text
strings. What I would like to be able to do is run a Python script to open
every file in the current folder, parse it and record the results in a new
file with the original_name_parsed_log_file.txt. I had the script working on a
single file but now I'm having some issues doing all files in the directory.
Below is what I have so far but it's not working atm. Disregard the first
def... I was playing around with changing font colors.
import os
import string
from ctypes import *
title = ' Log Parser '
windll.Kernel32.GetStdHandle.restype = c_ulong
h = windll.Kernel32.GetStdHandle(c_ulong(0xfffffff5))
def display_title_bar():
windll.Kernel32.SetConsoleTextAttribute(h, 14)
print '\n'
print '*' * 75 + '\n'
windll.Kernel32.SetConsoleTextAttribute(h, 13)
print title.center(75, ' ')
windll.Kernel32.SetConsoleTextAttribute(h, 14)
print '\n' + '*' * 75 + '\n'
windll.Kernel32.SetConsoleTextAttribute(h, 11)
def parse_files(search):
for filename in os.listdir(os.getcwd()):
newname=join(filename, '0_Parsed_Log_File.txt')
with open(filename) as read:
read.seek(0)
# Search line for values if found append line with spaces replaced by tabs to new file.
with open(newname, 'ab') as write:
for line in read:
for val in search:
if val in line:
write.write(line.replace(' ', '\t'))
line = line[5:]
read.close()
write.close()
print'\n\n'+'Parsing Complete.'
windll.Kernel32.SetConsoleTextAttribute(h, 15)
display_title_bar()
search = raw_input('Please enter search terms separated by commas: ').split(',')
parse_files(search)
Answer: This line is wrong:
newname=join(filename, '0_Parsed_Log_File.txt')
use:
newname= "".join([filename, '0_Parsed_Log_File.txt'])
`join` is a string method which requires a list of strings to be joined
|
Google Drive API POST requests?
Question: I'm trying to interact with the Google Drive API and while their example is
working, I'd like to learn how to make the POST requests in python instead of
using their pre-written methods. For example, in python how would I make the
post request to insert a file? [Insert a
File](https://developers.google.com/drive/v2/reference/files/insert)
How do I add requests and parameters to the body?
Thanks!
UPDATE 1:
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + 'my auth token'}
datax = {'name': 'upload.xlsx', 'parents[]': ['0BymNvEruZwxmWDNKREF1cWhwczQ']}
r = requests.post('https://www.googleapis.com/upload/drive/v3/files/', headers=headers, data=json.dumps(datax))
response = json.loads(r.text)
fileID = response['id']
headers2 = {'Authorization': 'Bearer ' + 'my auth token'}
r2 = requests.patch('https://www.googleapis.com/upload/drive/v3/files/' + fileID + '?uploadType=media', headers=headers2)
Answer: To insert a file:
1. Create a file in Google drive and get its _Id_ in response
2. Insert a file using _Id_
Here are the POST parameters for both operations:
URL: 'https://www.googleapis.com/drive/v3/files'
headers: 'Authorization: Bearer <Token>'
Content-Type: application/json
body: {
"name": "temp",
"mimeType": "<Mime type of file>"
}
In python you can use "Requests"
import requests
import json
    headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer <Your Oauth token>'}
    data = {'name': 'testing', 'mimeType': 'application/vnd.google-apps.document'}
    url = 'https://www.googleapis.com/drive/v3/files'
    r = requests.post(url, headers=headers, data=json.dumps(data))
r.text
The POST response above will give you an id. To insert content into the file, use a
_PATCH_ request with the following parameters:
url: 'https://www.googleapis.com/upload/drive/v3/files/<ID of file created>?uploadType=media'
headers: 'Authorization: Bearer <Token>'
Content-Type: <Mime type of file created>
body: <Your text input>
I hope you can convert it into Python requests.
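For completeness, a sketch of that PATCH with requests (the token is a placeholder, and `text/plain` assumes a plain-text upload):

    import requests

    file_id = r.json()['id']  # id returned by the POST above
    url = 'https://www.googleapis.com/upload/drive/v3/files/{}?uploadType=media'.format(file_id)
    headers = {'Content-Type': 'text/plain', 'Authorization': 'Bearer <Your Oauth token>'}
    r2 = requests.patch(url, headers=headers, data='Your text input')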
|
Speedup GPU vs CPU for matrix operations
Question: I am wondering how much GPU computing would help me speed up my simulations.
The critical part of my code is matrix multiplication. Basically the code
looks like the following python code with matrices of order 1000 and long for
loops.
import numpy as np
Msize = 1000
simulationLength = 50
a = np.random.rand(Msize,Msize)
b = np.random.rand(Msize,Msize)
for j in range(simulationLength):
result = np.dot(a,b)
Note: My matrices are dense, mostly random and for loops are compiled with
cython.
My naive guess would be that I have two factors:
* More parallel threads (Currently of order 1 thread, GPUs of order 100 threads?) --> Speedup of order 100? [[Source](http://gamedev.stackexchange.com/a/17255) is quite outdated, from 2011]
* Lower processor frequency (Currently 3Ghz, GPUs typically 2 Ghz) --> Neglect
I expect that this viewpoint is too naive, so what am I missing?
Answer: Generally speaking GPUs are much faster than CPU at highly parallel simple
tasks (that is what they are made for) like multiplying big matrices but there
are some problems coming with GPU computation:
* transferring data between normal RAM and graphics RAM takes time
* loading/starting GPU programs takes some time
so while multiplication itself may be 100 (or more) times faster, you might
experience an actually much smaller speedup or even a slowdown
There are more issues with GPUs being "stupid" in comparison to CPUs, such as
massive slowdowns on branching code and having to manage caching by hand,
which can make writing fast programs for GPUs quite challenging.
|
How to call variables from an imported parameter-dependent script?
Question: I've just begun to use python as a scripting language, but I'm having
difficulty understanding how I should call objects from another file. This is
probably just because I'm not too familiar on using attributes and methods.
For example, I created this simple quadratic formula script.
qf.py
#script solves equation of the form a*x^2 + b*x + c = 0
import math
def quadratic_formula(a,b,c):
sol1 = (-b - math.sqrt(b**2 - 4*a*c))/(2*a)
sol2 = (-b + math.sqrt(b**2 - 4*a*c))/(2*a)
return sol1, sol2
So accessing this script in the python shell or from another file is fairly
simple. I can get the script to output as a set if I import the function and
call on it.
>>> import qf
>>> qf.quadratic_formula(1,0,-4)
(-2.0, 2.0)
But I cannot simply access variables from the imported function, e.g. the
first member of the returned set.
>>> print qf.sol1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'sol1'
The same happens if I merge namespaces with the imported file
>>> from qf import *
>>> quadratic_formula(1,0,-4)
(-2.0, 2.0)
>>> print sol1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'sol1' is not defined
Is there a better way to call on these variables from the imported file? I think
the fact that sol1 & sol2 are dependent upon the given parameters (a,b,c)
makes it more difficult to call them.
Answer: I think it is because `sol1` and `sol2` are local variables, defined only
inside the function. What you can do is something like
import qf
sol1,sol2 = qf.quadratic_formula(1,0,-4)
# sol1 = -2.0
# sol2 = 2.0
but this `sol1` and `sol2` are not the same variables in `qf.py`.
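If you want named access to the results without resorting to globals, one sketch is to return a namedtuple from qf.py:

    from collections import namedtuple
    import math

    Solutions = namedtuple('Solutions', ['sol1', 'sol2'])

    def quadratic_formula(a, b, c):
        root = math.sqrt(b**2 - 4*a*c)
        return Solutions((-b - root) / (2*a), (-b + root) / (2*a))

    print quadratic_formula(1, 0, -4).sol1  # -2.0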
|
Calling Class, getting TypeError: unbound method must be called
Question: I have reviewed the error on Stackoverflow, but none of the solutions I've
seen resolve my problem. I'm attempting to create a class for cx_Oracle to put
my database connectivity in a class, and call it during my database instances.
I've created similar classes in C#, but python is especially difficult for
some reason. Any assistance appreciated.
I leveraged this code found here: [cx_Oracle and Exception Handling - Good
practices?](http://stackoverflow.com/questions/7465889/cx-oracle-and-
exception-handling-good-practices)
import sys
import os
import cx_Oracle
class Oracle(object):
__db_server = os.getenv("ORACLE_SERVER")
__db_user = os.getenv("ORACLE_ACCT")
__db_password = os.getenv("ORACLE_PWD")
def connect(self):
""" Connect to the database. """
try:
self.db = cx_Oracle.connect(__db_user+'/'+__db_password+'@'+__db_server)
except cx_Oracle.DatabaseError as e:
error, = e.args
if error.code == 1017:
print('Please check your credentials.')
else:
print('Database connection error: %s'.format(e))
# Very important part!
raise
# If the database connection succeeded create the cursor
# we-re going to use.
self.cursor = db.Cursor()
def disconnect(self):
"""
Disconnect from the database. If this fails, for instance
if the connection instance doesn't exist we don't really care.
"""
try:
self.cursor.close()
self.db.close()
except cx_Oracle.DatabaseError:
pass
def execute(self, sql, bindvars=None, commit=False):
"""
Execute whatever SQL statements are passed to the method;
commit if specified. Do not specify fetchall() in here as
the SQL statement may not be a select.
bindvars is a dictionary of variables you pass to execute.
"""
try:
self.cursor.execute(sql, bindvars)
except cx_Oracle.DatabaseError as e:
error, = e.args
if error.code == 955:
print('Table already exists')
elif error.code == 1031:
print("Insufficient privileges")
print(error.code)
print(error.message)
print(error.context)
# Raise the exception.
raise
# Only commit if it-s necessary.
if commit:
self.db.commit()
def select(self, sql, commit=False):
bindvars=None
result = None
try:
self.cursor.execute(sql, bindvars)
result = self.cursor.fetchall()
except cx_Oracle.DatabaseError as e:
error, = e.args
print "Database Error: failed with error code:%d - %s" % (error.code, error.message)
raise
if commit:
self.db.commit()
return result
def commit(self):
try:
self.db.commit()
except cx_Oracle.DatabaseError as e:
error, = e.args
print "Database Commit failed with error code:%d - %s" % (error.code, error.message)
raise
def rollback(self):
try:
self.db.rollback()
except cx_Oracle.DatabaseError as e:
error, = e.args
print "Database Rollback failed with error code:%d - %s" %(error.code, error.message)
raise
And this is my calling routine
import sys
import os
#import cx_Oracle
from Oracle import Oracle
def main():
oracle = Oracle.connect()
query = """SELECT DISTINCT NAME FROM MyTable"""
data = oracle.select(query)
for row in data:
print row
oracle.disconnect()
### MAIN
if __name__ == '__main__':
main()
On a related note: I can't seem to get Python to find my Oracle.py class,
unless it's in the same directory as the calling function.
Allan
Answer: You have to create an instance of your class to use it:
orcl = Oracle()
orcl.connect()
...
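Note that once an instance exists, `connect` as written will still fail: the class attributes must be reached through `self` (a bare `__db_user` is just an undefined local name inside the method), and cx_Oracle connections expose a lowercase `cursor()`. A sketch of the corrected method:

    def connect(self):
        """ Connect to the database. """
        self.db = cx_Oracle.connect(self.__db_user + '/' +
                                    self.__db_password + '@' +
                                    self.__db_server)
        self.cursor = self.db.cursor()  # cursor(), not Cursor()

As for the related note: Python searches `sys.path` for modules, so either add the directory containing Oracle.py to the PYTHONPATH environment variable or append it to `sys.path` before the import.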
|
Formatting a CSV for a Python dictionary
Question: If I have a CSV file that looks like this:
### Name | Value 1 | Value 2
Foobar | 22558841 | 96655
Barfool | 02233144 | 3301144
How can I make it into a dictionary that looks like this:
dict = {
'Foobar': {
        'Value 1': 22558841,
        'Value 2': 96655
},
'Barfool': {
'Value 1': 02233144,
'Value 2': 3301144
}
}
Answer: If you use `pandas`:
import pandas as pd
pd.read_csv('test.csv', delimiter='|', index_col='Name').T.to_dict()
# {'Barfool': {'Value 1': 2233144, 'Value 2': 3301144},
# 'Foobar': {'Value 1': 22558841, 'Value 2': 96655}}
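If pandas isn't available, a csv.DictReader sketch does the same (assuming the header row is `Name | Value 1 | Value 2`, with the padding spaces stripped by hand):

    import csv

    result = {}
    with open('test.csv') as f:
        reader = csv.DictReader(f, delimiter='|', skipinitialspace=True)
        # strip trailing padding from the header names
        reader.fieldnames = [name.strip() for name in reader.fieldnames]
        for row in reader:
            key = row.pop('Name').strip()
            result[key] = {k: int(v) for k, v in row.items()}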
|
Is it possible to use SQL Server in python without external libs?
Question: I'm developing in an environment in which I'm not allowed to install anything.
It's a monitoring server and I'm making a script to work with logs and etc.
So, I need to connect to a SQL Server with Python 2.7 without any lib like
pyodbc installed. Is it possible to make this? I've found nothing I could use
to connect to that database.
Answer: There are certain things you can do to run SQL from the command line
through Python:
import subprocess
x = subprocess.check_output('sqlcmd -Q "SELECT * FROM db.table"')
print x
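If the server needs explicit connection details, sqlcmd's standard flags can go in the same string (the server name and credentials below are placeholders):

    import subprocess

    # -S server, -U user, -P password, -Q query are standard sqlcmd flags
    cmd = 'sqlcmd -S myserver -U myuser -P mypassword -Q "SELECT * FROM db.table"'
    print subprocess.check_output(cmd)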
|
python multiprocessing pool timeout
Question: I want to use
[multiprocessing.Pool](https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.pool.Pool),
but multiprocessing.Pool can't abort a task after a timeout. I found a
[solution](http://stackoverflow.com/questions/29494001/how-can-i-abort-a-task-
in-a-multiprocessing-pool-after-a-timeout) and modified it a little.
from multiprocessing import util, Pool, TimeoutError
from multiprocessing.dummy import Pool as ThreadPool
import threading
import sys
from functools import partial
import time
def worker(y):
print("worker sleep {} sec, thread: {}".format(y, threading.current_thread()))
start = time.time()
while True:
if time.time() - start >= y:
break
time.sleep(0.5)
# show work progress
print(y)
return y
def collect_my_result(result):
print("Got result {}".format(result))
def abortable_worker(func, *args, **kwargs):
timeout = kwargs.get('timeout', None)
p = ThreadPool(1)
res = p.apply_async(func, args=args)
try:
# Wait timeout seconds for func to complete.
out = res.get(timeout)
except TimeoutError:
print("Aborting due to timeout {}".format(args[1]))
# kill worker itself when get TimeoutError
sys.exit(1)
else:
return out
def empty_func():
pass
if __name__ == "__main__":
TIMEOUT = 4
util.log_to_stderr(util.DEBUG)
pool = Pool(processes=4)
# k - time to job sleep
featureClass = [(k,) for k in range(20, 0, -1)] # list of arguments
for f in featureClass:
# check available worker
pool.apply(empty_func)
# run job with timeout
abortable_func = partial(abortable_worker, worker, timeout=TIMEOUT)
pool.apply_async(abortable_func, args=f, callback=collect_my_result)
time.sleep(TIMEOUT)
pool.terminate()
print("exit")
The main modification: the worker process exits with **sys.exit(1)**. This kills the
worker process and the job thread, but I'm not sure this solution is good. What
potential problems can I get when a process terminates itself while running a job?
Answer: There is no inherent risk in stopping a running job; the OS will take care of
correctly terminating the process.
If your job is writing on files, you might end up with lots of truncated files
on your disk.
Some small issue might also occur if you write on DBs or if you are connected
with some remote process.
Nevertheless, Python standard Pool does not support timeouts and terminating
processes abruptly might lead to weird behaviour within your applications.
[Pebble](https://pypi.python.org/pypi/Pebble) processing Pool does support
timing-out tasks.
from pebble import process, TimeoutError
with process.Pool() as pool:
task = pool.schedule(function, args=[1,2], timeout=5)
try:
result = task.get()
except TimeoutError:
print "Task: %s took more than 5 seconds to complete" % task
|
How to close cmd after opening a file using Python in Windows?
Question: I have written a program using Python to open a particular file (txt) which
it creates during execution. I have made a batch file to access the script
using command line. The batch script is as follows:
@echo off
python F:\program\script.py %*
I have tried these two options for opening the file with Python in script.py.
subprocess.Popen(name, shell=True)
and
os.system('"'+name+'"')
I have further made a keyboard shortcut for the batch script. The problem is I
want the cmd prompt to close after the text file opens in notepad. But I have
to either manually close the cmd prompt or close the notepad file which
automatically closes the cmd prompt.
So my question is how can I close the cmd prompt and keep the notepad file
open?
Answer: To execute a child program in a new process use `Popen`
from subprocess import Popen
Popen( [ "notepad.exe", "arg1", "arg2", "arg3" ] )
|
Python Json Config 'Extended Interpolation'
Question: I am currently using the Python library configparser:
from configparser import ConfigParser, ExtendedInterpolation
I find the ExtendedInterpolation very useful because it avoids the risk of
having to reenter constants in multiple places.
I now have a requirement to use a Json document as the basis of the
configuration as it provides more structure.
import json
from collections import OrderedDict
def get_json_config(file):
"""Load Json into OrderedDict from file"""
with open(file) as json_data:
d = json.load(json_data, object_pairs_hook=OrderedDict)
return d
Does anyone have any suggestions as to the best way to implement configparser
style ExtendedInterpolation?
For example if a node in the Json contains the value ${home_dir}/lumberjack
this would copy root node home_dir and take value 'lumberjack'?
Answer: Try using `string.Template`. I'm not sure whether it exactly meets your need,
and there may be a package that already does this, but below is what I would
do.
{
"home_dir": "/home/joey",
"class_path": "/user/local/bin",
"dir_one": "${home_dir}/dir_one",
"dir_two": "${home_dir}/dir_two",
"sep_path_list": [
"${class_path}/python",
"${class_path}/ruby",
"${class_path}/php"
]
}
python code:
import json
from string import Template
with open("config.json", "r") as config_file:
config_content = config_file.read()
config_template = Template(config_content)
mid_json = json.loads(config_content)
config = config_template.safe_substitute(mid_json)
print config
This can substitute the defined key in json file.
|
How to create a timetracker?
Question: I'm very new to Python. I want to create a script for time tracking with a
graphical interface. Once you hit start, the start time should be stored, and
when you hit stop, the time difference should be displayed.
This is my approach to the topic:
import datetime
from Tkinter import *
root=Tk()
def starttime():
start=datetime.datetime.now()
print start
def stoptime():
stop=datetime.datetime.now()
print stop
delta=stop-start
print delta
startb = Button(root, text="Start", command=starttime)
startb.pack()
stopb = Button(root, text="Stop", command=stoptime)
stopb.pack()
mainloop()
Thanks for any help to this noob ;).
Answer: Since `start` is a local variable, its value is forgotten when the `starttime`
function ends. If you want it to persist you can make it a global variable
def starttime():
global start
start=datetime.datetime.now()
print start
You should probably at some time read up about why there are local and global
variables, and how to avoid using globals too often.
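To show the elapsed time in the window instead of the console, a sketch reusing the question's widgets (a Label added next to the buttons; the label name is my own):

    result_label = Label(root, text='')
    result_label.pack()

    def stoptime():
        delta = datetime.datetime.now() - start
        result_label.config(text='Elapsed: %s' % delta)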
|
change a range of colors to white in python
Question: I use the following code to change specific colors (grays) to white in photos.
But the code is too slow. Any suggestion or alternative is welcome.
import os
import numpy as np
from PIL import Image
for j in range(1,160):
im = Image.open(str(j)+'.jpg')
data = np.array(im)
for i in (range(205,254)):
r1, g1, b1 = i, i, i # Original
r2, g2, b2 = 255, 255, 255 # Replacement
red, green, blue = data[:,:,0], data[:,:,1], data[:,:,2]
mask = (red == r1) & (green == g1) & (blue == b1)
data[:,:,:3][mask] = [r2, g2, b2]
im = Image.fromarray(data)
im.save(os.getcwd()+'\\conv\\'+str(j)+'.jpg')
Answer: This way of processing images is slow because it is single-threaded and
naive. Try splitting the work into multiple equal parts and running them at
the same time for a major speedup (experiment with the chunking: many smaller
chunks versus fewer larger ones, and compare the results).
For a greater improvement there is GPU-supported image manipulation, but that
is quite hard to do in Python.
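Independent of threading, the inner loop over the 49 gray values can be collapsed into one vectorized mask, which is usually a large win on its own. A sketch using the question's arrays (range(205, 254) covers the grays 205-253):

    import numpy as np
    from PIL import Image

    im = Image.open('1.jpg')
    data = np.array(im)
    r, g, b = data[:, :, 0], data[:, :, 1], data[:, :, 2]
    # one mask for the whole gray range instead of 49 equality passes
    mask = (r >= 205) & (r <= 253) & (r == g) & (g == b)
    data[mask] = [255, 255, 255]
    Image.fromarray(data).save('out.jpg')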
|
How can I extract the text outside the <em> tag in BeautifulSoup
Question: Can someone help me extract the text that is after the _From_? I want to
extract the sender name. It is situated right outside the em tag. I'm using
the python BeautifulSoup package.
Here is a link to the webpage: <http://seclists.org/fulldisclosure/2016/Jan/0>
I was able to extract the email title successfully since it was in a tag.
There are no other div's or classes in the html page.
Here is what I've tried
def title_spider(max_pages):
page = 0
while page <= max_pages:
url = 'http://seclists.org/fulldisclosure/2016/Jan/' + str(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
for email_title in soup.find('b'):
title = email_title.string
print(title)
for date_stamp in soup.em:
date = date_stamp
print(date)
page += 1
title_spider(2)
Answer: You want the next sibling and if you want the specific em's From and Date you
can combine with a regex:
import re
def title_spider(max_pages):
for page in range(max_pages + 1):
url = 'http://seclists.org/fulldisclosure/2016/Jan/{}'.format(page)
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, "html.parser")
for email_title in soup.find('b'):
title = email_title.string
print(title)
for em in soup.find_all("em", text=re.compile("From|Date")):
print(em.text, em.next_sibling)
Which gives you:
In [5]: title_spider(2)
Alcatel Lucent Home Device Manager - Management Console Multiple XSS
From : Uğur Cihan KOÇ <u.cihan.koc () gmail com>
Date : Sun, 3 Jan 2016 13:20:53 +0200
Executable installers/self-extractors are vulnerable^WEVIL (case 17): Kaspersky Labs utilities
From : "Stefan Kanthak" <stefan.kanthak () nexgo de>
Date : Sun, 3 Jan 2016 16:12:50 +0100
Possible vulnerability in F5 BIG-IP LTM - Improper input validation of the HTTP version number of the HTTP reqest allows any payload size and conent to pass through
From : Eitan Caspi <eitanc () yahoo com>
Date : Sun, 3 Jan 2016 21:10:27 +0000 (UTC)
|
How can I decode a utf-8 byte array to a string in Python2?
Question: I have an array of bytes representing a utf-8 encoded string. I want to decode
these bytes back into the string in Python 2. I am relying on Python 2 for my
overall program, so I cannot switch to Python 3.
array = [67, 97, 102, **-61, -87**, 32, 70, 108, 111, 114, 97]
-> Caf**é** Flora
Since every character in the string I want is not necessarily represented by
exactly 1 byte in the array, I can not use a solution like:
"".join(map(chr, array))
I tried to create a function that would step through the array, and whenever
it encounters a number not in the range 0-127 (ASCII), create a new 16 bit
int, shift the current bits over 8 to the left, and then add the following
byte using a bitwise OR. Finally it would use unichr() to decode it.
result = []
for i in range(len(byte_array)):
x = byte_array[i]
if x < 0:
b16 = x & 0xFFFF # 16 bit
b16 = b16 << 8
b16 = b16 | byte_array[i+1]
        result.append(unichr(b16))
else:
result.append(chr(x))
return "".join(result)
However, this was unsuccessful.
The following article explains the issue very well, and includes a nodeJS
solution:
<http://ixti.net/development/node.js/2011/10/26/get-utf-8-string-from-array-
of-bytes-in-node-js.html>
Answer: Use the little-used [`array`
module](https://docs.python.org/2/library/array.html) to convert your input to
a bytestring and then `decode` it with the UTF-8 codec:
import array
decoded = array.array('b', your_input).tostring().decode('utf-8')
|