How can I remove text within multiple layers of parentheses in Python?
Question: I have a Python string from which I need to remove parentheses. The standard way
is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the
parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good
luck)`. With the regex I use, it will still have the `And good luck)` part left. I
know I can scan through the entire string, keep a counter of the number of `(`
and `)`, and when the numbers are balanced, index the locations of `(` and `)`
and remove the content in the middle, but is there a better/cleaner way of
doing that? It doesn't need to be regex; whatever works is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
After the replace I want it to be `Hi this is a test sentence`, instead of `Hi
this is a test e) sentence`.
Answer: With the re module (replace the innermost parentheses until there are no more
replacements to do):
import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
With the [regex module](https://pypi.python.org/pypi/regex) that allows
recursion:
import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
Where `(?R)` refers to the whole pattern itself.
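For reference, here is a quick sketch of the first approach run on the example
from the question:
import re
s = 'Hi this is a test ( a b ( c d) e) sentence'
nb_rep = 1
while nb_rep:
    # each pass strips the innermost parenthesised groups
    (s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)  # 'Hi this is a test  sentence' (note the doubled space left behind)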
|
How to configure Django to access remote MySQL db django.contrib.sites.RequestSite module missing
Question: I'm trying to set up a django app that connects to a remote MySQL db. I
currently have Django==1.10 and MySQL-python==1.2.5 installed in my venv. In
settings.py I have added the following to the DATABASES variable:
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'db_name',
'USER': 'db_user',
'PASSWORD': 'db_password',
'HOST': 'db_host',
'PORT': 'db_port',
}
I get an error pointing at the line
from django.contrib.sites.models import RequestSite
when I run `python manage.py migrate`.
I am a complete beginner when it comes to django. Is there some step I am
missing?
Edit: I have also installed mysql-connector-c via `brew install`.
Edit 2: I realized I just need to connect to a db by importing MySQLdb into a
file. Sorry for the misunderstanding.
Answer: The error you're seeing has nothing to do with your database settings
(assuming your real code has the actual database name, username, and password)
or connection. You are not importing the RequestSite from the correct spot.
Change (wherever you have this set) from:
from django.contrib.sites.models import RequestSite
to:
from django.contrib.sites.requests import RequestSite
|
How can I optimize this python code - NO MEMORY?
Question: I wrote this python code:
from itertools import product
combo_pack = product("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!&+*-_.#@", repeat = 8)
myfile = open("lista_combinazioni.txt","w")
for combo in combo_pack:
combo = "".join(combo)
combo = "%s\n" % (combo)
myfile.write(combo)
myfile.close()
How can I optimize it? After running for a long time, it crashes because there
isn't enough memory.
Answer: What's probably happening is your file buffer is filling up and it isn't
flushing to the file, thus using a lot of memory.
Try using this instead:
myfile = open("lista_combinazioni.txt","w",1)
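If buffering is the concern, here is a minimal sketch with an explicit periodic
flush and a `with` block so the file is always closed. Note the deeper problem:
the alphabet has 71 characters, so the full output is 71**8 (about 6.5e14)
lines, i.e. petabytes of data, and the loop can never realistically finish:
from itertools import product
alphabet = "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!&+*-_.#@"
with open("lista_combinazioni.txt", "w") as myfile:
    for i, combo in enumerate(product(alphabet, repeat=8)):
        myfile.write("".join(combo) + "\n")
        if i % 1000000 == 0:
            myfile.flush()  # cap how much output sits in the write buffer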
|
Problems installing Python Pymqi package
Question: This is my first post on this forum. If I am not complying with protocols,
please just let me know.
C:\>python --version
Python 2.7.11
OS: Windows version 7
WMQ: 8.2
I am trying to install Python **pymqi** package. After couple hours of trying
and searching the web for solutions I decided to post this question hoping to
get some help. The following is the command I issue and the errors I am
getting.
**C:>pip install pymqi**
Collecting pymqi
Using cached pymqi-1.5.4.tar.gz
Requirement already satisfied (use --upgrade to upgrade): testfixtures in c:\python27\lib\site-packages (from pymqi)
Installing collected packages: pymqi
Running setup.py install for pymqi ... error
Complete output from command c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\reyesv~1\\appdata\\local\\temp\\1\\pip
-build-4qqnkt\\pymqi\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --rec
ord c:\users\reyesv~1\appdata\local\temp\1\pip-u2jdz5-record\install-record.txt --single-version-externally-managed --compile:
Building PyMQI client 64bits
running install
running build
running build_py
creating build
creating build\lib.win-amd64-2.7
creating build\lib.win-amd64-2.7\pymqi
copying pymqi\__init__.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQCFC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQXC.py -> build\lib.win-amd64-2.7\pymqi
copying pymqi\CMQZC.py -> build\lib.win-amd64-2.7\pymqi
running build_ext
building 'pymqi.pymqe' extension
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
creating build\temp.win-amd64-2.7\Release\pymqi
C:\Users\reyesviloria362048\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DND
EBUG -DPYQMI_SERVERBUILD=0 "-Ic:\Program Files (x86)\IBM\WebSphere MQ\tools\c\include" -Ic:\python27\include -Ic:\python27\PC /Tcpymqi/pymqe.c /Fobuil
d\temp.win-amd64-2.7\Release\pymqi/pymqe.obj
pymqe.c
pymqi/pymqe.c(240) : error C2275: 'MQCSP' : illegal use of this type as an expression
C:\IBM\WebSphere MQ\tools\c\include\cmqc.h(4072) : see declaration of 'MQCSP'
pymqi/pymqe.c(240) : error C2146: syntax error : missing ';' before identifier 'csp'
pymqi/pymqe.c(240) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(240) : error C2059: syntax error : '{'
pymqi/pymqe.c(247) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(247) : error C2224: left of '.AuthenticationType' must have struct/union type
pymqi/pymqe.c(248) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(248) : error C2224: left of '.CSPUserIdPtr' must have struct/union type
pymqi/pymqe.c(249) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(249) : error C2224: left of '.CSPUserIdLength' must have struct/union type
pymqi/pymqe.c(250) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(250) : error C2224: left of '.CSPPasswordPtr' must have struct/union type
pymqi/pymqe.c(251) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(251) : error C2224: left of '.CSPPasswordLength' must have struct/union type
pymqi/pymqe.c(256) : error C2065: 'csp' : undeclared identifier
pymqi/pymqe.c(256) : warning C4133: '=' : incompatible types - from 'int *' to 'PMQCSP'
error: command 'C:\\Users\\reyesviloria362048\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' fa
iled with exit status 2
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\reyesv~1\\appdata\\local\\temp\\1\\pip-build-4qqnkt\\pymqi\\se
tup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\reyesv~1\ap
pdata\local\temp\1\pip-u2jdz5-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\reyesv~1\a
ppdata\local\temp\1\pip-build-4qqnkt\pymqi\
Answer: You need to install [Microsoft Visual C++ compiler for Python
2.7](https://www.microsoft.com/en-us/download/details.aspx?id=44266)
|
Send scheduled emails with pyramid_mailer and apscheduler
Question: I've tried getting this to work, but there must be a better way; any input is
welcome.
I'm trying to send scheduled emails in my python pyramid app using
pyramid_mailer (settings stored in .ini file), and apscheduler to set the
schedule.
I also use the SQLAlchemyJobStore so jobs can be restarted if the app
restarts.
jobstores = {
'default': SQLAlchemyJobStore(url='mysql://localhost/lgmim')
}
scheduler = BackgroundScheduler(jobstores=jobstores)
@view_config(route_name='start_email_schedule')
def start_email_schedule(request):
# add the job and start the scheduler
scheduler.add_job(send_scheduled_email, 'interval', [request], weeks=1)
scheduler.start()
return HTTPOk()
def send_scheduled_email(request):
# compile message and recipients
# send mail
send_mail(request, subject, recipients, message)
def send_mail(request, subject, recipients, body):
mailer = request.registry['mailer']
message = Message(subject=subject,
recipients=recipients,
body=body)
mailer.send_immediately(message, fail_silently=False)
This is as far as I've gotten; now I'm getting an error, presumably because it
can't pickle the request.
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
Using `pyramid.threadlocal.get_current_registry().settings` to get the mailer
works the first time, but thereafter I get an error. I'm advised not to use it
in any case.
What else can I do?
Answer: Generally, you cannot pickle the `request` object, as it contains references to
things like open sockets and other live objects.
Some useful patterns here are:
* Pregenerate an email id in the database, then pass the id (int, UUID) to the scheduler (see the sketch below)
* Generate the template context (a JSON dict), pass that to the scheduler, and render the template inside a worker
* Do all database fetching and related work inside the scheduler and don't pass any arguments
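A sketch of the first pattern, where `create_pending_email` is a hypothetical
helper that persists everything the email needs and returns its id:
@view_config(route_name='start_email_schedule')
def start_email_schedule(request):
    email_id = create_pending_email(request)  # store the context in the DB
    # pass only the picklable id to the job, never the request itself
    scheduler.add_job(send_scheduled_email, 'interval', [email_id], weeks=1)
    scheduler.start()
    return HTTPOk()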
Specifically, the problem of how to generate a faux `request` object inside a
scheduler can be solved like this:
from pyramid import scripting
from pyramid.paster import bootstrap
def make_standalone_request():
bootstrap_env = bootstrap("your-pyramid-config.ini")
app = bootstrap_env["app"]
pyramid_env = scripting.prepare(registry=bootstrap_env["registry"])
request = pyramid_env["request"]
# Note that request.url will be always dummy,
# so if your email refers to site URL, you need to
# resolve request.route_url() calls before calling the scheduler
# or read the URLs from settings
return request
[Some more inspiration can be found here (disclaimer: I am the
author).](https://websauna.org/narrative/misc/task.html)
|
python: convert pywintypes.datetime to datetime.datetime
Question: I am using pywin32 to read/write to an Excel file. I have some dates in Excel,
stored in format yyyy-mm-dd hh:mm:ss. I would like to import those into Python
as datetime.datetime objects. Here is the line of code I started with:
prior_datetime = datetime.strptime(excel_ws.Cells(2, 4).Value, '%Y-%m-%d %H:%M:%S')
That didn't work. I got the error:
strptime() argument 1 must be str, not pywintypes.datetime
I tried casting it to a string, like so:
prior_datetime = datetime.strptime(str(excel_ws.Cells(2, 4).Value), '%Y-%m-%d %H:%M:%S')
That didn't work either. I got the error:
ValueError: unconverted data remains: +00:00
So then I tried something a little different:
prior_datetime = datetime.fromtimestamp(int(excel_ws.Cells(2, 4).Value))
Still no luck. Error:
TypeError: a float is required.
Casting to a float didn't help. Nor integer. (Hey, I was desperate at this
point.)
I might be looking in the wrong place, but I'm having a terrible time finding
any good documentation on pywin32 in general or pywintypes or
pywintypes.datetime in particular.
Any help?
Answer: So the problem is the `+00:00` timezone offset. [Looking into this, there's no
out-of-the-box solution for
Python](https://stackoverflow.com/questions/20194496/iso-to-datetime-object-z-is-a-bad-directive):
datetime.datetime.strptime("2016-04-01 17:29:25+00:00", '%Y-%m-%d %H:%M:%S %z')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/_strptime.py", line 324, in _strptime
(bad_directive, format))
ValueError: 'z' is a bad directive in format '%Y-%m-%d %H:%M:%S %z'
One band-aid solution is to strip the timezone but that feels pretty gross.
datetime.datetime.strptime("2016-04-01 17:29:25+00:00".rstrip("+00:00"), '%Y-%m-%d %H:%M:%S')
datetime.datetime(2016, 4, 1, 17, 29, 25)
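One caveat with the `rstrip` call above: it strips a *set of characters* ('+',
'0', ':'), not a suffix, so it only happens to work for this value:
>>> "2016-04-01 17:29:00+00:00".rstrip("+00:00")
'2016-04-01 17:29'  # the seconds were eaten too!
>>> "2016-04-01 17:29:25+00:00"[:-6]  # slicing off the fixed-width offset is safer
'2016-04-01 17:29:25'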
Looking around, it looks like (if you can use a third-party library) `dateutil`
solves this issue and is nicer to use than `datetime.strptime`.
# On Commandline
pip install python-dateutil
# code
>>> import dateutil.parser
>>> dateutil.parser.parse("2016-04-01 17:29:25+00:00")
datetime.datetime(2016, 4, 1, 17, 29, 25, tzinfo=tzutc())
|
Change a parent variable in subclass, then use new value in parent class?
Question: I'm just getting to grips with python, and am currently trying to change the
value of a Parent class variable using a subclass method. A basic example of
my code is below.
from bs4 import BeautifulSoup
import requests
import urllib.request as req
class Parent(object):
url = "http://www.google.com"
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
print(url)
def random_method(self):
print(Parent.soup.find_all())
class Child(Parent):
def set_url(self):
new_url = input("Please enter a URL: ")
request = req.Request(new_url)
response = req.urlopen(request)
Parent.url = new_url
def print_url(self):
print(Parent.url)
If I run the methods, the outputs are as follows.
run = Child()
run.Parent()
>>> www.google.com
run.set_url()
>>> Please enter a url: www.thisismynewurl.com
run.print_url()
>>> www.thisismynewurl.com
run.random_method()
>>> #Prints output for www.google.com
Can anyone explain why I can get the new url printing when I run print_url,
but if I try and use it in another method, it reverts to the old value?
Answer: Because `random_method` uses `Parent.soup`, which was computed once, when the
class body of `Parent` was executed at definition time. Rebinding `Parent.url`
later does not re-run the `requests.get` and `BeautifulSoup` lines, so
`Parent.soup` still holds the content fetched from the original URL.
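A minimal sketch of the underlying behaviour (a class body runs exactly once,
when the `class` statement executes):
class Demo(object):
    value = "original"
    derived = "computed from " + value  # evaluated once, right now

Demo.value = "updated"
print(Demo.value)    # updated
print(Demo.derived)  # computed from original -- NOT recomputed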
|
Pandas Python - Finding Time Series Not Covered
Question: Hoping someone can help me out with this one because I don't even know where
to start.
Given a data frame that contains a series of start and end times, such as:
Order Start Time End Time
1 2016-08-18 09:30:00.000 2016-08-18 09:30:05.000
1 2016-08-18 09:30:00.005 2016-08-18 09:30:25.001
1 2016-08-18 09:30:30.001 2016-08-18 09:30:56.002
1 2016-08-18 09:30:40.003 2016-08-18 09:31:05.003
1 2016-08-18 11:30:45.000 2016-08-18 13:31:05.000
For each order id, I am looking to find a list of time periods that are not
covered by any of the ranges between the earliest start time and latest end
time.
So in the example above, I would be looking for
2016-08-18 09:30:05.000 to 2016-08-18 09:30:00.005 (the time lag between the first and second rows)
2016-08-18 09:30:25.001 to 2016-08-18 09:30:30.001 (the time lag between the second and third rows)
and
2016-08-18 09:31:05.003 to 2016-08-18 11:30:45.000 (the time period between 4 and 5)
There is overlap between rows 3 and 4, so they wouldn't count.
**A few things to consider (additional color):**
Each record indicates an outstanding order placed at (for example) one of the
stock exchanges. Therefore, I can have orders open at Nasdaq and NYSE at the
same time. I also can have a short duration order at Nasdaq and a long one at
NYSE starting at the same time.
That would look as following:
Order Start Time End Time
1 2016-08-18 09:30:00.000 2016-08-18 09:30:05.000 (NYSE)
1 2016-08-18 09:30:00.001 2016-08-18 09:30:00.002 (NASDAQ)
I am trying to figure out when we are doing nothing at all and have no live
orders on any exchange.
I have zero idea where to even start on this... any ideas would be appreciated.
Answer: ### Setup
from StringIO import StringIO
import pandas as pd
text = """Order Start Time End Time
1 2016-08-18 09:30:00.000 2016-08-18 09:30:05.000
1 2016-08-18 09:30:00.005 2016-08-18 09:30:25.001
1 2016-08-18 09:30:30.001 2016-08-18 09:30:56.002
1 2016-08-18 09:30:40.003 2016-08-18 09:31:05.003
1 2016-08-18 11:30:45.000 2016-08-18 13:31:05.000
2 2016-08-18 09:30:00.000 2016-08-18 09:30:05.000
2 2016-08-18 09:30:00.005 2016-08-18 09:30:25.001
2 2016-08-18 09:30:30.001 2016-08-18 09:30:56.002
2 2016-08-18 09:30:40.003 2016-08-18 09:31:05.003
2 2016-08-18 11:30:45.000 2016-08-18 13:31:05.000"""
df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2])
### Solution
def find_gaps(df, start_text='Start Time', end_text='End Time'):
# rearrange stuff to get all times and a tracker
# in single columns.
cols = [start_text, end_text]
df = df.reset_index()
df1 = df[cols].stack().reset_index(-1)
df1.columns = ['edge', 'time']
df1['edge'] = df1['edge'].eq(start_text).mul(2).sub(1)
# sort by ascending time, then descending edge
# (starts before ends if equal time)
# this will ensure we avoid zero length gaps.
df1 = df1.sort_values(['time', 'edge'], ascending=[True, False])
# we identify gaps when we've reached a number
# of ends equal to number of starts.
# we'll track that with cumsum, when cumsum is
# zero, we've found a gap
# last position should always be zero and is not a gap.
# So I remove it.
track = df1['edge'].cumsum().iloc[:-1]
gap_starts = track.index[track == 0]
gaps = df.ix[gap_starts]
gaps[start_text] = gaps[end_text]
gaps[end_text] = df.shift(-1).ix[gap_starts, start_text]
return gaps
df.set_index('Order').groupby(level=0).apply(find_gaps)
[![enter image description
here](http://i.stack.imgur.com/9Gyyl.png)](http://i.stack.imgur.com/9Gyyl.png)
|
Face detection with opencv and python only detects eye region
Question: I wrote the script by looking at this website and it works perfectly, but the
only problem when I run it on my computer is that it only detects the eye region.
<https://pythonprogramming.net/haar-cascade-face-eye-detection-python-opencv-tutorial/>
Below is the script I wrote based on the website.
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
image = cv2.imread('frame119.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = image[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Do I need to add an additional line to fix the problem? Also, the image is
680x480; I think that may be one of the reasons why it only detects the eye
region of the image, but I do not have any idea regarding that.
Thank you for the help.
Answer: It is not possible to detect eyes if no face is detected, so try these
modifications:
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
image = cv2.imread('frame119.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
print len(faces) # it will print no of faces detected
for (x,y,w,h) in faces:
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = image[y:y+h, x:x+w]
cv2.imshow('face',roi_color) # It will show a cropped face , if face is detected
cv2.waitKey()
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
|
Formatting Lists into columns of a table output (python 3)
Question: I have data that is collected in a loop and stored under separate lists that
hold only the same datatypes (e.g. only strings, only floats) as shown below:
names = ['bar', 'chocolate', 'chips']
weights = [0.05, 0.1, 0.25]
costs = [2.0, 5.0, 3.0]
unit_costs = [40.0, 50.0, 12.0]
I have treated these lists as "columns" of a table and wish to print them out
as a formatted table that should look something like this:
Names | Weights | Costs | Unit_Costs
----------|---------|-------|------------
bar | 0.05 | 2.0 | 40.0
chocolate | 0.1 | 5.0 | 50.0
chips | 0.25 | 3.0 | 12.0
I only know how to print out data from lists horizontally across table rows. I
have looked online (and on this site) for some help regarding this issue;
however, I only managed to find help for getting it to work in Python 2.7 and
not 3.5.1, which is what I am using.
My question is:
how do I get entries from the above 4 lists to print out into a table as shown
above?
Each item index from the lists above is associated (i.e. entry[0] from the 4
lists is associated with the same item; bar, 0.05, 2.0, 40.0).
Answer: Here's an interesting way to draw the table with `texttable`.
import texttable as tt
tab = tt.Texttable()
headings = ['Names','Weights','Costs','Unit_Costs']
tab.header(headings)
names = ['bar', 'chocolate', 'chips']
weights = [0.05, 0.1, 0.25]
costs = [2.0, 5.0, 3.0]
unit_costs = [40.0, 50.0, 12.0]
for row in zip(names,weights,costs,unit_costs):
tab.add_row(row)
s = tab.draw()
print (s)
**Result**
+-----------+---------+-------+------------+
| Names | Weights | Costs | Unit_Costs |
+===========+=========+=======+============+
| bar | 0.050 | 2 | 40 |
+-----------+---------+-------+------------+
| chocolate | 0.100 | 5 | 50 |
+-----------+---------+-------+------------+
| chips | 0.250 | 3 | 12 |
+-----------+---------+-------+------------+
You can install `texttable` with using this command `pip install texttable`.
|
Implementing Sobel operators with python without opencv
Question: Given a greyscale 8 bit image (2D array with values from 0 - 255 for pixel
intensity), I want to implement the Sobel operators (masks) on an image. The
Sobel function below basically loops around a given pixel, applies the
following weights to the pixels: [![enter image description
here](http://i.stack.imgur.com/1N67K.png)](http://i.stack.imgur.com/1N67K.png)
[![enter image description
here](http://i.stack.imgur.com/Ut0Aq.png)](http://i.stack.imgur.com/Ut0Aq.png)
And then applies the given formula:
[![enter image description
here](http://i.stack.imgur.com/aBBUL.png)](http://i.stack.imgur.com/aBBUL.png)
I'm trying to implement the formulas from this link:
<http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm>
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import Image
def Sobel(arr,rstart, cstart,masksize, divisor):
sum = 0;
x = 0
y = 0
for i in range(rstart, rstart+masksize, 1):
x = 0
for j in range(cstart, cstart+masksize, 1):
if x == 0 and y == 0:
p1 = arr[i][j]
if x == 0 and y == 1:
p2 = arr[i][j]
if x == 0 and y == 2:
p3 = arr[i][j]
if x == 1 and y == 0:
p4 = arr[i][j]
if x == 1 and y == 1:
p5 = arr[i][j]
if x == 1 and y == 2:
p6 = arr[i][j]
if x == 2 and y == 0:
p7 = arr[i][j]
if x == 2 and y == 1:
p8 = arr[i][j]
if x == 2 and y == 2:
p9 = arr[i][j]
x +=1
y +=1
return np.abs((p1 + 2*p2 + p3) - (p7 + 2*p8+p9)) + np.abs((p3 + 2*p6 + p9) - (p1 + 2*p4 +p7))
def padwithzeros(vector, pad_width, iaxis, kwargs):
vector[:pad_width[0]] = 0
vector[-pad_width[1]:] = 0
return vector
im = Image.open('charlie.jpg')
im.show()
img = np.asarray(im)
img.flags.writeable = True
p = 1
k = 2
m = img.shape[0]
n = img.shape[1]
masksize = 3
img = np.lib.pad(img, p, padwithzeros) #this function padds image with zeros to cater for pixels on the border.
x = 0
y = 0
for row in img:
y = 0
for col in row:
if not (x < p or y < p or y > (n-k) or x > (m-k)):
img[x][y] = Sobel(img, x-p,y-p,masksize,masksize*masksize)
y = y + 1
x = x + 1
img2 = Image.fromarray(img)
img2.show()
Given this greyscale 8 bit image
[![enter image description
here](http://i.stack.imgur.com/8zINU.gif)](http://i.stack.imgur.com/8zINU.gif)
I get this when applying the function:
[![enter image description
here](http://i.stack.imgur.com/MPM6y.png)](http://i.stack.imgur.com/MPM6y.png)
but should get this:
[![enter image description
here](http://i.stack.imgur.com/ECAIK.gif)](http://i.stack.imgur.com/ECAIK.gif)
I have implemented other gaussian filters with python, I'm not sure where I'm
going wrong here?
Answer: Sticking close to what your code is doing, one elegant solution is to use the
[`scipy.ndimage.filters.generic_filter()`](http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.filters.generic_filter.html)
with the formula provided above.
import numpy as np
from scipy.ndimage.filters import generic_filter
from scipy.ndimage import imread
# Load sample data
with np.DataSource().open("http://i.stack.imgur.com/8zINU.gif", "rb") as f:
img = imread(f, mode="I")
# Apply the Sobel operator
def sobel_filter(P):
return (np.abs((P[0] + 2 * P[1] + P[2]) - (P[6] + 2 * P[7] + P[8])) +
np.abs((P[2] + 2 * P[5] + P[8]) - (P[0] + 2 * P[3] + P[6])))  # indices fixed to match the formula above
G = generic_filter(img, sobel_filter, (3, 3))
Running this on the sample image takes about 400 ms. For comparison, a
`convolve2d`-based implementation runs in about 6.5 ms.
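The `convolve2d` approach mentioned above would look something like this sketch
(the two masks follow the formulas in the question; `img` is the image loaded
earlier):
import numpy as np
from scipy.signal import convolve2d

# the two Sobel masks; abs() below makes the sign flip from convolution irrelevant
K1 = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])
K2 = K1.T
img = img.astype(np.int32)  # avoid uint8 overflow
G = (np.abs(convolve2d(img, K1, mode="same")) +
     np.abs(convolve2d(img, K2, mode="same")))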
|
how to read json file with pandas?
Question: I have scraped a website with scrapy and stored the data in a json file.
Link to the json file:
<https://drive.google.com/file/d/0B6JCr_BzSFMHLURsTGdORmlPX0E/view?usp=sharing>
But the json isn't standard json and gives errors:
>>> import json
>>> with open("/root/code/itjuzi/itjuzi/investorinfo.json") as file:
... data = json.load(file)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/root/anaconda2/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/root/anaconda2/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/root/anaconda2/lib/python2.7/json/decoder.py", line 367, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 3 column 2 - line 3697 column 2 (char 45 - 3661517)
Then I tried this:
with open('/root/code/itjuzi/itjuzi/investorinfo.json','rb') as f:
data = f.readlines()
data = map(lambda x: x.decode('unicode_escape'), data)
>>> df = pd.DataFrame(data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pd' is not defined
>>> import pandas as pd
>>> df = pd.DataFrame(data)
>>> print pd
<module 'pandas' from '/root/anaconda2/lib/python2.7/site-packages/pandas/__init__.pyc'>
>>> print df
[3697 rows x 1 columns]
Why does this only return 1 column?
How can I standardize the json file and read it with pandas correctly?
Answer: Try this [from the SO documentation on
JSON](http://stackoverflow.com/documentation/pandas/4752/json/16714/read-json#t=201608191329212576968):
import json
with open('data.json') as data_file:
data = json.load(data_file)
This is the standard way of loading a JSON file into Python objects.
EDIT: Your data is not valid JSON. Delete the following in the first 3 lines
and it will validate:
[{
"website": ["\u5341\u65b9\u521b\u6295"]
}]
EDIT2[Since you need to access nested values from json]:
You can now also access single values like this:
data["one"][0]["id"] # will return 'value'
data["two"]["id"] # will return 'value'
data["three"] # will return 'value'
|
Python: Division by larger numbers slower?
Question: Why does dividing by the larger factor pair result in slower execution?
My solution for
<https://codility.com/programmers/task/min_perimeter_rectangle/>
from math import sqrt, floor
# This fails the performance tests
def solution_slow(n):
x = int(sqrt(n))
for i in xrange(x, n+1):
if n % i == 0:
return 2*(i + n / i)
# This passes the performance tests
def solution_fast(n):
x = int(sqrt(n))
for i in xrange(x, 0, -1):
if n % i == 0:
return 2*(i + n / i)
Answer: It's not division that slows it down; it's the number of iterations required.
Let `L = xrange(0, x)` (order doesn't matter here) and `R = xrange(x, n+1)`.
Every factor of `n` in `L` can be paired with exactly one factor of `n` in
`R`. In general, `x` is much, much smaller than `n/2`, so `L` is much smaller
than `R`. This means that there are far more elements of `R` that don't divide
`n` than there are in `L`. In the case of a prime number, there _are_ no
factors other than 1 and `n` itself, so the slow solution has to check nearly
every value of the much larger set instead of the much smaller one.
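To make the asymmetry concrete, here is a small sketch (Python 2, matching the
question's `xrange`) that counts how many candidates each loop inspects:
from math import sqrt

def count_checks(n):
    x = int(sqrt(n))
    up = next(k for k, i in enumerate(xrange(x, n + 1), 1) if n % i == 0)
    down = next(k for k, i in enumerate(xrange(x, 0, -1), 1) if n % i == 0)
    return up, down

# 999983 is the largest prime below one million: the upward loop checks
# ~999,000 candidates before reaching n itself, the downward loop only 999
# before reaching 1.
print count_checks(999983)  # (998985, 999)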
|
PIL in Python complains that there are no 'size' attributes to a PixelAccess, what am I doing wrong?
Question: I am trying to program an application that will loop through every pixel of a
given image, get the rgb value for each, add it to a dictionary (along with
amount of occurrences) and then give me a rundown of the most used rgb values.
However, to be able to loop through images, I need to be able to fetch their
size; this proved to be no easy task.
According to the [PIL
documentation](http://effbot.org/imagingbook/image.htm#tag-Image.Image.size),
the Image object should have an attribute called 'size'. When I try to run the
program, I get this error:
AttributeError: 'PixelAccess' object has no attribute 'size'
this is the code:
from PIL import Image
import sys
'''
TODO:
- Get an image
- Loop through all the pixels and get the rgb values
- append rgb values to dict as key, and increment value by 1
- return a "graph" of all the colours and their occurances
TODO LATER:
- couple similar colours together
'''
SIZE = 0
def load_image(path=sys.argv[1]):
image = Image.open(path)
im = image.load()
SIZE = im.size
return im
keyValue = {}
# set the image object to variable
image = load_image()
print SIZE
Which makes no sense at all. What am I doing wrong?
Answer: `image.load` returns a pixel access object, which does not have a `size`
attribute; the size lives on the `Image` object itself.
def load_image(path=sys.argv[1]):
global SIZE  # without this, SIZE = ... would only create a local variable
image = Image.open(path)
im = image.load()
SIZE = image.size
return im
is what you want
[documentation](http://effbot.org/imagingbook/image.htm) for PIL
|
How do I write a python dictionary to an excel file?
Question: I'm trying to write a dictionary with randomly generated strings and numbers
to an excel file. I've almost succeeded but with one minor problem. The
structure of my dictionary is as follows:
Age: 11, Names Count: 3, Names: nizbh,xyovj,clier
This dictionary was generated from data obtained through a text file. It
aggregates all the contents based on their age and if two people have the same
age, it groups them into one list. I'm trying to write this data on to an
excel file. I've written this piece of code so far.
import xlsxwriter
lines = []
workbook = xlsxwriter.Workbook('demo.xlsx')
worksheet = workbook.add_worksheet()
with open ("input2.txt") as input_fd:
lines = input_fd.readlines()
age_and_names = {}
for line in lines:
name,age = line.strip().split(",")
if not age in age_and_names:
age_and_names[age]=[]
age_and_names[age].append(name)
print age_and_names
for key in sorted(age_and_names):
print "Age: {}, Names Count: {}, Names: {}".format(key, len(age_and_names[key]), ",".join(age_and_names[key]))
row=0
col=0
for key in sorted(age_and_names):#.keys():
row += 1
worksheet.write(row, col, key)
for item in age_and_names[key]:
worksheet.write(row, col+1, len(age_and_names[key]))
worksheet.write(row, col+1, item)
row+=1
workbook.close()
But what this is actually doing is this (in the excel file):
11 nizbh
xyovj
clier
What should I do to make it appear like this instead?
Age Name Count Names
11 3 nizbh, xyovj, clier
Answer: The problem was indeed in the two for loops there. I meddled and played around
with them until I arrived at the answer. They're working fine now. Thank you
guys! Replace the for loops at the end with this:
for key in sorted(age_and_names):#.keys():
row+=1
worksheet.write(row, col, key)
worksheet.write(row, col+1, len(age_and_names[key]))
worksheet.write(row, col+2, ",".join(age_and_names[key]))
|
How to apply a formula to each cell of a numpy array
Question: I am trying to take an existing numpy array and apply a formula to each cell
of the array. I have the code below but it returns the following error.
Traceback (most recent call last):
File "C:\gTemp\Text-1.py", line 5, in <module>
myarray = 0.1236 * math.tan(myarray / 2842.5 + 1.1863)
TypeError: only length-1 arrays can be converted to Python scalars
I am new to numpy and I am looking for skill level appropriate advice. Here is
my existing code.
import arcpy
import numpy
import math
myarray = numpy.load(r"E:\depthtester2.npy")
myarray = 0.1236 * math.tan(myarray / 2842.5 + 1.1863)
myRaster = arcpy.NumPyArrayToRaster(myarray,arcpy.Point(0.0,0.0),1.0, 1.0, -99999.0 )
myRaster.save("E:\deptht")
print "done"
Answer: Instead of `math.tan()`, use
[`numpy.tan()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.tan.html).
The numpy functions are designed to work elementwise on numpy arrays.
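Applied to the code in the question, the change is one line (a sketch; the rest
of the script is unchanged):
import numpy
myarray = numpy.load(r"E:\depthtester2.npy")
# numpy.tan is applied elementwise across the whole array, no loop needed
myarray = 0.1236 * numpy.tan(myarray / 2842.5 + 1.1863)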
|
Tor Browser with RSelenium in Linux/Windows
Question: Looking to use RSelenium and Tor using my Linux machine to return the Tor IP
(w/Firefox as Tor Browser). This is doable with Python, but having trouble
with it in R. Can anybody get this to work? Perhaps you can share your
solution in either Windows / Linux.
# library(devtools)
# devtools::install_github("ropensci/RSelenium")
library(RSelenium)
RSelenium::checkForServer()
RSelenium::startServer()
binaryExtension <- paste0(Sys.getenv('HOME'),"/Desktop/tor-browser_en-US/Browser/firefox")
remDr <- remoteDriver(dir = binaryExtention)
remDr$open()
remDr$navigate("http://myexternalip.com/raw")
remDr$quit()
The error `Error in callSuper(...) : object 'binaryExtention' not found` is
being returned.
For community reference, this Selenium code works in Windows using Python3:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from os.path import expanduser # Finds user's user name on Windows
# Substring inserted to overcome r requirement in FirefoxBinary
binary = FirefoxBinary(r"%s\\Desktop\\Tor Browser\\Browser\\firefox.exe" % (expanduser("~")))
profile = FirefoxProfile(r"%s\\Desktop\\Tor Browser\\Browser\\TorBrowser\\Data\\Browser\\profile.default" % (expanduser("~")))
driver = webdriver.Firefox(profile, binary)
driver.get('http://myexternalip.com/raw')
html = driver.page_source
soup = BeautifulSoup(html, "lxml") # lxml needed
# driver.close()
# line.strip('\n')
"Current Tor IP: " + soup.text.strip('\n')
# Based in part on
# http://stackoverflow.com/questions/13960326/how-can-i-parse-a-website-using-selenium-and-beautifulsoup-in-python
# http://stackoverflow.com/questions/34316878/python-selenium-binding-with-tor-browser
# http://stackoverflow.com/questions/3367288/insert-variable-values-into-a-string-in-python
Answer: Something like the following should work:
browserP <- paste0(Sys.getenv('HOME'),"/Desktop/tor-browser_en-US/Browser/firefox")
jArg <- paste0("-Dwebdriver.firefox.bin='", browserP, "'")
selServ <- RSelenium::startServer(javaargs = jArg)
UPDATE:
This worked for me on windows. Firstly run the beta version:
checkForServer(update = TRUE, beta = TRUE, rename = FALSE)
Next open a version of the tor browser manually.
library(RSelenium)
browserP <- "C:/Users/john/Desktop/Tor Browser/Browser/firefox.exe"
jArg <- paste0("-Dwebdriver.firefox.bin=\"", browserP, "\"")
pLoc <- "C:/Users/john/Desktop/Tor Browser/Browser/TorBrowser/Data/Browser/profile.meek-http-helper/"
jArg <- c(jArg, paste0("-Dwebdriver.firefox.profile=\"", pLoc, "\""))
selServ <- RSelenium::startServer(javaargs = jArg)
remDr <- remoteDriver(extraCapabilities = list(marionette = TRUE))
remDr$open()
remDr$navigate("https://check.torproject.org/")
> remDr$getTitle()
[[1]]
[1] "Congratulations. This browser is configured to use Tor."
|
GtkInfoBar doesn't show again after hide
Question: I hide a Gtk widget, then try to show it, but none of the methods `show()`,
`show_all()` or `show_now()` works. If I don't call `hide()`, the widget shows.
python 3.5.2
gtk3 3.20.8
pygobject-devel 3.20.1
test.py:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
builder = Gtk.Builder()
builder.add_from_file("gui.glade")
infoBar = builder.get_object("infoBar")
window = builder.get_object("window")
window.show_all()
infoBar.hide()
infoBar.show()
Gtk.main()
gui.glade: <http://pastebin.com/xKFt1v84>
Answer: [This is a long-standing bug in GTK+ specific to GtkInfoBar. Monitor the
linked bug report for more details, some workarounds (including one in Python
that you can use for the time being) and to find out when it's fixed for
real.](https://bugzilla.gnome.org/show_bug.cgi?id=710888)
|
recursive import in python
Question: In logging.py in my Python library there are the lines:
import logging
and:
from logging import DEBUG, INFO, WARNING, ERROR, CRITICAL
I don't understand the meaning of importing logging within logging.py, and
also: where are (DEBUG, INFO, WARNING, ERROR, CRITICAL) defined?
Answer: It's importing `logging` from the Python standard library
([link](https://docs.python.org/2/library/logging.html)); the [log
levels](https://docs.python.org/2/library/logging.html#logging-levels)
(DEBUG, INFO, ...) are described on that page.
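One wrinkle worth noting: on Python 2, a module named logging.py inside a
package shadows the standard library under implicit relative imports, which is
why such modules typically start like this sketch (on Python 3, imports are
absolute by default):
# inside a library's own logging.py (Python 2)
from __future__ import absolute_import  # make "import logging" skip this file
import logging                          # now resolves to the stdlib module
from logging import DEBUG, INFO, WARNING, ERROR, CRITICAL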
|
Python convert list to dict with multiple key value
Question: I have a list something like below and want to convert it to dict
my_list = ['key1=value1', 'key2=value2', 'key3=value3-1', 'value3-2', 'value3-3', 'key4=value4', 'key5=value5', 'value5-1', 'value5-2', 'key6=value6']
How can I convert the above list to a dict like the one below?
my_dict = {
'key1': 'value1',
'key2': 'value2',
'key3': ['value3-1', 'value3-2', 'value3-3'],
'key4': 'value4',
'key5': ['value5', 'value5-1', 'value5-2'],
'key6': 'value6'
}
Answer: Here's a possible solution:
from collections import defaultdict
import pprint
my_list = ['key1=value1', 'key2=value2', 'key3=value3-1', 'value3-2',
'value3-3', 'key4=value4', 'key5=value5', 'value5-1', 'value5-2', 'key6=value6']
my_dict = defaultdict(list)
current_key = None
for item in my_list:
if '=' in item:
current_key, value = item.split('=')
my_dict[current_key].append(value)
my_dict = {k: v[0] if len(v) == 1 else v for k, v in my_dict.iteritems()}
pprint.pprint(my_dict)
Out of curiosity, if your input was a dictionary, getting a list would be
trivial:
from collections import defaultdict
my_dict = {
'key1': 'value1',
'key2': 'value2',
'key3': ['value3-1', 'value3-2', 'value3-3'],
'key4': 'value4',
'key5': ['value5', 'value5-1', 'value5-2'],
'key6': 'value6'
}
output = ["{0}={1}".format(k, ', '.join(v) if type(v) is list else v)
for k, v in my_dict.iteritems()]
print output
|
Python Problems Guess the Number Game
Question: I have made a guess-the-number Python game for the terminal, but the game does
not recognize when the player wins and I don't understand why. Here is my code:
from random import randint
import sys
def function():
while (1 == 1):
a = raw_input('Want to Play?')
if (a == 'y'):
r = randint(1, 100)
print('Guess the Number:')
print('The number is between 1 and 100')
b = raw_input()
if (b == r):
print(r, 'You Won')
elif (b != r):
print(r, 'You Lose')
elif (a == 'n'):
sys.exit()
else:
print('You Did Not Answered the Question')
function()
Answer: As mentioned in [FujiApple's
answer](http://stackoverflow.com/a/39056047/6568562): The type of input by
default is a string.
So :
>>>b = raw_input("Enter a number : ")
Enter a number : 5
>>>print b
'5'
>>>type(b)
<type 'str'>
You need to convert the string to an integer in order for it to evaluate as
equal to the randint number:
if int(b) == r:
|
No module named 'matplotlib.pylot'
Question: I installed Python 3.5 from the Anaconda distribution.
C:\Users\ananda>python
Python 3.5.2 |Anaconda 4.1.1 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
I am importing matplotlib in a Jupyter notebook, where I am getting an error
about the module not being found.
I tried to install matplotlib like below:
>>>C:\Users\ananda>conda install matplotlib
Fetching package metadata .........
Solving package specifications: ..........
# All requested packages already installed.
# packages in environment at C:\Users\ananda\AppData\Local\Continuum\Anaconda3:
matplotlib 1.5.1 np111py35_0
Not sure what I am doing wrong. How do I use the matplotlib module here? Do I
need to install any particular version?
Answer: Your attempt to install `matplotlib` seems right, but the submodule you're
looking for is called `pyplot`.
Just try:
>>> import matplotlib.pyplot as plt
>>> # No ImportError or similar, everything is fine
If you still get an error, just post the **full** traceback.
Hope this helps!
|
IMDbPY get the stars of a movie
Question: I was able to get the cast of the movie like this:
#!/usr/bin/env python
import imdb
ia = imdb.IMDb()
s_result = ia.search_movie('The Untouchables')
the_unt = s_result[0]
print the_unt['cast']
However, that gave all the _cast_; I am looking for just the _stars_ of the
movie. For example, Al Pacino is a star in The Godfather.
Answer: As stated in the comments, IMDb does not know the concept of a "star", but it
does somewhat know the "top actors" from the cast lineup.
It is probably based on the credits of the film, but even IMDb says _first
billed only_ for some movies.
The cast ordering is entirely dependent on the movie and who verifies the data
on IMDb. If it says "credits order", that means "in the order they were
introduced in the film credits", **but** , the credit ordering could be some
arbitrary ordering that the director of the film felt like placing them in.
For example, some films say "And introducing... (some actor no one knows
about)" or like a TV Show says "Special Guest Star... (someone most people
recognize)". In both cases, those are either before / after the entire regular
cast is introduced.
* * *
So, if you wanted the top 5 actors for a given film, you could do something
like this
import imdb
ia = imdb.IMDb()
search_results = ia.search_movie('The Godfather')
if search_results:
movieID = search_results[0].movieID
movie = ia.get_movie(movieID)
if movie:
cast = movie.get('cast')
topActors = 5
for actor in cast[:topActors]:
print "{0} as {1}".format(actor['name'], actor.currentRole)
Output
Marlon Brando as Don Vito Corleone
Al Pacino as Michael Corleone
James Caan as Sonny Corleone
Richard S. Castellano as Clemenza
Robert Duvall as Tom Hagen
|
ImportError: No module named services Django
Question: I installed Python 2.7 on my Mac. I have a project running Django v1.9.4.
Unfortunately, `manage.py runserver` is throwing an error because it couldn't
find a module named services.
From a shell:
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Library/Python/2.7/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/Library/Python/2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Library/Python/2.7/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/Library/Python/2.7/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named services
I'm wondering what should be done here to install this module.
Answer: It seems that 'services' is your project's module. Maybe you should set the
PYTHONPATH environment variable so Python can find your module. If the module
is in the current directory, you can run the project like this:
export PYTHONPATH=.:$PYTHONPATH
python manage.py runserver
|
TypeError: super() takes at least 1 argument [Python 3]
Question: In the following code, I kept getting the same error although I rechecked it
for more than 15 minutes. For your info, I ran it in Sublime Text, and the
error is:
> TypeError: super() takes at least 1 argument (0 given)
The code is as shown below:
class Car():
"""A simple attempt to represent a car."""
def __init__(self, make, model, year):
self.make = make
self.model = model
self.year = year
self.odometer_reading = 0
def get_descriptive_name(self):
long_name = str(self.year) + ' ' + self.make + ' ' + self.model
return long_name.title()
def read_odometer(self):
print("This car has " + str(self.odometer_reading) + " miles on it.")
def update_odometer(self, mileage):
if mileage >= self.odometer_reading:
self.odometer_reading = mileage
else:
print("You can't roll back an odometer!")
def increment_odometer(self, miles):
self.odometer_reading += miles
class ElectricCar(Car):
"""Represent aspects of a car, specific to electric vehicles."""
def __init__(self, make, model, year):
"""Initialize attributes of the parent class."""
super().__init__(make, model, year)
my_tesla = ElectricCar('tesla', 'model s', 2016)
print(my_tesla.get_descriptive_name())
Answer: The problem here is a fairly [well
documented](http://stackoverflow.com/questions/576169/understanding-python-super-with-init-methods?noredirect=1&lq=1)
one on StackOverflow. But I'll
explain how you are using `super()` incorrectly. You're using what's called
[Old Style classes](https://wiki.python.org/moin/NewClassVsClassicClass),
while trying to use `super()`. **New style classes** inherit from
[`object`](https://docs.python.org/2/library/functions.html#object) and can be
used in **Python 2.2** and up (_Python 3 exclusively uses New style classes_).
Your `Car` class declaration should look like this -> `class Car(object):`
(`Car` _inherits from the_ `object` _built-in_), with your `super` call having
the class the object is in, and `self` passed in as arguments:
super(ElectricCar, self).__init__(make, model, year)
Now, if we _print_ out the type of object `my_tesla` is:
>>> print type(my_tesla)
<class '__main__.ElectricCar'>
We can see it's of type `ElectricCar`.
Now why is all this important? Well there are a few key differences between
the styles. In the Old style, the class and the objects it defines for
instantiating are of _different_ types. In Old style classes, instances are
always of type `instance`, regardless of their class. With New style classes,
an instance is generally going to share the same type that it's class has.
Examples:
**Old Style** ->
>>> class MyClass:
pass
>>> print type(MyClass)
>>> print type(MyClass())
<type 'classobj'>
<type 'instance'>
**New Style** ->
>>> class MyClass(object):
pass
>>> print type(MyClass)
>>> print type(MyClass())
<type 'type'>
<class '__main__.MyClass'>
Please refer to Python's official documentation on
[`super()`](https://docs.python.org/2/library/functions.html#super).
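Putting it together for the classes in the question, a sketch of the Python 2
form (on an actual Python 3 interpreter, the zero-argument `super()` in the
original code would have worked as-is):
class Car(object):  # new-style class: inherit from object
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year
        self.odometer_reading = 0

class ElectricCar(Car):
    def __init__(self, make, model, year):
        # Python 2 needs the explicit two-argument form
        super(ElectricCar, self).__init__(make, model, year)

my_tesla = ElectricCar('tesla', 'model s', 2016)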
|
Skulpt runit() button conflicting with CodeMirror?
Question: I am making an in-browser (static) Python editor with Skulpt and CodeMirror.
Here is the code for it so far:
<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js" type="text/javascript">
</script>
<script src="http://www.skulpt.org/static/skulpt.min.js" type="text/javascript">
</script>
<script src="http://www.skulpt.org/static/skulpt-stdlib.js" type="text/javascript">
</script>
<script src="https://www.cs.princeton.edu/~dp6/CodeMirror/lib/codemirror.js" type="text/javascript">
</script>
<script src="https://www.cs.princeton.edu/~dp6/CodeMirror/mode/python/python.js" type="text/javascript">
</script>
<link href="https://www.cs.princeton.edu/~dp6/CodeMirror/lib/codemirror.css" rel="stylesheet" type="text/css">
<title></title>
</head>
<body>
<script type="text/javascript">
function outf(text) {
var mypre = document.getElementById("dynamicframe");
mypre.innerHTML = mypre.innerHTML + text;
}
function builtinRead(x) {
if (Sk.builtinFiles === undefined || Sk.builtinFiles["files"][x] === undefined)
throw "File not found: '" + x + "'";
return Sk.builtinFiles["files"][x];
}
function runit() {
var prog = document.getElementById("textbox").value;
var mypre = document.getElementById("dynamicframe");
mypre.innerHTML = '';
Sk.pre = "dynamicframe";
Sk.configure({
output: outf,
read: builtinRead
});
(Sk.TurtleGraphics || (Sk.TurtleGraphics = {})).target = 'canvas';
var myPromise = Sk.misceval.asyncToPromise(function() {
return Sk.importMainWithBody("<stdin>", false, prog, true);
});
myPromise.then(function(mod) {
console.log('success');
},
function(err) {
console.log(err.toString());
});
}
//<![CDATA[
window.onload = function() {
CodeMirror.fromTextArea(document.getElementById('textbox'), {
mode: {
name: "python",
version: 2,
singleLineStringErrors: false
},
lineNumbers: true,
indentUnit: 4
});
} //]]>
</script>
<textarea id="textbox" name="textbox"></textarea>
<br>
<button onclick="runit()" type="button">Run</button>
<pre id="dynamicframe"></pre>
<div id="canvas"></div>
</body>
</html>
With the `<button>`, I call `onclick="runit()"` but it does not do anything at
all when clicked. I took the skulpt code directly from their website
([skulpt.org](http://www.skulpt.org/)) and the CodeMirror parts from a fiddle
(<https://jsfiddle.net/gw0shwok/2/>). They seem to conflict with each other in
some way when I call the `runit()` function on a button click. Why is this? How can
I fix the issue?
A link to my live editor: <http://ckdata.neocities.org/python.html>
Answer: This worked for me:
// Step 1: Declare a variable to hold the editor:
<script type="text/javascript">
var editor;
function outf(text) {...
Then save the codemirror editor when it's created:
// Step 2 : Save the codemirror object in the editor.
window.onload = function() {
editor = CodeMirror.fromTextArea(document.getElementById('textbox'), {
mode: {...
Finally use the codemirror API to get the contents of the editor in the
`runit` callback:
function runit() {
var prog = editor.getDoc().getValue(); // Use codemirror API
var mypre = document.getElementById("dynamicframe");
Here is the
[output](https://s4.postimg.io/hvl11m3vx/Output.png).
Here is the entire modified code:
<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js" type="text/javascript">
</script>
<script src="http://www.skulpt.org/static/skulpt.min.js" type="text/javascript">
</script>
<script src="http://www.skulpt.org/static/skulpt-stdlib.js" type="text/javascript">
</script>
<script src="https://www.cs.princeton.edu/~dp6/CodeMirror/lib/codemirror.js" type="text/javascript">
</script>
<script src="https://www.cs.princeton.edu/~dp6/CodeMirror/mode/python/python.js" type="text/javascript">
</script>
<link href="https://www.cs.princeton.edu/~dp6/CodeMirror/lib/codemirror.css" rel="stylesheet" type="text/css">
<title></title>
</head>
<body>
<script type="text/javascript">
var editor;
function outf(text) {
var mypre = document.getElementById("dynamicframe");
mypre.innerHTML = mypre.innerHTML + text;
}
function builtinRead(x) {
if (Sk.builtinFiles === undefined || Sk.builtinFiles["files"][x] === undefined)
throw "File not found: '" + x + "'";
return Sk.builtinFiles["files"][x];
}
function runit() {
var prog = editor.getDoc().getValue();
var mypre = document.getElementById("dynamicframe");
mypre.innerHTML = '';
Sk.pre = "dynamicframe";
Sk.configure({
output: outf,
read: builtinRead
});
(Sk.TurtleGraphics || (Sk.TurtleGraphics = {})).target = 'canvas';
var myPromise = Sk.misceval.asyncToPromise(function() {
return Sk.importMainWithBody("<stdin>", false, prog, true);
});
myPromise.then(function(mod) {
console.log('success');
},
function(err) {
console.log(err.toString());
});
}
//<![CDATA[
window.onload = function() {
editor = CodeMirror.fromTextArea(document.getElementById('textbox'), {
mode: {
name: "python",
version: 2,
singleLineStringErrors: false
},
lineNumbers: true,
indentUnit: 4
});
} //]]>
</script>
<textarea id="textbox" name="textbox"></textarea>
<br>
<button onclick="runit()" type="button">Run</button>
<pre id="dynamicframe"></pre>
<div id="canvas"></div>
</body>
</html>
|
Parsing data from JSON with python
Question: I'm just starting out with Python and here is what I'm trying to do. I want to
access Bing's API to get the picture of the day's url. I can import the json
file fine but then I can't parse the data to extract the picture's url.
Here is my python script:
import urllib, json
url = "http://www.bing.com/HPImageArchive.aspx? format=js&idx=0&n=1&mkt=en-US"
response = urllib.urlopen(url)
data = json.loads(response.read())
print data
print data["images"][3]["url"]
I get this error:
Traceback (most recent call last):
File "/Users/Robin/PycharmProjects/predictit/api.py", line 9, in <module>
print data["images"][3]["url"]
IndexError: list index out of range
FYI, here is what the JSON file looks like:
[http://jsonviewer.stack.hu/#http://www.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1&mkt=en-US](http://jsonviewer.stack.hu/#http://www.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1&mkt=en-US)
Answer:
print data["images"][0]["url"]
There is only one object in the "images" array (the query string requests
`n=1`), so index 0 is the only valid index.
|
Python Guessing game - incomplete code
Question: Can someone please help me re-design this code so that the program prompts the
user to choose Easy, Medium or Hard.
Easy: maxNumber = 10
Medium: maxNumber = 50
Hard: maxNumber = 100
It should choose a random number between 0 and the maxNumber. The program will
loop, calling one function to get the user's guess and another to check it: a
function named "getGuess", which will ask the user for their guess and reprompt
if the guess is not between 0 and the maxNumber, and a function named
"checkGuess", which will check the user's guess against the answer. The
function will return "higher" if the number is higher than the guess, "lower"
if the number is lower than the guess, and "correct" if the number is equal to
the guess. Once the user has guessed the number correctly, the program will
display all their guesses and how many guesses it took them. Then the program
will ask the user if they would like to try again and redisplay the difficulty
menu.
import random
guessesTaken = 0
print('Hello! Welcome to the guessing game')
myName = input()
number = random.randint(1, 20)
print('Well, ' + myName + ', I am thinking of a number between 1 and 20.')
while guessesTaken < 6:
print('Take a guess.')
guess = input()
guess = int(guess)
guessesTaken = guessesTaken + 1
if guess < number:
print('Your guess is too low.')
if guess > number:
print('Your guess is too high.')
if guess == number:
break
if guess == number:
guessesTaken = str(guessesTaken)
print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')
if guess != number:
number = str(number)
print('Nope. The number I was thinking of was ' + number)
Answer: You could do something like this:
from random import randint
myName = input("what's your name? ")
def pre_game():
difficulty = input("Choose difficulty: type easy medium or hard: ")
main_loop(difficulty)
def main_loop(difficulty):
if difficulty == "easy":
answer = randint(0, 10)
elif difficulty == "medium":
answer = randint(0, 50)
else:
answer = randint(0, 100)
times_guessed = 0
guess = int()
while times_guessed < 6:
print('Take a guess.')
guess = input()
guess = int(guess)
times_guessed += 1
if guess < answer:
print('Your guess is too low.')
if guess > answer:
print('Your guess is too high.')
if guess == answer:
break
if guess == answer:
guessesTaken = str(times_guessed)
print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')
if guess != answer:
print('Nope. The number I was thinking of was ' + str(answer))
next = input("Play again? y/n: ")
if next == "y":
pre_game()
else:
print("Thanks for playing!")
pre_game()
|
Python 3.4 ctypes message box doesn't open with other code included
Question: Normally this code works fine when called.
import ctypes
def message_box(title, text):
ctypes.windll.user32.MessageBoxW(0, text, title, 1)
But when it's used with other code it hangs at the line where message_box is
called.
import ctypes
def message_box(title, text):
ctypes.windll.user32.MessageBoxW(0, text, title, 1)
while True:
time = input("Enter time of the reminder in the format 'HH:MM': ")
if (len(time) != 5):
print("\nInvalid answer\n")
continue
if (time[2] != ":"):
print("\nInvalid answer\n")
continue
try:
hours = int(time[0:2])
minutes = int(time[3:5])
except:
print("\nInvalid answer\n")
continue
if not (0 < hours < 23 or 0 < minutes < 59):
print("\nInvalid answer\n")
continue
break
message_box("Example_title", "Example_text")
Answer: I found how to do it.
In the fourth argument for the message box, you need to pass values combined
with pipes ('|', a bitwise OR). From my limited testing, the MB arguments
define the buttons that the user can click, apart from MB_SYSTEMMODAL, which
brings the window to the front. The ICON arguments define what noise the window
makes as it pops up, as well as a little image in the window denoting its
purpose.
MB_OK = 0x0
MB_OKCXL = 0x01
MB_YESNOCXL = 0x03
MB_YESNO = 0x04
MB_HELP = 0x4000
MB_SYSTEMMODAL = 4096
ICON_EXCLAIM = 0x30
ICON_INFO = 0x40
ICON_STOP = 0x10
def message_box(title, text):
    ctypes.windll.user32.MessageBoxW(0, text, title, MB_OK | ICON_INFO | MB_SYSTEMMODAL)
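For what it's worth, MessageBoxW also returns the ID of the button that was pressed, so (a small sketch reusing the constants above) a yes/no prompt can branch on the result:

result = ctypes.windll.user32.MessageBoxW(0, "Run the reminder again?", "Question",
                                          MB_YESNO | ICON_EXCLAIM | MB_SYSTEMMODAL)
if result == 6:  # IDYES is 6, IDNO is 7 in the Win32 API
    print("User chose Yes")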
|
I can't install matplotlib
Question:

Collecting matplotlib
  Using cached matplotlib-1.5.2.tar.gz
    Complete output from command python setup.py egg_info:
============================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [1.5.2]
python: yes [3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015,
01:38:48) [MSC v.1900 32 bit (Intel)]]
platform: yes [win32]
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [version 1.11.1]
dateutil: yes [using dateutil version 2.5.3]
pytz: yes [using pytz version 2016.6.1]
cycler: yes [cycler was not found. pip will attempt to
install it after matplotlib.]
tornado: yes [tornado was not found. It is required for the
WebAgg backend. pip/easy_install may attempt to
install it after matplotlib.]
pyparsing: yes [pyparsing was not found. It is required for
mathtext support. pip/easy_install may attempt to
install it after matplotlib.]
libagg: yes [pkg-config information for 'libagg' could not
be found. Using local copy.]
freetype: no [The C/C++ header for freetype (ft2build.h)
could not be found. You may need to install the
development package.]
png: no [The C/C++ header for png (png.h) could not be
found. You may need to install the development
package.]
qhull: yes [pkg-config information for 'qhull' could not be
found. Using local copy.]
OPTIONAL SUBPACKAGES
sample_data: yes [installing]
toolkits: yes [installing]
tests: yes [nose 0.11.1 or later is required to run the
matplotlib test suite. Please install it with pip or
your preferred tool to run the test suite / using
unittest.mock]
toolkits_tests: yes [nose 0.11.1 or later is required to run the
matplotlib test suite. Please install it with pip or
your preferred tool to run the test suite / using
unittest.mock]
OPTIONAL BACKEND EXTENSIONS
macosx: no [Mac OS-X only]
qt5agg: no [PyQt5 not found]
qt4agg: no [PySide not found; PyQt4 not found]
gtk3agg: no [Requires pygobject to be installed.]
gtk3cairo: no [Requires cairocffi or pycairo to be installed.]
gtkagg: no [Requires pygtk]
tkagg: yes [installing; run-time loading from Python Tcl /
Tk]
wxagg: no [requires wxPython]
gtk: no [Requires pygtk]
agg: yes [installing]
cairo: no [cairocffi or pycairo not found]
windowing: yes [installing]
OPTIONAL LATEX DEPENDENCIES
dvipng: no
ghostscript: no
latex: no
pdftops: no
OPTIONAL PACKAGE DATA
dlls: no [skipping due to configuration]
============================================================================
* The following required packages can not be built:
* freetype, png
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\User\AppData\Local\Temp\pycharm-packaging\matplotlib\
I have tried uninstalling and reinstalling pip, but it did not work. I do not
know what to do. :( If you could comment in Spanish it would help me a lot, my
English is not so good; thanks in advance.
Answer: I installed matplotlib with the .whl file and it worked for me. I got it from
this website <http://www.lfd.uci.edu/~gohlke/pythonlibs/#matplotlib>. Download
the right .whl package (e.g. if you have Python 3.5.1 64 bit, download
matplotlib-2.0.0b3-cp35-cp35m-win_amd64.whl). Then go to File Explorer and open
the folder where you downloaded the .whl file. In File Explorer click File
-> open command prompt -> open as administrator. Then type this

pip install (fileName.whl)

If it says that pip is not found, go in File Explorer to the place where you
installed Python. Open the Scripts folder there; you can copy and paste the
.whl file into the Scripts folder. Then type the pip install command again.
After that's done you can check if it works by going to IDLE and trying to
import it

import matplotlib

if it runs, that means you installed it correctly. That is the only way I know
to install matplotlib, so sorry if it doesn't work.
|
Pandas converts list of datetime values incorrectly
Question: I have a list of datetime values, and would like to convert the list into a
pandas.Series instance. The code boils down to the following:
from datetime import datetime
from datetime import timedelta
from dateutil import parser
day = parser.parse('2016-08-07T00:00:00Z')
dates = [day + timedelta(days=delta) for delta in range(80)]
pandas.Series(dates)
What puzzles me is that the code above returned lots of datetime instance of
1970-01-01:
0 2016-08-07 00:00:00+00:00
1 1970-01-01 00:00:00+00:00
2 1970-01-01 00:00:00+00:00
3 1970-01-01 00:00:00+00:00
4 1970-01-01 00:00:00+00:00
5 1970-01-01 00:00:00+00:00
...
However, if I convert any sublist of 60 elements or fewer, I can get back a
correct series:
from datetime import datetime
from datetime import timedelta
from dateutil import parser
day = parser.parse('2016-08-07T00:00:00Z')
dates = [day + timedelta(days=delta) for delta in range(80)]
pandas.Series(dates[0:60])
Note the last line, the input of pandas.Series becomes dates[0:60]. In fact,
it can be any dates[n:n+60], where n is between 0 and len(dates) - 60.
0 2016-08-07 00:00:00+00:00
1 2016-08-08 00:00:00+00:00
2 2016-08-09 00:00:00+00:00
3 2016-08-10 00:00:00+00:00
4 2016-08-11 00:00:00+00:00
5 2016-08-12 00:00:00+00:00
...
I also read the Pandas document on Series and datetime, and tried Pandas'
timestamp, but still go the the same result. The Pandas version is 0.18.1, and
the Python version used by the iPython notebook kernel is 2.7.3:
print pandas.__version__
import sys
print(sys.version)
The output is
0.18.1
2.7.3 (default, Jun 22 2015, 19:33:41)
[GCC 4.6.3]
Any hints on what I should look into to find out why this problem happens and
how to fix it?
Thanks,
Answer: I don't know what's wrong with your Python version, but you can and should use
vectorized (i.e. much more efficient and faster) pandas methods instead of
vanilla Python methods:
In [181]: pd.Series([pd.to_datetime('2016-08-07T00:00:00Z') + pd.Timedelta(days=delta) for delta in range(80)])
Out[181]:
0 2016-08-07
1 2016-08-08
2 2016-08-09
3 2016-08-10
4 2016-08-11
5 2016-08-12
6 2016-08-13
7 2016-08-14
8 2016-08-15
9 2016-08-16
10 2016-08-17
11 2016-08-18
12 2016-08-19
13 2016-08-20
14 2016-08-21
15 2016-08-22
16 2016-08-23
17 2016-08-24
18 2016-08-25
19 2016-08-26
20 2016-08-27
21 2016-08-28
22 2016-08-29
23 2016-08-30
24 2016-08-31
25 2016-09-01
26 2016-09-02
27 2016-09-03
28 2016-09-04
29 2016-09-05
...
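As a side note, for a regular sequence of dates a fully vectorized construction sidesteps the Python-level list entirely; a minimal sketch:

import pandas as pd

# build all 80 dates in one call instead of adding timedeltas in a loop
dates = pd.Series(pd.date_range(start='2016-08-07', periods=80, freq='D', tz='UTC'))
print(dates.head())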
|
Kombu, RabbitMQ: Ack message more than once in a consumer mixin
Question: I have stumbled upon this problem [while I was documenting
Kombu](http://stackoverflow.com/documentation/python/drafts/6079) for the new
SO documentation project.
Consider the following Kombu code of a [Consumer
Mixin](http://docs.celeryproject.org/projects/kombu/en/latest/reference/kombu.mixins.html):
from kombu import Connection, Queue
from kombu.mixins import ConsumerMixin
from kombu.exceptions import MessageStateError
import datetime

# Send a message to the 'test_queue' queue
with Connection('amqp://guest:guest@localhost:5672//') as conn:
    with conn.SimpleQueue(name='test_queue') as queue:
        queue.put('String message sent to the queue')

# Callback functions
def print_upper(body, message):
    print body.upper()
    message.ack()

def print_lower(body, message):
    print body.lower()
    message.ack()

# Attach the callback functions to a queue consumer
class Worker(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection

    def get_consumers(self, Consumer, channel):
        return [
            Consumer(queues=Queue('test_queue'), callbacks=[print_upper, print_lower]),
        ]

# Start the worker
with Connection('amqp://guest:guest@localhost:5672//') as conn:
    worker = Worker(conn)
    worker.run()
The code fails with:
kombu.exceptions.MessageStateError: Message already acknowledged with state: ACK
Because the message was ACK-ed twice, in `print_upper()` and `print_lower()`.
A simple solution that works would be ACK-ing only the last callback function,
but it breaks modularity if I want to use the same functions on other queues
or connections.
**How to ACK a queued Kombu message that is sent to more than one callback
function?**
Answer: # Solutions
## 1 - Checking `message.acknowledged`
The `message.acknowledged` flag checks whether the message is already ACK-ed:
def print_upper(body, message):
    print body.upper()
    if not message.acknowledged:
        message.ack()

def print_lower(body, message):
    print body.lower()
    if not message.acknowledged:
        message.ack()
**Pros** : Readable, short.
**Cons** : Breaks [Python EAFP
idiom](https://docs.python.org/3/glossary.html).
## 2 - Catching the exception
def print_upper(body, message):
    print body.upper()
    try:
        message.ack()
    except MessageStateError:
        pass

def print_lower(body, message):
    print body.lower()
    try:
        message.ack()
    except MessageStateError:
        pass
**Pros:** Readable, Pythonic.
**Cons:** A little long - 4 lines of boilerplate code per callback.
## 3 - ACKing the last callback
The documentation guarantees that the [callbacks are called in
order](http://docs.celeryproject.org/projects/kombu/en/latest/userguide/consumers.html#reference).
Therefore, we can simply `.ack()` only the last callback:
def print_upper(body, message):
    print body.upper()

def print_lower(body, message):
    print body.lower()
    message.ack()
**Pros:** Short, readable, no boilerplate code.
**Cons:** Not modular: the callbacks can not be used by another queue, unless
the last callback is always last. This implicit assumption can break the
caller code.
This can be solved by moving the callback functions into the `Worker` class.
We give up some modularity - these functions will not be called from outside -
but gain safety and readability.
# Summary
The difference between 1 and 2 is merely a matter of style.
Solution 3 should be picked if the order of execution matters, and whether a
message should not be ACK-ed before it went through all the callbacks
successfully.
1 or 2 should be picked if the message should always be ACK-ed, even if one or
more callbacks failed.
Note that there are other possible designs; this answer refers to callback
functions that reside outside the worker.
|
Is there a way to do this in python?
Question: I have a list of integers, for example [2,3,4], and I want to expand the list
with the outcomes of all possible multiplications of these integers. In this
case that would be 6, 8, 12, 24. How would I do this? Keep in mind that the
list I actually want to do this with has 16 items, so an algorithm written for
this small case might not scale to mine.
Answer: Here is a solution in basic Python (with only batteries included modules :-)
):
import itertools, functools

lst = [2,3,4]
comb = [itertools.combinations(lst, n) for n in range(2, len(lst) + 1)]
lst2 = []
for seq in itertools.chain(*comb):
    lst2.append(functools.reduce(lambda x, y: x * y, seq))
print(lst2)
Output:
[6, 8, 12, 24]
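As a rough size check for the 16-item case mentioned in the question (assuming all products of two or more items are wanted), the number of products equals the number of subsets of size at least 2:

# 2**16 subsets in total, minus the 16 singletons and the empty set
print(2**16 - 16 - 1)  # 65519 -- still entirely tractable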
|
Python Looping through File and downloading Images
Question: I am trying to loop through a text file with different website links to
download the images. I also want them to have a unique file name. Just a loop
counter as you see so the three images would be `1.jpg`, `2.jpg`, `3.jpg`. Yet
I am only getting the last image and the file is named `0.jpg`. I have tried a
couple of different methods but this seemed the best but still no luck. Any
suggestions on next steps?
import urllib

input_file = open('Urls1.txt','r')
x=0
for line in input_file:
    URL= line
urllib.urlretrieve(URL, str(x) + ".jpg")
x+=1
Answer: rewrite the code by indenting the last two lines thus
import urllib

input_file = open('Urls1.txt','r')
x=0
for line in input_file:
    URL= line
    urllib.urlretrieve(URL, str(x) + ".jpg")
    x+=1
Indentation is significant in Python. Without it, the last two statements are
only executed after the loop has completed. Thus you only retrieve the last
URL in the file.
|
numpy reshape confusion with negative shape values
Question: Always confused how numpy reshape handle negative shape parameter, here is an
example of code and output, could anyone explain what happens for reshape [-1,
1] here? Thanks.
Related document, using Python 2.7.
<http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html>
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
S = np.array(['box','apple','car'])
le = LabelEncoder()
S = le.fit_transform(S)
print(S)
ohe = OneHotEncoder()
one_hot = ohe.fit_transform(S.reshape(-1,1)).toarray()
print(one_hot)
[1 0 2]
[[ 0. 1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
Answer: `-1` is used to infer one missing length from the others. For example, reshaping
`(3,4,5)` to `(-1,10)` is equivalent to reshaping to `(6,10)`, because `6` is
the only length that makes sense given the other inputs.
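A quick sketch of that inference for the `reshape(-1, 1)` in the question:

import numpy as np

a = np.array([1, 0, 2])
# -1 lets numpy infer this axis: 3 elements / 1 column -> 3 rows
print(a.reshape(-1, 1).shape)  # (3, 1)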
|
Python2 - Summing "for" loop output
Question: I'm trying to make a script that takes a balances of multiple addresses from a
json file and adds them together to make a final balance.
This is the code so far -
import json
from pprint import pprint

with open('hd-wallet-addrs/addresses.json') as data_file:
    data = json.load(data_file)

for balance in data:
    print balance['balance']
**This is what's in the json file:**
[
    {
        "addr": "1ERDMDducUsmrajDpQjoKxAHCqbTMEU9R6",
        "balance": "21.00000000"
    },
    {
        "addr": "1DvmasdbaFD7Tj6diu6D8WVc1Bkbj7jYRM",
        "balance": "0.30000000"
    },
    {
        "addr": "18xkkUi7qagUuBAg572UsmDKcZTP5zxaDB",
        "balance": "0.80000000"
    },
    {
        "addr": "1MmTDCsySdsWRVbNFwXBy2APW5kGsynkaA3",
        "balance": "0.005"
    }
]
The output is like this:
21
0.3
0.8
0.005
How should I edit my code to add the numbers together?
Answer: Actually add them together...
total = 0
for balance in data:
    total += float(balance['balance'])
print total
Or using `sum`:
print sum(float(temp_balance['balance']) for temp_balance in data)
|
Python - Kivy: AttributeError: 'super' object has no attribute '__getattr__' when trying to get self.ids
Question: I wrote a code for a kind of android lock thing, whenever I try to get an
specific ClickableImage using the id it raises the following error:
AttributeError: 'super' object has no attribute '__getattr__'
I've spent hours trying to look for a solution for this problem, I looked
other people with the same issue, and people told them to change the site of
the builder, because it needed to be called first to get the ids attribute or
something like that, but everytime I move the builder, it raises the error
"class not defined". Any clues?
Here is my code:
from kivy.app import App
from kivy.config import Config
from kivy.lang import Builder
from kivy.graphics import Line
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.widget import Widget
from kivy.uix.image import Image
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.behaviors import ButtonBehavior

#Variables
cords = ()
bld = Builder.load_file('conf.kv')

class Manager(ScreenManager): pass

class Principal(Screen): pass

class ClickableImage(ButtonBehavior, Image):
    def on_press(self):
        self.source = 'button_press.png'

    def on_release(self):
        self.source = 'button.png'
        self.ids.uno.source = 'button_press.png'

class canva(Widget):
    def on_touch_down(self, touch):
        global cords
        with self.canvas:
            touch.ud['line'] = Line(points=(touch.x, touch.y), width=1.5)
            cords = (touch.x, touch.y)

    def on_touch_move(self,touch):
        global cords
        touch.ud['line'].points = cords + (touch.x, touch.y)

    def on_touch_up(self,touch):
        self.canvas.clear()

class Api(App):
    def build(self):
        return bld

if __name__ == '__main__':
    Api().run()
and here is my .kv file:
# conf to file: test.py
<Manager>:
    Principal:

<Principal>:
    GridLayout:
        size_hint_x: 0.5
        size_hint_y: 0.6
        width: self.minimum_width
        cols: 3
        ClickableImage:
            id: 'uno'
            size: 10,10
            source: 'button.png'
            allow_strech: True
        ClickableImage:
            id: 'dos'
            size: 30,30
            source: 'button.png'
            allow_strech: True
    canva:
Answer: Let's look at the output:
'super' object has no attribute '__getattr__'
In kv language `id` is set in a special way (as of 1.9.2), and its value is not
a string, because it's not an ordinary variable. You can't access it with
`<widget>.id`.

I'd say it's similar to `canvas`, which is not a widget, yet it may look like
one (which is why I was confused by your code :P). You've already noticed
`something: <some object>` is like Python's `something = <object>`, and that
(at least I think) is the whole point of `id`'s value not being a string
(which to some is odd). If `id` were a string, there would probably need to be
a check to somehow exclude it from ordinary value assignment. Maybe it's for
performance or just simplicity.
Therefore let's say `id` is a keyword for a future key. In fact, it is,
because the characters assigned to `id` will become a string key whose value
is a WeakProxy pointing to the actual object. Or better said:

id: value

becomes

<some_root_widget>.ids[str(value)] = weakref.proxy(value)

where `value` becomes an _object_ (what `print(self)` would return)
I suspect(not sure) that if you use string as the value for `id`, you'll end
up with [weakref](http://stackoverflow.com/a/36789779/5994041) /
[WeakProxy](https://github.com/kivy/kivy/blob/master/kivy/weakproxy.pyx)
pointing to a string. I use the word `point` as it reminds me pointers, don't
get confused with C pointers.
Now if you look again at the output:
* [super](http://stackoverflow.com/documentation/python/809/compatibility-between-python-3-and-python-2/9712/compatible-subclassing-with-super#t=201608251807170382088) gives you access to the class you inherit from
* `print('string id'.__getattr__)` will give you the same error, but `'super'` is substituted with the real value, because well... it doesn't have `__getattr__`
Therefore _if_ you assign a _string_ value to `id`, you'll get into this
situation:
<some_root_widget>.ids[str('value')] = weakref.proxy('value') # + little bit of magic
Although `str('value')` isn't necessarily wrong, by default you can't create
weakref.proxy for a string. I'm not sure how Kivy handles this with
WeakProxies, but if you assign a string to `id`, roughly this is what you get.
(Please correct me if I'm wrong)
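To make that concrete, here is a minimal sketch of the fix for the kv file from the question (the only change is dropping the quotes around the name):

ClickableImage:
    id: uno
    size: 10,10
    source: 'button.png'

The collected ids then live on the root widget of the kv rule (here `Principal`), so from inside `ClickableImage.on_release` you would reach them through that widget -- e.g. `self.parent.parent.ids.uno` given the GridLayout nesting above -- rather than through `self.ids`.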
|
Python 2.7.8 for line iteration error
Question: I want to iterate over all lines in a file with the following script
import sys

infile = open("test.txt")
infile.read()
for line in infile
    if line.find("This") != -1
        print line
infile.close()
Unfortunately, I am getting this error message:
File "getRes.py", line 6
for line in infile
^
SyntaxError: invalid syntax
I've been trying for an hour to figure out what is the error and I am still
not able to find it. Can you tell me what is wrong and how to fix it?
PS: I am using Python 2.7.8, I would like to use this old version instead of a
more recent version.
Answer: You need a colon after any line that introduces a block in Python.

for line in infile:
    if line.find("This") != -1:

Note also that the earlier `infile.read()` call consumes the whole file, so
even with the colons fixed the loop would see no lines; drop that line as well.
|
Unable to crawl some href in a webpage using python and beautifulsoup
Question: I am currently crawling a web page using Python 3.4 and bs4 in order to
collect the match results played by Serbia in Rio2016. So the url
[here](http://rio2016.fivb.com/en/volleyball/women/teams/srb-
serbia#wcbody_0_wcgridpadgridpad1_1_wcmenucontent_3_Schedule) contains links
to all the match results she played, for example
[this](http://rio2016.fivb.com/en/volleyball/women/7168-serbia-italy/post).
Then I found that the link is located in the html source like this:
<a href="/en/volleyball/women/7168-serbia-italy/post" ng-href="/en/volleyball/women/7168-serbia-italy/post">
<span class="score ng-binding">3 - 0</span>
</a>
But after several trials, this `href="/en/volleyball/women/7168-serbia-
italy/post"` never shows up. Then I tried to run the following code to get all
the hrefs from the url:
from bs4 import BeautifulSoup
import requests

Countryr = requests.get('http://rio2016.fivb.com/en/volleyball/women/teams/srb-serbia#wcbody_0_wcgridpadgridpad1_1_wcmenucontent_3_Schedule')
countrySoup = BeautifulSoup(Countryr.text)
for link in countrySoup.find_all('a'):
    print(link.get('href'))
Then a strange thing happened. The `href="/en/volleyball/women/7168-serbia-
italy/post"` is not included in the output at all.
I found that this href is located in one of the tab pages
(`href="#scheduldedOver"`) inside this url, and it is controlled by the
following HTML code:
<nav class="tabnav">
    <a href="#schedulded" ng-class="{selected: chosenStatus == 'Pre' }" ng-click="setStatus('Pre')" ng-href="#schedulded">Scheduled</a>
    <a href="#scheduldedLive" ng-class="{selected: chosenStatus == 'Live' }" ng-click="setStatus('Live')" ng-href="#scheduldedLive">Live</a>
    <a href="#scheduldedOver" class="selected" ng-class="{selected: chosenStatus == 'Over' }" ng-click="setStatus('Over')" ng-href="#scheduldedOver">Complete</a>
</nav>
Then how should I get the href using BeautifulSoup inside a tab page?
Answer: The data is created dynamically; if you look at the actual source you can see
[Angularjs](https://docs.angularjs.org/tutorial/step_02) templating.
You can still get all the info in json format by mimicking an ajax call, in
the source yuuuuou can also see a div like:
<div id="AngularPanel" class="main-wrapper" ng-app="fivb"
data-servicematchcenterbar="/en/api/volley/matches/341/en/user/lives"
data-serviceteammatches="/en/api/volley/matches/WOG2016/en/user/team/3017"
data-servicelabels="/en/api/labels/Volley/en"
data-servicelive="/en/api/volley/matches/WOG2016/en/user/live/">
Using the `data-serviceteammatches` href will give you all the info:
from bs4 import BeautifulSoup
import requests
from urlparse import urljoin
r = requests.get('http://rio2016.fivb.com/en/volleyball/women/teams/srb-serbia#wcbody_0_wcgridpadgridpad1_1_wcmenucontent_3_Schedule')
soup = BeautifulSoup(r.content)
base = "http://rio2016.fivb.com/"
json = requests.get(urljoin(base, soup.select_one("#AngularPanel")["data-serviceteammatches"])).json()
In json you will see output like:
{"Id": 7168, "MatchNumber": "006", "TournamentCode": "WOG2016", "TournamentName": "Women's Olympic Games 2016",
"TournamentGroupName": "", "Gender": "", "LocalDateTime": "2016-08-06T22:35:00",
"UtcDateTime": "2016-08-07T01:35:00+00:00", "CalculatedMatchDate": "2016-08-07T03:35:00+02:00",
"CalculatedMatchDateType": "user", "LocalDateTimeText": "August 06 2016",
"Pool": {"Code": "B", "Name": "Pool B", "Url": "/en/volleyball/women/results and ranking/round1#anchorB"},
"Round": 68,
"Location": {"Arena": "Maracanãzinho", "City": "Maracanãzinho", "CityUrl": "", "Country": "Brazil"},
"TeamA": {"Code": "SRB", "Name": "Serbia", "Url": "/en/volleyball/women/teams/srb-serbia",
"FlagUrl": "/~/media/flags/flag_SRB.png?h=60&w=60"},
"TeamB": {"Code": "ITA", "Name": "Italy", "Url": "/en/volleyball/women/teams/ita-italy",
"FlagUrl": "/~/media/flags/flag_ITA.png?h=60&w=60"},
"Url": "/en/volleyball/women/7168-serbia-italy/post", "TicketUrl": "", "Status": "Over", "MatchPointsA": 3,
"MatchPointsB": 0, "Sets": [{"Number": 1, "PointsA": 27, "PointsB": 25, "Hours": 0, "Minutes": "28"},
{"Number": 2, "PointsA": 25, "PointsB": 20, "Hours": 0, "Minutes": "25"},
{"Number": 3, "PointsA": 25, "PointsB": 23, "Hours": 0, "Minutes": "27"}],
"PoolRoundName": "Preliminary Round", "DayInfo": "Weekend Day",
"WeekInfo": {"Number": 31, "Start": 7, "End": 13}, "LiveStreamUri": ""},
You can parse whatever you need from those.
|
UDF (User Defined Function) python gives different answer in pig
Question: I want to write a UDF python for pig, to read lines from the file called like
#'prefix.csv'
spol.
LLC
Oy
OOD
and match the names and if finds any matches, then replaces it with white
space. here is my python code
def list_files2(name, f):
    fin = open(f, 'r')
    for line in fin:
        final = name
        extra = 'nothing'
        if (name != name.replace(line.strip(), ' ')):
            extra = line.strip()
            final = name.replace(line.strip(), ' ').strip()
            return final, extra,'insdie if'
    return final, extra, 'inside for'
Running this code in python,
>print list_files2('LLC nakisa', 'prefix.csv' )
>print list_files2('AG company', 'prefix.csv' )
returns
('nakisa', 'LLC', 'insdie if')
('AG company', 'nothing', 'inside for')
which is exactly what I need. But when I register this code as a UDF in apache
pig for this sample list:
nakisa company LLC
three Oy
AG Lans
Test OOD
pig returns wrong answer on the third line:
((nakisa company,LLC,insdie if))
((three,Oy,insdie if))
((A G L a n s,,insdie if))
((Test,OOD,insdie if))
The question is why UDF enters the if loop for the third entry which does not
have any match in the prefix.csv file.
Answer: I don't know `pig` but the way you are checking for a match is strange and
might be the cause of your problem.
If you want to check whether a string is a substring of another, `python`
provides the `find` method on strings:
if name.find(line.strip()) != -1:
    # find will return the first index of the substring or -1 if it was not found
    # ... do some stuff
additionally, your code might leave the file handle open. A way better
approach to handle file operations is by using the `with` statement. This
assures that in any case (except of interpreter crashes) the file handle will
get closed.
with open(filename, "r") as file_:
    # Everything within this block can use the opened file.
Last but not least, `python` provides a module called `csv` with a `reader`
and a `writer`, that handle the parsing of the csv file format.
Thus, you could try the following code and check if it returns the correct
thing:
import csv

def list_files2(name, filename):
    with open(filename, 'rb') as file_:
        final = name
        extra = "nothing"
        for row in csv.reader(file_):
            prefix = row[0]  # csv.reader yields a list per line; the prefix is its first field
            if name.find(prefix) != -1:
                extra = prefix
                final = name.replace(prefix, " ")
                return final, extra, "inside if"
    return final, extra, "inside for"
Because your file is named `prefix.csv` I assume you want to do prefix
substitution. In this case, you could use `startswith` instead of `find` for
the check and replace the line `final = name.replace(prefix, " ")` with `final
= " " + name[name.find(prefix):]`. This assures that only a prefix will be
substituted with the space.
I hope, this helps
|
Why cant I do blob detection on this binary image
Question: I am doing blob detection with Python 2.7 and OpenCV. What I want to do is run
the blob detection after a color detection step: I want to detect the red
circles (marks), and to avoid interference from other blobs I want to do the
color detection first and then the blob detection.

The image after color detection is this [binary
mask](http://i.stack.imgur.com/eRCe1.png).

Now I want to do blob detection on this image, but it doesn't work. This is my
code.
import cv2
import numpy as np;
# Read image
im = cv2.imread("myblob.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10; # the graylevel of images
params.maxThreshold = 200;
params.filterByColor = True
params.blobColor = 255
# Filter by Area
params.filterByArea = False
params.minArea = 10000
detector = cv2.SimpleBlobDetector(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
I am really confused by this code, because it works on this image of [white
dots](http://i.stack.imgur.com/c0AXO.jpg). I think the white dots image is
quite similar to the binary mask, so why can't I do blob detection on the
binary image? Could anyone tell me the difference or the right code?
Thanks!!
Regards, Nan
Answer: It looks that the blob detector has `filterByInertia` and `filterByConvexity`
parameters enabled by default. You can check this in your system:
import cv2
params = cv2.SimpleBlobDetector_Params()
print params.filterByColor
print params.filterByArea
print params.filterByCircularity
print params.filterByInertia
print params.filterByConvexity
So when you call `detector = cv2.SimpleBlobDetector(params)` you are actually
filtering also by inertia and convexity with the default min and max values.
If you explicitly disable those filtering criteria:
# Disable unwanted filter criteria params
params.filterByInertia = False
params.filterByConvexity = False
... and then call `detector = cv2.SimpleBlobDetector(params)` you get the
following image: [![blobing
result](http://i.stack.imgur.com/UxBZY.png)](http://i.stack.imgur.com/UxBZY.png)
The third blob in that image is caused by the white frame on the lower right
of your image. You can crop the image, if the frame is always in the same
place, or you can use the parameters to filter by circularity and remove the
undesired blob:
params.filterByCircularity = True
params.minCircularity = 0.1
And you will finally get:
[![enter image description
here](http://i.stack.imgur.com/KPx99.png)](http://i.stack.imgur.com/KPx99.png)
|
Troubles understanding finding elements in python selenium
Question: I'm trying to use the find element methods from <http://selenium-
python.readthedocs.io/locating-elements.html#locating-elements-by-class-name>;
however, they seem to work only half the time, and usually on simpler sites.
I'm wondering why that is. For example, currently I am trying to locate:
<a class="username" title="bruceleenation" href="/profile/u/3618527996"></a>
using :
content = driver.find_element_by_class_name('username')
but i'm getting nothing. The html is from
<https://gyazo.com/b2a0d389da26bbd325baaa5f915d0569> or
<body>
  <nav id="nav-sidebar" class="nav-main"></nav>
  <main id="page-content" class="" style="margin-right: 17px; margin-bottom: 0px;">
    <header class="header-logged"></header>
    <section class="page-content-wrapper"></section>
    <section class="media-slider" style="display: block;">
      <div class="close-slider"></div>
      <section id="slider" class="open" style="display: inline-block;">
        <a class="go-back" data-media-id="1322612612609855850_3618527996" title="Back to all media" href="javascript:void(0);"></a>
        <section class="media-viewer-wrapper viewer" data-count-comments="0" data-count-likes="1" data-url-delete="/aj/d" data-url-comment="/aj/c" data-url-unlike="/aj/ul" data-url-like="/aj/l" data-user-id="3618527996" data-media-id="1322612612609855850_3618527996">
          <section class="mobile-user-info"></section>
          <section class="desktop-wrapper">
            <section class="user-image-wrapper">
              <div class="image-like-click"></div>
              <a class="user-image-shadow" href="javascript:void(0);">
                <img class="user-image" alt="" src="https://scontent.cdninstagram.com/t51.2885-15/s640x640/sh0.0…493235_n.jpg?ig_cache_key=MTMyMjYxMjYxMjYwOTg1NTg1MA%3D%3D.2"></img>
              </a>
              <section class="image-actions-wrapper dropdown-anchor"></section>
            </section>
            <section class="media-viewer-info ui-front">
              <section class="user-info-wrapper text-translate-parent-wrapper ">
                <a class="user-avatar-wrapper profile" title="bruceleenation" href="/profile/u/3618527996"></a>
                <section class="user-info">
                  <a class="username" title="bruceleenation" href="/profile/u/3618527996">
                    bruceleenation
                  </a>
                  <p class="full-name"></p>
                  <div class="media-date-geo">
                    <span></span>
                  </div>
                </section>
Any suggestions on what to do? I've tried Xpath as well.
`["//a[@class='username'"]`
Answer: You should try using [`WebDriverWait`](http://selenium-
python.readthedocs.io/waits.html#explicit-waits) to wait until element present
as below :-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
content = WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, "a.username[title = 'bruceleenation']")))
|
How to count number of occurences of permutation (overlapping) in large text in python3?
Question: I have a list of words and I'd like to find out how many times each
permutation occurs in this list of words. I'd also like to count overlapping
occurrences, so count() doesn't seem to be appropriate. For example, the
permutation aba appears twice in this string:

ababa

However count() would say one.

So I designed this little script, but I am not too sure it is efficient. The
array of words comes from an external file; I just removed this part to make
it simpler.
import itertools

#Occurence counting function
def occ(string, sub):
    count = start = 0
    while True:
        start = string.find(sub, start) + 1
        if start > 0:
            count+=1
        else:
            return count

#permutation generator
abc="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
permut = [''.join(p) for p in itertools.product(abc,repeat=2)]

#Transform osd7 in array
arrayofWords=['word1',"word2","word3","word4"]
dict_output = {}  # (this initialisation is implied by the assignments below)
dict_output['total']=0

#create the array
for perm in permut:
    dict_output[perm]=0

#iterate over the arrayofWords and permutation
for word in arrayofWords:
    for perm in permut:
        dict_output[perm]=dict_output[perm]+occ(word,perm)
        dict_output['total']=dict_output['total']+occ(word,perm)
It is working, but it takes a looonnnggg time. If I change
product(abc,repeat=2) to product(abc,repeat=3) or product(abc,repeat=4)... it
will take a full week!
**The question: Is there a more efficient way?**
Answer: Very simple: count only what you need to count.
from collections import defaultdict

quadrigrams = defaultdict(lambda: 0)
for word in arrayofWords:
    for i in range(len(word) - 3):
        quadrigrams[word[i:i+4]] += 1
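For instance, the same sliding-window idea with length-3 substrings applied to the question's own example (a sketch):

from collections import defaultdict

trigrams = defaultdict(lambda: 0)
word = 'ababa'
for i in range(len(word) - 2):
    trigrams[word[i:i+3]] += 1
print(trigrams['aba'])  # 2 -- overlapping occurrences are counted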
|
Extracting infromation from multiple JSON files to single CSV file in python
Question: I have a JSON file with multiple dictionaries:
{"team1participants":
[ {
"stats": {
"item1": 3153,
"totalScore": 0,
...
}
},
{
"stats": {
"item1": 2123,
"totalScore": 5,
...
}
},
{
"stats": {
"item1": 1253,
"totalScore": 1,
...
}
}
],
"team2participants":
[ {
"stats": {
"item1": 1853,
"totalScore": 2,
...
}
},
{
"stats": {
"item1": 21523,
"totalScore": 5,
...
}
},
{
"stats": {
"item1": 12503,
"totalScore": 1,
...
}
}
]
}
In other words, the JSON has multiple keys. Each key has a list containing
statistics of individual participants.
I have many such JSON files, and I want to extract it to a single CSV file. I
can of course do this manually, but this is very tedious. I know of
DictWriter, but it seems to work only for single dictionaries. I also know
that dictionaries can be concatenated, but it will be problematic because all
dictionaries have the same keys.
How can I efficiently extract this to a CSV file?
Answer: You can make your data tidy so that each row is a unique observation.
teams = []
items = []
scores = []
for team in d:
    for item in d[team]:
        teams.append(team)
        items.append(item['stats']['item1'])
        scores.append(item['stats']['totalScore'])

# Using Pandas.
import pandas as pd
df = pd.DataFrame({'team': teams, 'item': items, 'score': scores})
>>> df
item score team
0 1853 2 team2participants
1 21523 5 team2participants
2 12503 1 team2participants
3 3153 0 team1participants
4 2123 5 team1participants
5 1253 1 team1participants
You could also use a list comprehension instead of a loop.
results = [[team, item['stats']['item1'], item['stats']['totalScore']]
           for team in d for item in d[team]]
df = pd.DataFrame(results, columns=['team', 'item', 'score'])
You can then do a pivot table, for example:
>>> df.pivot_table(values='score', index='team', columns='item', aggfunc='sum').fillna(0)
item 1253 1853 2123 3153 12503 21523
team
team1participants 1 0 5 0 0 0
team2participants 0 2 0 0 1 5
Also, now that it is a dataframe, it is easy to save it as a CSV.
df.to_csv('my_file_name.csv')
|
Implement hashid in django
Question: I've been trying to implement
[hashids](https://github.com/davidaurelio/hashids-python) in django models. I
want to derive the hashid from the model's `id`, so that when the model's
`id=3` the hash is computed as `hashid.encode(id)`. The thing is, I cannot get
the id or pk until I save the object. What I have in mind is to get the latest
object's `id` and add `1` to it, but that's not a good solution for me. Can
anyone help me figure it out?
django model is:
from hashids import Hashids

hashids = Hashids(salt='thismysalt', min_length=4)

class Article(models.Model):
    title = models.CharField(...)
    text = models.TextField(...)
    hashid = models.CharField(...)

    # i know that this is not a good solution. This is meant to be more clear understanding.
    def save(self, *args, **kwargs):
        super(Article, self).save(*args, **kwargs)
        self.hashid = hashids.encode(self.id)
        super(Article, self).save(*args, **kwargs)
Answer: I would only tell it to save if there is no ID yet, so it doesn't run the code
every time. You can do this using a TimeStampedModel inheritance, which is
actually great to use in any project.
from django.utils import timezone  # needed for timezone.now() below
from hashids import Hashids

hashids = Hashids(salt='thismysalt', min_length=4)

class TimeStampedModel(models.Model):
    """ Provides timestamps wherever it is subclassed """
    created = models.DateTimeField(editable=False)
    modified = models.DateTimeField()

    def save(self, *args, **kwargs):  # On `save()`, update timestamps
        now = timezone.now()  # one instant, so created == modified holds on the first save
        if not self.created:
            self.created = now
        self.modified = now
        return super().save(*args, **kwargs)

    class Meta:
        abstract = True

class Article(TimeStampedModel):
    title = models.CharField(...)
    text = models.TextField(...)
    hashid = models.CharField(...)

    def save(self, *args, **kwargs):
        super(Article, self).save(*args, **kwargs)
        if self.created == self.modified:  # Only run the first time the instance is created
            self.hashid = hashids.encode(self.id)
            self.save(update_fields=['hashid'])
|
allen brain institute - brain observatory example
Question: I'm trying to follow the example of [brain observatory ipython
notebook](https://alleninstitute.github.io/AllenSDK/_static/examples/nb/brain_observatory.html).
However, I became stuck loading the `nwb` file like below.
from allensdk.core.brain_observatory_cache import BrainObservatoryCache
boc = BrainObservatoryCache(manifest_file='boc/manifest.json')
data_set = boc.get_ophys_experiment_data(501940850) # problem here
So, I opened the `nwb` file by
[HDFview](https://www.hdfgroup.org/products/java/hdfview/).
None of the brain observatory `nwb` files would open except for
`502376461.nwb`.
When I tried to open the `502376461.nwb` in the ipython notebook example from
allen, it worked!! But the others (`501940850`, `503820068`...) failed like
above.
Answer: Summarizing the thread from github:
<https://github.com/AllenInstitute/AllenSDK/issues/22>
The files were partially downloaded or corrupted somehow. No exceptions were
reported during the download, so urllib must not have noticed a problem.
AllenSDK developers are investigating some sort of file consistency check
and/or a different HTTP library.
<https://github.com/AllenInstitute/AllenSDK/issues/28>
If others run into this, you can delete the bad file and re-run the download
function (`BrainObservatoryCache.get_ophys_experiment_data`). Files are
downloaded into a subdirectory of the BrainObservatoryCache [manifest
file](http://alleninstitute.github.io/AllenSDK/_static/examples/nb/brain_observatory.html#Experiment-
Containers), which defaults to the current working directory if unspecified.
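If you hit this, a minimal sketch of the delete-and-retry (the .nwb path is an assumption -- it depends on where your manifest lives):

import os
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

bad_file = 'boc/ophys_experiment_data/501940850.nwb'  # hypothetical location
if os.path.exists(bad_file):
    os.remove(bad_file)

boc = BrainObservatoryCache(manifest_file='boc/manifest.json')
data_set = boc.get_ophys_experiment_data(501940850)  # re-downloads the file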
|
List index out of range error in web scraping
Question: I am building a web-scraping program with Python 3.5 and bs4. In the code
below I try to retrieve the data from two tables in the url. I succeed with
the first table, but an error pops out for the second one. The error is
"IndexError: list index out of range" for
"D.append(cells[0].find(text=True))". I have checked the list indices for
"cells", which gives me 0, 1, 2, so there should be no problem. Could anyone
suggest any ideas on solving this issue?
import tkinter as tk

def test():
    from bs4 import BeautifulSoup
    import urllib.request
    import pandas as pd
    url_text = 'http://www.sce.hkbu.edu.hk/future-students/part-time/short-courses-regular.php?code=EGE1201'
    resp = urllib.request.urlopen(url_text)
    soup = BeautifulSoup(resp, from_encoding=resp.info().get_param('charset'))
    all_tables=soup.find_all('table')
    print (all_tables)
    right_table=soup.find('table', {'class' : 'info'})
    A=[]
    B=[]
    C=[]
    for row in right_table.findAll("tr"):
        cells = row.findAll('td')
        A.append(cells[0].find(text=True))
        B.append(cells[1].find(text=True))
        C.append(cells[2].find(text=True))
    df=pd.DataFrame()
    df[""]=A
    df["EGE1201"]=C
    print(df)
    D=[]
    E=[]
    F=[]
    right_table=soup.find('table', {'class' : 'schedule'})
    for row in right_table.findAll("tr"):
        try:
            cells = row.findAll('th')
        except:
            cells = row.findAll('td')
        D.append(cells[0].find(text=True))
        E.append(cells[1].find(text=True))
        F.append(cells[2].find(text=True))
    df1=pd.DataFrame()
    df[D[0]]=D[1]
    df[E[0]]=E[1]
    df[F[0]]=F[1]
    print(df1)

if __name__ == '__main__':
    test()
Answer: It looks like you're expecting this code to choose between 'th' and 'td', but
it will not. It will always choose 'th' and will return an empty list when
there is no 'th' in that row.
try:
    cells = row.findAll('th')
except:
    cells = row.findAll('td')
Instead, I would change the code to check if the list is empty and then
request 'td':
cells = row.findAll('th')
if not cells:
    cells = row.findAll('td')
Alternatively you can shorten the code to this:
cells = row.findAll('th') or row.findAll('td')
|
Multiprocessing does not see global variables?
Question: I have run into strange behaviour with multiprocessing.

When I try to use a global variable in a function that is called via
multiprocessing, the function does not see the global variable.
Example:
import multiprocessing

def func(useless_variable):
    print(variable)

useless_list = [1,2,3,4,5,6]

p = multiprocessing.Pool(processes=multiprocessing.cpu_count())
variable = "asd"
func(useless_list)
for x in p.imap_unordered(func, useless_list):
    pass
Output:
asd
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.4/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "pywork/asd.py", line 4, in func
print(variable)
NameError: name 'variable' is not defined
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "pywork/asd.py", line 11, in <module>
for x in p.imap_unordered(func, useless_list):
File "/usr/lib/python3.4/multiprocessing/pool.py", line 689, in next
raise value
NameError: name 'variable' is not defined
As you can see, the first time I simply call `func` it prints `asd` as
expected. However, when I call the very same function with multiprocessing it
says the variable `variable` does not exist, even though I clearly printed it
just before.

Does multiprocessing ignore global variables? How can I work around this?
Answer: When you spawn a process, the parent's context is copied at that moment; here
the `Pool` workers are created before `variable` is assigned, so they never
see it. To exchange objects between processes you need to use
[`managers`](https://docs.python.org/2/library/multiprocessing.html#managers);
check the [official
documentation](https://docs.python.org/2/library/multiprocessing.html#exchanging-
objects-between-processes), and for managing state check
[this](https://docs.python.org/2/library/multiprocessing.html#sharing-state-
between-processes).
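One simple workaround, if the goal is just to make `variable` visible inside the workers, is to hand it to each worker when the pool starts; a minimal sketch:

import multiprocessing

def init_worker(value):
    global variable
    variable = value  # set in each worker's globals at start-up

def func(useless_variable):
    print(variable)

if __name__ == '__main__':
    useless_list = [1, 2, 3, 4, 5, 6]
    p = multiprocessing.Pool(processes=multiprocessing.cpu_count(),
                             initializer=init_worker, initargs=("asd",))
    for x in p.imap_unordered(func, useless_list):
        pass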
|
django - UNIQUE CONSTRAINED FAILED error
Question: This is what my models.py looks like:
from django.db import models
from django.core.validators import RegexValidator

# Create your models here.
class Customer(models.Model):
    customer_id = models.AutoField(primary_key=True,unique=True)
    full_name = models.CharField(max_length=50)
    user_email = models.EmailField(max_length=50)
    user_pass = models.CharField(max_length=30)

    def __str__(self):
        return "%s" % self.full_name

class CustomerDetail(models.Model):
    phone_regex = RegexValidator(regex = r'^\d{10}$', message = "Invalid format! E.g. 4088385778")
    date_regex = RegexValidator(regex = r'\d{2}[-/]\d{2}[-/]\d{2}', message = "Invalid format! E.g. 05/16/91")
    address = models.CharField(max_length=100)
    date_of_birth = models.CharField(validators = [date_regex], max_length = 10, blank = True)
    company = models.CharField(max_length=30)
    home_phone = models.CharField(validators = [phone_regex], max_length = 10, blank = True)
    work_phone = models.CharField(validators = [phone_regex], max_length = 10, blank = True)
    customer_id = models.ForeignKey(Customer, on_delete=models.CASCADE)
I added `customer_id` to `Customer` after I added the same in `CustomerDetail`
as foreign key. Why do I still get this error after running migrate, even
after I added `unique=True` to customer_id?
Error:
Rendering model states... DONE
Applying newuser.0003_auto_20160823_0128...Traceback (most recent call last):
File "/home/krag91/Documents/djangodev/virtualenv /lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/krag91/Documents/djangodev/virtualenv /lib/python3.5/site-packages/django/db/backends/sqlite3/base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.IntegrityError: UNIQUE constraint failed: newuser_customer.customer_id
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute()
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/core/management/base.py", line 305, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/core/management/base.py", line 356, in execute
output = self.handle(*args, **options)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/core/management/commands/migrate.py", line 202, in handle
targets, plan, fake=fake, fake_initial=fake_initial
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/migrations/executor.py", line 97, in migrate
state = self._migrate_all_forwards(plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/migrations/executor.py", line 132, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/migrations/executor.py", line 237, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/migrations/migration.py", line 129, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/migrations/operations/fields.py", line 84, in database_forwards
field,
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/sqlite3/schema.py", line 231, in add_field
self._remake_table(model, create_fields=[field])
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/sqlite3/schema.py", line 199, in _remake_table
self.quote_name(model._meta.db_table),
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/base/schema.py", line 112, in execute
cursor.execute(sql, params)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/krag91/Documents/djangodev/virtualenv/lib/python3.5/site-packages/django/db/backends/sqlite3/base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: UNIQUE constraint failed: newuser_customer.customer_id
Answer: It seems like you already have some objects stored under the old model
definitions. By default django creates a field named `id` on every model in
the database; it can be accessed as `modelName.id`.

In your case, I guess what happened is that you have some objects in the
database with `customer.id` as the primary key. So when you changed the models
and applied migrations, the existing objects were checked and another unique
field was attempted to be added as the primary key, which violates the unique
constraint. The workaround here is to delete all the existing objects after
removing the `customer_id` field, and then recreate the field and run the
migrations.
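For example, something along these lines from the Django shell (the app label `newuser` is taken from your traceback):

python manage.py shell
>>> from newuser.models import Customer
>>> Customer.objects.all().delete()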
HTH
|
Web Scraping Python using Google Chrome extension
Question: Hi, I am a Python newbie and I am web scraping a webpage.

I am using the Google Chrome Developer Extension to identify the class of the
objects I want to scrape. However, my code returns an empty array of results,
whereas the screenshot clearly shows that those strings are in the HTML
code. [Chrome Developer](http://i.stack.imgur.com/0Xf87.png)
import requests
from bs4 import BeautifulSoup
url = 'http://www.momondo.de/flightsearch/?Search=true&TripType=2&SegNo=2&SO0=BOS&SD0=LON&SDP0=07-09-2016&SO1=LON&SD1=BOS&SDP1=12-09-2016&AD=1&TK=ECO&DO=false&NA=false'
html = requests.get(url)
soup = BeautifulSoup(html.text,"lxml")
x = soup.find_all("span", {"class":"value"})
print(x)
#pprint.pprint (soup.div)
I very much appreciate your help!

Many thanks!
Answer: Converted my comment to an answer...
Make sure the data you are expecting is actually there. Use
`print(soup.prettify())` to see what was actually returned from the request.
Depending on how the site works, the data you are looking for may only exist
in the browser after the javascript is processed. You might also want to take
a look at [selenium](http://www.seleniumhq.org/)
|
Spread function calls evenly over time in Python
Question: Let's say I have a function in Python and it's pretty fast, so I can call it in
a loop like 10000 times per second.

I'd like to call it, for example, 2000 times per second but with even
intervals between calls (not just call it 2000 times and wait till the end of
the second). How can I achieve this in Python?
Answer: You can use the built-in
[`sched`](https://docs.python.org/2/library/sched.html) module which
implements a general purpose scheduler.
import sched, time
# Initialize the scheduler
s = sched.scheduler(time.time, time.sleep)
# Define some function for the scheduler to run
def some_func():
print('ran some_func')
# Add events to the scheduler and run
delay_time = 0.01
for jj in range(20):
s.enter(delay_time*jj, 1, some_func)
s.run()
Using the `s.enter` method puts the events into the scheduler with a delay
relative to when the events are entered. It is also possible to schedule the
events to occur at a specific time with `s.enterabs`.
|
How to extract the first numbers in a string - Python
Question: How do I remove all the numbers before the first letter in a string? For
example,
myString = "32cl2"
I want it to become:
"cl2"
I need it to work for any length of number, so 2h2 should become h2, 4563nh3
becomes nh3 etc. **EDIT:** This has numbers without spaces between so it is
not the same as the other question and it is specifically the first numbers,
not all of the numbers.
Answer: If you were to solve it without regular expressions, you could have used
[`itertools.dropwhile()`](https://docs.python.org/2/library/itertools.html#itertools.dropwhile):
>>> from itertools import dropwhile
>>>
>>> ''.join(dropwhile(str.isdigit, "32cl2"))
'cl2'
>>> ''.join(dropwhile(str.isdigit, "4563nh3"))
'nh3'
* * *
Or, using [`re.sub()`](https://docs.python.org/2/library/re.html#re.sub),
replacing one or more digits at the beginning of a string:
>>> import re
>>> re.sub(r"^\d+", "", "32cl2")
'cl2'
>>> re.sub(r"^\d+", "", "4563nh3")
'nh3'
|
django: RecursionError when initialize a object
Question: So I am trying to build a simple shopping cart feature for my app that simply
adds or removes a piece of equipment and displays the current cart contents.
Here is my code for the cart (largely adopted from
<https://github.com/bmentges/django-cart>):
cart.py:
import datetime
from .models import Cart, Item, ItemManager

CART_ID = 'CART-ID'

class ItemAlreadyExists(Exception):
    pass

class ItemDoesNotExist(Exception):
    pass

class Cart:
    def __init__(self, request, *args, **kwargs):
        super(Cart, self).__init__()
        cart_id = request.session.get(CART_ID)
        if cart_id:
            try:
                cart = models.Cart.objects.get(id=cart_id, checked_out=False)
            except models.Cart.DoesNotExist:
                cart = self.new(request)
        else:
            cart = self.new(request)
        self.cart = cart

    def __iter__(self):
        for item in self.cart.item_set.all():
            yield item

    def new(self, request):
        cart = Cart(request, creation_date=datetime.datetime.now())
        cart.save()
        request.session[CART_ID] = cart.id
        return cart

    def add(self, equipment):
        try:
            item = models.Item.objects.get(
                cart=self.cart,
                equipment=equipment,
            )
        except models.Item.DoesNotExist:
            item = models.Item()
            item.cart = self.cart
            item.equipment = equipment
            item.save()
        else: #ItemAlreadyExists
            item.save()

    def remove(self, equipment):
        try:
            item = models.Item.objects.get(
                cart=self.cart,
                equipment=equipment,
            )
        except models.Item.DoesNotExist:
            raise ItemDoesNotExist
        else:
            item.delete()

    def count(self):
        result = 0
        for item in self.cart.item_set.all():
            result += 1 * item.quantity
        return result

    def clear(self):
        for item in self.cart.item_set.all():
            item.delete()
and models.py:
class Cart(models.Model):
    creation_date = models.DateTimeField(verbose_name=_('creation date'))
    checked_out = models.BooleanField(default=False, verbose_name=_('checked out'))

    class Meta:
        verbose_name = _('cart')
        verbose_name_plural = _('carts')
        ordering = ('-creation_date',)

    def __unicode__(self):
        return unicode(self.creation_date)

class ItemManager(models.Manager):
    def get(self, *args, **kwargs):
        if 'equipment' in kwargs:
            kwargs['content_type'] = ContentType.objects.get_for_model(type(kwargs['equipment']))
            kwargs['object_id'] = kwargs['equipment'].pk
            del(kwargs['equipment'])
        return super(ItemManager, self).get(*args, **kwargs)

class Item(models.Model):
    cart = models.ForeignKey(Cart, verbose_name=_('cart'))
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    objects = ItemManager()

    class Meta:
        verbose_name = _('item')
        verbose_name_plural = _('items')
        ordering = ('cart',)

    def __unicode__(self):
        return u'%d units of %s' % (self.quantity, self.equipment.__class__.__name__)

    # product
    def get_product(self):
        return self.content_type.get_object_for_this_type(pk=self.object_id)

    def set_product(self, equipment):
        self.content_type = ContentType.objects.get_for_model(type(equipment))
        self.object_id = equipment.pk
When trying to go to the view displaying the current cart, I got this problem:
> RecursionError at /calbase/cart/ maximum recursion depth exceeded in
> comparison
and basically it repeats calling the following:
cart = Cart(request, creation_date=datetime.datetime.now())
cart = self.new(request)
cart = Cart(request, creation_date=datetime.datetime.now())
cart = self.new(request)
.....
I am aware that this is because I am calling `__init__` when doing `cart =
Cart(...)`, which again goes back to `cart = self.new(request)`. I have tried
several ways to fix this, in vain. Could somebody help me?
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/calbase/cart/
Django Version: 1.10
Python Version: 3.5.2
Installed Applications:
['calbase.apps.CalbaseConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'haystack',
'whoosh']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "C:\Users\hansong.li\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\exception.py" in inner
39. response = get_response(request)
File "C:\Users\hansong.li\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "C:\Users\hansong.li\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\hansong.li\Documents\GitHub\equipCal\calbase\views.py" in get_cart
72. return render_to_response('cart.html', dict(cart=Cart(request)))
File "C:\Users\hansong.li\Documents\GitHub\equipCal\calbase\cart.py" in __init__
22. cart = self.new(request)
File "C:\Users\hansong.li\Documents\GitHub\equipCal\calbase\cart.py" in __init__
22. cart = self.new(request)
File "C:\Users\hansong.li\Documents\GitHub\equipCal\calbase\cart.py" in new
30. cart = Cart(request, creation_date=datetime.datetime.now())
Exception Type: RecursionError at /calbase/cart/
Exception Value: maximum recursion depth exceeded in comparison
Answer: You have two separate classes called Cart. Rename one of them.
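A minimal sketch of what the fix looks like (renaming the session wrapper to `CartManager` is just an illustrative choice, not a name from the original code):

    class CartManager(object):                # was also called Cart, shadowing the model
        def __init__(self, request):
            cart_id = request.session.get(CART_ID)
            ...

        def new(self, request):
            # models.Cart now unambiguously refers to the model class
            cart = models.Cart(creation_date=datetime.datetime.now())
            cart.save()
            request.session[CART_ID] = cart.id
            return cart

With the two names distinct, `__init__` can no longer accidentally re-enter itself through `new()`.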
|
Dumping JSON directly into a tarfile
Question: I have a large list of dict objects. I would like to store this list in a tar
file to exchange remotely. I have done that successfully by writing a
json.dumps() string to a tarfile object opened in 'w:gz' mode.
I am trying for a piped implementation, opening the tarfile object in 'w|gz'
mode. Here is my code so far:
from json import dump
from io import StringIO
import tarfile
with StringIO() as out_stream, tarfile.open(filename, 'w|gz', out_stream) as tar_file:
for packet in json_io_format(data):
dump(packet, out_stream)
This code is in a function 'write_data'. 'json_io_format' is a generator that
returns one dict object at a time from the dataset (so packet is a dict).
Here is my error:
Traceback (most recent call last):
File "pdml_parser.py", line 35, in write_data
dump(packet, out_stream)
File "/.../anaconda3/lib/python3.5/tarfile.py", line 2397, in __exit__
self.close()
File "/.../anaconda3/lib/python3.5/tarfile.py", line 1733, in close
self.fileobj.close()
File "/.../anaconda3/lib/python3.5/tarfile.py", line 459, in close
self.fileobj.write(self.buf)
TypeError: string argument expected, got 'bytes'
After some troubleshooting with help from the comments, the error is caused
when the 'with' statement exits, and tries to call the context manager
__exit__. I _BELIEVE_ that this in turn calls TarFile.close(). If I remove the
tarfile.open() call from the 'with' statement, and purposefully leave out the
TarFile.close(), I get this code:
with StringIO() as out_stream:
tar_file = tarfile.open(filename, 'w|gz', out_stream)
for packet in json_io_format(data):
dump(packet, out_stream)
This version of the program completes, but does not produce the output file
'filname' and yields this error:
Exception ignored in: <bound method _Stream.__del__ of <tarfile._Stream object at 0x7fca7a352b00>>
Traceback (most recent call last):
File "/.../anaconda3/lib/python3.5/tarfile.py", line 411, in __del__
self.close()
File "/.../anaconda3/lib/python3.5/tarfile.py", line 459, in close
self.fileobj.write(self.buf)
TypeError: string argument expected, got 'bytes'
I believe that is caused by the garbage collector. Something is preventing the
TarFile object from closing.
Can anyone help me figure out what is going on here?
Answer: Why do you think you can write a tarfile to a StringIO? A gzipped tar
stream is binary data, so it needs a BytesIO, not a StringIO; that is where the
"string argument expected, got 'bytes'" error comes from.
This approach doesn't error, but it's not actually how you create a tarfile in
memory from in-memory objects.
from json import dumps
from io import BytesIO
import tarfile
data = [{'foo': 'bar'},
{'cheese': None},
]
filename = 'fnord'
with BytesIO() as out_stream, tarfile.open(filename, 'w|gz', out_stream) as tar_file:
for packet in data:
out_stream.write(dumps(packet).encode())
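For reference, a minimal sketch of actually placing the JSON inside the archive as a member, via `TarInfo`/`addfile` (the `data.tar.gz`/`data.json` names are placeholders, not from the question):

    from json import dumps
    from io import BytesIO
    import tarfile

    payload = dumps(data).encode()             # serialize everything to bytes first
    with tarfile.open('data.tar.gz', 'w:gz') as tar:
        info = tarfile.TarInfo(name='data.json')
        info.size = len(payload)               # tar needs each member's size up front
        tar.addfile(info, BytesIO(payload))    # add the in-memory buffer as a file member

The size requirement is also why incrementally piping `dump()` output into a member does not fit the `'w|gz'` streaming model.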
|
Getting an argument name as a string - Python
Question: I have a function that takes in a variable as an argument. That variable
happens to contain a directory, also holding a bunch of txt files.
I was wondering if there is a way for that variable to be taken as a string?
Not what's inside the variable, just the name of that variable. Thanks a
bunch!
import glob
import pandas as pd
variable_1 = glob.glob('dir_pathway/*txt')
variable_2 = glob.glob('other_dir_pathway/*txt')
def my_function(variable_arg):
## a bunch of code to get certain things from the directory ##
variable_3 = pd.DataFrame( ## stuff taken from directory ## )
variable_3.to_csv(variable_arg + "_add_var_to_me.txt")
Answer: Although that is a very weird request, here is how you get the name of the
argument from inside the function:
import inspect
def f(value):
frame = inspect.currentframe()
print(inspect.getargvalues(frame)[0][0])
f(10)
f("Hello world")
f([1, 2, 3])
Prints
value
value
value
|
Resize window without covering widget
Question: This Python-tkinter program uses the pack geometry manager to place a text
widget and a "quit" button in the root window -- the text widget is above the
button.
import tkinter as tk
root = tk.Tk()
txt = tk.Text(root, height=5, width=25, bg='lightblue')
txt.pack(fill=tk.BOTH,expand=True)
txt.insert('1.0', 'this is a Text widget')
tk.Button(root, text='quit', command=quit).pack()
root.mainloop()
When I resize the root window by dragging its lower border up, I lose the quit
button. It vanishes under the root window's border. But I'm trying to get the
quit button to move up along with the window's border, while letting the text
widget shrink. I have played with "fill" and "expand" in pack() and "height"
in both widgets without success.
Is there any straightforward way to keep the quit button visible while
dragging the window smaller?
(While researching, I noticed that grid geometry with its "sticky" cells can
accomplish this task easily. But I'm still curious to know if there is any
simple way to do the same with pack geometry.)
Answer: Pack the quit button before packing the text widget. When the window is too
small for its contents, it starts to shrink widgets in the reverse order that
they were packed until it can't shrink anymore; then it starts clipping
widgets.
The text widget, being the largest widget in your window, can shrink a lot
before it starts getting clipped.
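A minimal rearrangement of the program from the question illustrating this (the button is packed first and anchored to the bottom edge so the layout still reads text-above-button; `root.destroy` stands in for the bare `quit`):

    import tkinter as tk

    root = tk.Tk()
    # packed first, so it is the last widget to be shrunk or clipped
    tk.Button(root, text='quit', command=root.destroy).pack(side=tk.BOTTOM)
    txt = tk.Text(root, height=5, width=25, bg='lightblue')
    txt.pack(fill=tk.BOTH, expand=True)
    txt.insert('1.0', 'this is a Text widget')
    root.mainloop()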
|
how can I get python's np.savetxt to save each iteration of a loop in a different column?
Question: This is an extremely basic code that does what I want... except with regard to
the writing of the text file.
import numpy as np
f = open("..\myfile.txt", 'w')
tst = np.random.random(5)
tst2 = tst/3
for i in range(3):
for j in range(5):
test = np.random.random(5)+j
a = np.random.normal(test, tst2)
np.savetxt(f, np.transpose(a), fmt='%10.2f')
print a
f.close()
This code will write to a .txt file a single column that is concatenated after
each iteration of the for loop.
**What I want is independent columns for each iteration.**
**How does one do that?**
note: I have used `np.c_[]` as well, and that _will_ write the columns **if**
I express each iteration within the command. ie: `np.c_[a[0],a[1]]` and so on.
The problem with this is: what if both my `i` and `j` values are very large?
It isn't reasonable to follow this method.
Answer: So a run produces:
2218:~/mypy$ python3 stack39114780.py
[ 4.13312217 4.34823388 4.92073836 4.6214074 4.07212495]
[ 4.39911371 5.15256451 4.97868452 3.97355995 4.96236119]
[ 3.82737975 4.54634489 3.99827574 4.44644041 3.54771411]
2218:~/mypy$ cat myfile.txt
4.13
4.35
4.92
4.62
4.07 # end of 1st iteration
4.40
5.15
4.98
3.97
....
Do you understand what's going on? One call to `savetxt` writes a set of
lines. With a 1d array like `a` it prints one number per row. (`transpose(a)`
doesn't do anything).
File writing is done line by line, and can't be rewound to add columns. So to
make multiple columns you need to create an array with multiple columns. Then
do one `savetxt`. In other words, collect all the data before writing.
Collect your values in a list, make an array, and write that
alist = []
for i in range(3):
for j in range(5):
test = np.random.random(5)+j
a = np.random.normal(test, tst2)
alist.append(a)
arr = np.array(alist)
print(arr)
np.savetxt('myfile.txt', arr, fmt='%10.2f')
I'm getting 15 rows of 5 columns, but you can tweak that.
2226:~/mypy$ cat myfile.txt
0.74 0.60 0.29 0.74 0.62
1.72 1.62 1.12 1.95 1.13
2.19 2.55 2.72 2.33 2.65
3.88 3.82 3.63 3.58 3.48
4.59 4.16 4.05 4.26 4.39
Since `arr` is now 2d, `np.transpose(arr)` does something meaningful - I would
get 5 rows with 15 columns.
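If one column per iteration is what you want in the file itself, transpose before saving; a sketch (this writes 5 rows by 15 columns):

    np.savetxt('myfile.txt', arr.T, fmt='%10.2f')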
==================
With
for i in range(3):
for j in range(5):
test = np.random.random(5)+j
a = np.random.normal(test, tst2)
np.savetxt(f, np.transpose(a), fmt='%10.2f')
you write `a` once for each `i` - hence the 3 rows. You're throwing away 4 of
the `j` iterations. In my variation I collect all `a`, and hence get 15 rows.
|
Find and Edit Text File
Question: I'm looking to find if there is a way of automating this process. Basically I
have 300,000 rows of data that need to be downloaded on a daily basis. There are a
couple of rows that need to be edited before they can be uploaded to SQL.
Jordan || Michael | 23 | Bulls | Chicago
Bryant | Kobe ||| 8 || LA
What I want to accomplish is to just have 4 vertical bars per row. Normally, I
would search for a keyword then edit it manually then save. These two are the
only anomalies in my data.
1. Find "Jordan", then remove the excess 1 vertical bar "|" right after it.
2. I need to find "Kobe", then remove the two excess vertical bars "|" right after it.
Correct format is below -
Jordan | Michael | 23 | Bulls | Chicago
Bryant | Kobe | 8 || LA
Not sure if this can be done in vbscript or Python. Any help would be
appreciated. Thanks!
Answer: Python or vbscript could be used but they are overkill for something this
simple. Try `sed`:
$ sed -E 's/(Jordan *)\|/\1/g; s/(Kobe *)\| *\|/\1/g' file
Jordan | Michael | 23 | Bulls | Chicago
Bryant | Kobe | 8 || LA
To save to a new file:
sed -E 's/(Jordan *)\|/\1/g; s/(Kobe *)\| *\|/\1/g' file >newfile
Or, to change the existing file in-place:
sed -Ei.bak 's/(Jordan *)\|/\1/g; s/(Kobe *)\| *\|/\1/g' file
### How it works
sed reads and processes a file line by line. In our case, we need only the
substitute command which has the form `s/old/new/g` where `old` is a regular
expression and, if it is found, it is replaced by `new`. The optional `g` at
the end of the command tells sed to perform the substitution command
'globally', meaning not just once but as many times as it appears on the line.
* `s/(Jordan *)\|/\1/g`
This tells sed to look for Jordan followed by zero or more spaces followed by
a vertical bar and remove the vertical bar.
In more detail, the parens in `(Jordan *)` tell sed to save the string Jordan
followed by zero or more spaces as a group. In the replacement side, we
reference that group as `\1`.
* `s/(Kobe *)\| *\|/\1/g`
Similarly, this tells sed to look for Kobe followed by zero or more spaces, a
vertical bar, optional spaces, and a second vertical bar, and remove both
vertical bars.
## Using python
Using the same logic as above, here is a python program:
$ cat kobe.py
import re
with open('file') as f:
for line in f:
line = re.sub(r'(Jordan *)\|', r'\1', line)
line = re.sub(r'(Kobe *)\| *\|', r'\1', line)
print(line.rstrip('\n'))
$ python kobe.py
Jordan | Michael | 23 | Bulls | Chicago
Bryant | Kobe | 8 || LA
To save that to a new file:
python kobe.py >newfile
|
Registering on PyPI test site command line ValueError
Question: Why is the below happening when I try to register my package with the test
site? It registers with the regular site just fine :/
This is what happens at my command line when I attempt to register with the
pypi test site:
PS C:\Users\Dave\Desktop\distributing\hellodmt2Distribution> python setup.py register -r https://testpypi.python.org/pypi
running register
running egg_info
writing hellodmt2.egg-info\PKG-INFO
writing top-level names to hellodmt2.egg-info\top_level.txt
writing dependency_links to hellodmt2.egg-info\dependency_links.txt
reading manifest file 'hellodmt2.egg-info\SOURCES.txt'
writing manifest file 'hellodmt2.egg-info\SOURCES.txt'
Traceback (most recent call last):
File "setup.py", line 14, in <module>
download_url = "https://github.com/dmt257/hellodmt2/archive/0.1.tar.gz",
File "C:\Python27\lib\distutils\core.py", line 151, in setup
dist.run_commands()
File "C:\Python27\lib\distutils\dist.py", line 953, in run_commands
self.run_command(cmd)
File "C:\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "C:\Python27\lib\site-packages\setuptools\command\register.py", line 10, in run
orig.register.run(self)
File "C:\Python27\lib\distutils\command\register.py", line 46, in run
self._set_config()
File "C:\Python27\lib\distutils\command\register.py", line 81, in _set_config
raise ValueError('%s not found in .pypirc' % self.repository)
ValueError: https://testpypi.python.org/pypi not found in .pypirc
PS C:\Users\Dave\Desktop\distributing\hellodmt2Distribution>
My setup.py:
#!/usr/bin/env python
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
setup(name = "hellodmt2",
description = "a source distribution test",
version = "0.1",
author = "David",
author_email = "dmt257257@gmail.com",
py_modules = ["hellodmt2"],
url = "https://github.com/dmt257/hellodmt2",
download_url = "https://github.com/dmt257/hellodmt2/archive/0.1.zip",
keywords = ["testing"],
)
This is my pypirc:
[distutils]
index-servers=
pypi
pypitest
[pypitest]
repository = https://testpypi.python.org/pypi
username = dmt257
password = mypasswordhere
[pypi]
repository = https://pypi.python.org/pypi
username = dmt257
password = mypasswordhere
Answer: The file should be called, simply, `.pypirc`, not `pypi.pypirc`. This is a Linux-
style filename commonly used for configuration files. The leading dot means
that it won't be shown in a normal directory listing.
And from what I have read, the Windows equivalent of the Linux `$HOME`
directory (`~`) is `C:\Users\<logged-in-user>`, so `C:\Users\Dave`, in your
case. Adding the location to your `PATH` won't help; this variable is only to
allow Windows to find executables.
The documentation isn't clear on where this file should go in a Windows
environment; there is an old [bug](http://bugs.python.org/issue1741) that
mentions this file not being found in Windows because of the lack of a `HOME`
environment variable. It's been 'fixed' but it's still not clear where the
file should go, other than `~/.pypirc`.
I'd try renaming your file, first. If you still have issues, try moving it to
your 'home' directory. Note that the Windows GUI won't let you rename a file
with only an extension, so you'll have to do it from a command window:
`rename pypi.pypirc .pypirc`
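Once the file is in place and named correctly, you can also point `register` at the alias defined under `index-servers` instead of spelling out the URL:

    python setup.py register -r pypitest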
|
How to plot bar graph interactively based on value of dropdown widget in bokeh python?
Question: I want to plot the bar graph based on the value of the dropdown widget.
**Code**
import pandas as pd
from bokeh.io import output_file, show
from bokeh.layouts import widgetbox
from bokeh.models.widgets import Dropdown
from bokeh.plotting import curdoc
from bokeh.charts import Bar, output_file,output_server, show #use output_notebook to visualize it in notebook
df=pd.DataFrame({'item':["item1","item2","item2","item1","item1","item2"],'value':[4,8,3,5,7,2]})
menu = [("item1", "item1"), ("item2", "item2")]
dropdown = Dropdown(label="Dropdown button", button_type="warning", menu=menu)
def function_to_call(attr, old, new):
df=df[df['item']==dropdown.value]
p = Bar(df, title="Bar Chart Example", xlabel='x', ylabel='values', width=400, height=400)
output_server()
show(p)
dropdown.on_change('value', function_to_call)
curdoc().add_root(dropdown)
**Questions**
  1. I am getting the following error "UnboundLocalError: local variable '**df**' referenced before assignment" even though df is already created.
  2. How do I plot the bar graph in the webpage below the dropdown? What is the syntax to display it after the issue in 1. is resolved?
Answer: For 1.) you are referencing it before assigning it. Look at the
`df['item']==dropdown.value` inside the square brackets. That happens _first_,
before the assignment. As to why this matters, that's how Python works: any
name assigned anywhere inside a function is treated as _local_ for the entire
function body, so the read on the right-hand side fails before the assignment
ever completes. Python is telling you it won't allow mixed global/local usage
of one name in a single function. Long story short, rename
the `df` variable inside the function:
subset = df[df['item']==dropdown.value]
p = Bar(subset, ...)
For 2.) you need to put things in a layout (e.g. a `column`). There are lots
of examples of this in the project docs and in the gallery.
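A minimal sketch of that layout step, assuming an initial figure `p` has been built once before the document is populated:

    from bokeh.layouts import column

    layout = column(dropdown, p)   # dropdown on top, bar chart underneath
    curdoc().add_root(layout)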
|
List Accumulation with Append
Question: I want to generate or return an append-accumulated list from a given list (or
iterator). For a list like `[1, 2, 3, 4]`, I would like to get `[1]`, `[1,
2]`, `[1, 2, 3]` and `[1, 2, 3, 4]`. Like so:
>>> def my_accumulate(iterable):
... grow = []
... for each in iterable:
... grow.append(each)
... yield grow
...
>>> for x in my_accumulate(some_list):
... print x # or something more useful
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
This works but is there an operation I could use with
[`itertools.accumulate`](https://docs.python.org/3/library/itertools.html#itertools.accumulate)
to facilitate this? (I'm on Python2 but the pure-python
implementation/equivalent has been provided in the docs.)
Another problem I have with `my_accumulate` is that it doesn't work well with
`list()`: it outputs the entire `some_list` for each element in the list:
>>> my_accumulate(some_list)
<generator object my_accumulate at 0x0000000002EC3A68>
>>> list(my_accumulate(some_list))
[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
* * *
Option 1:
I wrote my own appending accumulator function to use with
`itertools.accumulate` but considering the LoC and final usefulness, it seems
like a waste of effort, with `my_accumulate` being more useful, _(though may
fail in case of empty iterables and consumes more memory since`grow` keeps
growing)_:
>>> def app_acc(first, second):
... if isinstance(first, list):
... first.append(second)
... else:
... first = [first, second]
... return first
...
>>> for x in accumulate(some_list, app_acc):
... print x
...
1
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
>>> list(accumulate(some_list, app_acc)) # same problem again with list
[1, [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
_(and the first returned elem is not a list, just a single item)_
* * *
Option 2: Figured it would be easier to just do incremental slicing but using
the ugly iterate over list length method:
>>> for i in xrange(len(some_list)): # the ugly iterate over list length method
... print some_list[:i+1]
...
[1]
[1, 2]
[1, 2, 3]
[1, 2, 3, 4]
Answer: The easiest way to use `accumulate` is to make each item in the iterable a
single-item list; the default function (list concatenation via `+`) then works as expected:
from itertools import accumulate
acc = accumulate([el] for el in range(1, 5))
res = list(acc)
# [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
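As an aside, the `list()` problem with the original generator comes from yielding the same `grow` list each time; yielding a snapshot fixes it, at the cost of one copy per step (a sketch):

    def my_accumulate(iterable):
        grow = []
        for each in iterable:
            grow.append(each)
            yield list(grow)   # yield a copy, not the shared list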
|
How can I combine two FITS tables into a single table in new fits file?
Question: I have two FITS data files (file1.fits and file2.fits). The first one
(file1.fits) consists of 80,700 important rows of data and the other one has
140,000 rows. Both of them have the same header.
$ python
>>> import pyfits
>>> f1 = pyfits.open('file1.fits')
>>> f2 = pyfits.open('file2.fits')
>>> event1 = f1[1].data
>>> event2 = f2[1].data
>>> len(event1)
80700
>>> len(event2)
140000
How can I combine file1.fits and file2.fits into a new fits file (newfile.fits)
with the same header as the old ones, so that the total number of rows of
newfile.fits is 80,700 + 140,000 = 220,700?
Answer: I tried with [astropy](http://www.astropy.org/). Since both tables share the
same columns and the goal is all 220,700 rows, `vstack` (row-wise concatenation)
is the operation to use:

    from astropy.table import Table, vstack

    t1 = Table.read('file1.fits', format='fits')
    t2 = Table.read('file2.fits', format='fits')

    new = vstack([t1, t2])
    new.write('combined.fits')
It seems to work with samples from NASA.
|
Multiprocessing - Shared Array
Question: So I'm trying to implement multiprocessing in python where I wish to have a
Pool of 4-5 processes running a method in parallel. The purpose of this is to
run a total of a thousand Monte Carlo simulations (200-250 simulations per
process) in parallel instead of running 1000 sequentially. I want each process to write to a common shared array
by acquiring a lock on it as soon as its done processing the result for one
simulation, writing the result and releasing the lock. So it should be a three
step process :
1. Acquire lock
2. Write result
3. Release lock for other processes waiting to write to array.
Every time I pass the array to the processes, each process creates a copy of
that array, which I do not want, as I want a common array. Can anyone help me
with this by providing sample code?
Answer: Not tested, but something like this should work. The array and lock are shared
between processes.

    import multiprocessing
    from multiprocessing import Process

    def f(arr, lock):
        lock.acquire()
        try:
            arr[0] += 1          # modify the shared array here
        finally:
            lock.release()

    if __name__ == '__main__':
        size = 100
        arr = multiprocessing.Array('i', size)   # shared C-type array
        lock = multiprocessing.Lock()
        p = Process(target=f, args=(arr, lock))
        q = Process(target=f, args=(arr, lock))
        p.start()
        q.start()
        q.join()
        p.join()
the documentation here
<https://docs.python.org/3.5/library/multiprocessing.html> has plenty of
examples to start with
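As an aside, `multiprocessing.Array` already carries its own lock, so the separate `Lock` object is optional; a sketch:

    def f(arr):
        with arr.get_lock():   # the array's built-in lock works as a context manager
            arr[0] += 1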
|
Get rid of ['\n'] in python3 with paramiko
Question: I've just started writing a monitoring tool in python3 and I wondered if I
can get a 'clean' number output through ssh. I've made this script:
import os
import paramiko
command = 'w|grep \"load average\"|grep -v grep|awk {\'print ($10+$11+$12)/3*100\'};'
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy())
ssh.connect('10.123.222.233', username='xxx', password='xxx')
stdin, stdout, stderr = ssh.exec_command(command)
print (stdout.readlines())
ssh.close()
It works fine, except the output is:
['22.3333\n']
How can I get rid of " [' " and " \n'] " and just get the clear number value?
How do I get the result as I see it in putty?
Answer: `.readlines()` returns a _list_ of separate lines. In your case there is just
one line, you can just extract it by indexing the list, then strip of the
whitespace at the end:
firstline = stdout.readlines()[0].rstrip()
This is still a string, however. If you expected _numbers_ , you'd have to
convert the string to a `float()`. Since your command line will only ever
return **one** line, you may as well just use `.read()` and convert that
straight up (no need to strip, as `float()` is tolerant of trailing and
leading whitespace):
result = float(stdout.read())
|
Python & Google Places API | Want to get all Restaurants at a specific postion
Question: I want to get all restaurants in London by using python 3.5 and the module
"googlePlaces" with the Google Places API. I read the "googleplaces"
documentation and searched here, but I don't get it. That's my code so far:
from googleplaces import GooglePlaces, types, lang
API_KEY = 'XXXCODEXXX'
google_places = GooglePlaces(API_KEY)
query_result = google_places.nearby_search(
location='London', keyword='Restaurants',
radius=1000, types=[types.TYPE_RESTAURANT])
if query_result.has_attributions:
print(query_result.html_attributions)
for place in query_result.places:
place.get_details()
print(place.rating)
The code doesn't work. What can I do to get a list of all restaurants in
this area? Thanks
Answer: It'll be better if you drop the `keyword` parameter, `types` already searches
for restaurants.
Bear in mind the Places API (as other Google Maps APIs) is not a database, it
will not return all results that match. Actually returns only 20, and you can
get an extra 40 or so, but that's all.
If I'm reading the [GooglePlaces](https://github.com/slimkrazy/python-google-
places/blob/master/googleplaces/__init__.py) correctly, your code will send an
API request such like:
[http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&keyword=Restaurants&key=YOUR_API_KEY](http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&keyword=Restaurants&key=YOUR_API_KEY)
If you just drop the `keyword` parameter, it'll be like:
[http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&key=YOUR_API_KEY](http://maps.googleapis.com/maps/api/place/nearbysearch/json?location=51.507351,-0.127758&radius=1000&types=restaurant&key=YOUR_API_KEY)
The difference is subtle: `keyword=Restaurants` will make the API match
results that have the word "Restaurants" in their name, address, etc. Some of
these may not be restaurants (and will be discarded), while some actual
restaurants may not have the word "Restaurants" in them.
|
Python Unit-test Mocking, unable to patch `time` module that's being imported in `__init__.py`
Question: I've the following code in the **__init__.py** file
from time import sleep
from functools import wraps
def multi_try(func):
@wraps(func)
def inner(*args, **kwargs):
count = 0
while count < 5:
resp = func(*args, **kwargs)
if resp.status_code in [200, 201]:
return resp
sleep(1)
count += 1
return inner
While writing tests for the above decorator I'm not able to patch
_time.sleep_ properly.
See the test below: even though I've patched the time module, the sleep
function inside the decorator still gets called, so the test case requires 5+
seconds to finish.
def test_multi_try_time():
with patch("time.sleep") as tm:
mocker = MagicMock(name="mocker")
mocker.__name__ = "blah"
resp_mock = MagicMock()
resp_mock.status_code=400
_json = '{"test":"twist"}'
resp_mock.json=_json
mocker.return_value = resp_mock
wrapped = multi_try(mocker)
resp = wrapped("p", "q")
assert mocker.call_count == 5
mocker.assert_called_with('p', 'q')
assert resp == None
Also I tried this,
`with patch("dir.__init__.time" ) as tm:`
and
`with patch("dir.utils.time" ) as tm:`
That resulted in
`AttributeError: <module 'dir/__init__.pyc'> does not have the attribute
'time'`
Answer: All I had to do was

    with patch("dir.sleep") as tm:

Instead of,

    with patch("time.sleep") as tm:

The rule of thumb is to patch the name where it is _looked up_, not where it is
defined: `__init__.py` does `from time import sleep`, so the decorator holds its
own reference to `sleep` in the `dir` namespace, and patching `time.sleep`
never touches it.
|
Replacing multiple strings with regex in python for a file giving truncated string
Question: The following python code
import xml.etree.cElementTree as ET
import time
import fileinput
import re
ts = str(int(time.time()))
modifiedline =''
for line in fileinput.input("singleoutbound.xml"):
line = re.sub('OrderName=".*"','OrderName="'+ts+'"', line)
line = re.sub('OrderNo=".*"','OrderNo="'+ts+'"', line)
line = re.sub('ShipmentNo=".*"','ShipmentNo="'+ts+'"', line)
line = re.sub('TrackingNo=".*"','TrackingNo="'+ts+'"', line)
line = re.sub('WaveKey=".*"','WaveKey="'+ts+'"', line)
modifiedline=modifiedline+line
Returns the modifiedline string with some lines truncated wherever the first
match is found
How do I ensure it returns the complete string for each line?
Edit:
I have changed the way I am solving this problem, inspired by Tomalak's answer
import xml.etree.cElementTree as ET
import time
ts = str(int(time.time()))
doc = ET.parse('singleoutbound.xml')
for elem in doc.iterfind('//*'):
if 'OrderName' in elem.attrib:
elem.attrib['OrderName'] = ts
if 'OrderNo' in elem.attrib:
elem.attrib['OrderNo'] = ts
if 'ShipmentNo' in elem.attrib:
elem.attrib['ShipmentNo'] = ts
if 'TrackingNo' in elem.attrib:
elem.attrib['TrackingNo'] = ts
if 'WaveKey' in elem.attrib:
elem.attrib['WaveKey'] = ts
doc.write('singleoutbound_2.xml')
Answer: Here is how to use ElementTree to make modifications to an XML file without
accidentally breaking it:
import xml.etree.cElementTree as ET
import time
ts = str(int(time.time()))
doc = ET.parse('singleoutbound.xml')
for elem in doc.iterfind('.//*[@OrderName]'):
elem.attrib['OrderName'] = ts
# and so on
doc.write('singleoutbound_2.xml')
Things to understand:
* XML represents a tree-shaped data structure that consists of elements, attributes and values, among other things. Treating it as line-based plain text fails to recognize this fact.
* There is a language to select items from that tree of data, called XPath. It's powerful and not difficult to learn. Learn it. I've used `//*[@OrderName]` above to find all elements that have an `OrderName` attribute.
* Trying to modify the document tree with improper tools like string replace and regular expressions will lead to more complex and hard-to-maintain code. You will encounter run-time errors for completely valid input that your regex has no special case for, character encoding issues and silent errors that are only caught when someone looks at your program's output. In other words: It's the wrong thing to do, so don't do it.
* The above code is actually simpler and much easier to reason about and extend than your code.
|
Problems when opening xlsx with openpyxl
Question: I have an xlsx file and I tried to load this file using openpyxl
from openpyxl import load_workbook
wb = load_workbook('/home/file_path/file.xlsx')
But I get this error:
"wb = load_workbook(new_file)"): expected string or buffer
new_file is a variable with the path of the xlsx file I am trying to open. Does
anybody know why this happens or what I should change to read the file?
Thanks!
**Update** More details about the error
/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/worksheet.py:322: UserWarning: Unknown extension is not supported and will be removed
warn(msg)
/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/worksheet.py:322: UserWarning: Conditional Formatting extension is not supported and will be removed
warn(msg)
Traceback (most recent call last):
File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_comm.py", line 1071, in doIt
result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec)
File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_vars.py", line 344, in evaluateExpression
Exec(expression, updated_globals, frame.f_locals)
File "/vagrant/vagrant_conf/pycharm-debug.egg/pydevd_exec.py", line 3, in Exec
exec exp in global_vars, local_vars
File "<string>", line 1, in <module>
File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/reader/excel.py", line 252, in load_workbook
wb._named_ranges = list(read_named_ranges(archive.read(ARC_WORKBOOK), wb))
File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/workbook/names/named_range.py", line 130, in read_named_ranges
if external_range(node_text):
File "/home/vagrant/scrapy/local/lib/python2.7/site-packages/openpyxl/workbook/names/named_range.py", line 112, in external_range
m = EXTERNAL_RE.match(range_string)
TypeError: expected string or buffer
Answer: The syntax is:
wb = load_workbook(filename='file.xlsx', read_only=True)
The `read_only` keyword is not required.
|
Python Decryption using private key
Question: I have an encrypted string. The encryption is done using Java code. I decrypt
the encrypted string using the following Java code:
InputStream fileInputStream = getClass().getResourceAsStream(
"/private.txt");
byte[] bytes = IOUtils.toByteArray(fileInputStream);
private String decrypt(String inputString, byte[] keyBytes) {
String resultStr = null;
PrivateKey privateKey = null;
try {
KeyFactory keyFactory = KeyFactory.getInstance("RSA");
EncodedKeySpec privateKeySpec = new PKCS8EncodedKeySpec(keyBytes);
privateKey = keyFactory.generatePrivate(privateKeySpec);
} catch (Exception e) {
System.out.println("Exception privateKey::::::::::::::::: "
+ e.getMessage());
e.printStackTrace();
}
byte[] decodedBytes = null;
try {
Cipher c = Cipher.getInstance("RSA/ECB/NoPadding");
c.init(Cipher.DECRYPT_MODE, privateKey);
decodedBytes = c.doFinal(Base64.decodeBase64(inputString));
} catch (Exception e) {
System.out
.println("Exception while using the cypher::::::::::::::::: "
+ e.getMessage());
e.printStackTrace();
}
if (decodedBytes != null) {
resultStr = new String(decodedBytes);
resultStr = resultStr.split("MNSadm")[0];
// System.out.println("resultStr:::" + resultStr + ":::::");
// resultStr = resultStr.replace(salt, "");
}
return resultStr;
}
Now I have to use Python to decrypt the encrypted string. I have the private
key. When I use the cryptography package with the following code
key = load_pem_private_key(keydata, password=None, backend=default_backend())
It throws `ValueError: Could not unserialize key data.`
Can anyone help what I am missing here?
Answer: I figured out the solution:
    from Crypto.PublicKey import RSA
    from base64 import b64decode

    rsa_key = RSA.importKey(open('private.txt', "rb").read())

    raw_cipher_data = b64decode(<your cipher data>)
    # raw RSA decryption, matching the Java side's "RSA/ECB/NoPadding"
    phn = rsa_key.decrypt(raw_cipher_data)
This is the most basic form of the code. What I learned is that first you have to
get the RSA key (the private key). For me `RSA.importKey` took care of
everything. Really simple.
|
Share a variable between two files?
Question: What is a mechanism in Python by which I can do the following:
file1.py:
def getStatus():
print status
file2.py:
status = 5
getStatus() # 5
status = 1
getStatus() # 1
The function and the variable are in two different files and I'd like to avoid
the use of a global.
Answer: You can share variables without making them global by putting them in a
module. Anybody who imports the module gets the _same_ module object, so its
contents are shared; changes made at one location show up in all the others.
notglobal.py:
status = 0
get.py:
import notglobal
def getStatus():
return notglobal.status
Testing:
>>> import notglobal
>>> import get
>>> notglobal.status = 5
>>> get.getStatus()
5
>>> notglobal.status = 1
>>> get.getStatus()
1
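One caveat, as a sketch: `from notglobal import status` copies the binding at import time, so later reassignments in the module are not visible through it:

    import notglobal
    from notglobal import status   # snapshots the current value (0)
    notglobal.status = 7
    print(status)                  # still 0
    print(notglobal.status)        # 7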
|
How to pass 'self' parameter of one class to another class
Question: I am trying to incorporate matplotlib into tkinter by having multiple frames.
I need to pass the entry inputs from `StartPage` frame to update the plot in
`GraphPage` once the button in `StartPage` is clicked. Therefore, I'm trying
to bind the update function of `GraphPage` to the button in `StartPage`, but
the update function requires a `self` parameter which I can't get. Here's the
setup of my code right now:
import matplotlib
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
import tkinter as tk
from tkinter import ttk
class PeakFitting(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.pack(side="top", fill="both", expand = True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (StartPage, GraphPage):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
def get_page(self, classname):
'''Returns an instance of a page given it's class name as a string'''
for page in self.frames.values():
if str(page.__class__.__name__) == classname:
return page
return None
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
# Setting up frame and widget
self.entry1 = tk.Entry(self)
self.entry1.grid(row=4, column=1)
button3 = ttk.Button(self, text="Graph Page",
command=self.gpUpdate())
button3.grid(row=7, columnspan=2)
def gpUpdate(self):
graphPage = self.controller.get_page("GraphPage")
GraphPage.update(graphPage)
self.controller.show_frame("GraphPage")
class GraphPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
fig = Figure(figsize=(5, 5), dpi=100)
self.a = fig.add_subplot()
canvas = FigureCanvasTkAgg(fig, self)
canvas.show()
canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
toolbar = NavigationToolbar2TkAgg(canvas, self)
toolbar.update()
canvas._tkcanvas.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
startpage = self.controller.get_page("StartPage")
def update(self):
startpage = self.controller.get_page("StartPage")
self.iso = startpage.entry1.get()
##Calculation code for analysis not needed for question
app = PeakFitting()
app.mainloop()
For the `gpUpdate` function in `StartPage`, I tried to pass `graphPage` as `self`,
but got an error that it's a `NoneType`. I also tried
`GraphPage.update(GraphPage)` but get a 'type object 'GraphPage' has no
attribute 'controller'' error.
I'm very sorry if my explanation wasn't clear, but I am very new to Python and
have been struggling for weeks now to pass the entries to the GraphPage class after
the button is clicked but still can't do it... Can anyone please help me with
this problem?
Thank you so much!!!
EDIT: I guess my main problem is how to call a function of one class in
another class, because I don't know what the `self` parameter should be :(
EDIT2: Changes to code thanks to suggestion:
def gpUpdate(self):
self.parent = PeakFitting()
self.grpage = GraphPage(self.parent.container, self.parent)
self.grpage.update()
self.controller.show_frame(GraphPage)
However, when I input something in the entry and hit the button, the
`self.iso` field still remains empty...
Answer: This is the problem:
GraphPage.update(graphPage)
You call the method on the class itself. You need to create an instance of
that class and then call the `update` method with a parent and a controller,
for example:
self.grpage = GraphPage(parent, controller)
And then:
self.grpage.update()
Aside from this, you have another problem in `button3`. Change this:
command=self.gpUpdate()
to this:
command=self.gpUpdate
without parenthesis. This is a function that you pass to the button, and will
be invoked upon button click.
|
Pythonic way to add values to a set within a dictionary
Question: Let's say I have a dictionary of sets:
d = {"foo":{1,2,3},
"bar":{3,4,5}}
Now let's say I want to add the value `7` to the set found within the key
`foo`. This would be easy:
d["foo"].add(7)
but what if we were unsure of the key already existing? It doesn't feel very
pythonic to check beforehand:
if "baz" in dict:
d["baz"].add(7)
else:
d["baz"] = {7}
I tried to be clever and do something like
d["baz"] = set(d["baz"]).add(7)
but then you just get a `KeyError` trying to access a bad key in the `set`
constructor.
Am I missing something, or do I need to just bite the bullet and look before I
leap? I would understand if that were the case; it would just be neat if there
were a simple way to say "Add this value to the set found at this location, or
if there isn't a set at that location, make one, and then put it in."
Answer: Use `defaultdict`
>>> from collections import defaultdict
>>> d = defaultdict(set)
>>> d
defaultdict(<class 'set'>, {})
>>> d['foo'].add(1)
>>> d['foo'].add(2)
>>> d
defaultdict(<class 'set'>, {'foo': {1, 2}})
>>> d['bar'].add(3)
>>> d['bar'].add(4)
>>> d
defaultdict(<class 'set'>, {'foo': {1, 2}, 'bar': {3, 4}})
>>>
Also, if you must use plain dict, you can use the `.setdefault` method:
>>> d2 = {}
>>> d2.setdefault('foo',set()).add(1)
>>> d2.setdefault('foo',set()).add(2)
>>> d2
{'foo': {1, 2}}
>>> d2.setdefault('bar',set()).add(3)
>>> d2.setdefault('bar',set()).add(4)
>>> d2
{'foo': {1, 2}, 'bar': {3, 4}}
>>>
## Edit to add time comparisons
You should note that using `defaultdict` is faster:
>>> setup = "gen = ((letter,k) for letter in 'abcdefghijklmnopqrstuvwxyx' for k in range(100)); d = {}"
>>> s = """for l,n in gen:
... d.setdefault(l,set()).add(n)"""
>>> setup2 = "from collections import defaultdict; gen = ((letter,k) for letter in 'abcdefghijklmnopqrstuvwxyx' for k in range(100)); d = defaultdict(set)"
>>> s2 = """for l,n in gen:
... d[l].add(n)"""
>>>
>>> import timeit
>>> timeit.timeit(stmt=s, setup=setup, number=10000)
0.005325066005752888
>>> timeit.timeit(stmt=s2, setup=setup2, number=10000)
0.0014927469965186901
|
Selenium webdriver python find element by xpath - Could not find element
Question: I'm trying to write a script with selenium webdriver in python. When I try to do a
find_element_by_xpath("//*[@id='posted_1']/div[3]")
it says
> NoSuchElementException.
Can someone please help me here?
Regards Bala
Answer: that exception, unsurprisingly, means that that element wasn't available on
the DOM. There are a couple of options here:
driver.implicitly_wait(10)
will tell the driver to keep retrying for up to 10 seconds (or any amount of
time) when an element is not found/not clickable etc. Sometimes
elements don't load right away, so an implicit wait fixes those types of
problems.
The other option here is to do an explicit wait. This will wait until the
element appears, and until the existence of that element is confirmed, the
script will not move on to the next line:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(ff, 10).until(EC.presence_of_element_located((By.XPATH, "//*[@id='posted_1']/div[3]")))
In my experience, an implicit wait is usually fine, but imprecise.
|
How do I set the DJANGO_SETTINGS_MODULE env variable?
Question: I'm trying to fix a bug I'm seeing in a django application where it isn't
sending mail. Please note that the application works great, it's only the mail
function that is failing. I've tried to collect error logs, but I can't come
up with any errors related to sending the mail. So, I made an example to try
and force the errors. Here is the example:
from django.core.mail import send_mail
send_mail('hi', 'hi', 'test@test.com', ['myname@yeah.com'], fail_silently=False)
When I run the above code, I get the following error:
Traceback (most recent call last):
File "dmail.py", line 14, in <module>
send_mail('hi', 'hi', 'test@test.com', ['myname@yeah.com'], fail_silently=False)
File "/data/servers/calendar_1/lib/python2.7/site-packages/django/core/mail/__init__.py", line 59, in send_mail
fail_silently=fail_silently)
File "/data/servers/calendar_1/lib/python2.7/site-packages/django/core/mail/__init__.py", line 29, in get_connection
path = backend or settings.EMAIL_BACKEND
File "/data/servers/calendar_1/lib/python2.7/site-packages/django/utils/functional.py", line 184, in inner
self._setup()
File "/data/servers/calendar_1/lib/python2.7/site-packages/django/conf/__init__.py", line 39, in _setup
raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE)
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I managed to fix my test example by changing the code to this:
from django.core.mail import send_mail
from django.conf import settings
settings.configure(TEMPLATE_DIRS=('/path_to_project',), DEBUG=False, TEMPLATE_DEBUG=False)
send_mail('hi', 'hi', 'test@test.com', ['myname@yeah.com'], fail_silently=False)
However, when I try to add those settings to send_mail.py, I'm still not
getting any mail from my actual application. Can someone explain to me,
clearly, how I set up the DJANGO_SETTINGS_MODULE so that both my example and my
application can see it? Failing that, can someone tell me how to set up
meaningful logging in django so I actually see mail-related errors in the
logs? Any tips or guidance would be greatly appreciated.
Answer: Do not set it from outside the application. Make an entry for the
`DJANGO_SETTINGS_MODULE` variable within your `wsgi` file. Every time your
server is started, this variable will be set automatically.
For example:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = '<project_name>.settings'
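For a standalone script like the `send_mail` example above, a minimal sketch is to set the variable before importing anything that reads settings (`<project_name>` is a placeholder for your actual project package):

    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', '<project_name>.settings')

    import django
    django.setup()   # needed on Django >= 1.7 before using the ORM or mail outside a request

    from django.core.mail import send_mail
    send_mail('hi', 'hi', 'test@test.com', ['myname@yeah.com'], fail_silently=False)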
|
Why python UDF returns unexpected datetime objects where as the same function applied over RDD gives proper datetime object
Question: I am not sure if I am doing anything wrong, so pardon me if this looks naive.
My problem is reproducible with the following data:
from pyspark.sql import Row
df = sc.parallelize([Row(C3=u'Dec 1 2013 12:00AM'),
Row(C3=u'Dec 1 2013 12:00AM'),
Row(C3=u'Dec 5 2013 12:00AM')]).toDF()
I have created a function to parse this date strings as datetime objects to
process further
from datetime import datetime
def date_convert(date_str):
date_format = '%b %d %Y %I:%M%p'
try:
dt=datetime.strptime(date_str,date_format)
except ValueError,v:
if len(v.args) > 0 and v.args[0].startswith('unconverted data remains: '):
dt = dt[:-(len(v.args[0])-26)]
dt=datetime.strptime(dt,date_format)
else:
raise v
return dt
Now if I make a UDF out of this and apply it to my dataframe, I get unexpected
data
from pyspark.sql.functions import udf
date_convert_udf = udf(date_convert)
df.select(date_convert_udf(df.C3).alias("datetime")).take(2)
The result is like below
Out[40]:
[Row(datetime=u'java.util.GregorianCalendar[time=?,areFieldsSet=false,areAllFieldsSet=false,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Etc/UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=?,YEAR=2013,MONTH=11,WEEK_OF_YEAR=?,WEEK_OF_MONTH=?,DAY_OF_MONTH=1,DAY_OF_YEAR=?,DAY_OF_WEEK=?,DAY_OF_WEEK_IN_MONTH=?,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=0,MILLISECOND=0,ZONE_OFFSET=?,DST_OFFSET=?]'),
Row(datetime=u'java.util.GregorianCalendar[time=?,areFieldsSet=false,areAllFieldsSet=false,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Etc/UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=?,YEAR=2013,MONTH=11,WEEK_OF_YEAR=?,WEEK_OF_MONTH=?,DAY_OF_MONTH=1,DAY_OF_YEAR=?,DAY_OF_WEEK=?,DAY_OF_WEEK_IN_MONTH=?,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=0,MILLISECOND=0,ZONE_OFFSET=?,DST_OFFSET=?]')]
but if I use it after making the dataframe a RDD then it returns a pythond
datetime object
df.rdd.map(lambda row:date_convert(row.C3)).collect()
(1) Spark Jobs
Out[42]:
[datetime.datetime(2013, 12, 1, 0, 0),
datetime.datetime(2013, 12, 1, 0, 0),
datetime.datetime(2013, 12, 5, 0, 0)]
I want to achieve the same thing with the dataframe. How can I do that, and
what is wrong with this approach (UDF over dataframe)?
Answer: It's because you have to declare the return data type of your `UDF`. Without
an explicit return type, `udf` defaults to `StringType`, which is why the
`datetime` objects came back stringified as `java.util.GregorianCalendar[...]`.
Apparently you are trying to obtain `timestamps`; if this is the case you have
to write something like this.
from pyspark.sql.types import TimestampType
date_convert_udf = udf(date_convert, TimestampType())
|
python load zip with modules from memory
Question: So let's say I have a zip file with modules/classes inside. I then read this
file - read binary ("rb") to store it into memory. How would I take this zip
file in memory and load a module from it? Would I need to write an import hook
for this? One cannot simply run exec on binary zip data from memory, can they?
I know its simple just to load a module from a plain zip file on disk as this
is done automatically by python2.7. I , however; want to know if this is
possible via memory.
Update: A lot of people are mentioning importing the zip from disk. The
issue is specifically that I want to import the zip from memory, NOT disk. I
obviously will read it from disk and into memory byte by byte. I want to take
all these bytes from within memory that make up the zip file and use it as a
regular import.
Answer: **EDIT:** Fixed the ZipImporter to work for everything (I think)
Test Data:
mkdir mypkg
vim mypkg/__init__.py
vim mypkg/test_submodule.py
`__init__.py` Contents:
def test():
print("Test")
`test_submodule.py` Contents:
def test_submodule_func():
print("This is a function")
Create Test Zip (on mac):
zip -r mypkg.zip mypkg
rm -r mypkg # don't want to accidentally load the directory
Special zip import in `inmem_zip_importer.py`:
import os
import imp
import zipfile
class ZipImporter(object):
def __init__(self, zip_file):
self.z = zip_file
self.zfile = zipfile.ZipFile(self.z)
self._paths = [x.filename for x in self.zfile.filelist]
def _mod_to_paths(self, fullname):
# get the python module name
py_filename = fullname.replace(".", os.sep) + ".py"
# get the filename if it is a package/subpackage
py_package = fullname.replace(".", os.sep, fullname.count(".") - 1) + "/__init__.py"
if py_filename in self._paths:
return py_filename
elif py_package in self._paths:
return py_package
else:
return None
def find_module(self, fullname, path):
if self._mod_to_paths(fullname) is not None:
return self
return None
def load_module(self, fullname):
filename = self._mod_to_paths(fullname)
if not filename in self._paths:
raise ImportError(fullname)
new_module = imp.new_module(fullname)
exec self.zfile.open(filename, 'r').read() in new_module.__dict__
new_module.__file__ = filename
new_module.__loader__ = self
if filename.endswith("__init__.py"):
new_module.__path__ = []
new_module.__package__ = fullname
else:
new_module.__package__ = fullname.rpartition('.')[0]
return new_module
Use:
In [1]: from inmem_zip_importer import ZipImporter
In [2]: sys.meta_path.append(ZipImporter(open("mypkg.zip", "rb")))
In [3]: from mypkg import test
In [4]: test()
Test function
In [5]: from mypkg.test_submodule import test_submodule_func
In [6]: test_submodule_func()
This is a function
* * *
(from efel) one more thing... :
To read directly from memory one would need to do this :
    import io
    import zipfile

    f = open("mypkg.zip", "rb")
    # read binary data; we are now in memory
    data = f.read()
    f.close()  # important! close the file: we are now in memory
    # at this point we can essentially delete the actual on-disk zip file
    # convert the in-memory bytes to a file-like object
    zipbytes = io.BytesIO(data)
    zipfile.ZipFile(zipbytes)  # this file-like object is what ZipImporter wraps
|
Python3.4 error - Cannot enable executable stack as shared object requires: Invalid argument
Question: I've been trying to install [OpenCV](http://opencv.org/) in a Bash on Windows
(Windows Subsystem for Linux, wsl) environment and it's been proving very
difficult.
I think I'm getting very close, but upon entering python, `import cv2` gives
the following error:
ImportError: libopencv_core.so.3.1: cannot enable executable stack as shared object requires: Invalid argument
How do I enable the library to execute on the stack?
* * *
My OpenCV `*opencv*.so*` library files are located in `/usr/local/lib/`. In a
normal Linux environment, I would grant these libraries the ability to execute
on the stack using
execstack -c /usr/local/lib/*opencv*.so*
However, even though I can successfully download the `execstack` package, it
isn't a recognized command I can run to allow execution on the stack. I
suspect this has something to do with Data Execution Prevention, Window's
version of Exec-Shield to prevent stack smashing attacks.
But maybe I've just been too close to the problem to figure out what's wrong.
Why can't I import this python package? I'm using Python v3.4 and OpenCV
compiled from the [latest source code](https://github.com/opencv/opencv)
(v.3.1).
Answer: There are lots of things that simply don't work at the moment, because there
are either unimplemented syscalls (WSL only has partial coverage, only about
70% of syscalls are implemented, some of them only partially), or missing
socket modes and options (WSL does not yet support Unix datagram sockets,
although it should be available in the next insider build).
If you go to the github (BashOnWindows) and post an strace or search for your
issue and find a copy of it, that's the best way to get an answer. The
Microsoft team working on this project wants lots and lots of feedback and
bugtesting.
To be clear, I am saying that you are 100% running into something that isn't
implemented yet. However, there might be a way, if you look at the source code
for your .so file, to disable the part of the code that uses that syscall
(since Python is crossplatform and not all Linux syscalls are supported across
all *nix operating systems).
|
Python Pandas with sqlalchemy | Bulk Insert Error
Question: I was trying to insert millions of records from a **CSV** file into a **MySQL** database, using **Python** | **Pandas** with **sqlalchemy**. Sometimes this insertion is interrupted before completion, or not even a single row gets inserted.
My Code is :
import pandas as pd
from sqlalchemy import create_engine
df = pd.read_csv('/home/shankar/LAB/Python/Rough/*******.csv')
# 2nd argument replaces where conditions is False
df = df.where(pd.notnull(df), None)
df.head()
conn_str = "mysql+pymysql://root:MY_PASS@localhost/MY_DB?charset=utf8&use_unicode=0"
engine = create_engine(conn_str)
conn = engine.raw_connection()
df.to_sql(name='table_name', con=conn,
if_exists='append')
conn.close()
Error :
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in execute(self, *args, **kwargs)
1563 else:
-> 1564 cur.execute(*args)
1565 return cur
/home/shankar/.local/lib/python3.5/site-packages/pymysql/cursors.py in execute(self, query, args)
164
--> 165 query = self.mogrify(query, args)
166
/home/shankar/.local/lib/python3.5/site-packages/pymysql/cursors.py in mogrify(self, query, args)
143 if args is not None:
--> 144 query = query % self._escape_args(args, conn)
145
TypeError: not all arguments converted during string formatting
During handling of the above exception, another exception occurred:
DatabaseError Traceback (most recent call last)
<ipython-input-6-bb91db9eb97e> in <module>()
11 df.to_sql(name='company', con=conn,
12 if_exists='append',
---> 13 chunksize=10000)
14 conn.close()
/home/shankar/.local/lib/python3.5/site-packages/pandas/core/generic.py in to_sql(self, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
1163 sql.to_sql(self, name, con, flavor=flavor, schema=schema,
1164 if_exists=if_exists, index=index, index_label=index_label,
-> 1165 chunksize=chunksize, dtype=dtype)
1166
1167 def to_pickle(self, path):
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in to_sql(frame, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
569 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
570 index_label=index_label, schema=schema,
--> 571 chunksize=chunksize, dtype=dtype)
572
573
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, dtype)
1659 if_exists=if_exists, index_label=index_label,
1660 dtype=dtype)
-> 1661 table.create()
1662 table.insert(chunksize)
1663
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in create(self)
688
689 def create(self):
--> 690 if self.exists():
691 if self.if_exists == 'fail':
692 raise ValueError("Table '%s' already exists." % self.name)
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in exists(self)
676
677 def exists(self):
--> 678 return self.pd_sql.has_table(self.name, self.schema)
679
680 def sql_schema(self):
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in has_table(self, name, schema)
1674 query = flavor_map.get(self.flavor)
1675
-> 1676 return len(self.execute(query, [name, ]).fetchall()) > 0
1677
1678 def get_table(self, table_name, schema=None):
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in execute(self, *args, **kwargs)
1574 ex = DatabaseError(
1575 "Execution failed on sql '%s': %s" % (args[0], exc))
-> 1576 raise_with_traceback(ex)
1577
1578 @staticmethod
/home/shankar/.local/lib/python3.5/site-packages/pandas/compat/__init__.py in raise_with_traceback(exc, traceback)
331 if traceback == Ellipsis:
332 _, _, traceback = sys.exc_info()
--> 333 raise exc.with_traceback(traceback)
334 else:
335 # this version of raise is a syntax error in Python 3
/home/shankar/.local/lib/python3.5/site-packages/pandas/io/sql.py in execute(self, *args, **kwargs)
1562 cur.execute(*args, **kwargs)
1563 else:
-> 1564 cur.execute(*args)
1565 return cur
1566 except Exception as exc:
/home/shankar/.local/lib/python3.5/site-packages/pymysql/cursors.py in execute(self, query, args)
163 pass
164
--> 165 query = self.mogrify(query, args)
166
167 result = self._query(query)
/home/shankar/.local/lib/python3.5/site-packages/pymysql/cursors.py in mogrify(self, query, args)
142
143 if args is not None:
--> 144 query = query % self._escape_args(args, conn)
145
146 return query
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during string formatting
This error occurs only with certain CSV files. Kindly point out the bug!
Thanks in advance.
Answer: The traceback shows the real problem: `to_sql` was handed the raw DBAPI
connection (`engine.raw_connection()`), so pandas fell back to its legacy
sqlite flavor and issued `SELECT name FROM sqlite_master WHERE type='table' AND
name=?;`. pymysql uses the `%s` paramstyle, not `?`, which is what raises "not
all arguments converted during string formatting". Pass the SQLAlchemy engine
itself, `con=engine`, instead of the raw connection.
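A sketch of the corrected call, reusing the names from the question (the `chunksize` seen in the traceback is kept; it is a good idea for millions of rows):

    engine = create_engine(conn_str)
    df.to_sql(name='table_name', con=engine, if_exists='append', chunksize=10000)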
|
output on a new line in python curses
Question: I am using the curses module in Python to display output in real time by reading a file. The string messages are output to the console using the addstr() function, but I am not able to print to a new line wherever I need.
sample code:
import json
import curses
w=curses.initscr()
try:
while True:
with open('/tmp/install-report.json') as json_data:
beta = json.load(json_data)
w.erase()
w.addstr("\nStatus Report for Install process\n=========\n\n")
for a1, b1 in beta.iteritems():
w.addstr("{0} : {1}\n".format(a1, b1))
w.refresh()
finally:
curses.endwin()
The above is not really outputting the strings to a new line (notice the \n in
addstr()) with each iteration. On the contrary, the script fails off with
error if I resize the terminal window.
w.addstr("{0} ==> {1}\n".format(a1, b1))
_curses.error: addstr() returned ERR
Answer: There's not enough program to offer more than general advice:
* you will get an error when printing to the end of the screen if your script does not enable scrolling (see [`window.scroll`](https://docs.python.org/2/library/curses.html#curses.window.scroll)).
* if you resize the terminal window, you will have to read the keyboard to dispose of any `KEY_RESIZE` (and ignore errors).
Regarding the expanded question, these features would be used something like
this:
import json
import curses
w=curses.initscr()
w.scrollok(1) # enable scrolling
w.timeout(1) # make 1-millisecond timeouts on `getch`
try:
while True:
with open('/tmp/install-report.json') as json_data:
beta = json.load(json_data)
w.erase()
w.addstr("\nStatus Report for Install process\n=========\n\n")
for a1, b1 in beta.iteritems():
w.addstr("{0} : {1}\n".format(a1, b1))
ignore = w.getch() # wait at most 1msec, then ignore it
finally:
curses.endwin()
|
python request post doesn't submit
Question: I have a problem with requests.post; instead of returning the HTML code with the results, I get back the HTML code of the starting page.
import requests
def test(pdb):
URL = "http://capture.caltech.edu/"
r = requests.post(URL,files={"upfile": open( pdb)})
content=r.text
print(content)
print(r.headers)
def main():
test("Model.pdb")
Could it be that I have to define which post method I want to use, because there are two in the HTML file? If this is the case, how do I do that? (I want to use the second one.)
<FORM ACTION="result.cgi" METHOD=POST>
<form action="capture_ul.cgi" method="post" enctype="multipart/form-data">
I am aware that there are similar questions here but the answers there didn't
help because the mistake was that params was used instead of files, which
shouldn't be a problem here.
Thanks in advance.
Answer: 1 - You are posting to the wrong url, it should be
`http://capture.caltech.edu/capture_ul.cgi`.
2 - There's a hidden field (`name='note'`) that must be sent (a value of an empty string will be enough).
...
def test(pdb):
URL = "http://capture.caltech.edu/capture_ul.cgi"
r = requests.post(URL,files={"upfile": open(pdb)}, data={'note': ''})
content=r.text
print(content)
print(r.headers)
...
|
Pass just one field via AJAX with Flask
Question: I have a VERY simple HTML form with just one `<input type='text'>` field, an
email address, that I need to pass back to a Python script via AJAX. I can't
seem to receive the value on the other end. (And can all the JSON
encoding/decoding be avoided, since there's just one field?)
Here's the code:
from flask import Flask, render_template, request, json
import logging
app = Flask(__name__)
@app.route('/')
def hello():
return render_template('index.htm')
@app.route('/start', methods=['POST'])
def start():
# next line outputs "email=myemail@gmail.com"
app.logger.debug(request.json);
email = request.json['email'];
# next line ALSO outputs "email=myemail@gmail.com"
app.logger.debug(email);
return json.dumps({ 'status': 'OK', 'email': email })
if __name__ == "__main__":
app.run()
And the Javascript that sends the AJAX from the HTML side--
$( "form" ).on( "submit", function( event ) {
event.preventDefault();
d = "email=" + $('#email').val(); // this works correctly
// next line outputs 'sending data: myemail@gmail.com'
console.log("sending data: "+d);
$.ajax({
type: "POST",
url: "{{ url_for('start') }}",
data: JSON.stringify(d),
dataType: 'JSON',
contentType: 'application/json;charset=UTF-8',
success: function(result) {
console.log("SUCCESS: ");
// next line outputs 'Object {email: "email=myemail@gmail.com", status: "OK"}'
console.log(result);
}
});
});
Answer: JSON.stringify is used to turn an object into a JSON-formatted string, but you
don't have an object, just a string. Try this:
var d = { email: $('#email').val() };
`JSON.stringify(d)` will now turn that into a JSON-formatted string:
`{email: "myemail@gmail.com"}` which can be parsed by flask.
* * *
To do this without JSON:
var d = { email: $('#email').val() };
...
// AJAX
data: d,
success: ...
This will turn `{email: "myemail@gmail.com"}` into `email=myemail@gmail.com`
and send that as the body of the POST request. In Flask, use
`request.form['email']`.
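A minimal sketch of the non-JSON variant on the Flask side, reusing the question's route (this replaces the JSON parsing shown in the question):

    @app.route('/start', methods=['POST'])
    def start():
        email = request.form['email']  # form-encoded body instead of JSON
        return json.dumps({'status': 'OK', 'email': email})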
|
Accessing rabbitmq running on local machine from docker container
Question: I want to test a docker image running a python script subscribing to a
rabbitmq queue. I have rabbitmq running on my local machine, and want to test
the docker container running on the same machine and have it subscribe to the
local rabbitmq server.
I want the script to read the environment variable 'QUEUE_URL' set in the docker run command.
The python script:
#!/usr/bin/env python
import os
import pika

url = os.environ.get('QUEUE_URL')
params = pika.ConnectionParameters(host=url, socket_timeout=5)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
print(" [x] Received %r" % body)
channel.basic_consume(callback,
queue='hello',
no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Obviously it doesn't work if QUEUE_URL = localhost, and I also tried using the local IP address of the machine, but I only get
pika.exceptions.ProbableAuthenticationError
Is there any easy way of accessing the local rabbitmq from the docker
container ?
Answer: According to [Docker CLI
docs](https://docs.docker.com/engine/reference/commandline/run/):
> Sometimes you need to connect to the Docker host from within your container.
> To enable this, pass the Docker host’s IP address to the container using the
> --add-host flag. To find the host’s address, use the `ip addr show` command.
So all you need to do is set `QUEUE_URL` to the output of `ip addr show`.
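For example, combining that flag with the question's environment variable (a hedged sketch: the `dockerhost` alias, the IP, and the image name are placeholders; take the real address from `ip addr show` on the host):

    docker run --add-host dockerhost:172.17.0.1 \
        -e QUEUE_URL=dockerhost my-consumer-image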
|
I'm trying to get an excel sheet downloaded using python requests module and getting junk output
Question: I'm trying to download an excel file which is uploaded on a Sharepoint 2013
site.
My code is as follows:
import requests
url='https://<sharepoint_site>/<document_name>.xlsx?Web=0'
author = HttpNtlmAuth('<username>','<passsword>')
response=requests.get(url,auth=author,verify=False)
print(response.status_code)
print(response.content)
This gives me a long output which is something like:
>
> x00docProps/core.xmlPK\x01\x02-\x00\x14\x00\x06\x00\x08\x00\x00\x00!\x00\x7f\x8bC\xc3\xc1\x00\x00\x00"\x01\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xb8\xb9\x01\x00customXml/item1.xmlPK\x05\x06\x00\x00\x00\x00\x1a\x00\x1a\x00\x12\x07\x00\x00\xd2\xba\x01\x00\x00\x00'
I did something like this before for another site and got XML as output, which was acceptable for me, but I'm not sure how to handle this data.
Any ideas on how to process this to be like xlsx or xml?
Or maybe to download the xlsx another way? (I tried doing it through the wget library and the excel seems to get corrupted.)
Any ideas would be really helpful.
Regards, Karan
Answer: It seems that the file is encrypted and requests can't handle this.
Maybe the web service provides an API for downloading and secure decoding.
|
ImportError: No module named _mysql
Question: I'm trying to use the Python module MySQL-python to connect to an external
MySQL database from an AWS EC2 instance running amazon linux.
This is the code I'm trying to run:
db=_mysql.connect(host="hostname",user="dbuser",passwd="dbpassword",db="database")
db.query("""SELECT id, field1 FROM test""")
r=db.store_result()
row = r.fetch_row()
print row
I have installed the python module with pip:
sudo pip install MySQL-python
When I run the script I get the following error message:
Traceback (most recent call last):
File "script.py", line 2, in <module>
import _mysql
ImportError: No module named _mysql
When I research this I keep on digging up a lot of solutions for Ubuntu/Debian
linux that don't work for amazon linux.
How can I fix this error on amazon linux and run the script?
Also, from any experienced linux users observing/answering: Is there any
advantage to using amazon linux as I try to learn more linux and pick up AWS
or would I be better off using an Ubuntu/Debian image? I'm not an experienced
linux user as probably shows from the question.
**Update**
I've realised that the installation of the package was unsuccessful on the
amazon linux server. Here's the full output when I try to run the install via
pip:
$ sudo pip install MySQL-Python
Collecting MySQL-Python
Using cached MySQL-python-1.2.5.zip
Installing collected packages: MySQL-Python
Running setup.py install for MySQL-Python ... error
Complete output from command /usr/bin/python2.7 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-B1IkvH/MySQL-Python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-RNgtpa-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.7
copying _mysql_exceptions.py -> build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/converters.py -> build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/connections.py -> build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/cursors.py -> build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/release.py -> build/lib.linux-x86_64-2.7/MySQLdb
copying MySQLdb/times.py -> build/lib.linux-x86_64-2.7/MySQLdb
creating build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/REFRESH.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
copying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
running build_ext
building '_mysql' extension
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/include/mysql55 -I/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -fPIC -g -static-libgcc -fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python2.7 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-B1IkvH/MySQL-Python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-RNgtpa-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-B1IkvH/MySQL-Python/
Answer: Only a workaround, but one that worked for me in situations where I could not
easily call "sudo pip install".
What you (often, not always) can do:
1. Turn to a system where that python module you are looking for works
2. Identify its "location", for example, after installing enum34 on my ubuntu, the installation would put files under _/usr/lib/python2.7/dist-packages/enum_
3. Put that directory in an archive
4. On your "target" system, extract that archive locally
 5. Manipulate the python path to include the locally extracted archive (see the sketch below)
As said, this isn't beautiful; but if no better answers come in, you at least have something to try...
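Step 5 in Python terms, a minimal sketch assuming the archive was extracted to a hypothetical `/home/ec2-user/vendored` directory:

    import sys
    sys.path.insert(0, "/home/ec2-user/vendored")  # local copy of the package

    import _mysql  # now resolved from the extracted directory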
|
python package error when running from sublime
Question: I installed the python package `isochrones` using `pip install isochrones`.
When I type `from isochrones.dartmouth import Dartmouth_Isochrone` in the
`Sublime text editor` I get the following error:
from isochrones.dartmouth import Dartmouth_Isochrone
ImportError: No module named dartmouth
However, the same command works when I run it from `ipython`.
What's going on?! I have a long code, so working in `ipython` is not possible.
I want to use `sublime`.
Answer: You need to create a new [build
system](http://docs.sublimetext.info/en/latest/file_processing/build_systems.html)
for Anaconda. Select **`Tools → Build System → New Build System...`** and
replace the contents of the file that opens with the following:
{
"cmd": ["/Applications/anaconda/bin/python", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
When you hit save, it should automatically open your User directory
(`~/Library/Application Support/Sublime Text 2/Packages/User`). Save the file
there as `Anaconda Python.sublime-build`. Finally, select **`Tools → Build
System → Anaconda Python`** so the proper system will run when you select
Build.
Now that the build system is all set up, you need to ensure that you're
installing things under the right Python distribution. OS X comes with Python
built-in as `/usr/bin/python`, with system packages residing in a range of
possible directories, depending on which build of OS X you're using. From the
command line, run
which pip
to ensure it points to the Anaconda installation. If it doesn't, you'll have
to alter your `PATH` variable to put `/Applications/anaconda/bin` at the
front, before `/usr/bin` and `/usr/local/bin`. How to do that is beyond the
scope of this answer, but it's easy to find out by a quick Google search.
You should now be able to use your Anaconda `pip`-installed packages with
Sublime Text.
|
SSL SYSCALL error: Bad file descriptor on Heroku with postgres and Celery
Question: I've been using Celery successfully with a Django site on Heroku but it's just
started generating the error below, which stops it running. It looks like it's
having trouble with postgres, but I'm stumped as to how to fix it, given it's
Celery rather than my code that's having the problem (I assume...).
I'm using CloudAMPQ as a broker, and my Django settings include:
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
Here's the traceback from the Heroku logs:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 323, in __get__
return obj.__dict__[self.__name__]
KeyError: 'scheduler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: SSL SYSCALL error: Bad file descriptor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 553, in run
self.service.start(embedded_process=True)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 470, in start
humanize_seconds(self.scheduler.max_interval))
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 512, in scheduler
return self.get_scheduler()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 507, in get_scheduler
lazy=lazy)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 151, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 158, in setup_schedule
self.install_default_entries(self.schedule)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 251, in schedule
self._schedule = self.all_as_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 164, in all_as_schedule
for model in self.Model.objects.enabled():
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/app/.heroku/python/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: SSL SYSCALL error: Bad file descriptor
Answer: I've resolved the issue now... there was a line of my Django code which had
caused an Internal Server Error in the past -- I think, early on in Django
starting up, it was trying to access a model before the migrations that
created the model had run.
I'd resolved that but noticed these "SSL SYSCALL error"s started about the
same time. So I removed that line of code, and Celery has started up again.
It could be coincidence. And I don't understand why this fixed things.
Ideally I'd still like to understand what the error above actually _means_ so
I'd have a better chance of fixing such a thing in the future.
|
Queue doesn't process all elements when there are many threads
Question: I have noticed that when I have many threads pulling elements from a queue,
there are fewer elements processed than the number that I put into the queue.
This is sporadic but seems to happen somewhere around half the time when I run
the following code.
#!/bin/env python
from threading import Thread
import httplib, sys
from Queue import Queue
import time
import random
concurrent = 500
num_jobs = 500
results = {}
def doWork():
while True:
result = None
try:
result = curl(q.get())
except Exception as e:
print "Error when trying to get from queue: {0}".format(str(e))
if results.has_key(result):
results[result] += 1
else:
results[result] = 1
try:
q.task_done()
except:
print "Called task_done when all tasks were done"
def curl(ourl):
result = 'all good'
try:
time.sleep(random.random() * 2)
except Exception as e:
result = "error: %s" % str(e)
except:
result = str(sys.exc_info()[0])
finally:
return result or "None"
print "\nRunning {0} jobs on {1} threads...".format(num_jobs, concurrent)
q = Queue()
for i in range(concurrent):
t = Thread(target=doWork)
t.daemon = True
t.start()
for x in range(num_jobs):
q.put("something")
try:
q.join()
except KeyboardInterrupt:
sys.exit(1)
total_responses = 0
for result in results:
num_responses = results[result]
print "{0}: {1} time(s)".format(result, num_responses)
total_responses += num_responses
print "Number of elements processed: {0}".format(total_responses)
Answer: Tim Peters hit the nail on the head in the comments. The issue is that the
tracking of results is threaded and isn't protected by any sort of mutex. That
allows something like this to happen:
thread A gets result: "all good"
thread A checks results[result]
thread A sees no such key
thread A suspends # <-- before counting its result
thread B gets result: "all good"
thread B checks results[result]
thread B sees no such key
thread B sets results['all good'] = 1
thread C ...
thread C sets results['all good'] = 2
thread D ...
thread A resumes # <-- and remembers it needs to count its result still
thread A sets results['all good'] = 1 # resetting previous work!
A more typical workflow might have a results queue that the main thread is
listening on.
workq = queue.Queue()
resultsq = queue.Queue()

make_work(into=workq)
do_work(inq=workq, respond_on=resultsq)  # `from` is a reserved word, so use another name
# do_work would do respond_on.put_nowait(result) instead of
# return result

results = {}

while True:
    try:
        result = resultsq.get(timeout=1)  # a bare get() would block forever
    except queue.Empty:
        break # maybe? You'd probably want to retry a few times
    results[result] = results.get(result, 0) + 1  # setdefault(...) += 1 is a syntax error
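A self-contained Python 3 sketch of that pattern, with the hypothetical `make_work`/`do_work` pieces filled in inline:

    import queue
    import threading

    workq = queue.Queue()
    resultsq = queue.Queue()

    def do_work():
        while True:
            item = workq.get()
            resultsq.put_nowait("all good")  # stand-in for curl(item)
            workq.task_done()

    for _ in range(4):
        threading.Thread(target=do_work, daemon=True).start()

    for _ in range(100):
        workq.put("something")
    workq.join()  # returns once every task_done() has been called

    results = {}
    while not resultsq.empty():  # safe here: the main thread is the only reader left
        result = resultsq.get_nowait()
        results[result] = results.get(result, 0) + 1
    print(results)  # expected: {'all good': 100}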
|
How to sort stopped EC2s by time using "state_transition_reason" variable? Python Boto3
Question: I am seeing a steep increase in my AWS account costs. The largest cost items are: **EC2: 67%, RDS: 12%**
I have more than 50 stopped EC2s. One of them has been sitting in a stopped state since September 2015.
I found a way to get the stopped time of EC2s using a variable called:
> state_transition_reason
Here how the code looks:
import boto3
session = boto3.Session(region_name="us-east-1")
ec2 = session.resource('ec2')
instances = ec2.instances.filter(
Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}])
count = 0
for i in instances:
print "{0}, {1}, {2}".format( i.id, i.state_transition_reason, i.state['Name'])
count +=1
print count
It prints out the following information:
i-pll78233b, User initiated (2016-07-06 21:14:03 GMT), stopped
i-tr62l5647, User initiated (2015-12-18 21:35:20 GMT), stopped
i-9oc4391ca, User initiated (2016-03-17 04:37:46 GMT), stopped
55
**My question is**: How can I sort instances (EC2s) by how long they have been stopped? In my example I would love to see the output in the following order, starting from the year 2015:
i-tr62l5647, User initiated (2015-12-18 21:35:20 GMT), stopped
i-9oc4391ca, User initiated (2016-03-17 04:37:46 GMT), stopped
i-pll78233b, User initiated (2016-07-06 21:14:03 GMT), stopped
55
Thanks.
Answer: As long as the User initiated part never varies, we can simply sort the instances by state_transition_reason: the embedded timestamps are in a fixed `YYYY-MM-DD HH:MM:SS` format, so a plain lexicographic string sort orders them chronologically.
sortedInstances = sorted(instances, key=lambda k: k.state_transition_reason)
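Plugged into the question's script (keeping its Python 2 print style), the tail end might become:

    sortedInstances = sorted(instances, key=lambda k: k.state_transition_reason)
    for i in sortedInstances:
        print "{0}, {1}, {2}".format(i.id, i.state_transition_reason, i.state['Name'])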
|
Why can't python's datetime.max survive a round trip through timestamp / fromtimestamp?
Question: In most cases I can round-trip datetimes to and from a timestamp as follows:
from datetime import datetime
dt = datetime(2016, 1, 1, 12, 34, 56, 789)
print(dt)
print(datetime.fromtimestamp(dt.timestamp()))
> 2016-01-01 12:34:56.000789
>
> 2016-01-01 12:34:56.000789
But this doesn't work for datetime.max. Why is that?
dt = datetime.max
print(dt)
print(datetime.fromtimestamp(dt.timestamp()))
> 9999-12-31 23:59:59.999999
>
> Traceback (most recent call last): File "python", line 9, in ValueError:
> year is out of range
More precisely, why hasn't the datetime library taken this case into account?
Answer: Simply because the maximum of a datetime object is not the same as the maximum
of a valid timestamp.
There's also a good reason to limit the range of timestamps: they are but a simple Python `float`, which on "normal" machines is a double precision floating point. But you lose more than a couple of seconds in precision:
print(datetime.max.timestamp())
253402297200.0
print(datetime.max.second)
59
print(datetime.max.microsecond)
999999
**spot the error.**
Timestamps based on floating point numbers are, _by definition_, less accurate the further they are in the future. So not being able to represent arbitrary valid `datetimes` in a timestamp is perfectly reasonable, as is restricting them to a couple thousand years in the future.
so:
> More precisely, why hasn't the datetime library taken this case into
> account?
Because timestamps so far in the future are unreliable and very likely do not represent the time you meant, so rejecting them is a wise thing.
Takeaway: a floating point number like the one `timestamp()` produces is not an appropriate way of transporting times with fixed precision. If you can at all, avoid it.
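A minimal sketch of that precision loss, using a far-future datetime that is still within the valid range (the exact rounding is platform-dependent, so treat the output comments as illustrative):

    from datetime import datetime

    dt = datetime(9000, 1, 1, 0, 0, 0, 1)  # one microsecond past midnight
    rt = datetime.fromtimestamp(dt.timestamp())
    print(dt)  # 9000-01-01 00:00:00.000001
    print(rt)  # the lone microsecond is rounded away at this magnitude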
|
Python cant handle exceptions from zipfile.BadZipFile
Question: I need to handle the case where a zip file is corrupt, so the script just skips that file and goes on to the next.
In the code example underneath I'm trying to catch the exception so I can pass over it. But my script fails when the zipfile is corrupt, giving me the "normal" traceback errors instead of printing "my error", yet it runs OK if the zipfile is fine.
This i a minimalistic example of the code I'm dealing with.
path = "path to zipfile"
from zipfile import ZipFile
with ZipFile(path) as zf:
try:
print "zipfile is OK"
except BadZipfile:
print "Does not work "
pass
part of the traceback is telling me: raise BadZipfile, "File is not a zip
file"
Answer: You need to put your context manager _inside_ the `try-except` block:
from zipfile import ZipFile, BadZipfile

try:
    with ZipFile(path) as zf:
        print "zipfile is OK"
except BadZipfile:
    print "Does not work "
The error is _raised by_ `ZipFile`, so placing the context manager outside the `try-except` block means no handler can be found for the raised exception. In addition, make sure you import `BadZipfile` from `zipfile` as shown above.
|
My url template not work
Question: I have a problem with my url template tag. The redirect does not work when I click on the button.
Django version => 1.9 Python version => 2.7
In my urls.py(main) i have:
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.static import static
from django.contrib import admin
from memoryposts.views import home, profile, preregistration
urlpatterns = [
url(r'^$', home, name="home"),
url(r'^grappelli/', include('grappelli.urls')),
url(r'^admin/', admin.site.urls),
url(r'^memory/', include("memoryposts.urls", namespace="memory")),
url(r'^avatar/', include('avatar.urls')),
url(r'^accounts/', include('registration.backends.hmac.urls')),
url(r'^preregistration/', preregistration, name="preregistration"),
url(r'^profile/', profile, name="profile"),
]
if settings.DEBUG:
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
In my urls.py(apps) i have:
from django.conf.urls import url
from django.contrib import admin
from .views import (
memory_list,
memory_create,
memory_detail,
memory_update,
memory_delete,
)
urlpatterns = [
url(r'^$', memory_list, name='list'),
url(r'^create/$', memory_create, name='create'),
url(r'^(?P<slug>[-\w]+)/$', memory_detail, name='detail'),
url(r'^(?P<slug>[-\w]+)/edit/$', memory_update, name='update'),
url(r'^(?P<slug>[-\w]+)/delete/$', memory_delete, name='delete'),
]
In my views.py(apps) i have:
from django.contrib import messages
from django.http import HttpResponse, HttpResponseRedirect
from django.shortcuts import render, get_object_or_404, redirect
from .models import Post
from .forms import PostForm
def home(request):
return render(request, "base.html")
def profile(request):
return render(request, "profile.html")
def preregistration(request):
return render(request, "preregistration.html")
def memory_create(request):
form = PostForm(request.POST or None, request.FILES or None)
if form.is_valid():
instance = form.save(commit=False)
instance.save()
messages.success(request,"Succès !")
return HttpResponseRedirect(instance.get_absolute_url())
context = {
"form": form,
}
return render(request, "memory_create.html", context)
def memory_detail(request, slug=None):
instance = get_object_or_404(Post, slug=slug)
context = {
"title":instance.title,
"instance":instance,
}
return render(request, "memory_detail.html", context)
def memory_list(request):
queryset = Post.objects.all()
context = {
"object_list": queryset,
}
return render(request, "memory_list.html", context)
def memory_update(request, slug=None):
instance = get_object_or_404(Post, slug=slug)
form = PostForm(request.POST or None, request.FILES or None, instance=instance)
if form.is_valid():
instance = form.save(commit=False)
instance.save()
messages.success(request,"Mis à jour !")
return HttpResponseRedirect(instance.get_absolute_url())
context = {
"title":instance.title,
"instance":instance,
"form": form,
}
return render(request, "memory_create.html", context)
def memory_delete(request, slug=None):
instance = get_object_or_404(Post, slug=slug)
instance.delete()
messages.success(request, "Supprimer !")
return redirect("posts:list")
In my template html i have:
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:update' %}"> Update</a></button>
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:delete' %}"> Delete</a></button>
The redirect does not work with this template tag. Can you help me please :) ?
Answer: From doc here [URL
dispatcher](https://docs.djangoproject.com/en/1.10/topics/http/urls/#url-
dispatcher)
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:update' %}"> Update</a></button>
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:delete' %}"> Delete</a></button>
You should put your 'slug' in your button, like this (if the slug is 200):
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:update' 200 %}"> Update</a></button>
Usually it will look like this:
{% for slug in slug_list %}
<button type="button" class="btn btn-primary"><a id="back-profile" href="{% url 'memory:update' slug %}"> Update</a></button>
{% endfor %}
|
Comparing Dates in SQLLite
Question: Hello, I'm currently doing a small project in Python and SQLite, and I have a csv that has been imported into a database with values under the table name Members.
Each member has a "Date Joined" field in the format m/dd/yy. Some example
formats are below:
**I cant change the values in the csv because when I turn this assignment in
they're going to use a document with the same format as below**
## Date Joined
5/1/98
6/4/97
7/1/99
8/1/99
8/3/99
11/20/99
2/2/00
1/2/99
2/3/99
One of the questions I'm asked is:
to retrieve all member information for members that have joined after 1999-07-01 (yyyy-mm-dd) and are from VA (you can ignore the VA part).
My query to do this started off as
SELECT * FROM Members WHERE "Date Joined" >= "1999-07-01" AND "State"="VA";
But my problem is that I'm having trouble converting the date (I'm guessing it's stored as a string in the database) so it can be compared with "1999-07-01".
Answer: You can try the following query:
SELECT *
FROM yourTable
WHERE CAST(SUBSTR(SUBSTR(join_date, INSTR(join_date, '/') + 1), INSTR(SUBSTR(join_date, INSTR(join_date, '/') + 1), '/') + 1) AS INTEGER) <= 16 OR
(
CAST(SUBSTR(SUBSTR(join_date, INSTR(join_date, '/') + 1), INSTR(SUBSTR(join_date, INSTR(join_date, '/') + 1), '/') + 1) AS INTEGER) >= 99 AND
CAST(SUBSTR(join_date, 1, INSTR(join_date, '/') - 1) AS INTEGER) >= 7
)
**Explanation:**
Apologies for such an ugly query, but then again the date data you are working with is also very ugly. The logic of the query is that it will select all records where the two-digit year is `16` or less, _or_ the year is `99` or greater _and_ the month is `7` or greater.
The trick here is to carefully use SQLite's string manipulation functions to
extract the pieces we want. To extract the month, use:
SUBSTR(join_date, 1, INSTR(join_date, '/') - 1)
This will extract everything from the date column up to, but not including,
the first forward slash. To extract the day and year is a bit more work,
because `INSTR` picks up the first matching character. In this case, we can
substring the date to remove everything up and including the first forward
slash. So the day and year can be extracted using:
SUBSTR(sub, 1, INSTR(sub, '/') - 1) -- day
SUBSTR(sub, INSTR(sub, '/') + 1) -- year
where `sub` is obtained as `SUBSTR(join_date, INSTR(join_date, '/') + 1)`.
|
YAML file update and delete using python?
Question: I have a YAML file and it looks like below
test:
- exam.com
- exam1.com
- exam2.com
test2:
- examp.com
- examp1.com
- examp2.com
I'd like to manage this file using Python. The task is: I'd like to add an entry under "test2" and delete an entry from "test".
Answer: You first have to load the data, which will give you a top-level dict (in a variable called `data` in the following example); the values for the keys will be lists. On those lists you can do the `del` resp. `insert()` (or `append()`):
import sys
import ruamel.yaml
yaml_str = """\
test:
- exam.com
- exam1.com
- exam2.com
test2:
- examp.com
- examp1.com # want to insert after this
- examp2.com
"""
data = ruamel.yaml.round_trip_load(yaml_str)
del data['test'][1]
data['test2'].insert(2, 'examp1.5')
ruamel.yaml.round_trip_dump(data, sys.stdout, block_seq_indent=1)
gives:
test:
- exam.com
- exam2.com
test2:
- examp.com
- examp1.com # want to insert after this
- examp1.5
- examp2.com
The `block_seq_indent=1` is necessary as by default `ruamel.yaml` will left
align a sequence value with the key.¹
If you want to get rid of the comment in the output you can do:
data['test2']._yaml_comment = None
* * *
¹ This was done using [ruamel.yaml](https://pypi.python.org/pypi/ruamel.yaml)
a YAML 1.2 parser, of which I am the author.
|
Pygame for Python 3 - "Setup.py build" command error
Question: I am following these directions:
<http://www.pygame.org/wiki/CompileUbuntu?parent=Compilation>
The instructions give the steps in installing Pygame for Python 3 on Ubuntu.
I am having no problems with it until I reach the `python3 setup.py build`
step. This is what the command outputs:
Traceback (most recent call last):
File "setup.py", line 109, in <module>
from setuptools import setup, find_packages
ImportError: No module named 'setuptools'
If I simply run `import pygame` in both Python 2 and Python 3, it reports that
there is no module called pygame.
Is there anything special that is needed to be done? Thanks!
**EDIT:** Followed @docmarvin 's directions and installed the module
setuptools. Still the same error
Answer:
sudo apt install python3-setuptools
^ separate from Python 2 setuptools.
Per [this answer](http://stackoverflow.com/a/14426553/2877364).
|
'[08001][TPT] [ODBC SQL Server Wire Protocol driver] Invalid connection Data
Question: I have a Python program that involves connecting to a Teradata database. The server name is defaulted. Two people can successfully use the Python program but one person can't and gets the following error message:
'[08001][TPT] [ODBC SQL Server Wire Protocol driver] Invalid connection Data
., [TPT][ODBC SQL Server Wire Protocol driver ]Invalid attribute in connection string : DBCNAME.'
The person who gets the error message has access to that server and uses
Teradata.
Python code:
import teradata
udaExec = teradata.UdaExec (appName="test", version="1.0",
logConsole=False)
session = udaExec.connect(method="odbc", system=servername,username=user1, password=passw)
Answer: If you check the log you can see that you probably have more than one Teradata driver set in your ODBC configuration.
To select the correct Teradata driver you can add the driver property to the connect method:
session = udaExec.connect(method="odbc", system="servername", username=user1, password=passw, driver="Teradata");
A different way to connect to Teradata could be to use a DSN defined by the user in the ODBC settings:
import teradata
udaExec = teradata.UdaExec (appName="test", version="1.0", logConsole=False)
session = udaExec.connect(method="odbc", dsn="<dsn-defined-by-user>", username=user1, password=passw)
|
Why do Python and wc disagree on byte count?
Question: Python and `wc` disagree drastically on the byte count (length) of a given
string:
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
Output : 318885
* * *
$> wc commedia.pfc
2181 12282 461491 commedia.pfc
The file is mostly made of unreadable chars so I will provide a hexdump:
<http://www.filedropper.com/dump_2>
The file is the result of a prefix-free compression; if you ask, I can provide the full code that generates it along with the input text.
Why aren't both byte counts equal?
* * *
I've added the full code of the compression algorithm; it looks long but is full of documentation and tests, so it should be easy to understand:
"""
Implementation of prefix-free compression and decompression.
"""
import doctest
from itertools import islice
from collections import Counter
import random
import json
def binary_strings(s):
"""
Given an initial list of binary strings `s`,
yield all binary strings ending in one of `s` strings.
>>> take(9, binary_strings(["010", "111"]))
['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010']
"""
yield from s
while True:
s = [b + x for x in s for b in "01"]
yield from s
def take(n, iterable):
"""
Return first n items of the iterable as a list.
"""
return list(islice(iterable, n))
def chunks(xs, n, pad='0'):
"""
Yield successive n-sized chunks from xs.
"""
for i in range(0, len(xs), n):
yield xs[i:i + n]
def reverse_dict(dictionary):
"""
>>> sorted(reverse_dict({1:"a",2:"b"}).items())
[('a', 1), ('b', 2)]
"""
return {value : key for key, value in dictionary.items()}
def prefix_free(generator):
"""
Given a `generator`, yield all the items from it
that do not start with any preceding element.
>>> take(6, prefix_free(binary_strings(["00", "01"])))
['00', '01', '100', '101', '1100', '1101']
"""
seen = []
for x in generator:
if not any(x.startswith(i) for i in seen):
yield x
seen.append(x)
def build_translation_dict(text, starting_binary_codes=["000", "100","111"]):
"""
Builds a dict for `prefix_free_compression` where
More common char -> More short binary strings
This is compression as the shorter binary strings will be seen more times than
the long ones.
Univocity in decoding is given by the binary_strings being prefix free.
>>> sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
"""
binaries = sorted(list(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes)))), key=len)
frequencies = Counter(text)
# char value tiebreaker to avoid non-determinism v
alphabet = sorted(list(set(text)), key=(lambda ch: (frequencies[ch], ch)), reverse=True)
return dict(zip(alphabet, binaries))
def prefix_free_compression(text, starting_binary_codes=["000", "100","111"]):
"""
Implements `prefix_free_compression`, simply uses the dict
made with `build_translation_dict`.
Returns a tuple (compressed_message, tranlation_dict) as the dict is needed
for decompression.
>>> prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0]
'010101010100111111111001101101101001000100010011001'
"""
translate = build_translation_dict(text, starting_binary_codes)
# print(translate)
return ''.join(translate[i] for i in text), translate
def prefix_free_decompression(compressed, translation_dict):
"""
Decompresses a prefix free `compressed` message in the form of a string
composed only of '0' and '1'.
Being the binary codes prefix free,
the decompression is allowed to take the earliest match it finds.
>>> message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])
>>> message
'010101010100111111111001101101101001000100010011001'
>>> sorted(d.items())
[(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
>>> ''.join(prefix_free_decompression(message, d))
'aaaaa bbbb ccc dd e'
"""
decoding_translate = reverse_dict(translation_dict)
# print(decoding_translate)
word = ''
for bit in compressed:
# print(word, "-", bit)
if word in decoding_translate:
yield decoding_translate[word]
word = ''
word += bit
yield decoding_translate[word]
if __name__ == "__main__":
doctest.testmod()
with open("commedia.txt") as f:
text = f.read()
compressed, d = prefix_free_compression(text)
with open("commedia.pfc", "w") as f:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
print(len(t))
f.write(t)
with open("commedia.pfcd", "w") as f:
f.write(json.dumps(d))
# dividing by 8 goes from bit length to byte length
print("Compressed / uncompressed ratio is {}".format((len(compressed)//8) / len(text)))
original = ''.join(prefix_free_decompression(compressed, d))
assert original == text
`commedia.txt` is filedropper.com/commedia
Answer: You are using Python3 and an `str` object - that means the count you see in
`len(t)` is the number of _characters_ in the string. Now, characters are not
bytes - [and it is so since the
90's](http://www.joelonsoftware.com/articles/Unicode.html) .
Since you did not declare an explicit text encoding, the file writing is
encoding your text using the system default encoding - which on Linux or Mac
OS X will be utf-8 - an encoding in which any character that falls out of the
ASCII range (ord(ch) > 127) uses more than one byte on disk.
So, your program is basically wrong. First, decide whether you are dealing with _text_ or _bytes_. If you are dealing with bytes, open the file for writing in binary mode (`wb`, not `w`) and change this line:
t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
to
t = bytes(int(b, base=2) for b in chunks(compressed, 8))
That way it is clear that you are working with the bytes themselves, and not
mangling characters and bytes.
Of course there is an ugly workaround to do a "transparent encoding" of the text you had to a bytes object (if your original text has all character codepoints in the 0-255 range, that is): you could encode your previous `t` with the `latin1` encoding before writing it to a file. But that would have been just wrong semantically.
You can also experiment with Python's little known "bytearray" object: it
gives one the ability to deal with elements that are 8bit numbers, and have
the convenience of being mutable and extendable (just as a C "string" that
would have enough memory space pre allocated)
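Putting the fix together, a short sketch reusing the question's `chunks` and `compressed` names:

    with open("commedia.pfc", "wb") as f:  # binary mode, not text mode
        data = bytes(int(b, base=2) for b in chunks(compressed, 8))
        print(len(data))  # len() now counts actual bytes on disk
        f.write(data)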
|
keep tracking time in while loop and interacting with other commands in python3
Question: So I created a while loop where the user inputs something and an output is returned
choice = input()
while True:
if choice == "Mark":
print("Zuckerberg")
elif choice == "Sundar":
print("Pichai")
and I want to keep time, so when I hit Facebook it is going to keep time for FB and when I type Google it is going to keep time for Google, like this
import time
choice = input()
while True:
if choice == "Facebook":
endb = time.time()
starta = time.time()
if choice == "google":
enda = time.time()
startb = time.time()
if choice == "Mark":
print("Zuckerberg")
elif choice == "Sundar":
print("Pichai")
If I write it like the above, when I get to printing the elapsed times, one is going to print the same number as the other but negative instead of positive, and vice versa
elapseda = enda - starta
elapsedb = endb - startb
print(elapseda)
print(elapsedb)
How do I keep track of the time but still be able to interact with my other inputs/outputs?
Thanks
Thanks
**Edit:** Sorry for not making it clear. What I meant by tracking time is that, instead of printing an output when you type a keyword, it is going to track time. This will be used to take the possession time of a sports match while also counting other stats like penalty kicks and such. I can't post my code due to the character limit, but here is an idea:
while True:
choice = input()
if choice == "pk":
print("pk")
elif choice == "fk":
print("fk")
elif choice == "q":
break
and in there I should put possession time, but meanwhile I want to interact with the other commands
Answer: In the while loop you could count seconds like so.
import time
a = 0
while True:
a = a + 1
time.sleep(1)
That would mean that a is roughly how many seconds the while loop has been running.
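If instead you want elapsed wall-clock time per keyword, here is a hedged sketch of folding possession timing into the question's input loop (the side names "home"/"away" and the quit key "q" are examples only):

    import time

    possession = {"home": 0.0, "away": 0.0}
    current, started = None, None

    while True:
        choice = input()
        if choice in possession:  # typing a side switches possession to it
            now = time.time()
            if current is not None:
                possession[current] += now - started
            current, started = choice, now
        elif choice == "q":
            if current is not None:
                possession[current] += time.time() - started
            break
        # other keywords ("pk", "fk", ...) can still be handled here as before

    print(possession)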
|
Python script for searching variable strings between two constant strings
Question:
import re
infile = open('document.txt','r')
outfile= open('output.txt','w')
copy = False
for line in infile:
if line.strip() == "--operation():":
bucket = []
copy = True
elif line.strip() == "StartOperation":
for strings in bucket:
outfile.write( strings + ',')
for strings in bucket:
outfile.write('\n')
copy = False
elif copy:
bucket.append(line.strip())
CSV format is like this:
id, name, poid, error
5896, AutoAuthOSUserSubmit, 900105270, 0x4002
My log file has several sections starting with `==== START ====` and ending
with `==== END ====`. I want to extract the string between `--operation():`
and `StartOperation`. For example, `AutoAuthOSUserSubmit.` I also want to
extract the `poid` value from line `poid: 900105270, poidLen: 9`. Finally, I
want to extract the return value, e.g `0x4002` if `Roll back all updates` is
found after it.
I am not even able to extract the original text if `Start` and `End` are not on the same line. How do I go about doing that?
This is a sample LOG extract with two paragraphs:
-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== START ====**
open file /ecas/public/onsite-be/config/timer.conf failed
INFO 08/24/16 02:07:56 salt1be-d1-ap(**5896**/0) main.c(780*****):--operation(): AutoAuthOSUserSubmit. StartOperation*****
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(784):--Client Information: Request from host 'malt-d1-wb' process id 12382.
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) TOci.cc(571):FetchServiceObjects: ServiceCert.sql
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsserviceagent.cpp(517):Generate Certificate 2: c1cd00d5c3de082360a08730fef9cd1d
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1373):GenerateWebPin : poid: **900105270**, poidLen: 9
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1408):GenerateWebPin : pinStr
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaadapter_vasco_totp.c(275):UAVascoTOTPImpl.close() -- Releasing Adapter Context
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsenterprise.cpp(288):VSEnterprise::Engage returns 0x4002 - Unknown error code **(0x4002)**
ERROR 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsautoauth.cpp(696):OSAAEndUserEnroll: error occurred. **Roll back** all updates!
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaotptokenstoreqmimpl.cpp(199):Close token store
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(990):-- EndOperation
-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== END ====**
OPERATION = AutoAuthOSUserSubmit, rc = 0x0 (0)
SYSINFO Elapse = 0.687, Heap = 1334K, Stack = 64K
Answer: It looks like you are simply trying to find strings within the LOG document
and trying to parse the lines of characters using keywords. You can go line by
line which is what you are doing currently or you could go through the
document once (assuming the LOG document never gets huge) and add each
subsequent line to an existing string.
Check this out for finding substrings
<http://www.tutorialspoint.com/python/string_index.htm> <\--- for finding the
location of where a string is within another string, this will help you
determine a start index and an end index. Once you have those you can extract
your desired information.
Check this out for your CSV problem
<http://www.tutorialspoint.com/python/string_split.htm> <\--- for splitting a
string around a specific character i.e. "," for your CSV files.
[Does Python have a string contains substring
method?](http://stackoverflow.com/questions/3437059/does-python-have-a-string-
contains-substring-method) will be more useful than your current method of
using the strip() method
Hopefully this will point you in the right direction!
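For instance, a minimal sketch of that substring approach on one simplified log line (the marker strings come from the question itself):

    line = "main.c(780):--operation(): AutoAuthOSUserSubmit. StartOperation"
    start = line.index("--operation():") + len("--operation():")
    end = line.index("StartOperation", start)
    operation = line[start:end].strip(" .")
    print(operation)  # AutoAuthOSUserSubmit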
|
mongoDB: store audio files (Best way) using python
Question: I've got a bunch of audio files (.wav) and I'd like to know from you guys: what's the best way to store them in MongoDB? What I'm doing today is just storing the path of the file (as you can see below). But I think that's not good, because I'm creating a "fake reference" to the file, and I wonder, if by chance I delete the file, how could I keep things consistent?
{
"_id" : ObjectId("57c0a06cd92f49222ce2f42d"),
"eps" : "GPSP",
"terminal" : 989638523,
"main_path" : "W:\\Python\\Speech\\audio\\teste\\teste_9",
"motivo" : "Classic",
"audio" : [
{
"path" : "W:\\Python\\Speech\\audio\\teste\\teste_9\\01_audio.wav",
"confidence" : 0.8332507,
"transcript" : "Alô bom dia com quem eu falo",
"sequence" : 1
},
{
"path" : "W:\\Python\\Speech\\audio\\teste\\teste_9\\02_audio.wav",
"confidence" : 0.90813386,
"transcript" : "Um novo benefício pra minha da senhora, sem impostos e nada mais do que isso",
"sequence" : 2
}
}
Thank you,
Answer: Take a look at MongoDB
[`gridfs`](https://docs.mongodb.com/manual/core/gridfs/):
> GridFS is a specification for storing and retrieving files that exceed the
> BSON-document size limit of 16 MB
Using pymongo you can put files inside like this:
from pymongo import MongoClient
import gridfs

client = MongoClient()
db = client.mydb  # assumption: substitute your own database name
fs = gridfs.GridFS(db)
file_id = fs.put(open(r'audio.wav', 'rb'))
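Reading the file back out is the mirror operation, a short usage sketch:

    wav_bytes = fs.get(file_id).read()  # bytes of the stored .wav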
|
transform string column of a pandas data frame into 0 1 vectors
Question: `LabelEncoder` and `OneHotEncoder` work pretty well for numpy arrays, transforming strings into `0, 1` based vectors.
My question is: is there a neat API to convert a column of a pandas data frame into `0, 1` vectors? I've shown my code and the raw content of the pandas data frame `123.csv` below; suppose I want to binarize columns `c_a`, `c_b`, `c_c` into `0, 1`. Each of the 3 columns is independent, and I want to binarize them separately and independently.
Code,
import pandas as pd
sample=pd.read_csv('123.csv', sep=',',header=None)
print sample.dtypes
123.csv content,
c_a,c_b,c_c,c_d
hello,python,pandas,1.2
hi,c++,vector,1.2
Label Encoder and OneHotEncoder examples for numpy,
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
S = np.array(['b','a','c'])
le = LabelEncoder()
S = le.fit_transform(S)
print(S)
ohe = OneHotEncoder()
one_hot = ohe.fit_transform(S.reshape(-1,1)).toarray()
print(one_hot)
which results in:
[1 0 2]
[[ 0. 1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
**Edit 1**: I tried `get_dummies`, and it seems the results are `0.0` and `1.0` (apparently `float`); is there a way to convert to integers directly?
0_c_a 0_hello 0_hi 0_ho 1_c++ 1_c_b 1_java 1_python 2_c_c 2_numpy \
0 1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0
1 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
2 0.0 0.0 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
3 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0
Answer: Are you looking for `get_dummies`?
import numpy as np
import pandas as pd

s = pd.Series(["a", "b", "a", "c"])
pd.get_dummies(s)
If you want `ints`:
pd.get_dummies(s).astype(np.uint8)
reference:
[Pandas get_dummies to output dtype integer/bool instead of
float](http://stackoverflow.com/questions/27468892/pandas-get-dummies-to-
output-dtype-integer-bool-instead-of-float)
|