How can I set UVs to a Mesh in Blender Python?
Question: Using Blender 2.49's Python API I'm creating a mesh. I have a list of vertices
and a list of face indices.
e.g.
mesh = bpy.data.meshes.new('mesh')
mesh.verts.extend(mVerts)
mesh.faces.extend(mFaces)
I've noticed MVert's [uvco](http://www.blender.org/documentation/249PythonDoc/Mesh.MVert-class.html#uvco) property and MFace's [uv](http://www.blender.org/documentation/249PythonDoc/Mesh.MFace-class.html#uv) property, and added some random values, but I can't see any change when I render.
Regarding uvco, the documentation mentions:
> Note: These are not seen in the UV editor and they are not a part of UV a
> UVLayer.
I tried this with the new mesh selected:
import Blender
from Blender import *
import random
scn = Scene.GetCurrent()
ob = scn.objects.active
o = ob.getData()
for v in o.verts:
    v.uvco = (random.random(),random.random(),random.random())
    print v.uvco
for f in o.faces:
    r = (random.random(),random.random())
    for i in range(0,4):
        f.uv.append(r)
    print f.uv
I can see the values change in Terminal, but I don't see any change when I
render. If I reselect the object, the previous face uvs are gone.
Can anyone explain how UVs are set using the Blender 2.49 Python API?
Thanks
Answer: Try simply replacing this line:
o = ob.getData()
with
o = ob.getData(mesh=True)
Due to the historic development of the Blender Python API, an ordinary call to
blender_object.getData gives you a copy of an object's mesh data which, while
it can be modified, is not "live" on the displayed object. (Actually it is even
an "NMesh" - a class that differs from the living "Mesh" class.)
With the optional parameter "mesh=True" passed to the getData method you get
back the living mesh of the object, and changes to it take effect (they can be
seen after forcing an update with Blender.Redraw()).
I never tried UV things, however, so there might be more things to it, but I
believe this is your issue.
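For completeness, here is a rough, untested sketch of how face UVs might be assigned through the live mesh in 2.49. The `faceUV` flag, `Mathutils.Vector` and `mesh.update()` calls are my assumptions from the 2.49 documentation, not something the answer above confirms, so treat this as a starting point only:
import Blender
from Blender import Mathutils
import random

# Sketch only - assumes the 2.49 Mesh API behaves as documented.
ob = Blender.Scene.GetCurrent().objects.active
me = ob.getData(mesh=True)          # the "live" Mesh, not an NMesh copy
me.faceUV = True                    # make sure the mesh has a UV layer
for f in me.faces:
    f.uv = [Mathutils.Vector(random.random(), random.random())
            for v in f.verts]
me.update()
Blender.Redraw()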
|
python data types
Question: I wrote a script to take files of data that is in columns and plot it
depending on which column the user wants to view. Well, I noticed that the
plots look crazy, and have all the wrong numbers because python is ignoring
the exponential.
My numbers are in the format: 1.000000E+1 OR 1.000000E-1
What dtype is that? I am using numpy.genfromtxt to import with a dtype =
float. I know there are all sorts of dtypes you can enter, but I cannot find a
comprehensive list of the options, and examples.
Thanks.
Here is an example of my input (those spaces are tabs):
> Time Stamp    T1_ModBt       T2_90Bend      T3_InPE        T5_Stg2Rfrg
> 5:22 AM       2.115800E+2    1.400000E+0    1.400000E+0    3.035100E+1
> 5:23 AM       2.094300E+2    1.400000E+0    1.400000E+0    3.034800E+1
> 5:24 AM       2.079300E+2    1.400000E+0    1.400000E+0    3.031300E+1
> 5:25 AM       2.069500E+2    1.400000E+0    1.400000E+0    3.031400E+1
> 5:26 AM       2.052600E+2    1.400000E+0    1.400000E+0    3.030400E+1
> 5:27 AM       2.040700E+2    1.400000E+0    1.400000E+0    3.029100E+1
**Update** I figured out at least part of the reason why what I am doing does
not work. Still do not know how to define dtypes the way I want to.
import numpy as np
file = np.genfromtxt('myfile.txt', usecols = (0,1), dtype = (str, float), delimiter = '\t')
That returns an array of strings for each column. How do I tell it I want
column 0 to be a str, and all the rest of the columns to be float?
Answer:
In [55]: type(1.000000E+1)
Out[55]: <type 'float'>
What does your input data look like? It is quite possible that it's in the wrong
input format, but it should be fairly easy to convert it to the
right format.
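For the updated part of the question (one string column, the rest floats), one sketch that should work with numpy's genfromtxt is to let it read the header row as field names and infer a type per column with `dtype=None`; the file name and tab delimiter below simply follow the question:
import numpy as np

# names=True turns the header row into field names; dtype=None lets
# genfromtxt pick a per-column type (a string for the time stamps,
# floats for the numeric columns, so 2.115800E+2 parses as 211.58).
data = np.genfromtxt('myfile.txt', delimiter='\t', names=True, dtype=None)
print data.dtype.names     # see how the header row became field names
print data['T1_ModBt']     # a float column, exponents handled correctly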
|
What's the best performing xml parsing for GAE (Python Version)?
Question: I think we all know [this page](http://blog.ianbicking.org/2008/03/30/python-html-parser-performance/), but the benchmarks provided date from more than two years ago. So, I would like to know if you could point out the best xml
parser around. As I need just a xml parser, the more important thing to me is
speed over everything else. My objective is to process some xml feeds (about
25k) that are 4kb in size (this will be a daily task). As you probably know,
I'm restricted by the 30 seconds request timeout. So, what's the best parser
(Python only) that I can use?
Thanks for your answers.
Edit 01:
@Peter Recore I'll. I'm writing some code now and plan to run some profiling
in the near future. Regarding your question, the answer is no. Processing
takes just a little time when compared with downloading the actual xml feed.
But, I can't increase Google's Bandwidth, so I can only focus on this right
now.
My only problem is that I need to do this as fast as possible because my
objective is to get a snapshot of a website's status. And, as the internet is live
and people keep adding and changing its data, I need the fastest method
because any data insertion during the "downloading and processing" time span
will actually mess with my statistical analysis.
I used to do it from my own computer and the process took 24 minutes back
then, but now the website has 12 times more information.
Answer: I know that this doesn't answer my question directly, but it does what I
needed.
I remembered that XML is not the only file type I could use, so instead of
using an XML parser I chose to use JSON, which is about 2.5 times smaller in size.
That means a decrease in download time. I used `simplejson` as my JSON library.
I used `from google.appengine.api import urlfetch` to get the json feeds in
parallel:
class GetEntityJSON(webapp.RequestHandler):
    def post(self):
        url = 'http://url.that.generates.the.feeds/'
        if self.request.get('idList'):
            idList = self.request.get('idList').split(',')
            try:
                asyncRequests = self._asyncFetch([url + id + '.json' for id in idList])
            except urlfetch.DownloadError:
                # Dealt with time out errors (#5) here, as these were very frequent
                pass
            for result in asyncRequests:
                if result.status_code == 200:
                    entityJSON = simplejson.loads(result.content)
                    # Filled a database entity with some json info. It goes like this:
                    # entity = Entity(
                    #     name = entityJSON['name'],
                    #     dateOfBirth = entityJSON['date_of_birth']
                    # ).put()
        self.redirect('/')

    def _asyncFetch(self, urlList):
        rpcs = []
        for url in urlList:
            rpc = urlfetch.create_rpc(deadline = 10)
            urlfetch.make_fetch_call(rpc, url)
            rpcs.append(rpc)
        return [rpc.get_result() for rpc in rpcs]
I tried getting 10 feeds at a time, but most of the time an individual feed
raised DownloadError #5 (Time out). Then, I increased the deadline to 10
seconds and started getting 5 feeds at a time.
But still, 25k feeds getting 5 at a time results in 5k calls. In a queue that
can spawn 5 tasks a second, the total task time should be 17min in the end.
|
Download torrent from torcache.com using PHP?
Question: As the server is using gzip encoding, I am getting an invalid torrent while
downloading.
<?
$path_parts = pathinfo("http://torcache.com/torrent/56A250DC4CD64F6C304631897F1108D413FE76C7.torrent");
$name = $path_parts['basename'];
$d = "torrent/".$name;
if(!copy($f,$d))
{
    echo "not copied";
}
else
{
    echo "copied";
}
?>
Then I used this, but the result is still an invalid torrent:
<?php
/* Tutorial by AwesomePHP.com -> www.AwesomePHP.com */
/* Function: download remote file */
/* Parameters: $url -> to download | $dir -> where to store file |
$file_name -> store file as this name - if null, use default*/
/* $path_parts = pathinfo("http://torcache.com/torrent/56A250DC4CD64F6C304631897F1108D413FE76C7.torrent");
$name= $path_parts['basename'];
$d="torrent/".$name; */
$f="http://torcache.com/torrent/56A250DC4CD64F6C304631897F1108D413FE76C7.torrent";
downloadRemoteFile($f,"torrent/",$file_name = NULL);
function downloadRemoteFile($url,$dir,$file_name = NULL){
    if($file_name == NULL){ $file_name = basename($url);}
    $url_stuff = parse_url($url);
    $port = isset($url_stuff['port']) ? $url_stuff['port'] : 80;
    $fp = fsockopen($url_stuff['host'], $port);
    if(!$fp){ return false;}
    $query = 'GET ' . $url_stuff['path'] . " HTTP/1.0\n";
    $query .= 'Host: ' . $url_stuff['host'];
    $query .= "\n\n";
    fwrite($fp, $query);
    while ($tmp = fread($fp, 8192)) {
        $buffer .= $tmp;
    }
    preg_match('/Content-Length: ([0-9]+)/', $buffer, $parts);
    $file = substr($buffer, - $parts[1]);
    $file_binary = ($file);
    if($file_name == NULL){
        $temp = explode(".",$url);
        $file_name = $temp[count($temp)-1];
    }
    $file_open = fopen($dir . "/" . $file_name,'w');
    if(!$file_open){ return false;}
    fwrite($file_open,$file_binary);
    fclose($file_open);
    return true;
}
?>
**python**
import urllib2, httplib
httplib.HTTPConnection.debuglevel = 1
request = urllib2.Request('http://torcache.com/torrent/4F78CA71DD8C308F18426F845AFBFF4481633B11.torrent')
request.add_header('Accept-encoding', 'gzip')
opener = urllib2.build_opener()
f = opener.open(request)
compresseddata = f.read()
import StringIO
compressedstream = StringIO.StringIO(compresseddata)
import gzip
gzipper = gzip.GzipFile(fileobj=compressedstream)
data = gzipper.read()
print data
filename = "633B11.torrent"
FILE = open(filename,"w")
FILE.write(data)
Then I used Python with gzip decompression, but I am still getting an invalid
torrent file. Can **anybody help me solve the gzip problem in PHP to download a
torrent from a torrent cache server with gzip encoding?**
Answer: I encountered the same problem; the solution is to simply decode the torrent
before saving it.
function gzdecode($d){
    $f=ord(substr($d,3,1));
    $h=10;
    $e=0;
    if($f&4){
        $e=unpack('v',substr($d,10,2));
        $e=$e[1];
        $h+=2+$e;
    }
    if($f&8){
        $h=strpos($d,chr(0),$h)+1;
    }
    if($f&16){
        $h=strpos($d,chr(0),$h)+1;
    }
    if($f&2){
        $h+=2;
    }
    $u = gzinflate(substr($d,$h));
    if($u===FALSE){
        $u=$d;
    }
    return $u;
}
$torrent = file_get_contents('http://URL_PATH_TO_TORRENT.torrent',FILE_BINARY);
$torrent = gzdecode($torrent);
file_put_contents('./torrentname.torrent',$torrent);
|
How to create a notification server which informs Delphi application when database changes?
Question: We need to be able to inform a Delphi application in case there are changes to
some of our tables in MySQL.
Delphi clients are in the Internet behind a firewall, and they have to be
authenticated before connecting to the notification server we need to
implement. The server can be programmed using for example Java, PHP or Python,
and it has to support thousands of clients.
Typically one change in the database needs to be informed only to a single
client, and I don't believe performance will be a bottleneck. It just has to
be possible to inform any of those thousands of clients when a change
affecting the specific client occurs.
I have been thinking of a solution where:
1. MySQL trigger would inform to notification server
2. Delphi client connects to a messaging queue and gets the notification using it
My questions:
1. What would be the best to way from the trigger to inform the external server of the change
2. Which message queue solution to pick?
Answer: **Answer to the First Question:**
check this question and answers on Stack Overflow:
[When a new row in database is added, an external command line program must be invoked](http://stackoverflow.com/questions/668666/when-a-new-row-in-database-is-added-an-external-command-line-program-must-be-inv)
In theory, a simple user-defined function could be used to fire a 'row
changed' message to a message broker / queue. But this involves external
systems (at least a network subsystem) which can fail - and bad things can
happen.
A different solution, which does not require dangerous modifications to the
database system, would be a multi-tiered design for the application. The server
application which hosts the business logic then needs to generate 'database
content changed' events and post them to a publish/subscribe message channel
(a 'topic') on the message broker, so that every client receives a copy
of the message immediately, or when it reconnects (using '[durable
subscriptions](http://docs.oracle.com/javaee/1.4/api/javax/jms/Session.html#createDurableSubscriber%28javax.jms.Topic,%20java.lang.String%29)').
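As an illustration of that design, here is a minimal, hypothetical Python sketch of the business-logic tier publishing a 'database content changed' event to a broker topic over STOMP. It assumes a reasonably recent `stomp.py` client and a broker (for example ActiveMQ) accepting STOMP connections on port 61613; none of these names come from the answer itself:
import json
import stomp

# Publish a "row changed" event to a pub/sub topic on the broker.
conn = stomp.Connection([('broker.example.com', 61613)])
conn.connect('user', 'password', wait=True)
conn.send(destination='/topic/db.changes',
          body=json.dumps({'table': 'orders', 'id': 42, 'action': 'update'}))
conn.disconnect()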
* * *
I wrote a related blog article about this topic here: [Firebird Database Events and Message-oriented Middleware](http://mikejustin.wordpress.com/2012/11/06/firebird-database-events-and-message-oriented-middleware/)
**Answer to the Second Question:**
The creators of [Second Life](http://www.secondlife.com/) have evaluated a
couple of message brokers and published their results - for some of the
products, Delphi client libraries exist or can be implemented using standard
protocols: [Message Queue Evaluation
Notes](http://wiki.secondlife.com/wiki/Message_Queue_Evaluation_Notes)
Since then, other products have been released, some of them can also be
integrated with Delphi clients through non-Java protocols, for example:
* [**Open Message Queue (OpenMQ) 4.4**](http://activemq.apache.org/index.html), which is the default [JMS](http://en.wikipedia.org/wiki/Java_Message_Service) provider broker in the Sun GlassFish v3 application Server
* [**JBoss HornetQ 2.1**](http://jboss.org/hornetq/) which will be the default JMS provider in JBoss application server 6. HornetQ 2.0.GA obtained scores up to 307% higher than previously published SPECjms2007 benchmark results, on the same server hardware and operating system set-up.
A very popular open source message broker which can be used from Delphi, Java,
PHP, C# (and other) clients is
* [**Apache ActiveMQ 5.4.1**](http://activemq.apache.org/index.html) - free [introduction chapter](http://manning.com/snyder/) of "ActiveMQ in Action" by Bruce Snyder, Dejan Bosanac, and Rob Davies
All brokers are designed for thousands of concurrent clients and tens of
thousands of messages per second. They also typically support clustering and
failover, though this is not part of the JMS specification.
If speed is not so important, but you need High availability (even if your
internal system is down), [Amazon Simple Queue Service (Amazon
SQS)](http://aws.amazon.com/sqs/) is a cloud-based service which can be
accessed using REST and Soap style interfaces.
|
Perl's BEGIN{} block in Python
Question: I have Python code that uses the "with" keyword (new in 2.6) and I want to
check if the interpreter version is at least 2.6, so I use this code:
import sys
if sys.version < '2.6':
    raise Exception( "python 2.6 required" )
However, the 2.4 interpreter chokes on the with keyword (later in the script)
because it doesn't recognize the syntax, and it does this before it evaluates
my check.
Is there something in Python analogous to Perl's BEGIN{} block?
Answer: Take a look here: <http://stackoverflow.com/questions/446052/python-best-way-to-check-for-python-version-in-program-that-uses-new-language-fe>
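The usual workaround is a sketch like the one below (not from the linked answer, and `real_main` is a made-up module name): syntax errors are raised when a file is compiled, so keep the new-syntax code out of the entry-point script and only import it after the version check has passed.
import sys

if sys.version_info < (2, 6):
    raise Exception("python 2.6 required")

# The import happens at runtime, so a 2.4 interpreter never has to compile
# the module that actually contains the 'with' statements.
import real_main   # hypothetical module holding the 2.6-only code
real_main.run()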
|
issue running a test in Python, via rpy2
Question: _I have a feeling this will be a quick fix, given that I started coding two
weeks ago. I am trying to run a statistical test - a Mantel test, looking for a
correlation between two distance matrices - in Python, by using a function(?)
that has already been written in R, via Rpy2. The R module is "ade4" and it
contains "mantel.rtest"_
from rpy2 import robjects
import rpy2.robjects as robjects
robjects.r('library(ade4)')
rmantel = robjects.r("mantel.rtest")               # EDIT
for i in windownA:
    M1 = asmatrix(identityA[i]).reshape(14,14)
    for j in windownB:
        M2 = asmatrix(identityB[j]).reshape(14,14)
        result = rmantel(M1, M2, nrepet = 9999)    # EDIT
        print result
        print ' '
EDIT: this now works! _"This returned the error **"AttributeError: 'R' object
has no attribute 'mantel'"**, which led me to believe that the object being
called here was truncated at the "." (i.e. "mantel" versus the full
"mantel.rtest"). I tried reassigning "mantel.rtest" to a name without a "."
(e.g. `rmantel = "mantel.rtest"`) and calling `result = robjects.r.rmantel(M1, M2, nrepet = 9999)`,
only to receive the error **"AttributeError: 'R' object has no attribute
'rmantel'"** - so that did not work. Any thoughts as to how I can get around
this issue?"_
_**New Issue**:_ The Mantel test requires data in "dist" format, so when I run
the edited code, I get the following error **"RRuntimeError: Error in function
(m1, m2, nrepet = 99) : Object of class 'dist' expected"**
So I tried to convert the file to that format and when I print the results,
it's the bottom half of a matrix of the correct size, but all fields are
filled with "NA"
robjects.r('library(ade4)')
rmantel = robjects.r("mantel.rtest")
distify = robjects.r("dist")
for i in windownA:
    M1 = asmatrix(identityA[i]).reshape(14,14)
    print distify(M1)
    MOne = distify(M1, 14)
    for j in windownB:
        M2 = asmatrix(identityB[j]).reshape(14,14)
        print distify(M2)
        MTwo = distify(M2, 14)
        result = rmantel(M1, M2, nrepet = 9999)
        print result
        print ' '
i get"
1 2 3 4 5 6 7 8 9 10 11 12 13
2 NA
3 NA NA
4 NA NA NA
5 NA NA NA NA
6 NA NA NA NA NA
7 NA NA NA NA NA NA
8 NA NA NA NA NA NA NA
9 NA NA NA NA NA NA NA NA
10 NA NA NA NA NA NA NA NA NA
11 NA NA NA NA NA NA NA NA NA NA
12 NA NA NA NA NA NA NA NA NA NA NA
13 NA NA NA NA NA NA NA NA NA NA NA NA
14 NA NA NA NA NA NA NA NA NA NA NA NA NA
Answer: Try `robjects.r['mantel.rtest']`:
In [1]: %cpaste
Pasting code; enter '--' alone on the line to stop.
:from rpy2 import robjects
import rpy2.robjects as robjects
robjects.r('library(ade4)')
::::::--
In [3]: robjects.r['mantel.rtest']
Out[5]: <RFunction - Python:0xa2aac0c / R:0xac9ec04>
This also works:
In [8]: robjects.r('mantel.rtest')
Out[8]: <RFunction - Python:0xaf7042c / R:0xac9ec04>
**Edit (for the New Issue):** Since you say `mantel.rtest` requires data in
`dist` format, I suppose `M1` and `M2` should be in `dist` format. But `M1`
and `M2` appear to be numpy arrays. On the other hand, `MOne` and `MTwo` look
like they might be in `dist` format.
So perhaps try
result = rmantel(MOne, MTwo, nrepet = 9999)
|
How to use os.spawnv to send email copy using Python?
Question: First let me say that I know it's better to use the subprocess module, but I'm
editing other people's code and I'm trying to make as few changes as possible,
which includes avoiding importing any new modules. So I'd like to stick to
the currently-imported modules (os, sys, and paths) if at all possible.
The code is currently (in a file called postfix-to-mailman.py that some of you
may be familiar with):
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
This works fine (though I think sys.exit(0) might never be called and thus be
unnecessary).
I believe this replaces the current process with a call to /usr/sbin/sendmail
passing it the arguments /usr/sbin/sendmail (for argv[0] i.e. itself) and
'someaddress@someplace.com', then passes the environment of the current
process - including the email message in sys.stdin - to the child process.
What I'd like to do is essentially send another copy of the message before
doing this. I can't use execv again because then execution will stop. So I've
tried the following:
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.spawnv(os.P_WAIT, "/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'other@place.com'))
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
However, while it sends the message to other@place.com, it never sends it to
first@place.com
This surprised me because I thought using spawn would start a child process
and then continue execution in the current process when it returns (or without
waiting, if P_NOWAIT is used).
Incidentally, I tried os.P_NOWAIT first, but the message I got at
other@place.com was empty, so at least when I used P_WAIT the message came
through intact. But it still never got sent to first@place.com which is a
problem.
I'd rather not use os.system if I can avoid it because I'd rather not go out
to a shell environment if it can be avoided (security issues, possible
performance? I admit I'm being paranoid here, but if I can avoid os.system I'd
still like to).
The only thing I can think of is that the call to os.spawnv is somehow
consuming/emptying the contents of sys.stdin, but that doesn't really make
sense either. Ideas?
Answer: While it might not make sense, that does appear to be the case:
import os
os.spawnv(os.P_WAIT,"/usr/bin/wc", ("/usr/bin/wc",))
os.execv("/usr/bin/wc", ("/usr/bin/wc",))
$ cat j.py | python j.py
4 6 106
0 0 0
In which case you might do something like this
import os
import sys
buf = sys.stdin.read()
wc = os.popen("usr/sbin/sendmail other@place.com","w")
wc.write(buf)
wc.close()
wc = os.popen("usr/sbin/sendmail first@place.com","w")
wc.write(buf)
wc.close()
sys.exit(0)
|
Extracting date from a string in Python
Question: How can I extract the date from a string like "monkey 2010-07-10 love banana"?
Thanks!
Answer: Using [python-dateutil](http://labix.org/python-dateutil):
In [1]: import dateutil.parser as dparser
In [18]: dparser.parse("monkey 2010-07-10 love banana",fuzzy=True)
Out[18]: datetime.datetime(2010, 7, 10, 0, 0)
Invalid dates raise a `ValueError`:
In [19]: dparser.parse("monkey 2010-07-32 love banana",fuzzy=True)
# ValueError: day is out of range for month
It can recognize dates in many formats:
In [20]: dparser.parse("monkey 20/01/1980 love banana",fuzzy=True)
Out[20]: datetime.datetime(1980, 1, 20, 0, 0)
Note that it makes a guess if the date is ambiguous:
In [23]: dparser.parse("monkey 10/01/1980 love banana",fuzzy=True)
Out[23]: datetime.datetime(1980, 10, 1, 0, 0)
But the way it parses ambiguous dates is customizable:
In [21]: dparser.parse("monkey 10/01/1980 love banana",fuzzy=True, dayfirst=True)
Out[21]: datetime.datetime(1980, 1, 10, 0, 0)
|
Basic data storage with Python
Question: I need to store basic data about customers, the cars they bought, and the payment
schedule for these cars. The data comes from a GUI written in Python. I don't
have enough experience to use a database system like SQL, so I want to store
my data in a file as plain text. It doesn't have to be online.
To be able to search and filter the data, I first convert it (lists of lists)
to a string, and when I need the data I convert it back to regular Python list
syntax. I know it is a very brute-force way, but is it safe to do it like that, or
can you advise another way?
Answer: It is never safe to save your database in a text format (or using pickle or
whatever). There is a risk that problems while saving the data may cause
corruption, not to mention the risk of your data being stolen.
As your dataset grows there may also be a performance hit.
Have a look at sqlite (or sqlite3), which is small and easier to manage than
MySQL - unless you have a very small dataset that will fit in a text file.
P/S: By the way, using Berkeley DB in Python is simple; you don't have to learn
all the DB things, just import bsddb.
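A minimal sketch of the sqlite3 route, in case it helps; the table layout, column names and file name are invented for illustration:
import sqlite3

conn = sqlite3.connect('customers.db')   # a single file on disk
conn.execute("""CREATE TABLE IF NOT EXISTS purchases
                (customer TEXT, car TEXT, monthly_payment REAL)""")
conn.execute("INSERT INTO purchases VALUES (?, ?, ?)",
             ("Alice", "Model T", 120.0))
conn.commit()
# searching/filtering becomes a query instead of re-parsing a text file
for row in conn.execute("SELECT * FROM purchases WHERE customer = ?", ("Alice",)):
    print row
conn.close()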
|
How do I convert a python string to ucs2 hex?
Question: I've been searching for this one and couldn't find it, although it seems
simple. I need to send in a ucs2 hex string in the url, and I don't know how
to convert a python string to be ucs2 hex. Any thoughts?
Answer:
>>> 'åéîøü'.encode('utf16')
b'\xff\xfe\xe5\x00\xe9\x00\xee\x00\xf8\x00\xfc\x00'
(Note that there's a BOM in the beginning. Use the encoding `'utf_16_be'` or
`'utf_16_le'` if the endian is fixed.)
If you need hex digits, use
[`binascii.hexlify`](http://docs.python.org/py3k/library/binascii.html?highlight=binascii.hexlify#binascii.hexlify).
>>> import binascii
>>> binascii.hexlify('åéîøü'.encode('utf16'))
b'fffee500e900ee00f800fc00'
|
Automatically call all functions matching a certain pattern in python
Question: In Python I have many functions like the ones below. I would like to run all
the functions whose name matches `setup_*` without having to explicitly call
them from main. The order in which the functions are run is not important. How
can I do this in python?
def setup_1():
    ....
def setup_2():
    ....
def setup_3():
    ...
...
if __name__ == '__main__':
    setup_*()
Answer:
def setup_1():
    print('1')
def setup_2():
    print('2')
def setup_3():
    print('3')
if __name__ == '__main__':
    for func in (val for key,val in vars().items()
                 if key.startswith('setup_')):
        func()
yields
# 1
# 3
# 2
|
wxPython Geometry problem
Question: I'm trying to get the button to be to the right of the label. I set the tuple and am
still not sure why it covers the label.
Also, is there a good tutorial available on wxPython geometry?
import wx
import wx.lib.agw.gradientbutton as GB

def GetRoundBitmap( w, h, r ):
    maskColor = wx.Color(0,0,0)
    shownColor = wx.Color(5,5,5)
    b = wx.EmptyBitmap(w,h)
    dc = wx.MemoryDC(b)
    dc.SetBrush(wx.Brush(maskColor))
    dc.DrawRectangle(0,0,w,h)
    dc.SetBrush(wx.Brush(shownColor))
    dc.SetPen(wx.Pen(shownColor))
    dc.DrawRoundedRectangle(0,0,w,h,r)
    dc.SelectObject(wx.NullBitmap)
    b.SetMaskColour(maskColor)
    return b

def GetRoundShape( w, h, r ):
    return wx.RegionFromBitmap( GetRoundBitmap(w,h,r) )

class FancyFrame(wx.Frame):
    def __init__(self):
        style = ( wx.CLIP_CHILDREN | wx.STAY_ON_TOP | wx.FRAME_NO_TASKBAR |
                  wx.NO_BORDER | wx.FRAME_SHAPED )
        wx.Frame.__init__(self, None, title='Fancy', style = style)
        self.SetSize( (250, 40) )
        self.SetPosition( (500,500) )
        self.SetTransparent( 160 )
        self.Bind(wx.EVT_KEY_UP, self.On_Esc)
        self.Bind(wx.EVT_MOTION, self.OnMouse)
        self.Bind(wx.EVT_PAINT, self.OnPaint)
        if wx.Platform == '__WXGTK__':
            self.Bind(wx.EVT_WINDOW_CREATE, self.SetRoundShape)
        else:
            self.SetRoundShape()
        self.Show(True)
        geo = wx.GridBagSizer()
        self.label = wx.StaticText(self,-1,label=u'Hello !')
        self.label.SetBackgroundColour("#000000")
        self.label.SetForegroundColour(wx.WHITE)
        self.label.SetSize( (50, 10) )
        geo.Add(self.label, (0,0))
        self.button = GB.GradientButton(self,label="button")
        self.label.SetBackgroundColour("#9e9e9e")
        geo.Add(self.button, (0,1))

    def SetRoundShape(self, event=None):
        w, h = self.GetSizeTuple()
        self.SetShape(GetRoundShape( w,h, 10 ) )

    def OnPaint(self, event):
        dc = wx.PaintDC(self)
        dc = wx.GCDC(dc)
        w, h = self.GetSizeTuple()
        r = 10
        dc.SetPen( wx.Pen("#000000", width = 4 ) )
        dc.SetBrush( wx.Brush("#9e9e9e") )
        dc.DrawRoundedRectangle( 0,0,w,h,r )

    def On_Esc(self, event):
        """quit if user press Esc"""
        if event.GetKeyCode() == 27 : #27 is Esc
            self.Close(force=True)
        else:
            event.Skip()

    def OnMouse(self, event):
        """implement dragging"""
        if not event.Dragging():
            self._dragPos = None
            return
        self.CaptureMouse()
        if not self._dragPos:
            self._dragPos = event.GetPosition()
        else:
            pos = event.GetPosition()
            displacement = self._dragPos - pos
            self.SetPosition( self.GetPosition() - displacement )

app = wx.App()
f = FancyFrame()
app.MainLoop()
Answer: You forgot to set `FancyFrame` to have the given layout sizer.
In other words you need to add one line to the end of your `FancyFrame`'s
`__init__` method.
self.SetSizerAndFit(geo)
|
Weird subprocess issue with Django
Question: I'm sorry if this is a duplicate question, but after searching through 3 pages
for "django subprocess", I, for one, could not find the answer to my
particular problem.
I'm trying to run `pdflatex` on `tex` file, but for some reason in Django it
doesn't produce anything. It works just fine in a regular python script,
though. I've omitted most of the code here, but this is basically the
important bit. I'm running this on apache2 with mod_wsgi, and I suspect that
it might be an apache permissions related problem, dunno though. Thanks in
advance.
import subprocess
test = subprocess.Popen(['pdflatex','/home/sheepz/test.tex'],shell=True, stdout=subprocess.PIPE)
log = open('/home/sheepz/log.log', 'w')
log.write(str(test.communicate()))
log.close()
the content of the file "log.log":
('This is pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian)\n restricted \\write18 enabled.\n**\n! End of file on the terminal... why?\n', None)
EDIT: The solution for this issue is quite easy. I just want to add it here
so everyone who's having trouble with this can find it. Basically it involves
running the site as a different user rather than www-data, using the
[WSGIDaemonProcess](http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess)
configuration directive. Here is a minimal configuration:
ServerName www.mysite.com
ServerAlias *mysite.com
WSGIDaemonProcess www.mysite.com user=joe group=joe home=/home/joe/
WSGIProcessGroup www.mysite.com
Also, it would be advisable to add `WSGIRestrictStdout Off` to your
httpd.conf, because, as far as I understand, mod_wsgi ignores any process that
tries to use stdout. Thanks, Graham.
Answer: Specify the absolute path to 'pdflatex' in the command. The Apache user's PATH
may not include the same directories as your personal account's does. Also, Apache
runs as a special user that isn't going to have write access to the places you can
write to. I suggest you use mod_wsgi daemon mode and specify that the daemon
process runs as you and not as the Apache user. Read up on the documentation on
the mod_wsgi site as to how to do that.
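A sketch of the first suggestion; the pdflatex path and working directory are assumptions, so check them on your system (e.g. with `which pdflatex`):
import subprocess

# Call pdflatex by absolute path so the Apache user's PATH does not matter,
# and set cwd to a directory the Apache/daemon user can actually write to.
proc = subprocess.Popen(
    ['/usr/bin/pdflatex', '-interaction=nonstopmode', '/home/sheepz/test.tex'],
    cwd='/home/sheepz',
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = proc.communicate()
open('/home/sheepz/log.log', 'w').write(output)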
|
Why aren't my sqlite3 foreign keys working?
Question: I run the following code from a python interpreter, and expect the insert
statement to fail and throw some kind of exception. But it's not happening:
Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> conn = sqlite3.connect("test.db")
>>> conn.executescript("""
... pragma foreign_keys=on;
... begin transaction;
... create table t1 (i integer primary key, a);
... create table t2 (i, a, foreign key (i) references t1(i));
... commit;
... """)
<sqlite3.Cursor object at 0x0229DAA0>
>>> c = conn.cursor()
>>> c.execute("insert into t2 values (6, 8)")
<sqlite3.Cursor object at 0x0229DAD0>
>>> #???
...
>>> conn.commit()
>>> #???????????
...
>>> c.execute("select * from t2")
<sqlite3.Cursor object at 0x0229DAD0>
>>> c.fetchall()
[(6, 8)]
>>> #but why!?
...
>>>
Does anyone know why this doesn't want to work? My understanding is that the
insert should fail since the value I gave for `t2(i)` isn't a primary key in
`t1`, but it happily does it anyway...?
Answer: Working foreign key support in SQLite is very new -- it was only released in
3.6.19 on October 14th. Are you sure you're using SQLite 3.6.19 or later?
Check the sqlite_version constant in the sqlite3 module. E.g. on a Mac OS X
10.6 system with the default python/sqlite install:
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.6.12'
>>>
|
@StaticMethod or @ClassMethod decoration on magic methods
Question: I am trying to decorate the magic method `__getitem__` to be a classmethod on
the class. Here is a sample of what I tried. I don't mind using either
classmethod or staticmethod decoration, but I am not too sure how to do it.
Here is what I tried:
import ConfigParser

class Settings(object):
    _env = None
    _config = None

    def __init__(self, env='dev'):
        _env = env
        # find the file
        filePath = "C:\\temp\\app.config"
        # load the file
        _config = ConfigParser.ConfigParser()
        _config.read(filePath)

    @classmethod
    def __getitem__(cls, key):
        return cls._config.get(cls._env, key)

    @classmethod
    def loadEnv(cls, env):
        cls._env = env
However, when I try to call `Settings['database']` I get the following error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: expected Array[Type], got str
Can anyone tell me what I am doing wrong? Also, could someone suggest whether there
is a better way to do this? I even tried using metaclasses, but with little
success (as I don't know python too well).
class Meta(type):
    def __getitem__(*args):
        return type.__getitem__(*args)

class Settings(object):
    __metaclass__ = Meta
Thanks in advance.
Answer: Python always looks up `__getitem__` and other magic methods on the class, not
on the instance. So, for example, defining a `__getitem__` in a metaclass
means that you can index **the class** (but you can't define it by delegating
to a non-existent `__getitem__` in `type` \-- just as you can never define
anything by delegating to other non-existent methods, of course;-).
So, if you need to index a class such as `Settings`, your custom metaclass
must indeed define `__getitem__`, but it must define it with explicit code
that performs the action you desire -- the `return cls._config.get` you want.
**Edit** : let me give a simplified example...:
>>> class MyMeta(type):
...     def __getitem__(cls, k):
...         return cls._config.get(k)
...
>>> class Settings:
...     __metaclass__ = MyMeta
...     _config = dict(foo=23, bar=45)
...
>>> print Settings['foo']
23
Of course, if that was all there was to it, it would be silly to architect
this code as "indexing a class" -- a class had better have instances with
states and methods, too, otherwise you should just code a module instead;-).
And why the "proper" access should be by indexing the whole class rather than
a specific instance, etc, is far from clear. But I'll pay you the compliment
of assuming you have a good design reason for wanting to structure things this
way, and just show you _how_ to implement such a structure;-).
|
Python function based on Scrapy to crawl entirely a web site
Question: I recently discovered [Scrapy](http://doc.scrapy.org/index.html), which I find
very efficient. However, I really don't see how to embed it in a larger
project written in Python. I would like to create a spider in the normal way
but be able to launch it on a given url with a function
start_crawl(url)
which would launch the crawling process on a given domain and stop only when
all the pages have been seen.
Answer: Scrapy is much more complicated than that. It runs several processes and uses
multi-threading, so in fact there is no way to use it as a normal Python function. Of
course you can import the function that starts the crawler and invoke it, but what
then? You will have a normal Scrapy process that has taken control of your
program.
Probably the best approach here is to run Scrapy as a subprocess of your
program and communicate with it using a database or a file (see the sketch below). You get good
separation between your program and the crawler, and solid control over the main
process.
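A rough sketch of that subprocess approach; the spider name, project path and the `-a`/`-o` flags assume a reasonably recent Scrapy command line and are not taken from the answer:
import subprocess

def start_crawl(url):
    # Run the crawler as a child process and let it dump scraped items
    # to a JSON file, which the calling program then reads back.
    subprocess.check_call(
        ['scrapy', 'crawl', 'myspider', '-a', 'start_url=' + url,
         '-o', 'items.json'],
        cwd='/path/to/scrapy_project')
    with open('/path/to/scrapy_project/items.json') as f:
        return f.read()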
|
Best way to handle reload-ing of objects, not modules
Question: I'm doing a lot of development in IPython where
In[3]: from mystuff import MyObject
and then I make lots of changes in mystuff.py. In order to update the
namespace, I have to do
In[4]: reload(mystuff)
In[5]: from mystuff import MyObject
Is there a better way to do this? Note that I cannot import MyObject by
referencing mystuff directly as with
In[6]: import mystuff
In[7]: mystuff.MyObject
since that's not how it works in the code. Even better would be to have
IPython _automatically_ do this when I write the file (but that's probably a
question for another time).
Any help appreciated.
Answer: You can use the `deep_reload` feature from IPython to do this.
<http://ipython.scipy.org/doc/manual/html/interactive/reference.html?highlight=dreload>
Run ipython with the `-deep_reload` parameter to replace the normal
`reload()` method.
And if that does not do what you want, it would be possible to write a script
to replace all the imported modules in the scope automatically. Fairly hacky
though, but it should work ;)
I've just found the `ipy_autoreload` module. Perhaps that can help you a bit.
I'm not yet sure how it works but this should work according to the docs:
import ipy_autoreload
%autoreload 1
|
Create MySQL table from xls spreadsheet
Question: I wonder if there is a (native) possibility to create a MySQL table from an
.xls or .xlsx spreadsheet. Note that I do not want to import a file into an
existing table with LOAD DATA INFILE or INSERT INTO, but to create the table
from scratch, i.e. using the header as columns (with some default field type,
e.g. INT), and then insert the data in one step.
So far I have used a Python script to build a CREATE statement and imported the
file afterwards, but that approach feels somewhat clumsy.
Answer: There is no native MySQL tool that does this, but the [`MySQL PROCEDURE
ANALYSE`](http://dev.mysql.com/doc/refman/5.0/en/procedure-analyse.html) might
help you suggest the correct column types.
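For reference, here is a sketch of the script-based approach the question mentions, using the `xlrd` package; the file name, table name and the INT default are just examples:
import xlrd

book = xlrd.open_workbook('data.xls')
sheet = book.sheet_by_index(0)
headers = [str(h) for h in sheet.row_values(0)]

# Header row becomes the column list; every column defaults to INT.
create = "CREATE TABLE mytable (%s);" % ", ".join("`%s` INT" % h for h in headers)
inserts = ["INSERT INTO mytable VALUES (%s);" %
           ", ".join(repr(v) for v in sheet.row_values(r))
           for r in range(1, sheet.nrows)]

print create
print "\n".join(inserts)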
|
amara and django
Question: Hello, I am trying to make web service calls from Django views using the Amara
library.
However, any time I do `import amara` (simply importing it!) and call a
Django view with it imported, I get errors like this:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:4444/test
Django Version: 1.2.1
Python Version: 2.6.5
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.admin',
'azula.epgdb',
'django_extensions']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
80. response = middleware_method(request)
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in process_request
58. _is_valid_path("%s/" % request.path_info, urlconf)):
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in _is_valid_path
143. urlresolvers.resolve(path, urlconf)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
301. return get_resolver(urlconf).resolve(path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
123. return self.callback, args, kwargs
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_callback
129. self._callback = get_callable(self._callback_str)
File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py" in wrapper
124. result = func(*args)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in get_callable
56. lookup_view = getattr(import_module(mod_name), func_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/home/uluc/azula/epgdb/views.py" in <module>
4. from azula.epgdb.utils import EventSet
File "/home/uluc/azula/epgdb/utils.py" in <module>
6. import amara
File "/usr/lib/pymodules/python2.6/amara/__init__.py" in <module>
11. import binderytools
File "/usr/lib/pymodules/python2.6/amara/binderytools.py" in <module>
13. from Ft.Xml import InputSource
File "/usr/lib/python2.6/dist-packages/Ft/Xml/InputSource.py" in <module>
355. DefaultFactory = InputSourceFactory(catalog=GetDefaultCatalog())
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in GetDefaultCatalog
579. catalog = Catalog(uri, quiet)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in __init__
95. self._parseXmlCat(data)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in _parseXmlCat
372. from Ft.Xml.Sax import CreateParser
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in <module>
242. class SaxPrinter(ContentHandler):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in SaxPrinter
247. def __init__(self, printer=XmlPrinter(sys.stdout, 'utf-8')):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Lib/XmlPrinter.py" in __init__
39. self.stream = sw = cStreamWriter.StreamWriter(stream, encoding)
Exception Type: TypeError at /test
Exception Value: argument must have 'write' attribute
How can this be solved? I tried this under both Debian Lenny and Ubuntu 10.04,
with the Django SVN version and Amara 1.
I suspect some character encoding problem.
Answer: Putting "WSGIRestrictStdout Off" to my apache fixed the issue as said in
[here](http://forum.springsource.org/showpost.php?p=225788&postcount=3).
|
Creating a python priority Queue
Question: I would like to build a priority queue in Python in which the queue contains
different dictionaries with their priority numbers. When a "get" function
is called, the dictionary with the highest priority (lowest number) should be
pulled out of the queue, and when an "add" function is called, the new dictionary
should be added to the queue and sorted based on its priority number.
Please do help out...
Thanks in advance!
Answer: Use the heapq module in the standard library.
You don't specify how you wanted to associate priorities with dictionaries,
but here's a simple implementation:
import heapq
class MyPriQueue(object):
    def __init__(self):
        self.heap = []

    def add(self, d, pri):
        heapq.heappush(self.heap, (pri, d))

    def get(self):
        pri, d = heapq.heappop(self.heap)
        return d
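For example (using the class above; the dictionaries are arbitrary):
q = MyPriQueue()
q.add({'task': 'low priority'}, 5)
q.add({'task': 'urgent'}, 1)
print q.get()   # {'task': 'urgent'} - the lowest number comes out first
print q.get()   # {'task': 'low priority'}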
|
Globbing with python rpm module?
Question: The following code uses the `rpm` module to query the version of an installed
package. What I would like to do is to query a set of packages specified by a
glob, for example searching for `"python*"` rather than `"python"`. Is this
possible using the `rpm` module?
#!/usr/bin/python

import rpm

ts = rpm.TransactionSet()
mi = ts.dbMatch("name", "python")
for i in mi:
    print i['name'], i['version']
Answer:
import rpm
ts = rpm.TransactionSet()
mi = ts.dbMatch()
mi.pattern('name', rpm.RPMMIRE_GLOB, 'py*' )
for h in mi:
    # Do something with the header, e.g.:
    print h['name'], h['version']
|
Python 2.5 Import dll AttributeError
Question: I have a program that runs peachy in Py2.4. I import the TobiiPlugin.dll file
and then run my scripts.
import TobiiPlugin as tobii
tobii.setGazeSubjectProfile(3, 0)
However, when I moved the code to Py2.5 it gets angry at me and I get
Traceback (most recent call last):
File "C:\tobiiDll\TobiiPlugin\Debug\logger_speech.py", line 274, in <module>
main()
File "C:\tobiiDll\TobiiPlugin\Debug\logger_speech.py", line 242, in main
tobii.setGazeSubjectProfile(3, 0)
File "C:\Python25\lib\ctypes\__init__.py", line 325, in __getattr__
func = self.__getitem__(name)
File "C:\Python25\lib\ctypes\__init__.py", line 330, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'setGazeSubjectProfile' not found
>>>
How did everything manage to go missing? It's not just this function either. I
tried others from the DLL and they didn't work either. Thank you!
Answer: For some reason the [Tobii SDK 3.0
Beta](http://landningpages.tobii.com/landingpads/analysis_sdk_beta.aspx) only
works with Python 2.4 or Python 2.6.
|
How to Convert Extended ASCII to HTML Entity Names in Python?
Question: I'm currently doing this to replace extended-ascii characters with their HTML-
entity-number equivalents:
s.encode('ascii', 'xmlcharrefreplace')
What I would like to do is convert to the HTML-entity-name equivalent (i.e.
`&copy;` instead of `&#169;`). The small program below shows what I'm trying
to do and how it fails. Is there a way to do this, aside from doing a
find/replace?
#coding=latin-1

def convertEntities(s):
    return s.encode('ascii', 'xmlcharrefreplace')

ok = 'ascii: !@#$%^&*()<>'
not_ok = u'extended-ascii: ©®°±¼'
ok_expected = ok
not_ok_expected = 'extended-ascii: &copy;&reg;&deg;&plusmn;&frac14;'

ok_2 = convertEntities(ok)
not_ok_2 = convertEntities(not_ok)

if ok_2 == ok_expected:
    print 'ascii worked'
else:
    print 'ascii failed: "%s"' % ok_2

if not_ok_2 == not_ok_expected:
    print 'extended-ascii worked'
else:
    print 'extended-ascii failed: "%s"' % not_ok_2
Answer: **edit**
Others have mentioned the `htmlentitydefs` module that I never knew about. It can
be used with my code this way (note that the replacement has to go from the raw
character to the named entity, not the other way around):
from htmlentitydefs import entitydefs as symbols
for tag, val in symbols.iteritems():
    mystr = mystr.replace(val, "&{0};".format(tag))
And that should work.
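A possibly more robust variant (my own sketch, not part of the original answer) goes through `htmlentitydefs.codepoint2name`, so only non-ASCII characters that actually have a named entity are replaced and the input can stay unicode:
from htmlentitydefs import codepoint2name

def convertEntities(s):
    # Replace each character above 127 that has a named entity,
    # e.g. u'\xa9' -> '&copy;'; everything else passes through unchanged.
    return u''.join(u'&%s;' % codepoint2name[ord(c)]
                    if ord(c) > 127 and ord(c) in codepoint2name else c
                    for c in s)

print convertEntities(u'extended-ascii: \xa9\xae\xb0\xb1\xbc')
# -> extended-ascii: &copy;&reg;&deg;&plusmn;&frac14;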
|
How to extend pretty print module to tables?
Question: I have a pretty-print module, which I wrote because I was not happy that the
pprint module produced a zillion lines for a list of numbers that contained one
list of lists. Here is an example use of my module.
>>> a=range(10)
>>> a.insert(5,[range(i) for i in range(10)])
>>> a
[0, 1, 2, 3, 4, [[], [0], [0, 1], [0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
>>> import pretty
>>> pretty.ppr(a,indent=6)
[0, 1, 2, 3, 4,
[
[],
[0],
[0, 1],
[0, 1, 2],
[0, 1, 2, 3],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
Code is like this:
""" pretty.py prettyprint module version alpha 0.2
mypr: pretty string function
ppr: print of the pretty string
ONLY list and tuple prettying implemented!
"""
def mypr(w, i = 0, indent = 2, nl = '\n') :
""" w = datastructure, i = indent level, indent = step size for indention """
startend = {list : '[]', tuple : '()'}
if type(w) in (list, tuple) :
start, end = startend[type(w)]
pr = [mypr(j, i + indent, indent, nl) for j in w]
return nl + ' ' * i + start + ', '.join(pr) + end
else : return repr(w)
def ppr(w, i = 0, indent = 2, nl = '\n') :
""" see mypr, this is only print of mypr with same parameters """
print mypr(w, i, indent, nl)
Here is a fixed-width text table, printed manually, that I would like my
pretty-print module to handle:
## let's do it "manually"
width = len(str(10+10))
widthformat = '%'+str(width)+'i'
for i in range(10):
    for j in range(10):
        print widthformat % (i+j),
    print
Do you have a better alternative for this code that is generalized enough for the
pretty-printing module?
What I found for this kind of regular case, after posting the question, is this
module: [prettytable, a simple Python library for easily displaying tabular
data in a visually appealing ASCII table
format](http://code.google.com/p/prettytable/)
Answer: If you're looking for nice formatting for matrices,
[numpy](http://numpy.scipy.org/)'s output looks great right out of the box:
from numpy import *
print array([[i + j for i in range(10)] for j in range(10)])
Output:
[[ 0 1 2 3 4 5 6 7 8 9]
[ 1 2 3 4 5 6 7 8 9 10]
[ 2 3 4 5 6 7 8 9 10 11]
[ 3 4 5 6 7 8 9 10 11 12]
[ 4 5 6 7 8 9 10 11 12 13]
[ 5 6 7 8 9 10 11 12 13 14]
[ 6 7 8 9 10 11 12 13 14 15]
[ 7 8 9 10 11 12 13 14 15 16]
[ 8 9 10 11 12 13 14 15 16 17]
[ 9 10 11 12 13 14 15 16 17 18]]
|
Problem with Python logging RotatingFileHandler in Django website
Question: I have a Django-powered website, and I use the standard logging module to track
web activity.
The logging is done via RotatingFileHandler, which is configured with 10 log files
of 1000000 bytes each. The log system works, but these are the log files I get:
-rw-r--r-- 1 apache apache 83 Jul 23 13:30 hr.log
-rw-r--r-- 1 apache apache 446276 Jul 23 13:03 hr.log.1
-rw-r--r-- 1 apache apache 999910 Jul 23 06:00 hr.log.10
-rw-r--r-- 1 apache apache 415 Jul 23 16:24 hr.log.2
-rw-r--r-- 1 apache apache 479636 Jul 23 16:03 hr.log.3
-rw-r--r-- 1 apache apache 710 Jul 23 15:30 hr.log.4
-rw-r--r-- 1 apache apache 892179 Jul 23 15:03 hr.log.5
-rw-r--r-- 1 apache apache 166 Jul 23 14:30 hr.log.6
-rw-r--r-- 1 apache apache 890769 Jul 23 14:03 hr.log.7
-rw-r--r-- 1 apache apache 999977 Jul 23 12:30 hr.log.8
-rw-r--r-- 1 apache apache 999961 Jul 23 08:01 hr.log.9
As you can see it is a mess. Last log has been written to file hr.log.2 (Jul
23 16:24) **instead of hr.log** , and [logging
documentation](http://docs.python.org/library/logging.html#rotatingfilehandler)
states that:
_[...]For example, with a backupCount of 5 and a base file name of app.log,
you would get app.log, app.log.1, app.log.2, up to app.log.5.**The file being
written to is always app.log**. When this file is filled, it is closed and
renamed to app.log.1, and if files app.log.1, app.log.2, etc. exist, then they
are renamed to app.log.2, app.log.3 etc. respectively._
What am I doing wrong?
* * *
My logging configuration file is:
logger.conf:
[loggers]
keys=root
[handlers]
keys=fileHandler
[formatters]
keys=simple
#--------------------------------------------------------------------
# Formatters
[formatter_simple]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
#--------------------------------------------------------------------
# Handlers
[handler_fileHandler]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=simple
args=("/data/django/hr/hr.log",'a',1000000,10)
#--------------------------------------------------------------------
# Loggers
[logger_root]
level=DEBUG
handlers=fileHandler
and my python module to set up the log system is:
logger.py
import os, logging, logging.config
# Load config file
logger_config_file = \
    os.path.join(os.path.abspath(os.path.dirname(__file__)), 'logger.conf')
logging.config.fileConfig(logger_config_file)
# Create logger
logger = logging.getLogger('hr_Logger')
# Log start message
logger.info("Logging system started")
then, at the top of my views.py I have:
import logging
from hr import logger
log = logging.getLogger('hr.views')
log.info('Load hr.views')
[...]
Answer: I've seen this behavior when multiple processes are running your code.
Unfortunately no perfect option exists.
Some ideas you can incorporate are:
* use WatchedFileHandler (new in 2.6) and rotate with an external program such as logrotate (see the sketch after this list)
* use syslog or another log-aggregating server
* use Python log aggregation with [sentry](https://github.com/dcramer/django-sentry) - this is especially useful with Django, as you can log not only log messages but also exceptions with full stack traces and 404s.
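A minimal sketch of the first option (my own illustration, not from the answer): hand rotation over to logrotate and use `WatchedFileHandler` so every process reopens the file after it has been rotated:
import logging
import logging.handlers

handler = logging.handlers.WatchedFileHandler('/data/django/hr/hr.log')
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(handler)   # the rotation itself is left to logrotate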
|
Simple Python variable scope
Question: It seems to me that functions can reference variables outside of their scope
but cannot set them. Is this correct? Am I understanding this right?
I also included the globals usage. I know they are bad ju-ju and will avoid
them; I know how to get around this, but just wanted to be clear.
My example program:
import foo

# beginning of functions

# this one works because I look at the variable but dont modify it
def do_something_working():
    if flag_to_do_something:
        print "I did it"

# this one does not work because I modify the var
def do_something_not_working():
    if flag_to_do_something:
        print "I did it"
        flag_to_do_something = 0

# this one works, but if I do this God kills a kitten
def do_something_using_globals_working_bad_ju_ju():
    global flag_to_do_something
    if flag_to_do_something:
        print "I did it"
        flag_to_do_something = 0

# end of functions

flag_to_do_something = 1
do_something_working()
do_something_not_working()
do_something_using_globals_working_bad_ju_ju()
Answer: Correct. Well, mostly. When you write `flag_to_do_something = 0` you are not
modifying the variable, you are creating a new variable. The
`flag_to_do_something` that is created in the function will be a separate link
to (in this case) the same object. However, if you had used a function or
operator that modified the variable in place, then the code would have worked.
Example:
g = [1,2,3]
def a():
    g = [1,2]
a()
print g #outputs [1,2,3]

g = [1,2,3]
def b():
    g.remove(3)
b()
print g #outputs [1,2]
|
How do I connect to a UDP port in Python?
Question: Like everyone else, I can say "I've tried everything!" I kind of did. I looked
all over StackOverflow, and tried all the answers, but got nothing. Anyways, I
am jetting to at least get some code printed by Python before I get even
further in developing this.
I want to receive UDP packets from my Garry's Mod server (logaddress_add
MyIP:7131), and I don't seem to be receiving any of those packets. It's most
likely not a router firewall problem, as I can use HLSW on my other computer.
I have used Wireshark, and didn't see any data from my server's IP. I used the
Python interpreter / [made some
code](http://docs.python.org/library/socket.html#socket-example) (although
example was TCP) to see if I got any data--to make sure Wireshark wasn't doing
anything wrong--and nothing came to it either. Am I doing something silly?
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 7131))
sock.settimeout(10)
sock.recv(1024)
Edit: I was doing some testing with HLSW, and found that it seems to be doing
some kind of magic. When you try to logaddress_add a certain port that HLSW is
not using (say 7135), it won't do anything. Wireshark won't do anything at all -
it doesn't show any logs, anything. But when you change HLSW to use the port
that you just added (7135), Wireshark suddenly gets a flow of data, including
the console data that I am jetting for. Is it some kind of configuration HLSW
is changing?
Answer: (Not quite an answer, but a diagnostic path that might lead to an answer.
Sometimes it helps just to know that it actually worked for someone else.)
I've entered the above into a Python console, and then typed the code below
into another Python console:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto('hello', 0, ('127.0.0.1', 7131))
The message popped out on the original console. I repeated the experiment from
another machine, using a '192.168...' address instead, and it popped out
again.
|
Character detection in a text file in Python using the Universal Encoding Detector (chardet)
Question: I am trying to use the Universal Encoding Detector (chardet) in Python to
detect the most probable character encoding in a text file ('infile') and use
that in further processing.
While chardet is designed primarily for detecting the character encoding of
webpages, I have found an
[example](http://dbaktiar.wordpress.com/2009/09/15/chardet-python-library-
determining-character-encoding-of-text/) of it being used on individual text
files.
However, I cannot work out how to tell the script to set the most likely
character encoding to the variable 'charenc' (which is used several times
throughout the script).
My code, based on a combination of the aforementioned example and chardet's
own [documentation](http://chardet.feedparser.org/docs/usage.html) is as
follows:
import chardet
rawdata=open(infile,"r").read()
chardet.detect(rawdata)
Character detection is necessary as the script goes on to run the following
(as well as several similar uses):
inF=open(infile,"rb")
s=unicode(inF.read(),charenc)
inF.close()
Any help would be greatly appreciated.
Answer: `chardet.detect` returns a dictionary which provides the encoding as the value
associated with the key `'encoding'`. So you can do this:
import chardet
rawdata = open(infile, "r").read()
result = chardet.detect(rawdata)
charenc = result['encoding']
|
pydev importerror: no module named thread, debugging no longer works after pydev upgrade
Question: My Eclipse 3.6 /PyDev setup just did a pydev upgrade to 1.6.0.2010071813 and
debugging no longer works. My default python interpreter is 3.1 although I
doubt that matters. Until the Eclipse upgrade of pydev, it was working very
nicely.
Answer: This is already fixed in the current nightly (1.6.1). See:
<http://pydev.org/download.html> for details on getting it.
Note that you can just change that "import thread" locally (in
org.python.pydev.debug/pysrc/pydevd.py) for:
try:
    import thread
except ImportError:
    import _thread as thread #Py3K changed it.
Cheers,
Fabio
|
How to speed up multidimensional array access in scipy.weave?
Question: I'm weaving my c code in python to speed up the loop:
from scipy import weave
from numpy import *

#1) create the array
a=zeros((200,300,400),int)
for i in range(200):
    for j in range(300):
        for k in range(400):
            a[i,j,k]=i*300*400+j*400+k
#2) test on c code to access the array
code="""
for(int i=0;i<200;++i){
for(int j=0;j<300;++j){
for(int k=0;k<400;++k){
printf("%ld,",a[i*300*400+j*400+k]);
}
printf("\\n");
}
printf("\\n\\n");
}
"""
test =weave.inline(code, ['a'])
It all works, but it is still costly when the array is big. Someone
suggested I use a.strides instead of the nasty "a[i*300*400+j*400+k]", but I
can't understand the documentation about .strides.
Any ideas?
Thanks in advance
Answer: You could replace the 3 for-loops with
grid=np.ogrid[0:200,0:300,0:400]
a=grid[0]*300*400+grid[1]*400+grid[2]
The following suggests this may result in a ~68x (or better? see below)
speedup:
% python -mtimeit -s"import test" "test.m1()"
100 loops, best of 3: 17.5 msec per loop
% python -mtimeit -s"import test" "test.m2()"
1000 loops, best of 3: 247 usec per loop
test.py:
import numpy as np
n1,n2,n3=20,30,40

def m1():
    a=np.zeros((n1,n2,n3),int)
    for i in range(n1):
        for j in range(n2):
            for k in range(n3):
                a[i,j,k]=i*300*400+j*400+k
    return a

def m2():
    grid=np.ogrid[0:n1,0:n2,0:n3]
    b=grid[0]*300*400+grid[1]*400+grid[2]
    return b

if __name__=='__main__':
    assert(np.all(m1()==m2()))
With n1,n2,n3 = 200,300,400,
python -mtimeit -s"import test" "test.m2()"
took 182 ms on my machine, and
python -mtimeit -s"import test" "test.m1()"
has yet to finish.
|
Python access parent object instances
Question: I'm currently trying to write a multiple-file Python (2.6.5) game using
PyGame. The problem is that one of the files, "pyconsole.py", needs to be able
to call methods on instances of other objects imported by the primary file,
"main.py". The problem is that I have a list in the main file to hold
instances of all of the game objects (player's ship, enemy ships, stations,
etc.), yet I can't seem to be able to call methods from that list within
"pyconsole.py" despite the fact that I'm doing a `from pyconsole import *` in
"main.py" before the main loop starts. Is this simply not possible, and should
I instead use M4 to combine every file into 1 single file and then bytecode-
compile and test/distribute that?
Example:
bash$ cat test.py
#!/usr/bin/python
import math, distancefrom00
foo = 5
class BarClass:
def __init__(self):
self.baz = 10
def get(self):
print "The BAZ is ", self.baz
def switch(self):
self.baz = 15
self.get()
bar = BarClass()
def main():
bar.switch()
print distancefrom00.calculate([2, 4])
if __name__ == '__main__': main()
bash$ cat distancefrom00.py
#!/usr/bin/python
import math
import test
def calculate(otherpoint):
return str(math.hypot(otherpoint[0], otherpoint[1]))+" (foo = "+str(test.foo)+"; "+test.bar.get()+")"
bash$ python test.py
The BAZ is 15
The BAZ is 10
Traceback (most recent call last):
File "test.py", line 24, in <module>
if __name__ == '__main__': main()
File "test.py", line 22, in main
print distancefrom00.calculate([2, 4])
File "/home/archie/Development/Python/Import Test/distancefrom00.py", line 8, in calculate
return str(math.hypot(otherpoint[0], otherpoint[1]))+" (foo = "+str(test.foo)+"; "+test.bar.get()+")"
TypeError: cannot concatenate 'str' and 'NoneType' objects
If my somewhat limited understanding of Python names, classes, and all that stuff is correct here, the NoneType means that the name `test.bar.get()` - and thus, `test.bar` - is not assigned to anything.
Answer: > The problem is that one of the files, "pyconsole.py", needs to be able to
> call methods on instances of other objects imported by the primary file,
> "main.py".
This just sounds like the dependencies are wrong. Generally nothing should be
calling 'backwards' up to the main file. That main.py should be the glue that
holds everything else together, and nothing should depend on it. Technically
the dependencies should form a [directed acyclic
graph](http://en.wikipedia.org/wiki/Directed_acyclic_graph). As soon as you
find a cycle in your dependency graph, move out the common aspects into a new
file to break the cycle.
So, move the things in 'main.py' that are used by 'pyconsole.py' out into a
new file. Then have 'main.py' and 'pyconsole.py' import that new file.
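A minimal sketch of how that refactor might look, split across three files (the names here are hypothetical, just to show the shape):
    # shared.py -- the new module; holds the state both sides need
    game_objects = []
    # pyconsole.py -- depends only on shared, never on main
    import shared
    def list_objects():
        for obj in shared.game_objects:
            print obj
    # main.py -- the glue; depends on shared and pyconsole
    import shared, pyconsole
    shared.game_objects.append("player ship")   # stand-in for a real object instance
    pyconsole.list_objects()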
|
Python Web Server - Getting it to do other tasks
Question: Using the following example I can get a basic web server running but my
problem is that the handle_request() blocks the do_something_else() until a
request comes in. Is there any way around this to have the web server do other
back ground tasks?
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
while keep_running():
httpd.handle_request()
do_something_else()
Answer: You can use multiple threads of execution through the Python [threading
module](http://docs.python.org/library/threading.html). An example is below:
import threading
# ... your code here...
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
while keep_running():
httpd.handle_request()
if __name__ == '__main__':
background_thread = threading.Thread(target=do_something_else)
background_thread.start()
# ... web server start code here...
background_thread.join()
This will cause a thread which executes `do_something_else()` to start before
your web server. When the server shuts down, the `join()` call ensures
`do_something_else` finishes before the program exits.
|
Using urllib2 for posting data, following redirects and maintaining cookies
Question: I am using `urllib2` in **Python** to post login data to a web site.
After successful login, the site redirects my request to another page. Can
someone provide a simple code sample on how to do this in Python with
`urllib2`? I guess I will need cookies also to be _logged in_ when I get
redirected to another page. Right?
Thanks a lot in advance.
Answer: First, get mechanize: <http://wwwsearch.sourceforge.net/mechanize/>
You could do this kind of stuff with just urllib2, but you will be writing
tons of boilerplate code, and it will be buggy.
Then:
import mechanize
br = mechanize.Browser()
br.open('http://somesite.com/account/signin/')
br.select_form('loginForm')
br['username'] = 'jekyll'
br['password'] = 'bananas'
br.submit()
# At this point, you're logged in, redirected, and the
# br object has the cookies and all that.
br.geturl() # e.g. http://somesite.com/loggedin/
Then you can use the Browser object `br` and do whatever you have to do, click
on links, etc. Check the samples on the mechanize site
|
confusing python urlencode order
Question: okay, so according to <http://docs.python.org/library/urllib.html>
"The order of parameters in the encoded string will match the order of
parameter tuples in the sequence."
except when I try to run this code:
import urllib
values ={'one':'one',
'two':'two',
'three':'three',
'four':'four',
'five':'five',
'six':'six',
'seven':'seven'}
data=urllib.urlencode(values)
print data
outputs as ...
seven=seven&six=six&three=three&two=two&four=four&five=five&one=one
7,6,3,2,4,5,1?
That doesn't look like the order of my tuples.
Answer: Dictionaries are inherently unordered because of the way they are implemented.
If you want them to be ordered, you should use a list of tuples instead (or a
tuple of lists, or a tuple of tuples, or a list of lists...):
values = [ ('one', 'one'), ('two', 'two') ... ]
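A quick sketch of the difference -- with a list of tuples the output order follows the input order:
    import urllib
    values = [('one', 'one'), ('two', 'two'), ('three', 'three'), ('four', 'four')]
    print urllib.urlencode(values)
    # one=one&two=two&three=three&four=four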
|
CSV file written with Python has blank lines between each row
Question:
import csv
with open('thefile.csv', 'rb') as f:
data = list(csv.reader(f))
import collections
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
with open('/pythonwork/thefile_subset11.csv', 'w') as outfile:
writer = csv.writer(outfile)
for row in data:
if counter[row[10]] >= 504:
writer.writerow(row)
This code reads `thefile.csv`, makes changes, and writes results to
`thefile_subset1`.
However, when I open the resulting csv in Microsoft Excel, there is an extra
blank line after each record!
Is there a way to make it not put an extra blank line?
Answer: In Python 2, open `outfile` with mode `'wb'` instead of `'w'`. The
`csv.writer` writes `\r\n` into the file directly. If you don't open the file
in _binary_ mode, it will write `\r\r\n` because on Windows _text_ mode will
translate each `\n` into `\r\n`.
In Python 3 the required syntax changed, so open `outfile` with the additional
parameter `newline=''` instead.
### Examples:
# Python 2
with open('/pythonwork/thefile_subset11.csv', 'wb') as outfile:
writer = csv.writer(outfile)
# Python 3
with open('/pythonwork/thefile_subset11.csv', 'w', newline='') as outfile:
writer = csv.writer(outfile)
### Documentation Links
* <https://docs.python.org/2/library/csv.html#csv.writer>
* <https://docs.python.org/3/library/csv.html#csv.writer>
|
generating bigram combinations from grouped data in pig
Question: given my input data in userid,itemid format:
raw: {userid: bytearray,itemid: bytearray}
dump raw;
(A,1)
(A,2)
(A,4)
(A,5)
(B,2)
(B,3)
(B,5)
(C,1)
(C,5)
grpd = GROUP raw BY userid;
dump grpd;
(A,{(A,1),(A,2),(A,4),(A,5)})
(B,{(B,2),(B,3),(B,5)})
(C,{(C,1),(C,5)})
I'd like to generate all of the combinations(order not important) of items
within each group. I eventually intend on performing jaccard similarity on the
items in my group.
Ideally the bigrams would be generated and then I'd FLATTEN the output to
look like:
(A, (1,2))
(A, (1,3))
(A, (1,4))
(A, (2,3))
(A, (2,4))
(A, (3,4))
(B, (1,2))
(B, (2,3))
(B, (3,5))
(C, (1,5))
The letters ABC, which represent the userid, are not really necessary for the
output, I'm just showing them for illustrative purposes. From there, I would
count the number of occurrences of each bigram in order to compute jaccard.
I'd love to know if anyone else is using pig for similar similarity
calcs(sorry!) and have encountered this already.
I've looked at the NGramGenerator that's supplied with the pig tutorials but
it doesn't really match what I'm trying to accomplish. I'm wondering if
perhaps a python streaming UDF is the way to go.
Answer: You are definitely going to have to write a UDF (in Python or Java, either
would be fine). You would want it to work on a bag, and then output a bag (if
you flatten a bag of tuples, you will get output rows, so it will give you the
output that you want).
The UDF itself would not be terribly difficult... something like
    letters, numbers = zip(*input_tuples)
    numbers = sorted(set(numbers))
    res = []
    for i in range(0, len(numbers)):
        for j in range(i + 1, len(numbers)):   # i + 1 skips pairing an item with itself
            res.append((numbers[i], numbers[j]))
and then just cast things and return them appropriately.
If you need any help making a simple python udf, it's not too bad. Check here:
<http://pig.apache.org/docs/r0.8.0/udf.html>
And of course feel free to ask for more help here
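For reference, the core pairing logic is tiny with `itertools.combinations` (Python 2.6+); this is just a sketch of the logic, not a drop-in Pig UDF:
    from itertools import combinations
    def bigrams(input_tuples):
        # input_tuples is one group's bag, e.g. [('A', 1), ('A', 2), ('A', 4), ('A', 5)]
        items = sorted(set(item for _, item in input_tuples))
        return list(combinations(items, 2))
    print bigrams([('A', 1), ('A', 2), ('A', 4), ('A', 5)])
    # [(1, 2), (1, 4), (1, 5), (2, 4), (2, 5), (4, 5)]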
|
How to send email attachments with Python
Question: I am having problems understanding how to email an attachment using Python. I
have successfully emailed simple messages with the `smtplib`. Could someone
please explain how to send an attachment in an email. I know there are other
posts online but as a Python beginner I find them hard to understand.
Answer: Here's another, adapted from
[here](http://snippets.dzone.com/posts/show/2038):
import smtplib
from os.path import basename
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.utils import COMMASPACE, formatdate
def send_mail(send_from, send_to, subject, text, files=None,
server="127.0.0.1"):
assert isinstance(send_to, list)
msg = MIMEMultipart()
msg['From'] = send_from
msg['To'] = COMMASPACE.join(send_to)
msg['Date'] = formatdate(localtime=True)
msg['Subject'] = subject
msg.attach(MIMEText(text))
for f in files or []:
with open(f, "rb") as fil:
part = MIMEApplication(
fil.read(),
Name=basename(f)
)
part['Content-Disposition'] = 'attachment; filename="%s"' % basename(f)
msg.attach(part)
smtp = smtplib.SMTP(server)
smtp.sendmail(send_from, send_to, msg.as_string())
smtp.close()
It's much the same as the first example... But it should be easier to drop in.
|
List all currently open file handles?
Question: > **Possible Duplicate:**
> [check what files are open in
> Python](http://stackoverflow.com/questions/2023608/check-what-files-are-
> open-in-python)
Hello,
Is it possible to obtain a list of all currently open file handles? I presume that they are stored somewhere in the environment.
I am interested in this function as I would like to safely handle any files
that are open when a fatal error is raised, i.e. close file handles and
replace potentially corrupted files with the original files.
I have the handling working but without knowing what file handles are open, I
am unable to implement this idea.
As an aside, when a file handle is initialised, can this be inherited by
another imported method?
Thank you
Answer: Use `lsof`, or on Linux inspect `/proc/<pid>/fd/` directly.
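For example, a process can list its own handles by reading `/proc/self/fd` (a minimal, Linux-only sketch):
    import os
    fd_dir = '/proc/self/fd'
    for fd in os.listdir(fd_dir):
        try:
            print fd, '->', os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            pass   # the descriptor may already be gone (e.g. the one used by listdir)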
|
Calling Python instance methods in function decorators
Question: Is there a clean way to have a decorator call an instance method on a class
only at the time an instance of the class is instantiated?
class C:
def instance_method(self):
print('Method called')
def decorator(f):
print('Locals in decorator %s ' % locals())
def wrap(f):
print('Locals in wrapper %s' % locals())
self.instance_method()
return f
return wrap
@decorator
def function(self):
pass
c = C()
c.function()
I know this doesn't work because `self` is undefined at the point `decorator`
is called (since it isn't called as an instance method as there is no
available reference to the class). I then came up with this solution:
class C:
def instance_method(self):
print('Method called')
def decorator():
print('Locals in decorator %s ' % locals())
def wrap(f):
def wrapped_f(*args):
print('Locals in wrapper %s' % locals())
args[0].instance_method()
return f
return wrapped_f
return wrap
@decorator()
def function(self):
pass
c = C()
c.function()
This uses the fact that I know the first argument to any instance method will
be `self`. The problem with the way this wrapper is defined is that the
instance method is called every time the function is executed, which I don't
want. I then came up with the following slight modification which works:
class C:
def instance_method(self):
print('Method called')
def decorator(called=[]):
print('Locals in decorator %s ' % locals())
def wrap(f):
def wrapped_f(*args):
print('Locals in wrapper %s' % locals())
if f.__name__ not in called:
called.append(f.__name__)
args[0].instance_method()
return f
return wrapped_f
return wrap
@decorator()
def function(self):
pass
c = C()
c.function()
c.function()
Now the function only gets called once, but I don't like the fact that this
check has to happen every time the function gets called. I'm guessing there's
no way around it, but if anyone has any suggestions, I'd love to hear them!
Thanks :)
Answer: I came up with this as a possible alternative solution. I like it because
there is only one call that happens when the function is defined, and one when
the class is instantiated. The only downside is a tiny bit of extra memory
consumption for the function attribute.
from types import FunctionType
class C:
def __init__(self):
for name,f in C.__dict__.iteritems():
if type(f) == FunctionType and hasattr(f, 'setup'):
self.instance_method()
def instance_method(self):
print('Method called')
def decorator(f):
setattr(f, 'setup', True)
return f
@decorator
def function(self):
pass
c = C()
c.function()
c.function()
|
get screen size in python
Question: > **Possible Duplicate:**
> [How do I get Monitor resolution in
> Python?](http://stackoverflow.com/questions/3129322/how-do-i-get-monitor-
> resolution-in-python)
I was wondering if there was a way to get the screen size from a python
script.
**Similar Questions:**
[Getting monitor size in
python](http://stackoverflow.com/questions/2662857/getting-monitor-size-in-
python)
This question tells you how to get it through pygame. Is there a way to do it
with just python? I need the scripts to run on several Unix-based operating
system.
Answer: I stole this from [Finding the workspace size (screen size less the taskbar)
using GTK](http://stackoverflow.com/questions/502282/finding-the-workspace-
size-screen-size-less-the-taskbar-using-gtk)
import gtk, pygtk
window = gtk.Window()
screen = window.get_screen()
print "width = " + str(screen.get_width()) + ", height = " + str(screen.get_height())
Just to let you know, if you have multiple monitors the screen's size will be
all of those monitors together.
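If GTK isn't available, Tkinter from the standard library exposes the same information (it still needs a running display); a minimal sketch:
    import Tkinter   # "tkinter" in Python 3
    root = Tkinter.Tk()
    width = root.winfo_screenwidth()
    height = root.winfo_screenheight()
    root.destroy()
    print width, height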
|
Object property is an integer, have to use regex to clean input, looking for good style
Question: I have some text that looks like:
California(2342)
My object has a property that I need to assign the value 2342 to.
I'm looking for input on how to go about doing this, and guarding against any
potential for errors in the input.
c = SomeClass()
c.count = re.compile(r'(\d*)').groups[0]
Does that look ok? Or should I do an IF statement and set the count to 0 in
case the input was bad?
P.S any help in the regex would be appreciated, this is my first serious
python script.
Answer:
import re
pat = re.compile(r'\w+\((\d+)\)')
s = 'California(2342)'
match = pat.match(s)
if match:
c.count = match.group(1)
print c.count
# '2342'
else:
c.count = '0' # or 0 if numeric
If you want a number back instead of a string just modify:
value = int(match.group(1))
|
Replacing value in all cursor rows
Question: Using SQLite and Python 3.1, I want to display currency data in an HTML table via a template which accepts a cursor as a parameter. Hence all currency
values must have 2 decimal places, but SQLite stores them as float type (even
though the structure states decimal :-( ) so some must be converted before
display (eg. I want 12.1 displayed as 12.10).
The code goes something like this (simplified for illustration)...
import sqlite3
con = sqlite3.connect("mydb")
con.row_factory = sqlite3.Row
cur = con.cursor()
cur.execute("select order_no, amount from orders where cust_id=123")
for row in cur:
row['amount'] = format(row['amount'], '.2f')
The last command throws the error "# builtins.TypeError: 'sqlite3.Row' object
does not support item assignment"
How can I solve the problem whereby the row object values cannot be changed?
Could I convert the cursor to a list of dictionaries (one for each row, eg.
[{'order_no':1, 'amount':12.1}, {'order_no':2, 'amount':6.32}, ...]), then
format the 'amount' value for each item? If so, how can I do this?
Are there any better solutions for achieving my goal? Any help would be
appreciated.
TIA, Alan
Answer: Yep:
cur.execute("select order_no, amount from orders where cust_id=123")
dictrows = [dict(row) for row in cur]
for r in dictrows:
r['amount'] = format(r['amount'], '.2f')   # e.g. 12.1 -> '12.10'
There are other ways, but this one seems the simplest and most direct one.
|
Greedy versus Non-Greedy matching in Python re
Question: Please help me to discover whether this is a bug in Python (2.6.5), in my
competence at writing regexes, or in my understanding of pattern matching.
(I accept that a possible answer is "Upgrade your Python".)
I'm trying to parse a Yubikey token, allowing for the optional extras.
When I use this regex to match a token without any optional extras (that is,
containing only the stuff that matches the two capture groups), the match
fails:
r'^\t?[^a-z0-9]?([cbdefghijklnrtuv1-8]{0,32})\t?([cbdefghijklnrtuv1-8]{32})\t?\r?\n?$'
However, if I make the first group non-greedy:
r'^\t?[^a-z0-9]?([cbdefghijklnrtuv1-8]{0,32}?)\t?([cbdefghijklnrtuv1-8]{32})\t?\r?\n?$'
it succeeds.
So, OK, it's working, but I would have thought that the only difference in end
result between these two regexes would be performance.
Both Expresso and Regex Coach like both patterns.
What have I missed?
* * *
Here are two of the strings I'm testing with.
No optional extras (the ones that can fail):
"vvbrentlnccnhgfgrtetilbvckjcegblehfvbihrdcui"
With optional extras (haven't failed so far; actual tabs are shown here as
"_"):
"_!_8R5Gkruvfgheufhcnhllchgrfiutujfh_"
"_!1U4Knivdgvkfthrd_brvejhudrdnbunellrjjkkccfnggbdng_"
* * *
I've tried to reproduce it using the suggestion from Alex Martelli, and it
doesn't fail in the raw Python environment, so I'm going to revisit my code
(I'm actually hacking on yubikey-python); I'll report back in a day or so.
* * *
My apologies to everyone. I cannot reproduce the problem. When it occurred, I
was reading input via `getpass`; I suspect that an accidental foreign
keystroke got in the way.
I am going to close the question. If whoever upvoted the question wishes to
remove their vote, that is fair.
Very sorry.
Answer: I'd recommend using [yubikey-python](http://code.google.com/p/yubikey-python/)
for Python interfacing to yubikey -- but, that's a side (and strictly
pragmatical) issue;-).
In theory, there should be no cases where a choice between greedy and non-
greedy causes a RE to match in one case and fail in another -- it should only
affects what gets matched (and as you mention performance), not whether the
match succeeds at all, since REs are supposed to backtrack for the purpose.
Problem is, I cannot reproduce the problem -- I don't have a yubikey at hand
and the tests in [this file](http://code.google.com/p/yubikey-
python/source/browse/trunk/test.py) show no differences between the two REs'
match/no-match behavior.
Could you please post a couple of failing examples (where one matches and the
other one doesn't), ideally by editing your question, so I can reproduce the
problem and try to cut it down to its minimum? Sounds like there may be a RE
bug, but without reproducible cases I can't check if and when it's been fixed,
already reported, or what. Thanks!
**Edit** the OP has now posted one failing example but I still can't
reproduce:
$ py26
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> r1 = re.compile(r'^\t?[^a-z0-9]?([cbdefghijklnrtuv1-8]{0,32})\t?([cbdefghijklnrtuv1-8]{32})\t?\r?\n?$')
>>> r2 = re.compile(r'^\t?[^a-z0-9]?([cbdefghijklnrtuv1-8]{0,32}?)\t?([cbdefghijklnrtuv1-8]{32})\t?\r?\n?$'
... )
>>> nox="vvbrentlnccnhgfgrtetilbvckjcegblehfvbihrdcui"
>>> r1.match(nox)
<_sre.SRE_Match object at 0xcc458>
>>> r2.match(nox)
<_sre.SRE_Match object at 0xcc920>
>>>
i.e., match succeeds in both cases, as it should -- and that's exactly the
same 2.6.5 Python version as the OP is using. OP, pls, show the results of
this simple sequence of commands on your platform and tell us exactly what the
platform is, since it looks like a weird platform-dependent bug... thanks!
|
Learning Python; How can I make this more Pythonic?
Question: I am a PHP developer exploring the outside world. I have decided to start
learning Python.
The below script is my first attempt at porting a PHP script to Python. Its
job is to take tweets from a Redis store. The tweets are coming from Twitter's
Streaming API and stored as JSON objects. Then the information needed is
extracted and dumped into a CSV file to be imported, using `LOAD DATA LOCAL INFILE`, into a MySQL database that is hosted on a different server.
So, the question is: Now that I have my first Python script running, how can I
make it more Pythonic? Are there any suggestions that you guys have? Make it
better? Tricks I should know about? Constructive Criticism?
**Update:** Having taken everyone's suggestions thus far, here is the updated
version:
**Update2:** Ran the code through pylint. Now scores a 9.89/10. Any other
suggestions?
# -*- coding: utf-8 -*-
"""Redis IO Loop for Tweelay Bot"""
from __future__ import with_statement
import simplejson
import re
import datetime
import time
import csv
import hashlib
# Bot Modules
import tweelay.red as red
import tweelay.upload as upload
import tweelay.openanything as openanything
__version__ = "4"
def process_tweets():
"""Processes 0-20 tweets from Redis store"""
data = []
last_id = 0
for i in range(20):
last = red.pop_tweet()
if not last:
break
t = TweetHandler(last)
t.cleanup()
t.extract()
if t.get_tweet_id() == last_id:
break
tweet = t.proc()
if tweet:
data = data + [tweet]
last_id = t.get_tweet_id()
time.sleep(0.01)
if not data:
return False
ch = CSVHandler(data)
ch.pack_csv()
ch.uploadr()
source = "http://bot.tweelay.net/tweets.php"
openanything.openAnything(
source,
etag=None,
lastmodified=None,
agent="Tweelay/%s (Redis)" % __version__
)
class TweetHandler:
"""Cleans, Builds and returns needed data from Tweet"""
def __init__(self, json):
self.json = json
self.tweet = None
self.tweet_id = 0
self.j = None
def cleanup(self):
"""Takes JSON encoded tweet and cleans it up for processing"""
self.tweet = unicode(self.json, "utf-8")
self.tweet = re.sub('^s:[0-9]+:["]+', '', self.tweet)
self.tweet = re.sub('\n["]+;$', '', self.tweet)
def extract(self):
"""Takes cleaned up JSON encoded tweet and extracts the datas we need"""
self.j = simplejson.loads(self.tweet)
def proc(self):
"""Builds the datas from the JSON object"""
try:
return self.build()
except KeyError:
if 'delete' in self.j:
return None
else:
print ";".join(["%s=%s" % (k, v) for k, v in self.j.items()])
return None
def build(self):
"""Builds tuple from JSON tweet"""
return (
self.j['user']['id'],
self.j['user']['screen_name'].encode('utf-8'),
self.j['text'].encode('utf-8'),
self.j['id'],
self.j['in_reply_to_status_id'],
self.j['in_reply_to_user_id'],
self.j['created_at'],
__version__ )
def get_tweet_id(self):
"""Return Tweet ID"""
if 'id' in self.j:
return self.j['id']
if 'delete' in self.j:
return self.j['delete']['status']['id']
class CSVHandler:
"""Takes list of tweets and saves them to a CSV
file to be inserted into MySQL data store"""
def __init__(self, data):
self.data = data
self.file_name = self.gen_file_name()
def gen_file_name(self):
"""Generate unique file name"""
now = datetime.datetime.now()
hashr = hashlib.sha1()
hashr.update(str(now))
hashr.update(str(len(self.data)))
hash_str = hashr.hexdigest()
return hash_str+'.csv'
def pack_csv(self):
"""Save tweet data to CSV file"""
with open('tmp/'+self.file_name, mode='ab') as ofile:
writer = csv.writer(
ofile, delimiter=',',
quotechar='"',
quoting=csv.QUOTE_MINIMAL)
writer.writerows(self.data)
def uploadr(self):
"""Upload file to remote host"""
url = "http://example.com/up.php?filename="+self.file_name
uploadr = upload.upload_file(url, 'tmp/'+self.file_name)
if uploadr[0] == 200:
print "Upload: 200 - ("+str(len(self.data))+")", self.file_name
print "-------"
#os.remove('tmp/'+self.file_name)
else:
print "Upload Error:", uploadr[0]
if __name__ == "__main__":
while True:
process_tweets()
time.sleep(1)
Answer: Instead of:
i=0
end=20
last_id=0
data=[]
while(i<=end):
i = i + 1
...
code:
last_id=0
data=[]
for i in xrange(1, 22):
...
Same semantics, more compact and Pythonic.
Instead of
if not last or last == None:
do just
if not last:
since `None` is false-ish anyway (so `not last` is `True` when `last` is `None`). In general, when you want to check if something is `None`, code `is None`, not `== None`.
In
if(j['id'] <> last_id):
lose the redundant parentheses and the obsolete `<>` operator and code instead
if j['id'] != last_id:
and also remove the redundant parentheses from other `if` statements.
Instead of:
if len(data) == 0:
code:
if not data:
since any empty container is false-ish.
In
hash_str = str(hash.hexdigest())
code instead
hash_str = hash.hexdigest()
since the method already returns a string, making the `str` call redundant.
Instead of:
for item in data:
writer.writerow(item)
use
writer.writerows(data)
which does the loop on your behalf.
Instead of
ofile = open('tmp/'+file_name, mode='ab')
...
ofile.close()
use (in Python 2.6 or better, or in 2.5 by starting the module with
from __future__ import with_statement
to "import from the future" the `with` statement feature):
with open('tmp/'+file_name, mode='ab') as ofile:
...
which guarantees to do the close for you (including in cases where an
exception might be raised).
Instead of
print "Upload Error: "+uploadr[0]
use
print "Upload Error:", uploadr[0]
and similarly for other `print` statements -- the comma inserts a space for
you.
I'm sure there are more such little things, but these are a few that "jumped
to the eye" as I was scanning your code.
|
Django with huge mysql database
Question: What would be the best way to import multi-million record csv files into
django.
Currently using python csv module, it takes 2-4 days for it process 1 million
record file. It does some checking if the record already exists, and few
others.
Can this process be achieved to execute in few hours.
Can memcache be used somehow.
Update: There are django ManyToManyField fields that get processed as well.
How will these used with direct load.
Answer: I'm not sure about your case, but we had similar scenario with Django where
~30 million records took more than one day to import.
Since our customer was totally unsatisfied (with the danger of losing the
project), after several failed optimization attempts with Python, we took a
radical strategy change and did the import(only) with Java and JDBC (+ some
mysql tuning), and got the import time down to ~45 minutes (with Java it was
very easy to optimize because of the very good IDE and profiler support).
|
How to create article spinner regex in Java?
Question: Say for example I want to take this phrase:
> {{Hello|What's Up|Howdy} {world|planet} | {Goodbye|Later} {people|citizens|inhabitants}}
and randomly make it into one of the following:
Hello world
Goodbye people
What's Up world
What's Up planet
Later citizens
etc.
The basic idea is that enclosed within every pair of braces will be an
unlimited number of choices separated by "|". The program needs to go through
and randomly choose one choice for each set of braces. Keep in mind that
braces can be nested endlessly within each other. I found a thread about this
and tried to convert it to Java, but it did not work. Here is the python code
that supposedly worked:
import re
from random import randint
def select(m):
choices = m.group(1).split('|')
return choices[randint(0, len(choices)-1)]
def spinner(s):
r = re.compile('{([^{}]*)}')
while True:
s, n = r.subn(select, s)
if n == 0: break
return s.strip()
Here is my attempt to convert that Python code to Java.
public String generateSpun(String text){
String spun = new String(text);
Pattern reg = Pattern.compile("{([^{}]*)}");
Matcher matcher = reg.matcher(spun);
while (matcher.find()){
spun = matcher.replaceFirst(select(matcher.group()));
}
return spun;
}
private String select(String m){
String[] choices = m.split("|");
Random random = new Random();
int index = random.nextInt(choices.length - 1);
return choices[index];
}
Unfortunately, when I try to test this by calling
generateAd("{{Hello|What's Up|Howdy} {world|planet} | {Goodbye|Later} {people|citizens|inhabitants}}");
In the main of my program, it gives me an error in the line in generateSpun
where Pattern reg is declared, giving me a PatternSyntaxException.
java.util.regex.PatternSyntaxException: Illegal repetition
{([^{}]*)}
Can someone try to create a Java method that will do what I am trying to do?
Answer: Here are some of the problems with your current code:
* You should _reuse_ your compiled `Pattern`, instead of `Pattern.compile` _every time_
* You should _reuse_ your `Random`, instead of `new Random` _every time_
* Be aware that [`String.split`](http://download.oracle.com/javase/6/docs/api/java/lang/String.html#split%28java.lang.String%29) is regex-based, so you must `split("\\|")`
* Be aware that curly braces in Java regex must be escaped to match literally, so `Pattern.compile("\\{([^{}]*)\\}");`
* You should query `group(1)`, not `group()` which defaults to group `0`
* You're using `replaceFirst` wrong, look up [`Matcher.appendReplacement/Tail`](http://download.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement%28java.lang.StringBuffer,%20java.lang.String%29) instead
* [`Random.nextInt(int n)`](http://download.oracle.com/javase/6/docs/api/java/util/Random.html#nextInt%28int%29) has _exclusive upper bound_ (like many such methods in Java)
* The algorithm itself actually does not handle arbitrarily nested braces properly
Note that escaping is done by preceding with `\`, and as a Java string literal
it needs to be doubled (i.e. `"\\"` contains a single character, the
backslash).
### Attachment
* [Source code and output with above fix but no major change to algorithm](http://ideone.com/MOoFt)
|
Python 3 argument (semi)not UTF-8 when passed from Windows batch.cmd
Question: When I invoke a Python 3 script from a Windows batch.cmd, a UTF-8 arg is not
passed as "UTF-8", but as a series of bytes, each of which are interpreted by
Python as individual UTF-8 chars.
How can I convert the Python 3 arg string to its **intended** UTF-8 state?
The calling **.cmd** and the called **.py** are shown below.
PS. As I mention in a comment below, calling u00FF.py "ÿ" **directly** from
the Windows console command line works fine. It is only a problem when I invoke it via the **.cmd**, and I am looking for a `Python 3` way to convert
the double-encoded UTF-8 arg back to a "normally" encoded UTF-8 form.
I've now include here, the full (and latest) test code.. Its a bit long, but I
hope it explains the issue clearly enough.
Update: I've seen why the file read of "ÿ" was "double-encoding"... I was
reading the UTF-8 file in binary/byte mode... I should have used
`codecs.open('u00FF.arg', 'r', 'utf-8')` instead of just plain
`open('u00FF.arg','r')`... I've updated the offending code, and the output.
The codepage issues seems to be the only problem now...
Because the Python issue has been largely resolved, and the codepage issue is
quite independent of Python, I have posted another codepage specific question
at
[Codepage 850 works, 65001 fails! There is NO response to “call foo.cmd”.
internal commands work
fine.](http://stackoverflow.com/questions/3401802/codepage-850-works-65001-fails-
there-is-no-response-to-call-foo-cmd-intern%29)
::::::::::::::::::: BEGIN .cmd BATCH FILE ::::::::::::::::::::
:: Windows Batch file (UTF-8 encoded, no BOM): "u00FF.cmd"
@echo ÿ>u00FF.arg
@u00FF.py "ÿ"
@goto :eof
::::::::::::::::::: END OF .cmd BATCH FILE ::::::::::::::::::::
################### BEGIN .py SCRIPT #####################################
# -*- coding: utf-8 -*-
import sys
print ("""
Unicode
=======
CodePoint U+00FF
Character ÿ __Unicode Character 'LATIN SMALL LETTER Y WITH DIAERESIS'
UTF-8 bytes
===========
Hex: \\xC3 \\xBF
Dec: 195 191
Char: Ã ¿ __Unicode Character 'INVERTED QUESTION MARK'
\_______Unicode Character 'LATIN CAPITAL LETTER A WITH TILDE'
""")
print("## ====================================================")
print("## ÿ via hard-coding in this .py script itself ========")
print("##")
hard1s = "ÿ"
hard1b = hard1s.encode('utf_8')
print("hard1s: len", len(hard1s), " '" + hard1s + "'")
print("hard1b: len", len(hard1b), hard1b)
for i in range(0,len(hard1s)):
print("CodePoint[", i, "]", hard1s[i], "U+"+"{0:x}".upper().format(ord(hard1s[i])).zfill(4) )
print(''' This is a single CodePoint for "ÿ" (as expected).''')
print()
print("## ====================================================")
print("## ÿ read into this .py script from a UTF-8 file ======")
print("##")
import codecs
file1 = codecs.open( 'u00FF.arg', 'r', 'utf-8' )
file1s = file1.readline()
file1s = file1s[:1] # remove \r
file1b = file1s.encode('utf_8')
print("file1s: len", len(file1s), " '" + file1s + "'")
print("file1b: len", len(file1b), file1b)
for i in range(0,len(file1s)):
print("CodePoint[", i, "]", file1s[i], "U+"+"{0:x}".upper().format(ord(file1s[i])).zfill(4) )
print(''' This is a single CodePoint for "ÿ" (as expected).''')
print()
print("## ====================================================")
print("## ÿ via sys.argv from a call to .py from a .cmd) ===")
print("##")
argv1s = sys.argv[1]
argv1b = argv1s.encode('utf_8')
print("argv1s: len", len(argv1s), " '" + argv1s + "'")
print("argv1b: len", len(argv1b), argv1b)
for i in range(0,len(argv1s)):
print("CodePoint[", i, "]", argv1s[i], "U+"+"{0:x}".upper().format(ord(argv1s[i])).zfill(4) )
print(''' These 2 CodePoints are way off-beam,
even allowing for the "double-encoding" seen above.
The CodePoints are from an entirely different Unicode-Block.
This must be a Codepage issue.''')
print()
################### END OF .py SCRIPT #####################################
Here is the output from the above code.
========================== BEGIN OUTPUT ================================
C:\>u00FF.cmd
Unicode
=======
CodePoint U+00FF
Character ÿ __Unicode Character 'LATIN SMALL LETTER Y WITH DIAERESIS'
UTF-8 bytes
===========
Hex: \xC3 \xBF
Dec: 195 191
Char: Ã ¿ __Unicode Character 'INVERTED QUESTION MARK'
\_______Unicode Character 'LATIN CAPITAL LETTER A WITH TILDE'
## ====================================================
## ÿ via hard-coding in this .py script itself ========
##
hard1s: len 1 'ÿ'
hard1b: len 2 b'\xc3\xbf'
CodePoint[ 0 ] ÿ U+00FF
This is a single CodePoint for "ÿ" (as expected).
## ====================================================
## ÿ read into this .py script from a UTF-8 file ======
##
file1s: len 1 'ÿ'
file1b: len 2 b'\xc3\xbf'
CodePoint[ 0 ] ÿ U+00FF
This is a single CodePoint for "ÿ" (as expected).
## ====================================================
## ÿ via sys.argv from a call to .py from a .cmd) ===
##
argv1s: len 2 '├┐'
argv1b: len 6 b'\xe2\x94\x9c\xe2\x94\x90'
CodePoint[ 0 ] ├ U+251C
CodePoint[ 1 ] ┐ U+2510
These 2 CodePoints are way off-beam,
even allowing for the "double-encoding" seen above.
The CodePoints are from an entirely different Unicode-Block.
This must be a Codepage issue.
========================== END OF OUTPUT ================================
Answer: Batch files and encodings are a finicky issue. First of all: Batch files have
no direct way of specifying the encoding they're in and `cmd` does not really
support Unicode batch files. You can easily see that if you save a batch file
with a Unicode BOM or as UTF-16 – they will throw an error.
What you see when you put the `ÿ` directly into the command line is that when
running a command Windows will initially use the command line as Unicode (it
may have been converted from some legacy encoding beforehand, but in the end
what Windows uses is Unicode). So Python will (hopefully) always grab the
Unicode content of the arguments.
However, since `cmd` has its own opinions about the codepage (and you never
told it to use UTF-8) the UTF-8 string you put in the batch file won't be
interpreted as UTF-8 but instead in the default `cmd` codepage (850 or 437, in
your case).
You can force UTF-8 with `chcp`:
chcp 65001 > nul
You can save the following file as UTF-8 and try it out:
@echo off
chcp 850 >nul
echo ÿ
chcp 65001 >nul
echo ÿ
Keep in mind, though, that the `chcp` setting will persist in the shell if you
run the batch from there which may make things weird.
|
sending colored text to a TextCtrl in wxpython
Question: I'm trying to send colored text to a TextCtrl widget, but don't know how
style = wx.TE_MULTILINE|wx.BORDER_SUNKEN|wx.TE_READONLY|wx.TE_RICH2
self.status_area = wx.TextCtrl(self.panel, -1,
pos=(10, 270),style=style,
size=(380,150))
basically that snippet defines a status box in my window, and I want to write
colored log messages to it. If I just do `self.status_area.AppendText("blah")`
it will append text like I want, but it will always be black. I can't find the
documentation on how to do this.
Answer: You need to call SetStyle to change the text behavior.
import wx
class F(wx.Frame):
def __init__(self, *args, **kw):
wx.Frame.__init__(self, None)
style = wx.TE_MULTILINE|wx.BORDER_SUNKEN|wx.TE_READONLY|wx.TE_RICH2
self.status_area = wx.TextCtrl(self, -1,
pos=(10, 270),style=style,
size=(380,150))
self.status_area.AppendText("blahblahhblah")
fg = wx.Colour(200,80,100)
at = wx.TextAttr(fg)
self.status_area.SetStyle(3, 5, at)
app = wx.PySimpleApp()
f = F()
f.Show()
app.MainLoop()
|
python: using numpy.histogram
Question: i am using this:
<http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html>
i have an list `a` that i want to use like this:
numpy.histogram(a,bins=[0.1,0.2,0.3,0.4...6], range=[0:6])
1. how do i include a set of bins 0.1 through 6 in 0.1 intervals?
2. how do i specify a range of 0 through 6?
Answer: Perhaps you are looking for `np.linspace(0,6,num=61)` or
`np.arange(0,6.1,0.1)`:
import numpy as np
a=np.random.random(100)*6
hist=np.histogram(a,bins=np.linspace(0,6,num=61))
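Note that `np.histogram` returns a `(counts, bin_edges)` pair, so unpacking it makes the result clearer (a quick sketch):
    counts, edges = np.histogram(a, bins=np.linspace(0, 6, num=61))
    print len(counts), len(edges)   # 60 bins, 61 edges
    print counts.sum()              # == len(a) when every sample falls inside [0, 6]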
|
Demonstrating instruction level parallelism at work
Question: I'm trying to show instruction level parallelism at work. What I was
originally doing was using python (willing to change) and doing the following:
def test():
for i in range(5000):
j = 0
k = 0
l = 0
def test2():
for i in range(5000):
j = i * i
k = j * 2
l = k * i
if __name__=='__main__':
from timeit import Timer
t = Timer("test()", "from __main__ import test")
print t.timeit()
t2 = Timer("test2()", "from __main__ import test2")
print t2.timeit()
However a professor tells me that this doesn't demonstrate ILP, rather it
shows whether the python interpreter is optimized or not.
What is it that I can do to demonstrate working ILP?
Answer: Your professor is correct. I think an acceptable demonstration would have to
be written in assembler, or at most C/C++, possibly using something like the
MMX instruction set.
|
Issues with time.sleep and Multithreading in Python
Question: I am having an issue with the time.sleep() function in python. I am running a
script that needs to wait for another program to generate txt files. Although,
this is a terribly old machine, so when I sleep the python script, I run into
issues with the other program not generating files. Is there any alternatives
to using time.sleep()? I thought locking the thread might work but essentially
it would just be a loop of locking the thread for a couple of seconds. I'll
give some pseudo code here of what I'm doing.
While running:
if filesFound != []:
moveFiles
else:
time.sleep(1)
Answer: One way to do a non-blocking wait is to use
[threading.Event](http://docs.python.org/library/threading.html#event-
objects):
import threading
dummy_event = threading.Event()
dummy_event.wait(timeout=1)
This can be `set()` from another thread to indicate that something has
completed. **But** if you are doing stuff in another thread, you could avoid
the timeout and event altogether and just `join` the other thread:
import threading
def create_the_file():
# Do stuff to create the file
def Main():
worker = threading.Thread(target=create_the_file)
worker.start()
# We will stop here until the "create_the_file" function finishes
worker.join()
# Do stuff with the file
If you want an example of using events for more fine-grained control, I can
show you that...
The threading approach won't work if your platform doesn't provide the
threading module. For example, if you try to substitute the dummy_threading
module, `dummy_event.wait()` returns immediately. Not sure about the `join()`
approach.
If you are waiting for other processes to finish, you would be better off
managing them from your own script using the
[subprocess](http://docs.python.org/library/subprocess.html) module (and then,
for example, using the
[`wait`](http://docs.python.org/library/subprocess.html#subprocess.Popen.wait)
method to be sure the process is done before you do further work).
If you can't manage the subprocess from your script, but you know the PID, you
can use the
[`os.waitpid()`](http://docs.python.org/library/os.html#os.waitpid) function.
Beware of the `OSError` if the process has already finished by the time you
use this function...
If you want a cross-platform way to watch a directory to be notified of new
files, I'd suggest using a [GIO
FileMonitor](http://library.gnome.org/devel/pygobject/stable/class-
giofilemonitor.html) from
[PyGTK/PyGObject](http://library.gnome.org/devel/pygobject/stable/). You can
get a monitor on a directory using the
[monitor_directory](http://library.gnome.org/devel/pygobject/stable/class-
giofile.html#method-giofile--monitor-directory) method of a
[GIO.File](http://library.gnome.org/devel/pygobject/stable/class-
giofile.html).
Quick sample code for a directory watch:
import gio
def directory_changed(monitor, file1, file2, evt_type):
print "Changed:", file1, file2, evt_type
gfile = gio.File(".")
monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
monitor.connect("changed", directory_changed)
import glib
ml = glib.MainLoop()
ml.run()
|
month name to month number and vice versa in python
Question: I am trying to create a function that can convert a month number to an
abbreviated month name or an abbreviated month name to a month number. I
thought this might be a common question but I could not find it online.
I was thinking about the
[calendar](http://docs.python.org/library/calendar.html) module. I see that to
convert from month number to abbreviated month name you can just do
`calendar.month_abbr[num]`. I do not see a way to go the other direction
though. Would creating a dictionary for converting the other direction be the
best way to handle this? Or is there a better way to go from month name to
month number and vice versa?
Answer: Just for fun:
from time import strptime
strptime('Feb','%b').tm_mon
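And for the other direction, a small lookup table built from the `calendar` module works (a sketch):
    import calendar
    # calendar.month_abbr[0] is the empty string, so skip it
    month_to_num = dict((name, num) for num, name in enumerate(calendar.month_abbr) if name)
    print month_to_num['Feb']      # 2
    print calendar.month_abbr[2]   # 'Feb'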
|
python: appends only '0'
Question:
big_set=[]
for i in results_histogram_total:
big_set.append(100*(i/sum_total))
big_set returns `[0,0,0,0,0,0,0,0........,0]`
This is wrong because I checked `i` and it is `>0`.
What am I doing wrong?
Answer: In Python 2.x, use `from __future__ import division` to get sane division
behavior.
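A sketch of both fixes, reusing the names from the question:
    from __future__ import division   # must sit at the top of the module
    big_set = []
    for i in results_histogram_total:
        big_set.append(100 * (i / sum_total))
    # or, without the future import, force float division explicitly:
    # big_set.append(100.0 * i / sum_total)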
|
Python: "global name 'time' is not defined"
Question: I'm writing a silly program in python for a friend that prints "We are the
knights who say 'Ni'!". then sleeps for 3 seconds, and then prints "Ni!"
twenty times at random intervals using the `random` module's `uniform()`
method. Here's my code:
from time import sleep
import random
def knights_of_ni():
generator = random.Random()
print "We are the knights who say 'ni'."
sleep(3)
for i in range(0,20):
print "Ni!"
sleep(generator.uniform(0,2))
I've tried to import this module by typing in the interpreter `from silly
import knights_of_ni()` _and_ `import silly`, then calling the function with
either `knights_of_ni()` or `silly.knights_of_ni()` (respectively), but I
always get the same exception:
NameError: global name 'time' is not defined
What is causing this error and how can I fix my code?
Edit: quite frankly, I'm not sure what problem I was having either. I ran the
code the next morning and it worked just fine. I swear that the code produced
errors last night... Anyway, thanks for your insight.
Answer: That's impossible. Your code example isn't the same as the code that produced
that error.
Perhaps you had `time.sleep(..)` instead of `sleep(..)`. You have done `from
time import sleep`. To use the `time.sleep(..)` form you must `import time`
|
Spurious failures in django.contrib.messages.tests when running manage.py test
Question: I've recently added authentication (via django.contrib.auth of course) to my
application, along with appropriate "signin"/"signup" links to my base.html.
The problem comes when I run `manage.py` tests, and I get 4 failures, all from
django.contrib.messages.tests:
ERROR: test_middleware_disabled_anon_user (django.contrib.messages.tests.cookie.CookieTest)
ERROR: test_middleware_disabled_anon_user (django.contrib.messages.tests.fallback.FallbackTest)
ERROR: test_middleware_disabled_anon_user (django.contrib.messages.tests.user_messages.LegacyFallbackTest)
ERROR: test_middleware_disabled_anon_user (django.contrib.messages.tests.session.SessionTest)
All with the same failure:
TemplateSyntaxError: Caught NoReverseMatch while rendering: Reverse for 'django.contrib.auth.views.login' with arguments '()' and keyword arguments '{}' not found.
In `manage.py shell` this works:
>>> from django.core.urlresolvers import reverse
>>> reverse('django.contrib.auth.views.login')
'/signin/'
However this doesn't:
>>> reverse('django.contrib.auth.views.login', (), {})
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/dave/Dropbox/Projects/statbooks.co.uk/lib/python2.6/site-packages/django/core/urlresolvers.py", line 350, in reverse
*args, **kwargs)))
File "/Users/dave/Dropbox/Projects/statbooks.co.uk/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
"arguments '%s' not found." % (lookup_view_s, args, kwargs))
NoReverseMatch: Reverse for 'django.contrib.auth.views.login' with arguments '()' and keyword arguments '{}' not found.
Commenting out the `{% url %}` tags from my base.html make the tests pass.
What's causing this?
Answer: There are several suggestions for workarounds in this Django ticket and links
therein: <http://code.djangoproject.com/ticket/11077> The one I like is this:
<http://groups.google.com/group/django-developers/msg/ec7508651e9e9fb8>. To
summarize, it divides up built-in tests and app tests, then overrides
manage.py test to run just app tests.
None of these suggestions fixes the underlying problem (that all unit tests
should be able to run even if base templates use the {% url %} tag).
|
Trying to call readline() on a file object in python but it's pausing
Question: I'm using the readline() function to read data from a file object obtained
through the subprocess module: `proc = subprocess.Popen(cmd, bufsize=0,
stdout=subprocess.PIPE)`. This allows me to use `proc.stdout` as a file-like
object with `proc.stdout.readline()`. My issue is that this pauses waiting for
input and I'd like it to time out and move on if there isn't input there when
I make the readline call. I'm running Python 2.4, how can I get the readline
method to stop pausing? Thanks.
Answer: On a posix-y platform (basically any popular platform except Windows), the
[select](http://docs.python.org/library/select.html?highlight=select#module-
select) module offers the right tools for this purpose. Unfortunately, on
Windows, `select` only works on sockets (not on pipes, which is what
`subprocess.Popen` will be using), so the situation is not quite as clear
there. Do you need to run on Windows...?
If not, just use the `p.stdout.fileno()` of your subprocess object `p` in a
`select.select` call with a short timeout -- it's really easy!
**Edit** : here's a simple example (assuming the needed imports of course):
>>> def f():
... p = subprocess.Popen("sleep 10; echo ciao", shell=True, stdout=subprocess.PIPE)
... while True:
... r, w, x = select.select([p.stdout.fileno()],[],[],1.0)
... if r: return p.stdout.read()
... print 'not ready yet'
...
>>> f()
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
not ready yet
'ciao\n'
>>>
Note there is no way to "wait for a complete line": this waits for "any output
at all" (then blocks until all the output is ready). To read just what's
available, use
[fcntl](http://docs.python.org/library/fcntl.html?highlight=fcntl#module-
fcntl) to set
[os.O_NDELAY](http://docs.python.org/library/os.html#os.O_NDELAY) on the file
descriptor (what `fileno()` returns) before you start looping.
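A sketch of that fcntl step (POSIX only), assuming `p` is the `Popen` object from above; `os.O_NONBLOCK` is the modern spelling of `O_NDELAY`:
    import fcntl, os
    fd = p.stdout.fileno()
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    # reads on p.stdout now return whatever is available instead of blocking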
|
Determining running programs in Python
Question: How would I use Python to determine what programs are currently running? I am
on Windows.
Answer: Thanks to @hb2pencil for the WMIC command! Here's how you can pipe the output
without a file:
import subprocess
cmd = 'WMIC PROCESS get Caption,Commandline,Processid'
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
for line in proc.stdout:
print line
|
Pythonic way to convert a list of integers into a string of comma-separated ranges
Question: I have a list of integers which I need to parse into a string of ranges.
For example:
[0, 1, 2, 3] -> "0-3"
[0, 1, 2, 4, 8] -> "0-2,4,8"
And so on.
I'm still learning more pythonic ways of handling lists, and this one is a bit
difficult for me. My latest thought was to create a list of lists which keeps
track of paired numbers:
[ [0, 3], [4, 4], [5, 9], [20, 20] ]
I could then iterate across this structure, printing each sub-list as either a
range, or a single value.
I don't like doing this in two iterations, but I can't seem to keep track of
each number within each iteration. My thought would be to do something like
this:
Here's my most recent attempt. It works, but I'm not fully satisfied; I keep
thinking there's a more elegant solution which completely escapes me. The
string-handling iteration isn't the nicest, I know -- it's pretty early in the
morning for me :)
def createRangeString(zones):
rangeIdx = 0
ranges = [[zones[0], zones[0]]]
for zone in list(zones):
if ranges[rangeIdx][1] in (zone, zone-1):
ranges[rangeIdx][1] = zone
else:
ranges.append([zone, zone])
rangeIdx += 1
rangeStr = ""
for range in ranges:
if range[0] != range[1]:
rangeStr = "%s,%d-%d" % (rangeStr, range[0], range[1])
else:
rangeStr = "%s,%d" % (rangeStr, range[0])
return rangeStr[1:]
Is there a straightforward way I can merge this into a single iteration? What
else could I do to make it more Pythonic?
Answer:
>>> from itertools import count, groupby
>>> L=[1, 2, 3, 4, 6, 7, 8, 9, 12, 13, 19, 20, 22, 23, 40, 44]
>>> G=(list(x) for _,x in groupby(L, lambda x,c=count(): next(c)-x))
>>> print ",".join("-".join(map(str,(g[0],g[-1])[:len(g)])) for g in G)
1-4,6-9,12-13,19-20,22-23,40,44
The idea here is to pair each element with count(). Then the difference
between the value and count() is constant for consecutive values. groupby()
does the rest of the work
As Jeff suggests, an alternative to `count()` is to use `enumerate()`. This
adds some extra cruft that needs to be stripped out in the print statement
G=(list(x) for _,x in groupby(enumerate(L), lambda (i,x):i-x))
print ",".join("-".join(map(str,(g[0][1],g[-1][1])[:len(g)])) for g in G)
**Update:** for the sample list given here, the version with enumerate runs
about 5% slower than the version using count() on my computer
|
URL issue when using lighttpd, django and fastcgi
Question: I just set up fastcgi with lighty for django, but I'm getting the fcgi file
path when it processes the url, e.g. 404 error at
<http://myserver.myhost.com/myproject.fcgi>. It needs to route to / instead of
/myproject.fcgi.
Lighty conf:
$HTTP["host"] =~ "myproject\.myhost\.com" {
fastcgi.server = (
".fcgi" => (
"localhost" => (
"bin-path" => "/var/www/myproject/myproject.fcgi",
"socket" => "/tmp/myproject.sock",
"check-local" => "disable",
"min-procs" => 2,
"max-procs" => 4,
)
),
)
alias.url = (
"/media" => "/usr/local/lib/python1.6/dist-packages/Django-1.2.1-py2.6.egg/django/contrib/admin/media/",
)
url.rewrite-once = (
"^(/media.*)$" => "$1",
"^/favicon\.ico$" => "/media/favicon.ico",
"^(/.*)$" => "/myproject.fcgi$1",
)
}
myproject.fcgi:
#!/usr/bin/python2.6
import sys, os
# Add a custom Python path.
sys.path.insert(0, "..")
# Switch to the directory of your project. (Optional.)
os.chdir("/var/www/myproject")
os.environ['DJANGO_SETTINGS_MODULE'] = "settings"
from django.core.servers.fastcgi import runfastcgi
runfastcgi(["method=threaded", "daemonize=false"])
Answer: Once again I answer my own question. Put this into settings.py
FORCE_SCRIPT_NAME = ""
|
Python: date formatted with %x (locale) is not as expected
Question: I have a datetime object, for which I want to create a date string according
to the OS locale settings (as specified e.g. in Windows'7 region and language
settings).
Following Python's [datetime formatting
documentation](http://docs.python.org/library/datetime.html#strftime-and-
strptime-behavior), I used the `%x` format code which is supposed to output
"_Locale’s appropriate date representation._ ". I expect this "representation"
to be either Windows "short date" or "Long date" format, but it isn't either
one. (I have the short date format set to `d/MM/yyyy` and the long date format
to `dddd d MMMM yyyy`, but the output is `dd/MM/yy`)
What's wrong here: the Python documentation, the Python implementation, or my
expectation ? (and how to fix?)
Answer: After reading the [setlocale()
documentation](http://docs.python.org/library/locale.html#locale.setlocale), I
understood that the default OS locale is not used by Python as the default
locale. To use it, I had to start my module with:
import locale
locale.setlocale(locale.LC_ALL, '')
Alternatively, if you intend to only reset the locale's time settings, use
just `LC_TIME` as it breaks many fewer things:
import locale
locale.setlocale(locale.LC_TIME, '')
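After either call, `%x` picks up the locale's date format; a quick check:
    import datetime
    print(datetime.date.today().strftime('%x'))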
Surely there will be a valid reason for this, but at least this could have
been mentioned as a remark in the Python documentation for the %x directive.
|
Find text then add line after in Python
Question: I need to read a plist file and search for a string, then add a new line of
text on the next line. I can't imagine it will take much to do this. However
the plist is in binary format so not exactly sure how to deal with that.
Thanks in advance,
Aaron
#Convert plist to XML
os.system('plutil -convert xml1 com.apple.iChat.Jabber.plist')
AutoDiscovery = "<integer>0<integer>"
import fileinput
for line in fileinput.FileInput("com.apple.iChat.Jabber.plist",inplace=1):
line = line.replace("<key>AutoDiscoverHostAndPort</key>",AutoDiscovery)
print line,
#Concert plist to binary file
os.system('plutil -convert binary1 com.apple.iChat.Jabber.plist')
Answer: You want to convert it into xml format first:
plutil -convert xml1 file.plist
Then the rest should be fairly easy.
EDIT:
newFile = open('file.copy', 'w+')
for line in open('file'):
if (line.find('string_to_find') >= 0):
# do something with "line"
newFile.write(line)
newFile.close()
EDIT2:
# first convert the plist from binary to xml (e.g. plutil -convert xml1 your.plist)
import plistlib
plist = plistlib.readPlist('your.plist')
plist['key'] = 0
plistlib.writePlist(plist, 'your.plist')
# then convert the plist back to binary (plutil -convert binary1 your.plist)
|
How do I step through/debug a python web application?
Question: I can't seem to find any information on debugging a python web application,
specifically stepping through the execution of a web request.
is this just not possible? if no, why not?
Answer: If you put
import pdb
pdb.set_trace()
in your code, the web app will drop to a pdb debugger session upon executing
`set_trace`.
Also useful, is
import code
code.interact(local=locals())
which drops you to the python interpreter. Pressing Ctrl-d resumes execution.
Still more useful, is
import IPython.Shell
ipshell = IPython.Shell.IPShellEmbed()
ipshell(local_ns=locals())
which drops you into an IPython session (assuming you've installed IPython).
Here too, pressing Ctrl-d resumes execution.
|
Python: Simple file formating problem
Question: I'm using the code below to write to a file, but at the moment it writes
everything onto a new line.
import csv
antigens = csv.reader(open('PAD_n1372.csv'), delimiter=',')
lista = []
pad_file = open('pad.txt','w')
for i in antigens:
lista.append(i[16])
lista.append(i[21])
lista.append(i[0])
for k in lista:
pad_file.write(k+',')
pad_file.write('\n')
If say my "lista" looks like
[['apple','car','red'],['orange','boat','black']]
I would like the output in my text file to be:
apple,car,red
orange,boat,black
I know my new line character is in the wrong place but I do not know where to
place it; also, how would I remove the comma from the end of each line?
* * *
EDIT
Sorry my "lista" looks like
['apple','car','red','orange','boat','black']
Answer: If `lista` is `[['apple','car','red'],['orange','boat','black']]`, then each
`k` in your loop is going to be one of the sub-lists, so all you need to do is
join the elements of that sub-list on a `,` and output that as a single line:
for k in lista:
pad_file.write(','.join(k))
pad_file.write('\n')
* * *
**Edit** based on comments: If `lista` is `['apple', 'car', 'red', 'orange',
'boat', 'black']` and you want 3 elements per line, you can just change the
`for` target to a list comprehension that returns the appropriate sub-lists:
for k in [lista[x:x+3] for x in xrange(0, len(lista), 3)]:
pad_file.write(','.join(k))
pad_file.write('\n')
There are other ways to break a list into chunks; see [this SO
question](http://stackoverflow.com/questions/312443/how-do-you-split-a-list-
into-evenly-sized-chunks-in-python)
|
Gotchas of JavaScript's Number type (C's double)
Question: What are some important considerations for developers using JavaScript's
Number type? I know it's an implementation of C's `double` type, but I'm a
self-taught Python developer so that doesn't take me very far. Pointers to
well-written articles will be great answers.
Thanks!
Answer: In general, it's just the general problems that plague Floating Point based
arithmetic. For example, .1 + .2 == 0.30000000000000004 in JavaScript.
A good article on floating point arithmetic in general, and some of the
problems associated with it can be found
[here](http://en.wikipedia.org/wiki/Floating_point) on Wikipedia.
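The same rounding shows up in Python, since CPython floats are the same C doubles; a quick illustration (nothing beyond the standard interpreter assumed):

# Python floats and JavaScript Numbers are both IEEE 754 doubles,
# so they share the same rounding surprises.
print repr(0.1 + 0.2)                 # 0.30000000000000004
print 0.1 + 0.2 == 0.3                # False
print abs((0.1 + 0.2) - 0.3) < 1e-9   # True - compare with a tolerance instead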
* * *
Edit: a previous answer to a similar question that seemed to be pretty popular
- <http://stackoverflow.com/questions/588004/is-javascripts-math-
broken/588014#588014>
|
Calling code in a string without exec/eval, python
Question: I have this code that executes when a player attempts to eat something:
def eat(target='object'):
global current_room
global locations
global inventory
if target in inventory:
items[target]['on_eat'] #This is showing no results.
else:
print 'You have no ' + target + ' to eat.'
and this code for items(trimmed)
items = {
'strawberry': {
'weight': 1,
'text': 'The strawberry is red',
'on_eat': "normal_eat('strawberry', 'pretty good, but not as sweet as you expected')"
},
'trees': {
'weight': 50,
'text': 'The trees are tall with large, leaf filled branches blocking out a majority of sunlight.',
'on_eat': "forcesay('Eating trees? What the hell is your problem?')"
}
}
Is there a valid way of calling items[whatever]['on_eat'] without doing
something silly like exec() or eval()? If not, alternative formatting as an
example would also be appreciated.
Before this the items[everyitems]['on_eat'] values were not strings, but that
executed the on_eat for every item as soon as the code was run.
I have seen many answers to similar questions, but they don't deal with
unique arguments for each function - to put it better, they were more like
[this](http://stackoverflow.com/questions/680941/python-dynamic-function-names
"this")
Answer: You can store your function and function arguments as a `partial`:
from functools import partial
items = {
'strawberry': {
'weight': 1,
'text': 'The strawberry is red',
'on_eat': partial(normal_eat, 'strawberry', 'pretty good, but not as sweet as you expected')
},
'trees': {
'weight': 50,
'text': 'The trees are tall with large, leaf filled branches blocking out a majority of sunlight.',
'on_eat': partial(forcesay, 'Eating trees? What the hell is your problem?')
    }
}
def eat(target='object'):
# those globals are probably not necessary
if target in inventory:
items[target]['on_eat']() #Add ()'s to call the partial
else:
print 'You have no ' + target + ' to eat.'
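As a rough sketch of how the stored partials behave (normal_eat and forcesay here are hypothetical stand-ins for the game's real handlers):

from functools import partial

def normal_eat(item, comment):          # hypothetical stand-in
    print 'You eat the ' + item + '. It is ' + comment + '.'

def forcesay(text):                     # hypothetical stand-in
    print text

on_eat = partial(normal_eat, 'strawberry', 'pretty good, but not as sweet as you expected')
on_eat()    # calling the partial runs normal_eat with the stored arguments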
|
subprocess.Popen() has inconsistent behavior between Eclipse/PyCharm and terminal execution
Question: The problem I'm having is with Eclipse/PyCharm interpreting the results of
subprocess's Popen() differently from a standard terminal. All are using
python2.6.1 on OSX.
Here's a simple example script:
import subprocess
args = ["/usr/bin/which", "git"]
print "Will execute %s" % " ".join(args)
try:
p = subprocess.Popen(["/usr/bin/which", "git"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# tuple of StdOut, StdErr is the responses, so ..
ret = p.communicate()
if ret[0] == '' and ret[1] <> '':
msg = "cmd %s failed: %s" % (fullcmd, ret[1])
if fail_on_error:
raise NameError(msg)
except OSError, e:
print >>sys.stderr, "Execution failed:", e
With a standard terminal, the line:
ret = p.communicate()
gives me:
(Pdb) print ret
('/usr/local/bin/git\n', '')
Eclipse and PyCharm give me an empty tuple:
ret = {tuple} ('','')
Changing the shell= value does not solve the problem either. On the terminal,
setting shell=True, and passing the command in altogether (i.e.,
args=["/usr/bin/which git"]) gives me the same result: ret =
('/usr/local/bin/git\n', ''). And Eclipse/PyCharm both give me an empty tuple.
Any ideas on what I could be doing wrong?
Answer: Ok, found the problem, and it's an important thing to keep in mind when using
an IDE in a Unix-type environment. IDE's operate under a different environment
context than the terminal user (duh, right?!). I was not considering that the
subprocess was using a different environment than the context that I have for
my terminal (my terminal has bash_profile set to have more things in PATH).
This is easily verified by changing the script as follows:
import os
import subprocess
import sys

args = ["/usr/bin/which", "git"]
print "Current path is %s" % os.path.expandvars("$PATH")
try:
p = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# tuple of StdOut, StdErr is the responses, so ..
out, err = p.communicate()
if err:
        msg = "cmd %s failed: %s" % (" ".join(args), err)
except OSError, e:
print >>sys.stderr, "Execution failed:", e
Under the terminal, the path includes /usr/local/bin. Under the IDE it does
not!
This is an important gotcha for me - always remember about environments!
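If the subprocess really does need the richer PATH from your shell profile, one workaround (a sketch, assuming /usr/local/bin is the missing directory) is to hand Popen an explicit environment:

import os
import subprocess

env = os.environ.copy()
env['PATH'] = '/usr/local/bin:' + env.get('PATH', '')   # prepend what the IDE is missing

p = subprocess.Popen(["which", "git"], stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env)
out, err = p.communicate()
print out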
|
Asking "is hashable" about a Python value
Question: I am interested in taking an arbitrary dict and copying it into a new dict,
mutating it along the way.
One mutation I would like to do is swap keys and value. Unfortunately, some
values are dicts in their own right. However, this generates a "unhashable
type: 'dict'" error. I don't really mind just stringifying the value and
giving it the key. But, I'd like to be able to do something like this:
for key in olddict:
if hashable(olddict[key]):
newdict[olddict[key]] = key
    else:
newdict[str(olddict[key])] = key
Is there a clean way to do this that _doesn't_ involve trapping an exception
and parsing the message string for "unhashable type" ?
Answer: Since Python 2.6 you can use the abstract base class
[`collections.Hashable`](http://docs.python.org/library/collections.html#abcs-
abstract-base-classes):
import collections
>>> isinstance({}, collections.Hashable)
False
>>> isinstance(0, collections.Hashable)
True
This approach is also mentioned briefly in the documentation for
[**`__hash__`**](http://docs.python.org/reference/datamodel.html#object.__hash__).
> Doing so means that not only will instances of the class raise an
> appropriate `TypeError` when a program attempts to retrieve their hash
> value, but they will also be correctly identified as unhashable when
> checking `isinstance(obj, collections.Hashable)` (unlike classes which
> define their own `__hash__()` to explicitly raise `TypeError`).
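Applied to the key/value swap from the question, a sketch might look like this:

import collections

olddict = {'a': 1, 'b': {'nested': 2}}   # example input
newdict = {}
for key, value in olddict.items():
    if isinstance(value, collections.Hashable):
        newdict[value] = key
    else:
        newdict[str(value)] = key
print newdict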
|
Python compile all non-words except dot[.]
Question: I am trying to break a line on all non-word patterns except .(dot)
Usually I guess it can be done as [\W ^[.]] in Java, but how do I do it in
Python?
Answer:
>>> import re
>>> the_string="http://hello-world.com"
>>> re.findall(r'[\w.]+',the_string)
['http', 'hello', 'world.com']
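If the goal is really to break (split) the line rather than collect the matches, `re.split` with the complementary character class works too:

>>> re.split(r'[^\w.]+', the_string)
['http', 'hello', 'world.com']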
|
How to get stdout into a string (Python)
Question: I need to capture the stdout of a process I execute via subprocess into a
string to then put it inside a TextCtrl of a wx application I'm creating. How
do I do that?
**EDIT:** I'd also like to know how to determine when a process terminates
Answer: From the [subprocess
documentation](http://docs.python.org/library/subprocess.html#replacing-bin-
sh-shell-backquote):
from subprocess import *
output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0]
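`communicate()` only returns once the process has exited, so detecting termination afterwards is just a matter of checking `returncode`; a small sketch:

from subprocess import Popen, PIPE

p = Popen(["mycmd", "myarg"], stdout=PIPE)
output = p.communicate()[0]   # blocks until the process terminates
print p.returncode            # 0 conventionally means success

# p.poll() is the non-blocking variant: it returns None while the
# process is still running and the exit code once it has finished.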
|
Python ORM that automatically creates classes from DB schema
Question: is there a python ORM (object relational mapper) that has a tool for
automatically creating python classes (as code so I can expand them) from a
given database schema?
I'm frequently faced with small tasks involving different databases (like
importing/exporting from various sources etc.) and I thought python together
with the abovementioned tool would be perfect for that.
It should work like Visual Studios ADO.NET/Linq for SQL designer, where I can
just drop DB tables and VS creates classes for me ...
Thanks in advance.
Answer: Django does this.
<http://docs.djangoproject.com/en/1.2/howto/legacy-databases/#howto-legacy-
databases>
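The specific tool is Django's `inspectdb` management command, which introspects the database configured in your settings and writes model classes to stdout for you to edit; roughly:

python manage.py inspectdb > myapp/models.py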
|
Silencing libcurl/pycurl PUT and POST
Question: I have a python script which does a bunch of PUTs and POSTs, and when they are
successful curl will output the updated html to stdout. I was wondering if
there was a way to keep it from doing this? I don't really care about this
information, so sending it to a file isn't necessary, but that seems to be the
only solution I can find.
Answer: Try setting the WRITEFUNCTION option in the Curl object.
from StringIO import StringIO
body = StringIO()
(...)
c.setopt(c.WRITEFUNCTION, body.write)
Full example can be seen in [curl CVS
repository](http://pycurl.cvs.sourceforge.net/viewvc/pycurl/pycurl/tests/test_stringio.py?view=markup).
|
cant install libgmail in python
Question: i'm a newbie in python , and trying to install libgmail .. this is what i get
:
C:\libgmail-0.1.11>setup.py
Traceback (most recent call last):
File "C:\libgmail-0.1.11\setup.py", line 7, in <module>
import libgmail
File "C:\libgmail-0.1.11\libgmail.py", line 96
exec data in {'__builtins__': None}, {'D': lambda x: result.append(x)}
^
SyntaxError: invalid syntax
I think that libgmail is a bit older than my Python version, but I don't
know how to solve it, please help :-)
Thanks in advance, Amitos80
Answer: Which version of Python are you using? It's possible it's 3.x which doesn't
understand `exec` as a statement (in Python 3, `exec`, like `print` became a
function and is no longer a special keyword/statement).
The solution is to either find a port of `libgmail` to Python 3, or install
Python 2.7 for yourself instead.
|
Ironpython: IEnumerator
Question: I have a method that returns an IEnumerable of some type. Now I was wondering
how I can iterate the IEnumerable with Ironpython?
thanks
Answer: A simple for loop?
from System import *
from System.Collections.Generic import *
names = List[str]()
def get_names():
names = List[str]()
names.Add("Sam")
names.Add("Carla")
names.Add("Woody")
names.Add("Rebecca")
names.Add("Cliff")
names.Add("Norm")
return names
for name in get_names():
print name
|
Dynamically adding checkboxes with PyQt4
Question: I have a simple GUI built using python and PyQt4. After the user enters
something into the program, the program should then add a certain number of
checkboxes to the UI depending on what the user's input was. For testing
purposes, I have one checkbox existing in the application from start, and that
checkbox is nested inside of a QVBoxLayout, which is nested inside of a
QGroupBox. I have tried reading through the PyQt4 documentation for all of
this, but I have struggled to make any progress.
Here is how I am making the initial checkbox (basic output from QtCreator):
self.CheckboxField = QtGui.QGroupBox(self.GuiMain)
self.CheckboxField.setGeometry(QtCore.QRect(10, 70, 501, 41))
self.CheckboxField.setObjectName("CheckboxField")
self.verticalLayoutWidget = QtGui.QWidget(self.CheckboxField)
self.verticalLayoutWidget.setGeometry(QtCore.QRect(0, 10, 491, 21))
self.verticalLayoutWidget.setObjectName("verticalLayoutWidget")
self.CheckboxLayout = QtGui.QVBoxLayout(self.verticalLayoutWidget)
self.CheckboxLayout.setSizeConstraint(QtGui.QLayout.SetMinimumSize)
self.CheckboxLayout.setObjectName("CheckboxLayout")
self.checkBox = QtGui.QCheckBox(self.verticalLayoutWidget)
self.checkBox.setObjectName("checkBox")
self.CheckboxLayout.addWidget(self.checkBox)
Then here is my initial attempt to add a new checkbox (in a seperate file):
checkBox1 = QtGui.QCheckBox(self.window.CheckboxField)
checkBox1.setGeometry(QtCore.QRect(90, 10, 70, 17))
checkBox1.setText(QtGui.QApplication.translate("MainWindow", "Bob Oblaw", None, QtGui.QApplication.UnicodeUTF8))
checkBox1.setObjectName("checkBox1")
self.window.CheckboxLayout.addWidget(checkBox1)
self.window.CheckboxLayout.stretch(1)
self.window.CheckboxField.adjustSize()
self.window.CheckboxField.update()
There are no errors, the checkbox just doesn't show up.
Answer: I think you're making life hard for yourself by copying QtCreator's output
style. I think it's important to manually code some UIs to see how it works. I
suspect you're not adding the check box to the layout. Try something like this
(Import * used for clarity here):
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
class Window(QWidget):
def __init__(self):
QWidget.__init__(self)
layout = QVBoxLayout()
self.checks = []
for i in xrange(5):
c = QCheckBox("Option %i" % i)
layout.addWidget(c)
self.checks.append(c)
self.setLayout(layout)
if __name__ == '__main__':
app = QApplication(sys.argv)
w = Window()
w.show()
app.exec_()
|
django ViewDoesNotExist
Question: I'm getting a weird error and I can't track it down. The stack trace doesn't
give any clue as to the location of the error either. It's just giving me the
standard urlresolvers.py ViewDoesNotExist exception. Here is the error
message:
Could not import myapp.myview.views. Error was: No module named model
At first I thought I forgot to put an "s" on models somewhere in my code, but
after a search of the entire codebase, that is not the case.
Here's the traceback:
File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response
91. request.path_info)
File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py" in resolve
123. return self.callback, args, kwargs
File "C:\Python25\Lib\site-packages\django\core\urlresolvers.py" in _get_callback
132. raise ViewDoesNotExist("Could not import %s. Error was: %s" % (mod_name, str(e)))
Exception Value: Could not import myapp.myview.views. Error was: No module named model
Answer: From what you've posted, it seems like the error is in myapp.myview.views.
You already mentioned looking for misspellings of "models", which is good. You
might also try asking Django to validate your models to ensure that they are
properly importable (run this in your Django project root):
python manage.py validate
Beyond that, just keep following the imports in myapp.myview.views until you
see something odd. You can check to see if everything is properly importable
by opening a shell:
python manage.py shell
And attempting to import and/or try things from there.
Beyond that, someone may be able to assist you more if you post the full
traceback. Good luck!
|
Defining path to module's configuration files
Question: A Python module I'm developing has a master configuration file in
`/path/to/module/conf.conf`. The `/path/to/module`/ depends on the platform
(for instance, `/Users/me/module` in OS X, `/home/me/module` in Linux, etc).
Currently I define the `/path/to/module` in `__init__.py`, where I use logic:
if sys.platform == 'darwin':
ROOT = '/Users/me/module'
elif sys.platform == 'linux':
ROOT = '/home/me/module'
# et cetera
Once I have the configuration file root directory, I can open `conf.conf`
anywhere I want.
Do you have a better methodology?
Answer: Inside of your `__init__.py`, you could get the directory where the
`__init__.py` script lives using the `__file__` magic variable like so:
from os.path import dirname
ROOT = dirname(__file__)
Then you know that `conf.conf` will be located at `os.path.join(ROOT,
'conf.conf')`.
|
what python libraries should every python programmer know?
Question: > **Possible Duplicate:**
> [Favorite 3rd-party Python
> Libraries?](http://stackoverflow.com/questions/1764878/favorite-3rd-party-
> python-libraries)
there is a question [Favorite 3rd-party Python
Libraries?](http://stackoverflow.com/questions/1764878/favorite-3rd-party-
python-libraries)
i don't want know about favorite libraries, i want know a list of essential
libraries.
what libraries that every python programmer should know?
Answer: Depends on what kind of programming the given Python programmer does! If it
involves computation on numerical arrays, [numpy](http://numpy.scipy.org/);
for more general scientific programming, other parts of
[scipy](http://scipy.org/); for scraping often-badly-written HTML,
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) (perhaps but
not necessarily as included in [lxml](http://codespeak.net/lxml/)); for SSH
connections, tunneling, etc, [paramiko](http://www.lag.net/paramiko/); and so
on, and so forth.
Few programmers find a need to develop _all_ these kinds of software (and
many, many more besides). Besides, in many important and popular areas, there
isn't a single package that dominates the scene so utterly that "every" Python
programmer "has" to know it -- such areas as GUIs and web server/application
frameworks, for example, offer several worthwhile alternatives. Nowhere it is
written that (for any given area of software development) "there must be only
one";-).
|
organising classes and modules in python
Question: I'm getting a bit of a headache trying to figure out how to organise modules
and classes together. Coming from C++, I'm used to classes encapsulating all
the data and methods required to process that data. In python there are
modules however and from code I have looked at, some people have a lot of
_loose_ functions stored in modules, whereas others almost always bind their
functions to classes as methods.
For example say I have a data structure and would like to write it to disk.
One way would be to implement a save method for that object so that I could
just type
MyObject.save(filename)
or something like that. Another method I have seen in equal proportion is to
have something like
from myutils import readwrite
readwrite.save(MyObject,filename)
This is a small example, and I'm not sure how python specific this problem is
at all, but my general question is what is the best pythonic practice in terms
of functions vs methods organisation?
Answer: It seems like loose functions bother you. This is the python way. It makes
sense because a module in python is really just an object on the same footing
as any other object. It does have language level support for loading it from a
file but other than that, it's just an object.
so if I have a module `foo.py`:
import pprint
def show(obj):
pprint(obj)
Then the when I import it from `bar.py`
import foo
class fubar(object):
#code
def method(self, obj):
#more stuff
foo.show(obj)
I am essentially accessing a method on the `foo` object. The data attributes
of the `foo` module are just the globals that are defined in `foo`. A module
is the language level implementation of a singleton without the need to
prepend `self` to every method's argument list.
I try to write as many module level functions as possible. If some function
will only work with an instance of a particular class, I will make it a method
on the class. Otherwise, I try to make it work on instances of every class
that is defined in the module for which it would make sense.
The rationale behind the exact example that you gave is that if each class has
a save method, then if you later change how you are saving data (from say
filesystem to database or remote XML file) then you have to change every
class. If each class implements an interface to yield that data that it wants
saved, then you can write one function to save instances of every class and
only change that function once. This is known as the Single Responsibility
Principle: Each class should have only one reason to change.
|
Referencing list entries within a for loop without indexes, possible?
Question: A question of particular interest about python for loops. Engineering programs
often require values at previous or future indexes, such as:
for i in range(0,n):
value = 0.3*list[i-1] + 0.5*list[i] + 0.2*list[i+1]
etc...
However I rather like the nice clean python syntax:
for item in list:
#Do stuff with item in list
or for a list of 2d point data:
for [x,y] in list:
#Process x, y data
I like the concept of looping over a list without explicitly using an index to
reference the items in the list. I was wondering if there was a clean way to
grab the previous or next item without looping over the index (or without
keeping track of the index independently)?
EDIT:
Thanks Andrew Jaffe (and by proxy Mark Byers) and gnibbler for the simple,
extendable examples. I wasn't aware of the itertools or nwise modules till
now. John Machin - thanks for the very complex example of what NOT to do. You
put a lot of effort into this example, obviously the somewhat recursive
algorithm I presented cannot produce a list with the same number of elements
as the input list and it presents problems if not using explicit indexes. An
algorithm like this would commonly occur in signal processing.
Answer: Here's a recipe, based on the
[itertools](http://docs.python.org/library/itertools.html#recipes) pairwise
code, which does general n-wise grouping:
import itertools
def nwise(iterable, n=2):
"s->(s_0,s_1, ..., s_n), (s_1,s_2,..., s_n+1), ... "
ntup = itertools.tee(iterable, n)
for i, item in enumerate(ntup):
for ii in range(i):
next(item, None)
return itertools.izip(*ntup)
Which can be used thusly:
>>> import nwise
>>> ll = range(10)
>>> for tup in nwise.nwise(ll,3): print tup
...
(0, 1, 2)
(1, 2, 3)
(2, 3, 4)
(3, 4, 5)
(4, 5, 6)
(5, 6, 7)
(6, 7, 8)
(7, 8, 9)
[Thanks to Mark Byers' answer for the idea]
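Applied to the original smoothing example (a sketch; note it yields len(data) - 2 values, because the first and last elements have no neighbours on both sides):

data = [1.0, 2.0, 4.0, 8.0, 16.0]
smoothed = [0.3 * prev + 0.5 * cur + 0.2 * nxt
            for prev, cur, nxt in nwise.nwise(data, 3)]
print smoothed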
|
find value of forloop at which event occurred Python
Question: hey guys, this is very confusing...
i am trying to find the minimum of an array by:
for xpre in range(100): #used pre because I am using vapor pressures with some x molarity
xvalue=xarray[xpre]
for ppre in range(100): #same as xpre but vapor pressures for pure water, p
        pvalue=parray[ppre]
d=math.fabs(xvalue-pvalue) #d represents the difference(due to vapor pressure lowering, a phenomenon in chemistry)
darray.append(d) #darray stores the differences
mini=min(darray) #mini is the minimum value in darray
darr=[] #this is to make way for a new set of floats
all the arrays (xarr,parr,darr)are already defined and what not. they have 100
floats each
so my question is how would I find the pvap and the xvap @ which min(darr) is
found?
**edit** have changed some variable names and added variable descriptions,
sorry guys
Answer: A couple things:
1. Try [`enumerate`](http://docs.python.org/library/functions.html#enumerate)
2. Instead of `darr` being a `list`, use a `dict` and store the `dvp` values as keys, with the `xindex` and `pindex` variables as values
Here's the code
for xindex, xvalue in enumerate(xarr):
darr = {}
for pindex, pvalue in enumerate(parr):
dvp = math.fabs(xvalue - pvalue)
darr[dvp] = {'xindex': xindex, 'pindex': pindex}
mini = min(darr.keys())
minix = darr[mini]['xindex']
minip = darr[mini]['pindex']
minindex = darr.keys().index(mini)
print "minimum_index> {0}, is the difference of xarr[{1}] and parr[{2}]".format(minindex, minix, minip)
darr.clear()
**Explanation**
The `enumerate` function allows you to iterate over a list and also receive
the index of the item. It is an alternative to your `range(100)`. Notice that
I don't have the line where I get the value at index `xpre`, `ppre`, this is
because the `enumerate` function gives me both index and value as a tuple.
The most important change, however, is that instead of your `darr` being a
list like this:
[130, 18, 42, 37 ...]
It is now a dictionary like this:
{
130: {'xindex': 1, 'pindex': 4},
18: {'xindex': 1, 'pindex': 6},
43: {'xindex': 1, 'pindex': 9},
...
}
So now, instead of just storing the `dvp` values alone, I am also storing the
indices into `x` and `p` which generated those `dvp` values. Now, if I want to
know something, say, Which `x` and `p` values produce the `dvp` value of 43? I
would do this:
xindex = darr[43]['xindex']
pindex = darr[43]['pindex']
x = xarr[xindex]
p = parr[pindex]
Now `x` and `p` are the values in question.
**Note** I personally would store the values which produced a particular
`dvp`, and not the indices of those values. But you asked for the indices so I
gave you that answer. I'm going to assume that you have a reason for wanting
to handle indices like this, but in Python generally you do not find yourself
handling indices in this way when you are programming in Pythonic manner. This
is a very C way of doing things.
|
How do I design my software for remappable short-cut keys?
Question: I've just started on a GUI application which will feature several distinct
modes and a large number of keyboard shortcuts. Ideally I'd like to support
letting the user remap these, like in eclipse. What is - if any - the standard
solution to this situation? I can imagine the main window captures all key-
presses and translates them to key-neutral events for all the child controls as
one way, but it feels clumsy. Are there any standard frameworks for doing
this? If so, particularly python or open-sourced ones that I could use as an
example are of interest.
Answer: I might use something like the Command Pattern. Create a hashtable, where the
key is the key pressed, and the value is a function (or object with a
function) that's the action to be executed.
While there's probably a much faster way to do this in CPU-time, that seems
the cleanest and most maintainable way, and it's fast enough for a single-user
application that clean code should be pretty important.
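A rough Python sketch of that idea - the key map is plain data, so letting the user remap a shortcut is just a dictionary update:

def save_document():      # hypothetical actions; a real GUI would call into the app
    print "saving..."

def open_document():
    print "opening..."

keymap = {
    'Ctrl+S': save_document,
    'Ctrl+O': open_document,
}

def on_key(combo):
    action = keymap.get(combo)
    if action is not None:
        action()

def remap(combo, action):
    keymap[combo] = action    # user-driven remapping

on_key('Ctrl+S')              # prints "saving..."
remap('Ctrl+Shift+S', save_document)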
|
Make Python ignore .pyc files
Question: Is there a way to make Python ignore any .pyc files that are present and
always interpret all the code (including imported modules) directly? Google
hasn't turned up any answers, so I suspect not, but it seemed worth asking
just in case.
(Why do I want to do this? I have a large pipeline of Python scripts which are
run repeatedly over a cluster of a couple hundred computers. The Python
scripts themselves live on a shared NFS filesystem. Somehow, rarely, after
having been run hundreds of times over several hours, they will suddenly start
crashing with an error about not being able to import a module. Forcing the
regeneration of the .pyc file fixes the problem. I want, of course, to fix the
underlying causes, but in the meantime we also need the system to continue
running, so it seems like ignoring the .pyc files if possible would be a
reasonable workaround).
P.S. I'm using Python 2.5, so I can't use -B.
Answer: It's not exactly what you asked for, but would removing the existing .pyc
files and then not creating any more work for you? In that case, you could use
the -B option:
>python --help
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
|
Static class members python
Question: So I'm using static class members so I can share data between class methods
and static methods of the same class (there will only be 1 instantiation of
the class). I understand this fine, but I'm just wondering when the static
members get initialized? Is it on import? On the first use of the class?
Because I'm going to be calling the static members of this class from more
than 1 module (therefore more than 1 import statement). Will all the modules
accessing the static methods share the same static data members? And if my
main client deletes the instance of my class, and then recreates it (without
terminating altogether or re-importing stuff), will my data members be
preserved?
Answer: They will be initialized at class definition time, which will happen at import
time if you are importing the class as part of a module. This assumes a
"static" class member definition style like this:
class Foo:
bar = 1
print Foo.bar # prints '1'
Note that, this being a static class member, there is no need to instantiate
the class.
The import statement will execute the contents of a module exactly once, no
matter how many times or where it is executed.
Yes, the static members will be shared by any code accessing them.
Yes, the static members of a class will be preserved if you delete an object
whose type is that class:
# Create static member
class Foo:
bar = 1
# Create and destroy object of type Foo
foo = Foo()
del foo
# Check that static members survive
print Foo.bar # Still prints '1'
|
Python and C++ integration. Python prints string as multiple lines
Question: I'm trying to write a program in python to run a program in C++. It wasn't
working right, so I made the most basic version of each I could. The C++
program merely takes in a string from stdin, and then prints it out. The
Python code is written as follows:
import popen2, string, StringIO
fin, fout = popen2.popen2("PyTest")
msg = ur"Hello, world!"
print msg
fout.write(msg)
print fin.readline()
The output, however looks like this:
Hello, world!
Hello,
The problem I keep seeing is that spaces seem to break apart the string, even
though it is a string literal. I'm not really sure what to do here. Any
suggestions?
Answer: In C++, `std::cin >> mystring` uses spaces as separators. Use `std::getline`
instead if you want to gobble up a whole line at a time.
|
How can I know if the user is connected to the local machine via ssh in my python script?
Question: How can I know if the user is connected to the local machine via ssh in my
python script?
Answer: You can use the `os` module to check for the existence of the environment
variable `SSH_CONNECTION`.
>>> import os
>>> using_ssh = 'SSH_CONNECTION' in os.environ
|
Python: How to check if a unicode string contains a cased character?
Question: I'm doing a filter wherein I check if a unicode (utf-8 encoding) string
contains no uppercase characters (in all languages). It's fine with me if the
string doesn't contain any cased character at all.
For example: 'Hello!' will not pass the filter, but "!" should pass the
filter, since "!" is not a cased character.
I planned to use the islower() method, but in the example above, "!".islower()
will return False.
According to the Python Docs, "The python unicode method islower() returns
True if the unicode string's cased characters are all lowercase and the string
contained at least one cased character, otherwise, it returns False."
Since the method also returns False when the string doesn't contain any cased
character, ie. "!", I want to do check if the string contains any cased
character at all.
Something like this....
string = unicode("!@#$%^", 'utf-8')
#check first if it contains cased characters
if not contains_cased(string):
return True
return string.islower():
Any suggestions for a contains_cased() function?
Or probably a different implementation approach?
Thanks!
Answer:
import unicodedata as ud
def contains_cased(u):
return any(ud.category(c)[0] == 'L' for c in u)
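Used in the filter from the question (note this treats any letter as "cased"; tighten the check to the categories 'Lu', 'Ll' and 'Lt' if you want strictly cased letters):

def passes_filter(u):
    if not contains_cased(u):
        return True
    return u.islower()

print passes_filter(u'Hello!')   # False
print passes_filter(u'!@#$%^')   # True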
|
python log manager
Question: Hello, I have several python programs that run in parallel. I want to write a
python program which will manage the programs' logs, which means that the other
programs will send log messages to this program and the program will write them
to the log file. Another important feature is that if one of the programs
crashes, the 'manage log program' will know about it and can write that to the
log file. I tried to use this sample
<http://docs.python.org/library/logging.html#sending-and-receiving-logging-
events-across-a-network> but I failed.
Can anyone please help me?
Answer: This sounds like a horribly complicated scheme, frankly. I would try a simple
solution first, like [the Twisted
logger](http://twistedmatrix.com/documents/current/core/howto/logging.html) in
`twisted.python.log`. There is an example of how to log exceptions
automagically.
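A minimal sketch with the Twisted logger (assuming Twisted is installed; each worker would log like this, and the observer set up by startLogging decides where messages end up):

import sys
from twisted.python import log

log.startLogging(sys.stdout)     # or pass a file object for a persistent log file

log.msg("worker started")
try:
    1 / 0
except Exception:
    log.err()                    # records the traceback of the failure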
|
AttributeError: 'module' object has no attribute 'QtString'
Question: My development eviroment:
> os: windows xp
>
> python: python-3.1.2.msi
>
> pyqt: PyQt-Py3.1-gpl-4.7.4-1.exe
code:
import sys
from PyQt4 import QtCore, QtGui
app = QtGui.QApplication(sys.argv)
s = QtCore.QtString()
sys.exit(app.exec_())
It always show me
> in 'module'
>
> s = QtCore.QtString()
>
> AttributeError: 'module' object has no attribute 'QtString'
I chaged code:
import sys
from PyQt4.QtGui import *
from PyQt4.QtCore import *
app = QApplication(sys.argv)
s = QtString()
sys.exit(app.exec_())
Then it always show me like this:
> in 'module'
>
> s = QtString()
>
> NameError: name 'QtString' is not defined
what should i do?
Answer: Do you mean `QString` instead of `QtString` ?
(you can do `help(QtCore)` in the python interpreter and search for `string`)
|
Has the use of C to implement other languages constrained their designs in any way?
Question: It seems that most new programming languages that have appeared in the last 20
years have been written in C. This makes complete sense as C can be seen as a
sort of portable assembly language. But what I'm curious about is whether this
has constrained the design of the languages in any way. What prompted my
question was thinking about how the C stack is used directly in Python for
calling functions. Obviously the programming language designer can do whatever
they want in whatever language they want, but it seems to me that the language
you choose to write your new language in puts you in a certain mindset and
gives you certain shortcuts that are difficult to ignore. Are there other
characteristics of these languages that come from being written in that
language (good or bad)?
Answer: I tend to disagree.
I don't think it's so much that a language's compiler or interpreter is
implemented in C -- after all, you can implement a virtual machine with C that
is completely unlike its host environment, meaning that you _can_ get away
from a C / near-assembly language mindset.
However, it's more difficult to claim that the C language _itself_ didn't have
any influence on the design of later languages. Take for example the usage of
curly braces `{ }` to group statements into blocks, the notion that whitespace
and indentation is mostly unimportant, native types' names (`int`, `char`,
etc.) and other keywords, or the way how variables are defined (ie. type
declaration first, followed by the variable's name, optional initialization).
Many of today's popular and wide-spread languages (C++, Java, C#, and I'm sure
there are even more) share these concepts with C. (These probably weren't
completely new with C, but AFAIK C came up with that particular mix of
language syntax.)
|
How do I detect if stdout is connected to a tty in Perl?
Question: I'm looking for the Perl equivalent to this Python code:
from sys import stdout
if stdout.isatty():
print "yes"
else:
print "no"
Answer: Use the [-t filetest operator](http://perldoc.perl.org/functions/-X.html).
print -t STDOUT ? "Yes\n" : "No\n"
Note that in Perl, STDOUT can be tied (essentially an overcomplicated
overloaded object) so output to STDOUT may still reach a TTY even if it's not
directly attached to one.
|
How to import COM into Ironpython?
Question: In CPython, I can use win32com.
But in IronPython, I don't know how to import it, because in .NET one always
uses Visual Studio to interop with the COM and to use it.
Answer: You should be able to create an IDispatch object using:
from System import Type, Activator
Activator.CreateInstance(Type.GetTypeFromProgID(com_type_name))
This is equivalent to win32com.client.Dispatch(com_type_name).
If there's a type lib you should be able to do:
import clr
import System
typelib = clr.LoadTypeLibrary(System.Guid("00020905-0000-0000-C000-000000000046"))
word = typelib.Word.Application()
I don't know what that's equivalent to. I'm not much of an expert on this but
I took those from [IronPython's
cominterop_util](http://ironpython.codeplex.com/SourceControl/changeset/view/75792#1090265)
which is used in the tests. There's more stuff in the
IronPython\Tests\interop\com directory which might be useful.
|
How to convert unicode objects to normal objects in Python
Question: I currently have a deep object, and it is all unicode (sadly).
I am to a point where a variable is either going to be a dict, or a bool. In
this case, I do
`if type( my_variable ) is BooleanType:`
But this is not triggered because the type is actually Unicode for all values.
How do I convert this unicode object to a normal object so I can correctly
read the type, without destroying the data?
Thanks!
Here is the result of print(repr(variable)). It shows the Bools as not being
unicode (unlike what I first though) but still giving me troubles.
{u'forms': {u'financing': {u'view': True, u'delete': True}, u'employment': {u'view': True, u'delete': True}, u'service': {u'view': True, u'delete': True}}, u'content': {u'articles': {u'edit': True, u'add': True, u'view': True, u'delete': True}, u'slideshow': {u'edit': True, u'view': True}, u'pages': {u'edit': True, u'add': True, u'view': True, u'delete': True}}, u'people': {u'edit': True, u'sort-staff': True, u'sort-riders': True, u'add': True, u'delete': True, u'view': True}, u'events': {u'edit': True, u'add': True, u'view': True, u'delete': True}, u'settings': {u'edit': True, u'view': True}}
Answer: ### Do not use `type` unless you are _really really sure_ that you want to.
In this case, you don't -- especially checking for `bool`, given Python's
flexibility for what can be considered as boolean! For instance, what if you
are given `None`? How about an empty string? How about `[]`?
The solution to this problem is the use of Abstract Base Classes (ABCs), which
allow you to specify exactly what an object should be able to do, instead of
what `type` it is. The `collections` module comes with a bunch of these:
import collections
if isinstance( ..., collections.MutableMapping ):
...
This permits anything 'dictionary-like', so that you retain polymorphism. If
you need more careful specification ("I want `__getitem__` and `__delitem__`
but not necessarily `__setitem__`!"), you can write your own -- see the
definition of the ABCs in the source of the collections module for starters.
Are you sure that you want this functionality? If you do it correctly (with
ABCs), it's not an inherently bad idea, but that's not to say you should abuse
it!
* * *
Edit: I'm not sure that you understand what Unicode is or how Python handles
it. This is one of the major differences between Python 2.x and Python 3.x,
which are you using?
Re-edit: Ah, ok, you are using Python 2.x and you have a dictionary with
Unicode string keys. I'm not sure what you were doing that caused a problem,
since Unicode strings work basically just like ordinary strings. The
`MutableMapping` check above will work fine.
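For the specific dict-or-bool decision in the question, a small sketch that walks the nested structure without caring that the keys are unicode strings:

import collections

def walk(node, path=()):
    if isinstance(node, collections.MutableMapping):
        for key, value in node.items():
            walk(value, path + (key,))
    else:
        # node is one of the leaf values (True/False in the sample data)
        print path, '->', node

walk({u'forms': {u'financing': {u'view': True, u'delete': False}}})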
|
In python, looking for an alternative to Shelve (too slow for large dictionaries)
Question: I am storing a table using python, and I need persistance.
Essentially I am storing the table as a dictionary string to numbers. And the
whole is stored with shelve
self.DB=shelve.open("%s%sMoleculeLibrary.shelve"%(directory,os.sep),writeback=True)
I use writeback to true as I found the system tend to be unstable if I don't.
So after the computations the system needs to close the database, and store it
back. Now the database (the table) is about 540MB, and it is taking ages. The
time exploded after the table grew to about 500MB. But I need a much bigger
table. In fact I need two of them.
I am probably using the wrong form of persistence. Any suggestions?
Answer: For storing a large dictionary of `string : number` key-value pairs, I'd
suggest a JSON-native storage solution such as
[MongoDB](http://www.mongodb.org/). It has a wonderful API for Python,
[Pymongo](https://api.mongodb.com/python/current/). MongoDB itself is
lightweight and incredibly fast, and json objects will natively be
dictionaries in Python. This means that you can use your `string` key as the
object ID, allowing for compressed storage and quick lookup.
As an example of how easy the code would be, see the following:
d = {'string1' : 1, 'string2' : 2, 'string3' : 3}
from pymongo import Connection
conn = Connection()
db = conn['example-database']
collection = db['example-collection']
for string, num in d.items():
collection.save({'_id' : string, 'value' : num})
# testing
newD = {}
for obj in collection.find():
newD[obj['_id']] = obj['value']
print newD
# output is: {u'string2': 2, u'string3': 3, u'string1': 1}
You'd just have to convert back from unicode, which is trivial.
|
Python auth_handler not working for me
Question: I've been reading about Python's urllib2's ability to open and read
directories that are password protected, but even after looking at examples in
the docs, and here on StackOverflow, I can't get my script to work.
import urllib2
# Create an OpenerDirector with support for Basic HTTP Authentication...
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm=None,
uri='https://webfiles.duke.edu/',
user='someUserName',
passwd='thisIsntMyRealPassword')
opener = urllib2.build_opener(auth_handler)
# ...and install it globally so it can be used with urlopen.
urllib2.install_opener(opener)
socks = urllib2.urlopen('https://webfiles.duke.edu/?path=/afs/acpub/users/a')
print socks.read()
socks.close()
When I print the contents, it prints the contents of the login screen that the
url I'm trying to open will redirect you to. Anyone know why this is?
Answer: `auth_handler` is only for basic HTTP authentication. The site here contains a
HTML form, so you'll need to submit your username/password as POST data.
I recommend you using the `mechanize` module that will simplify the login for
you.
Quick example:
import mechanize
browser = mechanize.Browser()
browser.open('https://webfiles.duke.edu/?path=/afs/acpub/users/a')
browser.select_form(nr=0)
browser.form['user'] = 'username'
browser.form['pass'] = 'password'
req = browser.submit()
print req.read()
|
How to refer to "\" sign in python string
Question: I have a problem with referring to a special symbol in a string:
I have: `path='C:\dir\dir1\dir2\filename.doc'`
and I want `filename`.
When I try: `filename=path[path.rfind("\"):-4]`
then the interpreter says it's an error right from the "\" since it is treated
as a comment.
Answer: You can use `"\\"`, technically it would be better to use `os.path.sep` if you
insist on using backslashes. But better yet, use `/` in your paths, it works
fine on Windows
Python has [builtin functions to manipulate
paths](http://docs.python.org/library/os.path.html). Note that you need to
double the backslashes if you still prefer them to forwardslashes
>>> import os
>>> path='C:\\dir\\dir1\\dir2\\filename.doc'
>>> os.path.splitext(os.path.basename(path))
('filename', '.doc')
and using forwardslashes
>>> path='C:/dir/dir1/dir2/filename.doc'
>>> os.path.splitext(os.path.basename(path))
('filename', '.doc')
|
How to get all the minimum elements according to its first element of the inside list in a nested list?
Question: Simply put! there is this list say `LST =
[[12,1],[23,2],[16,3],[12,4],[14,5]]` and i want to get all the minimum
elements of this list according to its first element of the inside list. So
for the above example the answer would be `[12,1]` and `[12,4]`. Is there any
typical way in python of doing this? Thanking you in advance.
Answer: Two passes:
minval = min(LST)[0]
return [x for x in LST if x[0] == minval]
One pass:
def all_minima(iterable, key=None):
    if key is None: key = lambda x: x
hasminvalue = False
minvalue = None
minlist = []
for entry in iterable:
value = key(entry)
if not hasminvalue or value < minvalue:
minvalue = value
hasminvalue = True
minlist = [entry]
elif value == minvalue:
minlist.append(entry)
return minlist
from operator import itemgetter
return all_minima(LST, key=itemgetter(0))
|
python import error
Question: what's wrong with my imports?
App folder structure:
myapp/
* models/models.py contains `SpotModel()`
* tests/tests.py contains TestSpotModel(unittest.TestCase). tests.py imports `from myapp.models.models import *` which works like a charm
* scripts/import.py contains `from myapp.models.models import *`
the problem is that import.py when executed results in an error:
ImportError: No module named myapp.models.models
but tests.py runs.
I have `__init__.py` files in `myapp/__init__.py`, `myapp/models/__init__.py`,
`myapp/tests/__init__.py` and as mentioned, running the unit tests using
nosetests works as intended.
Answer: It is `__init__.py` not `init.py`. Make sure each of the directory in
hierarchy contains it in order to be able to import.
**EDIT:** I managed to reproduce it. Here's the directory structure:
cesar@cesar-laptop:/tmp/asdasd$ tree
.
`-- myapp
|-- __init__.py
|-- models
| |-- __init__.py
| `-- models.py
|-- scripts
| |-- data.py
| `-- __init__.py
`-- tests
|-- __init__.py
`-- tests.py
I put the following code at the very beginning of the `data.py` to narrow down
the problem:
import sys
import pprint
pprint.pprint(sys.path)
from myapp.models.models import *
Running the `data.py` the way OP indicated yeilds ImportError:
cesar@cesar-laptop:/tmp/asdasd$ python myapp/scripts/data.py
['/tmp/asdasd/myapp/scripts',
'/usr/lib/python2.6',
'/usr/lib/python2.6/plat-linux2',
'/usr/lib/python2.6/lib-tk',
-- Skipped --
'/usr/local/lib/python2.6/dist-packages']
Traceback (most recent call last):
File "myapp/scripts/data.py", line 6, in <module>
from myapp.models.models import *
ImportError: No module named myapp.models.models
But this way works like a charm:
cesar@cesar-laptop:/tmp/asdasd$ python -m myapp.scripts.data
['',
'/usr/lib/python2.6',
'/usr/lib/python2.6/plat-linux2',
'/usr/lib/python2.6/lib-tk',
-- Skipped --
'/usr/local/lib/python2.6/dist-packages']
Note the difference in the first entry of `sys.path`.
|
Simulate Mouse Clicks on Python
Question: I'm currently in the process of making my Nintendo Wiimote (Kinda sad
actually) to work with my computer as a mouse. I've managed to make the
nunchuk's stick control actually move the mouse up and down, left and right on
the screen! This was so exciting. Now I'm stuck.
I want to left/right click on things via python when I press A. When I went to
do a search, all it came up with was tkinter?
So my question is, what do I call to make python left/right click on the
desktop, and if it's possible, maybe provide a snippet?
Thank you for your help!
NOTE: I guess I forgot to mention that this is for Linux.
Answer: You can use [PyMouse](https://github.com/pepijndevos/PyMouse) which has now
merged with [PyUserInput](https://github.com/SavinaRoja/PyUserInput). I
installed it via pip:
apt-get install python-pip
pip install pymouse
In some cases it used the cursor and in others it simulated mouse events
without the cursor.
from pymouse import PyMouse
m = PyMouse()
m.position() #gets mouse current position coordinates
m.move(x,y)
m.click(x,y) #clicks at (x, y); an optional third argument selects the mouse button
m.press(x,y) #mouse button press
m.release(x,y) #mouse button release
You can also specify which mouse button you want used. Ex left button:
m.click(x,y,1)
Keep in mind, on Linux it requires Xlib.
|
How much overhead do decorators add to Python function calls
Question: I've been playing around with a timing decorator for my pylons app to provide
on the fly timing info for specific functions. I've done this by creating a
decorator & simply attaching it to any function in the controller I want
timed.
It's been pointed out however that decorators could add a fair amount of
overhead to the call, and that they run 2-3x slower than an undecorated
function.
Firstly, I would expect that executing a decorated function would take a mite
longer than an undecorated one, but I would expect that overhead to be in the
thousandths of seconds & be negligible compared to a SQL insert call. The
decorator itself does simple timing calculations using time.time() &
some very simple aggregation.
Do decorators add significant overhead to a system? I can't find anything to
back that up.
Answer: The overhead added by using a decorator should be just one extra function
call.
The work being done by the decorator isn't part of the overhead as your
alternative is to add the equivalent code to the decorated object.
So it's possible that the decorated function takes twice as long to run, but
that's because the decorator is doing some important work that takes roughly
the same time to run as the undecorated function.
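A quick way to see the cost for yourself is to time a trivial function with and without a pass-through decorator (numbers vary by machine; the absolute per-call difference is what matters):

import functools
import timeit

def passthrough(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def plain():
    return 1 + 1

@passthrough
def decorated():
    return 1 + 1

print timeit.timeit(plain, number=1000000)
print timeit.timeit(decorated, number=1000000)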
|