Seeding the random number Generator In PHP
Question: I have an application in which I have to select a number out of many
numbers according to their weights. Every time I select one, I send the result
to Flash. I found an algorithm in Python, implemented it in PHP, and was
testing its results. Running that algorithm in Python gives good results, but
in PHP not so good. E.g. with (1=>30, 2=>40, 3=>30), after running many times,
the probability of occurrence of the first number in the weighted array is
always higher, but in Python it is uniform. I have attached the PHP code.
define("MAX",100000);
$reelfrequencies=array(30,40,30);
echo weightedselect($reelfrequencies);
/*function weightedselect($frequency)
{
$arr=cumWghtArray($frequency);//array(35,96,100);
print_r($arr);
$len=sizeof($frequency);
$count=array();
echo $r=mt_rand(0,$arr[$len-1]);
$index=binarysearch($arr,$r,0,$len-1);
return $index;
}*/
function cumWghtArray($arr)
{
$cumArr=array();
$cum=0;
$size=sizeof($arr);
for($i=0;$i<$size;$i++)
{
$cum+=$arr[$i];
array_push($cumArr,$cum);
}
return $cumArr;
}
function weightedselect($frequency)
{
$arr=cumWghtArray($frequency);//array(35,96,100);
$len=sizeof($frequency);
$count=array();
$count[0]=$count[1]=$count[2]=0;
for($i=0;$i<MAX;$i++)
{
$r=mt_rand(0,$arr[$len-1]);
$index=binarysearch($arr,$r,0,$len-1);
$count[$index]++;
}
for($i=0;$i<3;$i++)
{
$count[$i]/=MAX;
echo $i." ".$count[$i]."\n";
}
}
function binarySearch($ar,$value,$first,$last)
{
if($last<$first)
return -1;
$mid=intVal(($first+$last)/2);
$a=$ar[$mid];
if($a===$value)
return $mid;
if($a>$value&&(($mid-1>=0&&$ar[$mid-1]<$value)||$mid==0))
return $mid;
else if($a>$value)
$last=$mid-1;
else if($a<$value)
$first=$mid+1;
return binarySearch($ar,$value,$first,$last);
}
Here is the Python code. I have taken this code from this forum.
import random
import bisect
import collections
def cdf(weights):
total=sum(weights)
result=[]
cumsum=0
for w in weights:
cumsum+=w
result.append(cumsum/total)
return result
def choice(population,weights):
assert len(population) == len(weights)
cdf_vals=cdf(weights)
x=random.random()
idx=bisect.bisect(cdf_vals,x)
return population[idx]
weights=[0.30,0.40,0.30]
population="ABC"
counts={"A":0.0,"B":0.0,"C":0.0}
max=10000
for i in range(max):
c=choice(population,weights)
counts[c]=counts[c]+1
print(counts)
for k, v in counts.iteritems():
counts[k]=v/max
print(counts)
The problem is the mt_rand() function, which is not uniform, while Python's
random.random() is very uniform. Which random function should I implement in
PHP, with a proper seeding value every time it runs? I was thinking of using
Wichmann-Hill (used by Python's random.random) but how would I provide the seed?
Answer: Both `rand` and `mt_rand` should be more than sufficiently random for
your task here. If you needed to seed `mt_rand` you could use `mt_srand`, but
there's no need since PHP 4.2, as this is done for you.
I suspect the issue is with your code, which seems unnecessarily involved
given what I believe you're trying to do, which is just pick a random number
with weighted probabilities.
This may help: [Generating random results by weight in
PHP?](http://stackoverflow.com/questions/445235/generating-random-results-by-
weight-in-php)
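For reference, here is a minimal Python sketch of the usual linear-scan
approach to weighted selection (not taken from the linked answer, just an
illustration of the idea):

    import random

    def weighted_choice(weights):
        # pick an index with probability proportional to its weight
        r = random.uniform(0, sum(weights))
        cum = 0
        for i, w in enumerate(weights):
            cum += w
            if r <= cum:
                return i
        return len(weights) - 1  # guard against floating-point edge cases

    # with weights (30, 40, 30), index 1 should come up about 40% of the time
    print weighted_choice([30, 40, 30])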
|
python's webbrowser launches IE instead of default on windows 7
Question: I'm attempting to launch a local html file from python in the default browser.
Right now my default is google chrome. If I double-click on a .html file,
chrome launches.
When I use python's webbrowser.open, IE launches instead, with a blank address
bar.
Python 2.7.1 (r271:86832, Nov 27 2010, 17:19:03) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import webbrowser
>>> filename = 'test.html'
>>> webbrowser.open('file://'+filename)
True
>>> print(webbrowser.get().__class__.__name__)
WindowsDefault
I've checked my default programs and they look correct. I'm on Win 7 SP1. Why
is chrome not launching?
**Update**: The code will be running on unknown OSes and machines, so
registering browsers or path updates are not options. I'm thinking that
parsing the url for `file://` and then doing an `os.path.exists` check and
`os.path.realpath` might be the answer.
Answer: My main issue was a bad URL, from attempting to prepend `file://` to a
relative path. It can be fixed with this:
webbrowser.open('file://' + os.path.realpath(filename))
Using `webbrowser.open` will try multiple methods until one "succeeds", which
is a loose definition.
The `WindowsDefault` class calls `os.startfile()` which fails and returns
`False`. I can verify that by entering the URL in the windows run command and
seeing an error message rather than a browser.
Both `GenericBrowser` and `BackgroundBrowser` will call `subprocess.Popen()`
with an exe, which will succeed, even with a bad URL, and return `True`. IE
gives no indication of the issue; all the other browsers show a nice message
saying they can't find the file.
1. `GenericBrowser` is set by the environment variable `BROWSER` and is first.
2. `WindowsDefault` is second.
3. `BackgroundBrowser` is last and includes the fall back IE if nothing else works.
Here is my original setup:
>>> import webbrowser
>>> webbrowser._tryorder
['windows-default',
'C:\\Program Files\\Internet Explorer\\IEXPLORE.EXE']
>>> webbrowser._browsers.items()
[('windows-default', [<class 'webbrowser.WindowsDefault'>, None]),
('c:\\program files\\internet explorer\\iexplore.exe', [None, <webbrowser.BackgroundBrowser object at 0x00000000022E3898>])]
>>>
Here is my setup after modifying the environment variables:
C:>path=C:\Program Files (x86)\Mozilla Firefox;%path%
C:>set BROWSER=C:\Users\Scott\AppData\Local\Google\Chrome\Application\chrome.exe
C:>python
Python 2.7.1 (r271:86832, Nov 27 2010, 17:19:03) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import webbrowser
>>> webbrowser._tryorder
['C:\\Users\\Scott\\AppData\\Local\\Google\\Chrome\\Application\\chrome.exe',
'windows-default',
'firefox',
'C:\\Program Files\\Internet Explorer\\IEXPLORE.EXE']
>>> webbrowser._browsers.items()
[('windows-default', [<class 'webbrowser.WindowsDefault'>, None]),
('c:\\program files\\internet explorer\\iexplore.exe',[None, <webbrowser.BackgroundBrowser object at 0x000000000235E828>]),
('firefox', [None, <webbrowser.BackgroundBrowser object at 0x000000000235E780>]),
('c:\\users\\scott\\appdata\\local\\google\\chrome\\application\\chrome.exe', [None, <webbrowser.GenericBrowser object at 0x000000000235E8D0>])]
>>>
The `webbrowser._tryorder` gives the list of browsers tried. Registering
chrome, adding a BROWSER env var, or modifying my path would all have gotten
me the correct browser with a better error message.
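For example, registering Chrome by hand would look something like this (a
sketch reusing the Chrome path from the setup above; `BackgroundBrowser` just
launches the exe without waiting on it):

    import os
    import webbrowser

    chrome_path = r'C:\Users\Scott\AppData\Local\Google\Chrome\Application\chrome.exe'
    webbrowser.register('chrome', None, webbrowser.BackgroundBrowser(chrome_path))
    webbrowser.get('chrome').open('file://' + os.path.realpath('test.html'))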
Thanks for the help guys, I couldn't have solved this without your ideas.
|
What is the pythonic way to calculate dot product?
Question: I have two lists, one named A, the other named B. Each element in A
is a triple, and each element in B is just a number. I would like to
calculate the result defined as:
result = A[0][0] * B[0] + A[1][0] * B[1] + ... + A[n-1][0] * B[n-1]
I know the logic is easy, but how do I write it in a pythonic way?
Thanks!
Answer:
import numpy
result = numpy.dot( numpy.array(A)[:,0], B)
<http://docs.scipy.org/doc/numpy/reference/>
If you want to do it without numpy, try
sum( [a[i][0]*b[i] for i in range(len(b))] )
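A variant of the pure-Python version that avoids indexing, pairing the two
lists with `zip` (A and B as defined in the question):

    result = sum(a[0] * b for a, b in zip(A, B))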
|
Finding repeats in multiple lists read from CSV File (Python)
Question: Title seems confusing, but let's say I'm working with the following CSV file
('names.csv').
name1,name2,name3
Bob,Jane,Joe
Megan,Tom,Jane
Jane,Joe,Rob
My question is, how would I go about writing code that returns the string that
occurs at least 3 times? So the output should be 'Jane', because that occurs
at least 3 times. Really confused here... perhaps some sample code would help
me understand better?
So far I have:
import csv
reader = csv.DictReader(open("names.csv"))
for row in reader:
names = [row['name1'], row['name2'], row['name3']]
print names
This returns:
['Bob', 'Jane', 'Joe']
['Megan', 'Tom', 'Jane']
['Jane', 'Joe', 'Rob']
Where do I go from here? Or am I going about this wrong? I'm really new to
Python (well, programming altogether), so I have close to no clue what I'm
doing..
Cheers
Answer: I'd do it like this:
>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> rows = [['Bob', 'Jane', 'Joe'],
... ['Megan', 'Tom', 'Jane'],
... ['Jane', 'Joe', 'Rob']]
...
>>> for row in rows:
... for name in row:
... d[name] += 1
...
>>> filter(lambda x: x[1] >= 3, d.iteritems())
[('Jane', 3)]
It uses a dict with a default value of 0 to count how many times each name
appears in the file, and then filters the dict with the corresponding
condition (count >= 3).
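On Python 2.7+, `collections.Counter` packs the counting into one step; a
quick sketch with the same rows:

    >>> from collections import Counter
    >>> counts = Counter(name for row in rows for name in row)
    >>> [name for name, n in counts.items() if n >= 3]
    ['Jane']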
|
How to execute a shell command through Python
Question: I am new to Python programming. I want to execute a shell command "at" from a
Python program. Can any one of the Python gurus help me out? Thanks in
advance.
Answer: The [`subprocess`](http://docs.python.org/library/subprocess.html) module can
be used for this purpose:
import subprocess
retcode = subprocess.call(["at", "x", "y", "z"])
Replace `x`, `y` and `z` with the parameters to `at`.
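Since `at` reads the commands to schedule from its standard input, a more
complete invocation pipes them in; a sketch (the time spec and the command are
just examples):

    import subprocess

    # `at` takes the time specification as arguments and the commands on stdin
    p = subprocess.Popen(["at", "now + 1 minute"], stdin=subprocess.PIPE)
    p.communicate("echo hello > /tmp/at_test.txt\n")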
|
Python Queues appear to be dying
Question: I have a main process that spawns Processes, and those Processes
then add jobs to a Queue which the main process uses to spawn more Processes.
That works fine for about 20 seconds, and then the main process just stops
spawning jobs even though it has plenty of jobs in the queue.
here's the code for the job process's run loop:
try:
page = self.fetch_page(self.url, self.timeout)
if page != None:
#do_stuff
pass
except Exception, e: #Log any errors
import traceback
self.log(str(traceback.format_exc(limit=10)), level="error")
self.log(str(e), level="error")
finally:
import os, signal
print "releasing Semaphore"
self.__sem.release()
#print "Joining pQueue" #these statements raise errors...
#self.__pqueue.join_thread()
#print "Joining lQueue"
#self.__log.join_thread()
print "exiting"
os._exit(1)
#os.kill(self.pid, signal.SIGTERM)
And here's the code for main process that spawns the jobs:
while True:
print "Waiting for url"
url = self.page_queue.get()
print "waiting for semaphore"
self.__sem.acquire()
print "semaphore recived"
process = self.process_handler(url, self.log_queue, self.__sem, self.page_queue)
process.start()
Just a little context, self.log_queue in the spawning process is self.__log in
the Job process, self.page_queue is self.__pqueue in the Job process, and
self.__sem is the same as self.__sem in the Job process.
The spawning process usually hangs at:
url = self.page_queue.get()
I'm pretty sure that it has something to do with the Queues breaking when the
Job Processes terminate before they finish writing to the queues, however
that's just a hunch. And self.__pqueue.join_thread() raises an assertion
error.
Answer: Not sure if this will be useful, but if your self.page_queue is an
instance of Queue (http://docs.python.org/library/queue.html), then get() is
blocking by default. Have you verified that the queue isn't empty? It might
just be hung waiting for an item. I remember that plagued me when I was using
Queues.
Further, it won't join until you have called task_done() for every task that
you do a get() for.
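One way to confirm that the get() is where things hang is to give it a
timeout; a sketch that slots into the main loop (multiprocessing queues raise
the Empty exception from Python 2's Queue module on timeout):

    from Queue import Empty

    try:
        url = self.page_queue.get(timeout=30)  # block for at most 30 seconds
    except Empty:
        print "queue was empty for 30 seconds -- retry or shut down here"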
|
How do I send XML POST data from an iOS app to a Django app?
Question: I am attempting to implement an online leaderboard in a game app for iOS,
using Django to process POST requests from the iDevice and store the scores. I
have figured out how to get Django to serialize the objects to XML, and my
iPhone can read and display the scores. However, I can't for the life of me
get my iPhone to POST XML to my Django server.
Below is the function I am using to post the scores...
## iOS (Objective-C) Controller:
- (void) submitHighScore {
NSLog(@"Submitting high score...");
NSString *urlString = HIGH_SCORES_URL;
NSURL *url = [NSURL URLWithString: urlString];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL: url];
[request setHTTPMethod: @"POST"];
[request setValue: @"text/xml" forHTTPHeaderField: @"Content-Type"];
NSMutableData *highScoreData = [NSMutableData data];
[highScoreData appendData: [[NSString stringWithFormat: @"<?xml version=\"1.0\" encoding=\"UTF-8\" ?>"] dataUsingEncoding: NSUTF8StringEncoding]];
[highScoreData appendData: [[NSString stringWithFormat: @"<player_name>%@</player_name", @"test"] dataUsingEncoding: NSUTF8StringEncoding]];
[highScoreData appendData: [[NSString stringWithFormat: @"<score>%d</score>", 0] dataUsingEncoding: NSUTF8StringEncoding]];
[highScoreData appendData: [[NSString stringWithFormat: @"</xml>"] dataUsingEncoding: NSUTF8StringEncoding]];
[request setHTTPBody: highScoreData];
[[UIApplication sharedApplication] setNetworkActivityIndicatorVisible: YES];
NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest: request
delegate: self];
if (!connection) {
NSLog(@"Request to send high scores appears to be invalid.");
[[UIApplication sharedApplication] setNetworkActivityIndicatorVisible: NO];
}
}
The above method succeeds in sending the request, and the server correctly
reads it as `CONTENT_TYPE: text/xml`, but the Django view that processes the
request can't seem to make any sense of it, interpreting it almost as if it
were merely plain text. Below is my Django view...
## Django (Python) view:
from django.http import HttpResponse, HttpResponseBadRequest
from django.shortcuts import render_to_response
from django.template import RequestContext
from django.core import serializers
from django.core.exceptions import ValidationError
from django.views.decorators.csrf import csrf_exempt
from modologger.taptap.models import HighScore
@csrf_exempt
def leaderboard( request, xml = False, template_name = 'apps/taptap/leaderboard.html' ):
"""Returns leaderboard."""
if xml == True: # xml is set as True or False in the URLConf, based on the URL requested
if request.method == 'POST':
postdata = request.POST.copy()
print postdata
# here, postdata is evaluated as:
# <QueryDict: {u'<?xml version': [u'"1.0" encoding="UTF-8" ?><player_name>test</player_name<score>0</score></xml>']}>
for deserialized_object in serializers.deserialize('xml', postdata): # this fails, returning a 500 error
try:
deserialized_object.object.full_clean()
except ValidationError, e:
return HttpResponseBadRequest
deserialized_object.save()
else:
high_score_data = serializers.serialize( 'xml', HighScore.objects.all() )
return HttpResponse( high_score_data, mimetype = 'text/xml' )
else:
high_scores = HighScore.objects.all()
return render_to_response( template_name, locals(), context_instance = RequestContext( request ) )
To be honest, I'm not sure whether the problem lies in the Objective-C or in
the Django code. Is the Objective-C not sending the XML in the right format?
Or is the Django server not processing that XML correctly?
Any insight would be much appreciated. Thanks in advance.
# Update:
I got it to work, by editing the iOS Controller to set the HTTPBody of the
request like so:
NSMutableData *highScoreData = [NSMutableData data];
[highScoreData appendData: [[NSString stringWithFormat: @"player_name=%@;", @"test"] dataUsingEncoding: NSUTF8StringEncoding]];
[highScoreData appendData: [[NSString stringWithFormat: @"score=%d", 0] dataUsingEncoding: NSUTF8StringEncoding]];
[request setHTTPBody: highScoreData];
For some reason putting a semicolon in there got Django to recognize it,
assign the values to a new instance of a HighScore class, and save it. The
logging on the test server indicates `request.POST` is `<QueryDict: {u'score':
[u'9'], u'player_name': [u'test']}>`.
Still not quite sure what to make of all this.
As per [Radu's](http://stackoverflow.com/users/549897/radu) suggestion, I took
a look at highScoreData with NSLog, right after appending it to
request.HTTPBody, and the result is `<706c6179 65725f6e 616d653d 74657374
3b73636f 72653d39>`.
I'm a huge Obj-C noob, so again, any help is appreciated! Thanks again.
Answer: Since you control both sides, I'd drop the complexity of xml encoding the data
and use RestKit or some other framework that makes it easy to communicate with
Django.
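If you did want to keep sending XML, note that Django's request.POST only
parses form-encoded bodies; the raw bytes have to be read separately and fed
to an XML parser yourself. A sketch reusing the imports from the view above
(`leaderboard_xml` is a hypothetical view name; in Django 1.3 the raw body is
`request.raw_post_data`, renamed `request.body` in later versions):

    @csrf_exempt
    def leaderboard_xml(request):
        if request.method == 'POST':
            xml_body = request.raw_post_data  # the raw request body
            # hand xml_body to an XML parser here, e.g. xml.etree.ElementTree.fromstring
            return HttpResponse('ok')
        return HttpResponseBadRequest()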
|
Calling Python functions from inline C with scipy.weave
Question: Can I call a python function from inline C code (using weave)?
Motivation: I have a bit of code that I'd like to optimize, and I've
identified the bottleneck in one function. After my usual tricks, I usually
turn to
[scipy.weave.inline](http://docs.scipy.org/doc/scipy/reference/generated/scipy.weave.inline.html)
for optimization. Unfortunately, in this case, my function is calling another
python function in an inner loop. I've made sure that the inner function isn't
causing the speed issue, and I really don't want to have to write it in C as
well.
Minimal Example:
from weave import inline
def foo(x):
return x*2
def bar():
a = 0
for i in xrange(10):
a += foo(i)
return a
def bar_weave():
code = """
int a = 0;
for (int i=0;i<10;++i){
a += foo(i); //<<-- what I'd like to do, but doesn't work
}
return_val = a;"""
return inline(code,['foo'])
print bar()
print bar_weave()
Answer: It's a little involved, as Weave doesn't have a way to automatically
marshal the arguments and return value. You need to do a little more work:
def bar_weave():
code = """
int a = 0;
for (int i=0;i<10;++i){
py::tuple arg(1);
arg[0] = i;
a += (int) foo.call(arg);
}
return_val = a;
"""
return inline(code,['foo'])
Output:
90
|
Setting different reply-to message in Python email/smtplib
Question: I am using Python email and smtplib to send an email from Python. I am doing
this via the Gmail SMTP server using my Gmail credentials. This works fine,
however I would like to specify a `Reply-to` email address different from the
`from` address, so that replies go to a separate address (non-Gmail.)
I have tried creating a `reply to` parameter like this:
msg = MIMEMultipart()
msg['From'] = "email@gmail.com"
msg['To'] = to
msg['Subject'] = subject
msg['Reply-to'] = "email2@domain2.com"
But this doesn't work. Can't find any info on this in the Python docs.
Thanks.
Answer: Here's my take on it. I believe that the "Reply-To" header should be set
explicitly. The likely reason is that it's less commonly used than headers
such as "Subject", "To", and "From".
python
Python 2.6.6 (r266:84292, May 10 2011, 11:07:28)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> MAIL_SERVER = 'smtp.domain.com'
>>> TO_ADDRESS = 'you@gmail.com'
>>> FROM_ADDRESS = 'email@domain.com'
>>> REPLY_TO_ADDRESS = 'email2@domain2.com'
>>> import smtplib
>>> import email.mime.multipart
>>> msg = email.mime.multipart.MIMEMultipart()
>>> msg['to'] = TO_ADDRESS
>>> msg['from'] = FROM_ADDRESS
>>> msg['subject'] = 'testing reply-to header'
>>> msg.add_header('reply-to', REPLY_TO_ADDRESS)
>>> server = smtplib.SMTP(MAIL_SERVER)
>>> server.sendmail(msg['from'], [msg['to']], msg.as_string())
{}
|
Python, default keyword arguments after variable length positional arguments
Question: I thought I could use named parameters after variable-length positional
parameters in a function call, but I get a syntax error when importing a
python class I'm writing with the following "get" method, for example:
class Foo(object):
def __init__(self):
print "You have created a Foo."
def get(self,*args,raw=False,vars=None):
print len(args)
print raw
print vars
The error looks like:
def get(self,*args,raw=False,vars=None):
^
SyntaxError: invalid syntax
I'd like to be able to call the method several ways:
f = Foo()
f.get(arg1,arg2)
f.get(arg1,raw=True)
f.get(arg1,arg2,raw=True,vars=something)
...etc.
I've RTFM as much as I can, but it doesn't quite click why this won't work.
Thanks in advance for your help.
-j
Answer: It does work, but only in Python 3. See PEP
[3102](http://www.python.org/dev/peps/pep-3102/). From glancing over the
"what's new" documents, it seems that there is no 2.x backport, so you're out
of luck. You'll have to accept any keyword arguments (`**kwargs`) and manually
parse it (you can use `d.get(k, default)` to either get `d[k]` or `default` if
that's not there).
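A sketch of that Python 2 workaround, pulling the keyword arguments out of
`**kwargs` with defaults (`raw` and `vars` mirror the question's intended
signature):

    class Foo(object):
        def get(self, *args, **kwargs):
            raw = kwargs.pop('raw', False)
            vars = kwargs.pop('vars', None)
            if kwargs:
                raise TypeError('unexpected keyword arguments: %r' % kwargs.keys())
            print len(args)
            print raw
            print vars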
|
Python wrapper to access Hg, Git and possibly Bazaar repositories?
Question: I'm looking for a Python library that can do basic manipulation of
repositories, but is independent of the backend version control system.
By basic manipulation, I'm referring to: initialize a repo, add files, commit,
pull, push, get current revision number.
Users of the library could do something like this:
import dvcs_wrapper as dvcs
dvcs.set_backend('hg') # could choose 'git', 'bzr'
repo = dvcs.init('/home/me/my_repo')
repo.add('/home/me/my_repo/*.py')
repo.commit('Initial commit')
repo.push('http://bitbucket.org/....')
print('At revision %d' % repo.revision_num)
Any pointers to something like the above? My Google searches turn up
nothing...
**Update** : for what it's worth, I've started working on something like this:
[code is here](https://github.com/kgdunn/SciPy-
Central/blob/master/scipy_central/filestorage/dvcs_wrapper.py) with [unit
tests](https://github.com/kgdunn/SciPy-
Central/blob/master/scipy_central/filestorage/tests.py) for Hg repositories. I
might get around to Git and Bazaar; contributions welcome.
Answer: There's also the [VCS](http://pypi.python.org/pypi/vcs/0.2.0) module, which
advertises:
> vcs is abstraction layer over various version control systems. It is
> designed as feature-rich Python library with clean API.
|
Python: tell X to reserve screen space for application
Question: I am trying to solve the issue of reserving space on the screen for
an application with the X Window Manager (i.e. Linux platforms). I have seen
[this issue addressed and solved for
Gtk](http://stackoverflow.com/questions/3859045/preventing-window-overlap-in-
gtk) and I asked [the same question using
Qt](http://stackoverflow.com/questions/5829585/pyqt4-how-to-make-undercorated-
window-with-reserved-space). Since no one reacted to the Qt-specific question
(which I also posted in other forums), I thought I'd generalise my question:
### Is there a universal, pythonic way to tell X to reserve screen space for
an application?
Thanks,
Benjamin
Answer: After some research I found the solution using [Python-Xlib](http://python-
xlib.sourceforge.net/).
In the code that generates the window, it is possible to get the window ID,
which is the reference for the window on X. Depending on the GUI kit used, the
method to get this ID may vary. Qt4 provides
[`QWidget.winId()`](http://doc.qt.nokia.com/latest/qwidget.html#winId), Gtk+2
has its own means to reserve space for a window, and I have not tried with
Gtk+3, but I have been told there should be a `window_id` attribute.
Since the call to X to reserve space for the window can be made only once the
window is displayed, in most cases it will be necessary to make the query
after the main event loop has been entered.
The example below shows the approach with Qt4 using PyQt4. In order to get
the window once it is displayed, a thread is launched before the QApplication
enters its main loop, and that thread keeps polling X until it has managed to
"grab" the window. In the following example, space is reserved at the top of
the screen, with a height equivalent to that of the QWidget we reserve space
for.
def fix_window(self):
set = False
while set == False:
try:
window = myXwindow.Window(self.parent().winId())
if window != None:
height = self.parent().height()
window.reserve_space(0, 0, height, 0)
set = True
else:
self.sleep(1)
except:
raise
In the example above, `myXwindow` is a custom module using Python-Xlib. The
following is the content of the module, where Xlib queries X for a Display()
and subsequently creates a window object, an abstract model referencing our
window as displayed by X. After changing the attributes of this model, we call
Display().sync() to apply the changes. The method that reserves the space is
`change_property()`, to which a series of arguments are passed according to
the [Freedesktop.org Standards](http://standards.freedesktop.org/wm-spec/wm-
spec-1.3.html#id2507592).
class Window(object):
def __init__(self, windowID):
self._display = Display()
self._window = self._display.create_resource_object('window', windowID)
def reserve_space(self, left=0, right=0, top=0, bottom=0):
LEFT = left
RIGHT = right
TOP = top
BOTTOM = bottom
self._window.change_property(self._display.intern_atom('_NET_WM_STRUT'),
self._display.intern_atom('CARDINAL'),
32, [LEFT, RIGHT, TOP, BOTTOM])
self._display.sync()
NB: it is important to keep the same instance of Display() that created the
window object in order to change the window properties; hence it is stored in
a variable.
|
Why do I get an AttributeError when using .count() in python
Question: `array1=[ 0 5 6 6 6 0 6 0 6 8 0 19 24 7 0 4 9 14 12 0 22 17 1 0 19 6 17 4 7 0
17 24 0 6 9 22]`
i=0
while i<23
m= array1.count(i)
i=i+1
`AttributeError: 'numpy.ndarray' object has no attribute 'count'`
Why does the AttributeError appear when I use .count()? Do I need to import
something?
Answer: Well, according to the
[documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html),
`ndarray` simply has no `count` method.
The code you have posted contradicts the error message you give us. In your
code you create a simple Python list, but your error message indicates that
you are actually using a numpy `ndarray`.
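If the data really is a numpy array, the idiomatic replacements for
list.count() are a boolean sum or numpy.bincount; a quick sketch with the
question's values:

    import numpy as np

    array1 = np.array([0, 5, 6, 6, 6, 0, 6, 0, 6, 8, 0, 19, 24, 7, 0, 4, 9, 14,
                       12, 0, 22, 17, 1, 0, 19, 6, 17, 4, 7, 0, 17, 24, 0, 6, 9, 22])

    print (array1 == 6).sum()  # occurrences of a single value -> 7
    print np.bincount(array1)  # counts for every value from 0 to array1.max()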
|
How to know the system is Debian or CentOS in Python?
Question: I want to write some install scripts in Python; they should detect
the OS in order to choose either the **apt** command or the **yum** command.
It seems **sys.platform** can tell **'win32'** from the others, but how can I
tell whether it is running on Debian or CentOS in Python?
Answer: The [platform module](http://docs.python.org/library/platform.html) in the
standard library has what you want.
import platform
print platform.linux_distribution()
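A sketch of how the result might drive the package-manager choice (the exact
distribution strings vary between releases, so treat the names here as
examples to verify on your target systems):

    import platform

    distname, version, dist_id = platform.linux_distribution()
    if distname.lower().startswith(('debian', 'ubuntu')):
        pkg_cmd = 'apt-get'
    elif distname.lower().startswith(('centos', 'red hat', 'fedora')):
        pkg_cmd = 'yum'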
|
Python - Cancel timer thread
Question: I'm trying to create a method which runs on a timer in the background of my
main script:
def hello_world(self):
print 'Hello!'
threading.Timer(2,hello_world).start()
if __name__ == "__main__":
try:
hello_world()
except KeyboardInterrupt:
print '\nGoodbye!'
I am getting this message when I attempt to keyboard interrupt my script:
Exception KeyboardInterrupt in <module 'threading' from '/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py'> ignored
How do I close the thread so that I can exit my application cleanly?
Answer: To elaborate a bit on Aphex's answer, the main thread can't possibly catch the
KeyboardInterrupt signal, unless you have very fast fingers. The main thread
exits almost immediately! Try this:
import threading
def hello_world():
print 'Hello!'
threading.Timer(2,hello_world).start()
if __name__ == "__main__":
try:
hello_world()
except KeyboardInterrupt:
print '\nGoodbye!'
print "main thread exited"
More generally, I would not suggest using a self-calling timer like this, just
because it creates a lot of threads. Just create one thread and call
`time.sleep` inside it.
However, as long as you keep the main thread running, you seem to be able to
catch `KeyboardInterrupt` inside. The trick then is to make the thread a
`daemon` thread that exits when the main thread exits.
import threading
import time
def hello_world():
while(True):
print 'Hello!'
time.sleep(2)
if __name__ == "__main__":
hw_thread = threading.Thread(target = hello_world)
hw_thread.daemon = True
hw_thread.start()
try:
time.sleep(1000)
except KeyboardInterrupt:
print '\nGoodbye!'
This exits automatically after 1000 seconds -- you could make that number even
bigger if you like. You could also use a busy-loop to repeat the sleep call,
but I don't really see the point.
|
display an error message when file is empty - proper way?
Question: Hi, I'm slowly trying to learn the correct way to write Python code.
Suppose I have a text file which I want to check is empty. What I want to
happen is that the program immediately terminates and the console window
displays an error message if it is indeed empty. So far what I've done is
written below. Please teach me the proper method for handling this case:
import os
def main():
f1name = 'f1.txt'
f1Cont = open(f1name,'r')
if not f1Cont:
print '%s is an empty file' %f1name
os.system ('pause')
#other code
if __name__ == '__main__':
main()
Answer: There is no need to `open()` the file, just use
[`os.stat()`](http://docs.python.org/library/os.html#os.stat).
>>> #create an empty file
>>> f=open('testfile','w')
>>> f.close()
>>> #open the empty file in read mode to prove that it doesn't raise IOError
>>> f=open('testfile','r')
>>> f.close()
>>> #get the size of the file
>>> import os
>>> import stat
>>> os.stat('testfile')[stat.ST_SIZE]
0L
>>>
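The same check reads a bit more directly with os.path.getsize(), which wraps
the stat call; a sketch adapted to the question's code:

    import os

    f1name = 'f1.txt'
    if os.path.getsize(f1name) == 0:
        print '%s is an empty file' % f1name
        raise SystemExit(1)  # terminate immediately, as the question asks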
|
MATLAB-style find() function in Python
Question: In MATLAB it is easy to find the indices of values that meet a particular
condition:
>> a = [1,2,3,1,2,3,1,2,3];
>> find(a > 2) % find the indices where this condition is true
[3, 6, 9] % (MATLAB uses 1-based indexing)
>> a(find(a > 2)) % get the values at those locations
[3, 3, 3]
What would be the best way to do this in Python?
So far, I have come up with the following. To just get the values:
>>> a = [1,2,3,1,2,3,1,2,3]
>>> [val for val in a if val > 2]
[3, 3, 3]
But if I want the index of each of those values it's a bit more complicated:
>>> a = [1,2,3,1,2,3,1,2,3]
>>> inds = [i for (i, val) in enumerate(a) if val > 2]
>>> inds
[2, 5, 8]
>>> [val for (i, val) in enumerate(a) if i in inds]
[3, 3, 3]
Is there a better way to do this in Python, especially for arbitrary
conditions (not just 'val > 2')?
I found functions equivalent to MATLAB 'find' in NumPy but I currently do not
have access to those libraries.
Answer: in numpy you have `where`:
>> import numpy as np
>> x = np.random.randint(0, 20, 10)
>> x
array([14, 13, 1, 15, 8, 0, 17, 11, 19, 13])
>> np.where(x > 10)
(array([0, 1, 3, 6, 7, 8, 9], dtype=int64),)
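To mirror the second MATLAB step, `a(find(a > 2))`, the same boolean condition
can index the array directly (continuing the session above):

    >> x[x > 10]            # the matching values, via a boolean mask
    array([14, 13, 15, 17, 11, 19, 13])
    >> x[np.where(x > 10)]  # equivalent, using the indices from where()
    array([14, 13, 15, 17, 11, 19, 13])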
|
Where Should Shared Object Files Be Placed?
Question: I am venturing into the land of creating C/C++ bindings for Python using
pybindgen. I've followed the steps outlined under "Building it ( GCC
instructions )" to create bindings for the sample files:
<http://packages.python.org/PyBindGen/tutorial.html#a-simple-example>
Running `make` produces a .so file. If I understand how .so files work, I
should be able to `import` the classes in the shared object into Python.
However, I'm not sure where to place the file and how to let Python know where
it is. Additionally, do the original c/c++ source files need to accompany the
.so file?
So far I've tried placing the file in /usr/local/lib and adding that path to
DYLD_LIBRARY_PATH in my .bash_profile. When I try to import the module from
within the Python interpreter, an error is thrown stating that the module can
not be found.
So, my question is: What needs to be done with the generated .so file in order
for it to be used by a Python program?
Answer: Python looks for `.so` modules in the same directories where it
searches for python ones. So you have to install it as you would a normal
python module, either somewhere that is on python's `sys.path` by default
(`/usr/share/python/site-lib` or something like that -- it's distribution-
dependent) or by adding the directory to the `PYTHONPATH` environment variable.
It's python that's loading the module using dlopen, not the dynamic linker, so
`LD_LIBRARY_PATH` (note, there is no `DY`) won't help you.
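Concretely, either export PYTHONPATH before starting Python, or extend
sys.path at runtime; a sketch with a hypothetical install location and module
name:

    import sys
    # wherever you placed the generated .so, e.g. ~/pybindgen-modules
    sys.path.append('/home/you/pybindgen-modules')
    import mymodule  # loads mymodule.so from that directory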
|
Python - AttributeError: index
Question: I am stuck here...
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
[MBeanServerInvocationHandler]com.bea:Name=mainWebApp,Type=AppDeployment
ParcelLienData.war ParcelLienData P
Problem invoking WLST - Traceback (innermost last):
File "D:\RM-Share\RM-Scripts\wl_deploy_localhost-WC.py", line 30, in ?
AttributeError: index
My code looks like:
import sys
import getopt
import os
loadProperties(sys.argv[1] +".props")
connect(username,password,adminUrl)
cmd = "awk -F'Name=' '{print $2}' | awk -F',' '{print $1}'"
f = open(r'./applicationsList.txt','r')
#In Above line you can specify the Complete Path of the "applications.txt" as well
print f
for i in range(5):
line=f.readline()
line1=line[:-4]
line2=line1[:1]
#check if the service or application is already present on the server...
cd('AppDeployments')
myapps=cmo.getAppDeployments()
for dep_file in myapps:
print depfile
print line
print line1
print line2
num1=depfile.index(line2)
print num1
num2=depfile.index(",", num1)
print num2
appName=depfile[num1:num2]
print appName
if appName == "line1":
print Redeploy
elif appName != "line1":
print "Not deploying"
continue
else:
print Deploying
Please advise, where am I going wrong...
Thanks....
Answer: The error tells you that this line:
appName=dep_file[num1:num2]
is wrong. Are you sure the `dep_file` object can be indexed with a slice?
Maybe you should call `getName()` on `dep_file` first?
|
How to install xbmc module for Python
Question: I tried to find the setup script from xbmcscripts.com, but
apparently that site is down, and 'easy_install' doesn't give a desirable
result either. I'm running Ubuntu 11.04 and xbmc (Media Center) is installed
and working alright. What I basically need is the ability to import the xbmc
module in a Python console/script like this:
import xbmc
url = '<a link to a .flv file goes here'
xbmc.Player(xbmc.PLAYER_CORE_DVDPLAYER).play(url)
Thanks!
Answer: I haven't done any XBMC development myself but I thought I'd take a look at
this.
From what I can tell XBMC ships with its own Python interpreter
(`/usr/lib/xbmc/system/python/python24-i486-linux.so`) and the `xbmc`,
`xbmcgui` and `xbmcplugin` libraries are not available outside of XBMC itself.
As for the easiest way to develop for XBMC with regards to experimenting,
testing and debugging. Here's some tips:
* Running XBMC in a window and always on top might be handy (use `\` in fullscreen mode)
* Any print statements inside an add-on/script end up as notices in the log (`~/.xbmc/temp/xbmc.log`)
* You could create an add-on that exposes a Python shell from within the context of XBMC (that you communicate with over say Telnet)
There are references scattered around to something that used to be on
XMBCscripts.com referred to as an "XBMC emulator" which apparently implemented
the functionality of those libraries outside of XBMC but I can't find it
anywhere.
**Relevant documentation:**
* <http://wiki.xbmc.org/index.php?title=Python_Development>
* <http://wiki.xbmc.org/index.php?title=HOW-TO_debug_Python_Scripts>
* <http://wiki.xbmc.org/index.php?title=HOW-TO:_Automatically_start_addons_using_services>
* <http://wiki.xbmc.org/index.php?title=Log_File>
**External links:**
* [XBMC's Python Script and Plugin Addon Development Forum](http://forum.xbmc.org/forumdisplay.php?f=26)
|
How to correctly sort a string with a number inside?
Question: > **Possible Duplicate:**
> [Does Python have a built in function for string natural
> sort?](http://stackoverflow.com/questions/4836710/does-python-have-a-built-
> in-function-for-string-natural-sort)
I have a list of strings containing numbers and I cannot find a good way to
sort them.
For example I get something like that:
something1
something12
something17
something2
something25
something29
with `sort()` method.
I know that I probably need to extract numbers somehow and then sort the list
but I have no idea how to do it in the most simple way.
Answer: Perhaps you are looking for [human
sorting](http://nedbatchelder.com/blog/200712/human_sorting.html) (also known
as [natural sorting](http://www.codinghorror.com/blog/2007/12/sorting-for-
humans-natural-sort-order.html)):
import re
def atoi(text):
return int(text) if text.isdigit() else text
def natural_keys(text):
'''
alist.sort(key=natural_keys) sorts in human order
http://nedbatchelder.com/blog/200712/human_sorting.html
(See Toothy's implementation in the comments)
'''
return [ atoi(c) for c in re.split('(\d+)', text) ]
alist=[
"something1",
"something12",
"something17",
"something2",
"something25",
"something29"]
alist.sort(key=natural_keys)
print(alist)
yields
['something1', 'something2', 'something12', 'something17', 'something25', 'something29']
PS. I've changed my answer to use Toothy's implementation of natural sorting
(posted in the comments
[here](http://nedbatchelder.com/blog/200712/human_sorting.html)) since it is
significantly faster than my original answer.
|
Easiest Way to Transfer Data Over the Internet, Python
Question: I have two computers, both are connected to the internet. I'd like transfer
some basic data between them (strings, ints, floats). I'm new to networking so
I'm looking for the most simple way to do this. What modules would I be
looking at to do this?
Both systems would be running Windows 7.
Answer: As long as it's not asynchronous (doing sending and receiving at once), you can
use [the socket interface](http://docs.python.org/library/socket.html).
If you like abstractions (or need asynchronous support), there is always
[Twisted.](http://twistedmatrix.com/trac/)
Here is an example with the socket interface (which will become harder to use
as your program grows larger, so, I would suggest either Twisted or
[asyncore](http://docs.python.org/library/asyncore.html))
import socket
def mysend(sock, msg):
    # keep calling send() until the whole message has gone out;
    # send() may transmit fewer bytes than requested
    totalsent = 0
    while totalsent < len(msg):
        sent = sock.send(msg[totalsent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        totalsent = totalsent + sent
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("wherever you have your other computer", 5000))  # the port must be an int
i = 2
mysend(s, str(i))
The python documentation is excellent, I picked up the mysend() function from
there.
If you are doing computation related work, check out [XML-
RPC](http://docs.python.org/library/xmlrpclib.html), which python has all
nicely packaged up for you.
Remember, sockets are just like files, so they're not really much different to
write code for. As long as you can do basic file I/O and understand events,
socket programming isn't hard at all (as long as you don't get too
complicated, like multiplexing VoIP streams...).
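For completeness, the other machine needs a matching listener; a minimal
sketch (the port 5000 is arbitrary but must match what the sender connects
to):

    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('', 5000))   # listen on all interfaces, port 5000
    listener.listen(1)
    conn, addr = listener.accept()
    data = conn.recv(4096)      # read up to 4096 bytes from the sender
    print "received:", data
    conn.close()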
|
PyOpenGL: Rendering... Well... Anything really
Question: I've been working on a project using python with OpenGL for a while now. I
previously posted a similar problem, but I have since done some more research
and switched to non-deprecated functions. Following [this
tutorial](http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Table-of-
Contents.html) (Translating it to Python versions obviously) I end up with
this code:
import sys
import OpenGL
from OpenGL.GL import *
from OpenGL.GL.shaders import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from OpenGL.GLUT.freeglut import *
from OpenGL.arrays import vbo
import pygame
import Image
import numpy
class AClass:
def __init__(self):
self.Splash = True #There's actually more here, but it's impertinent
def TexFromPNG(self, filename):
img = Image.open(filename) # .jpg, .bmp, etc. also work
img_data = numpy.array(list(img.getdata()), 'B')
texture = glGenTextures(1)
glPixelStorei(GL_UNPACK_ALIGNMENT,1)
glBindTexture(GL_TEXTURE_2D, texture)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.size[0], img.size[1], 0, GL_RGB, GL_UNSIGNED_BYTE, img_data)
return texture
def MakeBuffer(self, target, data, size):
TempBuffer = glGenBuffers(1)
glBindBuffer(target, TempBuffer)
glBufferData(target, size, data, GL_STATIC_DRAW)
return TempBuffer
def ReadFile(self, filename):
tempfile = open(filename,'r')
source = tempfile.read()
tempfile.close()
return source
def run(self):
glutInitDisplayMode(GLUT_RGBA)
glutInitWindowSize(256,244)
self.window = glutCreateWindow("test")
glutReshapeFunc(self.reshape)
glutDisplayFunc(self.draw)
glutKeyboardFunc(self.keypress)
self.MainTex = glGenTextures(1)
self.SplashTex = self.TexFromPNG("Resources/Splash.png")
MainVertexData = numpy.array([-1,-1,1,-1,-1,1,1,1],numpy.float32)
FullWindowVertices = numpy.array([0,1,2,3],numpy.ushort)
self.MainVertexData = self.MakeBuffer(GL_ARRAY_BUFFER,MainVertexData,len(MainVertexData))
self.FullWindowVertices = self.MakeBuffer(GL_ELEMENT_ARRAY_BUFFER,FullWindowVertices,len(FullWindowVertices))
self.BaseProgram = compileProgram(compileShader(self.ReadFile("Shaders/Mainv.glsl"),
GL_VERTEX_SHADER),
compileShader(self.ReadFile("Shaders/Mainf.glsl"),
GL_FRAGMENT_SHADER))
glutMainLoop()
def reshape(self, width, height):
self.width = width
self.height = height
glutPostRedisplay()
def draw(self):
glViewport(0, 0, self.width, self.height)
glClearDepth(1)
glClearColor(0,0,0,0)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_TEXTURE_2D)
if self.Splash:
glUseProgram(self.BaseProgram)
pos = glGetAttribLocation(self.BaseProgram, "position")
glActiveTexture(GL_TEXTURE0)
glBindTexture(GL_TEXTURE_2D, self.SplashTex)
glUniform1i(glGetUniformLocation(self.BaseProgram,"texture"), 0)
glBindBuffer(GL_ARRAY_BUFFER,self.MainVertexData)
glVertexAttribPointer(pos,
2,
GL_FLOAT,
GL_FALSE,
0,
numpy.array([0],numpy.uint8))
glEnableVertexAttribArray(pos)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,self.FullWindowVertices)
glDrawElements(GL_TRIANGLE_STRIP,
4,
GL_UNSIGNED_SHORT,
numpy.array([0],numpy.uint8))
glDisableVertexAttribArray(pos)
else:
glBindTexture(GL_TEXTURE_2D, self.MainTex)
glutSwapBuffers()
glutInit(sys.argv)
test = AClass()
test.run()
Shaders/Mainv.glsl and Shaders/Mainf.glsl contain:
#version 110
attribute vec2 position;
varying vec2 texcoord;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
texcoord = position * vec2(0.5) + vec2(0.5);
}
and:
#version 110
uniform sampler2D texture;
varying vec2 texcoord;
void main()
{
gl_FragColor = texture2D(texture, texcoord);
}
respectively.
This code nets me a (clear-color) GLUT window, which seems to suggest that
it's not rendering my triangles for some reason, but I have no idea why that
could be. I can't really find any examples for PyOpenGL that don't use
deprecated functions, so I can't see if they did anything different from me
that I'm missing.
Answer: I think this is a useful reference for anyone starting out with PyOpenGL and
the programmable pipeline, so I've corrected the code according to the
comments by Bethor and Josiah above, and have also simplified it a bit
(embedded the shaders as strings). This code works for me. It assumes that you
have **test.png** in the same directory.
import sys
import OpenGL
from OpenGL.GL import *
from OpenGL.GL.shaders import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from OpenGL.GLUT.freeglut import *
from OpenGL.arrays import vbo
import Image
import numpy
# vertex shader
strVS = """
attribute vec2 position;
varying vec2 texcoord;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
texcoord = position * vec2(0.5) + vec2(0.5);
}
"""
# fragment shader
strFS = """
uniform sampler2D texture;
varying vec2 texcoord;
void main()
{
gl_FragColor = texture2D(texture, texcoord);
}
"""
class AClass:
def __init__(self):
self.Splash = True #There's actually more here, but it's impertinent
def TexFromPNG(self, filename):
img = Image.open(filename) # .jpg, .bmp, etc. also work
img_data = numpy.array(list(img.getdata()), 'B')
texture = glGenTextures(1)
glPixelStorei(GL_UNPACK_ALIGNMENT,1)
glBindTexture(GL_TEXTURE_2D, texture)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.size[0], img.size[1], 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data)
return texture
def MakeBuffer(self, target, data, size):
TempBuffer = glGenBuffers(1)
glBindBuffer(target, TempBuffer)
glBufferData(target, size, data, GL_STATIC_DRAW)
return TempBuffer
def run(self):
glutInitDisplayMode(GLUT_RGBA)
glutInitWindowSize(256,244)
self.window = glutCreateWindow("test")
glutReshapeFunc(self.reshape)
glutDisplayFunc(self.draw)
self.MainTex = glGenTextures(1)
self.SplashTex = self.TexFromPNG("test.png")
MainVertexData = numpy.array([-1,-1,1,-1,-1,1,1,1],numpy.float32)
FullWindowVertices = numpy.array([0,1,2,3],numpy.ushort)
self.MainVertexData = self.MakeBuffer(GL_ARRAY_BUFFER,MainVertexData,4*len(MainVertexData))
self.FullWindowVertices = self.MakeBuffer(GL_ELEMENT_ARRAY_BUFFER,FullWindowVertices,2*len(FullWindowVertices))
self.BaseProgram = compileProgram(compileShader(strVS,
GL_VERTEX_SHADER),
compileShader(strFS,
GL_FRAGMENT_SHADER))
glutMainLoop()
def reshape(self, width, height):
self.width = width
self.height = height
glutPostRedisplay()
def draw(self):
glViewport(0, 0, self.width, self.height)
glClearDepth(1)
glClearColor(0,0,0,0)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_TEXTURE_2D)
if self.Splash:
glUseProgram(self.BaseProgram)
pos = glGetAttribLocation(self.BaseProgram, "position")
glActiveTexture(GL_TEXTURE0)
glBindTexture(GL_TEXTURE_2D, self.SplashTex)
glUniform1i(glGetUniformLocation(self.BaseProgram,"texture"), 0)
glBindBuffer(GL_ARRAY_BUFFER,self.MainVertexData)
glVertexAttribPointer(pos,
2,
GL_FLOAT,
GL_FALSE,
0,
None)
glEnableVertexAttribArray(pos)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,self.FullWindowVertices)
glDrawElements(GL_TRIANGLE_STRIP,
4,
GL_UNSIGNED_SHORT,
None)
glDisableVertexAttribArray(pos)
else:
glBindTexture(GL_TEXTURE_2D, self.MainTex)
glutSwapBuffers()
glutInit(sys.argv)
test = AClass()
test.run()
Here is the output:
![enter image description here](http://i.stack.imgur.com/wLv0g.png)
|
Pickling a list consisting of 'n' 2-tuple values
Question: I want to pickle a list, as it takes a long time for me to create
it. The list consists of "n" 2-tuple values where the first tuple value is a
large dictionary (1k to 10k keys) and the second value is a string. N can be
as large as 100s to 1000s. I would like to save such a data structure so that
I may load it at my convenience. If the list is human-readable that would be
great, but not at the cost of performance.
I have knowledge about [Shelve](http://docs.python.org/library/shelve.html),
[PyYaml](http://pyyaml.org/),
[cPickle](http://docs.python.org/release/2.5/lib/module-cPickle.html) and
[JSON](https://github.com/jsonpickle/jsonpickle). I am just unsure as to what
to use given my data. Initial reading of various other threads on this website
and the internet suggests that cPickle may be the best choice. Any thoughts
from the gurus here?
Answer: I would use `cPickle`, this works fine,
# Dummy data
from random import randint as r
a,b = 97,123
d = [(dict([(chr(r(a,b)),j) for j in range(1000)]),
''.join([chr(r(a,b)) for i in range(5)]))
for j in range(100)]
# Pickle it
import cPickle as pickle
f = open('store.dat','w')
pickle.dump(d,f)
f.close()
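Reading the structure back later is the mirror image:

    f = open('store.dat')
    d2 = pickle.load(f)
    f.close()
    assert d2 == d  # same 100 (dict, string) pairs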
I would also consider using something like
[dumbdbm](http://docs.python.org/library/dumbdbm.html).
**Added later**
Following on from the example above, you can do something like this,
import dumbdbm as dbm
g = dbm.open('store.db')
g.update([(str(i),pickle.dumps(j)) for i,j in enumerate(d)])
g.close()
|
How to install lxml for python without administative rights on linux?
Question: I just need some packages which aren't present on the host machine
(and me and Linux... we... we didn't spend much time together...).
# from the source
python setup.py install --user
or
# with easy_install
easy_install prefix=~/.local package
But it doesn't work with lxml. I get a lot of errors during the build:
x:~/lxml-2.3$ python setup.py build
Building lxml version 2.3.
Building without Cython.
ERROR: /bin/sh: xslt-config: command not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running build
running build_py
running build_ext
building 'lxml.etree' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.6/src/lxml/lxml.etree.o -w
In file included from src/lxml/lxml.etree.c:227:
src/lxml/etree_defs.h:9:31: error: libxml/xmlversion.h: No such file or directory
src/lxml/etree_defs.h:11:4: error: #error the development package of libxml2 (header files etc.) is not installed correctly
src/lxml/etree_defs.h:13:32: error: libxslt/xsltconfig.h: No such file or directory
src/lxml/etree_defs.h:15:4: error: #error the development package of libxslt (header files etc.) is not installed correctly
src/lxml/lxml.etree.c:230:29: error: libxml/encoding.h: No such file or directory
src/lxml/lxml.etree.c:231:28: error: libxml/chvalid.h: No such file or directory
src/lxml/lxml.etree.c:232:25: error: libxml/hash.h: No such file or directory
...
src/lxml/lxml.etree.c:55179: error: ‘xmlNode’ undeclared (first use in this function)
src/lxml/lxml.etree.c:55179: error: ‘__pyx_v_c_node’ undeclared (first use in this function)
src/lxml/lxml.etree.c:55184: error: ‘_node_to_node_function’ undeclared (first use in this function)
src/lxml/lxml.etree.c:55184: error: expected ‘;’ before ‘__pyx_v_next_element’
src/lxml/lxml.etree.c:55251: error: ‘struct __pyx_obj_4lxml_5etree__ReadOnlyProxy’ has no member named ‘_c_node’
...
<http://lxml.de/installation.html> says that it has some dependencies. But how
to install them without administrative rights?
Answer: If you have no admin rights, and cannot convince the administrator to install
the relevant packages for you, you have two options:
**Option 1** \- Download sources for [`libxml2` and
`libxslt`](ftp://xmlsoft.org/libxml2/) and compile and install them under your
`$HOME` somewhere, then build python-lxml against those copies.
This is a pretty involved example, since if you're missing further
dependencies you could be downloading / compiling for a long time.
**Option 2** \- Download the binary packages for the same distribution of
Linux that is used on your server, and extract the contents under your home
directory.
For example, if you're running Ubuntu Lucid, you'd first find out the version
your OS is using and then download the packages you're missing:
% uname -m
x86_64
% aptitude show libxml2 | grep Version
Version: 2.7.6.dfsg-1ubuntu1.1
Next download the packages you need direct from the Ubuntu server:
% mkdir root ; cd root
% wget http://us.archive.ubuntu.com/ubuntu/pool/main/libx/libxml2/libxml2_2.7.6.dfsg-1ubuntu1.1_amd64.deb
% wget http://us.archive.ubuntu.com/ubuntu/pool/main/libx/libxslt/libxslt1.1_1.1.26-6build1_amd64.deb
% wget http://us.archive.ubuntu.com/ubuntu/pool/main/l/lxml/python-lxml_2.2.4-1_amd64.deb
Extract the contents and merge the lxml native and pure-python code and move
the shared libraries to the top, then remove the extracted contents:
% dpkg-deb -x libxml2_2.7.6.dfsg-1ubuntu1.1_amd64.deb .
% dpkg-deb -x libxslt1.1_1.1.26-6build1_amd64.deb .
% dpkg-deb -x python-lxml_2.2.4-1_amd64.deb .
% mv ./usr/lib/python2.6/dist-packages/lxml .
% mv ./usr/share/pyshared/lxml/* lxml
% mv ./usr/lib .
% rm *.deb
% rm -rf usr
Finally, to use those files you need to set your LD_LIBRARY_PATH and
PYTHONPATH environment variables to point into `$HOME/root`. Place these in
your `~/.bashrc` (or equivalent) so they are permanent:
% export LD_LIBRARY_PATH=$HOME/root/lib
% export PYTHONPATH=$HOME/root
You can verify that the shared objects are being found using `ldd` (if it's
installed):
% ldd $HOME/root/lxml/etree.so | grep $HOME
libxslt.so.1 => /home/user/root/lib/libxslt.so.1 (0x00007ff9b1f0f000)
libexslt.so.0 => /home/user/root/lib/libexslt.so.0 (0x00007ff9b1cfa000)
libxml2.so.2 => /home/user/root/lib/libxml2.so.2 (0x00007ff9b19a9000)
Then you're ready to test Python:
% python
>>> from lxml import etree
|
ImportError: cannot import name signals
Question: I'm using Django 1.3.0 with Python 2.7.1. In every test where I
write the following imports, I get the ImportError above:
from django.utils import unittest
from django.test.client import Client
The full stack trace:
File "C:\Program Files (x86)\j2ee\plugins\org.python.pydev.debug_1.6.3.2010100513\pysrc\runfiles.py", line 342, in __get_module_from_str
mod = __import__(modname)
File "C:/Users/benjamin/workspace/BookIt/src/BookIt/tests\basic_flow.py", line 11, in
from django.test.client import Client
File "C:\Python27\lib\site-packages\django\test\__init__.py", line 5, in
from django.test.client import Client, RequestFactory
File "C:\Python27\lib\site-packages\django\test\client.py", line 21, in
from django.test import signals
ImportError: cannot import name signals
ERROR: Module: basic_flow could not be imported.
Any ideas why this happening ?
Answer: @Hugo was right in that it was a settings.py problem, but I didn't
have that problem when running through the Django environment. It was when I
wanted to run unit tests one by one (by using Pydev's run-as-unittest) that
it failed to run. What I needed to do was add the Django settings module
information, so for now what I'm doing is adding the following lines to my
unit tests:
from django.core import management;
import BookIt.settings as settings;
management.setup_environ(settings)
This loads my Django project settings and allows me to run them as regular
unittests. If anyone has a better suggestion on how to configure this more
cleanly in Pydev, please let me know.
|
Unable to import FigureCanvasWxAgg from Matplotlib in Python
Question: I'm using Python x64 with everything installed, but I'm getting an unresolved
import on FigureCanvasWxAgg. I can get up to matplotlib.backends.backend_wxagg
but there's no FigureCanvasWxAgg to import from there.
I've also tried `from matplotlib.backends.backend_wxagg import *` but it
doesn't work either.
EDIT: Problem solved. I took a peek at my backend_wxagg.py file and found it
to be completely different than the one listed
[here](http://www.java2s.com/Open-Source/Python/Chart-
Report/Matplotlib/matplotlib-0.99.1.1/lib/matplotlib/backends/backend_wxagg.py.htm).
So I copied that from version 0.99.1.1 into my 1.0.1 file. (I should probably
just uninstall 1.0.1 matplotlib and use the older version.) Anyway, it got the
examples working, so I'm happy.
Answer: What OS are you on, and how did you install matplotlib?
Your solution is quite likely to break things... You need to build and install
the wx backend as you normally would. I'm not sure about the wx backend, but
several of the other backends are C extensions, not just a simple python file.
The wx backend isn't built by default, so it's usually included as a separate
package. (e.g. `python-matplotlib-wx` in the case of Suse) You'll need to
install the wx backend through your package manager, as you normally would.
If you're on an OS without a package manager (e.g. Windows, OSX), the
installer may or may not have the wx backend built, depending on who built it
and how it was configured. I know absolutely nothing about non-Linux or
non-BSD OSes, so you're on your own there. Try looking wherever you downloaded
your matplotlib binary from and see if they have a separate installer for the
wx backend.
If you're building from source, you need to enable the wx backend and rebuild.
To do this, edit the `site.cfg` file in your build directory. You may need to
rename the default one (`site.cfg.default`, or something along those lines) to
`site.cfg`, if you don't already have a `site.cfg` file in your build
directory.
Hope that helps!
|
function is not defined error in Python
Question: I am trying to define a basic function in python but I always get the
following error when I run a simple test program;
>>> pyth_test(1, 2)
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
pyth_test(1, 2)
NameError: name 'pyth_test' is not defined
Here is the code I am using for this function;
def pyth_test (x1, x2):
print x1 + x2
UPDATE: I have the script called pyth.py open, and then I am typing in
pyth_test(1,2) in the interpreter when it gives the error.
Thanks for the help. (I apologize for the basic question, I've never
programmed before and am trying to learn Python as a hobby)
* * *
import sys
sys.path.append ('/Users/clanc/Documents/Development/')
import test
printline()
## (the function printline in the test.py file
##def printline():
## print "I am working"
Answer: Yes, but in what file is `pyth_test`'s definition declared in? Is it also
located before it's called?
Edit:
To put it into perspective, create a file called `test.py` with the following
contents:
def pyth_test (x1, x2):
print x1 + x2
pyth_test(1,2)
Now run the following command:
python test.py
You should see the output you desire. Now if you are in an interactive
session, it should go like this:
>>> def pyth_test (x1, x2):
... print x1 + x2
...
>>> pyth_test(1,2)
3
>>>
I hope this explains how the declaration works.
* * *
To give you an idea of how the layout works, we'll create a few files. Create
a new empty folder to keep things clean with the following:
**myfunction.py**
def pyth_test (x1, x2):
print x1 + x2
**program.py**
#!/usr/bin/python
# Our function is pulled in here
from myfunction import pyth_test
pyth_test(1,2)
Now if you run:
python program.py
It will print out 3. Now to explain what went wrong, let's modify our program
this way:
# Python: Huh? where's pyth_test?
# You say it's down there, but I haven't gotten there yet!
pyth_test(1,2)
# Our function is pulled in here
from myfunction import pyth_test
Now let's see what happens:
$ python program.py
Traceback (most recent call last):
File "program.py", line 3, in <module>
pyth_test(1,2)
NameError: name 'pyth_test' is not defined
As noted, Python cannot find the name for the reasons outlined above. For
that reason, you should keep your declarations at the top.
Now then, if we run the interactive python session:
>>> from myfunction import pyth_test
>>> pyth_test(1,2)
3
The same process applies. Now, package importing isn't all that simple, so I
recommend you look into how [modules work with
Python](http://docs.python.org/tutorial/modules.html). I hope this helps and
good luck with your learnings!
|
How to change the user and group permissions for a directory, by name?
Question: [os.chown](http://docs.python.org/library/os.html#os.chown) is exactly what I
want, but I want to specify the user and group by name, not ID (I don't know
what they are). How can I do that?
Answer:
import pwd
import grp
import os
uid = pwd.getpwnam("nobody").pw_uid
gid = grp.getgrnam("nogroup").gr_gid
path = '/tmp/f.txt'
os.chown(path, uid, gid)
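If you do this often, you might wrap the lookup in a small helper (a
hypothetical convenience function, not part of the standard library):

def chown_by_name(path, user, group):
    # Resolve the names to numeric ids, then delegate to os.chown
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    os.chown(path, uid, gid)

chown_by_name('/tmp/f.txt', 'nobody', 'nogroup')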
|
Find Version of Binary File
Question: Does anyone know how I can find the version of a binary file that has been
passed to my function?
I got the following code from [this
page](http://www.linuxquestions.org/questions/programming-9/reading-binary-
file-in-python-804296/):
def version(fpath):
f = open(fpath, 'rb')
s = f.read(1024)
print s
f.close()
However, this does not give me any useful output similar to what the mentioned
website shows.
**Edit**: @BoazYaniv tells me that the file format plays an important part in
this problem. This is a Windows EXE file.
Answer: You have a ready-made module for parsing EXE files:
<http://code.google.com/p/pefile/>
You could read it using the following code:
import pefile
pe = pefile.PE(r'C:\Windows\notepad.exe')
FileVersion = pe.FileInfo[0].StringTable[0].entries['FileVersion']
ProductVersion = pe.FileInfo[0].StringTable[0].entries['ProductVersion']
As you can see, Windows EXE (and DLL) files store two different kinds of
versions, FileVersion and ProductVersion. Many times they are the same, but
sometimes they may differ - it all depends on whoever built the EXE.
## Edit:
Just to make things more complex, these two strings in the PE string table
aren't the only place where Windows compilers may save the version. There are
two additional FileVersion and ProductVersion values stored in the EXE, only
they are stored as pairs of 32-bit integers, each of them is broken, in turn,
into two 16-bit integers (WORDs in Windows API speak). Altogether, each
version value (FileVersion and ProductVersion) has 4 16-bit WORDs which
represent the dot-separated parts of the version. You can get them too, using
pefile:
pe = pefile.PE(r'C:\Windows\notepad.exe')
FileVersionLS = pe.VS_FIXEDFILEINFO.FileVersionLS
FileVersionMS = pe.VS_FIXEDFILEINFO.FileVersionMS
ProductVersionLS = pe.VS_FIXEDFILEINFO.ProductVersionLS
ProductVersionMS = pe.VS_FIXEDFILEINFO.ProductVersionMS
FileVersion = (FileVersionMS >> 16, FileVersionMS & 0xFFFF, FileVersionLS >> 16, FileVersionLS & 0xFFFF)
ProductVersion = (ProductVersionMS >> 16, ProductVersionMS & 0xFFFF, ProductVersionLS >> 16, ProductVersionLS & 0xFFFF)
print 'File version: %s.%s.%s.%s' % FileVersion
print 'Product version: %s.%s.%s.%s' % ProductVersion
But wait! This is not all: you have at least one more place where you could
look to find the version: Inside another structure, called OPTIONAL_HEADER,
you can find another two values called MajorImageVersion and
MinorImageVersion. They represent the first two parts of the whole version, so
a file which has a ProductVersion or FileVersion of, say, 6.1.7600.150, would
usually have a MajorImageVersion of 6 and a MinorImageVersion of 1. You could
get them with `pe.OPTIONAL_HEADER.MajorImageVersion` and
`pe.OPTIONAL_HEADER.MinorImageVersion`.
All these values (5 different ones, if I count them right) are usually
equivalent (if you ignore the extra freeform string value the ones in a string
table sometimes have), but it's quite common to see FileVersions and
ProductVersions that are not the same, and you should also be ready for other
surprises as well.
|
Python - Calling a function from a class
Question: I'm having some trouble calling a function which is within a class in python.
Here is my folder hierarchy.
~/Code/program/main.py
~/Code/program/dc_functions/dcfunc.py
~/Code/program/dc_functions/__init__.py
Basically, I want to use a function from dcfunc.py inside of main.py. How
would I do this?
Relevant contents of dcfunc.py:
import subprocess, string, os, sys
class dcfunc:
#Create raw Audio track(Part of Dreamcast Disc format) + Burn track to disk.
def __init__(self):
self.self = "self"
def burnaudiotrack(device):
**CODE***
Thanks for any suggestions!
Answer: First, make sure the package init file is actually named `__init__.py` (double underscores on both sides),
then use
from dc_functions.dcfunc import function_name
And you'll have access to the function.
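Since `burnaudiotrack` is actually defined inside the `dcfunc` class, main.py
would instantiate the class first. A sketch (note the method also needs a
`self` parameter to work as an instance method):

from dc_functions.dcfunc import dcfunc

dc = dcfunc()
# assumes the definition reads: def burnaudiotrack(self, device):
dc.burnaudiotrack("/dev/dvd")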
|
how to deal with timezone differences in ical standard records using python?
Question: I'm trying to process a ical recurrence record from the python gdata api.
>
> DTEND: 20110421T190000
> params for DTEND:
> TZID [u'Europe/London']
> DTSTART: 20110421T180000
> params for DTSTART:
> TZID [u'Europe/London']
> RRULE: FREQ=WEEKLY;BYDAY=TH
> VTIMEZONE
> TZID: Europe/London
> DAYLIGHT
> DTSTART: 19700329T010000
> TZOFFSETFROM: +0000
> TZNAME: BST
> TZOFFSETTO: +0100
> RRULE: FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
>
STANDARD
DTSTART: 19701025T020000
TZOFFSETFROM: +0100
TZNAME: GMT
TZOFFSETTO: +0000
RRULE: FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
X-LIC-LOCATION: Europe/London
I can see from the
[event](https://www.google.com/calendar/render?eid=YnJqOW1zY3M5ajFyMnYxcXJxbmY2aDJ1dTBfMjAxMTA0MjFUMTcwMDAwWiBsaDA5djduYjJubzZjdTNsMzdxdGluMWkyMEBn&gsessionid=OK&sf=true&output=xml)
that the time frame should 'really' be 17:00 - 18:00 (British Summer Time) but
DTSTART/DTEND seems to list GMT and then need "standard" to rectify?
I'm trying to set up an automatic process in python to 'read' these recurrence
and replicate them as actual date and times.
What's the best way to do this to ensure accuracy? I know that
[dateutil](http://labix.org/python-dateutil) can parse the timezone
information, but which one do i pick, and most importantly how do I _apply_
this change so that i get a python datetime object with the "real" time?
Thanks :)
Answer: I've used Pytz before, with great success: <http://pytz.sourceforge.net/>
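A minimal sketch of applying the record's zone to the naive DTSTART value with
pytz (the parse format is an assumption about how you extract the timestamp):

from datetime import datetime
import pytz

london = pytz.timezone('Europe/London')
naive = datetime.strptime('20110421T180000', '%Y%m%dT%H%M%S')

# localize() applies the zone's DST rules for that date; 2011-04-21 is BST
local_dt = london.localize(naive)
utc_dt = local_dt.astimezone(pytz.utc)
print local_dt   # 2011-04-21 18:00:00+01:00
print utc_dt     # 2011-04-21 17:00:00+00:00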
|
How to restrict access to myapplication.appspot.com - google app engine?
Question: For a Google App Engine application, I would like to restrict access to my
website <http://myapplication.appspot.com> to myself, but at the same time let my
Android phone app users access it. My Android phone app uses the GetValue and
StoreValue commands from a custom TinyWebdB component from Google App Inventor
to talk to my appspot database.
My appspot.com page is written with Python (file main.py).
What instructions should I add to the app.yaml and main.py files?
Answer: In your mobile app you should put something unique into the headers sent, or
the user-agent string. Then in your Python code you could check for the
presence of that value to decide if the visitor is allowed or forbidden
(return status code 403).
To allow yourself to view the app you should check to see if the current user
is an admin user.
Sortacode example:
from google.appengine.api import users
allowed = False
if unique_value_in_request():
allowed = True
user = users.get_current_user()
if user and users.is_current_user_admin():
allowed = True
if not allowed:
# return 403 status
# do normal stuff
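A hedged sketch of what `unique_value_in_request()` might check inside a
webapp handler (the header name and secret value are made up):

from google.appengine.ext import webapp
from google.appengine.api import users

class MainHandler(webapp.RequestHandler):
    def get(self):
        # Made-up header; the Android app would send this value
        has_app_key = self.request.headers.get('X-MyApp-Key') == 'my-secret'
        user = users.get_current_user()
        is_admin = user is not None and users.is_current_user_admin()
        if not (has_app_key or is_admin):
            self.error(403)
            return
        # do normal stuff
        self.response.out.write('ok')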
|
wx.Gauge fails to update beyond 25% in Windows, works in Linux
Question: I seem to have nothing but trouble with wxPython and cross-platform
compatibility :(
I have the function below. It's called when the user clicks a button, it does
some work which may take a while, during which a progress gauge is shown in
the status bar.
def Go(self, event):
progress = 0
self.statbar.setprogress(progress)
self.Update()
# ...
for i in range(1, numwords + 1):
progress = int(((float(i) / float(numwords)) * 100) - 1)
self.wrdlst.Append(words.next())
self.statbar.setprogress(progress)
self.Update()
self.wrdlst.Refresh()
# ...
progress = 100
self.PushStatusText(app.l10n['msc_genwords'] % numwords)
self.statbar.setprogress(progress)
The calls to `self.Update()` are apparently needed under Linux, otherwise the
gauge doesn't update until the function exits which makes it kinda pointless.
These calls seem to have no effect under Windows (Win 7 at least).
The whole thing works perfectly under Linux (with the calls to Update()), but
on Windows 7 the gauge seems to stop around the 20-25% mark, a while before
the function exits. So it moves as it should until it reaches ~25%, then the
gauge stops moving for no apparent reason but the function continues on just
fine and exits with the proper output.
In my attempt to find out the problem, I tried inserting a `print progress`
line just before updating the gauge inside the loop, thinking maybe the value
of `progress` wasn't what I thought it should be. To my big surprise, the
gauge now worked as it should, but the moment I remove that `print` it stops
working. I can also replace the print with a call to `time.sleep(0.001)`, but
even with such a short sleep the process still grinds to almost a halt, and if
I lower it even further the problem returns, so it's hardly very helpful.
I can't figure out what is going on or how to fix it, but I guess somehow
things move too fast under Windows so that `progress` doesn't get updated
properly after a while and just stays at a fixed value (~25). I have no idea
why that would be, however, it makes no sense to me. And of course, neither
`print` nor `sleep` are good solutions. Even if I print out "nothing", Windows
still opens another window for the non-existent output, which is annoying.
Let me know if you need further info or code.
**Edit:** Ok, here's a working application which (for me at least) has the
problem. It's still pretty long, but I tried to cut out everything not related
to the problem at hand.
It works on Linux, just like the complete app. Under Windows it either fails
or works depending on the value of `numwords` in the Go function. If I
increase its value to 1000000 (1 million) the problem goes away. I suspect
this may depend on the system, so if it works for you try to tweak the value
of `numwords`. It may also be because I changed it so it `Append()`s a static
text rather than calling a generator as it does in the original code.
Still, with the current value of `numwords` (100000) it does fail on Windows
for me.
import wx
class Wordlist(wx.TextCtrl):
def __init__(self, parent):
super(Wordlist, self).__init__(parent,
style=wx.TE_MULTILINE|wx.TE_READONLY)
self.words = []
self.SetValue("")
def Get(self):
return '\r\n'.join(self.words)
def Refresh(self):
self.SetValue(self.Get())
def Append(self, value):
if isinstance(value, list):
value = '\r\n'.join(value)
self.words.append(unicode(value))
class ProgressStatusBar(wx.StatusBar):
def __init__(self, *args, **kwargs):
super(ProgressStatusBar, self).__init__(*args, **kwargs)
self._changed = False
self.prog = wx.Gauge(self, style=wx.GA_HORIZONTAL)
self.prog.Hide()
self.SetFieldsCount(2)
self.SetStatusWidths([-1, 150])
self.Bind(wx.EVT_IDLE, lambda evt: self.__reposition())
self.Bind(wx.EVT_SIZE, self.onsize)
def __reposition(self):
if self._changed:
lfield = self.GetFieldsCount() - 1
rect = self.GetFieldRect(lfield)
prog_pos = (rect.x + 2, rect.y + 2)
self.prog.SetPosition(prog_pos)
prog_size = (rect.width - 8, rect.height - 4)
self.prog.SetSize(prog_size)
self._changed = False
def onsize(self, evt):
self._changed = True
self.__reposition()
evt.Skip()
def setprogress(self, val):
if not self.prog.IsShown():
self.showprogress(True)
if val == self.prog.GetRange():
self.prog.SetValue(0)
self.showprogress(False)
else:
self.prog.SetValue(val)
def showprogress(self, show=True):
self.__reposition()
self.prog.Show(show)
class MainFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super(MainFrame, self).__init__(*args, **kwargs)
self.SetupControls()
self.statbar = ProgressStatusBar(self)
self.SetStatusBar(self.statbar)
self.panel.Fit()
self.SetInitialSize()
self.SetupBindings()
def SetupControls(self):
self.panel = wx.Panel(self)
self.gobtn = wx.Button(self.panel, label="Go")
self.wrdlst = Wordlist(self.panel)
wrap = wx.BoxSizer()
wrap.Add(self.gobtn, 0, wx.EXPAND|wx.ALL, 10)
wrap.Add(self.wrdlst, 0, wx.EXPAND|wx.ALL, 10)
self.panel.SetSizer(wrap)
def SetupBindings(self):
self.Bind(wx.EVT_BUTTON, self.Go, self.gobtn)
def Go(self, event):
progress = 0
self.statbar.setprogress(progress)
self.Update()
numwords = 100000
for i in range(1, numwords + 1):
progress = int(((float(i) / float(numwords)) * 100) - 1)
self.wrdlst.Append("test " + str(i))
self.statbar.setprogress(progress)
self.Update()
self.wrdlst.Refresh()
progress = 100
self.statbar.setprogress(progress)
class App(wx.App):
def __init__(self, *args, **kwargs):
super(App, self).__init__(*args, **kwargs)
framestyle = wx.MINIMIZE_BOX|wx.CLOSE_BOX|wx.CAPTION|wx.SYSTEM_MENU|\
wx.CLIP_CHILDREN
self.frame = MainFrame(None, title="test", style=framestyle)
self.SetTopWindow(self.frame)
self.frame.Center()
self.frame.Show()
if __name__ == "__main__":
app = App()
app.MainLoop()
**Edit 2** : Below is an even simpler version of the code. I don't think I can
make it much smaller. It still has the problem for me. I can run it from
within IDLE, or directly by double clicking the .py file in Windows, either
way works the same.
I tried with various values of `numwords`. It seems the problem doesn't
actually go away as I first said, instead when I increase `numwords` the gauge
just reaches further and further before the `print` is called. At the current
value of 1.000.000 this shorter version reaches around 50%. In the longer
version above, a value of 1.000.000 reaches around 90%, a value of 100.000
reaches around 25%, and a value of 10.000 only reaches around 10%.
In the version below, once the `print` is called, the progress continues on
and reaches 99% even though the loop must have ended by then. In the original
version the call to `self.wrdlst.Refresh()`, which takes a few seconds when
numwords is high, must have caused the gauge to pause. So I think that what
happens is this: In the loop the gauge only reaches a certain point, when the
loop exits the function continues on while the gauge stays still, and when the
function exits the gauge continues on until it reaches 99%. Because a print
statement doesn't take a lot of time, the version below makes it seem like the
gauge moves smoothly from 0% to 99%, but the `print` suggests otherwise.
import wx
class MainFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super(MainFrame, self).__init__(*args, **kwargs)
self.panel = wx.Panel(self)
self.gobtn = wx.Button(self.panel, label="Go")
self.prog = wx.Gauge(self, style=wx.GA_HORIZONTAL)
wrap = wx.BoxSizer()
wrap.Add(self.gobtn, 0, wx.EXPAND|wx.ALL, 10)
wrap.Add(self.prog, 0, wx.EXPAND|wx.ALL, 10)
self.panel.SetSizer(wrap)
self.panel.Fit()
self.SetInitialSize()
self.Bind(wx.EVT_BUTTON, self.Go, self.gobtn)
def Go(self, event):
numwords = 1000000
self.prog.SetValue(0)
for i in range(1, numwords + 1):
progress = int(((float(i) / float(numwords)) * 100) - 1)
self.prog.SetValue(progress)
print "Done"
if __name__ == "__main__":
app = wx.App()
frame = MainFrame(None)
frame.Show()
app.MainLoop()
Answer: So, actually, **you are blocking the GUI thread** with your long-running task.
It may or may not run fine on some platforms and/or computers.
import wx
from wx.lib.delayedresult import startWorker
class MainFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super(MainFrame, self).__init__(*args, **kwargs)
self.panel = wx.Panel(self)
self.gobtn = wx.Button(self.panel, label="Go")
self.prog = wx.Gauge(self, style=wx.GA_HORIZONTAL)
self.timer = wx.Timer(self)
wrap = wx.BoxSizer()
wrap.Add(self.gobtn, 0, wx.EXPAND|wx.ALL, 10)
wrap.Add(self.prog, 0, wx.EXPAND|wx.ALL, 10)
self.panel.SetSizer(wrap)
self.panel.Fit()
self.SetInitialSize()
self.Bind(wx.EVT_BUTTON, self.Go, self.gobtn)
self.Bind(wx.EVT_TIMER, self.OnTimer, self.timer)
def Go(self, event):
# Start actual work in another thread and start timer which
# will periodically check the progress and draw it
startWorker(self.GoDone, self.GoCompute)
self.progress = 0
self.timer.Start(100)
def OnTimer(self, event):
# Timer draws the progress
self.prog.SetValue(self.progress)
def GoCompute(self):
# This method will run in another thread not blocking the GUI
numwords = 10000000
self.progress = 0  # avoid touching GUI widgets from a worker thread
for i in range(1, numwords + 1):
self.progress = int(((float(i) / float(numwords)) * 100) - 1)
def GoDone(self, result):
# This is called when GoCompute finishes
self.prog.SetValue(100)
self.timer.Stop()
print "Done"
if __name__ == "__main__":
app = wx.App()
frame = MainFrame(None)
frame.Show()
app.MainLoop()
Also notice that, contrary to your example:
* Button goes back to unclicked state after clicked
* You can move the window and it will not freeze
As a rule of thumb, every method which looks like `def Something(self,
event)` should run for just a few milliseconds.
**EDIT:** Another thing I have observed on Windows 7: the gauge
**starts** to grow at the time you call `self.prog.SetValue()` and grows over
some time to the specified value. It does not "jump" to that value; rather it
grows slowly to hit the set value. It seems to be a Windows 7 feature. I had
to switch off "Animate controls and elements inside windows" in performance
options to get rid of this behavior.
|
new sys.path in python
Question: I am having difficulties importing a script from a directory that is not
stored in sys.path. I have a script saved as test.py in a directory called
"Development" and am trying to add the development directory to sys.path so I
can import a function from my current script called index.py.
**Here is my code for index.py:**
import sys
sys.path.append ('/Users/master/Documents/Development/')
import test
printline()
**printline() is defined in test.py as:**
def printline():
print "I am working"
**Here is the error I am receiving:**
Traceback (most recent call last):
File "/Users/master/Documents/index.py", line 6, in <module>
printline()
NameError: name 'printline' is not defined
Any ideas on how I can get this to work?
Thanks.
Answer: 1. If you do `import test`, the function you defined is imported into its own namespace, so you must refer to it as `test.printline()`.
2. `test` may be the name of another module in your Python path, and since the directory you insert is appended to the path, it will be considered only if `test` is nowhere else to be found. Try inserting the path to the head of `sys.path` instead:
sys.path.insert(0, "...")
In a vanilla Python, the culprit is likely #1, but if you do not want your
scripts to break in the future, you should also get used to #2.
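Putting both points together, index.py would become (a sketch):

import sys
sys.path.insert(0, '/Users/master/Documents/Development/')

import test
test.printline()  # qualify the function with the module name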
|
Resizing a wxPython Window
Question: Is it possible to make a wxPython window only re-sizable to a certain ratio? I
know you can disable resizing; however, I'd like it so when the window was
resized it stuck to a certain width to height ratio.
Answer: One obvious way to do this would be to bind `wx.EVT_SIZE` to a function that
constrains the aspect ratio. I'm not certain this is The Right Way to do this,
but it works:
import wx
class SizeEvent(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title)
self.Bind(wx.EVT_SIZE, self.OnSize)
self.Centre()
self.SetSizeWH(400, 300)
self.Show(True)
def OnSize(self, event):
hsize = event.GetSize()[0] * 0.75
self.SetSizeHints(minW=-1, minH=hsize, maxH=hsize)
self.SetTitle(str(event.GetSize()))
app = wx.App()
SizeEvent(None, 1, 'sizeevent.py')
app.MainLoop()
(The boilerplate is borrowed from
[here](http://zetcode.com/wxpython/events/).)
|
wxMessageBox with an auto-close timer in wxPython
Question: **Platforms: Windows, OS X
Python Version: Active State Python 2.7
wxPython Version: Version 2.9**
Here is a sample code in which I use a wxMessageBox:
import wx,os
class Frame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(100, 100),style=wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX | wx.CLIP_CHILDREN)
host=os.system('hostname')
if host!='superman':
self.dialogBox=wx.MessageBox('The host name should be superman. Closing this dialog box in 2s...','Info')
self.Destroy()
else:
self.Center()
self.Show()
if __name__ == '__main__':
app = wx.App(redirect=False)
frame = Frame(None, -1, 'Sample')
app.MainLoop()
According to the above piece of code, if the host name is not 'superman',
then the user is shown a message box and prompted to press 'OK'. If the
user presses 'OK' button on the message box, then the control moves to the
next line in the code (i.e., line number 10) where the frame is destroyed. I
want to be able to automatically close the dialog box and go to the next
line in the code i.e., `self.Destroy()` if the user does not press the 'OK'
button in the next 2 seconds. Any thoughts on how do I do that in wxpython ?
Answer: If you create your own custom dialog by subclassing
[wx.Dialog](http://www.wxpython.org/docs/api/wx.Dialog-class.html) you can use
a [wx.Timer](http://www.wxpython.org/docs/api/wx.Timer-class.html) to generate
a periodic event to which you can bind a handler which updates the message
every time the timer event fires, then after x event fires you can destroy the
dialog.
**_Working example:_**
import wx
import os
class MessageDialog(wx.Dialog):
def __init__(self, message, title, ttl=10):
wx.Dialog.__init__(self, None, -1, title,size=(400, 150))
self.CenterOnScreen(wx.BOTH)
self.timeToLive = ttl
stdBtnSizer = self.CreateStdDialogButtonSizer(wx.OK|wx.CANCEL)
stMsg = wx.StaticText(self, -1, message)
self.stTTLmsg = wx.StaticText(self, -1, 'Closing this dialog box in %ds...'%self.timeToLive)
vbox = wx.BoxSizer(wx.VERTICAL)
vbox.Add(stMsg, 1, wx.ALIGN_CENTER|wx.TOP, 10)
vbox.Add(self.stTTLmsg,1, wx.ALIGN_CENTER|wx.TOP, 10)
vbox.Add(stdBtnSizer,1, wx.ALIGN_CENTER|wx.TOP, 10)
self.SetSizer(vbox)
self.timer = wx.Timer(self)
self.timer.Start(1000)  # Generate a timer event every second
self.Bind(wx.EVT_TIMER, self.onTimer, self.timer)
def onTimer(self, evt):
self.timeToLive -= 1
self.stTTLmsg.SetLabel('Closing this dialog box in %ds...'%self.timeToLive)
if self.timeToLive == 0:
self.timer.Stop()
self.Destroy()
class Frame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(100, 100),style=wx.MINIMIZE_BOX | wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX | wx.CLIP_CHILDREN)
host=os.system('hostname')
if host!='superman':
dlg = MessageDialog('The host name should be superman', 'Info', ttl=10)
dlg.ShowModal()
else:
self.Center()
self.Show()
if __name__ == "__main__":
app = wx.PySimpleApp()
frame = Frame(None, -1, "")
frame.Show(1)
app.MainLoop()
|
Python Socket Timeout Issue. How do control the flow of this code better?
Question: While working with the sockets library in python 2.7, I am encountering an
issue with getting the code to flow the way I want it to. I'd like the code to
iterate over a range of IP addresses and open a socket connection for each ip
in the range. If the connection times out, print an error and move on to the
next address in the range. I'm using a for loop to accomplish this, however
whenever the socket encounters a time out, the loop breaks. What am I doing
wrong? I'm assuming it's the way the exception is being handled. Can anyone
point me in the right direction?
from IPy import IP
ip = IP(sys.argv[1])
for x in ip:
print("Connecting to: {0}".format(str(x)))
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(10)
svr = (str(x), 25)
s.connect(svr)
if socket.timeout:
print("Timed out.")
data = s.recv(2048)
print(data)
continue
print("Range Completed.")
sys.exit(1)
Answer: `socket.timeout` is an exception class, so `if socket.timeout:` is always
true; a timed-out `connect()` raises the exception rather than setting a flag.
Catch it with try/except and move on to the next address:

import sys
import socket
from IPy import IP

ip = IP(sys.argv[1])
for x in ip:
    print("Connecting to: {0}".format(str(x)))
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(10)
    svr = (str(x), 25)
    try:
        s.connect(svr)
    except socket.timeout:
        print("Timed out.")
        continue
    data = s.recv(2048)
    print(data)

print("Range Completed.")
sys.exit(1)
|
How to get formatted date time in python
Question: I want my Linux filename like this:
May-01-0340AM-2011.tar
How can I get the date variable formatted like the above in Python?
In bash I write:
date1=$(date +"%b-%d-%I%M%p-%G")
Answer: You can use the same formatting string in `strftime` on a [datetime
object](http://docs.python.org/library/datetime.html#datetime-datetime):
>>> import datetime
>>> datetime.datetime.now().strftime('%b-%d-%I%M%p-%G')
'May-16-0245PM-2011'
Incidentally, I'd just like to put a word in for the joy of [ISO-8601 date
formatting](http://www.cl.cam.ac.uk/~mgk25/iso-time.html) :)
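For comparison, an ISO-8601 timestamp is just another format string:

>>> datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S')
'2011-05-16T14:45:00'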
|
How to get back a signed string (e.g. md5 hash) using a public-key (x509 certificate)
Question: I'm very new to cryptography and have been stuck on this problem for 2 days;
I hope you can help me.
I am trying to sign an MD5 digest string with the private key of an X509 certificate.
As far as I can see, this works fine. Now I try to get back that string using
the public key of that X509 certificate, but I have no idea how to do this.
First I created a x509 certificate and a private-Keyfile using OpenSSL:
openssl req -newkey rsa:1024 -nodes -x509 -days 365 -out signer.pem
Here's the example code I tried:
import M2Crypto
import hashlib
def empty_callback ():
return
# md5 hash of params
params = "0045KIABCDEFG"
m = hashlib.md5()
m.update(params)
md = m.digest()
print m.hexdigest()
M2Crypto.Rand.rand_seed (os.urandom (1024))
# sign md5 hash with private key
SignEVP = M2Crypto.EVP.load_key ('privkey.pem')
#Begin signing
SignEVP.sign_init ()
#Tell it to sign our string
SignEVP.sign_update (md)
#Get the final result
StringSignature = SignEVP.sign_final ()
#print the final result
print StringSignature.encode ('base64')
I get the public key with this, but don't know how to use it then.
objX509 = M2Crypto.X509.load_cert('signer.pem')
PubKey = objX509.get_pubkey()
I only found how to verify the signed string, but that's not what I need. Is
there a method to get back the original md5 digest (md) by using signer.pem
(the public key)?
I'm using M2Cyrpto-0.21.1 and OpenSSL 1.0.0d, and programming with Python27,
32bit on Windows7.
kind regards, Falko
Answer: Technically, you are not supposed to "decrypt" the signature. If you really
want to decrypt it, do something like this:
cert = X509.load_cert("signer.pem")
decrypted = cert.get_pubkey().get_rsa().public_decrypt(StringSignature, 1)
but this will not give you what you are expecting, I guess.
If you want to pull out the message digest that was signed using the code
above, you would have to decode the ASN.1 sequence in the `decrypted`. You can
do this with PyCrypto, of course, but what for?
from Crypto.Util import asn1
seq = asn1.DerSequence()
seq.decode(decrypted)
obj = asn1.DerObject()
obj.decode(seq[1])
# now this is the original message digest that was signed
original_message_md = obj.payload
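For completeness, the usual verification path with M2Crypto (a sketch
mirroring the signing code above) would be:

cert = M2Crypto.X509.load_cert('signer.pem')
pubkey = cert.get_pubkey()
pubkey.verify_init()
pubkey.verify_update(md)  # the md5 digest that was signed
assert pubkey.verify_final(StringSignature) == 1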
|
Convert Python2.6 to Google App Engine compatible (multiprocessing)
Question: I've made a little script in Python which uses multiprocessing. I've thought
of running it on the Google App Engine as a cron-job, but unfortunately Google
App Engine doesn't support multiprocessing. Can anyone help me convert this
into Google App Engine compatible code (perhaps using Google App Engine
tasks?)?
from multiprocessing import Pool
import MySQLdb
import urllib;
import urllib2;
def f(email_url):
url = "http://my-domain.com/cron.php"
values = { "email" : email_url[0], "url" : email_url[1] }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
urllib2.urlopen(req)
if __name__ == '__main__':
p = Pool()
emails_urls = list()
conn = MySQLdb.connect(host = "XXX.XXX.XXX.XXX", user = "USERNAME",
passwd = "PASSWORD", db = "MY-DATABASE")
cursor = conn.cursor()
cursor.execute ("SELECT email, url FROM data")
rows = cursor.fetchall()
for row in rows:
emails_urls.append((row[0], row[1]))
cursor.close()
conn.close()
p.map(f, emails_urls)
Answer: Take a look at Task Queues.
You can insert units of work into a task queue (=> thread) and set the
number of jobs in a queue which are executed simultaneously.
Take a look here: <http://code.google.com/intl/de-
DE/appengine/docs/python/taskqueue/>
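A minimal sketch of enqueueing one task per row (the `/worker` URL and the
handler behind it are assumptions you would define yourself):

from google.appengine.api import taskqueue

for email, url in emails_urls:
    # Each task POSTs its params to a /worker handler mapped in app.yaml
    taskqueue.add(url='/worker', params={'email': email, 'url': url})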
|
Rewrite issue with configuring Lighttpd with Flask Python framework
Question: I've run and developed my Flask application without incident using its built-
in server. It has worked fine and has been really smooth and fun.
Unfortunately, Lighttpd is, as always, a pain to deploy to. I'm following the
instructions as closely as I can, but unfortunately, my application still
isn't working out.
Here is my configuration so far:
server.modules += ( "mod_fastcgi" )
server.modules += ( "mod_rewrite" )
fastcgi.server = ("/bioinfo/main.fcgi" =>
((
"socket" => "/tmp/bioinfo-fcgi.sock",
"bin-path" => "/var/www/bioinfo/main.fcgi",
"check-local" => "disable",
"max-procs" => 1
))
)
fastcgi.debug = 1
url.rewrite-once = (
"^/bioinfo/static/(.*)$" => "/bioinfo/static/$1",
"^/bioinfo/(.*)$" => "/bioinfo/main.fcgi/$1"
)
# in: /etc/lighttpd/conf-available/10-fastcgi.conf
This works in that it displays the main page, but not any subsequent pages.
I have several app.route handlers in my Flask application which I access using
either GET or POST using some XHR in the client.
Also, here's my .fcgi file, just to make sure I don't have any glaring errors
here:
#!/usr/bin/python
from flup.server.fcgi import WSGIServer
from main import app
if __name__ == '__main__':
WSGIServer(app).run()
If anyone can identify the problem (AJAX doesn't work with the application's
URIs, most likely because my rewrite rules are wonky), I'd really appreciate
it. Thanks in advance, folks!
Answer: You need to chdir to the directory your application is running in manually.
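For example, the top of main.fcgi could look like this (a sketch):

#!/usr/bin/python
import os
# lighttpd may spawn the FCGI process with a different working directory,
# so switch to the directory containing this script before importing the app
os.chdir(os.path.dirname(os.path.abspath(__file__)))

from flup.server.fcgi import WSGIServer
from main import app

if __name__ == '__main__':
    WSGIServer(app).run()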
|
Python: How to read a (static) file from inside a package?
Question: Could you tell me how I can read a file that is inside my Python package?
I have a following situation: a package that I load has a number of templates
(text files used as strings) that I want to load from within the program. But
how do I specify the path to such file? Imagine I want to read a file from:
package\templates\temp_file
Some kind of path manipulation? Package base path tracking?
Thanks,
Answer: Assuming your template is located inside your module's package at this path:
<your_package>/templates/temp_file
the correct way to read your template is to use
[`pkg_resources`](http://pythonhosted.org/setuptools/pkg_resources.html#basic-
resource-access) package from _setuptools_ distribution:
import pkg_resources
resource_package = __name__ # Could be any module/package name
resource_path = '/'.join(('templates', 'temp_file'))  # Do not use os.path.join(), see below
template = pkg_resources.resource_string(resource_package, resource_path)
# or for a file-like stream:
template = pkg_resources.resource_stream(resource_package, resource_path)
> **Tip:**
> This will read data even if your distribution is zipped, so you may set
> `zip_safe=True` in your `setup.py`, and/or use the long-awaited [`zipapp`
> packer](https://docs.python.org/3.5/library/zipapp.html#module-zipapp) from
> _python-3.5_ to create self-contained distributions.
According to the Setuptools/`pkg_resources` docs, do not use `os.path.join`:
> ### [Basic Resource
> Access](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-
> resource-access)
>
> Note that resource names must be `/`-separated paths and cannot be absolute
> (i.e. no leading `/`) or contain relative names like "`..`". Do _not_ use
> `os.path` routines to manipulate resource paths, as they are _not_
> filesystem paths.
|
TypeError: AutoProxy object is not iterable - multiprocessing
Question: consider the following server code :
from multiprocessing.managers import BaseManager, BaseProxy
def baz(aa) :
print "aaa"
l = []
for i in range(3) :
l.append(aa)
return l
class SolverManager(BaseManager): pass
manager = SolverManager(address=('127.0.0.1', 50000), authkey='mpm')
manager.register('solver', baz)
server = manager.get_server()
server.serve_forever()
and the associated client :
import sys
from multiprocessing.managers import BaseManager, BaseProxy
class SolverManager(BaseManager): pass
def main(args) :
SolverManager.register('solver')
m = SolverManager(address=('127.0.0.1', 50000), authkey='mpm')
m.connect()
for i in m.solver(args[1]):
print i
if __name__ == '__main__':
sys.exit(main(sys.argv))
I think I'm missing something important here. My guess is that I have to
subclass the BaseProxy class to provide an iterable object, but so far I
haven't managed to get it right.
when I run the client I get this error :
Traceback (most recent call last):
File "mpmproxy.py", line 17, in <module>
sys.exit(main(sys.argv))
File "mpmproxy.py", line 13, in main
for i in m.solver(args[1]):
TypeError: 'AutoProxy[solver]' object is not iterable
however if I try to print it, the list is there ... Maybe it has also
something to do with the way data is serialized between client and server ...
in the documentation there is a similar case (with a generator) and they use
the following class to access the data :
class GeneratorProxy(BaseProxy):
_exposed_ = ('next', '__next__')
def __iter__(self):
return self
def next(self):
return self._callmethod('next')
def __next__(self):
return self._callmethod('__next__')
shall I do something similar ? Can anybody give me an example and explain to
me how this works ?
**update**
To clarify : suppose I add the class :
class IteratorProxy(BaseProxy):
def __iter__(self):
print self
return self
and in the client I register the function as
SolverManager.register('solver', proxytype=IteratorProxy)
the error I get is :
$python mpmproxy.py test
['test', 'test', 'test']
Traceback (most recent call last):
File "mpmproxy.py", line 22, in <module>
sys.exit(main(sys.argv))
File "mpmproxy.py", line 18, in main
for i in m.solver(args[1]):
TypeError: iter() returned non-iterator of type 'IteratorProxy'
I have the impression I'm missing something stupid here ...
**update 2**
I think I solved this problem:
The point was to get the real value :
for i in m.solver(args[1])._getvalue():
print i
gosh !!! I'm not sure if this is the correct answer or just a workaround ...
Answer: Indeed, to be iterable your proxy class needs to define an `__iter__` method
(which the default `AutoProxy` does not), so subclassing `BaseProxy` is the right way to go!
|
Accessing functions with a dot in their name (e.g. "as.vector") using rpy2
Question: I am trying to access the "as.vector" R function from within Python, using
rpy2. Let's say, for the sake of simplicity, that I want to do something as
simple as this using rpy2 (R code):
x <- as.vector(c(1, 2, 3))
Since "as.vector" contains a dot in its name, it is not directly available as
a member of rpy2.robjects.r
According to the documentation, rpy2 replaces dots by underscores for named
function parameters, but it doesn't seem to work for the function name itself.
I tried eg. "as_vector", "asvector" to no avail.
Any ideas?
Answer: According to the documentation, rpy2 does not replace dots by underscores when
using 'rpy2.robjects.r'.
You may want to consider the use of 'importr()'
|
Can I install Python 2.7.1 64bit alongside an existing 32bit install on OS X?
Question: **Short Description**
Is it possible to install Python 2.7.1 64/32bit from
[python.org](http://www.python.org/download/#id11) on top of an existing
install (from python.org) of Python 2.7.1 32bit?
**Background**
I installed the 32bit version for wxPython(2.8) support which until now has
given me zero issues. There are a few modules that I am now having difficulty
installing (psycopg2 and mysql-python). The warning messages in homebrew
constantly warn me about not having a 64bit version of Python on the path.
These warnings only add to my reasons for wanting to use a 64-bit version.
At the time I selected the 32bit install, the GUI for a particular project was
the main focus. Now the GUI has become very simple and the database back-
end support (using Django) is much more important. This being said, using the
development version of wxPython 2.9 (which supports OS X and 64bit 2.7) has
become acceptable.
For the non-mac users, to help explain why I had to use the 32bit version
please see this [Brief guide to using virtualenv in a wxpython
project](http://batok.github.com/virtualenvwxp/)
**System Information**
_Development System_
_OS:_ Mac OS X Snow Leopard (10.6.7)
_Python:_ 2.7.1 with virutalenv / virutalenv-wrapper
_Project Dependencies:_
Note that MySQL could be PostgreSQL's psycopg2 if I can get PostgreSQL to
install with homebrew.
> Django==1.2.5
> MySQL-python==1.2.3
> PIL==1.1.7
> PyVISA==1.3
> pyserial==2.5
> virtualenv==1.5.1
> virtualenvwrapper==2.6.3
> wsgiref==0.1.2
> wxPython==2.8.11.0
> wxPython-common==2.8.11.0
_Deployment System_
_OS:_ Windows XP / Windows 7
_Python:_ Hopefully none (goal to use py2exe, or similar tool)
**Current Thoughts**
I fear that my goal cannot be accomplished based on the file paths alone. In
Windows 7 the identifier (x86) is placed in the path showing that it is a
32bit program, but on OS X the path would be the same for 32bit or 64/32bit
installs (/Library/Frameworks/Python.framework/Versions/2.7/).
Any thoughts or comments would be helpful!
**Update 5-18-2011: 8:40 AM**
I have confirmed that, using the pre-compiled (.dmg) framework builds,
installing a 64-bit version _does_ blow away the 32-bit install. This
negatively affected my virtual environments: since everything I had
installed in the environments was based on the 32-bit install, nearly every
module threw an error of some sort.
I still have not achieved installing 32-bit and 64-bit on the same machine;
however, looking into [homebrew](https://github.com/mxcl/homebrew) in greater
detail, it does look like this _could_ be possible. The trick would be to define your
own formulas for each of the Python builds (from source) and rename the
install directory to something like 'Python27_32' and 'Python27_64'. I'll keep
exploring this front as I have time.
**EDIT 7-12-2011: 10:51 AM CST**
Has anyone out there been able to control homebrew's compile options?
Specifically, how do you select a 64-bit or a 32-bit compiler? I
will create a new SO question if this doesn't bring anything up.
Answer: Check out <http://www.macports.org/>, which provides ports of various flavors
of Linux/Unix tools that don't appear in the default Mac installation and
duplicates those that it needs otherwise. It installs everything in /opt/local
instead of stepping on the installed base. With some manipulation of PATH and
|
Getting number of Google hits for a larger list of words
Question: I saw some relevant questions for my problem, but no specific answer. In
brief, I have a large list of words (more than 1000), and I would like to get
the number of Google hits for each word. In particular, I read this thread on
[Stackoverflow: Google search to retrieve number of results for search
keywords](http://stackoverflow.com/questions/4785833/google-search-to-
retrieve-number-of-results-for-search-keywords). But the question of how to
handle a large list is still open. Please, I would really appreciate it if
anyone can throw out a piece of Python code with which I could play and build
a script.
Answer: You might be referring to this comment on the aforementioned question:
> If Google rejects your request, you could try scraping the search results
> page...
I would strongly recommend _not_ doing that, especially if you have huge
numbers of words to process. However, for _instructive_ purposes, this is the
code that would ordinarily work for you:
import urllib2
import re
def results(word):
text = urllib2.urlopen('http://www.google.com/search?q=%s'%word).read()
m = re.search('About ([0-9,]+) results', text)
if m is None:
return None
else:
return int(m.group(1).replace(',', '')) # remove commas and int-ify
I say "ordinarily" because, in my testing, instead of a search results page, I
received a polite request from Google:
> **403.** That’s an error.
>
> Your client does not have permission to get URL `/search?q=foo` from this
> server.
I determined that Google knows to reject my request by looking at the `User-
Agent` in the request header. It's simple to spoof the `User-Agent` and make
this code work, but again, please don't.
Another consideration is that Randall Munroe of xkcd fame has suggested the
hit counts on the page are wildly inaccurate:
<http://blog.xkcd.com/2011/02/04/trochee-chart/>
|
android mobile socket open
Question: Why can a socket be opened on the Android emulator, connecting to the Python
server code just fine, while the same Android code running on an actual phone
fails to open a socket? Any suggestions on what the problem is and how to
solve it?
import sys
from threading import Thread
import socket
import MySQLdb
allClients=[]
class Client(Thread):
def __init__(self,clientSocket):
Thread.__init__(self)
self.sockfd = clientSocket #socket client
self.name = ""
self.nickName = ""
def newClientConnect(self):
allClients.append(self.sockfd)
while True:
while True:
try:
rm= self.sockfd.recv(2048)
print rm
i=0
while (i<2):
if (rm) == row[i][0]:
reply="\n Welcome to our game %s: %s"%(rm,row[i][1])
self.sockfd.send(reply)
break
else:
i=i+1
if i==2:
reply="\n Error opaa ba2a"
self.sockfd.send(reply)
i=0
break
break
except ValueError:
self.sockfd.send("\n UNVAlied Comment ")
def run(self):
self.newClientConnect()
while True:
buff = self.sockfd.recv(2048)
if buff.strip() == 'quit':
self.sockfd.close()
break # Exit when break
else:
self.sendAll(buff)
#Main
if __name__ == "__main__":
#Server Connection to socket:
IP = '50.0.10.107'
PORT = 5807
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )
print ("Server Started")
try:
serversocket.bind(('',54633))
except ValueError,e:
print e
serversocket.listen(5)
db= MySQLdb.connect(host= "localhost",
user="root",
passwd="newpassword",
db="new_schema")
x=db.cursor()
x.execute("SELECT * FROM lolo")
row = x.fetchall()
print "Connected to the Database"
while True:
(clientSocket, address) = serversocket.accept()
print 'New connection from ', address
ct = Client(clientSocket)
ct.start()
__all__ = ['allClients','Client']
The Python code above is the server; the Android app shows a map and a button,
and when the button is clicked the connection starts. It works great on the emulator.
Answer: Could be a number of things:
1. You don't have networking permissions enabled in your manifest file
2. Your wifi / 3G is disabled
3. If your server is on your personal computer and you're connected to the internet using a router then you need to redirect the port you're using for socket communication from the router to your machine.
|
Python - multiple simultaneous threadpools
Question: I'm writing a web scraper in python, using httplib2 and lxml (yes - I know I
could be using scrapy. Let's move past that...) The scraper has about 15000
pages to parse into approximately 400,000 items. I've got the code to parse
the items to run instantaneously (almost) but the portion that downloads the
page from the server is still extremely slow. I'd like to overcome that
through concurrency. However, I can't rely on EVERY page needing to be parsed
EVERY time. I've tried with a single ThreadPool (like multiprocessing.pool,
but done with threads - which should be fine since this is an I/O bound
process), but I couldn't think of a graceful (or working) way of getting ALL
of the threads to stop when the date of the last index item was greater than
the item we were processing. Right now, I'm working on a method using two
instances of ThreadPool - one to download each page, and another to parse the
pages. A simplified code example is:
#! /usr/bin/env python2
import httplib2
from Queue import PriorityQueue
from multiprocessing.pool import ThreadPool
from lxml.html import fromstring
pages = [x for x in range(1000)]
page_queue = PriorityQueue(1000)
url = "http://www.google.com"
def get_page(page):
#Grabs google.com
h = httplib2.Http(".cache")
resp, content = h.request(url, "GET")
tree = fromstring(str(content), base_url=url)
page_queue.put((page, tree))
print page_queue.qsize()
def parse_page():
page_num, page = page_queue.get()
print "Parsing page #" + str(page_num)
#do more stuff with the page here
page_queue.task_done()
if __name__ == "__main__":
collect_pool = ThreadPool()
collect_pool.map_async(get_page, pages)
collect_pool.close()
parse_pool = ThreadPool()
parse_pool.apply_async(parse_page)
parse_pool.close()
parse_pool.join()
collect_pool.join()
page_queue.join()
Running this code however, doesn't do what I expect - which is to fire off two
threadpools: one populating a queue and another pulling from it to parse. It
begins the collect pool and runs through it and then begins the parse_pool and
runs through it (I assume, I've not let the code run long enough to get to the
parse_pool - the point is that collect_pool is all that seems to be running).
I'm fairly sure I've messed something up with the order of the calls to
join(), but I can't for the life of me figure out what order they're supposed
to be in. My question is essentially this: Am I barking up the right tree
here? And if so, what the hell am I doing wrong? If I'm not, what would your
suggestions be?
Answer: First of all, your design seems to be correct at a high level. The use of a
threadpool for collecting the pages is justified by the synchronous nature of
the httlib2 module. (With an asynchronous library one thread would be enough;
note that even with httplib2 and the pool at most one collector thread is
running at any time because of the GIL.) The parsing pool is justified by the
lxml module having been written in C/C++ (and assuming that thus the [Global
Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock) is
released during the parsing of the page - this is to be checked in the lxml
docs or code!). If this latter were not true, then there would be no
performance gain by having a dedicated parsing pool as only one thread would be
able to acquire the GIL. In this case it would be better to use a process
pool.
I am not familiar with the ThreadPool implementation, but I assume that it is
analogous to the Pool class in the multiprocessing module. On this basis the
problem appears to be that you create only a single work item for the
parse_pool and after parse_page processes the first page it never tries to
dequeue further pages from there. Additional work items are not submitted to
this pool either, so the processing stops, and after the parse_pool.close()
call the threads of the (empty) pool terminate.
The solution is to eliminate the page_queue. The get_page() function should
put a work item on the parse_pool by calling apply_async() for every page it
collects, instead of feeding them into page_queue.
The main thread should wait till the collect_queue is empty (i.e. the
collect_pool.join() call returned), then it should close the parse_pool (as we
can be sure that no more work will be submitted for the parser). Then it
should wait for the parse_pool to become empty by calling parse_pool.join()
and then exit.
Furthermore, you need to increase the number of threads in the connect_pool in
order to process more HTTP requests concurrently. The default number of
threads in a pool is the number of CPUs; currently you cannot issue more than
that many requests. You may experiment with values up to thousands or
tens of thousands; observe the CPU consumption of the pool; it should not
approach 1 CPU.
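A rough sketch of that restructuring (illustrative and untested; the thread
count is an assumption to tune):

#! /usr/bin/env python2
import httplib2
from multiprocessing.pool import ThreadPool
from lxml.html import fromstring

pages = [x for x in range(1000)]
url = "http://www.google.com"

def parse_page(page_num, tree):
    print "Parsing page #" + str(page_num)
    # do more stuff with the page here

def get_page(page):
    h = httplib2.Http(".cache")
    resp, content = h.request(url, "GET")
    tree = fromstring(str(content), base_url=url)
    # submit directly to the parse pool instead of a shared queue
    parse_pool.apply_async(parse_page, (page, tree))

if __name__ == "__main__":
    collect_pool = ThreadPool(100)  # I/O bound, so use many threads
    parse_pool = ThreadPool()       # defaults to one thread per CPU
    collect_pool.map_async(get_page, pages)
    collect_pool.close()
    collect_pool.join()   # all pages collected
    parse_pool.close()    # no further parse jobs will be submitted
    parse_pool.join()     # wait for parsing to finish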
|
how to concatenate multiple files for stdin of Popen
Question: I'm porting a bash script to python 2.6, and want to replace some code:
cat $( ls -tr xyz_`date +%F`_*.log ) | filter args > bzip2
I guess I want something similar to the "Replacing shell pipe line" example at
<http://docs.python.org/release/2.6/library/subprocess.html>, ala...
p1 = Popen(["filter", "args"], stdin=*?WHAT?*, stdout=PIPE)
p2 = Popen(["bzip2"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
But, I'm not sure how best to provide `p1`'s `stdin` value so it concatenates
the input files. Seems I could add...
p0 = Popen(["cat", "file1", "file2"...], stdout=PIPE)
p1 = ... stdin=p0.stdout ...
...but that seems to be crossing beyond use of (slow, inefficient) pipes to
call external programs with significant functionality. (Any decent shell
performs the `cat` internally.)
So, I can imagine a custom class that satisfies the file object API
requirements and can therefore be used for p1's stdin, concatenating arbitrary
other file objects. (_EDIT: existing answers explain why this isn't possible_)
**Does python 2.6 have a mechanism addressing this need/want, or might
another`Popen` to `cat` be considered perfectly fine in python circles?**
Thanks.
Answer: You can replace everything that you're doing with Python code, except for your
external utility. That way your program will remain portable as long as your
external util is portable. You can also consider turning the C++ program into
a library and using Cython to interface with it. As Messa showed, `date` is
replaced with `time.strftime`, globbing is done with `glob.glob` and `cat` can
be replaced with reading all the files in the list and writing them to the
input of your program. The call to `bzip2` can be replaced with the `bz2`
module, but that will complicate your program because you'd have to read and
write simultaneously. To do that, you need to either use `p.communicate` or a
thread if the data is huge (`select.select` would be a better choice but it
won't work on Windows).
import sys
import bz2
import glob
import time
import threading
import subprocess
output_filename = '../whatever.bz2'
input_filenames = glob.glob(time.strftime("xyz_%F_*.log"))
p = subprocess.Popen(['filter', 'args'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output = open(output_filename, 'wb')
output_compressor = bz2.BZ2Compressor()
def data_reader():
for filename in input_filenames:
f = open(filename, 'rb')
p.stdin.writelines(iter(lambda: f.read(8192), ''))
p.stdin.close()
input_thread = threading.Thread(target=data_reader)
input_thread.start()
with output:
for chunk in iter(lambda: p.stdout.read(8192), ''):
output.write(output_compressor.compress(chunk))
output.write(output_compressor.flush())
input_thread.join()
p.wait()
## Addition: How to detect file input type
You can use either the file extension or the Python bindings for libmagic to
detect how the file is compressed. Here's a code example that does both, and
automatically chooses `magic` if it is available. You can take the part that
suits your needs and adapt it to your needs. The `open_autodecompress` should
detect the mime encoding and open the file with the appropriate decompressor
if it is available.
import os
import gzip
import bz2
try:
import magic
except ImportError:
has_magic = False
else:
has_magic = True
mime_openers = {
'application/x-bzip2': bz2.BZ2File,
'application/x-gzip': gzip.GzipFile,
}
ext_openers = {
'.bz2': bz2.BZ2File,
'.gz': gzip.GzipFile,
}
def open_autodecompress(filename, mode='r'):
if has_magic:
ms = magic.open(magic.MAGIC_MIME_TYPE)
ms.load()
mimetype = ms.file(filename)
opener = mime_openers.get(mimetype, open)
else:
basepart, ext = os.path.splitext(filename)
opener = ext_openers.get(ext, open)
return opener(filename, mode)
|
carriage characters are lost in the resulting string when reading a file on Windows
Question: When reading a txt file on Windows with Python, carriage return characters
are lost in the resulting string.
c:/text.txt
aaa\r\nbbb\r\nccc\r\nddd
code:
input = open('c:/text.txt')
str = input.read()
import repr
for i,ch in enumerate(str):
print i,ord(ch),repr.repr(ch)
result:
0 97 'a'
1 97 'a'
2 97 'a'
3 10 '\n'
4 98 'b'
5 98 'b'
6 98 'b'
7 10 '\n'
8 99 'c'
9 99 'c'
10 99 'c'
11 10 '\n'
12 100 'd'
13 100 'd'
14 100 'd'
You can see that all carriage return characters are lost. Any suggestions
appreciated. Thanks.
Answer: If you open the file in text mode, Windows line endings `\r\n` are
automatically substituted by standard line endings `\n`. To prevent this from
happening, open the file in binary mode:
input = open('c:/text.txt', 'rb')
|
Python serial communication
Question: I'm working on an Arduino project, and I am interfacing it with a Python
script due to memory limitations. On the Python side I have a 2 dimensional
matrix containing respective x, y values for coordinates, and in this list is
26000 coordinate pairs. So, in interest of clarifying the data structure for
all of you, `pathlist[0][0]`, would return the X value of the first coordinate
of my list. Performing different operations, etc. on this list in Python is
posing no problems. Where I am running into trouble however is sending these
values to Arduino over serial, in a way that is useful.
Due to the nature of serial communication (at least I think this is the case)
I must send each each integer as a string, and only one digit at a time. So, a
number like 345 would be sent over as 3 individual characters, those being of
course, 3, 4, then 5.
What I am struggling with is finding a way to rebuild those integers on the
Arduino.
Whenever I send a value over, it's receiving the data and outputting it like
so:
//Python is sending over the number '25'
2ÿÿ52
//Python is sending the number 431.
4ÿÿ321ÿÿÿ2
The Arduino code is:
String str;
int ds = 4;
void setup() {
Serial.begin(9600);
}
void loop(){
if (Serial.available()>0) {
for (int i=0; i<4; i=i+1) {
char d= Serial.read();
str.concat(d);
}
char t[str.length()+1];
str.toCharArray(t, (sizeof(t)));
int intdata = atoi(t);
Serial.print(intdata);
}
}
And the Python code looks like this:
import serial
s = serial.Serial(port='/dev/tty.usbmodemfd131', baudrate=9600)
s.write(str(25))
I'm almost certain that the problem isn't stemming from the output method
(`Serial.print`), seeing as when I declare another int, it formats fine on
output, so I am assuming the problem lies in how the `intdata` variable is
constructed.
One thing of note that may help diagnose this problem is that if I change
`Serial.print(intdata)` to `Serial.print(intdata+5)` my result is `2ÿÿ57`,
where I would expect 30 (25+5). This 7 is present regardless of the input. For
instance I could write 271 to the serial and my result would look as follows:
//For input 271.
2ÿÿ771ÿÿÿ7
It appears to me that Arduino is chunking the values into pairs of two and
appending the length to the end. I can't understand why that would happen
though.
It also seems to me that the `ÿ` are being added in the for loop. Meaning that
they are added because nothing is being sent at that current moment. But even
fixing that by adding yet another `if(Serial.available()>0)` conditional, the
result is still not treated like an integer.
Also, would using [Pickle](http://en.wikipedia.org/wiki/Pickle_%28Python%29)
be appropriate here? What am I doing wrong?
Answer: You should wait a bit for the serial data to arrive.
The Arduino code should be:
if (Serial.available()){
delay(100); // Wait for all data.
while (Serial.available()) {
char d = Serial.read();
str.concat(d);
}
}
Also you have to clear your string before re-using it.
[Edit]
I forgot to mention that ÿ == -1 == 255, which means `Serial.read()` is
saying it can't read anything.
|
How to generically apply an override of a function to multiple classes in python?
Question: I am working on a Django application but this seems like it is just a python
question, with nothing necessarily specific to Django. I'm pretty new to
python, and its hard to describe what I am trying to do, but easier to show so
here goes:
I have one class:
class SlideForm(ModelForm):
class Meta:
model = Slide
which I subclass:
class HiddenSlideForm(SlideForm):
def __init__(self, *args, **kwargs):
super(HiddenSlideForm, self).__init__(*args, **kwargs)
for name, field in self.fields.iteritems():
field.widget = field.hidden_widget()
field.required = False
and then I have another class:
class DeckForm(ModelForm):
def __init__(self, *args, **kwargs):
# do some stuff here
return super(DeckForm, self).__init__(*args, **kwargs)
class Meta:
model = Deck
# other stuff here
which I also sub-class:
class HiddenDeckForm(DeckForm):
def __init__(self, *args, **kwargs):
super(HiddenDeckForm, self).__init__(*args, **kwargs)
for name, field in self.fields.iteritems():
field.widget = field.hidden_widget()
field.required = False
Note that the subclasses have the exact same code other than class names and
do the exact same thing. I have been trying to figure out the best way to
genericize this so I can keep it DRY and easily use it for other classes, and
have considered decorators and/or multiple inheritance--both of which are new
concepts for me--but I keep getting mixed up.
Help is appreciated!
(As a side note, feel free to point out any problems you see in my django code
:) )
Answer: One option is to use a Mixin class; example:
First, the common behavior goes in the mixin:
class SomeMixin(object):
def __init__(self, *args, **kwargs):
super(SomeMixin, self).__init__(*args, **kwargs)
for name, field in self.fields.iteritems():
field.widget = field.hidden_widget()
field.required = False
To the extent that you are in reasonable control of all of the classes in the
inheritance graph, and so long as you call `super` in every method that needs
to be overridden, then it doesn't matter too much what the derived classes
look like.
However, you run into a problem when one of the superclasses does not itself
call `super` at the correct time. It's very important that the overridden
method, in that case, must be called _last_ , since once it's called, no more
calls will be made.
The simplest solution is to make sure that each class actually derives from
the offending superclass, but in some cases, that's just not possible;
deriving a new class creates a new object that you don't actually want to
exist! Another reason might be because the logical base class is too far up
the inheritance tree to work out.
In that case, you need to pay particular attention to the _order_ in which
base classes are listed. Python will consider the left-most superclass first,
unless a more derived class is present in the inheritance diagram. This is an
involved topic, and to understand what python is really up to, you should read
about the [C3 MRO algorithm](http://www.python.org/download/releases/2.3/mro/)
present in python 2.3 and later.
Base classes as before, but since all of the common code comes from the mixin,
the derived classes become trivial
class HiddenSlideForm(SomeMixin, SlideForm):
pass
class HiddenDeckForm(SomeMixin, DeckForm):
pass
Note that the mixin class appears _first_ , since we can't control what the
`*Form` classes do in their init methods.
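You can verify the lookup order yourself; a quick sketch (the tail of the tuple
depends on Django's own class hierarchy):

    print HiddenSlideForm.__mro__
    # (<class 'HiddenSlideForm'>, <class 'SomeMixin'>,
    #  <class 'SlideForm'>, <class 'ModelForm'>, ..., <type 'object'>)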
If the `__init__` methods of either are non-trivial, you still get a win.
class HiddenSlideForm(SomeMixin, SlideForm):
def __init__(self, *args, **kwargs):
super(HiddenSlideForm, self).__init__(*args, **kwargs)
do_something_special()
* * *
Make sure that `object` is in the inheritance diagram, somewhere. Strange
things can happen otherwise.
|
Array order in `numpy.dot`
Question: In Python's [numerical library NumPy](http://numpy.org/), how does the
`numpy.dot` function deal with arrays of different memory-order?
`numpy.dot(c-order, f-order)` vs. `dot(f-order, c-order)` etc.
The reason I ask is that a long time ago (numpy 1.0.4?), I ran some tests and
noticed `numpy.dot` performed worse than calling `dgemm` from `scipy.linalg`
directly, with the correct transposition flags, though both call the same BLAS
library internally. (I suspected the reason was copying of the input matrices
inside `numpy.dot`, which is tragic if the input is large.)
Now I tried again and actually `numpy.dot` performs the same as `dgemm`, so
there is no reason to keep the arrays in specific order and set transposition
flags manually. Much cleaner code.
So my question is, how does a recent (let's say 1.6.0) `numpy.dot` work,
guarantees on when things are copied and when not? I'm concerned about 1)
memory 2) performance here. Cheers.
Answer: What you were seeing may have been related to a blas-optimized dot
import error being caught and handled silently (this code snippet is from
numeric.py):
# try to import blas optimized dot if available
try:
# importing this changes the dot function for basic 4 types
# to blas-optimized versions.
from _dotblas import dot, vdot, inner, alterdot, restoredot
except ImportError:
# docstrings are in add_newdocs.py
inner = multiarray.inner
dot = multiarray.dot
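If you want to check which version your numpy ended up with, a small sketch that
probes for the same internal `_dotblas` module (only meaningful on numpy
versions that still ship it):

    try:
        import numpy.core._dotblas
    except ImportError:
        print 'dot is the plain multiarray fallback'
    else:
        print 'dot is BLAS-optimized'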
|
JSON Encoding with Django adding extra \\ characters
Question: I'm trying to create a function that will convert a dictionary containing a
message and a Django model instance into JSON, that I can pass back to the
client. For example, I have the model Test defined in models.py.
from django.db import models
class Test(models.Model):
test_field = models.CharField(max_length=40)
I've defined this extension of the simplejson JSONEncoder based on the this
[stackoverflow question](http://stackoverflow.com/questions/2249792/json-
serializing-django-models-with-simplejson):
from django.core.serializers import serialize
from django.utils.simplejson import dumps, loads, JSONEncoder
from django.db.models.query import QuerySet
from django.db import models
from django.utils.functional import curry
class DjangoJSONEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, QuerySet):
# `default` must return a python serializable
# structure, the easiest way is to load the JSON
# string produced by `serialize` and return it
return loads(serialize('json', obj))
if isinstance(obj, models.Model):
#do the same as above by making it a queryset first
set_obj = [obj]
set_str = serialize('json', set_obj)
#eliminate brackets in the beginning and the end
str_obj = set_str[1:len(set_str)-2]
return str_obj
return JSONEncoder.default(self,obj)
# partial function, we can now use dumps(my_dict) instead
# of dumps(my_dict, cls=DjangoJSONEncoder)
dumps = curry(dumps, cls=DjangoJSONEncoder)
Then I go about creating an instance of this along with a status message:
t = Test(test_field="hello")
d = {"entry": t, "message": "Congratulations"}
json = dumps(d)
The contents of json are:
{"entry": "{\\"pk\\": null, \\"model\\": \\"hours.test\\", \\"fields\\": {\\"test_field\\": \\"hello\\"}", "message": "Congratulations"}
Which is basically what I want except for all the extra `\\` characters. Why
are these being inserted into the json? How can I modify my DjangoJSONEncoder
so it doesn't insert the \ characters?
**NOTE**
If I just encode the model instance manually I don't get all the extra `\\`
characters.
s = serialize('json', [t])
s[1:len(s)-2]
This outputs:
{"pk": null, "model": "hours.test", "fields": {"test_field": "hello"}
**EDIT**
Based on the advice of Daniel Roseman and Leopd I modified the
DjangoJSONEncoder class to the following:
class DjangoJSONEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, QuerySet):
# `default` must return a python serializable
# structure, the easiest way is to load the JSON
# string produced by `serialize` and return it
return loads(serialize('python', obj))
if isinstance(obj, models.Model):
#do the same as above by making it a list first
return serialize('python', [obj])[0]
return JSONEncoder.default(self,obj)
Answer: Your logic is wrong, unfortunately. Your "easiest way", as you state, returns
a string - but you don't want a string at that point, you want a dictionary.
You end up serializing a string within a string, hence the extra quotes which
need to be escaped.
Luckily, one of the format options for the `serialize` function is `python` -
which "serializes" the queryset to a Python dictionary. So you just need:
    return serialize('python', obj)
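With that change (and the corrected encoder in the question's EDIT), the
earlier example should produce nested JSON instead of an escaped string:

    t = Test(test_field="hello")
    print dumps({"entry": t, "message": "Congratulations"})
    # {"entry": {"pk": null, "model": "hours.test", "fields": {"test_field": "hello"}}, "message": "Congratulations"}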
|
Locate unused structures and structure-members
Question: Some time ago we took over the responsibility of a legacy code base.
One of the quirks of this very badly structured/written code was that it
contained a number of really huge structs, each containing hundreds of
members. One of the many steps that we did was to clean out as much of the
code as possible that wasn't used, hence the need to find unused
structs/struct members.
Regarding the structs, I conjured up a combination of python, [GNU
Global](http://www.gnu.org/software/global/) and
[ctags](http://ctags.sourceforge.net/) to list the struct members that are
unused.
Basically, what I'm doing is to use `ctags` to generate a tags file, the
python-script below parses that file to locate all struct members and then
using `GNU Global` to do a lookup in the previously generated global-database
to see if that member is used in the code.
This approach has a number of quite serious flaws, but it sort of solved the
issue we faced and gave us a good start for further cleanup.
There must be a better way to do this!
The question is: How to find unused structures and structure members in a code
base?
#!/usr/bin/env python
import os
import string
import sys
import operator
def printheader(word):
"""generate a nice header string"""
print "\n%s\n%s" % (word, "-" * len(word))
class StructFreqAnalysis:
""" add description"""
def __init__(self):
self.path2hfile=''
self.name=''
self.id=''
self.members=[]
def show(self):
print 'path2hfile:',self.path2hfile
print 'name:',self.name
print 'members:',self.members
print
def sort(self):
return sorted(self.members, key=operator.itemgetter(1))
def prettyprint(self):
'''display a sorted list'''
print 'struct:',self.name
print 'path:',self.path2hfile
for i in self.sort():
print ' ',i[0],':',i[1]
print
f=open('tags','r')
x={} # struct_name -> class
y={} # internal tags id -> class
for i in f:
i=i.strip()
if 'typeref:struct:' in i:
line=i.split()
x[line[0]]=StructFreqAnalysis()
x[line[0]].name=line[0]
x[line[0]].path2hfile=line[1]
for j in line:
if 'typeref' in j:
s=j.split(':')
x[line[0]].id=s[-1]
y[s[-1]]=x[line[0]]
f.seek(0)
for i in f:
i=i.strip()
if 'struct:' in i:
items=i.split()
name=items[0]
id=items[-1].split(':')[-1]
if id:
if id in y:
key=y[id]
key.members.append([name,0])
f.close()
# do frequency count
for k,v in x.iteritems():
for i in v.members:
cmd='global -a -s %s'%i[0] # -a absolute path. use global to give src-file for member
g=os.popen(cmd)
for gout in g:
if '.c' in gout:
gout=gout.strip()
f=open(gout,'r')
for line in f:
if '->'+i[0] in line or '.'+i[0] in line:
i[1]=i[1]+1
f.close()
printheader('All structures')
for k,v in x.iteritems():
v.prettyprint()
#show which structs that can be removed
printheader('These structs could perhaps be removed')
for k,v in x.iteritems():
if len(v.members)==0:
v.show()
printheader('Total number of probably unused members')
cnt=0
for k,v in x.iteritems():
for i in v.members:
if i[1]==0:
cnt=cnt+1
print cnt
**Edit**
As proposed by @Jens-Gustedt, using the compiler is a good way to do it. I'm
after an approach that can do a sort of "high level" filtering before falling
back to the compiler approach.
Answer: If these are only a few `struct` and if the code does no bad hacks of
accessing a `struct` through another type... then you could just comment out
all the fields of your first `struct` and let the compiler tell you.
Uncomment one used field after the other until the compiler is satisfied. Then,
once that compiles, do some thorough testing to ensure the precondition (that
there were no hacks) actually holds.
Iterate over all `struct`.
Definitely not pretty, but at the end you'd have at least one person who
knows the code a bit.
|
Ignore ImportError when exec source code
Question: I have an application that reads test scripts in python and sends them across
the network for execution on a remote python instance. As the controlling
program does not need to run these scripts I do not want to have all the
modules the test scripts use installed on the controller's python environment.
However the controller does need information from the test script to tell it
how to run the test. Currently what I do to read and import test script data
is something like
with open( 'test.py', 'r' ) as f:
source = f.read()
m = types.ModuleType( "imported-temp", "Test module" )
co = compile( source, 'test.py', 'exec' )
exec co in m.__dict__
which yields a new module that contains the test. Unfortunately exec will
raise ImportErrors if the test tries to import something the controller does
not have. And worse, the module will not be fully imported.
If I can guarantee that the controller will not use the missing modules, is
there someway I can ignore these exceptions? Or some other way to find out the
names and classes defined in the test?
Examples test:
from controller import testUnit
import somethingThatTheControllerDoesNotHave
_testAttr = ['fast','foo','function']
class PartOne( testUnit ):
def run( self ):
pass
What the controller needs to know is the data in _testAttr and the name of all
class definitions inheriting from testUnit.
Answer: Write an import hook that catches the exception and returns a dummy module if
the module doesn't exist.
import __builtin__
from types import ModuleType
class DummyModule(ModuleType):
def __getattr__(self, key):
return None
__all__ = [] # support wildcard imports
def tryimport(name, globals={}, locals={}, fromlist=[], level=-1):
try:
return realimport(name, globals, locals, fromlist, level)
except ImportError:
return DummyModule(name)
realimport, __builtin__.__import__ = __builtin__.__import__, tryimport
import sys # works as usual
import foo # no error
from bar import baz # also no error
from quux import * # ditto
You could also write it to _always_ return a dummy module, or to return a
dummy module if the specified module hasn't already been loaded (hint: if it's
in `sys.modules`, it has already been loaded).
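The second variant suggested above - hand out a dummy for anything not already
loaded, without even attempting the import - is a one-line change (a sketch
reusing `realimport` and `DummyModule` from the code above):

    import sys

    def lazyimport(name, globals={}, locals={}, fromlist=[], level=-1):
        if name in sys.modules:  # already loaded, so the real import is safe
            return realimport(name, globals, locals, fromlist, level)
        return DummyModule(name)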
|
RegEx to delete all double whitespace EXCEPT \n? preg_replace
Question: I have imported a plain-text version of a PDF using a Python script, but it has
a bunch of garbage artifacts that I just don't care about.
The only whitespace I care about is (1) **single** spaces, and (2) **double**
\n's.
**Single space,** for obvious reasons, between word boundaries. **Double
\n's,** to demarcate between paragraphs.
The _garbage_ whitespace it contains looks like this:
[\ \n\t]+ all jumbled together
Which leads me to another problem, sometimes the paragraphs are demarcated by
[\n][\s]+[\n]
I am not experienced enough with regex to make it ignore the inner whitespace
between the two `\n`'s. As an amateur RegExer, my problem is that `\s`
includes `\n`.
If it didn't -- I think this would be a really easy problem to solve.
All other white space is irrelevant, and nothing I am trying is working really
whatsoever.
Any suggestions would greatly be appreciated.
## Sample text
Summary: The Department of Environment in Bangladesh seized 265 sacks of poultry feed
tainted with tannery waste and various chemicals.
Synthesis/Analysis: The Department of Environment seized the tainted poultry feed on
28 March from a house in the city of Adabar located in Dhaka province. Workers were
found in the house, which was used as an illegal factory, producing the tainted feed. The
Bangladesh Environment Conservation Act allowed for a case to be filed against the
factory’s manager, Mahmud Hossain, and the owner, who was not named.
It was reported that the Department of Environment had also closed three other factories
in Hazaribag a month prior to this instance for the same charges. The Bangladesh Council of
Scientific and Industrial Research found that samples from the feed taken from these
factories had “dangerous levels of chromium…” The news report also stated that “poultry
6
and eggs became poisonous” from consuming the tainted feed, which would also cause
health concerns for consumers.
* * *
This is just leading me to more fixes... Gotta remove all the page numbers,
and random double \n's.
Answer: You can use an assertion to make `\s` exclude line breaks:
((?!\n)\s){2,}
To merge linebreaks with `\n\s+\n` whitespace in between, you can use a similar
construct in place of the `\s+`. But for simplicity I would just use two
`preg_replace` calls: first merge linebreaks, then clean up double spaces.
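Since the file comes out of a Python script anyway, here is a minimal sketch of
that two-pass approach using Python's `re` (the same patterns work with
`preg_replace`):

    import re

    def clean(text):
        # pass 1: collapse any whitespace run containing newlines into one paragraph break
        text = re.sub(r'\n\s*\n', '\n\n', text)
        # pass 2: collapse runs of whitespace with no newline into a single space
        text = re.sub(r'((?!\n)\s){2,}', ' ', text)
        return text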
|
Implementing a "Pythonic" map in Scheme: bad idea?
Question: In Scheme, the function `(map fn list0 [list1 .. listN])` comes with the
restriction that the lists must have the same number of elements. Coming from
Python, I'm missing the freedom of Python list comprehensions, which look a
lot like `map` above, but without this restriction.
I'm tempted to implement an alternative "my-map", which allows for lists of
differing size, iterating through the first N elements of all lists, where N
is the length of the shortest list.
For example, let `num` be `10` and `lst` be `(1 2 3)`. With my-map, I hope to
write expressions like:
    (my-map + (circular-list num) lst)
And get:
(11 12 13)
I have an easier time reading this than the more conventional
    (map (lambda (arg) (+ num arg)) lst)
or
(map + (make-list (length lst) num) lst)
Two questions:
* As a Scheme newbie, am I overlooking important reasons for the restriction on `map`?
* Does something like `my-map` already exist in Scheme or in the SRFIs? I did take a look at srfi-42, but either it's not what I'm looking for, or it was, and it wasn't obvious.
Answer: First, note that `map` _does_ allow empty lists, but of course if there's one
empty list then all of them should be empty.
Second, have a look at the [srfi-1 version of `map`](http://docs.racket-
lang.org/srfi-std/srfi-1.html#map) \-- it is specifically different from the
R5RS version as follows:
> This procedure is extended from its R5RS specification to allow the
> arguments to be of unequal length; it terminates when the shortest list runs
> out.
Third, most Scheme programmers would very much prefer
(map (lambda (arg) (+ num arg)) lst)
My guess is that Scheme is different from Python in a way that makes `lambda`
expressions become more and more readable as you get used to the language.
And finally, there are some implementations that come with some form of a list
comprehension. For example, in [Racket](http://racket-lang.org/) you can
write:
(for/list ([arg lst]) (+ num arg))
|
Searching CSV Files (Python)
Question: I've made this CSV file up to play with.. From what I've been told before, I'm
pretty sure this CSV file is valid and can be used in this example.
Basically I have this CSV file 'book_list.csv':
name,author,year
Lord of the Rings: The Fellowship of the Ring,J. R. R. Tolkien,1954
Nineteen Eighty-Four,George Orwell,1984
Lord of the Rings: The Return of the King,J. R. R. Tolkien,1954
Animal Farm,George Orwell,1945
Lord of the Rings: The Two Towers, J. R. R. Tolkien, 1954
And I also have this text file 'search_query.txt', whereby I put in keywords
or search terms I want to search for in the CSV file:
Lord
Rings
Animal
I've currently come up with some code (with the help of stuff I've read) that
allows me to count the number of matching entries. I then have the program
write a separate CSV file 'results.csv' which just returns either 'Matching'
or ' '.
The program then takes this 'results.csv' file and counts how many 'Matching'
results I have and it prints the count.
import csv
import collections
f1 = file('book_list.csv', 'r')
f2 = file('search_query.txt', 'r')
f3 = file('results.csv', 'w')
c1 = csv.reader(f1)
c2 = csv.reader(f2)
c3 = csv.writer(f3)
input = [row for row in c2]
for booklist_row in c1:
row = 1
found = False
for input_row in input:
results_row = []
if input_row[0] in booklist_row[0]:
results_row.append('Matching')
found = True
break
row = row + 1
if not found:
results_row.append('')
c3.writerow(results_row)
f1.close()
f2.close()
f3.close()
d = collections.defaultdict(int)
with open("results.csv", "rb") as info:
reader = csv.reader(info)
for row in reader:
for matches in row:
matches = matches.strip()
if matches:
d[matches] += 1
results = [(matches, count) for matches, count in d.iteritems() if count >= 1]
results.sort(key=lambda x: x[1], reverse=True)
for matches, count in results:
print 'There are', count, 'matching results'+'.'
In this case, my output returns:
There are 4 matching results.
I'm sure there is a better way of doing this and avoiding writing a completely
separate CSV file.. but this was easier for me to get my head around.
My question is, this code that I've put together only returns how many
matching results there are.. how do I modify it in order to return the ACTUAL
results as well?
i.e. I want my output to return:
There are 4 matching results.
Lord of the Rings: The Fellowship of the Ring
Lord of the Rings: The Return of the King
Animal Farm
Lord of the Rings: The Two Towers
As I said, I'm sure there's a much easier way to do what I already have.. so
some insight would be helpful. :)
Cheers!
EDIT: I just realized that if my keywords were in lower case, it won't work..
is there a way to avoid case-sensitivity?
Answer: 1. Throw away the query file and get your search terms from sys.argv[1:] instead.
2. Throw away your output file and use sys.stdout instead.
3. Append matched booklist titles to a result_list. The result_row that you currently have has a rather misleading name. The count that you want is `len(result_list)`. Print that. Then print the contents of result_list.
4. Convert your query words to lowercase once (before you start reading the input file). As you read each book_list row, convert its title to lowercase. Do your matching with the lowercase query words and the lowercase title.
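A minimal sketch tying those four points together (assuming the same
`book_list.csv` layout as above, with search terms passed on the command line):

    import csv
    import sys

    query_words = [w.lower() for w in sys.argv[1:]]
    result_list = []

    with open('book_list.csv', 'rb') as f:
        reader = csv.reader(f)
        reader.next()  # skip the header row
        for row in reader:
            title = row[0]
            if any(w in title.lower() for w in query_words):
                result_list.append(title)

    print 'There are', len(result_list), 'matching results.'
    for title in result_list:
        print title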
|
getting error with execute many in python
Question: I am learning Python and I am a newbie. I am trying to use functions with MySQL
and Python and I am getting errors.
This is my script
import MySQLdb
def insert_values(cursor, values):
#cursor = self.connection.cursor()
cursor.executemany("""
insert into pythontest (name1,name2,name3)
values (%s, %s, %s)""", values)
cursor.close()
db = MySQLdb.connect("localhost","root","root","python" )
cursor = db.cursor()
var1 = ['name1','name2','name3']
insert_values(cursor,var1)
db.close()
There may be many errors because I am learning.
> 1) I don't know whether I should pass the db object or just the cursor into
> the function, because I have to call that function many times in a for loop.
>
> 2) Is the syntax of the values list OK to go into the database?
ERRORS
File "mysql.py", line 10, in insert_values
values (%s, %s, %s)""", values)
File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 216, in executemany
File "build/bdist.linux-i686/egg/MySQLdb/connections.py", line 36, in defaulterrorhandler
_mysql_exceptions.ProgrammingError: not enough arguments for format string
Answer: `executemany` expects a sequence of parameter rows, so wrap the single row in a list:

    cursor.executemany("""
        insert into pythontest (name1,name2,name3)
        values (%s, %s, %s)""", [values])  # [values] is a list holding one 3-item parameter row
|
Memory leak while looping web.client.getPage function
Question: I have a page that refreshes periodically using this script:
from twisted.web.client import getPage
from twisted.internet import reactor, task
def getData():
dgp = getPage('http://www.google.com/')
dgp.addCallback(dataLoadOK)
dgp.addErrback(dataLoadError)
def dataLoadOK(value):
print value
def dataLoadError(error):
print error
loop = task.LoopingCall(getData)
loop.start(10, now=True)
reactor.run()
But using this approach I get a memory leak. Can anyone help me find where it
is?
**Edit:** I have tried using this [Python memory leak detector
module](http://teethgrinder.co.uk/perm.php?a=Python-memory-leak-detector), and
got this output:
GARBAGE OBJECTS:
:: <HTTPClientFactory: http://www.google.com/>
type: <type 'instance'>
referrers: 3
is class: True
module: <module 'twisted.web.client' from '/usr/lib/python2.7/site-packages/twisted/web/client.pyc'>
:: {'status': '200', 'cookies': {'PREF': 'ID=d894e510f2ebe263:FF=0:TM=1306053252:LM=1306053252:S=ebpb4ZebRUu_EhiI', 'NID': '47=LxM9fbBBN-bVIeuLPOfvO-fgXOKw1n2suyZ2...
type: <type 'dict'>
referrers: 3
is class: True
module: None
:: InsensitiveDict({})
type: <type 'instance'>
referrers: 3
is class: True
module: <module 'twisted.python.util' from '/usr/lib/python2.7/site-packages/twisted/python/util.pyc'>
:: {'preserve': 1, 'data': {}}
type: <type 'dict'>
referrers: 3
is class: True
module: None
:: <Deferred at 0x29e2cf8 current result: None>
type: <type 'instance'>
referrers: 3
is class: True
module: <module 'twisted.internet.defer' from '/usr/lib/python2.7/site-packages/twisted/internet/defer.pyc'>
:: {'_chainedTo': None, 'called': True, '_canceller': None, 'callbacks': [], 'result': None, '_runningCallbacks': False}
type: <type 'dict'>
referrers: 3
is class: True
module: None
:: <<class 'twisted.internet.tcp.Client'> to ('www.google.com', 80) at 2445090>
type: <class 'twisted.internet.tcp.Client'>
referrers: 3
is class: True
module: <module 'twisted.internet.tcp' from '/usr/lib/python2.7/site-packages/twisted/internet/tcp.pyc'>
line num: 681
line: class Client(BaseClient):
line: """A TCP client."""
line:
line: def __init__(self, host, port, bindAddress, connector, reactor=None):
line: # BaseClient.__init__ is invoked later
line: self.connector = connector
line: self.addr = (host, port)
line:
line: whenDone = self.resolveAddress
line: err = None
line: skt = None
line:
line: try:
line: skt = self.createInternetSocket()
line: except socket.error, se:
line: err = error.ConnectBindError(se[0], se[1])
line: whenDone = None
line: if whenDone and bindAddress is not None:
line: try:
line: skt.bind(bindAddress)
line: except socket.error, se:
line: err = error.ConnectBindError(se[0], se[1])
line: whenDone = None
line: self._finishInit(whenDone, skt, err, reactor)
line:
line: def getHost(self):
line: """Returns an IPv4Address.
line:
line: This indicates the address from which I am connecting.
line: """
line: return address.IPv4Address('TCP', *(self.socket.getsockname() + ('INET',)))
line:
line: def getPeer(self):
line: """Returns an IPv4Address.
line:
line: This indicates the address that I am connected to.
line: """
line: return address.IPv4Address('TCP', *(self.realAddress + ('INET',)))
line:
line: def __repr__(self):
line: s = '<%s to %s at %x>' % (self.__class__, self.addr, unsignedID(self))
line: return s
:: {'_tempDataBuffer': [], 'disconnected': 1, 'dataBuffer': '', '_tempDataLen': 0, 'realAddress': ('74.125.225.81', 80), 'connector': <twisted.internet.tcp.Connect...
type: <type 'dict'>
referrers: 3
is class: True
module: None
:: []
type: <type 'list'>
referrers: 3
is class: True
module: None
:: {'x-xss-protection': ['1; mode=block'], 'set-cookie': ['PREF=ID=d894e510f2ebe263:FF=0:TM=1306053252:LM=1306053252:S=ebpb4ZebRUu_EhiI; expires=Tue, 21-May-2013 0...
type: <type 'dict'>
referrers: 3
is class: True
module: None
:: ['-1']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: ['private, max-age=0']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: ['text/html; charset=ISO-8859-1']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: ['PREF=ID=d894e510f2ebe263:FF=0:TM=1306053252:LM=1306053252:S=ebpb4ZebRUu_EhiI; expires=Tue, 21-May-2013 08:34:12 GMT; path=/; domain=.google.com', 'NID=47=LxM9...
type: <type 'list'>
referrers: 3
is class: True
module: None
:: ['gws']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: ['1; mode=block']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: []
type: <type 'list'>
referrers: 3
is class: True
module: None
:: <twisted.internet.tcp.Connector instance at 0x29e2cb0>
type: <type 'instance'>
referrers: 3
is class: True
module: <module 'twisted.internet.tcp' from '/usr/lib/python2.7/site-packages/twisted/internet/tcp.pyc'>
:: ['Sun, 22 May 2011 08:34:12 GMT']
type: <type 'list'>
referrers: 3
is class: True
module: None
:: {'reactor': <twisted.internet.selectreactor.SelectReactor object at 0x288bd10>, 'state': 'disconnected', 'factoryStarted': 0, 'bindAddress': None, 'factory': <H...
type: <type 'dict'>
referrers: 3
is class: True
module: None
So I see some uncollected references inside Twisted's internals; how can I avoid them?
Answer: Try some strategies recommended in [related
questions](http://stackoverflow.com/questions/110259/python-memory-
profiler/110826). However, it is likely that you don't have a memory leak, you
just have [memory
fragmentation](http://stackoverflow.com/questions/2100192/how-to-find-the-
source-of-increasing-memory-usage-of-a-twisted-server).
It looks like the "Python memory leak detector" has a pretty severe bug. It
enables `DEBUG_LEAK` which **prevents all cycles from being collected**. Put
another way, it _creates_ lots of massive leaks. If you just add some code to
your example to report the contents of `gc.garbage` without enabling
`DEBUG_LEAK`, then it remains empty (`gc.garbage` will be populated if any
objects are actually leaking, even if you don't enable any gc debug flags).
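To check for yourself, a minimal sketch that inspects `gc.garbage` with the
debug flags left off:

    import gc

    gc.set_debug(0)  # ensure DEBUG_LEAK / DEBUG_SAVEALL are not set
    gc.collect()
    print len(gc.garbage)  # stays 0 unless objects are genuinely uncollectable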
|
import C++ module, if fail: import Python version?
Question: I have a Python pyd module written in C++. I also have a Python version of the
module (which is much slower). My question is, as I want the program to run on
multiple OSs, can I try to import the C++ version in the program and import
the slower Python version if that one fails (other OS, architecture)?
Answer: Yes, you can import some thing like this:
try:
import CppModule as Module
except ImportError:
import PurePythonModule as Module
|
Fix invalid XML with ampersands in Python
Question: I am using Python to manipulate an XML file I receive from another system.
That system produces invalid XML. Mainly, it doesn't escape some of the & in
the XML.
So, for example, I have some lines like that:
<IceCream>Ben&Jerry</IceCream>
Of course, when parsed with SAX or DOM it throws invalid token error.
For some more general background - it's a very large file (2MB), fairly flat,
and contains a lot of data in CDATA.
What I've tried:
1. Writing a **Regex** to replace only unescaped &, without re-escaping > and such: `&(?!\w{2,4};)` . It fixed it, but it escaped ampersands in CDATA, which then caused errors in a destination system. I can't unescape everything that's in CDATA afterwards because some of it needs to stay escaped.
2. Using **Beautiful (Stone) Soup**. Also no luck. Instead of escaping loose ampersands, it created an entity (i.e. `&Jerry;`). Not good.
Next Step will be to write my own parser using a state machine. Save me from
going down that road.
It is not a complex structure (very flat, 4 layers deep at most) so perhaps
regex might be able to catch areas that aren't in a CDATA.
Many thanks.
Answer: Use the Python bindings for [tidylib](https://pypi.python.org/pypi/pytidylib):
>>> import tidylib
>>> print tidylib.tidy_document("<IceCream>Ben&Jerry</IceCream>", {"input_xml": True})[0]
    <IceCream>Ben&amp;Jerry</IceCream>
See the official tidy documentation for a list of [parser
options](http://tidy.sourceforge.net/docs/quickref.html).
|
Nothing except "None" returned for my Python web.py Facebook app when I turn on "OAuth 2.0 for Canvas"
Question: I am a beginning Facebook app developer, but I'm an experienced developer. I'm
using web.py as my web framework, and to make matters a bit worse, I'm new to
Python.
I'm running into an issue, where when I try to switch over to using the newer
"OAuth 2.0 for Canvas", I simply can't get anything to work. The only thing
being returned in my Facebook app is "None".
My motivation for turning on OAuth 2.0 is that it sounds like Facebook is
going to force it by July, and I might as well learn it now and not have to
rewrite it in a few weeks.
I turned on "OAuth 2.0 for Canvas" in the Advanced Settings, and rewrote my
code to look for "signed_request" that is POSTed to my server whenever my test
user tries to access my app.
My code is the following (I've removed debugging statements and error checking
for brevity):
#!/usr/bin/env python
import base64
import web
import minifb
import urllib
import json
FbApiKey = "AAAAAA"
FbActualSecret = "BBBBBB"
CanvasURL = "http://1.2.3.4/fb/"
RedirectURL="http://apps.facebook.com/CCCCCCCC/"
RegURL = 'https://graph.facebook.com/oauth/authorize?client_id=%s&redirect_uri=%s&type=user_agent&display=page' % (FbApiKey, RedirectURL)
urls = (
'/fb/', 'index',
)
app = web.application(urls, locals())
def authorize():
args = web.input()
signed_request = args['signed_request']
#split the signed_request via the .
strings = signed_request.split('.')
hmac = strings[0]
encoded = strings[1]
#since uslsafe_b64decode requires padding, add the proper padding
numPads = len(encoded) % 4
encoded = encoded + "=" * numPads
unencoded = base64.urlsafe_b64decode(str(encoded))
#convert signedRequest into a dictionary
signedRequest = json.loads(unencoded)
try:
#try to find the oauth_token, if it's not there, then
#redirect to the login page
access_token = signedRequest['oauth_token']
print(access_token)
except:
print("Access token not found, redirect user to login")
redirect = "<script type=\"text/javascript\">\ntop.location.href=\"" +_RegURL + "\";\n</script>"
print(redirect)
return redirect
# Do something on the canvas page
returnString = "<html><body>Hello</body></html>"
print(returnString)
class index:
def GET(self):
authorize()
def POST(self):
authorize()
if __name__ == "__main__":
app.run()
For the time being, I want to concentrate on the case where the user is
already logged in, so assume that oauth_token is found.
My question is: Why is my "Hello" not being outputted, and instead all I see
is "None"?
It appears that I'm missing something very fundamental, because I swear to
you, I've scoured the Internet for solutions, and I've read the Facebook pages
on this many times. Similarly, I've found many good blogs and stackoverflow
questions that document precisely how to use OAuth 2.0 and signed_request. But
the fact that I am getting a proper oauth_token, but my only output is "None"
makes me think there is something fundamental that I'm doing incorrectly. I
realize that "None" is a special word in python, so maybe that's the cause,
but I can't pin down exactly what I'm doing wrong.
When I turn off OAuth 2.0, and revert my code to look for the older POST data,
I'm able to easily print stuff to the screen.
Any help on this would be greatly appreciated!
Answer: How embarrassing!
My authorize function returns a string, but since the handler methods in class
index are what web.py actually calls, the string needs to be returned from
those methods as well, not just from authorize. If I return authorize()'s
return value, it works.
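In code, the fix is just to propagate the return value out of the handler
methods (and to make sure authorize() ends with `return returnString` instead
of only printing it):

    class index:
        def GET(self):
            return authorize()
        def POST(self):
            return authorize()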
|
Are Mixin class __init__ functions not automatically called in python?
Question: I'd like to use a Mixin to always add some init functionality to my child
classes which each inherit from different API base classes. Specifically, I'd
like to make multiple different child classes that inherit from one of these
different API-supplied base classes and the one Mixin, which will always have
the Mixin initialization code executed in the same way, without code
replication. However, it seems that the __init__ function of the Mixin class
never gets called unless I explicitly call it in the Child class's __init__
function, which is less than ideal. I've built up a simple test case:
class APIBaseClassOne(object):
def __init__(self, *args, **kwargs):
print (" base ")
class SomeMixin(object):
def __init__(self, *args, **kwargs):
print (" mixin before ")
super(SomeMixin, self).__init__(*args, **kwargs)
print (" mixin after ")
class MyClass(APIBaseClassOne):
pass
class MixedClass(MyClass, SomeMixin):
pass
As you can see in the following output, the Mixin function's init never gets
called:
>>> import test
>>> test.MixedClass()
base
<test.MixedClass object at 0x1004cc850>
Is there a way to do this (have an init function in a Mixin get called)
without writing every child class to explicitly invoke the Mixin's init
function? (i.e., without having to do something like this in every class:)
class MixedClass(MyClass, SomeMixin):
def __init__(*args, **kwargs):
SomeMixin.__init__(self, *args, **kwargs)
MyClass.__init__(self, *args, **kwargs)
Btw, if all my child classes were inheriting from the same base class, I realize I
could create a new middle class that inherits from the base class and the
mixin and keep it DRY that way. However, they inherit from different base
classes with common functionality. (Django Field classes, to be precise).
Answer: Sorry I saw this so late, but
class MixedClass2(SomeMixin, MyClass):
pass
>>> m = MixedClass2()
mixin before
base
mixin after
The pattern @Ignacio is talking about is called cooperative multiple
inheritance, and it's great. But if a base class isn't interested in
cooperating, make it the second base, and your mixin the first. The mixin's
`__init__()` (and anything else it defines) will be checked before the base
class, following Python's
[MRO](http://www.python.org/download/releases/2.3/mro/).
This should solve the general question, though I'm not sure it handles your
specific use. Base classes with custom metaclasses (like Django models) or
with strange decorators (like @martineau's answer ;) can do crazy things.
|
invalid syntax in python while trying to plot
Question: After my blunder with the infinity factorial sum XD I redid the code, but I
keep getting syntax error :\
from scitools.std import *
from math import factorial, cos, e
from scipy import *
import numpy as np
def f1(t):
return 0.5*(1 + sum( (a**(2*n)*cos(2*sqrt(1 + n)*t))/(e**a**2*factorial(n)) for n in range(0,100)))
a=4
t = linspace(0, 35, 1000)
y1 = f1(t)
plot(t, y1)
xlabel(r'$\tau$')
ylabel(r'P($\tau$)')
legend(r'P($\tau$)')
axis([0.0, 35.0, 0.0, 1.0])
grid(True)
show()
It says that there's an error in my program: invalid syntax and the a is red
:\
What's wrong now? :(
EDIT:
I've added another ) at the end of the sum, but now I keep getting a huge error:
Traceback (most recent call last):
File "D:\faxstuff\3.godina\kvantna\vježbe\qm2\v8\plot.py", line 12, in <module>
y1 = f1(t)
File "D:\faxstuff\3.godina\kvantna\vježbe\qm2\v8\plot.py", line 8, in f1
return 0.5*(1 + sum( (a**(2*n)*cos(2*sqrt(1 + n)*t))/(e**a**2*factorial(n)) for n in range(0,100)))
File "C:\Python26\lib\site-packages\numpy\core\fromnumeric.py", line 1415, in sum
res = _sum_(a)
File "D:\faxstuff\3.godina\kvantna\vježbe\qm2\v8\plot.py", line 8, in <genexpr>
return 0.5*(1 + sum( (a**(2*n)*cos(2*sqrt(1 + n)*t))/(e**a**2*factorial(n)) for n in range(0,100)))
TypeError: unsupported operand type(s) for /: 'numpy.ndarray' and 'numpy.float64'
Should I make it so that the sum expression gives back an array that I can
plot, or is something else going on?
Answer: The parentheses in `return 0.5*(1 + sum(...` are not balanced.
As a matter of style, I would recommend avoiding `from <module> import
*`-style imports. In your specific example you end up with a single-letter
variable name (`e`) imported into the global namespace, which you then proceed
to use. This could lead to confusion and, worse, hard-to-diagnose bugs.
|
Checking for interactive shell in a Python script
Question: I need to determine whether the shell which invoked my Python script was in
interactive mode or not. If it was in interactive mode, the program should
pipe output to less(1) for easy reading. If not, it should simply print its
output to stdout, to allow it to be piped away to a printer, file, or a
different pager.
In a shell script, I would have checked if the prompt variable $PS1 was
defined, or looked for the -i option among the flags stored in the $-
variable.
What is the preferred method for testing interactivity from within Python?
Answer: This often works well enough:
import os, sys
if os.isatty(sys.stdout.fileno()):
...
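Building on that, a minimal sketch of the pager behaviour described in the
question (assuming `less` is on the PATH):

    import os
    import subprocess
    import sys

    def show(text):
        if os.isatty(sys.stdout.fileno()):
            # interactive: pipe the output through less for easy reading
            pager = subprocess.Popen(['less'], stdin=subprocess.PIPE)
            pager.communicate(text)
        else:
            # non-interactive: write straight to stdout so it can be piped on
            sys.stdout.write(text)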
|
Python Shelve Module Memory Consumption
Question: I have been assigned the task of reading a .txt file which is a log of various
events and writing some of those events into a dictionary.
The problem is that the file can sometimes get bigger than 3GB in size. This
means that the dictionary gets too big to fit into main memory. It seems that
[Shelve](http://docs.python.org/library/shelve.html#shelve-example) is a good
way to solve this problem. However, since I will be constantly modifying the
dictionary, I must have the `writeback` option enabled. This is where I am
concerned - the tutorial says that this would slow down the read/write process
and use more memory, but I am unable to find statistics on how the speed and
memory are affected.
Can anyone clarify by how much the read/write speed and memory are affected so
that I can decide whether to use the writeback option or sacrifice some
readability for code efficiency?
Thank you
Answer: For databases this size, shelve really is the wrong tool. If you do not need a
_highly available_ client/server architecture, and you just want to convert
your TXT file to a local in-memory-accessible database, you really should be
using [ZODB](http://www.zodb.org/en/latest/).
If you need something highly-available, you will of course need to switch to a
formal "NoSQL" database, of which there are many to choose from.
Here's a simple example of how to convert your shelve database to a ZODB
database which will solve your memory usage / performance problems.
#!/usr/bin/env python
import shelve
import ZODB, ZODB.FileStorage
import transaction
from optparse import OptionParser
import os
import sys
import re
reload(sys)
sys.setdefaultencoding("utf-8")
parser = OptionParser()
parser.add_option("-o", "--output", dest = "out_file", default = False, help ="original shelve database filename")
parser.add_option("-i", "--input", dest = "in_file", default = False, help ="new zodb database filename")
parser.set_defaults()
options, args = parser.parse_args()
if options.in_file == False or options.out_file == False :
print "Need input and output database filenames"
exit(1)
db = shelve.open(options.in_file, writeback=True)
zstorage = ZODB.FileStorage.FileStorage(options.out_file)
zdb = ZODB.DB(zstorage)
zconnection = zdb.open()
newdb = zconnection.root()
for key, value in db.iteritems() :
print "Copying key: " + str(key)
newdb[key] = value
transaction.commit()
|
Django and architecture: how to share a "referential" database between projects?
Question: I come today with a design/architecture question concerning Django.
I work on several websites (hosted on the same server) which individually need
geographical data (states, towns, etc.). Each project contains apps, and each
app may contain models with `ForeignKey` fields to Town or State.
In order to not repeat myself, I want to **build a database to store these
towns and states, and to use it across Django projects**.
Django provides a straightforwards way to use several databases in the same
project, declaring it in the `settings.py` file and writing routers classes to
hold reading and writing stuff. But that way, impossible to use
`select_related` statement like :
job = get_object_or_404(Jobs.objects.select_related('state__town'), user=user)
This behaviour is just natural to me (impossible to make joins between
databases, from scratch)...
My questions :
* Is it a good idea to consider introducing **dblinks** (I don't think so...) and can Django handle it (I didn't find any docs for this part) ?
* How would you proceed, facing this situation ?
A quick and dirty solution would be to import all geo data (towns, states...)
in each project database, but it's not DRY at all :( :
python manage.py loaddata geo.json
Another solution may be to build a separated "geo" app, which could "serve" (I
don't know how) the data to other projects... Actually, I tried _GeoDjango_ ,
but it seems to be really complex, and it probably won't answer my question !
Thank you very much in advance for your answers !
Answer: Depending upon how static this data is, the simplest way might be to just
define these towns and states in Python once and then import this definition
in all of your separate projects:
# locations.py
STATES = (('S1', 'State 1'), ('S2', 'State 2'))
TOWNS = (('T1', 'Town 1'), ('T2', 'Town 2'))
And then, instead of using a ForeignKey, you can use a CharField specifying
the choices kwarg:
    # app/models.py
    from django.db import models
    import locations # it's on the path somewhere!
    class MyModel(models.Model):
        state = models.CharField(max_length=5, choices=locations.STATES)
        town = models.CharField(max_length=5, choices=locations.TOWNS)
This approach is not very easy to update, and it does not record the
relationship between towns and states (i.e. a town is in one state); however,
it is dead simple.
|
How to send a value from Arduino to Python and then use that value
Question: I am in the process of building a robot that is remote controlled using Python
to send control messages via the Internet through a simple GUI.
I have gotten part of my code working pretty well, the GUI and control
systems, but I am stuck. I am trying to use a parallax ping sensor to get
distance to objects information from an [Arduino
Mega](http://arduino.cc/en/Main/ArduinoBoardMega), and send that value to my
Python control script to be displayed on the remote GUI.
The main problem that I am having is how to integrate Python code that will
use the already-established COM port with the Arduino, send a message telling
the Arduino to poll the ping sensor, and then receive the resulting value back
in Python so I can insert it into my GUI.
I already have this code to control the Arduino, and it works, with my simple
GUI.
import serial
ser = serial.Serial('/dev/ttyUSB0', 9600)
from PythonCard import model
class MainWindow(model.Background):
def on_SpdBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
def on_FBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
ser.write('@')
ser.write('F')
ser.write(chr(spd))
def on_BBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
ser.write('@')
ser.write('B')
ser.write(chr(spd))
def on_LBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
ser.write('@')
ser.write('L')
ser.write(chr(spd))
def on_RBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
ser.write('@')
ser.write('R')
ser.write(chr(spd))
def on_SBtn_mouseClick(self, event):
spd = self.components.SpdSpin.value
ser.write('@')
ser.write('S')
ser.write('0')
def on_PngDisBtn_mouseClick(self, event):
ser.write('~')
ser.write('P1')
ser.write('p2')
app = model.Application(MainWindow)
app.MainLoop()
What I would really like to do is improve the above code and add a button to
click to tell Python to send a message to the Arduino to check the ping sensor
and return the value. I am very literate with the Arduino code, but I just
started playing with Python in the last two weeks.
Answer: Basically, you'd just send a suitable command to the Arduino, much like you're
already doing, but then wait for the Arduino to send something back; the
python end of it might look something like this
ser.write('foo')
retval = ser.readline() # read a complete line (\r\n or \n terminated),
#or you could use read(n) where n is the number of bytes you want (default=1)
ping_data = retval.strip() # strip out the newline, if you read an entire line
Of course, that'll get you a string; you'll probably want to convert it to an
int or float in order to use it in calculations later. Use int(ping_data) or
float(ping_data) for strings, or struct.unpack in case it's a byte sequence
that needs unpacking to something sane first - it all depends on how you
represent the sensor data.
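For example (both lines assume particular Arduino-side formats, so adjust to
whatever your sketch actually sends):

    distance = float(ping_data)  # if the sketch prints a decimal string like "123.4"

    # or, if the sketch writes the reading as a raw little-endian unsigned 16-bit value:
    import struct
    (distance,) = struct.unpack('<H', ser.read(2))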
|
How to save an Image using URL in python/django
Question: If you were to save an image using its URL, how would you do it?
Also, how do I give the image a unique file name while saving it?
response = urllib.urlopen(image_url)
file_name = ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(10))
f = open('/media/images/temp/'+file_name, "wb")
f.write(response.read())
f.close()
It throws no error, nor does it save the file... I'm new to this and have no
clue what is going wrong :|
Answer:
import urllib
import string
import random
import os
filename_charset = string.ascii_letters + string.digits
filename_length = 10
file_save_dir = '/home/user/download/'
filename = ''.join(random.choice(filename_charset)
for s in range(filename_length))
urllib.urlretrieve ("http://www.example.com/image.png",
os.path.join(file_save_dir, filename + '.png'))
|
Sql statement with like from variable
Question: I'm executing this code in python
from sqlite3 import dbapi2 as sqlite
con = sqlite.connect("db.sqlite")
cur = con.cursor()
surname = "'%atton%'"
cur.execute("select id from singers where surname like :surname", locals())
cur.close()
con.close()
After this code `cur.rowcount == -1` but Patton is in the database.
Is my SQL statement bad?
thank you
Answer: The DB-API parameterization you use (which you _should_ use, don't change
that) means the surname will automatically be quoted or escaped appropriately.
You should remove the inner set of quotes from your `surname` string.
surname = "%atton%"
cur.execute("select id from singers where surname like :surname",
dict(surname=surname))
|
File locks in SQLite
Question: I'm writing my first SQLAlchemy (0.6.8)/Python (2.7.1) program, sitting on top
of SQLite (3.7.6.3, I think), running on Windows Vista.
In order to perform unit-testing, I am pointing SQLite to a test database, and
my unit-test scripts routinely delete the database file, so I am continuously
working with a known initial state.
Sometimes my (single-threaded) unit-tests fail to remove the file:
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
The only process that uses the file is the unit-test harness. Clearly, some
lock is not being released by one of my completed unit-tests, preventing the
next unit-test in the same process from deleting the file.
I have searched all the places I have created a session and confirmed there is
a corresponding session.commit() or session.rollback().
I have searched for all session.commit() and session.rollback() calls in my
code, and added a session.close() call immediately afterwards, in an attempt
to explicitly release any transactional locks, but it hasn't helped.
Are there any secrets to ensuring the remaining locks are removed at the end
of a transaction to permit the file to be deleted?
Answer: Someone had a similar problem: <http://www.mail-
archive.com/sqlalchemy@googlegroups.com/msg20724.html>
You should use a NullPool at connection establishment to ensure that no
active connections stay open after `session.close()`:
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool
to_engine = create_engine('sqlite:///%s' % temp_file_name, poolclass=NullPool)
Reference:
<http://www.sqlalchemy.org/docs/06/core/pooling.html?highlight=pool#sqlalchemy.pool>
This is only required in SQLAlchemy prior to 0.7.0. After 0.7.0, this became
the default behaviour for SQLite. Reference:
<http://www.sqlalchemy.org/docs/core/pooling.html?highlight=pool#sqlalchemy.pool>
|
Why pass string that represents an object instead passing the object?
Question: The code below is from `Django`'s `settings.py`. Since everything in `Python` is
an object, why not just pass installed apps as a tuple of objects:
INSTALLED_APPS = (
django.contrib.auth,
django.contrib.contenttypes,
.....
Is there some strong reason why they do this:
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
.....
instead?
I'm new to `Python` and `Django`, so don't judge me too much, please.
Answer: Passing a module or class name as a string is a typical solution for avoiding
circular import errors.
For example, if the settings file contained `import myapp.models` and myapp.models
contains `from django.conf import settings`, this would lead to a circular
import.
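The string form lets the framework resolve the module only once everything else
has loaded; a minimal sketch using the stdlib (Django has its own import
helper, but the idea is the same):

    import importlib

    def resolve(dotted_path):
        # turn 'django.contrib.auth' into the actual module object on demand
        return importlib.import_module(dotted_path)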
|
In Python, how can I detect whether the computer is on battery power?
Question: I'm playing around with pygame, and one thing I'd like to do is reduce the
number of frames per second when the computer is on battery power (to lower
the CPU usage and extend battery life).
How can I detect, from Python, whether the computer is currently on battery
power?
I'm using Python 3.1 on Windows.
Answer: If you want to do it without `win32api`, you can use the built-in
[`ctypes`](http://docs.python.org/library/ctypes.html) module. I usually run
CPython without `win32api`, so I kinda like these solutions.
It's a tiny bit more work for `GetSystemPowerStatus()` because you have to
define the `SYSTEM_POWER_STATUS` structure, but not bad.
# Get power status of the system using ctypes to call GetSystemPowerStatus
import ctypes
from ctypes import wintypes
class SYSTEM_POWER_STATUS(ctypes.Structure):
_fields_ = [
('ACLineStatus', wintypes.BYTE),
('BatteryFlag', wintypes.BYTE),
('BatteryLifePercent', wintypes.BYTE),
('Reserved1', wintypes.BYTE),
('BatteryLifeTime', wintypes.DWORD),
('BatteryFullLifeTime', wintypes.DWORD),
]
SYSTEM_POWER_STATUS_P = ctypes.POINTER(SYSTEM_POWER_STATUS)
GetSystemPowerStatus = ctypes.windll.kernel32.GetSystemPowerStatus
GetSystemPowerStatus.argtypes = [SYSTEM_POWER_STATUS_P]
GetSystemPowerStatus.restype = wintypes.BOOL
status = SYSTEM_POWER_STATUS()
if not GetSystemPowerStatus(ctypes.pointer(status)):
raise ctypes.WinError()
print 'ACLineStatus', status.ACLineStatus
print 'BatteryFlag', status.BatteryFlag
print 'BatteryLifePercent', status.BatteryLifePercent
print 'BatteryLifeTime', status.BatteryLifeTime
print 'BatteryFullLifeTime', status.BatteryFullLifeTime
On my system that prints this (basically meaning "desktop, plugged in"):
ACLineStatus 1
BatteryFlag -128
BatteryLifePercent -1
BatteryLifeTime 4294967295
BatteryFullLifeTime 4294967295
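Tying this back to the question: `ACLineStatus` is 0 on battery, 1 on AC power,
and 255 when unknown, so the check reduces to:

    on_battery = (status.ACLineStatus == 0)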
|
Correct way to write line to file in Python
Question: I'm used to doing `print >>f, "hi there"`
However, it seems that `print >>` is getting deprecated. What is the
recommended way to do the line above?
**Update** : Regarding all those answers with `"\n"`...is this universal or
Unix-specific? IE, should I be doing `"\r\n"` on Windows?
Answer: You should use the print() function, available with Python 2.6+:
from __future__ import print_function
print("hi there", file=f)
The alternative would be to use:
f = open('myfile','w')
f.write('hi there\n') # python will convert \n to os.linesep
f.close() # you can omit in most cases as the destructor will call it
Quoting from [Python
documentation](https://docs.python.org/2/tutorial/inputoutput.html#reading-
and-writing-files) regarding newlines:
> On output, if newline is None, any `'\n'` characters written are translated
> to the system default line separator, `os.linesep`. If newline is `''`, no
> translation takes place. If newline is any of the other legal values, any
> `'\n'` characters written are translated to the given string.
|
help with python ctypes and nvapi
Question: My end goal is to query NVAPI for gpu usage and other statistics in python.
See <http://developer.nvidia.com/nvapi>
from ctypes import WinDLL
nvapi = WinDLL("nvapi.dll")
print nvapi# <WinDLL 'nvapi.dll', handle 718a0000 at 27c0050>
print nvapi.nvapi_QueryInterface# <_FuncPtr object at 0x026D8E40>
print nvapi.nvapi_QueryInterface()# returns 0
print nvapi.NvAPI_Initialize# AttributeError: function 'NvAPI_Initialize' not found
print nvapi.NvAPI_SYS_GetChipSetInfo# AttributeError: function 'NvAPI_SYS_GetChipSetInfo' not found
Here is a copy of the header file available for download from the link above:
<http://paste.pound-python.org/show/7337/>
At this point, I am just trying to familiarize myself with the api... so what
am I doing wrong? I can't figure out how to call any of the functions listed
in the header file.
Answer: Are you sure it's a WinDLL? From the header file, it looks like a standard C
calling convention to me. Have you tried `CDLL` instead?
**EDIT** :
I see now. The header you pointed to isn't actually the interface for
`nvapi.dll` -- it is a wrapper around it that must be statically linked.
site](http://developer.nvidia.com/nvapi):
> **Use a Static Link with Applications**
>
> NvAPI cannot be dynamically linked to applications. You must create a static
> link to the library and then call NvAPI_Initialize(), which loads nvapi.dll
> dynamically.
>
> If the NVIDIA drivers are not installed on the system or nvapi.dll is not
> present when the application calls NvAPI_Initialize(), the call just returns
> an error. The application will still load.
I would guess that the actual calls in `nvapi.dll` are completely different
than the ones exposed in this wrapper library. I can't seem to find any
documentation on those though. Perhaps they are internal and change between
systems.
If you want to use this interface, I'm not really sure what the best solution
is. It's a static library and not a dynamic one, so ctypes wouldn't handle it
unless you wrapped it in another DLL. I'm not an expert at native code with
Python, so maybe someone else will have an easy fix. Sorry.
|
Building Python and more on missing modules
Question: I have another thread asking help on "missing zlib". With the nice help the
problem has been resolved (almost).
Now I am interested in building Python myself (on Ubuntu 10.10).
A few important questions have caught my attention:
1. After building Python (say 2.7.1), do I need to rebuild Python if I have missing modules?
2. Is there a way to find out what modules will be missing prior to building Python? Say sqlite3. I have sqlite3 installed for the system default (Python 2.6.6), and I can import that into Python 2.6.6 shell. Now I use pythonbrew to build 2.7.1, and in the shell I cannot import sqlite3 because _sqlite3 is not available. I am sure there are a few more important one missing which I need for future development (such as Django..).
I am willing to learn how to build without using
[pythonbrew](https://github.com/utahta/pythonbrew).
Please share with me your experience in building another version of Python,
and how would you address the problem of missing modules? Is there a practical
solution to building Python?
I have never bothered building one myself, so please bear with me. I am
beginning to realize the importance of learning and building one myself! Thank
you very much!
* * *
**EDIT**
First I thank you all of your inputs. They meant a lot. I did the building.
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _curses _curses_panel
_tkinter bsddb185 bz2
dbm gdbm readline
sunaudiodev _sqlite3
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
I dealt with the sqlite3 and readline entries by installing:
sudo apt-get install libreadline6 libreadline6-dev
sudo apt-get install libsqlite3-dev
I tried to import them, but still got "No module named xxx".
At [AskUbuntu](http://askubuntu.com/questions/45905/terminator-command-
history) I actually asked people how to get previous commands because I
couldn't use that feature when I am in Python 2.7.1 shell. I believe it's due
to readline.
[Readline](http://ubuntuforums.org/showpost.php?p=6774301&postcount=10)
I installed the Python-2.7.1 under this directory: /home/jwxie518/python27/
I looked into setup.py, I found the following lines:
# The sqlite interface
sqlite_setup_debug = False # verbose debug prints from this script?
# We hunt for #define SQLITE_VERSION "n.n.n"
# We need to find >= sqlite version 3.0.8
sqlite_incdir = sqlite_libdir = None
sqlite_inc_paths = [ '/usr/include',
'/usr/include/sqlite',
'/usr/include/sqlite3',
'/usr/local/include',
'/usr/local/include/sqlite',
'/usr/local/include/sqlite3',
]
All the paths listed above do not exist. So I guess I have to install sqlite3
manually? I got another reference
[here](http://gcxieblog.blog.163.com/blog/static/56837839200911105418606/)
(it's in Chinese, however)
# Download the latest and extract
# Go into the extracted directory
./configure --prefix=/home/jwxie518/python27/python
make && make install
# Then edit python-2.7 's setup.py before rebuild it
# Sample (add these two lines to the end....)
'~/share/software/python/sqlite-3.6.20/include',
'~/share/software/python/sqlite-3.6.20/include/sqlite3',
# Then rebuild python like how we did before
I went into my directory where I installed sqlite3. I found
**include/sqlite3.h** only. So I went back and check **/usr/include/**. I can
only find sqlite3.h too.
So what is going on here? Readline is also non-importable.
* * *
**3RD EDIT** I started everything over, except I didn't reinstall sqlite3.
# Extract Python-2.7.1
# cd into Python-2.7.1
# ./configure
make >make.out 2>&1
less make.out
make.out is here: <http://pastebin.com/raw.php?i=7k3BfxZQ>
I still couldn't import sqlite3. So I went into setup.py and made changes:
# We hunt for #define SQLITE_VERSION "n.n.n"
# We need to find >= sqlite version 3.0.8
sqlite_incdir = sqlite_libdir = None
sqlite_inc_paths = [ '/usr/include',
'/usr/include/sqlite',
'/usr/include/sqlite3',
'/usr/local/include',
'/usr/local/include/sqlite',
'/usr/local/include/sqlite3',
'/home/jwxie518/python-mod/include/sqlite',
'/home/jwxie518/python-mod/include/sqlite3',
]
Then again, ran everything over (this time I also did **make clean**)
Output is here: <http://pastebin.com/raw.php?i=8ZKgAcWn>
According to the output, I don't think the custom path is included.... (for
complete output please go to the link above and search for sqlite)
> build/temp.linux-i686-2.7/home/jwxie518/Python-2.7.1/Modules/_sqlite/util.o
> -L/usr/lib -L/usr/local/lib -Wl,-R/usr/lib -lsqlite3 -o
> build/lib.linux-i686-2.7/_sqlite3.so
I still cannot import sqlite3.
Thanks!
* * *
Thank you very much, Michael Dillon, for helping me out. Your tutorial was
neat and clear.
I solved the problem as soon as I realized whenever I tried Python-2.7.1, I
was actually using the one installed by Pythonbrew.
The moral of the story is read all the errors. I neglected the errors
generated by importing sqlite3. The one installed by Pythonbrew didn't have
sqlite3 support: the development package for sqlite3 was only installed after
Pythonbrew had already built Python-2.7.1.
Thanks.
Answer: Here is how to build Python and fix any dependencies. I am assuming that you
want this Python to be entirely separate from the Ubuntu release Python, so I
am specifying the --prefix option to install it all in /home/python27 using
the standard Python layout, i.e. site-packages instead of dist-packages.
1. Get the .tar.gz file into your own home directory.
2. tar zxvf Py*.tar.gz
3. cd Py*1
4. ./configure --prefix=/home/python27
5. make
6. make install
Step 5 is the important one. At the end, it will display a list of any modules
that could not be built properly. Often you can fix this by installing an
Ubuntu package, and rerunning make.
a. sudo apt-get install something-dev
b. make
It is pretty common to have a problem because you are missing the -dev addon
to some module or other. But sometimes you should start over like this:
a. make clean
b. ./configure --prefix=/home/python27
c. make
Starting over never hurts if you are unsure. An important note about step 6. I
am not using sudo on this command which means that you will need to have the
/home/python27 directory already created with the appropriate ownership.
Don't hesitate to try out `./configure --help |less` before building something
because there may be interesting options that you could use. One time on a
minimal distro I had to do --with-dbmliborder=gdbm:bdb in order to get gdbm
working. When you run ./configure, the last few lines will tell you where it
put the information that it learned. In the case of Python, Modules/Setup has
been useful to figure out how to get a module to build.
Another useful thing is to `make clean` and then run `make >make.out 2>&1` to
capture all the output from the full make process. Then, after it is complete,
use `less` or an editor to look for the details on a problem module such as
_sqlite. For instance, check all the -I options that are passed to `gcc`. If
the correct include directory is not on the list that would cause a problem.
You can edit `setup.py` to change the list of include directories.
In the past it was more common to have library problems that would be fixed by
logging out, logging in again, and running "sudo ldconfig" before doing a
complete rebuild.
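After `make install`, a quick sanity check run with the new interpreter tells
you immediately which extension modules actually built (the module list here is
just an example):

    #!/home/python27/bin/python
    # try importing each module we care about and report any failures
    for name in ('sqlite3', 'readline', 'bz2', 'zlib', 'curses'):
        try:
            __import__(name)
            print name, 'OK'
        except ImportError, e:
            print name, 'FAILED:', e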
|
Reraise HTTPError with extra info
Question: I want to catch a `urllib2.HTTPError` with extra information if it's a 404:
try:
data = urlopen(url)
except HTTPError, e: # Python 2.5 syntax
if e.code == 404:
raise HTTPError('data not found on remote')
else:
raise
but this doesn't work because `HTTPError`'s init takes multiple arguments,
which are undocumented. If it did work, it would lose the backtrace and the
original message. I also tried
if e.code == 404:
e.message = 'data not found on remote: %s' % e.message
raise
but that just re-raised the exception without extra information. What should I
do?
Answer: You just need to use `e.msg` rather than `e.message`. The script:
from urllib2 import urlopen, HTTPError
url = 'http://www.red-dove.com/frob'
try:
data = urlopen(url)
except HTTPError, e: # Python 2.5 syntax
if e.code == 404:
e.msg = 'data not found on remote: %s' % e.msg
raise
prints
Traceback (most recent call last):
File "c:\temp\test404.py", line 6, in <module>
data = urlopen(url)
File "C:\Python\Lib\urllib2.py", line 124, in urlopen
return _opener.open(url, data)
File "C:\Python\Lib\urllib2.py", line 387, in open
response = meth(req, response)
File "C:\Python\Lib\urllib2.py", line 498, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python\Lib\urllib2.py", line 425, in error
return self._call_chain(*args)
File "C:\Python\Lib\urllib2.py", line 360, in _call_chain
result = func(*args)
File "C:\Python\Lib\urllib2.py", line 506, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: data not found on remote: Not Found
You can of course tidy this up with an enclosing try/except:
from urllib2 import urlopen, HTTPError
url = 'http://www.red-dove.com/frob'
try:
try:
data = urlopen(url)
except HTTPError, e: # Python 2.5 syntax
if e.code == 404:
e.msg = 'data not found on remote: %s' % e.msg
raise
except HTTPError, e:
print e
which prints simply
HTTP Error 404: data not found on remote: Not Found
The exception has all of the original detail: `e.__dict__` looks like
{'__iter__': <bound method _fileobject.__iter__ of <socket._fileobject object at 0x00AF2EF0>>,
'code': 404,
'fileno': <bound method _fileobject.fileno of <socket._fileobject object at 0x00AF2EF0>>,
'fp': <addinfourl at 12003088 whose fp = <socket._fileobject object at 0x00AF2EF0>>,
'hdrs': <httplib.HTTPMessage instance at 0x00B727B0>,
'headers': <httplib.HTTPMessage instance at 0x00B727B0>,
'msg': 'data not found on remote: Not Found',
'next': <bound method _fileobject.next of <socket._fileobject object at 0x00AF2EF0>>,
'read': <bound method _fileobject.read of <socket._fileobject object at 0x00AF2EF0>>,
'readline': <bound method _fileobject.readline of <socket._fileobject object at 0x00AF2EF0>>,
'readlines': <bound method _fileobject.readlines of <socket._fileobject object at 0x00AF2EF0>>,
'url': 'http://www.red-dove.com/frob'}
|
Writing RDF/XML file from rdf Triples in rdflib
Question: I have RDF triples with me, and now I am interested in generating an RDF/XML
file using rdflib in Python. Could you please give me some sample code to
start? Thanks
Answer: The [rdflib docs](https://rdflib.readthedocs.org) could be a good starting
point, particularly the [Getting
Started](https://rdflib.readthedocs.org/en/latest/gettingstarted.html)
section. For example:
    import rdflib
    from rdflib.Graph import Graph   # rdflib 3.x: from rdflib import Graph

    g = Graph()
    g.parse("http://www.w3.org/2000/10/rdf-tests/rdfcore/ntriples/test.nt",
            format="nt")
    # the RDF/XML serializer is registered as "xml" (or "pretty-xml")
    g.serialize("test.rdf", format="xml")
|
How to make a ssh connection with python?
Question: Can anyone recommend something for making an SSH connection in Python? I need
it to be compatible with any OS.
I've already tried pyssh only to get an error with SIGCHLD, which I've read is
because Windows lacks this. I've tried getting paramiko to work, but I've had
errors between paramiko and Crypto to the point where the latest versions of
each won't work together.
Python 2.6.1 currently on a Windows machine.
Answer: The module pxssh does exactly what you want.
For example, to run 'ls -l' and to print the output, you need to do something
like that :
import pxssh
s = pxssh.pxssh()
if not s.login ('localhost', 'myusername', 'mypassword'):
print "SSH session failed on login."
print str(s)
else:
print "SSH session login successful"
s.sendline ('ls -l')
s.prompt() # match the prompt
print s.before # print everything before the prompt.
s.logout()
Some links :
Pxssh docs : <http://dsnra.jpl.nasa.gov/software/Python/site-
packages/Contrib/pxssh.html>
Pexpect (pxssh is based on pexpect) : <http://www.noah.org/wiki/pexpect>
|
Parallel computing with Python
Question: Here is the code of my Python script:
import time
for j in range(1,150,1):
for i in range(1,5,1):
x = j + i
print(x)
time.sleep(180)
This script is started from my finite element program, which can be scripted
with Python. The script runs, but whenever the time.sleep call is active the
finite element program stops working too. The script's main task is to print
"x" five times, pause for a certain time, and then print "x" five times again
(in the final program another command is used instead of "print"). The FE
program must keep running during the pause. How can I manage that? I have
already tried different things, e.g. threading, but that didn't work either.
Can the problem be solved by using parallel processes? Thanks for your
suggestions.
Answer: Do you intend to do something like this:
#! /usr/bin/env python
import threading
import time
class Worker (threading.Thread):
def run (self):
for j in range(1,150,1):
for i in range(1,5,1):
x = j + i
print "Worker says: %d" % x
time.sleep (5)
if __name__ == '__main__':
Worker ().start ()
for i in range (1, 100):
print "Main thread says: I am running."
time.sleep (1)
|
Using Python and Beautifulsoup how do I select the desired table in a div?
Question: I would like to be able to select the table containing the "Accounts Payable"
text but I'm not getting anywhere with what I'm trying and I'm pretty much
guessing using findall. Can someone show me how I would do this?
For example this is what I start with:
<div>
<tr>
<td class="lft lm">Accounts Payable
</td>
<td class="r">222.82</td>
<td class="r">92.54</td>
<td class="r">100.34</td>
<td class="r rm">99.95</td>
</tr>
<tr>
<td class="lft lm">Accrued Expenses
</td>
<td class="r">36.49</td>
<td class="r">33.39</td>
<td class="r">31.39</td>
<td class="r rm">36.47</td>
</tr>
</div>
And this is what I would like to get as a result:
<tr>
<td class="lft lm">Accounts Payable
</td>
<td class="r">222.82</td>
<td class="r">92.54</td>
<td class="r">100.34</td>
<td class="r rm">99.95</td>
</tr>
Answer: You can select the _td_ elements with class _lft lm_ and then examine the
element.string to determine if you have the "Accounts Payable" td:
import sys
from BeautifulSoup import BeautifulSoup
# where so_soup.txt is your html
f = open ("so_soup.txt", "r")
data = f.readlines ()
f.close ()
soup = BeautifulSoup ("".join (data))
cells = soup.findAll('td', {"class" : "lft lm"})
for cell in cells:
# You can compare cell.string against "Accounts Payable"
print (cell.string)
If you would like to examine the following siblings for _Accounts Payable_ for
instance, you could use the following:
if (cell.string.strip () == "Accounts Payable"):
sibling = cell.findNextSibling ()
while (sibling):
print ("\t" + sibling.string)
sibling = sibling.findNextSibling ()
**Update for Edit**
If you would like to print out the original HTML, just for the siblings that
follow the _Accounts Payable_ element, this is the code for that:
lines = ["<tr>"]
for cell in cells:
lines.append (cell.prettify().decode('ascii'))
if (cell.string.strip () == "Accounts Payable"):
sibling = cell.findNextSibling ()
while (sibling):
lines.append (sibling.prettify().decode('ascii'))
sibling = sibling.findNextSibling ()
lines.append ("</tr>")
f = open ("so_soup_out.txt", "wt")
f.writelines (lines)
f.close ()
|
Python: Implementing a series of functions with each one calling the next
Question: Programming isn't my field, but I'm trying to learn. I've been writing a
program that works something like this:
from Tkinter import *
root=Tk()
def Secondwindow():
firstframe.destroy()
secondframe = Frame(root)
secondframe.pack()
secondcontent = Label(secondframe, text = 'second window content').pack()
def Thirdwindow():
secondframe.destroy()
thirdframe = Frame(root)
thirdframe.pack()
thirdcontent = Label(thirdframe, text = 'third window content').pack()
def Fourthwindow():
thirdframe.destroy()
fourthframe = Frame(root)
fourthframe.pack()
fourthcontent = Label(fourthframe, text = 'fourth window content').pack()
thirdbutton = Button(thirdframe, text = 'Next ->', command = Fourthwindow).pack()
secondbutton = Button(secondframe, text = 'Next ->', command = Thirdwindow).pack()
firstframe = Frame(root)
firstframe.pack()
firstcontent = Label(firstframe, text = 'first window content').pack()
firstbutton = Button(firstframe, text = 'Next ->', command = Secondwindow).pack()
root.mainloop()
Now, this works perfectly, but as my program gets larger and more complicated
I am starting to see that this is neither elegant nor easy to maintain. I
would like to simply write each function in (more or less) sequence, but that
causes NameErrors when the program reads a reference to a function that hasn't
been defined yet (it seems like the program shouldn't worry about it until it
has to run the function, by which time it would have already seen the function
definition, but oh well).
What is the simplest way to have this functionality (functions called from
within functions) without having to stick the next function definition in the
middle of the first function definition? Thanks in advance!
Answer: I un-nested the functions to see what the error was. The problem you have is
that the functions try to access variables defined in the scope of another
function. That won't work. You either have to nest functions so that their
scopes overlap, as you did -- which is awkward -- or you have to use global
variables -- which is less awkward, but still awkward -- or you have to pass
variable names from function to function.
However, because you're using callbacks here -- which are quite advanced! --
executing the third option is more complicated. If you really want to get this
working, I would suggest an object-oriented approach. But frankly I would
suggest starting with something simpler than this for a beginning programmer.
The most important thing is that you get used to scoping rules. That, at
least, I can explain with your code. Here's an explanation of the NameErrors
you were getting.
def Secondwindow():
firstframe.destroy()
secondframe = Frame(root)
secondframe.pack()
secondcontent = Label(secondframe, text = 'second window content').pack()
secondbutton = Button(secondframe, text = 'Next ->', command = Thirdwindow).pack()
def Thirdwindow():
secondframe.destroy()
thirdframe = Frame(root)
thirdframe.pack()
thirdcontent = Label(thirdframe, text = 'third window content').pack()
thirdbutton = Button(thirdframe, text = 'Next ->', command = Fourthwindow).pack()
These two functions look like they do almost the same thing. But they don't!
Here's why:
def Secondwindow():
firstframe.destroy()
This line refers to `firstframe`, which was defined in the global scope (i.e.
at the 'lowest level' of the program. That means it can be accessed from
anywhere. So you're ok here.
secondframe = Frame(root)
secondframe.pack()
secondcontent = Label(secondframe, text = 'second window content').pack()
secondbutton = Button(secondframe, text = 'Next ->', command = Thirdwindow).pack()
These variables are all defined within the scope of `Secondwindow`. That means
they _only exist_ within `Secondwindow`. Once you leave `Secondwindow`, they
cease to exist. There are good reasons for this!
def Thirdwindow():
secondframe.destroy()
Now you run into your problem. This tries to access `secondframe`, but
`secondframe` is only defined within `Secondwindow`. So you get a `NameError`.
thirdframe = Frame(root)
thirdframe.pack()
thirdcontent = Label(thirdframe, text = 'third window content').pack()
thirdbutton = Button(thirdframe, text = 'Next ->', command = Fourthwindow).pack()
Again, these are all defined only within the scope of `ThirdWindow`.
Now, I can't explain everything you need to know to make this work, but here's
a basic hint. You can create a global variable within a function's namespace
by saying
global secondframe
secondframe = Frame(root)
Normally python assumes that variables defined in a function are local
variables, so you have to tell it otherwise. That's what `global secondframe`
does. Now you really shouldn't do this very often, because as the global scope
fills up with more and more variables, it becomes harder and harder to work
with them. Functions create smaller scopes (or 'namespaces' as they're called
in some contexts) so that you don't have to keep track of all the names (to
make sure you don't use the same name in two places, or make other even more
disastrous mistakes).
Normally, to avoid creating a global variable, you would have each function
return the frame it defines by calling `return secondframe`. Then you could
add a function argument to each function containing the previous frame, as in
`def Thirdwindow(secondframe)`. But because you're using callbacks to call
`Secondwindow`, etc., this method gets knotty. Here's some code that works
around the problem by using `lambda` statements.
from Tkinter import *
root=Tk()
def Secondwindow(firstframe):
firstframe.destroy()
secondframe = Frame(root)
secondframe.pack()
secondcontent = Label(secondframe, text = 'second window content').pack()
secondbutton = Button(secondframe, text = 'Next ->', command = lambda: Thirdwindow(secondframe)).pack()
def Thirdwindow(secondframe):
secondframe.destroy()
thirdframe = Frame(root)
thirdframe.pack()
thirdcontent = Label(thirdframe, text = 'third window content').pack()
thirdbutton = Button(thirdframe, text = 'Next ->', command = lambda: Fourthwindow(thirdframe)).pack()
def Fourthwindow(thirdframe):
thirdframe.destroy()
fourthframe = Frame(root)
fourthframe.pack()
fourthcontent = Label(fourthframe, text = 'fourth window content').pack()
firstframe = Frame(root)
firstframe.pack()
firstcontent = Label(firstframe, text = 'first window content').pack()
firstbutton = Button(firstframe, text = 'Next ->', command = lambda: Secondwindow(firstframe)).pack()
root.mainloop()
But the best way to fix this is to use object-oriented code. Unfortunately
that's just too complex a topic to get into; it would just add more verbiage
to an already long post. I honestly think you should spend some time getting
used to functions and scoping first.
* * *
That said, I found a moment to fiddle with an object-oriented variation. Here
it is:
from Tkinter import *
root=Tk()
class FrameRepeater(object):
def __init__(self, start=0, end=4):
self.frame = None
self.number = start
self.end = end
def new_frame(self):
if self.frame:
self.frame.destroy()
self.frame = Frame(root)
self.frame.pack()
self.content = Label(self.frame, text = 'window ' + str(self.number) + ' content')
self.content.pack()
self.button = Button(self.frame, text = 'Next ->', command = self.replace)
self.button.pack()
self.number += 1
def replace(self):
if self.number < self.end:
self.new_frame()
elif self.number >= self.end:
self.content.config(text='Press button again to quit')
self.button.config(command=self.quit)
def quit(self):
self.frame.destroy()
root.destroy()
exit()
FrameRepeater().new_frame()
root.mainloop()
A couple of things to note. First, in those lines that read like this, there's
a subtle error:
thirdcontent = Label(thirdframe, text = 'third window content').pack()
You were storing `None` in `thirdcontent`, because the `pack()` method has no
return value. If you want to preserve a reference to the `Label`, you have to
save the reference first, then `pack()` it separately, as I did in `new_frame`
above.
Second, as you can see from my `replace` method, you don't actually have to
destroy the frame to change the text of the label _or_ the button command! The
above still destroys the first three frames just to show how it would work.
Hope this gets you started! Good luck.
|
PySerial API thinks the com port is still open during a write(), why?
Question: I am using PySerial (a Python API for serial communication) to send AT
commands to a Nokia phone via bluetooth.
import serial
com = serial.Serial()
com.port = 19
com.timeout = 0 #also tried a timeout value greater than 0.
try:
com.open()
# at this point I turn off the phone.
com.write("AT\r\n")
print com.readlines()
except SerialException, e:
print e
Just after I open() the com, I turn off the phone. Then, I write("AT\r\n"). At
this point, the function blocks and the runtime hangs.
Do you have any solution?
Answer: Setting `timeout=0` actually puts reads in non-blocking mode; it is the default
`timeout=None` that disables the timeout and makes `read()/readlines()` block
until the device answers. Try a small positive timeout on your serial
connection, e.g. `com = serial.Serial(timeout=0.5)`.
If it still hangs, the problem should be in the bluetooth stack.
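Note that the hang in your trace is in `write()`, which has its own timeout in
pyserial 2.5 and later (the 2.x keyword is `writeTimeout`); a minimal sketch of
the same script with both timeouts set:

    import serial
    from serial import SerialException

    com = serial.Serial()
    com.port = 19
    com.timeout = 0.5        # read timeout, in seconds
    com.writeTimeout = 0.5   # write timeout; raises SerialTimeoutException
    try:
        com.open()
        com.write("AT\r\n")
        print com.readlines()
    except SerialException, e:  # SerialTimeoutException is a subclass
        print e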
|
Starting a subprocess via python multiprocessing hangs
Question: I'm using pyAudio to listen to the audio device and do some "stuff" in the
background while the main program continues to run.
I started out with a second script, but would like to consolidate into a
single script for supportability. When I moved the functions in and used
Process to start up the listener, it simply hangs and never runs.
Here's the simplified snippets of code:
def listener(self, q):
CHANNELS = 2
RATE = 44100
INPUT_BLOCK_TIME = 0.05
FORMAT = pyaudio.paInt16
RATE = 44100
INPUT_FRAMES_PER_BLOCK = int(RATE*INPUT_BLOCK_TIME)
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
channels = CHANNELS,
rate = RATE,
input = True,
frames_per_buffer = INPUT_FRAMES_PER_BLOCK)
q.put(os.getpid())
import time
time.sleep(300)
def startListener(self):
q = Queue()
p = Process(target=self.listener, args=[q])
p.daemon=True
p.start()
print q.get()
Now if I remove the following stream setup then I get the process ID back as
expected:
stream = p.open(format = FORMAT,
channels = CHANNELS,
rate = RATE,
input = True,
frames_per_buffer = INPUT_FRAMES_PER_BLOCK)
Is there something about multiprocessing and threading I am missing? Is this a
bad idea? Should I stick with keeping the listener code in a separate script?
Thanks in advance!
Answer: The `__init__` method for `pyaudio.open()` is:
__init__(self, PA_manager, rate, channels, format, input=False, output=False, input_device_index=None, output_device_index=None, frames_per_buffer=1024, start=True, input_host_api_specific_stream_info=None, output_host_api_specific_stream_info=None)
according to the docs on their website. You don't seem to be setting a
PA_manager, which looks like a required parameter.
|
Is there a meaningful way to use context managers inside generators?
Question:
from contextlib import contextmanager
@contextmanager
def context():
print "entering"
yield
print "exiting"
def test():
with context():
for x in range(10):
yield x
for x in test():
if x == 5:
break # or raise
output:
entering
Is there a way to make python automatically invoke the `__exit__` method of
context() when the `for`-loop is interrupted? Or some other way of achieving
the same aim? What I know about generators and context managers makes me
suspect it's not possible, but this makes context managers rather useless
inside generators, doesn't it? It seems to me a `yield` statement inside a
`with` block should raise a red flag: **the context manager's `__exit__` may
not run**.
Answer: Well, you could wrap the yield inside the context() function with a
try/finally clause:
from contextlib import contextmanager
@contextmanager
def context():
print "entering"
try:
yield
finally:
print "exiting"
def test():
with context():
for x in range(10):
yield x
for x in test():
if x == 5:
break # or raise
output:
entering
exiting
**Edit:** If you try **help(contextmanager)**, it will show its "typical"
usage example, where the yield is wrapped in a try/finally clause.
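The reason the try/finally version fires is generator finalization: closing a
suspended generator raises GeneratorExit at the `yield`, which unwinds the
`with` block. You can trigger it deterministically yourself (a sketch using the
test() generator above):

    gen = test()
    print gen.next()   # prints "entering", then 0
    gen.close()        # raises GeneratorExit inside test() -> prints "exiting"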
|
GAE: managing dev and production versions of my app
Question: There are a lot of things I need to do before uploading my production app to
GAE. It's becoming very tedious and error-prone, and I would like to know some
best practices for handling the following requirements, and whether some tools
already exist for doing this:
Dev and test environment on my local machine: I want to use debug versions of
my javascript files. Production: I want to minify the files and also
concatenate them into one. E.g., given this code in `mytemplate.html`:
<script src="script1.js"></script>
<script src="script2.js"></script>
<script src="script3.js"></script>
<script src="script4.js"></script>
<script src="script5.js"></script>
Wanted: some process to automatically minify the files, concatenate them into
1, and edit the code above so that it becomes:
<script src="mytemplate.js"></script>
Dev and test: use a `settings.dev.py`.
Production: use `settings.py`.
I need some way to automatically switch to settings.py when pushing to
production; i.e., I don't want to manually edit all the .py files and change
every reference from settings.dev.py to settings.py. Is a config file the
recommended way to do this, i.e. I change one setting in my config.py before
pushing to production and the rest of the code picks up the right settings?
Also, in Python is the config file usually a .py file, or something else; what
is the norm? (In .NET we usually use XML for storing configuration.)
Answer: You can detect whether your app is running in dev or production as follows:
import os
DEV = os.environ['SERVER_SOFTWARE'].startswith('Development')
Pass this bool along to your Django templates, and write conditionals when you
want behavior to vary:
{% if DEV %}
<script src="script1.js"></script>
<script src="script2.js"></script>
<script src="script3.js"></script>
<script src="script4.js"></script>
<script src="script5.js"></script>
{% else %}
<script src="mytemplate.js"></script>
{% endif %}
To handle minification at the last minute, write a custom deployment script
that runs any pre-deployment tasks first and then calls `appcfg.py update`.
When you want to deploy, run your deployment wrapper instead of calling
appcfg.py directly.
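A minimal sketch of such a wrapper; the minifier command line and the file
layout are assumptions, not a prescribed toolchain:

    #!/usr/bin/env python
    # deploy.py: concatenate, minify, then upload
    import subprocess

    # 1. concatenate the debug scripts into one file
    with open('combined.js', 'w') as out:
        for i in range(1, 6):
            out.write(open('static/script%d.js' % i).read())

    # 2. minify (yuicompressor is just one example of a minifier)
    subprocess.check_call(['java', '-jar', 'yuicompressor.jar',
                           '-o', 'static/mytemplate.js', 'combined.js'])

    # 3. upload to production
    subprocess.check_call(['appcfg.py', 'update', '.'])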
You can handle your settings.py in a few different ways. Use one settings file
with per-environment conditionals, import different settings files depending
on the environment, or swap in the production file as part of your deployment
wrapper.
|
In a unit test, how can the parameter passed to a custom exception be determined?
Question:
class AppError(Exception): pass
class MissingInputError(AppError):
em = {1101: "Date input is missing. Please verify.", \
1102: "Key input is missing. Please verify.", \
1103: "Stn input is missing. Please verify."}
# and so on ...
...
def validate(self):
""" Method of Input class to validate input and save it """
params = self.__params
if 'dt' in params:
self.__validateKey(escape(params['dt'][0]))
else:
raise MissingInputError(1101)
if 'key' in params:
self.__validateService(escape(params['key'][0]))
else:
raise MissingInputError(1102)
# and so on ...
Unit testing the above, I know that the following tests in the MissingInput
test class:
def testMissingKeyInput(self):
""" Missing key should raise error """
ip = controller.Input(MissingInput.missInputKey)
self.assertRaises(errors.MissingInputError, ip.validate)
def testMissingDtInput(self):
""" Missing dt should raise error """
ip = controller.Input(MissingInput.missInputDt)
self.assertRaises(errors.MissingInputError, ip.validate)
# and so on ...
will correctly detect if a MissingInputError exception was raised.
Is there any way to determine in the test what error number was passed to the
exception while calling it, so that I can be sure that the error is being
raised for that particular missing input, and not for any other missing
inputs?
(P.S: **Python 2.4.3**).
* * *
**Tip** : If you are stuck with 2.4 to 2.6, **use the[unittest2
library](http://www.voidspace.org.uk/python/modules.shtml#unittest2)**. In
Python 2.7 and 3.2 a whole bunch of improvements to unittest will arrive.
unittest2 is a backport of the new features (and tests) to work with Python
2.4, 2.5 & 2.6.
Answer: You can pass a regular expression that runs against the message:
import unittest
class MyError(Exception):
pass
def raiseError():
raise MyError(100)
class TestStuff(unittest.TestCase):
def testError(self):
self.assertRaisesRegexp(MyError, '100', raiseError)
unittest.main()
Does that make sense to you? If you were raising MyError('foo') or
MyError(101), the test would fail because those wouldn't match the regular
expression of '100'. Fortunately, this method will work against numbers and
anything else that you can cast to a string.
See the [unittest documentation](http://docs.python.org/library/unittest.html)
for details on assertRaisesRegexp.
Alternatively, if you're on Python 2.6 or older, assertRaisesRegexp is not
there and you'll have to do something like this:
try:
<code>
except MyError, message:
self.failUnlessEqual(message.args, <expected args>)
else:
self.fail('MyError not raised')
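On Python 2.4 you can also assert on the error number directly, since whatever
you pass to MissingInputError(1101) lands in `e.args`; a sketch of an extra
method for your MissingInput test class:

    def testMissingDtInputCode(self):
        """ Missing dt should raise error 1101 specifically """
        ip = controller.Input(MissingInput.missInputDt)
        try:
            ip.validate()
        except errors.MissingInputError, e:
            self.failUnlessEqual(e.args[0], 1101)
        else:
            self.fail('MissingInputError not raised')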
|
Getting a number of digits
Question: I've been searching for a way in Python to keep only 4 digits to the right of
the decimal point of a number, but I couldn't find one. I took a look at this
post ---> [Rounding Decimals with New Python Format
Function](http://stackoverflow.com/questions/1598579/rounding-decimals-with-
new-python-format-function), but the function written there...
>>> n = 4
>>> p = math.pi
>>> '{0:.{1}f}'.format(p, n)
'3.1416'
...does not seem to work in my case.
I imported the modules "math" and "decimal", but maybe I'm missing some other
import; I don't know which one it would be.
Thanks everyone, and sorry if this issue has already been posted.
Peixe
Answer:
"%.3f" % math.pi
I know its using the old syntax but I personally prefer it.
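If the `format()` version fails because you are on an older Python or are
actually holding `decimal.Decimal` values, `quantize` does the rounding
natively:

    from decimal import Decimal
    print Decimal('3.14159265').quantize(Decimal('0.0001'))   # 3.1416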
|
Import strings from .txt and searching for first numeric character - python
Question: I am a total noob with python (programming in fact) but I hope you can help :)
I have a .txt file with a list of strings containing addresses.
I want to import it into Python, search for the first numeric characters in
each line, and put that number into a new column, like this:
input
'Elm Street 12'
'Baker Street 143'
and output
'Elm Street 12' , 12
'Baker Street 143' , 14
and save it to a .txt file.
I am trying to do this from the Windows cmd.
Thank you in advance.
Answer:
    with open('file.txt') as inn:
        for line in inn:
            line = line.strip()
            # keep whitespace-separated tokens that are all digits
            nums = [int(item) for item in line.split() if item.isdigit()]
            print "%s %s" % (line, nums)
|
Python - how can I read stdin from shell, and send stdout to shell and file
Question: I'd like to have a Python script read stdin from the shell (bash), and send
stdout to the shell as well as to a redirected file. I tried the following:
$ cat test.py
#!/usr/bin/python
val = raw_input("enter val: ")
print val
$ ./test.py | tee out
testing
enter val: testing
$ cat out
enter val: testing
For some reason, the raw_input prompt is printed after I type my input, which
means I can't see the prompt as I type. With a bash script, I can get
something similar to work.
$ cat test.sh
#!/bin/bash
echo "enter val: "
read val
echo $val
$ ./test.sh | tee out
enter val: testing
testing
$ cat out
enter val: testing
Answer: When stdout is piped, it is block-buffered rather than line-buffered, so the
prompt sits in the buffer until something flushes it. Either flush explicitly,
or send the prompt to stderr:
#!/usr/bin/python
import sys
print "enter val: ",
sys.stdout.flush()
val = raw_input()
print val
Or
#!/usr/bin/python
import sys
sys.stdout = sys.stderr
val = raw_input("enter val: ")
sys.stdout = sys.__stdout__
print val
|
fast parsing links out of a page in python
Question: I need to parse a large number of pages (say 1000) and replace the links with
tinyurl links.
Right now I am doing this using a regex:
href_link_re = re.compile(r"<a[^>]+?href\s*=\s*(\"|')(.*?)\1[^>]*>", re.S)
but its not fast enough.
I am thinking so far:
1. state machine (the success of this will depend on my ability to write clever code)
2. using an html parser
Can you suggest **faster** ways?
**EDIT:** You would think that an html parser would be faster than regex, but
in my tests it is not:
from BeautifulSoup import BeautifulSoup, SoupStrainer
import re
import time
__author__ = 'misha'
regex = re.compile(r"<a[^>]+?href\s*=\s*(\"|')(.*?)\1[^>]*>", re.S)
def test(text, fn, desc):
start = time.time()
total = 0
links = [];
for i in range(0, 10):
links = fn(text)
total += len(links)
end = time.time()
print(desc % (end-start, total))
# print(links)
def parseRegex(text):
links = set([])
for link in regex.findall(text):
links.add(link[1])
return links
def parseSoup(text):
links = set([])
for link in BeautifulSoup(text, parseOnlyThese=SoupStrainer('a')):
if link.has_key('href'):
links.add(link['href'])
return links
if __name__ == '__main__':
f = open('/Users/misha/test')
text = ''.join(f.readlines())
f.close()
test(text, parseRegex, "regex time taken: %s found links: %s" )
test(text, parseSoup, "soup time taken: %s found links: %s" )
output:
regex time taken: 0.00451803207397 found links: 2450
soup time taken: 0.791836977005 found links: 2450
(test is a dump of the wikipedia front page)
I must be using soup badly. **What am I doing wrong?**
Answer: LXML is probably your best bet for this task. See [Beautiful Soup vs LXML
Performance](http://blog.dispatched.ch/2010/08/16/beautifulsoup-vs-lxml-
performance/). Parsing links is easy in LXML and it's fast.
    import lxml.html

    root = lxml.html.fromstring(s)
    anchors = root.cssselect("a")
    links = [a.get("href") for a in anchors]
|
Problem posting to datastore with deferred.defer tasks
Question: I am trying to do this on Google App Engine in Python:
def add_to_db(person):
a = PersonDb(key_name = person)
# get some data
data1 = a.name
data2 = a.age
a.put()
for person in people:
deferred.defer(add_to_db, person, _queue="myque")
When I run the above code via a cron job it does not work. The file executes
without error, and the tasks (there are 200 strings in the list 'people') get
added to the queue correctly and trickle through, but the PersonDb entities
never get written. I know the function works (the above is simplified)
because this works:
for person in people:
add_to_db(person)
The above two lines of code work, and the database gets updated, but I need to
run this as a deferred task. Any suggestions?
**UPDATE** : I am getting this in the log: File "C:\Program
Files\Google\google_appengine\google\appengine\ext\deferred\deferred.py", line
129, in run raise PermanentTaskFailure(e) PermanentTaskFailure: 'module'
object has no attribute 'add_to_db'
Answer: This is probably happening because the function you are trying to defer
(`add_to_db`) is defined in the request handler. Move `add_to_db` to another
module (a different python file) and import it from there (`from myfuncs
import add_to_db`).
This limitation is mentioned in the limitations section of the deferred
[article](http://code.google.com/appengine/articles/deferred.html).
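A minimal sketch of the split (file names here are arbitrary, and the model
import is an assumption about where PersonDb lives):

    # tasks.py -- a separate module holding the deferred function
    from models import PersonDb   # wherever your PersonDb model is defined

    def add_to_db(person):
        a = PersonDb(key_name=person)
        # ... populate fields ...
        a.put()

    # handler module (run by cron)
    from google.appengine.ext import deferred
    from tasks import add_to_db

    for person in people:
        deferred.defer(add_to_db, person, _queue="myque")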
|
Detect re (regexp) object in Python
Question: I wonder what is the proper pythonic, backward- and forward-compatible way
to check whether an object is a compiled `re` object.
The `isinstance` method cannot easily be used, since the resulting object
claims to be a `_sre.SRE_Pattern` object:
>>> import re
>>> rex = re.compile('')
>>> rex
<_sre.SRE_Pattern object at 0x7f63db414390>
but there is no such one:
>>> import _sre
>>> _sre.SRE_Pattern
AttributeError: 'module' object has no attribute 'SRE_Pattern'
>>> import sre
__main__:1: DeprecationWarning: The sre module is deprecated, please import re.
>>> sre.SRE_Pattern
AttributeError: 'module' object has no attribute 'SRE_Pattern'
>>> re.SRE_Pattern
AttributeError: 'module' object has no attribute 'SRE_Pattern'
I don't want to use duck typing (i.e. checking for the availability of some
specific methods), because this could collide with some other types.
For now, I'm using:
>>> RegexpType = type(re.compile(''))
>>> type(rex) == RegexpType
True
but there might be a better way..
Answer: `re._pattern_type` exists, and appears to do what you want:
>>> isinstance(re.compile(''), re._pattern_type)
True
But this is not a good idea - per Python convention, names starting with _ are
not part of the public API of a module and not part of the backward
compatibility guarantees. So, using `type(re.compile(''))` is your best bet -
though notice that this isn't guaranteed to work either, since the re module
makes no mention of the object returned from re.compile() being of any
particular class.
And indeed, even if this was guaranteed, the most Pythonic and back- and
forward- compatible way would be to rely on the _interface_ , rather than the
type. In other words, embracing duck typing and EAFP, do something like this:
try:
rex.match(my_string)
except AttributeError:
# rex is not an re
else:
# rex is an re
|
Performing many means in numpy
Question: Good morning, I am implementing a Cressman filter for doing distance-weighted
averages in Numpy. I use a Ball Tree implementation (thanks to Jake
VanderPlas) to return a list of locations for each point in a request
array. The query array (q) is shape [n,3], and each point holds the x,y,z at
which I want to do a weighted average of points stored in the tree. The code
wrapped around the tree returns points within a certain distance, so I get
arrays of variable-length arrays. I use a where to find non-empty entries (i.e.
positions where there were at least some points within the radius of
influence), creating the isgood array.
I then loop over all query points to return the weighted average of the values
self.z (note that this can either be dims=1 or dims=2 to allow multiple co-
gridding).
So the thing that complicates using map or other quicker methods is the
non-uniformity of the lengths of the arrays within self.distances and
self.locations. I am still fairly green with numpy/python, but I cannot think
of a way to do this array-wise (i.e. without reverting to loops):
self.locations, self.distances = self.tree.query_radius( q, r, return_distance=True)
t2=time()
if debug: print "Removing voids"
isgood=np.where( np.array([len(x) for x in self.locations])!=0)[0]
interpol = np.zeros( (len(self.locations),) + np.shape(self.z[0]) )
interpol.fill(np.nan)
    for dist, ix, posn, roi in zip(self.distances[isgood], self.locations[isgood], isgood, r[isgood]):
        # posn is already the index into interpol, so no separate counter is needed
        interpol[posn] = np.average(self.z[ix], weights=(roi**2-dist**2) / (roi**2 + dist**2), axis=0)
So... any hints on how to speed up the loop?
For a typical mapping, applied to mapping weather radar data from a
range,azimuth,elevation grid to a cartesian grid where I have 240x240x34
points and 4 variables, it takes 99 s to query the tree (written by Jake in C
and cython; this is the hard step, as you need to search the data!) and 100
seconds to do the calculation, which in my opinion is slow. Where is my
overhead? Is np.average efficient, or since it is called millions of times, is
there a speedup to be gained here? Would I gain by using float32 rather than
the default float64, or even by scaling to ints (which would make it very hard
to avoid wrap-around in the weighting)? Any hints gratefully received!
Answer: You can find a discussion about the relative merits of the Cressman scheme vs
using a Gaussian weight function at:
<http://www.flame.org/~cdoswell/publications/radar_oa_00.pdf>
The key is to match the smoothing parameter to the data (I recommend using a
value close to the average spacing between data points). Once you know the
smoothing parameter, you can set an "influence radius" equal to the radius
where the weight function falls to 0.01 (or whatever).
How important is speed? If you wish, rather than calling an exponential
function to determine the weight, you can make up a discrete table of weights
for some fixed number of radius increments, which speeds up the calculation
considerably. Ideally, you should have data outside the grid boundaries that
can be used in the mapping of the values surrounding the gridpoints (even on
the boundary points of the grid). Note this is NOT a true interpolation scheme
- it won't return the observed values at the data points exactly. Like the
Cressman scheme, it's a low-pass filter.
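A numpy sketch of that lookup-table idea, using the questioner's Cressman
weight instead of an exponential (the bin count is arbitrary):

    import numpy as np

    nbins = 1000
    roi = 1.0   # radius of influence
    r_edges = np.linspace(0.0, roi, nbins)
    w_table = (roi**2 - r_edges**2) / (roi**2 + r_edges**2)

    # vectorized lookup for a whole batch of distances at once
    dist = np.random.uniform(0.0, roi, 10000)
    idx = np.minimum((dist / roi * (nbins - 1)).astype(int), nbins - 1)
    w = w_table[idx]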
|
Reading lines from text file in python (windows)
Question: I am working on a simple import routine that translates a text file to a json
file format for our system in python.
import json
# Open text file for reading
txtFile = open('Boating.Make.txt', 'r')
# Create picklist obj
picklistObj = dict()
picklistObj['name'] = 'Boating.Make'
picklistObj['items'] = list()
i = 0
# Iterate through each make in text file
for line in txtFile:
picklistItemObj = dict()
picklistItemObj['value'] = str(i)
picklistItemObj['text'] = line.strip()
picklistItemObj['selectable'] = True
picklistObj['items'].append(picklistItemObj)
i = i + 1
txtFile.close()
picklistJson = json.dumps(picklistObj, indent=4)
print picklistJson
picklistFile = open('Boating.Make.json', 'w')
picklistFile.write(picklistJson)
picklistFile.close()
My question is, why do I need the "strip"? I thought that python was supposed
to magically know the newline constant for whatever environment I am currently
in. Am I missing something?
I should clarify that the text file I am reading from is an ASCII file that
contains lines of text separated by '\r\n'.
Answer: Python keeps the new line characters while enumerating lines. For example,
when enumerating a text file such as
foo
bar
you get two strings: `"foo\n"` and `"bar\n"`. If you don't want the terminal
new line characters, you call `strip()`.
I am not a fan of this behavior by the way.
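For example, since your file uses '\r\n', universal-newline mode ('rU') will
normalize the endings for you, though you still need the strip:

    for line in open('Boating.Make.txt', 'rU'):
        print repr(line)           # 'Elm Street 12\n' -- newline kept
        print repr(line.strip())   # 'Elm Street 12'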
|
activemq how to configure to work with stomp in python
Question: I have activemq installed and running locally, but when I run the following
script, I get an error:
#!/usr/bin/env python
import time
import sys
import stomp
class MyListener(object):
def on_error(self, headers, message):
print 'received an error %s' % message
def on_message(self, headers, message):
print 'received a message %s' % message
conn = stomp.Connection(host_and_ports=[('localhost', 61616)])
conn.set_listener('', MyListener())
conn.start()
conn.connect()
conn.subscribe(destination='/home/bitcycle/svn/cass/queue.test', ack='auto')
conn.send('Test', destination='/home/bitcycle/svn/cass/queue.test')
time.sleep(2)
conn.disconnect()
error:
./proc.py
No handlers could be found for logger "stomp.py"
Traceback (most recent call last):
File "./proc.py", line 20, in
conn.disconnect()
File "/usr/local/lib/python2.7/dist-packages/stomp.py-3.0.3-py2.7.egg/stomp/connect.py", line 387, in disconnect
self.__send_frame_helper('DISCONNECT', '', utils.merge_headers([self.__connect_headers, headers, keyword_headers]), [ ])
File "/usr/local/lib/python2.7/dist-packages/stomp.py-3.0.3-py2.7.egg/stomp/connect.py", line 453, in __send_frame_helper
self.__send_frame(command, headers, payload)
File "/usr/local/lib/python2.7/dist-packages/stomp.py-3.0.3-py2.7.egg/stomp/connect.py", line 489, in __send_frame
raise exception.NotConnectedException()
stomp.exception.NotConnectedException
Can someone help me to understand what i need to do to get this to work? I
would like to use activemq for inter-process communication.
Answer: At first glance I'd say you are trying to connect to the wrong port. Out of
the box ActiveMQ is configured to use OpenWire protocol on port 61616, and
Stomp is not enabled. You need to check your ActiveMQ configuration file and
ensure that the Stomp transport is enabled, the standard port we use is 61613
for Stomp. See this page for some info on configuring Stomp: [ActiveMQ Stomp
Guide](http://activemq.apache.org/stomp.html)
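Assuming you enable the Stomp connector on the default 61613, the only change
needed in the script is the port (the logging line just quiets the "No handlers
could be found" warning):

    import logging
    logging.basicConfig()

    conn = stomp.Connection(host_and_ports=[('localhost', 61613)])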
|
Multivariate spline interpolation in python/scipy?
Question: Is there a library module or other straightforward way to implement
multivariate spline interpolation in python?
Specifically, I have a set of scalar data on a regularly-spaced three-
dimensional grid which I need to interpolate at a small number of points
scattered throughout the domain. For two dimensions, I have been using
[scipy.interpolate.RectBivariateSpline](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html),
and I'm essentially looking for an extension of that to three-dimensional
data.
The N-dimensional interpolation routines I have found are not quite good
enough: I would prefer splines over
[LinearNDInterpolator](http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.LinearNDInterpolator.html#scipy.interpolate.LinearNDInterpolator)
for smoothness, and I have far too many data points (often over one million)
for, e.g., a radial basis function to work.
If anyone knows of a python library that can do this, or perhaps one in
another language that I could call or port, I'd really appreciate it.
Answer: If I'm understanding your question correctly, your input "observation" data is
regularly gridded?
If so,
[`scipy.ndimage.map_coordinates`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.interpolation.map_coordinates.html)
does exactly what you want.
It's a bit hard to understand at first pass, but essentially, you just feed it
a sequence of coordinates that you want to interpolate the values of the grid
at in pixel/voxel/n-dimensional-index coordinates.
As a 2D example:
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
# Note that the output interpolated coords will be the same dtype as your input
# data. If we have an array of ints, and we want floating point precision in
# the output interpolated points, we need to cast the array as floats
data = np.arange(40).reshape((8,5)).astype(np.float)
# I'm writing these as row, column pairs for clarity...
coords = np.array([[1.2, 3.5], [6.7, 2.5], [7.9, 3.5], [3.5, 3.5]])
# However, map_coordinates expects the transpose of this
coords = coords.T
# The "mode" kwarg here just controls how the boundaries are treated
# mode='nearest' is _not_ nearest neighbor interpolation, it just uses the
# value of the nearest cell if the point lies outside the grid. The default is
# to treat the values outside the grid as zero, which can cause some edge
# effects if you're interpolating points near the edge
# The "order" kwarg controls the order of the splines used. The default is
# cubic splines, order=3
zi = ndimage.map_coordinates(data, coords, order=3, mode='nearest')
row, column = coords
nrows, ncols = data.shape
im = plt.imshow(data, interpolation='nearest', extent=[0, ncols, nrows, 0])
plt.colorbar(im)
plt.scatter(column, row, c=zi, vmin=data.min(), vmax=data.max())
for r, c, z in zip(row, column, zi):
plt.annotate('%0.3f' % z, (c,r), xytext=(-10,10), textcoords='offset points',
arrowprops=dict(arrowstyle='->'), ha='right')
plt.show()
![enter image description here](http://i.stack.imgur.com/nWNwn.png)
To do this in n-dimensions, we just need to pass in the appropriate sized
arrays:
import numpy as np
from scipy import ndimage
data = np.arange(3*5*9).reshape((3,5,9)).astype(np.float)
coords = np.array([[1.2, 3.5, 7.8], [0.5, 0.5, 6.8]])
zi = ndimage.map_coordinates(data, coords.T)
As far as scaling and memory usage goes, `map_coordinates` will create a
filtered copy of the array if you're using an order > 1 (i.e. not linear
interpolation). If you just want to interpolate at a very small number of
points, this is a rather large overhead. It doesn't increase with the number
points you want to interpolate at, however. As long as you have enough RAM for a
single temporary copy of your input data array, you'll be fine.
If you can't store a copy of your data in memory, you can either a) specify
`prefilter=False` and `order=1` and use linear interpolation, or b) replace
your original data with a filtered version using `ndimage.spline_filter`, and
then call map_coordinates with `prefilter=False`.
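A sketch of option (b), reusing the arrays from the n-dimensional example
above:

    from scipy import ndimage

    # filter once up front, then interpolate as many times as you like
    filtered = ndimage.spline_filter(data, order=3)
    zi = ndimage.map_coordinates(filtered, coords.T, order=3, prefilter=False)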
Even if you have enough ram, keeping the filtered dataset around can be a big
speedup if you need to call map_coordinates multiple times (e.g. interactive
use, etc).
|