How to delete an entry object Question: I can create a form with Django that has a MySQL backend. I wonder if it is possible to write code that allows you to delete an object. So supposing I had a client called "Tony", and I wanted to write some Python code that allowed me to delete Tony. How would I do that? #forms.py from django import forms from c2duo.accounts.models import * class ClientForm(forms.ModelForm): client_number = forms.IntegerField() name = forms.CharField(max_length=80) address = forms.CharField(max_length=250) telephone = forms.CharField(max_length=20) fax = forms.CharField(max_length=20) email = forms.EmailField() alternative_name = forms.CharField(max_length=80, required=False) alternative_address = forms.CharField(max_length=250, required=False) alternative_telephone = forms.CharField(max_length=20, required=False) alternative_email = forms.EmailField(required=False) class Meta: model = Client fields = ('client_number','name','address','telephone','fax','email','alternative_name','alternative_address','alternative_telephone','alternative_email') #views.py @login_required def add_client(request): if request.method == 'POST': form = ClientForm(request.POST or None) if form.is_valid(): form.save() return HttpResponseRedirect('/index/') else: form = ClientForm() return render_to_response('add_client.html', {'form': form}, context_instance=RequestContext(request)) Answer: def delete_client(request, client_id): client = Client.objects.get(id=client_id) client.delete() redirect_to = '/index/' return HttpResponseRedirect(redirect_to)
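A slightly more defensive variant of that delete view, as a sketch only: it assumes the same Django 1.x-era project as the question, uses get_object_or_404 so a bad id returns a 404 instead of an unhandled DoesNotExist, and only deletes on POST. The views module path and URL pattern shown are hypothetical.

    from django.shortcuts import get_object_or_404
    from django.http import HttpResponseRedirect
    from c2duo.accounts.models import Client

    def delete_client(request, client_id):
        # 404 instead of an unhandled DoesNotExist when the id is wrong
        client = get_object_or_404(Client, id=client_id)
        if request.method == 'POST':
            # delete only on an explicit POST so a crawled GET link cannot remove data
            client.delete()
        return HttpResponseRedirect('/index/')

    # urls.py entry (old-style string view reference, path is an assumption):
    # (r'^client/delete/(?P<client_id>\d+)/$', 'c2duo.accounts.views.delete_client'),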
Twitter API: simple status update (Python) Question: I've been looking for a way to update my Twitter status from a Python client. As this client only needs to access one Twitter account, it should be possible to do this with a pre-generated oauth_token and secret, according to <http://dev.twitter.com/pages/oauth_single_token> However the sample code does not seem to work, I'm getting 'could not authenticate you' or 'incorrect signature'.. As there are a bunch of different python-twitter library out there (and not all of them are up-to-date) I'd really appreciate if anybody could point me a library that's currently working for POST requests, or post some sample code! **Update:** I've tried Pavel's solution, and it works as long as the new message is only one word long, but as soon as it contains spaces, i get this error: status = api.PostUpdate('hello world') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python26\lib\site-packages\python_twitter\twitter.py", line 2459, in PostUpdate self._CheckForTwitterError(data) File "C:\Python26\lib\site-packages\python_twitter\twitter.py", line 3394, in _CheckForTwitterErro r raise TwitterError(data['error']) python_twitter.twitter.TwitterError: Incorrect signature If however the update is just one word, it works: status = api.PostUpdate('helloworld') {'status': 'helloworld'} Any idea why this might be happening? Thanks a lot in advance, Hoff Answer: You might be interested in this <http://code.google.com/p/python-twitter/> Unfortunately the docs don't exist to be fair and last 'release' was in 2009. I've used code from the hg: wget http://python-twitter.googlecode.com/hg/get_access_token.py wget http://python-twitter.googlecode.com/hg/twitter.py After (long) app registration process ( <http://dev.twitter.com/pages/auth#register> ) you should have the Consumer key and secret. They are unique for an app. Next you need to connect the app with your account, edit the get_access_token.py according to instructions in source (sic!) and run. You should have now the Twitter Access Token key and secret. >>> import twitter >>> api = twitter.Api(consumer_key='consumer_key', consumer_secret='consumer_secret', access_token_key='access_token', access_token_secret='access_token_secret') >>> status = api.PostUpdate('I love python-twitter!') >>> print status.text I love python-twitter! And it works for me <http://twitter.com/#!/pawelprazak/status/16504039403425792> (not sure if it's visible to everyone) That said I must add that I don't like the code, so if I would gonna use it I'd rewrite it. EDIT: I've made the example more clear.
Django redirect using reverse() to a URL that relies on query strings Question: I'm writing a django application with a URL like 'http://localhost/entity/id/?overlay=other_id'. Where id is the primary key of the particular entity and overlay is an optional query parameter for a second entity to be overlaid in the display. The user can only ever update an entity when viewing objects through an overlay. When POSTing to /update/id, I want to redirect back to /entity/id, but I don't want to lose my query parameter during the redirect, as the change in view would be jarring. For example, I've got the following in my url.py: ... (r'^update/(?P<id>.+)/(?P<overlay_id>.+)/$', 'update'), (r'^entity/(?P<id>.+)/$', 'view'), ... Because overlay_id is required when updating, it's part of the URL, not a query parameter. In the django view I want to redirect after a successful POST and use reverse() to avoid referencing URLs in my python code. The general idea is: return HttpResponseRedirect( reverse('views.view', kwargs={ 'id': id, }, ) ) But how do I pass my query parameter though reverse? Thanks, Craig Answer: You can use a Django QueryDict object: from django.http import QueryDict # from scratch qdict = QueryDict('',mutable=True) # starting with our existing query params to pass along qdict = request.GET.copy() # put in new values via regular dict qdict.update({'foo':'bar'}) # put it together full_url = reversed_url + '?' + qdict.urlencode() And of course you could write a convenience method for it similar to the previous answer.
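Building on that answer, a small helper sketch; the function name redirect_with_params is mine, not part of Django, and it assumes the old-style string view names used in the question.

    from django.core.urlresolvers import reverse
    from django.http import HttpResponseRedirect, QueryDict

    def redirect_with_params(viewname, params=None, **reverse_kwargs):
        # Reverse the URL first, then append an urlencoded query string.
        url = reverse(viewname, kwargs=reverse_kwargs)
        if params:
            qdict = QueryDict('', mutable=True)
            qdict.update(params)
            url = '%s?%s' % (url, qdict.urlencode())
        return HttpResponseRedirect(url)

    # in the update view, after a successful POST:
    # return redirect_with_params('views.view', {'overlay': overlay_id}, id=id)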
Python generate dates series Question: How can I generate an array with dates like this: timestamps in JavaScript milliseconds format from 2010.12.01 00:00:00 to 2010.12.30 23:59:59 with a step of 5 minutes. ['2010.12.01 00:00:00', '2010.12.01 00:05:00','2010.12.01 00:10:00','2010.12.01 00:15:00', ...] Answer: Well, obviously you start at the start time, loop until you reach the end time and increment in between. import datetime dt = datetime.datetime(2010, 12, 1) end = datetime.datetime(2010, 12, 30, 23, 59, 59) step = datetime.timedelta(minutes=5) result = [] while dt < end: result.append(dt.strftime('%Y.%m.%d %H:%M:%S')) dt += step Fairly trivial.
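Since the question asks for JavaScript-style millisecond timestamps as well as strings, here is a sketch producing both from one generator; the millisecond conversion uses time.mktime and therefore assumes the datetimes are local time.

    import datetime
    import time

    def date_range(start, end, step):
        # Yield datetimes from start up to (but not including) end.
        current = start
        while current < end:
            yield current
            current += step

    start = datetime.datetime(2010, 12, 1)
    end = datetime.datetime(2010, 12, 30, 23, 59, 59)
    step = datetime.timedelta(minutes=5)

    # formatted strings, as in the question
    strings = [d.strftime('%Y.%m.%d %H:%M:%S') for d in date_range(start, end, step)]

    # JavaScript-style milliseconds since the epoch (local time)
    millis = [int(time.mktime(d.timetuple())) * 1000 for d in date_range(start, end, step)]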
Unable to import pylab? Question: I've installed numpy/scipy/matplotlib on Snow Leopard with python 2.6. Importing pylab does not seem to be working.. Upon calling 'import pylab', I get the following: File "<stdin>", line 1, in <module> File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pylab.py", line 1, in <module> from matplotlib.pylab import * File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/pylab.py", line 216, in <module> from matplotlib import mpl # pulls in most modules File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/mpl.py", line 2, in <module> from matplotlib import axis File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axis.py", line 10, in <module> import matplotlib.font_manager as font_manager File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 1339, in <module> _rebuild() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 1326, in _rebuild fontManager = FontManager() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 1004, in __init__ self.ttffiles = findSystemFonts(paths) + findSystemFonts() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 343, in findSystemFonts for f in get_fontconfig_fonts(fontext): File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/font_manager.py", line 301, in get_fontconfig_fonts output = pipe.communicate()[0] File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 683, in communicate stdout = self.stdout.read() What gives? Is pylab expecting something I don't have? It seems to be unable to read something, but I don't really understand what that is.. Answer: Just wait. The problem is that fc-list takes a long time to run the first time through, and so it looks like it's hung; but if you wait 2-3 minutes it will finish and then run more quickly thereafter. I ran fc-list at the command line as root, which presumably initialized a cache of some sort; not sure that's necessary, but it worked!
Using Tornado with Pika for Asynchronous Queue Monitoring Question: I have an AMQP server ([RabbitMQ](http://www.rabbitmq.com/)) that I would like to both publish and read from in a [Tornado web server](http://www.tornadoweb.org/). To do this, I figured I would use an asynchronous amqp python library; in particular [Pika](https://github.com/gmr/pika/) (a variation of it that supposedly supports Tornado). I have written code that appears to successfully read from the queue, except that at the end of the request, I get an exception (the browser returns fine): [E 101219 01:07:35 web:868] Uncaught exception GET / (127.0.0.1) HTTPRequest(protocol='http', host='localhost:5000', method='GET', uri='/', version='HTTP/1.1', remote_ip='127.0.0.1', remote_ip='127.0.0.1', body='', headers={'Host': 'localhost:5000', 'Accept-Language': 'en-us,en;q=0.5', 'Accept-Encoding': 'gzip,deflate', 'Keep-Alive': '115', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Connection': 'keep-alive', 'Cache-Control': 'max-age=0', 'If-None-Match': '"58f554b64ed24495235171596351069588d0260e"'}) Traceback (most recent call last): File "/home/dave/devel/lib/python2.6/site-packages/tornado/web.py", line 810, in _stack_context yield File "/home/dave/devel/lib/python2.6/site-packages/tornado/stack_context.py", line 77, in StackContext yield File "/usr/lib/python2.6/contextlib.py", line 113, in nested yield vars File "/home/dave/lib/python2.6/site-packages/tornado/stack_context.py", line 126, in wrapped callback(*args, **kwargs) File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 42, in _handle_events self._handle_read() File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 66, in _handle_read self.on_data_available(chunk) File "/home/dave/devel/src/pika/pika/connection.py", line 521, in on_data_available self.channels[frame.channel_number].frame_handler(frame) KeyError: 1 I'm not entirely sure I am using this library correctly, so I might be doing something blatantly wrong. The basic flow of my code is: 1. Request comes in 2. Create connection to RabbitMQ using TornadoConnection; specify a callback 3. In connection callback, create a channel, declare/bind my queue, and call basic_consume; specify a callback 4. In consume callback, close the channel and call Tornado's finish function. 5. See exception. My questions are a few: 1. Is this flow even correct? I'm not sure what the purpose of the connection callback is except that it doesn't work if I don't use it. 2. Should I be creating one AMQP connection per web request? RabbitMQ's documentation suggests that no, I should not but rather I should stick to creating just channels. But where would I create the connection, and how do I attempt reconnects should it go down briefly? 3. If I am creating one AMQP connection per Web request, where should I be closing it? Calling amqp.close() in my callback seems to screw things up even more. I will try to have some sample code up a little later, but the steps I described above lay out the consuming side of things fairly completely. I am having issues with the publishing side as well, but the consuming of queues is more pressing. Answer: It would help to see some source code, but I use this same tornado-supporting pika module without issue in more than one production project. 
You don't want to create a connection per request. Create a class that wraps all of your AMQP operations, and instantiate it as a singleton at the tornado Application level that can be used across requests (and across request handlers). I do this in a 'runapp()' function that does some stuff like this and then starts the main tornado ioloop. Here's a class called 'Events'. It's a partial implementation (specifically, I don't define 'self.handle_event' here. That's up to you. class Event(object): def __init__(self, config): self.host = 'localhost' self.port = '5672' self.vhost = '/' self.user = 'foo' self.exchange = 'myx' self.queue = 'myq' self.recv_routing_key = 'msgs4me' self.passwd = 'bar' self.connected = False self.connect() def connect(self): credentials = pika.PlainCredentials(self.user, self.passwd) parameters = pika.ConnectionParameters(host = self.host, port = self.port, virtual_host = self.vhost, credentials = credentials) srs = pika.connection.SimpleReconnectionStrategy() logging.debug('Events: Connecting to AMQP Broker: %s:%i' % (self.host, self.port)) self.connection = tornado_adapter.TornadoConnection(parameters, wait_for_open = False, reconnection_strategy = srs, callback = self.on_connected) def on_connected(self): # Open the channel logging.debug("Events: Opening a channel") self.channel = self.connection.channel() # Declare our exchange logging.debug("Events: Declaring the %s exchange" % self.exchange) self.channel.exchange_declare(exchange = self.exchange, type = "fanout", auto_delete = False, durable = True) # Declare our queue for this process logging.debug("Events: Declaring the %s queue" % self.queue) self.channel.queue_declare(queue = self.queue, auto_delete = False, exclusive = False, durable = True) # Bind to the exchange self.channel.queue_bind(exchange = self.exchange, queue = self.queue, routing_key = self.recv_routing_key) self.channel.basic_consume(consumer = self.handle_event, queue = self.queue, no_ack = True) # We should be connected if we made it this far self.connected = True And then I put that in a file called 'events.py'. My RequestHandlers and any back end code all utilize a 'common.py' module that wraps code that's useful to both (my RequestHandlers don't call any amqp module methods directly -- same for db, cache, etc as well), so I define 'events=None' at the module level in common.py, and I instantiate the Event object kinda like this: import events def runapp(config): if myapp.common.events is None: myapp.common.events = myapp.events.Event(config) logging.debug("MYAPP.COMMON.EVENTS: %s", myapp.common.events) http_server = tornado.httpserver.HTTPServer(app, xheaders=config['HTTPServer']['xheaders'], no_keep_alive=config['HTTPServer']['no_keep_alive']) http_server.listen(port) main_loop = tornado.ioloop.IOLoop.instance() logging.debug("MAIN IOLOOP: %s", main_loop) main_loop.start() Happy new year :-D
Python subprocess Popen Question: Why is it not working? :| import subprocess p = subprocess.Popen([r"snmpget","-v","1","-c","public","-Oqv","","-Ln","192.168.1.1 1.3.6.1.2.1.2.2.1.10.7"],stdout=subprocess.PIPE).communicate()[0] print p Run script: root@OpenWrt:~/python# python w.py root@OpenWrt:~/python# It's printing an empty line :| But on the same machine, from the shell: root@OpenWrt:~/python# snmpget -v 1 -c public -Oqv -Ln 192.168.1.1 1.3.6.1.2.1.2.2.1.10.7 3623120418 I know there is "-Oqv","", but without it I got an error from snmpget... Answer: I see you have an empty string in your args: ... ,"-Oqv","","-Ln", ... ^^ Is that possibly causing a problem for snmpget? You've got two arguments in one, too: "192.168.1.1 1.3.6.1.2.1.2.2.1.10.7" That should be split in two: "192.168.1.1", "1.3.6.1.2.1.2.2.1.10.7" When typing a command on the command line, the shell does this splitting for you. When calling `subprocess.Popen()` in this way, you'll have to do _all_ the argument splitting yourself. You'd get the same error if you ran: snmpget -v 1 -c public -Oqv -Ln '192.168.1.1 1.3.6.1.2.1.2.2.1.10.7'
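A convenient way to get that splitting right, as a sketch: let shlex tokenise the exact command line that already works in the shell, and capture stderr so snmpget's own error message shows up instead of a silent empty line. The empty "" argument from the question is dropped here on the assumption that it was only a workaround for the joined IP/OID argument.

    import shlex
    import subprocess

    cmd = "snmpget -v 1 -c public -Oqv -Ln 192.168.1.1 1.3.6.1.2.1.2.2.1.10.7"
    args = shlex.split(cmd)
    # ['snmpget', '-v', '1', '-c', 'public', '-Oqv', '-Ln',
    #  '192.168.1.1', '1.3.6.1.2.1.2.2.1.10.7']

    p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print(out)
    print(err)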
Python Message Box Without huge library dependancy Question: Is there a messagebox class where I can just display a simple message box without a huge GUI library or any library upon program success or failure. (My script only does 1 thing). Also, I only need it to run on Windows. Answer: You can use the [ctypes](http://docs.python.org/library/ctypes.html) library, which comes installed with Python: import ctypes MessageBox = ctypes.windll.user32.MessageBoxA MessageBox(None, 'Hello', 'Window title', 0) Above code is for Python 2.x. For Python 3.x, use `MessageBoxW` instead of `MessageBoxA`: This is the version that accepts unicode strings, which Python 3 uses by default.
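A small sketch combining both notes above: MessageBoxW accepts unicode text (so it works under Python 2 and 3), and the fourth argument is a bitmask of standard Win32 flags such as the icon constants shown here; the alert helper name is mine.

    import ctypes

    MB_OK = 0x00000000
    MB_ICONERROR = 0x00000010
    MB_ICONINFORMATION = 0x00000040

    def alert(text, title=u"Script", flags=MB_OK | MB_ICONINFORMATION):
        # Returns the id of the button the user pressed (1 == IDOK).
        return ctypes.windll.user32.MessageBoxW(None, text, title, flags)

    alert(u"The script finished successfully.", u"Status")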
How to get an HTML file using Python? Question: I am not very familiar with Python. I am trying to extract the artist names (for a start :)) from the following page: <http://www.infolanka.com/miyuru_gee/art/art.html>. How do I retrieve the page? My two main concerns are; what functions to use and how to filter out useless links from the page? Answer: Example using urlib and lxml.html: import urllib from lxml import html url = "http://www.infolanka.com/miyuru_gee/art/art.html" page = html.fromstring(urllib.urlopen(url).read()) for link in page.xpath("//a"): print "Name", link.text, "URL", link.get("href") output >> [('Aathma Liyanage', 'athma.html'), ('Abewardhana Balasuriya', 'abewardhana.html'), ('Aelian Thilakeratne', 'aelian_thi.html'), ('Ahamed Mohideen', 'ahamed.html'), ]
Increasing the depth of cProfile in Python to report more functions? Question: I'm trying to profile a function that calls other functions. I call the profiler as follows: from mymodule import foo def start(): # ... foo() import cProfile as profile profile.run('start()', output_file) p = pstats.Stats(output_file) print "name: " print p.sort_stats('name') print "all stats: " p.print_stats() print "cumulative (top 10): " p.sort_stats('cumulative').print_stats(10) I find that the profiler says all the time was spent in the function "foo()" of mymodule, instead of breaking it down into the subfunctions foo() calls, which is what I want to see. How can I make the profiler report the performance of these functions? Thanks. Answer: You need `p.print_callees()` to get a hierarchical breakdown of method calls. The output is quite self-explanatory: in the left column you can find your function of interest, e.g. `foo()`, then the right-side column shows all called sub-functions and their scoped total and cumulative times. Breakdowns for these sub-calls are also included, etc.
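For reference, a sketch of how that fits the script from the question; print_callees (and its counterpart print_callers) accepts a name filter, so the breakdown can be restricted to foo. The output filename is arbitrary, and start() is the wrapper function defined in the question.

    import cProfile as profile
    import pstats

    output_file = 'profile_output'        # any filename works here
    profile.run('start()', output_file)   # 'start' is the wrapper from the question

    p = pstats.Stats(output_file)
    p.sort_stats('cumulative')

    p.print_callees('foo')   # everything foo() calls, with their own times
    p.print_callers('foo')   # the reverse view: which functions call foo()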
How to test for situation where a specific library is missing in Python Question: I have some packages that have soft dependencies on other packages with a fall back to a default (simple) implementation. The problem is that this is very hard to test for using unit tests. I could set up separate virtual environments, but that is hard to manage. Is there a package or a way to achieve the following: have import X work as usual, but hide_package('X') import X will raise an ImportError. I keep having bugs creep into the fall-back part of my code because it is hard to test this. Answer: One way is to edit sys.path, especially if your packages install into different directories/zipfiles (e.g. if you are using eggs). Before importing, drop the ones you don't want from sys.path. If that's not feasible (because all components live in a single sys.path entry), you can hack suppression into the packages themselves. E.g. have a global variable (environment, or something patched into the sys module) list the packages whose import you want to fail: sys.suppressed_packages=set() sys.suppressed_packages.add('X') Then, in each package, explicitly raise an ImportError: # X.py import sys if 'X' in sys.suppressed_packages: raise ImportError, 'X is suppressed' Of course, instead of using the sys module, you can make your own infrastructure for that, along with a hide_package function.
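One concrete way to get the hide_package('X') behaviour described in the question, as a sketch: CPython treats a None entry in sys.modules as a deliberately unavailable module, so a later import X raises ImportError. 'X' is the placeholder package name from the question, and the code under test must perform its import after setUp runs (or be reloaded).

    import sys
    import unittest

    class FallbackTests(unittest.TestCase):
        def setUp(self):
            # A None entry in sys.modules makes "import X" raise ImportError.
            self._saved = sys.modules.pop('X', None)
            sys.modules['X'] = None

        def tearDown(self):
            if self._saved is not None:
                sys.modules['X'] = self._saved
            else:
                del sys.modules['X']

        def test_import_is_blocked(self):
            self.assertRaises(ImportError, __import__, 'X')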
Python filter list to remove certain links from html source code Question: I have html source code which I want to filter out one or more links and keep the others. I have set up my filter with "*" as the wildcard: <a*>Link1</a>‚ <a*>Link2</a>‚ or <a*>Link3</a> <a*>A bad link*</a> some text* <a*>update*</a> other text right before link <a*>click here</a> I would like to filter out every instance of the link from the html source code using python. I'm ok with loading the list into an array. I need some help with the filter. Each line break would signify a separate filter and I only want to remove the link(s) and not the text I am still very new to python and regex/beautifulsoup. Even if you could point me in the right direction, it would be greatly appreciated. Answer: To remove `<a>` tags and keep only the text not contained within those tags: >>> from BeautifulSoup import BeautifulSoup as bs >>> markup = """<a*>Link1</a> <a*>Link2</a> or <a*>Link3</a> ... <a*>A bad link*</a> ... some text* <a*>update*</a> ... other text right before link <a*>click here</a>""" >>> soup = bs(markup) >>> TAGS_TO_EXTRACT = ('a',) >>> for tag in soup.findAll(): ... if tag.name in TAGS_TO_EXTRACT: ... tag.extract() ... >>> soup or some text* other text right before link It's not clear to me if you want the text within the tags or not. If you want the text contained within the tags do something like this instead: >>> for tag in soup.findAll(): ... if tag.name in TAGS_TO_EXTRACT: ... tag.replaceWith(tag.text) ... >>> soup Link1 Link2 or Link3 A bad link* some text* update* other text right before link click here
With regards to urllib AttributeError: 'module' object has no attribute 'urlopen' Question: import re import string import shutil import os import os.path import time import datetime import math import urllib from array import array import random filehandle = urllib.urlopen('http://www.google.com/') #open webpage s = filehandle.read() #read print s #display #what i plan to do with it once i get the first part working #results = re.findall('[<td style="font-weight:bold;" nowrap>$][0-9][0-9][0-9][.][0-9][0-9][</td></tr></tfoot></table>]',s) #earnings = '$ ' #for money in results: #earnings = earnings + money[1]+money[2]+money[3]+'.'+money[5]+money[6] #print earnings #raw_input() this is the code that i have so far. now i have looked at all the other forums that give solutions such as the name of the script, which is parse_Money.py, and i have tried doing it with urllib.request.urlopen AND i have tried running it on python 2.5, 2.6, and 2.7. If anybody has any suggestions it would be really welcome, thanks everyone!! \--Matt \---_EDIT_ \--- I also tried this code and it worked, so im thinking its some kind of syntax error, so if anybody with a sharp eye can point it out, i would be very appreciative. import shutil import os import os.path import time import datetime import math import urllib from array import array import random b = 3 #find URL URL = raw_input('Type the URL you would like to read from[Example: http://www.google.com/] :') while b == 3: #get file name file1 = raw_input('Enter a file name for the downloaded code:') filepath = file1 + '.txt' if os.path.isfile(filepath): print 'File already exists' b = 3 else: print 'Filename accepted' b = 4 file_path = filepath #open file FileWrite = open(file_path, 'a') #acces URL filehandle = urllib.urlopen(URL) #display souce code for lines in filehandle.readlines(): FileWrite.write(lines) print lines print 'The above has been saved in both a text and html file' #close files filehandle.close() FileWrite.close() Answer: [it appears](http://docs.python.org/py3k/library/urllib.request.html) that the `urlopen` method is available in the `urllib.request` module and not in the `urllib` module as you're expecting. rule of thumb - if you're getting an `AttributeError`, that field/operation is not present in the particular module. **EDIT** \- Thanks to AndiDog for pointing out - this is a solution valid for Py 3.x, and not applicable to Py2.x!
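If the script has to run under both interpreter families, the usual compatibility import is sketched below; note also that a stray local file named urllib.py next to the script would shadow the standard library and produce the same AttributeError.

    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib import urlopen            # Python 2

    page = urlopen('http://www.google.com/').read()
    print(page[:200])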
Voodoo Code In Python Question: I was going through Zed Shaw's Learn Python The Hard Way and something in Chapter 15 struck me. In the extra credit exercises he asks us to delete the latter part of the code [everything after print txt.read() ] and then execute it, but the interpreter behaves as if nothing has happened. Yes, I saved the file and when I modified it by adding print statements then the changes still showed up, but the same voodoo code was executed. Why? What's going on over here? from sys import argv script, filename = argv txt = open(filename) print "Here's your file %r:" % filename print txt.read() print "I'll also ask you to type it again:" file_again = raw_input("> ") txt_again = open(file_again) print txt_again.read() Answer: You are probably executing a different file then the one you are editing.
Python: Pickle derived classes as if they were an instance of the base class Question: I want to define a base class so that when derived class instances are pickled, they are pickled as if they are instances of the base class. This is because the derived classes may exist on the client side of the pickling but not on the server side, but this is not important to the server since it only needs information from the base class. I don't want to have to dynamically create classes for every client. The base class is simply an "object handle" which contains an ID, with methods defined on the server, but I would like the client to be able to subclass the server classes and define new methods (which would only be seen by the client, but that doesn't matter). Answer: I _believe_ you can do it by giving the object a `__reduce__` method, returning a tuple, the first part of which should be `BaseClass.__new__` (this will be called when loading the object in unpickling). See the pickle documentation ([Python 2](http://docs.python.org/library/pickle.html#pickling- and-unpickling-extension-types), [Python 3](http://docs.python.org/py3k/library/pickle#pickling-class-instances)) for the full details. I haven't attempted this. Depending on what you're doing, it might be easier to use a simpler serialisation format like JSON, and have code on each side to reconstruct the relevant objects.
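A minimal sketch of the __reduce__ approach using a plain constructor call rather than BaseClass.__new__; the class names Handle and ClientHandle are invented for illustration and stand in for the "object handle" base class and a client-only subclass.

    import pickle

    class Handle(object):
        def __init__(self, obj_id):
            self.obj_id = obj_id

        def __reduce__(self):
            # Pickle every instance, including subclass instances, as a plain
            # Handle, so the server never needs the subclass definition.
            return (Handle, (self.obj_id,))

    class ClientHandle(Handle):
        def client_only_method(self):
            return "defined only on the client"

    data = pickle.dumps(ClientHandle(42))
    restored = pickle.loads(data)
    print(type(restored).__name__)   # Handle
    print(restored.obj_id)           # 42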
Django app throws spurious exception when importing views from third-party app Question: I'm working on a Django app that occasionally throws a `ViewDoesNotExist` exception when trying to import modules from a third-party app (Solango, to be specific). By "occasionally", I mean often enough to be annoying, but definitely a minority of requests. Solango is on the app's `PYTHONPATH` and can be imported reliably through the console. The error also never happens during local development, so maybe it has something to do with the server setup (the app uses Apache + mod_wsgi in daemon mode). Here's a stack trace showing the error occurring in the admin (although it occurs on pretty much every page on the site): Traceback: File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/handlers/base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "/home/nybooks/ve/lib/python2.5/site-packages/django/contrib/admin/sites.py" in root 445. return self.index(request) File "/home/nybooks/ve/lib/python2.5/site-packages/django/views/decorators/cache.py" in _wrapped_view_func 44. response = view_func(request, *args, **kwargs) File "/home/nybooks/ve/lib/python2.5/site-packages/django/contrib/admin/sites.py" in index 342. context_instance=template.RequestContext(request) File "/home/nybooks/ve/lib/python2.5/site-packages/django/shortcuts/__init__.py" in render_to_response 20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/loader.py" in render_to_string 108. return t.render(context_instance) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 178. return self.nodelist.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/loader_tags.py" in render 97. return compiled_parent.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 178. return self.nodelist.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/loader_tags.py" in render 97. return compiled_parent.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 178. return self.nodelist.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/defaulttags.py" in render 245. return self.nodelist_true.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. 
return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/defaulttags.py" in render 255. return self.nodelist_true.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/loader_tags.py" in render 24. result = self.nodelist.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render 778. bits.append(self.render_node(node, context)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/__init__.py" in render_node 791. return node.render(context) File "/home/nybooks/ve/lib/python2.5/site-packages/django/template/defaulttags.py" in render 372. url = reverse(self.view_name, args=args, kwargs=kwargs) File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/urlresolvers.py" in reverse 265. *args, **kwargs))) File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/urlresolvers.py" in reverse 238. possibilities = self.reverse_dict.getlist(lookup_view) File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/urlresolvers.py" in _get_reverse_dict 165. for name in pattern.reverse_dict: File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/urlresolvers.py" in _get_reverse_dict 173. lookups.appendlist(pattern.callback, (bits, p_pattern)) File "/home/nybooks/ve/lib/python2.5/site-packages/django/core/urlresolvers.py" in _get_callback 134. raise ViewDoesNotExist, "Could not import %s. Error was: %s" % (mod_name, str(e)) Exception Type: ViewDoesNotExist at /admin/ Exception Value: Could not import solango.views. Error was: cannot import name settings Any ideas on what's causing the problem, or at least how I can go about debugging it? Answer: Your web server is out of file descriptors. Reconfigure mod_wsgi for daemon mode.
sorting lists of list to get unique ids for last column Question: I have this data saved in a file: ['5',60680,60854,'gene_id "ENS1"'] ['5',59106,89211,'gene_id "ENS1"'] ['5',58686,58765,'gene_id "ENS1"'] ['5',80835,93381,'gene_id "ENS2"'] ['5',55555,92223,'gene_id "ENS2"'] ['5',73902,74276,'gene_id "ENS2"'] I need help with python to get an output which ensures that items in the 4th column appear only when the second column has the minimum value and the third column has a maximum value within a 4th column item. So I want my output to look like this: ['5',58686,89211,'gene_id "ENS1"'] ['5',55555,93381,'gene_id "ENS2"'] Each item in the 4th column should only appear once. How can I also get rid of the [] around the data. Thank you. Answer: >>> from itertools import groupby >>> for i, j in groupby(lst, key=lambda x: x[3]): t = list(zip(*j)) print(t[0][0], min(t[1]), max(t[2]), t[3][0]) 5 58686 89211 gene_id "ENS1" 5 55555 93381 gene_id "ENS2" It's not clear, what do you mean by getting rid of `[]`, these are just syntax for python lists.
Reading request parameters in Google App Engine with Java Question: I'm modifying the default project that Eclipse creates when you create a new project with Google Web Toolkit and Google App Engine. It is the GreetingService sample project. How can I read a request parameter in the client's .java file? For example, the current URL is `http://127.0.0.1:8887/MyProj.html?gwt.codesvr=127.0.0.1&foo=bar` and I want to use something like `request.getParameter("foo") == "bar"`. I saw that the documentation mentions the [Request](http://code.google.com/appengine/docs/python/tools/webapp/requestclass.html) class for Python, but I couldn't find the equivalent for Java. It's listed as being in the `google.appengine.ext.webapp` package, but if I try importing that into my .java file (with a `com.` prefix), it says that it can't resolve the `ext` part. Answer: [Google App Engine](http://code.google.com/appengine/docs/java/runtime.html#Requests_and_Servlets) uses the [Java Servlet API](http://download.oracle.com/javaee/1.4/api/javax/servlet/Servlet.html). GWT's [RemoteServiceServlet](http://google-web- toolkit.googlecode.com/svn/javadoc/1.5/com/google/gwt/user/server/rpc/RemoteServiceServlet.html) provides access to the request through: `[HttpServletRequest](http://download.oracle.com/javaee/1.4/api/javax/servlet/http/HttpServletRequest.html) request = this.getThreadLocalRequest();` from which you can call either `request.getQueryString()`, and interpret the query string any way you desire, or you can call `request.getParameter("foo")`
Python GTK "Getting started" tutorial problem Question: I have a problem with compiling a basic and really simple example of PyGTK usage listed on pygtk's website. This is the first example from this site: <http://www.pygtk.org/pygtk2tutorial/ch-GettingStarted.html> My code looks like this: #!/usr/bin/env python # example gtk.py import pygtk pygtk.require('2.0') import gtk class Base: def __init__(self): self.window = gtk.Window(gtk.WINDOW_TOPLEVEL) self.window.show() def main(self): gtk.main() print __name__ if __name__ == "__main__": base = Base() base.main() And after calling python gtk.py, i'm getting the following error: > gtk **main** Traceback (most recent call last): File "gtk.py", line 19, in > base = Base() File "gtk.py", line 11, in **init** self.window = > gtk.Window(gtk.WINDOW_TOPLEVEL) AttributeError: 'module' object has no > attribute 'Window' I've found an info somewhere that it shpuld be fixed by installing PyGTK from source. I did it but it changed nothing. The message is still the same. I'm using ubuntu 10.10 Have you any ideas on what can be wrong ? Thanks for any help! Mike Answer: Yep, it seems like you might have named your script "gtk.py". Which is a bad idea for what should be fairly obvious reasons!
Python: saving objects and using pickle. Error using pickle.dump Question: Hello, I have an error and I don't know the reason: >>> class Fruits:pass ... >>> banana = Fruits() >>> banana.color = 'yellow' >>> banana.value = 30 >>> import pickle >>> filehandler = open("Fruits.obj",'w') >>> pickle.dump(banana,filehandler) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python31\lib\pickle.py", line 1354, in dump Pickler(file, protocol, fix_imports=fix_imports).dump(obj) TypeError: must be str, not bytes >>> I don't know how to solve this error because I don't understand it. Thank you so much. Answer: You have to open your **filehandler** in binary mode, use **wb** instead of **w** : filehandler = open("Fruits.obj","wb")
Query by non-ascii charachters Question: I am using Python, on Google App Engine platform. Let's say I have in my Data Store the following code : class names(db.Model): name = db.StringProperty(multiline=True) and there are names like : name1 = Beyoncé name2 = El Súper Clásico with non-ascii charachters. When I make a query like : q_1 = names.all().filter('name =', name1) It doesn't work, the comparison is wrong. Do you have any idea how can I solve this problem? I tried encoding the "name" to UTF-8, but it didn't work also. Thanks, Gori Answer: There should be no problems with exact matches when correctly decoding input strings (that you get from web request parameters) and correctly encoding output strings (that you save in GAE data storage) in Unicode. I've tried this snippet in the GAE SDK Interactive Console and it works: from google.appengine.ext import db class names(db.Model): name = db.StringProperty(multiline=True) some_name = 'Beyonc\xc3\xa9'.decode('utf-8') # same as: some_name = u'Beyoncé' # same as: some_name = u'Beyonc\u00e9' n = names(name=some_name) n.put() q = names.all().filter('name =', some_name) print q.get().name.encode('utf-8') # prints Beyoncé You should debug what is the raw value of the strings you are comparing, i.e., the string saved in the storage and the string passed to the query. I recommend reading this [article about Unicode by Joel Spolsky](http://www.joelonsoftware.com/articles/Unicode.html) and the [Python Unicode HOWTO](http://docs.python.org/howto/unicode.html) if you're not familiar with handling Unicode strings. In addition to this, if you're running search queries that should match Unicode characters like `u'é'` when input is `'e'`, consider comparing normalized strings: some_name = u'El S\u00faper Cl\u00e1sico' # El Súper Clásico normalized_name = unicodedata.normalize('NFKD', some_name).encode('ascii', 'ignore') # El Super Clasico
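To make the accent-insensitive match usable in a datastore query (rather than only an in-memory comparison), one sketch is to store a folded copy alongside the original and filter on that; the name_normalized property and the fold helper are additions of mine, not part of the question's model.

    import unicodedata
    from google.appengine.ext import db

    def fold(s):
        # NFKD-decompose, drop the combining accents, lower-case.
        return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').lower()

    class names(db.Model):
        name = db.StringProperty(multiline=True)
        name_normalized = db.StringProperty()   # folded copy used for lookups

    n = names(name=u'Beyonc\u00e9')
    n.name_normalized = fold(n.name)
    n.put()

    q = names.all().filter('name_normalized =', fold(u'Beyonce'))
    print(q.get().name.encode('utf-8'))   # Beyoncé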
django.db.utils.DatabaseError Question: I'm setting up a django model to store regions, like USA, Germany, etc. I made the region name unique for the table. I have a script that populates the database from a list and if there is a duplicate region name IntegrityError is thrown as expected but then another error happens and I can't tell why from the error message. Any ideas? Thanks! django.db.utils.DatabaseError: current transaction is aborted, commands ignored until end of transaction block Model: class Region(models.Model): name = models.CharField(max_length=512, unique=True) def __unicode__(self): return self.name Populate code: try: Region(name=server['locale']).save() print 'Added region: %(locale)s' % server except IntegrityError: pass I've confirmed that the IntegrityError is occuring but then I get this error which I dont expect: File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/base.py", line 456, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/base.py", line 549, in save_base result = manager._insert(values, return_id=update_pk, using=using) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/manager.py", line 195, in _insert return insert_query(self.model, values, **kwargs) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/query.py", line 1518, in insert_query return query.get_compiler(using=using).execute_sql(return_id) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/sql/compiler.py", line 788, in execute_sql cursor = super(SQLInsertCompiler, self).execute_sql(None) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/models/sql/compiler.py", line 732, in execute_sql cursor.execute(sql, params) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/backends/util.py", line 15, in execute return self.cursor.execute(sql, params) File "/home/daedalus/webapps/wowstatus/lib/python2.6/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute return self.cursor.execute(query, args) django.db.utils.DatabaseError: current transaction is aborted, commands ignored until end of transaction block Answer: You should reset your db state if something fails for example: from django.db import transaction @transaction.commit_manually def Populate(): try: Region(name=server['locale']).save() print 'Added region: %(locale)s' % server except IntegrityError: transaction.rollback() else: transaction.commit()
What is the best way to keep an almost static data for web application? Question: I'm building a web application in python. A part of this application is working with the data that can be described as follows: Symbol Begin Date End Date AAPL Jan-1-1985 Dec-27-2010 ... The data is somewhat static - it will be periodically updated, that is: new entries may be added, and the "End Date" field can be updated for all entries. Now, the question: given the more-or-less static nature of the dataset, what is the best way of storing it and working with it? "Working" means fetching random lines, hopefully more than few times per second. I can do it with XML file, with SQL DB or SQLite, with JSON object file and some kind of python object in memory. What are the cons and pros of different solutions? I'll be thankful for explanations and for the edge cases (such as 'until 10times/sec XML file is the best, after that SQL DB). _Update: Thanks for all the answers! Just a smallish update: currently the set is around 3K lines. It may grow to, say, 15K lines in a year. Access pattern: updates are regular, once a day, for the complete set; so both adding lines and updating end date will be done at once. Fetching a random line is indeed by the symbol, could be done few times a second._ Answer: I would generate a Python source file every time the data changes, and have that file primarily consisting of a dictionary. This assumes that lookup is by symbol, and that the data readily fit into memory. data = { "AAPL": ("Jan-1-1985", "Dec-27-2010"), ... } To bulk-update the end date, use pprint.pprint, overwriting the entire file. **Edit** : To illustrate how such a file can be written, here is a script that fills it out with random data import random, string, pprint def randsym(): res =[] for i in range(4): res.append(random.choice(string.uppercase)) return ''.join(res) months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] days = range(1,29) years = range(1980,2010) def randdate(): return "%s-%s-%s" % (random.choice(months), random.choice(days), random.choice(years)) data = {} for i in range(15000): data[randsym()] = (randdate(), "Dec-27-2010") with open("data.py", "w") as f: f.write("data=") f.write(pprint.pformat(data)) To access the data, do `from data import data`.
Troubleshooting Facebook's graph.put_object that returns error 400 Question: I am using Facebook Python's SDK along with Google App Engine, and making a call to do a checkin: graph.put_object("me", "checkins", message="Hello, world", place="165039136840558", coordinates='{"latitude":"38.2454064", "longitude":"-122.0434404"}') However, this throws an error 400 Bad Request and I don't seem to be able to try catch it so I can have the important information. On a bad request, Facebook should return, an object like below which can help troubleshoot and address the issue, but I am not sure how I can retrieve this object: { "error": { "type" : "OAuthException", "message" : "An active access token must be used to query information about the current user." } } [Edit] I am temporarily able to figure out the issue by: Logging the Post Data logging.info("LOG" + str(post_data)) and then using a REST client like the extension for Firefox to make the request again. The response gives me the information I need to proceed. However, it would have been better if I can obtain the error messages within my app. Answer: Like the error says, you need to create and consume an access token: Here's some more info. Sorry I can't "comment" yet and am forced to answer. <http://benbiddington.wordpress.com/2010/04/23/facebook-graph-api-getting- access-tokens/>
Google App Engine: how to send html using send_mail Question: I have a app with a kind of rest api that I'm using to send emails . However it currently sends only text email so I need to know how to modify it and make it send html . Below is the code : from __future__ import with_statement #!/usr/bin/env python # import cgi import os import logging import contextlib from xml.dom import minidom from xml.dom.minidom import Document import exceptions import warnings import imghdr from google.appengine.api import images from google.appengine.api import users from google.appengine.ext import db from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext.webapp import template from google.appengine.api import mail import wsgiref.handlers # START Constants CONTENT_TYPE_HEADER = "Content-Type" CONTENT_TYPE_TEXT = "text/plain" XML_CONTENT_TYPE = "application/xml" XML_ENCODING = "utf-8" """ Allows you to specify IP addresses and associated "api_key"s to prevent others from using your app. Storage and Manipulation methods will check for this "api_key" in the POST/GET params. Retrieval methods don't use it (however you could enable them to use it, but maybe rewrite so you have a "read" key and a "write" key to prevent others from manipulating your data). Set "AUTH = False" to disable (allowing anyone use your app and CRUD your data). To generate a hash/api_key visit https://www.grc.com/passwords.htm To find your ip visit http://www.whatsmyip.org/ """ AUTH = { '000.000.000.000':'JLQ7P5SnTPq7AJvLnUysJmXSeXTrhgaJ', } # END Constants # START Exception Handling class Error(StandardError): pass class Forbidden(Error): pass logging.getLogger().setLevel(logging.DEBUG) @contextlib.contextmanager def mailExcpHandler(ctx): try: yield {} except (ValueError), exc: xml_error_response(ctx, 400 ,'app.invalid_parameters', 'The indicated parameters are not valid: ' + exc.message) except (Forbidden), exc: xml_error_response(ctx, 403 ,'app.forbidden', 'You don\'t have permission to perform this action: ' + exc.message) except (Exception), exc: xml_error_response(ctx, 500 ,'system.other', 'An unexpected error in the web service has happened: ' + exc.message) def xml_error_response(ctx, status, error_id, error_msg): ctx.error(status) doc = Document() errorcard = doc.createElement("error") errorcard.setAttribute("id", error_id) doc.appendChild(errorcard) ptext = doc.createTextNode(error_msg) errorcard.appendChild(ptext) ctx.response.headers[CONTENT_TYPE_HEADER] = XML_CONTENT_TYPE ctx.response.out.write(doc.toxml(XML_ENCODING)) # END Exception Handling # START Helper Methods def isAuth(ip = None, key = None): if AUTH == False: return True elif AUTH.has_key(ip) and key == AUTH[ip]: return True else: return False # END Helper Methods # START Request Handlers class Send(webapp.RequestHandler): def post(self): """ Sends an email based on POST params. It will queue if resources are unavailable at the time. 
Returns "Success" POST Args: to: the receipent address from: the sender address (must be a registered GAE email) subject: email subject body: email body content """ with mailExcpHandler(self): # check authorised if isAuth(self.request.remote_addr,self.request.POST.get('api_key')) == False: raise Forbidden("Invalid Credentials") # read data from request mail_to = str(self.request.POST.get('to')) mail_from = str(self.request.POST.get('from')) mail_subject = str(self.request.POST.get('subject')) mail_body = str(self.request.POST.get('body')) mail.send_mail(mail_from, mail_to, mail_subject, mail_body) self.response.headers[CONTENT_TYPE_HEADER] = CONTENT_TYPE_TEXT self.response.out.write("Success") # END Request Handlers # START Application application = webapp.WSGIApplication([ ('/send', Send) ],debug=True) def main(): run_wsgi_app(application) if __name__ == '__main__': main() # END Application Answer: Have a look to the [Email message fields](http://code.google.com/intl/it/appengine/docs/python/mail/emailmessagefields.html) of the `send_mail` function. Here is the parameter you need: > **html** > An HTML version of the body content, for recipients that prefer HTML email. You should add the `html` input parameter like this: #Your html body mail_html_body = '<h1>Hello!</h1>' # read data from request mail_to = str(self.request.POST.get('to')) mail_from = str(self.request.POST.get('from')) mail_subject = str(self.request.POST.get('subject')) mail_body = str(self.request.POST.get('body')) mail.send_mail(mail_from, mail_to, mail_subject, mail_body, html = mail_html_body ) #your html body
How come my Python code doesn't work? Question: from celery.decorators import task from celery.decorators import task @task() def add(x, y): r = open("./abc.txt","w") r.write("sdf") r.close() return x + y That's my tasks.py file. >>> import tasks >>> r = tasks.add.delay(3,5) >>> r.result 8 As you can see, the function works. However, **the file does not create**. Why? I've tried changing multiple file paths, due to possible permission issues. but no luck. Answer: If the file was not being written, you would get an exception, so the function cannot possibly complete. Since the function is returning 8, it follows that the file is being written somewhere. Perhaps the file is written in a different directory to the one you are expecting The only other possibility I can think of is that the add function that is being run is not the one that you have shown here
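If the file is meant to land next to the tasks module no matter which working directory the worker was started from, a sketch using an absolute path (same old-style celery.decorators import as the question):

    import os
    from celery.decorators import task

    HERE = os.path.dirname(os.path.abspath(__file__))

    @task()
    def add(x, y):
        # Write next to tasks.py instead of whatever cwd the worker happens to have.
        with open(os.path.join(HERE, "abc.txt"), "w") as f:
            f.write("sdf")
        return x + y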
How to get current date and time from DB using SQLAlchemy Question: I need to retrieve what's the **current date and time for the database I'm connected** with SQLAlchemy (not date and time of the machine where I'm running Python code). I've seen this functions, but they don't seem to do what they say: >>> from sqlalchemy import * >>> print func.current_date() CURRENT_DATE >>> print func.current_timestamp() CURRENT_TIMESTAMP Moreover it seems they don't need to be binded to any SQLAlchemy session or engine. It makes no sense... Thanks! Answer: I foud the solution: these functions cannot be used in the way I used (`print...`), but need to be called inside of the code that interacts with the database. For instance: print select([my_table, func.current_date()]).execute() or assigned to a field in an insert operation. Accidentally I discovered that exists at least a couple of parameters for these functions: * `type_` that indicates the type of the value to return, I guess * `bind` that indicates a binding to an SQLAlchemy engine Two examples of use: func.current_date(type_=types.Date, bind=engine1) func.current_timestamp(type_=types.Time, bind=engine2) Anyway my tests seems to say these parameters are not so important.
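Putting that together, a runnable sketch in the column-list select() style of the SQLAlchemy versions current when this was written; the SQLite URL is only a stand-in for whatever engine or session the application is actually bound to.

    from sqlalchemy import create_engine, func, select

    engine = create_engine('sqlite://')   # stand-in DSN; use your real engine here
    conn = engine.connect()

    # Evaluated by the database server, not by the Python process.
    db_now = conn.execute(select([func.current_timestamp()])).scalar()
    print(db_now)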
Python urllib2 http -1 error Question: I have this code, that supposed to work but I'm getting strange errors, for other user this code works fine. # -*- coding: utf-8 -*- import re, sys import urllib2 import urllib2_file user_hash='MTggMzc6T1dZgggggzWXpWbVptggggHTXlOV1F5WWgggggggWT0%3D' text_file = 'sveikinimas.txt' postdata = { 'type': '40', 'description': '', 'descr': 'Pelėsiais ir kerpėm apaugus aukštai\nTrakų štai garbinga pilis\n....', 'filetype': '2', 'name': 'Su šventėmis!', 'file': {'fd': open(text_file), 'filename': text_file}, 'nfo': '' } req = urllib2.Request('http://www.linkomanija.net/takefreak.php',postdata) req.add_header('Cookie', 'login=' + user_hash) print req response = urllib2.urlopen(req) print response html = response.read() f = open("out.html", "wb") f.write(html) f.close() This code works for another user, but I'm getting strange error: <urllib2.Request instance at 0x0387C698> Traceback (most recent call last): File "C:\Users\drakaz\Desktop\lm\hello.py", line 25, in <module> response = urllib2.urlopen(req) File "C:\Python26\lib\urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "C:\Python26\lib\urllib2.py", line 392, in open response = self._open(req, data) File "C:\Python26\lib\urllib2.py", line 410, in _open '_open', req) File "C:\Python26\lib\urllib2.py", line 370, in _call_chain result = func(*args) File "C:\Users\drakaz\Desktop\lm\urllib2_file.py", line 207, in http_open return self.do_open(httplib.HTTP, req) File "C:\Users\drakaz\Desktop\lm\urllib2_file.py", line 298, in do_open return self.parent.error('http', req, fp, code, msg, hdrs) File "C:\Python26\lib\urllib2.py", line 436, in error return self._call_chain(*args) File "C:\Python26\lib\urllib2.py", line 370, in _call_chain result = func(*args) File "C:\Python26\lib\urllib2.py", line 519, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error -1: Answer: The error message is from `urllib2_file`. I believe, it is not proper. You may just wish to comment it and try if it that module is not being used. I also see that in the post data, you are having another dictionary inside. It should not be the case. Just have the contents of the file. Read the file and have it the content as the string. No dictionaries, no file names.
Fastest way to swap elements in Python list Question: Is there any any faster way to swap two list elements in Python than L[a], L[b] = L[b], L[a] or would I have to resort to [Cython](http://cython.org/) or [Weave](http://www.scipy.org/Weave) or the like? Answer: Looks like the Python compiler optimizes out the temporary tuple with this construct: ### code: import dis def swap1(): a=5 b=4 a, b = b, a def swap2(): a=5 b=4 c = a a = b b = c print 'swap1():' dis.dis(swap1) print 'swap2():' dis.dis(swap2) ### output: swap1(): 6 0 LOAD_CONST 1 (5) 3 STORE_FAST 0 (a) 7 6 LOAD_CONST 2 (4) 9 STORE_FAST 1 (b) 8 12 LOAD_FAST 1 (b) 15 LOAD_FAST 0 (a) 18 ROT_TWO 19 STORE_FAST 0 (a) 22 STORE_FAST 1 (b) 25 LOAD_CONST 0 (None) 28 RETURN_VALUE swap2(): 11 0 LOAD_CONST 1 (5) 3 STORE_FAST 0 (a) 12 6 LOAD_CONST 2 (4) 9 STORE_FAST 1 (b) 13 12 LOAD_FAST 0 (a) 15 STORE_FAST 2 (c) 14 18 LOAD_FAST 1 (b) 21 STORE_FAST 0 (a) 15 24 LOAD_FAST 2 (c) 27 STORE_FAST 1 (b) 30 LOAD_CONST 0 (None) 33 RETURN_VALUE Two loads, a [`ROT_TWO`](http://docs.python.org/library/dis.html#opcode- ROT_TWO), and two saves, versus three loads and three saves. You are unlikely to find a faster mechanism.
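The most direct way to settle a "fastest way" question is to measure it; a quick timeit sketch comparing the tuple swap with an explicit temporary (list size and indices chosen arbitrarily):

    import timeit

    setup = "L = list(range(1000))"

    print(timeit.timeit("L[10], L[500] = L[500], L[10]", setup=setup))
    print(timeit.timeit("t = L[10]; L[10] = L[500]; L[500] = t", setup=setup))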
Python imports issue Question: I have a Utilities module which defines a few functions which are repeatedly used and am also adding in some constants. I'm running into trouble importing these constants though... Let's say I'm working in class A, and I have a class in my constants also named A from Utils.Constants import A as DistinctA class A(object): .... Implementation .... some_var = DistinctA.SOME_CONSTANT class Utils(object): class Constants(object): class A(object): SOME_CONSTANT = "Constant" I'm probably making this too much like Java, so if so just yell / smack my knuckles with a ruler. When I attempt to import that class, I get an error that there is no module named Constants. What's this python newbie missing? Answer: The identifier after 'from' must point to a module; you can't refer to a class. While I'm not qualified to say whether your nested classes are 'pythonic', I have never seen it done like that before. I'd be more inclined to create a constants.py module that contains the `A` class. Then you could do this: from constants import A as DistinctA If you really want those constants to live inside utils, you could make utils a package: utils/ utils/__init__.py utils/constants.py Then you can do: from utils.constants import A as DistinctA
Web Python Question Question: Can anyone assist me in getting a Python script running on Hostgator Shared hosting? I work with PHP mostly, but have taken a liking to Python, and would like to try to get it going on the web. The only way I've ever ran Python is with either the interpreter, or through a terminal, with >Python file.py. I tried just uploading a hello world file to the webserver, but all it outputs is the script source. I talked with hostgator, but all they could tell me was I need to use a dispatcher, which I cannot seem to find an example of. All I want to know, is how to make a <p>Hello</p> output to the browser. Thanks, and sorry if I'm rambley, I've been Googling this off and on all week now. Answer: Well, I got [this](http://support.hostgator.com/articles/getting- started/general-help/perl-and-python- http://docs.python.org/library/cgiscripts) from Hostgator's own support site. Assuming your host is running Python 2.x, then you can adapt the linked-to script as follows: #!/usr/bin/python print "Content-type: text/html\r\n\r\n" print "<html><head>" print "<title>CGI Test</title>" print "</head><body>" print "<p>Test page using Python</p>" print "</body></html>" and put it in your `cgi-bin` folder with permissions of 755. **Update:** In terms of "getting around" the cgi-bin folder: that'll depend on options your hosting package allows. Look at [Bottle](http://bottle.paws.de/) for a simple dispatcher which fits in a single Python module. You can deploy it using CGI, import bottle # Put your bottle code here, following the docs on the Bottle site bottle.run(server=bottle.CGIServer) **Further update:** Apart from the Bottle docs, I suggest you read the [Python docs about CGI](http://docs.python.org/library/cgi).
abstract test case using python unittest Question: Is it possible to create an abstract `TestCase` that will have some test_* methods, but where this `TestCase` won't be called and those methods will only be used in subclasses? I think I am going to have one abstract `TestCase` in my test suite and it will be subclassed for a few different implementations of a single interface. This is why all test methods are the same; only one internal method changes. How can I do it in an elegant way? Answer: I didn't quite understand what you plan to do -- the rule of thumb is "not to be smart with tests" - just have them there, plainly written. But to achieve what you want, if you inherit from unittest.TestCase, whenever you call unittest.main() your "abstract" class will be executed - I think this is the situation you want to avoid. Just do this: create your "abstract" class inheriting from "object", not from TestCase. And for the actual "concrete" implementations, just use multiple inheritance: inherit from both unittest.TestCase and from your abstract class.

    import unittest

    class Abstract(object):
        def test_a(self):
            print "Running for class", self.__class__

    class Test(Abstract, unittest.TestCase):
        pass

    unittest.main()

**update**: reversed the inheritance order - `Abstract` first, so that its definitions are not overridden by `TestCase` defaults, as pointed out in the comments below.
Python and the self parameter Question: I'm having some issues with the self parameter, and some seemingly inconsistent behavior in Python is annoying me, so I figure I better ask some people in the know. I have a class, `Foo`. This class will have a bunch of methods, `m1`, through `mN`. For some of these, I will use a standard definition, like in the case of `m1` below. But for others, it's more convinient to just assign the method name directly, like I've done with `m2` and `m3`. import os def myfun(x, y): return x + y class Foo(): def m1(self, y, z): return y + z + 42 m2 = os.access m3 = myfun f = Foo() print f.m1(1, 2) print f.m2("/", os.R_OK) print f.m3(3, 4) Now, I know that `os.access` does not take a `self` parameter (seemingly). And it still has no issues with this type of assignment. However, I cannot do the same for my own modules (imagine `myfun` defined off in `mymodule.myfun`). Running the above code yields the following output: 3 True Traceback (most recent call last): File "foo.py", line 16, in <module> print f.m3(3, 4) TypeError: myfun() takes exactly 2 arguments (3 given) The problem is that, due to the framework I work in, I cannot avoid having a class `Foo` at least. But I'd like to avoid having my `mymodule` stuff in a dummy class. In order to do this, I need to do something ala def m3(self,a1, a2): return mymodule.myfun(a1,a2) Which is hugely redundant when you have like 20 of them. So, the question is, either how do I do this in a totally different and obviously much smarter way, or how can I make my own modules behave like the built-in ones, so it does not complain about receiving 1 argument too many. Answer: `os.access()` is a built-in function, in the sense that it's part of an extension module written in C. When the class is defined, Python doesn't recognize `m2` as a method because it's the wrong type — methods are Python functions, not C functions. `m3`, however, is a Python function, so it's recognized as a method and handled as such. In other words, it's `m2` that's exceptional here, not `m3`. One simple way to do what you want would be to make `m3` a static method: m3 = staticmethod(myfun) Now the interpreter knows never to try and pass `myfun()` a `self` parameter when it's called as the `m3` method of a Foo object.
How do I make a GUI that behaves like this? Question: This is difficult to explain without illustration, so - behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work: ![GUI mockup](http://i.stack.imgur.com/12nPd.png) I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important), and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are: * How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)? * How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, BTW?) doesn't seem to do anything. * How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via `Dock = DockStyle.Top`)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible. * How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)? * Is there a built-in control suitable for hex editing or am I going to have to build something myself? ... And should I be using something else (perhaps something more capable?) I've heard WPF mentioned, but I understand that this involves XML and I really want to build a GUI from XML - I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout. Answer: If you're willing to use Java/Swing that basic form should be pretty easy. I'd like to say that the Netbeans IDE has a pretty good WYSIWYG GUI editor. Even though it is pretty good I'd by lying if I said that is all there is to it. You have to understand Swing to get things the way you want or you'll be bashing your head against the wall. It's free. Most of what you ask for are properties of the GUI builder, you'll need to at least look before asking specifics. I've had very little experience with Visual Basic, something about the language aggravated me but there is nothing I've seen easier to slap a simple GUI together with.
Python glob multiple filetypes Question: Is there a better way to use glob.glob in python to get a list of multiple file types such as .txt, .mdown, and .markdown? Right now I have something like this:

    projectFiles1 = glob.glob( os.path.join(projectDir, '*.txt') )
    projectFiles2 = glob.glob( os.path.join(projectDir, '*.mdown') )
    projectFiles3 = glob.glob( os.path.join(projectDir, '*.markdown') )

Answer: Maybe there is a better way, but how about:

    >>> import glob
    >>> types = ('*.pdf', '*.cpp') # the tuple of file types
    >>> files_grabbed = []
    >>> for files in types:
    ...     files_grabbed.extend(glob.glob(files))
    ...
    >>> files_grabbed   # the list of pdf and cpp files

Perhaps there is another way, so wait in case someone else comes up with a better answer.
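A more compact variant of the loop above — a sketch that reuses the `projectDir` variable from the question — collapses the per-extension calls into a single expression with itertools.chain:

    import glob
    import itertools
    import os

    projectDir = '.'  # stand-in for the question's projectDir
    patterns = ('*.txt', '*.mdown', '*.markdown')

    # chain the per-pattern glob results into one flat list
    project_files = list(itertools.chain.from_iterable(
        glob.glob(os.path.join(projectDir, p)) for p in patterns))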
Which is quicker? Memcache or file query? (using maxmind geoip.dat file) Question: I'm using Python on Appengine and am looking up the geolocation of an IP address like this: import pygeoip gi = pygeoip.GeoIP('GeoIP.dat') Location = gi.country_code_by_addr(self.request.remote_addr) (pygeoip can be found here: <http://code.google.com/p/pygeoip/>) I want to geolocate each page of my app for a user so currently I lookup the IP address once then store it in memcache. My question - which is quicker? Looking up the IP address each time from the .dat file or fetching it from memcache? Are there any other pros/cons I need to be aware of? For general queries like this, is there a good guide to teach me how to optimise my code and run speed tests myself? I'm new to python and coding in general so apologies if this is a basic concept. Thanks! Tom **EDIT:** Thanks for the responses, memcache seems to be the right answer. I think that Nick and Lennart are suggesting that I add the whole gi variable to memcache. I think this is possible. FYI - the whole GeoIP.dat file is just over 1MB so not that large. Answer: What takes time there is rather loading the database from the dat file. Once you have that in memory, the lookup time is not significant. So if you can keep the gi variable in memory that seems the best solution. If you can't you probably can't use memcached either.
List of names and their numbers needed to be sorted .TXT file Question: I have a list of names (never over 100 names) with a value for each of them, either 3 or 4 digits.

> john2E=1023
> mary2E=1045
> fred2E=968

And so on... They're formatted exactly like that in the .txt file. I have Python and Excel, and am also willing to download whatever I need. **What I want to do is** sort all the names according to their values in descending order so the highest is on top. I've tried to use Excel by replacing the '2E=' with ',' so I can have name,value pairs, then importing the data so each goes into a separate column, but I still couldn't sort them any other way than A to Z. Help is much appreciated, I did take my time to look around before posting this. Answer: Replace the "2E=" with a tab character so that the data is displayed in Excel in two columns. Then sort on the value column.
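Since the question mentions Python is available, here is a minimal sketch that skips Excel entirely — it assumes the file is called names.txt (a made-up name) and that every line looks exactly like john2E=1023, as stated:

    # read "name2E=value" lines and sort them by value, highest first
    pairs = []
    with open('names.txt') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            name, value = line.split('2E=')
            pairs.append((name, int(value)))

    pairs.sort(key=lambda item: item[1], reverse=True)

    for name, value in pairs:
        print name, value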
Strange PYTHONPATH problem Question: I recently updated my Python installation to 2.7 (previously 2.5), and I've noticed a strange problem where I cannot import certain modules that I created. I had no problem before. Normally, I edit the PYTHONPATH and add the directory from which I want to import modules. For some strange reason, I can no longer import. I checked my path in PYTHONPATH, and it looked correct. When I display sys.path in an interpreter, I see the current directory prepended to every PYTHONPATH entry (i.e. 'c:\blah\blah c:\path\to\module'). If I edit sys.path by appending the directory that I want at the end of the list, everything works fine (i.e. 'c:\path\to\module\'). I never had to do this before. I'm on Windows 7 on two computers. Has anyone else had similar trouble? Answer: Think I found the problem. Somehow I had added some of the Python standard library directories into PYTHONPATH. Once I removed those, everything worked fine.
Django book question: Database config problem. Getting Operational Error Question: I am on chapter 5 of the Django book and trying to proceed, but I'm stuck on one part. Link to exact chapter: <http://www.djangobook.com/en/2.0/chapter05/> Problem: testing database configurations. When I:

1. Run python manage.py shell from within the mysite project directory
2. Then in the shell type these commands to test the database configuration

    from django.db import connection
    cursor = connection.cursor()

I get the following error when I do all of the above:

> "OperationalError: unable to open database file"

Was I supposed to create some sort of file within the new directory or folder I created called "MyDB"? If so, how do I do this? Below is how my DB configurations are set:

Configurations:

    DATABASES = {
        'default': {
            'ENGINE': 'sqlite3',
            'NAME': 'C:/Python27/MyDB',
            'USER': '',
            'PASSWORD': '',
            'HOST': '',
            'PORT': '',
        }
    }

Answer: My first guess is a permissions error. What are the permissions on the Python27 directory, and do they correspond to the user you are running the python manage command as? I typically put the database in the project directory, not the Python directory.
python equivalent of filter() getting two output lists (i.e. partition of a list) Question: Let's say I have a list, and a filtering function. Using something like >>> filter(lambda x: x > 10, [1,4,12,7,42]) [12, 42] I can get the elements matching the criterion. Is there a function I could use that would output two lists, one of elements matching, one of the remaining elements? I could call the `filter()` function twice, but that's kinda ugly :) **Edit:** the order of elements should be conserved, and I may have identical elements multiple times. Answer: Try this: def partition(pred, iterable): trues = [] falses = [] for item in iterable: if pred(item): trues.append(item) else: falses.append(item) return trues, falses Usage: >>> trues, falses = partition(lambda x: x > 10, [1,4,12,7,42]) >>> trues [12, 42] >>> falses [1, 4, 7] There is also an implementation suggestion in [itertools recipes](http://docs.python.org/dev/library/itertools.html#itertools-recipes): from itertools import filterfalse, tee def partition(pred, iterable): 'Use a predicate to partition entries into false entries and true entries' # partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9 t1, t2 = tee(iterable) return filterfalse(pred, t1), filter(pred, t2) The recipe comes from the Python 3.x documentation. In Python 2.x `filterfalse` is called `ifilterfalse`.
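If you later need more than two buckets, the same single-pass idea generalizes by grouping on whatever the function returns — a small sketch (the group_by name is just for illustration) using collections.defaultdict:

    from collections import defaultdict

    def group_by(func, iterable):
        # one pass; items land in the bucket keyed by func's result
        groups = defaultdict(list)
        for item in iterable:
            groups[func(item)].append(item)
        return groups

    groups = group_by(lambda x: x > 10, [1, 4, 12, 7, 42])
    trues, falses = groups[True], groups[False]
    print trues   # [12, 42]
    print falses  # [1, 4, 7]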
IronPython and Nodebox in C# Question: **My plan:** I'm trying to setup my C# project to communicate with Nodebox to call a certain function which populates a graph and draws it in a new window. **Current situation: [fixed... see Update2]** I have already included all python-modules needed, but im still getting a > `Library 'GL' not found` it seems that the `pyglet` module needs a reference to `GL/gl.h`, but can't find it due to IronPython behaviour. **Requirement:** The project needs to stay as small as possible without installing new packages. Thats why i have copied all my modules into the project-folder and would like to keep it that or a similar way. **My question:** Is there a certain workaround for my problem or a fix for the library-folder missmatch. Have read some articles about `Tao-Opengl` and `OpenTK` but can't find a good solution. **_Update1:_** Updated my sourcecode with a small pyglet window-rendering example. Problem is in pyglet and referenced c-Objects. How do i include them in my c# project to be called? No idea so far... experimenting alittle now. Keeping you updated. **_SampleCode C#:_** ScriptRuntimeSetup setup = Python.CreateRuntimeSetup(null); ScriptRuntime runtime = new ScriptRuntime(setup); ScriptEngine engine = Python.GetEngine(runtime); ScriptSource source = engine.CreateScriptSourceFromFile("test.py"); ScriptScope scope = engine.CreateScope(); source.Execute(scope); **_SampleCode Python (test.py):_** from nodebox.graphics import * from nodebox.graphics.physics import Vector, Boid, Flock, Obstacle flock = Flock(50, x=-50, y=-50, width=700, height=400) flock.sight(80) def draw(canvas): canvas.clear() flock.update(separation=0.4, cohesion=0.6, alignment=0.1, teleport=True) for boid in flock: push() translate(boid.x, boid.y) scale(0.5 + boid.depth) rotate(boid.heading) arrow(0, 0, 15) pop() canvas.size = 600, 300 def main(canvas): canvas.run(draw) **_Update2:_** Line 139 [pyglet/lib.py] sys.platform is not win32... there was the error. Fixed it by just using the line: from pyglet.gl.lib_wgl import link_GL, link_GLU, link_WGL Now the following Error: 'module' object has no attribute '_getframe' Kind of a pain to fix it. Updating with results... **_Update3:_** Fixed by adding following line right after first line in C#-Code: setup.Options["Frames"] = true; Current Problem: `No module named unicodedata`, but in `Python26/DLLs` is only a `*.pyd` file`. So.. how do i implement it now?! **_Update4:_** Fixed by surfing: [link text](http://www.java2s.com/Open- Source/Python/Windows/pyExcelerator/pywin32-214/isapi/test/build/bdist.win32/winexe/temp/unicodedata.py.htm "Link") and adding `unicodedata.py` and `'.pyd` to C# Projectfolder. Current Problem: 'libGL.so not found'... guys.. im almost giving up on nodebox for C#.. to be continued **_Update5:_** i gave up :/ workaround: c# communicating with nodebox over xml and filesystemwatchers. Not optimal, but case solved. Answer: -X:Frames enables the frames option as runtime (it slows code down a little to have access to the Python frames all the time). To enable frames when hosting you just need to do: ScriptRuntimeSetup setup = Python.CreateRuntimeSetup(new Dictionary<string, object>() { { "Frames", true } }); Instead of the null that you're passing now. That's just creating a new dictionary for the options dictionary w/ the contents "Frames" set to true. You can set other options in there as well and in general the -X:Name option is the same here as it is for the command line.
More than one profile in Django? Question: Is it possible to use Django's user authentication features with more than one profile? Currently I have a settings.py file that has this in it: AUTH_PROFILE_MODULE = 'auth.UserProfileA' and a models.py file that has this in it: from django.db import models from django.contrib.auth.models import User class UserProfileA(models.Model): company = models.CharField(max_length=30) user = models.ForeignKey(User, unique=True) that way, if a user logs in, I can easily get the profile because the User has a get_profile() method. However, I would like to add UserProfileB. From looking around a bit, it seems that the starting point is to create a superclass to use as the AUTH_PROFILE_MODULE and have both UserProfileA and UserProfileB inherit from that superclass. The problem is, I don't think the get_profile() method returns the correct profile. It would return an instance of the superclass. I come from a java background (polymorphism) so I'm not sure exactly what I should be doing. Thanks! Edit: Well I found a way to do it via something called an "inheritance hack" that I found at this site <http://djangosnippets.org/snippets/1031/> It works really well, however, coming from a java background where this stuff happens automatically, I'm a little unsettled by the fact that someone had to code this up and call it a "hack" to do it in python. Is there a reason why python doesn't enable this? Answer: So the issue you're going to have is that whatever you want for your profile, you need to persist it in a database of some sort. Basically all of the back- ends for django are relational, and thus every field in a persisted object is present in every row of the table. there are a few ways for getting what you want. Django provides some support for [inheritance.](http://docs.djangoproject.com/en/dev/topics/db/models/#model- inheritance) You can use the techniques listed and get reasonable results in a polymorphic way. The most direct approach is to use multiple table inheritance. Roughly: class UserProfile(models.Model): # set settings.AUTH_PROFILE_MODULE to this class! pass class UserProfileA(UserProfile): pass class UserProfileB(UserProfile): pass To use it: try: profile = user.get_profile().userprofilea # user profile is UserProfileA except UserProfileA.DoesNotExist: # user profile wasn't UserProfileB pass try: profile = user.get_profile().userprofileb # user profile is UserProfileB except UserProfileB.DoesNotExist: # user profile wasn't either a or b... **Edit:** Re, your comment. The relational model implies a number of things that [seem to disagree](http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx) with object oriented philosophy. For a relation to be useful, it requires that every element in the relation to have the same dimensions, so that relational queries are valid for the whole relation. Since this is known a-priori, before encountering an instance of a class stored in the relation, then the row cannot be a subclass. django's orm overcomes this impedance mismatch by storing the subclass information in a different relation (one specific to the subclass), There are other solutions, but they all obey this basic nature of the relational model. If it helps you come to terms with this, I'd suggest looking at how persistence on a RDBMs works for applications in the absence of an ORM. 
In particular, relational databases are more about collections and summaries of many rows, rather than applying behaviors to data once fetched from the database. The specific example of using the profile feature of `django.contrib.auth` is a rather uninteresting one, especially if the only way that model is ever used is to fetch the profile data associated with a particular `django.contrib.auth.models.User` instance. If there are no other queries, you don't need a `django.models.Model` subclass at all. You can pickle a regular python class and store it in a blob field of an otherwise featureless model. On the other hand, if you want to do more interesting things with profiles, like search for users that live in a particular city, then it will be important for _all_ profiles to have an index for their city property. That's got nothing to do with OOP, and everything to do with relational.
Get parts of html code as a new string in python Question: I was wondering how I could get a value, between some html tags, from some html code using python. Say I wanted to get the price of a product in an amazon page: I've got up to: url = raw_input("Enter the url:\n") sock = urllib.urlopen(url) htmlsource = sock.read() sock.close() so now I got the html source as a string but I don't know how to extract the price. I've played around with re.search but can't get the right expression. say the price is between `<span class="price">£79.98</span>` What would be the best way to get `var1 = 79.98`? Answer: You need to use a HTML Parsing Library. It provides better features than using standard regexs, where you can go wrong easily and it is hard to maintain. Python Standard Library comes with `html.parse` in py3k and `HTMLParser` in python2.x series which would help you parse the HTML file and get the values of the tags. You may also use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) library which many have found easy to use. from BeautifulSoup import BeautifulSoup soup = BeautifulSoup('<span class="price">79.98</span>') t = soup.find('span', attrs={"class":"price"}) print t.renderContents()
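To go from the BeautifulSoup answer above to the numeric value the question wants (var1 = 79.98), one rough sketch is to take the tag's text and strip everything that isn't a digit or a dot; the span markup here is copied from the question's example, and a real Amazon page may well need a different selector:

    import re
    from BeautifulSoup import BeautifulSoup

    # stand-in for the page source read with urllib in the question
    htmlsource = '<div><span class="price">&pound;79.98</span></div>'

    soup = BeautifulSoup(htmlsource)
    span = soup.find('span', attrs={"class": "price"})
    text = span.renderContents()               # e.g. '&pound;79.98'
    var1 = float(re.sub(r'[^\d.]', '', text))  # keep only digits and '.'
    print var1                                 # 79.98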
Unexpected end of archive Question: Hey there, I'm pretty new to programming and I've got a problem with the Python Challenge; I've removed the exact url in hopes of avoiding any heavy spoilers. Anyway, my problem is that I'm trying to open the file I've created in WinRAR after I've run the following code, and it tells me the file has an "unexpected end of archive". Naturally I've tried to rerun my code a few times just in case, and still no luck. I've also grabbed the file with my browser from the same url to make sure that the file itself isn't damaged, and opened it without any errors, so I'm pretty stumped. I guess I'm missing some basic element of the process? I appreciate your help in advance!

    import urllib

    url = "http://www.pythonchallenge.com/pc/def/xxxxxxx.zip"
    site = urllib.urlopen(url)
    newfile = open(url.split('/')[-1],'w')
    newfile.write(site.read())
    site.close()
    newfile.close()

Answer: I'm guessing you're on a Windows machine. (Mostly due to "WinRAR")

    newfile = open(url.split('/')[-1],'w')

The `'w'` opens the file for writing, but in "text" mode. In text mode, some OSs (like Windows) convert `'\n'` to something else (`'\r\n'` in Windows' case). To avoid this translation, open the file in binary mode `'b'`, combined with writing `'w'`: `'wb'`. These letters derive from `fopen`. [See the manual page for `fopen`](http://www.google.com/search?q=man+fopen), as I feel it has a better description of the flags than the [Python docs](http://docs.python.org/library/functions.html#open). (Note, however, that Python adds a few things to the flags.)
file walking in python Question: So, I've got a working solution, but it's ugly and seems un-idiomatic. The problem is this: for a directory tree, where every directory is set up to have:

* 1 `.xc` file
* at least 1 `.x` file
* any number of directories which follow the same format

and nothing else, I'd like to, given the root path, walk the tree applying `xc()` to the contents of `.xc` files, `x()` to the contents of `.x` files, and then doing the same thing to child folders' contents. Actual code with explanation would be appreciated. Thanks! Answer: The function [`os.walk`](http://docs.python.org/library/os.html) recursively walks through a directory tree, returning all file and subdirectory names. So all you have to do is detect the `.x` and `.xc` extensions in the filenames and apply your functions when they match (untested code follows):

    import os

    for dirpath, dnames, fnames in os.walk("./"):
        for f in fnames:
            if f.endswith(".x"):
                x(os.path.join(dirpath, f))
            elif f.endswith(".xc"):
                xc(os.path.join(dirpath, f))

This assumes `x` and `xc` can be called on filenames; alternately you can read the contents first and pass that as a string to the functions.
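Since the question asks for the file contents (not just the paths) to be handed to xc() and x(), here is a variant of the loop above — still a sketch; the x and xc bodies below are placeholders for the question's real functions:

    import os

    def x(text):
        # placeholder for the question's real x() function
        print "x got %d characters" % len(text)

    def xc(text):
        # placeholder for the question's real xc() function
        print "xc got %d characters" % len(text)

    handlers = {'.x': x, '.xc': xc}

    def walk_tree(root):
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                handler = handlers.get(os.path.splitext(name)[1])
                if handler is not None:
                    with open(os.path.join(dirpath, name)) as f:
                        handler(f.read())

    walk_tree('.')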
python thread error Question:

    obj = functioning()
    from threading import Thread
    Thread(target=obj.runCron(cronDetails)).start()
    print "new thread started..."

I am running this; it should start runCron in a new thread and then print "new thread started", but it is not printing that and not creating a new thread. Answer: Your question is missing some details, e.g. what error message you are getting, etc. – below is a working example mimicked after your code.

    #!/usr/bin/env python

    import time

    class Obj(object):
        def runCron(self, cronDetails):
            time.sleep(1)
            print cronDetails

    obj = Obj()
    cronDetails = "I'm here."

    from threading import Thread

    # Note, that the `target` is a function object
    # (or a callable in general), we don't actually call it yet!
    t = Thread(target=obj.runCron, args=(cronDetails, ))
    t.start()

    print "New thread started (should be here in a second) ..."

It prints:

    New thread started (should be here in a second) ...
    I'm here.
Canonical embedded interactive Python interpreter example? Question: I would like to create an embedded Python interpreter in my C/C++ application. Ideally this interpreter would behave exactly like the real Python interpreter, but yield after processing each line of input. The standard Python module `code` looks from the outside exactly like what I want, except that it is written in Python. E.g.: >>> import code >>> code.interact() Python 2.7.1 (r271:86832, Jan 3 2011, 15:34:27) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> The core of `code` is the function that accepts potentially incomplete user input and either displays a syntax error (case 1), waits for more input (case 2), or executes the user input (case 3). try: code = self.compile(source, filename, symbol) except (OverflowError, SyntaxError, ValueError): # Case 1 self.showsyntaxerror(filename) return False if code is None: # Case 2 return True # Case 3 self.runcode(code) return False The example in the Python source tree `Demo/embed/demo.c` is the outer shell but not what I want because that example only handles complete statements. I'm including part of it here for reference: /* Example of embedding Python in another program */ #include "Python.h" main(int argc, char **argv) { /* Initialize the Python interpreter. Required. */ Py_Initialize(); [snip] /* Execute some Python statements (in module __main__) */ PyRun_SimpleString("import sys\n"); [snip] /* Exit, cleaning up the interpreter */ Py_Exit(0); } What I'm looking for is the C code to handle incomplete blocks, stacktraces, etc. That is, all the behaviors of the real Python interpreter. Thanks in advance. Answer: Take a look at [boost.python](http://www.boost.org/doc/libs/1_45_0/libs/python/doc/tutorial/doc/html/python/embedding.html#python.using_the_interpreter). It's a fantastic integretaion of Python in C++ and vice versa. But you are able to use the C API anyway. The [PyRun_InteractiveLoopFlags()](http://docs.python.org/c-api/veryhigh.html#PyRun_InteractiveLoopFlags) function offer a interactive console in your C++ application.
Django ImportError while adding guardian module Question: Being a beginner at using Django, I am trying to add a module for the purpose of testing Django, but I've got a problem with an ImportError which I've googled for a solution with no success. Below is my situation. The project is created on my PC's J:\ drive while the Python packages are installed on C:. According to guardian's installation guide the following code has to be added to Django's backends setting:

    AUTHENTICATION_BACKENDS = (
        'django.contrib.auth.backends.ModelBackend', # default
        'guardian.backends.ObjectPermissionBackend',)

Here the problem comes: when I put the guardian app under the directory of mysite, configure the INSTALLED_APPS setting with 'guardian' and start syncdb, the error is raised as below, which I believe is because Django does not understand what "guardian" really is:

> File "J:\mysite\guardian\conf\settings.py", line 6, in
>     raise ImproperlyConfigured("In order to use django-guardian's "
> django.core.exceptions.ImproperlyConfigured: In order to use django-guardian's ObjectPermissionBackend authorization backend you have to configure ANONYMOUS_USER_ID at your settings module

So, I moved the 'guardian' folder under Django's contrib folder, added the sys path and configured the INSTALLED_APPS setting with 'django.contrib.guardian'. However, I end up with the ImportError. Answer: As the error message says, you need to add the user id for the anonymous user of your site. Create a user (named maybe anonymous) and put the id of that user in the settings.py file. Obtain the user id from the database using the shell. Put the id in the settings file:

    ANONYMOUS_USER_ID = <Your anonymous USER_ID>

### EDIT: Just looked through the documentation of the django-guardian app. It also specifies this: <http://packages.python.org/django-guardian/configuration.html>
Twisted server for multiple clients Question: I want to write a server that can accept multiple clients in python (twisted). I am already quite familiar with socket programming with the standard python socket module but here comes the trouble.. I think twisted is really hard to get into and i have read some tutorials about it. But a thing that i can't really find is a simple socket server that accepts multiple connections.. Can anyone help? If i missed some valuable information online please let me know because i am pulling my hair out.. Any help is much appreciated, Andesay Answer: Say, you want to run a server accepting client connections on port 9000: from twisted.internet import reactor, protocol PORT = 9000 class MyServer(protocol.Protocol): pass class MyServerFactory(protocol.Factory): protocol = MyServer factory = MyServerFactory() reactor.listenTCP(PORT, factory) reactor.run() And if you want to test connecting to this server, here's the code for a client (to launch in a different terminal): from twisted.internet import reactor, protocol HOST = 'localhost' PORT = 9000 class MyClient(protocol.Protocol): def connectionMade(self): print "connected!" class MyClientFactory(protocol.ClientFactory): protocol = MyClient factory = MyClientFactory() reactor.connectTCP(HOST, PORT, factory) reactor.run() You'll notice the code is very similar, only we use a Factory for a server and a ClientFactory for a client, and the servers needs to listen (listenTCP) while the client needs to connect (connectTCP). Good luck!
Python: strip html from text data Question: My question is slightly related to: [Strip html from strings in python](http://stackoverflow.com/questions/753052/strip-html-from-strings-in-python). I am looking for a simple way to strip HTML code from text. For example:

    string = 'foo <SOME_VALID_HTML_TAG> something </SOME_VALID_HTML_TAG> bar'
    stripIt(string)

would then yield `foo bar`. Is there any simple tool to achieve this in Python? The HTML code could be nested. Answer:

    import lxml.html
    import re

    def stripIt(s):
        doc = lxml.html.fromstring(s)   # parse html string
        txt = doc.xpath('text()')       # ['foo ', ' bar']
        txt = ' '.join(txt)             # 'foo bar'
        return re.sub('\s+', ' ', txt)  # 'foo bar'

    s = 'foo <SOME_VALID_HTML_TAG> something </SOME_VALID_HTML_TAG> bar'
    stripIt(s)

returns

    foo bar
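If lxml is not available, a rough stdlib-only sketch is possible with Python 2's HTMLParser module (html.parser in Python 3). It keeps only text that sits outside any tag pair, and it assumes well-formed, properly closed tags — unclosed or self-closing tags would throw the depth count off:

    from HTMLParser import HTMLParser  # html.parser in Python 3
    import re

    class TopLevelText(HTMLParser):
        """Collect only text that is not nested inside any tag."""
        def __init__(self):
            HTMLParser.__init__(self)
            self.depth = 0
            self.chunks = []
        def handle_starttag(self, tag, attrs):
            self.depth += 1
        def handle_endtag(self, tag):
            self.depth -= 1
        def handle_data(self, data):
            if self.depth == 0:
                self.chunks.append(data)

    def strip_it(s):
        p = TopLevelText()
        p.feed(s)
        return re.sub(r'\s+', ' ', ''.join(p.chunks)).strip()

    print strip_it('foo <SOME_VALID_HTML_TAG> something </SOME_VALID_HTML_TAG> bar')
    # foo bar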
Compiling a SWIG Python wrapper for a static library? Question: This is a noob question. I'm trying to learn how to use SWIG to make a python interface for a C++ library. The library is a proprietary 3rd party library; it comes to me in the form of a header file (foo.h) and a static archive (libfoo.a). To simplify matters, I've cooked up an example which I think has the same pathology. Same error messages anyways. /* foo.hpp */ class TC { public: TC(); int i; private: }; For reference, here's foo.c. I only have the header and archive files for the real 3rd party library. /*foo.cxx */ #include "foo.hpp" TC::TC() { i = 0; } I made this library by typing `g++ -c foo.cxx && ar rcs libfoo.a foo.o` My SWIG interface file is as follows: /* foo.i */ %module foo %{ #include "foo.hpp" %} %include "foo.hpp" I generate foo_wrap.cxx by typing swig -python -c++ foo.i and then compile. g++ -c -fPIC -I/usr/include/python2.6 foo_wrap.cxx g++ -shared -L. -lfoo -lpython2.6 -Wl,-soname,_foo.so foo_wrap.o -o _foo.so The compilation succeeds, but when I run Python and `import foo`, I get an undefined symbol error. >>> import foo Traceback (most recent call last): File "<stdin>", line 1, in <module> File "foo.py", line 25, in <module> _foo = swig_import_helper() File "foo.py", line 21, in swig_import_helper _mod = imp.load_module('_foo', fp, pathname, description) ImportError: ./_foo.so: undefined symbol: _ZN2TCC1Ev What's going on here? The problem seems to be that the linking step isn't finding the definition of the constructor TC::TC. Note: If I alter the linking step to g++ -shared -L. -lfoo -lpython2.6 -Wl,-soname,_foo.so foo_wrap.o -o _foo.so then everything works. But is this an option for my real problem, where I don't have the raw source code? Can one extract a .o from a .a? Presumably one can do this by hand, but shouldn't there be some automated way of doing it? Answer: I'm not really sure if it's the case for you but in general the order of object files and static libraries matters. The order defines the order of initialisation. You have to put the most general objects and/or static archives as last parameters. The objects/archives with the most dependencies have to be placed at the beginning. An example. The object file A.o offer the fucntion A(). The object B.o uses the function A(). You have to write `ld -o libmy.so B.o A.o` (the most general file A.o as last parameter). You can also check with `objdump -x _foo.so` if the symbol exists in the file. The right call would be: `g++ -shared -L. -lpython2.6 -Wl,-soname,_foo.so foo_wrap.o -lfoo -o _foo.so` Do not be confused with -lpython2.6, it's a dynamic library _linked_ at _runtime_.
Interpretation of range(n) and boolean list, one-to-one map, simpler? Question: #!/usr/bin/python # # Description: bitwise factorization and then trying to find # an elegant way to print numbers # Source: http://forums.xkcd.com/viewtopic.php?f=11&t=61300#p2195422 # bug with large numbers such as 99, but main point in simplifying it # def primes(n): # all even numbers greater than 2 are not prime. s = [False]*2 + [True]*2 + [False,True]*((n-4)//2) + [False]*(n%2) i = 3; while i*i < n: # get rid of ** and skip even numbers. s[i*i : n : i*2] = [False]*(1+(n-i*i)//(i*2)) i += 2 # skip non-primes while not s[i]: i += 2 return s # TRIAL: can you find a simpler way to print them? # feeling the overuse of assignments but cannot see a way to get it simpler # p = 49 boolPrimes = primes(p) numbs = range(len(boolPrimes)) mydict = dict(zip(numbs, boolPrimes)) print([numb for numb in numbs if mydict[numb]]) **Something I am looking for, can you get`TRIAL` to be of the extreme simplicity below? Any such method?** a=[True, False, True] b=[1,2,3] b_a # any such simple way to get it evaluated to [1,3] # above a crude way to do it in TRIAL Answer: For python2.7+, you can use [itertools.compress](http://docs.python.org/py3k/library/itertools.html#itertools.compress) itertools.compress(b,a) eg >>> from itertools import compress >>> a=[True, False, True] >>> b=[1,2,3] >>> list(compress(b,a)) [1, 3] otherwise you can use a list comprehension >>> [j for i,j in zip(a,b) if i] [1, 3] If you want to do this on your list of primes, it may be simpler to use enumerate >>> primes = [False, False, True, True, False, True] >>> list(compress(*zip(*enumerate(primes)))) [2, 3, 5]
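The `b_a` notation the question wishes for is essentially numpy's boolean-mask indexing, so if pulling in numpy is acceptable, the selection really is a one-liner; a tiny sketch:

    import numpy as np

    a = np.array([True, False, True])
    b = np.array([1, 2, 3])

    print b[a]        # [1 3]  -- the elements of b where a is True
    print list(b[a])  # [1, 3]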
Python 256bit Hash function with number output Question: I need a hash function with a 256-bit output (as a long int). First I thought I could use SHA256 from hashlib, but it has a string output and I need a number to calculate with. Converting the 32-byte string to a long would also work, but I didn't find anything for that. In struct there is an unpack function, but it only works for 8-byte long types and not for longer longs. Answer: How about:

    >>> import hashlib
    >>> h = hashlib.sha256('something to hash')
    >>> h.hexdigest()
    'a3899c4070fc75880fa445b6dfa44207cbaf924a450ce7175cd8500e597d3ec1'
    >>> n = int(h.hexdigest(), base=16)
    >>> print n
    73970130776712578303406724846815845410916448611708558169000368019946742824641
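The same integer can also be built from the raw digest bytes instead of the hex string; a small sketch using binascii (on Python 3 you could use int.from_bytes(h.digest(), 'big') directly):

    import hashlib
    import binascii

    h = hashlib.sha256('something to hash')
    # hexlify the 32 raw digest bytes, then read them as one base-16 number
    n = int(binascii.hexlify(h.digest()), 16)
    print n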
Python: Convert Relative Date String to Absolute Date Stamp Question: There are several questions along the same lines on Stack Overflow, but this case is different. As input, I have a date string that can take three general formats. Either

a) January 6, 2011
b) 4 days ago
c) 12 hours ago

I want the script to be able to recognize the format and call the appropriate function with the parameters. So

    if a then convert_full_string("January 6, 2011")
    if b then convert_days(4)
    if c then convert_hours(12)

Once I recognize the format and am able to call the appropriate function, it will be relatively easy. I plan on using [dateutil](http://stackoverflow.com/questions/4528194/python-converting-legacy-string-dates-to-dates/4528300#4528300). But I am not sure how to recognize the format. Any suggestions with code samples are much appreciated. Thanks. Answer: Using [parsedatetime](https://github.com/bear/parsedatetime), you could parse all three date formats into `datetime.datetime` objects without having to code the logic yourself:

    import parsedatetime.parsedatetime as pdt
    import parsedatetime.parsedatetime_consts as pdc
    import datetime

    c = pdc.Constants()
    p = pdt.Calendar(c)

    for text in ('january 6, 2011', '4 days ago', '12 hours ago'):
        date = datetime.datetime(*p.parse(text)[0][:6])
        # print(date.isoformat())
        # 2011-01-06T09:00:18
        # 2011-01-02T09:00:18
        # 2011-01-05T21:00:18
        print(date.strftime('%Y%m%dT%H%M%S'))
        # 20110106T090208
        # 20110102T090208
        # 20110105T210208
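If you would rather keep the three explicit converter functions from the question, a small regex dispatcher is enough to recognize the format. This is only a sketch — convert_days and convert_hours are stand-ins that compute an absolute datetime with timedelta, and convert_full_string leans on dateutil as the question already planned:

    import re
    from datetime import datetime, timedelta
    from dateutil import parser as dateutil_parser

    def convert_full_string(text):
        return dateutil_parser.parse(text)

    def convert_days(n):
        return datetime.now() - timedelta(days=n)

    def convert_hours(n):
        return datetime.now() - timedelta(hours=n)

    def convert(text):
        m = re.match(r'(\d+)\s+days?\s+ago', text)
        if m:
            return convert_days(int(m.group(1)))
        m = re.match(r'(\d+)\s+hours?\s+ago', text)
        if m:
            return convert_hours(int(m.group(1)))
        return convert_full_string(text)

    for text in ('January 6, 2011', '4 days ago', '12 hours ago'):
        print convert(text)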
Jython script implementing a class isn't initialized correctly from Java Question: I'm trying to do something similar to [Question 4617364](http://stackoverflow.com/questions/4617364/how-to-get-from-jruby-a- correctly-typed-ruby-implementation-of-a-java-interface) but for Python - load a class from python script file, where said class implements a Java interface and hand it over to some Java code that can use its methods - but calls to the object method return invalid values and printing from the initializer doesn't seem to do anything. My implementation looks like this: Interface: package some.package; import java.util.List; public interface ScriptDemoIf { int fibonacci(int d); List<String> filterLength(List<String> source, int maxlen); } Python Implementation: from some.package import ScriptDemoIf class ScriptDemo(ScriptDemoIf): """ Class ScriptDemo implementing ScriptDemoIf """ def __init__(self): print "Script Demo init" def fibonacci(self, d): if d < 2: return d else: return self.fibonacci(d-1) + self.fibonacci(d-2) def filterLength(self, source, maxlen): return [ str for str in source if len(str) <= maxlen ] Class loader: public ScriptDemoIf load(String filename) throws ScriptException { ScriptEngine engine = new ScriptEngineManager().getEngineByName("jython"); FileReader script = new FileReader(filename); try { engine.eval(new FileReader(script)); } catch (FileNotFoundException e) { throw new ScriptException("Failed to load " + filename); } return (ScriptDemoIf) engine.eval("ScriptDemo()"); } public void run() { ScriptDemoIf test = load("ScriptDemo.py"); System.out.println(test.fibonacci(30)); } (Obviously the loader is a bit more generic in real life - it doesn't assume that the implementation class name is "ScriptDemo" - this is just for simplicity). When the code is being ran, I don't see the print from the Python's `__init__` (though if I put a print in the body of the script then I do see that), but the `test` variable in `run()` look like a valid jython "proxy object" and I get no casting errors. When I try to run the `fibonacci()` method I always get 0 (even if I change the method to always return a fixed number) and the `filterLength()` method always returns null (probably something to do with defaults according to the Java interface). what am I doing wrong? Answer: What version of jython are you using? You might have run into the JSR223 Jython bug : <http://bugs.jython.org/issue1681> From the bug description: > Calling methods from an embedded Jython script does nothing when using > JSR-223 and Jython 2.5.2rc2, while Jython 2.2.1 just works fine.
How do I set up the python/c library correctly? Question: I have been trying to get the python/c library to like my mingW compiler. The python online doncumentation; <http://docs.python.org/c-api/intro.html#include-files> only mentions that I need to import the python.h file. I grabbed it from the installation directory (as is required on the windows platform), and tested it by compiling the script: `#include "Python.h"`. This compiled fine. Next, I tried out the snippet of code shown a bit lower on the python/c API page: PyObject *t; t = PyTuple_New(3); PyTuple_SetItem(t, 0, PyInt_FromLong(1L)); PyTuple_SetItem(t, 1, PyInt_FromLong(2L)); PyTuple_SetItem(t, 2, PyString_FromString("three")); For some reason, the compiler would compile the code if I'd remove the last 4 lines (so that only the pyObject variable definition would be left), yet calling the actual constructor of the tuple returned errors. I am probably missing something completely obvious here, given I am very new to C, but does anyone know what it is? Answer: I've done some crafty Googling, and if you are getting errors at the linker stage (the error messages might have hex strings or references to `ld`), you may need to make sure the Python library that ships with the Windows version has been converted to a format that GCC (MinGW) can read; see [here](http://eli.thegreenplace.net/2008/06/28/compiling-python-extensions- with-distutils-and-mingw/), among other sites. Also ensure that GCC can find and is using the library file if needs be, using `-L/my/directory` and `-lpython26` (substituting appropriately for your path and Python version). If the errors are at the compilation stage (if line numbers are given in the messages), make sure that you don't need to add any other directories to the include search path. Python might (I've not used its C API) include other header files in `Python.h` which are stored in some other directory. If this is the case, use the `-I/my/directory/` flag to GCC to tell it to search there as well. Exact (copied-and-pasted) error messages would help, though. * * * _**Warning_** : The text below does not answer the question! Did you put the code inside a function? Try putting it in `main`, like so: int main(int argc, char *argv[]) { PyObject *t; t = PyTuple_New(3); PyTuple_SetItem(t, 0, PyInt_FromLong(1L)); PyTuple_SetItem(t, 1, PyInt_FromLong(2L)); PyTuple_SetItem(t, 2, PyString_FromString("three")); return 0; } This code will be run on execution of the program. You can then use whatever other methods are provided to examine the contents of the tuple. If it isn't to be run separately as an executable program, then stick it in a differently- named method; I assume you have another way to invoke the function. The `PyObject *t;` definition is valid outside the function as a global variable definition, as well as inside a function, declaring it as a local variable. The other four lines are function calls, which must be inside another function. The above code on its own does not a program make. Are you trying to write a C extension to Python? If so, look at some more complete documentation [here](http://docs.python.org/extending/extending.html).
Pairs from single list Question: Often enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google: pairs = zip(t[::2], t[1::2]) I thought that was pythonic enough, but after a recent discussion involving [idioms versus efficiency](http://stackoverflow.com/questions/4619367/avoid- object-aliasing-in-python/4619575#4619575), I decided to do some tests: import time from itertools import islice, izip def pairs_1(t): return zip(t[::2], t[1::2]) def pairs_2(t): return izip(t[::2], t[1::2]) def pairs_3(t): return izip(islice(t,None,None,2), islice(t,1,None,2)) A = range(10000) B = xrange(len(A)) def pairs_4(t): # ignore value of t! t = B return izip(islice(t,None,None,2), islice(t,1,None,2)) for f in pairs_1, pairs_2, pairs_3, pairs_4: # time the pairing s = time.time() for i in range(1000): p = f(A) t1 = time.time() - s # time using the pairs s = time.time() for i in range(1000): p = f(A) for a, b in p: pass t2 = time.time() - s print t1, t2, t2-t1 These were the results on my computer: 1.48668909073 2.63187503815 1.14518594742 0.105381965637 1.35109519958 1.24571323395 0.00257992744446 1.46182489395 1.45924496651 0.00251388549805 1.70076990128 1.69825601578 If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected. **Is there another, "better" way of traversing a list in pairs?** Note that if the list has an odd number of elements then the last one will not be in any of the pairs. **Which would be the right way to ensure that all elements are included?** I added these two suggestions from the answers to the tests: def pairwise(t): it = iter(t) return izip(it, it) def chunkwise(t, size=2): it = iter(t) return izip(*[it]*size) These are the results: 0.00159502029419 1.25745987892 1.25586485863 0.00222492218018 1.23795199394 1.23572707176 ## Results so far Most pythonic and very efficient: pairs = izip(t[::2], t[1::2]) Most efficient and very pythonic: pairs = izip(*[iter(t)]*2) It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence adding one element (`None`) that gets paired with the previous last element, something that can be achieved with `itertools.izip_longest()`. ## Finally Note that, in Python 3.x, `zip()` behaves as `itertools.izip()`, and `itertools.izip()` is gone. Answer: I'd say that your initial solution `pairs = zip(t[::2], t[1::2])` is the best one because it is easiest to read (and in Python 3, `zip` automatically returns an iterator instead of a list). To ensure that all elements are included, you could simply extend the list by `None`. Then, if the list has an odd number of elements, the last pair will be `(item, None)`. >>> t = [1,2,3,4,5] >>> t.append(None) >>> zip(t[::2], t[1::2]) [(1, 2), (3, 4), (5, None)] >>> t = [1,2,3,4,5,6] >>> t.append(None) >>> zip(t[::2], t[1::2]) [(1, 2), (3, 4), (5, 6)]
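For the odd-length question at the end, the single-iterator trick combines with izip_longest (zip_longest in Python 3.x) into the standard grouper recipe from the itertools docs; a sketch with None as the fill value:

    from itertools import izip_longest  # zip_longest in Python 3.x

    def grouper(iterable, n, fillvalue=None):
        # collect data into fixed-length chunks, padding the last one
        args = [iter(iterable)] * n
        return izip_longest(fillvalue=fillvalue, *args)

    print list(grouper([1, 2, 3, 4, 5], 2))
    # [(1, 2), (3, 4), (5, None)]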
Why can't I use __getattr__ with Django models? Question: I've seen examples online of people using `__getattr__` with Django models, but whenever I try I get errors. (Django 1.2.3) I don't have any problems when I am using `__getattr__` on normal objects. For example: class Post(object): def __getattr__(self, name): return 42 Works just fine... > > >>> from blog.models import Post > >>> p = Post() > >>> p.random > 42 > Now when I try it with a Django model: from django.db import models class Post(models.Model): def __getattr__(self, name): return 42 And test it on on the interpreter: > > >>> from blog.models import Post > >>> p = Post() > ERROR: An unexpected error occurred while tokenizing input The > > > following traceback may be corrupted or invalid The error message is: ('EOF > in multi-line statement', (6, 0)) > > \--------------------------------------------------------------------------- > TypeError > Traceback (most recent call last) > > /Users/josh/project/ in () > > /Users/josh/project/lib/python2.6/site-packages/django/db/models/base.pyc in > **init**(self, *args, **kwargs) 338 if kwargs: 339 raise TypeError("'%s' is > an invalid keyword argument for this function" % kwargs.keys()[0]) \--> 340 > signals.post_init.send(sender=self.**class** , instance=self) 341 342 def > **repr**(self): > > /Users/josh/project/lib/python2.6/site- > packages/django/dispatch/dispatcher.pyc in send(self, sender, **named) 160 > 161 for receiver in self._live_receivers(_make_id(sender)): \--> 162 > response = receiver(signal=self, sender=sender, **named) 163 > responses.append((receiver, response)) 164 return responses > > /Users/josh/project/python2.6/site-packages/photologue/models.pyc in > add_methods(sender, instance, signal, *args, **kwargs) 728 """ 729 if > hasattr(instance, 'add_accessor_methods'): \--> 730 > instance.add_accessor_methods() 731 732 # connect the add_accessor_methods > function to the post_init signal > > TypeError: 'int' object is not callable Can someone explain what is going on? * * * EDIT: I may have been too abstract in the examples, here is some code that is closer to what I actually would use on the website: class Post(models.Model): title = models.CharField(max_length=255) slug = models.SlugField() date_published = models.DateTimeField() content = RichTextField('Content', blank=True, null=True) # Etc... Class CuratedPost(models.Model): post = models.ForeignKey('Post') position = models.PositiveSmallIntegerField() def __getattr__(self, name): ''' If the user tries to access a property of the CuratedPost, return the property of the Post instead... ''' return self.post.name # Etc... While I _could_ create a property for each attribute of the Post class, that would lead to a lot of code duplication. Further more, that would mean anytime I add or edit a attribute of the Post class I would have to remember to make the same change to the CuratedPost class, which seems like a recipe for code rot. Answer: One must be careful using __getattr__ . Only intercept what you know, and let the base class handle what you do not. The first step is, can you use a property instead? If you want a "random" attribute which return 42 then this is much safer: class Post(...): @property def random(self): return 42 If you want "random_*" (like "random_1", "random_34", etc) to do something then you'll have to use __getattr__ like this: class Post(...): def __getattr__(self, name): if name.startswith("random_"): return name[7:] return super(Post, self).__getattr__(name)
Django ViewDoesNotExist error on deployment only Question: I'm working on a Django app that I thought was nearly ready to deploy. Everything works on the development server, but when hosted on a test Apache/mod_wsgi server, I get an error for every last one of my views. If I put in a invalid URL, it serves me the list of valid URL's as expected, but nothing else seems to work as per the development server. I have tried accessing from other PC's on the local network to no joy. If anyone can shed any light on the issue it would be appreciated. A good couple of hours reading around hasn't helped so far. The errors are as follows; Environment: Request Method: GET Request URL: http://192.168.1.4/results.php Django Version: 1.2.4 Python Version: 2.6.5 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'results', 'django.contrib.admin'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Traceback: File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response 80. response = middleware_method(request) File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in process_request 57. if (not _is_valid_path(request.path_info, urlconf) and File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in _is_valid_path 143. urlresolvers.resolve(path, urlconf) File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve 302. return get_resolver(urlconf).resolve(path) File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve 217. sub_match = pattern.resolve(new_path) File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve 217. sub_match = pattern.resolve(new_path) File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve 123. return self.callback, args, kwargs File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_callback 135. raise ViewDoesNotExist("Tried %s in module %s. Error was: %s" % (func_name, mod_name, str(e))) Exception Type: ViewDoesNotExist at /results.php Exception Value: Tried index in module results.views. Error was: 'module' object has no attribute 'cbook' Answer: There is an error in your `results.views` file, that breaks everything! Apparently the `Error was: 'module' object has no attribute 'cbook'`. So search through your `results.views` python file for "cbook" you may be trying to import cbook or using it somewhere. Either way the problem will be "cbook" related.
Random List of millions of elements in Python Efficiently Question: I have read this [answer](http://stackoverflow.com/questions/1022141/best-way- to-randomize-a-list-of-strings-in-python) potentially as the best way to randomize a list of strings in Python. I'm just wondering then if that's the most efficient way to do it because I have a list of about 30 million elements via the following code: import json from sets import Set from random import shuffle a = [] for i in range(0,193): json_data = open("C:/Twitter/user/user_" + str(i) + ".json") data = json.load(json_data) for j in range(0,len(data)): a.append(data[j]['su']) new = list(Set(a)) print "Cleaned length is: " + str(len(new)) ## Take Cleaned List and Randomize it for Analysis shuffle(new) If there is a more efficient way to do it, I'd greatly appreciate any advice on how to do it. Thanks, Answer: A couple of possible suggestions: import json from random import shuffle a = set() for i in range(193): with open("C:/Twitter/user/user_{0}.json".format(i)) as json_data: data = json.load(json_data) a.update(d['su'] for d in data) print("Cleaned length is {0}".format(len(a))) # Take Cleaned List and Randomize it for Analysis new = list(a) shuffle(new) . * the only way to know if this is faster is to profile it! * do you prefer sets.Set to the built-in set() for a reason? * I have introduced a with clause (preferred way of opening files, as it guarantees they get closed) * it did not appear that you were doing anything with 'a' as a list except converting it to a set; why not make it a set from the start? * rather than iterate on an index, then do a lookup on the index, I just iterate on the data items... * which makes it easily rewriteable as a generator expression
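Another option worth profiling against random.shuffle is handing the whole permutation to numpy, which does the shuffling pass in C. This is only a sketch — 'a' stands for the deduplicated set built above, and converting tens of millions of Python strings into an array costs time and memory of its own, so measure before committing:

    import numpy as np

    ids = np.array(list(a))   # 'a' is the deduplicated set from the question
    np.random.shuffle(ids)    # in-place shuffle along the first axis, done in C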
Why are these strings escaping from my regular expression in python? Question: In my code, I load up an entire folder into a list and then try to get rid of every file in the list except the .mp3 files.

    import os
    import re

    path = '/home/user/mp3/'
    dirList = os.listdir(path)
    dirList.sort()

    i = 0
    for names in dirList:
        match = re.search(r'\.mp3', names)
        if match:
            i = i+1
        else:
            dirList.remove(names)

    print dirList
    print i

After I run the file, the code does get rid of some files in the list but keeps these two specifically:

> ['00. Various Artists - Indie Rock Playlist October 2008.m3u', '00. Various Artists - Indie Rock Playlist October 2008.pls']

I can't understand what's going on, or why those two specifically are escaping my search. Answer: You are modifying your list inside a loop. That can cause issues. You should loop over a copy of the list instead (`for name in dirList[:]:`), or create a new list.

    modifiedDirList = []

    for name in dirList:
        match = re.search(r'\.mp3', name)
        if match:
            i += 1
            modifiedDirList.append(name)

    print modifiedDirList

Or even better, use a list comprehension:

    dirList = [name for name in sorted(os.listdir(path))
               if re.search(r'\.mp3', name)]

The same thing, without a regular expression:

    dirList = [name for name in sorted(os.listdir(path))
               if name.endswith('.mp3')]
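Since the goal is just "all .mp3 files in a directory", the filtering can also be pushed onto glob or fnmatch instead of a hand-rolled regex loop — a short sketch reusing the question's path variable:

    import glob
    import os
    import fnmatch

    path = '/home/user/mp3/'

    # glob returns full paths matching the pattern
    mp3_paths = sorted(glob.glob(os.path.join(path, '*.mp3')))

    # fnmatch filters the bare file names instead
    mp3_names = sorted(fnmatch.filter(os.listdir(path), '*.mp3'))

    print mp3_paths
    print mp3_names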
Efficient Python Daemon Question: I was curious how you can run a python script in the background, repeating a task every 60 seconds. I know you can put something in the background using &; is that effective for this case? I was thinking of doing a loop, having it wait 60s and loading it again, but something feels off about that. Answer: Rather than writing your own daemon, use [python-daemon](http://pypi.python.org/pypi/python-daemon) instead! [python-daemon](http://pypi.python.org/pypi/python-daemon) implements the well-behaved daemon specification of [PEP 3143](http://python.org/dev/peps/pep-3143), "Standard daemon process library". I have included example code based on the accepted answer to this question, and even though the code looks almost identical, it has an important fundamental difference. Without [python-daemon](http://pypi.python.org/pypi/python-daemon) you would have to use `&` to put your process in the background and `nohup` to keep your process from getting killed when you exit your shell. Instead this will automatically detach from your terminal when you run the program. For example:

    import daemon
    import time

    def do_something():
        while True:
            with open("/tmp/current_time.txt", "w") as f:
                f.write("The time is now " + time.ctime())
            time.sleep(5)

    def run():
        with daemon.DaemonContext():
            do_something()

    if __name__ == "__main__":
        run()

To actually run it:

    python background_test.py

And note the absence of `&` here. Also, [this other stackoverflow answer](http://stackoverflow.com/questions/473620/how-do-you-create-a-daemon-in-python) explains in detail the many benefits of using [python-daemon](http://pypi.python.org/pypi/python-daemon).
Cython inline function with numpy array as parameter Question: Consider code like this: import numpy as np cimport numpy as np cdef inline inc(np.ndarray[np.int32_t] arr, int i): arr[i]+= 1 def test1(np.ndarray[np.int32_t] arr): cdef int i for i in xrange(len(arr)): inc(arr, i) def test2(np.ndarray[np.int32_t] arr): cdef int i for i in xrange(len(arr)): arr[i] += 1 I used ipython to measure speed of test1 and test2: In [7]: timeit ttt.test1(arr) 100 loops, best of 3: 6.13 ms per loop In [8]: timeit ttt.test2(arr) 100000 loops, best of 3: 9.79 us per loop Is there a way to optimize test1? Why doesn't cython inline this function as told? **UPDATE: Actually what I need is multidimension code like this:** # cython: infer_types=True # cython: boundscheck=False # cython: wraparound=False import numpy as np cimport numpy as np cdef inline inc(np.ndarray[np.int32_t, ndim=2] arr, int i, int j): arr[i, j] += 1 def test1(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): inc(arr, i, j) def test2(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): arr[i,j] += 1 Timing for it: In [7]: timeit ttt.test1(arr) 1 loops, best of 3: 647 ms per loop In [8]: timeit ttt.test2(arr) 100 loops, best of 3: 2.07 ms per loop Explicit inlining gives 300x speedup. And **my real function is quite big so inlining it makes code maintainability much worse** **UPDATE2:** # cython: infer_types=True # cython: boundscheck=False # cython: wraparound=False import numpy as np cimport numpy as np cdef inline inc(np.ndarray[np.float32_t, ndim=2] arr, int i, int j): arr[i, j]+= 1 def test1(np.ndarray[np.float32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): inc(arr, i, j) def test2(np.ndarray[np.float32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): arr[i,j] += 1 cdef class FastPassingFloat2DArray(object): cdef float* data cdef int stride0, stride1 def __init__(self, np.ndarray[np.float32_t, ndim=2] arr): self.data = <float*>arr.data self.stride0 = arr.strides[0]/arr.dtype.itemsize self.stride1 = arr.strides[1]/arr.dtype.itemsize def __getitem__(self, tuple tp): cdef int i, j cdef float *pr, r i, j = tp pr = (self.data + self.stride0*i + self.stride1*j) r = pr[0] return r def __setitem__(self, tuple tp, float value): cdef int i, j cdef float *pr, r i, j = tp pr = (self.data + self.stride0*i + self.stride1*j) pr[0] = value cdef inline inc2(FastPassingFloat2DArray arr, int i, int j): arr[i, j]+= 1 def test3(np.ndarray[np.float32_t, ndim=2] arr): cdef int i,j cdef FastPassingFloat2DArray tmparr = FastPassingFloat2DArray(arr) for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): inc2(tmparr, i,j) Timings: In [4]: timeit ttt.test1(arr) 1 loops, best of 3: 623 ms per loop In [5]: timeit ttt.test2(arr) 100 loops, best of 3: 2.29 ms per loop In [6]: timeit ttt.test3(arr) 1 loops, best of 3: 201 ms per loop Answer: More than 3 years have passed since the question was posted and there have been great progress in the meantime. 
On this code (Update 2 of the question): # cython: infer_types=True # cython: boundscheck=False # cython: wraparound=False import numpy as np cimport numpy as np cdef inline inc(np.ndarray[np.int32_t, ndim=2] arr, int i, int j): arr[i, j]+= 1 def test1(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): inc(arr, i, j) def test2(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): arr[i,j] += 1 I get the following timings: arr = np.zeros((1000,1000), dtype=np.int32) %timeit test1(arr) %timeit test2(arr) 1 loops, best of 3: 354 ms per loop 1000 loops, best of 3: 1.02 ms per loop So the problem is reproducible even after more than 3 years. Cython now has [**typed memoryviews**](http://docs.cython.org/src/userguide/memoryviews.html), AFAIK it was introduced in Cython 0.16, so not available at the time the question was posted. With this: # cython: infer_types=True # cython: boundscheck=False # cython: wraparound=False import numpy as np cimport numpy as np cdef inline inc(int[:, ::1] tmv, int i, int j): tmv[i, j]+= 1 def test3(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j cdef int[:, ::1] tmv = arr for i in xrange(tmv.shape[0]): for j in xrange(tmv.shape[1]): inc(tmv, i, j) def test4(np.ndarray[np.int32_t, ndim=2] arr): cdef int i,j cdef int[:, ::1] tmv = arr for i in xrange(tmv.shape[0]): for j in xrange(tmv.shape[1]): tmv[i,j] += 1 With this I get: arr = np.zeros((1000,1000), dtype=np.int32) %timeit test3(arr) %timeit test4(arr) 1000 loops, best of 3: 977 µs per loop 1000 loops, best of 3: 838 µs per loop We are _almost_ there and already faster than the old-fashioned way! Now, the `inc()` function is eligible to be declared [`nogil`](http://docs.cython.org/src/userguide/external_C_code.html#declaring- a-function-as-callable-without-the-gil), so let's declare it so! But oops: Error compiling Cython file: [...] cdef inline inc(int[:, ::1] tmv, int i, int j) nogil: ^ [...] Function with Python return type cannot be declared nogil Aaah, I totally missed that the `void` return type was missing! Once again but now with `void`: cdef inline void inc(int[:, ::1] tmv, int i, int j) nogil: tmv[i, j]+= 1 And finally I get: %timeit test3(arr) %timeit test4(arr) 1000 loops, best of 3: 843 µs per loop 1000 loops, best of 3: 853 µs per loop As fast as manual inlining! * * * Now, just for fun, I tried [Numba](http://numba.pydata.org/) on this code: import numpy as np from numba import autojit, jit @autojit def inc(arr, i, j): arr[i, j] += 1 @autojit def test5(arr): for i in xrange(arr.shape[0]): for j in xrange(arr.shape[1]): inc(arr, i, j) I get: arr = np.zeros((1000,1000), dtype=np.int32) %timeit test5(arr) 100 loops, best of 3: 4.03 ms per loop Even though it's 4.7x slower than Cython, most likely because the JIT compiler failed to inline `inc()`, I think it is **AWESOME!** All I needed to do is to add `@autojit` and didn't have to mess up the code with clumsy type declarations; 88x speedup for next to nothing! I have tried other things with Numba, such as @jit('void(i4[:],i4,i4)') def inc(arr, i, j): arr[i, j] += 1 or `nopython=True` but failed to improve it any further. [Improving inlining is on the Numba developers' list](https://github.com/numba/numba/issues/160), we only need to file more requests to make it have higher priority. ;)
Issue with importing scipy.integrate or scipy.integrate.quad Question: This might be something really simple: I am using Python 2.6.5 and I am unable to load any integration module in my working space. Everything is OK when I import scipy, but if I try to import scipy.integrate or scipy.integrate.quad I get an error message from python. Any clue?? Thanks. Answer: Try `from scipy import integrate`.
What can be used instead of parse_qs function Question: I have the following code for parsing youtube feed and returning youtube movie id. How can I rewrite this to be python 2.4 compatible which I suppose doesn't support `parse_qs` function ? YTSearchFeed = feedparser.parse("http://gdata.youtube.com" + path) videos = [] for yt in YTSearchFeed.entries: url_data = urlparse.urlparse(yt['link']) query = urlparse.parse_qs(url_data[4]) id = query["v"][0] videos.append(id) Answer: I assume your existing code runs in 2.6 or something newer, and you're trying to go back to 2.4? `parse_qs` used to be in the `cgi` module before it was moved to `urlparse`. Try `import cgi`, `cgi.parse_qs`. Inspired by [TryPyPy's](http://stackoverflow.com/users/555569/trypypy) comment, I think you could make your source run in either environment by doing: import urlparse # if we're pre-2.6, this will not include parse_qs try: from urlparse import parse_qs except ImportError: # old version, grab it from cgi from cgi import parse_qs urlparse.parse_qs = parse_qs But I don't have 2.4 to try this out, so no promises.
Parse Python file and evaluate selected functions Question: I have a file that contains several python functions, each with some statements. def func1(): codeX... def func2(): codeY... codeX and codeY can be multiple statements. I want to be able to parse the file, find a function by name, then evaluate the code in that function. With the ast module, I can parse the file, find the FunctionDef objects, and get the list of Stmt objects, but how do I turn this into bytecode that I can pass to eval? Should I use the compile module, or the parser module instead? Basically, the function defs are just used to create separate blocks of code. I want to be able to grab any block of code given the name and then execute that code in eval (providing my own local/global scope objects). If there is a better way to do this than what I described that would be helpful too. Thanks Answer: > I want to be able to grab any block of code given the name and then execute > that code ... (providing my own local/global scope objects). A naive solution looks like this. This is based on the assumption that the functions don't all depend on global variables. from file_that_contains_several_python_functions import * Direction = some_value func1() func2() func3() That _should_ do exactly what you want. However, if all of your functions rely on global variables -- a design that calls to mind 1970's-era FORTRAN -- then you have to do something slightly more complex. from file_that_contains_several_python_functions import * Direction = some_value func1( globals() ) func2( globals() ) func3( globals() ) And you have to rewrite all of your global-using functions like this. def func1( context ) globals().update( context ) # Now you have access to all kinds of global variables This seems ugly because it is. Functions which rely entirely on global variables are not really the best idea.
Error in Selenium Python Script Question: I am trying to get the hang of both Python and Selenium RC and am having some difficulty getting the following sample Selenium Python Script to parse. I have resolved all of the following code's errors besides one: from selenium import selenium import unittest class SignUpTask(unittest.TestCase): """ The following needs to have the issues corrected to make it run. When the run is completed the answer for question 2 will be shown""" def setUp(self): self.selenium = selenium("localhost", 4444, "*firefox", "http://www.google.com/") self.selenium.start() def test_that_will_print_out_a_url_as_answer_for_task(sel): self.selenium.open("/") self.selenium.click("link=Web QA") self.selenium.wait_for_page_to_load("30000") self.selenium.click("link=Get Involved") self.selenium.wait_for_page_to_load("30000") url = self.selenium.get_attribute("//ol/li[5]/a@href") print """The Url below needs to be entered as the answer for Question 2) in the signup task""" print "URL is: %s" % url def tearDown(self): self.selenium.stop() if __name__ == "__main__": unittest.main() After running the above script via Selenium RC, I get the following error: * * * ERROR: test_that_will_print_out_a_url_as_answer_for_task (**main**.SignUpTask) Traceback (most recent call last): File "/Users/eanderson/Desktop/TestFiles/Selenium1.py", line 16, in test_that_will_print_out_a_url_as_answer_for_task self.selenium.open("/") NameError: global name 'self' is not defined Ran 1 test in 24.577s failed (errors=1) * * * Does anyone out there understand why I am getting the > NameError: global name 'self' is not defined error on line 16 and could help me alleviate this error so my script can parse without error? Answer: `def test_that_will_print_out_a_url_as_answer_for_task(`**sel**`):` That should have been `self`.
boto: EC2 instance get_attribute results in AttributeError: 'EC2Connection' object has no attribute 'describe_attribute' Question: **What steps will reproduce the problem?** 1.attempt to get a running EBS-backed instance's kernel attribute with instance.get_attribute('kernel') >>> import boto.ec2 >>> regions = boto.ec2.regions() >>> regions [RegionInfo:eu-west-1, RegionInfo:us-east-1, RegionInfo:us-west-1, RegionInfo:ap-southeast-1] >>> usw = regions[2] >>> conn = usw.connect() >>> reservations = conn.get_all_instances() >>> reservations [Reservation:r-XXXXXXXX] >>> r1 = reservations[0] >>> for i in r1.instances: print i ... Instance:i-XXXXXXXX >>> instance = r1.instances[0] >>> instance.get_attribute('kernel') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/boto/ec2/instance.py", line 293, in get_attribute return self.connection.describe_attribute(self.id, attribute) AttributeError: 'EC2Connection' object has no attribute 'describe_attribute' >>> **What is the expected output? What do you see instead?** expect to get the attribute, but instead get this error: AttributeError: 'EC2Connection' object has no attribute 'describe_attribute' **What version of the product are you using? On what operating system?** boto.Version == 2.0b3 on Mac OS X 10.5 (boto installed using pip) **Please provide any additional information below.** Code on github repo seems to also indicate that there's not a describe_attribute <https://github.com/boto/boto/blob/master/boto/ec2/connection.py> submitted issue: <http://code.google.com/p/boto/issues/detail?id=487> Answer: Problem was promptly fixed by the developers! <https://github.com/boto/boto/commit/899ab55752d31af22f41168f92fe39553504d6fc> However at time of writing, it's still not on pypi (so pip install --upgrade) won't help - must git clone and install from there.
Matplotlib: simultaneous plotting in multiple threads Question: I am trying to do some plotting in parallel to finish large batch jobs quicker. To this end, I start a thread for each plot I plan on making. I had hoped that each thread would finish its plotting and close itself (as I understand it, Python closes threads when they get through all the statements in run()). Below is some code that shows this behavior. If the line that creates a figure is commented out, it runs as expected. Another plausibly helpful tidbit is that it also runs as expected when you only spawn one thread. import matplotlib.pyplot as plt import time import Queue import threading def TapHistplots(): ## for item in ['str1']: # # it behaves as expected if the line above is used instead of the one below for item in ['str1','str2']: otheritem = 1 TapHistQueue.put((item, otheritem)) makeTapHist().start() class makeTapHist(threading.Thread): def run(self): item, otheritem = TapHistQueue.get() fig = FigureQueue.get() FigureQueue.put(fig+1) print item+':'+str(fig)+'\n', time.sleep(1.3) plt.figure(fig) # comment out this line and it behaves as expected plt.close(fig) TapHistQueue = Queue.Queue(0) FigureQueue = Queue.Queue(0) def main(): start = time.time() """Code in here runs only when this module is run directly""" FigureQueue.put(1) TapHistplots() while threading.activeCount()>1: time.sleep(1) print 'waiting on %d threads\n' % (threading.activeCount()-1), print '%ds elapsed' % (time.time()-start) if __name__ == '__main__': main() Any help is duly appreciated. Answer: Why not just use multiprocessing? As far as I can tell from your description, threading won't help you much, anyway... Matplotlib already threads so that you can display and interact with multiple figures at once. If you want to speed up batch processing on a multicore machine, you're going to need multiprocessing regardless. As a basic example (**Warning: This will create 20 small .png files in whatever directory you run it in!**) import multiprocessing import matplotlib.pyplot as plt import numpy as np def main(): pool = multiprocessing.Pool() num_figs = 20 input = zip(np.random.randint(10,1000,num_figs), range(num_figs)) pool.map(plot, input) def plot(args): num, i = args fig = plt.figure() data = np.random.randn(num).cumsum() plt.plot(data) plt.title('Plot of a %i-element brownian noise sequence' % num) fig.savefig('temp_fig_%02i.png' % i) main()
Python / Tracing - How to stop a function's execution (return from it ) from within a tracer Question: Is there a way to return out of a function badfunc() using a tracer function tracer(), if we set sys.settrace(tracer). I want to count the number of lines that badfunc() executes, and return out of it if it executes more than a given amount of lines. e.g: def badfunc(): while True: import time time.sleep(1) def tracer(*args): counter += 1 if counter > MAX_NUMLINES: return_from_badfunc() return tracer sys.settrace(tracer) Thanks! Answer: Something like this maybe: import sys MAX_NUMLINES = 7 counter = 0 class TooMuchLine(Exception): pass def tracer(frame, event, arg): global counter if event == "line": counter += 1 print "counter", counter if counter > MAX_NUMLINES: raise TooMuchLine() return tracer def badfunc(): while True: import time time.sleep(1) sys.settrace(tracer) print 'start' try: badfunc() except TooMuchLine: print 'stopped' print 'done' Output: start counter 1 counter 2 counter 3 counter 4 counter 5 counter 6 counter 7 counter 8 stopped done N.B: I hope that you already read [this](http://stackoverflow.com/questions/1692866/what-cool-hacks-can-be-done- using-sys-settrace/1693108#1693108) before playing with sys.settracer :)
How do I write a long integer as binary in Python? Question: In Python, long integers have unlimited precision. I would like to write a 16 byte (128 bit) integer to a file. `struct` from the standard library supports only up to 8 byte integers. `array` has the same limitation. Is there a way to do this without masking and shifting each integer? Some clarification here: I'm writing to a file that's going to be read in from non-Python programs, so pickle is out. All 128 bits are used. Answer: I think for unsigned integers (and ignoring endianness) something like import binascii def binify(x): h = hex(x)[2:].rstrip('L') return binascii.unhexlify('0'*(32-len(h))+h) >>> for i in 0, 1, 2**128-1: ... print i, repr(binify(i)) ... 0 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' 1 '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01' 340282366920938463463374607431768211455 '\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' might technically satisfy the requirements of having non-Python-specific output, not using an explicit mask, and (I assume) not using any non-standard modules. Not particularly elegant, though.
Text file to list in Python Question: Suppose I have a document called test1.txt that contains the following numbers: 133213 123123 349135 345345 I want to be able to take each number and append it to the end of the URL below to make a HTTP request. How do I stuff the id's into a list and call each one? This is what I have so far. file = open('C:\Users\Owner\Desktop\\test1.txt') startcount = 1 endcount = len(file.readlines()) o = urllib2.build_opener( urllib2.HTTPCookieProcessor() ) urllib2.install_opener( o ) while startcount < endcount: f = o.open( 'http://www.test.com/?userid=' + ID GOES HERE ) f.close() Answer: >>> from urllib import urlencode >>> with open('C:\Users\Owner\Desktop\\test1.txt') as myfile: ... for line in myfile: ... params = urlencode({'userid': line.strip()}) ... f = opener.open('http://www.test.com/?' + params) ... # do sth ...
Google Python gdata Library Installation Failing Question: [Note, I have removed some information, such as my username, and the IDs to my spreadsheets] Hi! I'm on a mac, and I'm trying my best to install gdata for google python. Before I go on, I'm using this tutorial here: <http://code.google.com/apis/gdata/articles/python_client_lib.html> I have python version: 2.6.1, so I skipped to installing dependencies as instructed. Terminal looked like: Last login: Sat Jan 1 11:28:47 on ttys000 Users-MacBook-Pro:~ user$ python -V Python 2.6.1 Users-MacBook-Pro:~ user$ I fired up the python interpreter and tried importing xml tree. Nothing happened, so I tried importing banannas. Terminal looked like: Last login: Sat Jan 1 11:30:26 on ttys000 Users-MacBook-Pro:~ user$ python Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from xml.etree import ElementTree >>> from ninjas import banannas Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named ninjas This makes me mostly sure I have xmltree, though I do not remember ever having installed it. At this point, I downloaded the gdata library, and my mac automagically decompressed it. I then ran the install command, and ran the test commands. My terminal looked like this: Last login: Sat Jan 1 11:31:21 on ttys000 Users-MacBook-Pro:~ user$ cd /Users/User/Downloads/gdata-2-1.0.13 Users-MacBook-Pro:gdata-2-1.0.13 user$ ./setup.py install /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ distutils/dist.py:266: UserWarning: Unknown distribution option: 'install_requires' warnings.warn(msg) running install running build running build_py creating build creating build/lib creating build/lib/atom copying src/atom/__init__.py -> build/lib/atom copying src/atom/auth.py -> build/lib/atom copying src/atom/client.py -> build/lib/atom copying src/atom/core.py -> build/lib/atom copying src/atom/data.py -> build/lib/atom copying src/atom/http.py -> build/lib/atom copying src/atom/http_core.py -> build/lib/atom copying src/atom/http_interface.py -> build/lib/atom copying src/atom/mock_http.py -> build/lib/atom copying src/atom/mock_http_core.py -> build/lib/atom copying src/atom/mock_service.py -> build/lib/atom copying src/atom/service.py -> build/lib/atom copying src/atom/token_store.py -> build/lib/atom copying src/atom/url.py -> build/lib/atom creating build/lib/gdata copying src/gdata/__init__.py -> build/lib/gdata copying src/gdata/apps_property.py -> build/lib/gdata copying src/gdata/auth.py -> build/lib/gdata copying src/gdata/client.py -> build/lib/gdata copying src/gdata/core.py -> build/lib/gdata copying src/gdata/data.py -> build/lib/gdata copying src/gdata/gauth.py -> build/lib/gdata --Various copying, migration, and creation printouts exactly the same to the above and below, asides from path, removed to save space-- migration copying src/gdata/apps/migration/service.py -> build/lib/gdata/apps/ migration creating build/lib/gdata/apps/organization copying src/gdata/apps/organization/__init__.py -> build/lib/gdata/ apps/organization copying src/gdata/apps/organization/service.py -> build/lib/gdata/apps/ organization creating build/lib/gdata/base copying src/gdata/base/__init__.py -> build/lib/gdata/base copying src/gdata/base/service.py -> build/lib/gdata/base creating build/lib/gdata/blogger copying src/gdata/blogger/__init__.py -> build/lib/gdata/blogger copying 
src/gdata/blogger/client.py -> build/lib/gdata/blogger copying src/gdata/blogger/data.py -> build/lib/gdata/blogger copying src/gdata/blogger/service.py -> build/lib/gdata/blogger creating build/lib/gdata/books copying src/gdata/books/__init__.py -> build/lib/gdata/books copying src/gdata/books/data.py -> build/lib/gdata/books copying src/gdata/books/service.py -> build/lib/gdata/books creating build/lib/gdata/calendar copying src/gdata/calendar/__init__.py -> build/lib/gdata/calendar copying src/gdata/calendar/data.py -> build/lib/gdata/calendar copying src/gdata/calendar/service.py -> build/lib/gdata/calendar creating build/lib/gdata/calendar_resource copying src/gdata/calendar_resource/__init__.py -> build/lib/gdata/ calendar_resource copying src/gdata/calendar_resource/client.py -> build/lib/gdata/ calendar_resource copying src/gdata/calendar_resource/data.py -> build/lib/gdata/ calendar_resource creating build/lib/gdata/codesearch copying src/gdata/codesearch/__init__.py -> build/lib/gdata/codesearch copying src/gdata/codesearch/service.py -> build/lib/gdata/codesearch creating build/lib/gdata/contacts copying src/gdata/contacts/__init__.py -> build/lib/gdata/contacts copying src/gdata/contacts/client.py -> build/lib/gdata/contacts copying src/gdata/contacts/data.py -> build/lib/gdata/contacts copying src/gdata/contacts/service.py -> build/lib/gdata/contacts creating build/lib/gdata/docs copying src/gdata/docs/__init__.py -> build/lib/gdata/docs copying src/gdata/docs/client.py -> build/lib/gdata/docs copying src/gdata/docs/data.py -> build/lib/gdata/docs copying src/gdata/docs/service.py -> build/lib/gdata/docs creating build/lib/gdata/dublincore copying src/gdata/dublincore/__init__.py -> build/lib/gdata/dublincore copying src/gdata/dublincore/data.py -> build/lib/gdata/dublincore creating build/lib/gdata/exif copying src/gdata/exif/__init__.py -> build/lib/gdata/exif creating build/lib/gdata/finance copying src/gdata/finance/__init__.py -> build/lib/gdata/finance copying src/gdata/finance/data.py -> build/lib/gdata/finance copying src/gdata/finance/service.py -> build/lib/gdata/finance creating build/lib/gdata/geo copying src/gdata/geo/__init__.py -> build/lib/gdata/geo copying src/gdata/geo/data.py -> build/lib/gdata/geo creating build/lib/gdata/health copying src/gdata/health/__init__.py -> build/lib/gdata/health copying src/gdata/health/service.py -> build/lib/gdata/health creating build/lib/gdata/maps copying src/gdata/maps/__init__.py -> build/lib/gdata/maps copying src/gdata/maps/client.py -> build/lib/gdata/maps copying src/gdata/maps/data.py -> build/lib/gdata/maps creating build/lib/gdata/media copying src/gdata/media/__init__.py -> build/lib/gdata/media copying src/gdata/media/data.py -> build/lib/gdata/media creating build/lib/gdata/notebook copying src/gdata/notebook/__init__.py -> build/lib/gdata/notebook copying src/gdata/notebook/data.py -> build/lib/gdata/notebook creating build/lib/gdata/oauth copying src/gdata/oauth/__init__.py -> build/lib/gdata/oauth copying src/gdata/oauth/rsa.py -> build/lib/gdata/oauth creating build/lib/gdata/opensearch copying src/gdata/opensearch/__init__.py -> build/lib/gdata/opensearch copying src/gdata/opensearch/data.py -> build/lib/gdata/opensearch creating build/lib/gdata/photos copying src/gdata/photos/__init__.py -> build/lib/gdata/photos copying src/gdata/photos/service.py -> build/lib/gdata/photos creating build/lib/gdata/projecthosting copying src/gdata/projecthosting/__init__.py -> build/lib/gdata/ projecthosting copying 
src/gdata/projecthosting/client.py -> build/lib/gdata/ projecthosting copying src/gdata/projecthosting/data.py -> build/lib/gdata/ projecthosting creating build/lib/gdata/sites copying src/gdata/sites/__init__.py -> build/lib/gdata/sites copying src/gdata/sites/client.py -> build/lib/gdata/sites copying src/gdata/sites/data.py -> build/lib/gdata/sites creating build/lib/gdata/spreadsheet copying src/gdata/spreadsheet/__init__.py -> build/lib/gdata/ spreadsheet copying src/gdata/spreadsheet/service.py -> build/lib/gdata/ spreadsheet copying src/gdata/spreadsheet/text_db.py -> build/lib/gdata/ spreadsheet creating build/lib/gdata/spreadsheets copying src/gdata/spreadsheets/__init__.py -> build/lib/gdata/ spreadsheets copying src/gdata/spreadsheets/client.py -> build/lib/gdata/ spreadsheets copying src/gdata/spreadsheets/data.py -> build/lib/gdata/spreadsheets creating build/lib/gdata/tlslite copying src/gdata/tlslite/__init__.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/api.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/BaseDB.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/Checker.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/constants.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/errors.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/FileObject.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/HandshakeSettings.py -> build/lib/gdata/ tlslite copying src/gdata/tlslite/mathtls.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/messages.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/Session.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/SessionCache.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/SharedKeyDB.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/TLSConnection.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/TLSRecordLayer.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/VerifierDB.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/X509.py -> build/lib/gdata/tlslite copying src/gdata/tlslite/X509CertChain.py -> build/lib/gdata/tlslite creating build/lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/__init__.py -> build/lib/gdata/ tlslite/integration copying src/gdata/tlslite/integration/AsyncStateMachine.py -> build/ lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/ClientHelper.py -> build/lib/ gdata/tlslite/integration copying src/gdata/tlslite/integration/HTTPTLSConnection.py -> build/ lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/IMAP4_TLS.py -> build/lib/gdata/ tlslite/integration copying src/gdata/tlslite/integration/IntegrationHelper.py -> build/ lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/POP3_TLS.py -> build/lib/gdata/ tlslite/integration copying src/gdata/tlslite/integration/SMTP_TLS.py -> build/lib/gdata/ tlslite/integration copying src/gdata/tlslite/integration/TLSAsyncDispatcherMixIn.py -> build/lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/TLSSocketServerMixIn.py -> build/ lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/TLSTwistedProtocolWrapper.py -> build/lib/gdata/tlslite/integration copying src/gdata/tlslite/integration/XMLRPCTransport.py -> build/lib/ gdata/tlslite/integration creating build/lib/gdata/tlslite/utils copying src/gdata/tlslite/utils/__init__.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/AES.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/ASN1Parser.py -> 
build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/cipherfactory.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/codec.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/compat.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/Cryptlib_AES.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/Cryptlib_RC4.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/Cryptlib_TripleDES.py -> build/lib/ gdata/tlslite/utils copying src/gdata/tlslite/utils/cryptomath.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/dateFuncs.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/hmac.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/jython_compat.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/keyfactory.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/OpenSSL_AES.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/OpenSSL_RC4.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/OpenSSL_RSAKey.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/OpenSSL_TripleDES.py -> build/lib/ gdata/tlslite/utils copying src/gdata/tlslite/utils/PyCrypto_AES.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/PyCrypto_RC4.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/PyCrypto_RSAKey.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/PyCrypto_TripleDES.py -> build/lib/ gdata/tlslite/utils copying src/gdata/tlslite/utils/Python_AES.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/Python_RC4.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/Python_RSAKey.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/RC4.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/rijndael.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/RSAKey.py -> build/lib/gdata/tlslite/ utils copying src/gdata/tlslite/utils/TripleDES.py -> build/lib/gdata/ tlslite/utils copying src/gdata/tlslite/utils/xmltools.py -> build/lib/gdata/tlslite/ utils creating build/lib/gdata/webmastertools copying src/gdata/webmastertools/__init__.py -> build/lib/gdata/ webmastertools copying src/gdata/webmastertools/data.py -> build/lib/gdata/ webmastertools copying src/gdata/webmastertools/service.py -> build/lib/gdata/ webmastertools creating build/lib/gdata/youtube copying src/gdata/youtube/__init__.py -> build/lib/gdata/youtube copying src/gdata/youtube/client.py -> build/lib/gdata/youtube copying src/gdata/youtube/data.py -> build/lib/gdata/youtube copying src/gdata/youtube/service.py -> build/lib/gdata/youtube running install_lib running install_egg_info Removing /Library/Python/2.6/site-packages/gdata-2.0.13-py2.6.egg-info Writing /Library/Python/2.6/site-packages/gdata-2.0.13-py2.6.egg-info Users-MacBook-Pro:gdata-2-1.0.13 user$ I then ran the first test mentioned: Last login: Sat Jan 1 11:38:03 on ttys000 Users-MacBook-Pro:~ user$ ./tests/run_data_tests.py -bash: ./tests/run_data_tests.py: No such file or directory Users-MacBook-Pro:~ user$ cd /Users/user/Downloads/gdata-2-1.0.13 Users-MacBook-Pro:gdata-2-1.0.13 user$ ./tests/run_data_tests.py Running all tests in module gdata_test ..................... 
---------------------------------------------------------------------- Ran 21 tests in 0.085s OK Running all tests in module atom_test ............................................... ---------------------------------------------------------------------- Ran 47 tests in 0.019s OK Running all tests in module atom_tests.url_test .... ---------------------------------------------------------------------- Ran 4 tests in 0.001s OK Running all tests in module atom_tests.http_interface_test . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK Running all tests in module atom_tests.mock_http_test ... ---------------------------------------------------------------------- Ran 3 tests in 0.729s OK Running all tests in module atom_tests.core_test ................ ---------------------------------------------------------------------- Ran 16 tests in 0.008s OK Running all tests in module atom_tests.token_store_test ... ---------------------------------------------------------------------- Ran 3 tests in 0.001s OK Running all tests in module gdata_tests.client_test ............... ---------------------------------------------------------------------- Ran 15 tests in 0.006s OK Running all tests in module gdata_tests.apps_test ............................................... ---------------------------------------------------------------------- Ran 47 tests in 0.028s OK Running all tests in module gdata_tests.apps.emailsettings.data_test ................................. ---------------------------------------------------------------------- Ran 33 tests in 0.009s OK Running all tests in module gdata_tests.auth_test ............................. ---------------------------------------------------------------------- Ran 29 tests in 0.109s OK Running all tests in module gdata_tests.base_test ..................... ---------------------------------------------------------------------- Ran 21 tests in 0.028s OK Running all tests in module gdata_tests.books_test ... ---------------------------------------------------------------------- Ran 3 tests in 0.001s OK Running all tests in module gdata_tests.calendar_test ............................................................ ---------------------------------------------------------------------- Ran 60 tests in 0.361s OK Running all tests in module gdata_tests.docs_test ....F... ====================================================================== FAIL: testToAndFromStringWithData (gdata_tests.docs_test.DocumentListEntryTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/user/Downloads/gdata-2-1.0.13/tests/gdata_tests/ docs_test.py", line 39, in testToAndFromStringWithData 'spreadsheet%3Asupercalifragilisticexpealidocious') AssertionError: 'https://docs.google.com/feeds/documents/private/full/ spreadsheet%3Asupercalifragilisticexpealidocious' != 'http:// docs.google.com/feeds/documents/private/full/spreadsheet %3Asupercalifragilisticexpealidocious' ---------------------------------------------------------------------- Ran 8 tests in 0.012s FAILED (failures=1) Running all tests in module gdata_tests.health_test ............... ---------------------------------------------------------------------- Ran 15 tests in 0.195s OK Running all tests in module gdata_tests.spreadsheet_test ........... ---------------------------------------------------------------------- Ran 11 tests in 0.012s OK Running all tests in module gdata_tests.photos_test ... 
---------------------------------------------------------------------- Ran 3 tests in 0.007s OK Running all tests in module gdata_tests.codesearch_test . ---------------------------------------------------------------------- Ran 1 test in 0.004s OK Running all tests in module gdata_tests.contacts_test ....... ---------------------------------------------------------------------- Ran 7 tests in 0.009s OK Running all tests in module gdata_tests.youtube_test ................ ---------------------------------------------------------------------- Ran 16 tests in 0.019s OK Running all tests in module gdata_tests.blogger_test ...... ---------------------------------------------------------------------- Ran 6 tests in 0.002s OK Running all tests in module gdata_tests.webmastertools_test ............................. ---------------------------------------------------------------------- Ran 29 tests in 0.016s OK Running all tests in module gdata_tests.calendar_resource.data_test .. ---------------------------------------------------------------------- Ran 2 tests in 0.003s OK Running all tests in module gdata_tests.oauth.data_test ............../Library/Python/2.6/site-packages/gdata/oauth/ __init__.py:16: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message ......................... ---------------------------------------------------------------------- Ran 39 tests in 0.038s OK Users-MacBook-Pro:gdata-2-1.0.13 user$ Worried by the FAILURE up there, I ran the next test mentioned: Last login: Sat Jan 1 11:42:38 on ttys000 Users-MacBook-Pro:~ user$ cd /Users/user/Downloads/gdata-2-1.0.13 Users-MacBook-Pro:gdata-2-1.0.13 user$ ./samples/docs/docs_example.py NOTE: Please run these tests only with a test account. Please enter your username: User Password: Document List Sample 1) List your documents. 2) Search your documents. 3) Upload a document. 4) Download a document. 5) List a document's permissions. 6) Add/change a document's permissions. 7) Exit. > 1 Retrieve (all/document/folder/presentation/spreadsheet/pdf): Enter a category: spreadsheets No entries in feed. TITLE TYPE RESOURCE ID Document List Sample 1) List your documents. 2) Search your documents. 3) Upload a document. 4) Download a document. 5) List a document's permissions. 6) Add/change a document's permissions. 7) Exit. > 1 Retrieve (all/document/folder/presentation/spreadsheet/pdf): Enter a category: all TITLE TYPE RESOURCE ID Whirld spreadsheet spreadsheet:--removed ID-- Whirld Stats spreadsheet spreadsheet:--removed ID-- Document List Sample 1) List your documents. 2) Search your documents. 3) Upload a document. 4) Download a document. 5) List a document's permissions. 6) Add/change a document's permissions. 7) Exit. Everything _seemed_ okay, other than the mysterious failure, so I tried to run some code, just to see if I could get as far as to initialize a DocsService: import cgi from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app import gdata.docs import gdata.docs.service class MainPage(webapp.RequestHandler): def post(self): self.Test = self.request.get('Test') self.respond(bool(1)) def get(self): self.respond(bool(0)) def respond(self,isPost): if isPost: self.response.out.write("""Greetings. 
The value of Test is""" + self.Test) else: self.response.out.write("""ERROR -- Post data not given!""") #gd_client = gdata.docs.service.DocsService(source='[removed]') self.response.out.write("""Hahahahaha""") application = webapp.WSGIApplication([ ('/', MainPage) ], debug=True) #Run the basic code: def main(): run_wsgi_app(application) if __name__ == "__main__": main() However, when launching that code from the google app engine, and then viewing it in safari, I am greeted to: Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 3245, in _HandleRequest self._Dispatch(dispatcher, self.rfile, outfile, env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 3186, in _Dispatch base_env_dict=env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 531, in Dispatch base_env_dict=base_env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 2410, in Dispatch self._module_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 2320, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/ GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/ google/appengine/tools/dev_appserver.py", line 2216, in ExecuteOrImportScript exec module_code in script_module.__dict__ File "/Users/user/Documents/Web Projects/Python/HelloWorld/ HelloWorld.py", line 4, in <module> import gdata.docs ImportError: No module named gdata.docs I'm not sure what's going wrong here, but any help would be deeply appreciated. Answer: sudo mv /usr/local/lib/python2.6/dist-packages/* /usr/lib/python2.6/dist-packages
How to get favicon by using beatiful soup and python Question: Hey guys, I wrote some stupid code for learning just, but it doesn't work for any sites. here is the code: import urllib2, re from BeautifulSoup import BeautifulSoup as Soup class Founder: def Find_all_links(self, url): page_source = urllib2.urlopen(url) a = page_source.read() soup = Soup(a) a = soup.findAll(href=re.compile(r'/.a\w+')) return a def Find_shortcut_icon (self, url): a = self.Find_all_links(url) b = '' for i in a: strre=re.compile('shortcut icon', re.IGNORECASE) m=strre.search(str(i)) if m: b = i["href"] return b def Save_icon(self, url): url = self.Find_shortcut_icon(url) print url host = re.search(r'[0-9a-zA-Z]{1,20}\.[a-zA-Z]{2,4}', url).group() opener = urllib2.build_opener() icon = opener.open(url).read() file = open(host+'.ico', "wb") file.write(icon) file.close() print '%s icon succsefully saved' % host c = Founder() print c.Save_icon('http://lala.ru') The most strange thing is it works for site: <http://habrahabr.ru> http://5pd.ru But doesn't work for most others that i've checked. P.S. I know that code is sucks, please give me some advises maybe. Thank you Answer: You're making it far more complicated than it needs to be. Here's a simple way to do it: import urllib page = urllib.urlopen("http://5pd.ru/") soup = BeautifulSoup(page) icon_link = soup.find("link", rel="shortcut icon") icon = urllib.urlopen(icon_link['href']) with open("test.ico", "wb") as f: f.write(icon.read())
Adding System.Data.SQLite reference in IronPython Question: I'm trying to use clr.AddReference to add sqlite3 functionality to a simple IronPython program I'm writing; but everytime I try to reference System.Data.SQLite I get this error: > Traceback (most recent call last): File "", line 1, in IOError: > System.IO.IOException: Could not add reference to assembly > System.Data.SQLite > at Microsoft.Scripting.Actions.Calls.MethodCandidate.Caller.Call(Object[] > args, Boolean&shouldOptimize) > at > IronPython.Runtime.Types.BuiltinFunction.BuiltinFunctionCaller`2.Call1(CallSite > site, CodeContext context, TFuncType func, T0 arg0) > at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite > site, T0 arg0, T1 arg1, T2 arg2) > at CallSite.Target(Closure , CallSite , CodeContext , Object , Object ) > at > IronPython.Compiler.Ast.CallExpression.Invoke1Instruction.Run(InterpretedFrame > frame) > at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) > at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 > arg1) > at IronPython.Runtime.FunctionCode.Call(CodeContext context) > at IronPython.Runtime.Operations.PythonOps.QualifiedExec(CodeContext > context, Object code, PythonDictionary globals, Object locals) > at > Microsoft.Scripting.Interpreter.ActionCallInstruction`4.Run(InterpretedFrame > frame) > at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) I've been testing out the imports and references in the interpreter mainly, and these are the lines I test: > import sys > import clr > sys.path.append("C:/Program Files (x86)/SQLite.NET/bin") > clr.AddReference("System.Data.SQLite") The error happens after the clr.AddReference line is entered. How would I add System.Data.SQLite properly? Answer: My first guess is that you're trying to load the x86 (32-bit) System.Data.SQLite.dll in a x64 (64-bit) process, or vice versa. System.Data.SQLite.dll contains the native sqlite3 library, which must be compiled for x86 or x64, so there is a version of System.Data.SQLite.dll for each CPU. If you're using the console, ipy.exe is always 32-bit (even on 64-bit platforms) while ipy64.exe is AnyCPU, so it matches the current platform. If you're hosting IronPython, and the host app is AnyCPU, you need to load the right copy of System.Data.SQLite.dll for the machine you're running on (or just force the host app x86).
smarter "reverse" of a dictionary in python (acc for some of values being the same)? Question: def revert_dict(d): rd = {} for key in d: val = d[key] if val in rd: rd[val].append(key) else: rd[val] = [key] return rd >>> revert_dict({'srvc3': '1', 'srvc2': '1', 'srvc1': '2'}) {'1': ['srvc3', 'srvc2'], '2': ['srvc1']} This obviously isn't simple exchange of keys with values: this would overwrite some values (as new keys) which is NOT what I'm after. If 2 or more values are the same for different keys, keys are supposed to be grouped in a list. The above function works, but I wonder if there is a smarter / faster way? Answer: That looks pretty good. You could simplify it a little bit by using [`defaultdict`](http://docs.python.org/library/collections.html#collections.defaultdict): import collections def revert_dict(d): rd = collections.defaultdict(list) for key, value in d.iteritems(): rd[value].append(key) return rd
How to insert item into c_char_p array Question: I want to pass **an array of char pointer** to a C function. I refer to <http://docs.python.org/library/ctypes.html#arrays> I write the following code. from ctypes import * names = c_char_p * 4 # A 3 times for loop will be written here. # The last array will assign to a null pointer. # So that C function knows where is the end of the array. names[0] = c_char_p('hello') and I get the following error. > TypeError: '_ctypes.PyCArrayType' object does not support item assignment Any idea how I can resolve this? I want to interface with c_function(const char** array_of_string); Answer: What you did was to create an array _type_ , not an actual array, so basically: import ctypes array_type = ctypes.c_char_p * 4 names = array_type() You can then do something along the lines of: names[0] = "foo" names[1] = "bar" ...and proceed to call your C function with the `names` array as parameter.
Problem in printing array of char pointer passing from Python Question: My following C code works quite well, till my Python code trying to pass an array of char pointer to it. The output I obtain is > The file_name is python-file Another 3 string is not being printed out. Anything I had missed out? **C Code** #include <iostream> #include "c_interface.h" int foo(const char* file_name, const char** names) { std::cout << "The file_name is " << file_name << std::endl; while (*names) { std::cout << "The name is " << *names << std::endl; names++; } return 0; } /* int main() { const char *c[] = {"123gh", "456443432", "789", 0}; foo("hello", c); getchar(); } */ * * * **Python Code** #!c:/Python27/python.exe -u from ctypes import * name0 = "NAME0" name1 = "NAME1" name2 = "NAME2" names = ((c_char_p * 1024) * 4)() names[0].value = name0 names[1].value = name1 names[2].value = name2 names[3].value = 0 libc = CDLL("foo.dll") libc.foo("python-file", names) * * * Answer: Using and compiling your C++ code, I can only repeat the code I already stated in my last answer: In [1]: import ctypes In [2]: lib = ctypes.CDLL("libfoo.so.1.0") In [3]: names = (ctypes.c_char_p*4)() In [4]: names[0] = "NAME0" In [5]: names[1] = "NAME1" In [6]: names[2] = "NAME2" In [7]: names[3] = 0 In [8]: lib.foo("whatever", names) The file_name is whatever The name is NAME0 The name is NAME1 The name is NAME2 Out[8]: 0 As a suggestion for you, open up your Python/IPython shell, execute your line names = ((c_char_p * 1024) * 4)() ...and check the first element `names[0]` directory entry, using `dir`. Or, try to access the value attribute for a start.
python exception message capturing Question: import ftplib import urllib2 import os import logging logger = logging.getLogger('ftpuploader') hdlr = logging.FileHandler('ftplog.log') formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s') hdlr.setFormatter(formatter) logger.addHandler(hdlr) logger.setLevel(logging.INFO) FTPADDR = "some ftp address" def upload_to_ftp(con, filepath): try: f = open(filepath,'rb') # file to send con.storbinary('STOR '+ filepath, f) # Send the file f.close() # Close file and FTP logger.info('File successfully uploaded to '+ FTPADDR) except, e: logger.error('Failed to upload to ftp: '+ str(e)) This doesn't seem to work, I get syntax error, what is the proper way of doing this for logging all kind of exceptions to a file Answer: You have to define which type of exception you want to catch. So write `except Exception, e:` instead of `except, e:` for a general exception (that will be logged anyway). Other possibility is to write your whole try/except code this way: try: with open(filepath,'rb') as f: con.storbinary('STOR '+ filepath, f) logger.info('File successfully uploaded to '+ FTPADDR) except Exception, e: logger.error('Failed to upload to ftp: '+ str(e))
My python program always brings down my internet connection after several hours running, how do I debug and fix this problem? Question: I'm writing a python script checking/monitoring several server/websites status(response time and similar stuff), it's a GUI program and I use separate thread to check different server/website, and the basic structure of each thread is using an infinite while loop to request that site every random time period(15 to 30 seconds), once there's changes in website/server each thread will start a new thread to do a thorough check(requesting more pages and similar stuff). The problem is, my internet connection always got blocked/jammed/messed up after several hours running of this script, the situation is, from my script side I got urlopen error timed out each time it's requesting a page, and from my FireFox browser side I cannot open any site. But the weird thing is, the moment I close my script my Internet connection got back on immediately which means now I can surf any site through my browser, so it must be the script causing all the problem. I've checked the program carefully and even use `del` to delete any connection once it's used, still get the same problem. I only use urllib2, urllib, mechanize to do network requests. Anybody knows why such thing happens? How do I debug this problem? Is there a tool or something to check my network status once such situation occurs? It's really bugging me for a while... By the way I'm behind a VPN, does it have something to do with this problem? Although I don't think so because my network always get back on once the script closed, and the VPN connection never drops(as it appears) during the whole process. [**Updates:**] Just found more info about this problem, when my program brings down the internet connection, well, it's not totally "down", I mean, I cannot open any site in my browser or always get urlopen error timed out, but I still can get reply using "ping google.com" in cmd line. And when I manually dropped the VPN connection then redial, without closing my program it starts to work again and also I can surf the net through my browser. Why this happening? Answer: This may or may not be the problem but it's a good idea to always use context managers when dealing with things that opens resources, like files or urls. Since Python 2.5 you can do this with files: with open('/tmp/filename', 'rt') as infile: data = infile.read() whatever(data) And the file will be automatically closed at the end of the block. urllib2 doesn't support this automatically, but you can use contextlib to help you: >>> import contextlib >>> with contextlib.closing(urllib2.urlopen('http://www.python.org')) as page: ... for line in page: ... print(line) <html> blablablabla</html> This way the connection will be both closed and deleted at the end of the with-block, so you don't have to think about it. :-)
"Operation not permitted" while dropping privileges using setuid() function Question: Why this simple programs that use os.setuid()/gid() fails? Is written in python but I think that is not a language relative problem (at the end are all the same posix system call): import os, pwd if os.getenv("SUDO_UID") and os.getenv("SUDO_GID"): orig_uid=int(os.getenv("SUDO_UID")) orig_gid=int(os.getenv("SUDO_GID")) else: pw = pwd.getpwnam("nobody") orig_uid = pw.pw_uid orig_gid = pw.pw_gid print os.getuid(), os.getgid(), os.geteuid(), os.getegid(), orig_uid, orig_gid os.setgid(orig_gid) os.setuid(orig_uid) It returns this exception: $ sudo python provgid.py 0 0 0 0 1000 1000 Traceback (most recent call last): File "provgid.py", line 15, in <module> os.setgid(orig_gid) OSError: [Errno 1] Operation not permitted What is the error? Answer: Only the superuser or processes with the `CAP_SETGID` capability are allowed to set the GID. After the `setuid()` call, the effective UID isn't 0 any more, so you are not allowed to call `setgid()`. Try to reorder the two calls.
Use of "global" keyword in Python Question: What I understand from reading the documentation is that Python has a separate namespace for functions, and if I want to use a global variable in that function, I need to use `global`. I'm using Python 2.7 and I tried this little test >>> sub = ['0', '0', '0', '0'] >>> def getJoin(): ... return '.'.join(sub) ... >>> getJoin() '0.0.0.0' It seems things are working fine even without `global`. I was able to access global variable without any problem. Am I missing anything? Also, following is from Python documentation: > Names listed in a global statement must not be defined as formal parameters > or in a for loop control target, class definition, function definition, or > import statement. While formal parameters and class definition make sense to me, I'm not able to understand the restriction on for loop control target and function definition. Answer: The keyword `global` is only useful to change or create global variables in a local context, although creating global variables is seldom considered a good solution. def bob(): me = "locally defined" # Defined only in local context print me bob() print me # Asking for a global variable The above will give you: locally defined Traceback (most recent call last): File "file.py", line 9, in <module> print me NameError: name 'me' is not defined While if you use the `global` statement, the variable will become available "outside" the scope of the function, effectively becoming a global variable. def bob(): global me me = "locally defined" # Defined locally but declared as global print me bob() print me # Asking for a global variable So the above code will give you: locally defined locally defined In addition, due to the nature of python, you could also use `global` to declare functions, classes or other objects in a local context. Although I would advise against it since it causes nightmares if something goes wrong or needs debugging.
Test Driven Development, Unit Testing
Question: Let me first explain what I'm aiming for with this question, and what kind of dev I am. I'm the guy who thinks about the problem, writes the code and then tests it by myself. I'm developing web-apps mainly, but there are also projects which are UI based (RCP/Swing apps). I run my app and click here, test this... You probably know this "style". Well, I'm a guy who tries to improve himself with every line/project and I want my code/apps to be tested pragmatically. I write in code - I want tests in code. So I started to use unit tests (JUnit 4) for some of my classes/functions. This works for backend stuff where no UI is involved - to be honest: I find it hard to write most of the tests. If we're building a webapp there are probably interactions with the session or something. I guess you get the point. What I'm looking for are some resources, preferably with examples. Any good book advice would be welcome too. Don't get me wrong - I don't want only stuff for logic testing, I'm interested in ways to test my UI. Maybe this is an important part too: I'm developing in Java (85% of the time) and PHP/Python (the rest). Regards

Answer: You can use [Selenium](http://seleniumhq.org/) for complete front end testing under any mainstream testing framework (eg: JUnit). Then stick to just using a JVM alone for back end code, which is easy enough. You should be covered in this regard. With Selenium you are writing an end-to-end test as opposed to an atomic unit test on each aspect. That is a trade off to using the complete front end to test in.
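If the Python side is what you end up automating, a Selenium test can also live inside an ordinary unittest case. A rough sketch with the Selenium WebDriver bindings; the URL and element name below are placeholders, not anything from the question:

    import unittest
    from selenium import webdriver

    class LoginPageTest(unittest.TestCase):

        def setUp(self):
            # Drives a real browser, so the whole stack is exercised end-to-end.
            self.browser = webdriver.Firefox()

        def tearDown(self):
            self.browser.quit()

        def test_login_form_is_shown(self):
            self.browser.get("http://localhost:8080/login")         # placeholder URL
            form = self.browser.find_element_by_name("login_form")  # placeholder element name
            self.assertTrue(form.is_displayed())

    if __name__ == '__main__':
        unittest.main()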
Using regex to find data in Python
Question: I am new to python, and developing in general. Let me give an example of what I am trying to do. I want to find the text

    name="username" type="hidden" value="blah"

and I only want to pull the "blah". How would I begin to go about that?

Answer: You can use [regex groups](http://docs.python.org/library/re.html#re.MatchObject.group) to pick out relevant parts of a match.

    #!/usr/bin/env python

    s = """
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
    tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
    quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.

    name="username" type="hidden" value="blah"

    Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
    fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
    culpa qui officia deserunt mollit anim id est laborum.
    """

    import re
    pattern = re.compile(r'name="username"\stype="hidden"\svalue="([^"]*)"')

    for match in pattern.finditer(s):
        print match.group(1)  # => blah
oauth2 in python
Question: I'm looking to write a script that tweets from python (such as [here](http://abhi74k.wordpress.com/2010/12/21/tweeting-from-python/)), however when I try and call

    import oauth2 as oauth

I just get this error:

> ImportError: No module named oauth2

Where can I get this module from? Thanks

Answer: Make sure that you have pip installed. Then in your terminal enter the following command:

    $ pip install oauth2
How does this If conditional work in Python?
Question:

    from google.appengine.api import users
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            user = users.get_current_user()

            if user:
                self.response.headers['Content-Type'] = 'text/plain'
                self.response.out.write('Hello, ' + user.nickname())
            else:
                self.redirect(users.create_login_url(self.request.uri))

    application = webapp.WSGIApplication(
                                         [('/', MainPage)],
                                         debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == "__main__":
        main()

I don't understand how this line works:

    if user:
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello, ' + user.nickname())
    else:
        self.redirect(users.create_login_url(self.request.uri))

I'm guessing that users.get_current_user() returns a boolean? Then, if that is the case, how can it get a .nickname() method? Thanks for the guidance.

Answer: It'll basically check if `user` is an actual object or `None`. If it's `None`, the code will go into the `else` block and redirect to a "create login" page.
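Put differently, get_current_user() returns either a User object or None, and the if statement just tests that object's truth value. A tiny stand-alone illustration of the same pattern (the names here are made up, not App Engine's):

    class User(object):
        def nickname(self):
            return 'tester'

    def get_current_user(logged_in):
        # Stand-in for users.get_current_user(): an object when logged in, None otherwise.
        return User() if logged_in else None

    for logged_in in (True, False):
        user = get_current_user(logged_in)
        if user:  # None is falsy; any ordinary object is truthy
            print 'Hello, ' + user.nickname()
        else:
            print 'redirecting to the login page'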
Is there a way to keep lines from being skipped when using csv.dictWriter.writerow(somerow)
Question: I am processing some files and want to create a log of what I am processing. I created the log by using a dictionary to hold the keys and values for each observation and then I am appending the dictionary to a list (a list of dictionaries). To save the log I am using Python's csv module to write out the list of dictionaries. Initially I was using writerows but I encountered a problem in that very infrequently some of the values I am storing are something other than ascii, example

    Investee\xe2\x80\x99s Share of Profits

my solution was to iterate through my list of dictionaries using try / except statements to skip over the problem dictionaries

    for docnumb, item in enumerate(x[1]):
        try:
            dict_writer.writerow(item)
        except UnicodeEncodeError:
            missed.append(docnumb)
            item

However, this leads to an extra row being inserted in each line of the output csv file.

    value1 value2 value3 etc
    .
    .
    .
    #blank row
    value1 value2 value3 etc

I can't see how to suppress this behavior. A little more code so there is more clarity about how I got here:

    import csv

    keyset = set([])
    for item in x[1]:
        keyset |= set(item.keys())
    keys = list(keyset)

    logref = open(r'c:\December_2010_File_list.csv', 'w')
    dict_writer = csv.DictWriter(logref, keys)
    keyset |= set(item.keys())

Answer: See the documentation at <http://docs.python.org/library/csv.html#csv-examples>

They give a UnicodeWriter class as follows:

    class UnicodeWriter:
        """
        A CSV writer which will write rows to CSV file "f",
        which is encoded in the given encoding.
        """

        def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
            # Redirect output to a queue
            self.queue = cStringIO.StringIO()
            self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
            self.stream = f
            self.encoder = codecs.getincrementalencoder(encoding)()

        def writerow(self, row):
            self.writer.writerow([s.encode("utf-8") for s in row])
            # Fetch UTF-8 output from the queue ...
            data = self.queue.getvalue()
            data = data.decode("utf-8")
            # ... and reencode it into the target encoding
            data = self.encoder.encode(data)
            # write to the target stream
            self.stream.write(data)
            # empty queue
            self.queue.truncate(0)

        def writerows(self, rows):
            for row in rows:
                self.writerow(row)
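Since UnicodeWriter takes plain sequences rather than dicts, the DictWriter-style rows need a small adapter. A rough sketch of how it could be wired up, assuming the UnicodeWriter class above is defined and using made-up keys and data; note the file is opened in binary mode, which is also what normally stops the csv module from emitting the extra blank rows on Windows:

    import csv, codecs, cStringIO

    keys = ['name', 'value']  # made-up header
    rows = [{'name': u'Investee\u2019s Share of Profits', 'value': u'42'}]

    with open(r'c:\December_2010_File_list.csv', 'wb') as logref:
        writer = UnicodeWriter(logref)
        writer.writerow(keys)  # header row
        for item in rows:
            # DictWriter would do this mapping for you; here it is done by hand,
            # defaulting to an empty string for any missing key.
            writer.writerow([unicode(item.get(k, u'')) for k in keys])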
How to extract a floating number from a string in Python
Question: I have a number of strings similar to `Current Level: 13.4 db.` and I would like to extract just the floating point number. I say floating and not decimal as it's sometimes whole. Can RegEx do this or is there a better way?

Answer: If your float is always expressed in decimal notation something like

    >>> import re
    >>> re.findall("\d+\.\d+", "Current Level: 13.4 db.")
    ['13.4']

may suffice. A more robust version would be:

    >>> re.findall(r"[-+]?\d*\.\d+|\d+", "Current Level: -13.2 db or 14.2 or 3")
    ['-13.2', '14.2', '3']

If you want to validate user input, you could alternatively also check for a float by stepping to it directly:

    user_input = "Current Level: 1e100 db"
    for token in user_input.split():
        try:
            # if this succeeds, you have your (first) float
            print float(token), "is a float"
        except ValueError:
            print token, "is something else"

    # => Would print ...
    #
    # Current is something else
    # Level: is something else
    # 1e+100 is a float
    # db is something else
ImportError: cannot import name aliases
Question: I just installed Python 2.7.1 on Windows Vista using the installer from the [official site](http://www.python.org/download), and get this error when I run python.exe

    C:\Python27>python.exe
    Traceback (most recent call last):
      File "C:\Python27\Lib\site.py", line 553, in <module>
        main()
      File "C:\Python27\Lib\site.py", line 542, in main
        aliasmbcs()
      File "C:\Python27\Lib\site.py", line 467, in aliasmbcs
        import locale, codecs
      File "C:\Python27\Lib\locale.py", line 15, in <module>
        import encodings
      File "C:\Python27\Lib\encodings\__init__.py", line 32, in <module>
        from encodings import aliases
    ImportError: cannot import name aliases

With Python 3.1.3, also just installed:

    Fatal Python error: Py_Initialize: can't initialize sys standard streams
    Traceback (most recent call last):
      File "C:\Python31\lib\encodings\__init__.py", line 32, in <module>
    ImportError: cannot import name aliases

    This application has requested the Runtime to terminate it in an unusual way.
    Please contact the application's support team for more information.

What is going on? Should I configure PYTHONPATH or something like this? If so, what values should it have?

Answer: The only reference I could find points to a virus problem:

Starts here: <http://mail.python.org/pipermail/tutor/2009-January/066730.html>

Goes through: <http://mail.python.org/pipermail/tutor/2009-January/066739.html>

And ends: <http://mail.python.org/pipermail/tutor/2009-January/066750.html>

If your `\Lib\encodings\aliases.py` is missing and you cannot create a file with that name there, start looking for the trojan.
Any easy way to alter the data that comes from a mysql database?
Question: So I'm using mysql to grab data from a database and feeding it into a python function. I import mysqldb, connect to the database and run a query like this:

    conn.query('SELECT info FROM bag')
    x = conn.store_result()
    for row in x.fetch_row(100):
        print row

but my problem is that my data comes out like this

    (1.234234,)(1.12342,)(3.123412,)

when I really want it to come out like this:

    1.23424, 1.1341234, 5.1342314

(i.e. without parenthesis). I need it this way to feed it into a python function. Does anyone know how I can grab data from the database in a way that doesn't have parenthesis?

Answer: Rows are returned as tuples, even if there is only one column in the query. You can access the first and only item as `row[0]`

The first time around in the `for` loop, `row` does indeed refer to the first row. The second time around, it refers to the second row, and so on.

By the way, you say that you are using `mySQLdb`, but the methods that you are using are from the underlying `_mysql` library (low level, scarcely portable) ... why??
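Concretely, indexing into each tuple inside the loop gives the bare values; a short sketch reusing the calls from the question:

    conn.query('SELECT info FROM bag')
    x = conn.store_result()

    values = [row[0] for row in x.fetch_row(100)]  # first (and only) column of each row
    print ', '.join(str(v) for v in values)        # e.g. 1.234234, 1.12342, 3.123412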
PIL: using fromarray() with binary data and writing coloured text
Question: Hello. I have a basic problem with Python's library PIL. I have some .txt files containing only **0** and **1** values arranged in matrices. What I do is transform such "binary" data into an image with the function **Image.fromarray()** included in PIL. The format of my data produces black&white images if I multiply it by 255, and that's fine for me. Now I want to add some text to the image, using the appropriate **text** function included in PIL, but I want that text to be **coloured**. Clearly, I can't do it because the image obtained from **fromarray** has a grayscale colormap. How can I change it?

Answer: You can get a RGB image from a monochromatic one like this:

    from PIL import Image
    from numpy import eye

    arr = (eye(200)*255).astype('uint8')      # sample array
    im = Image.fromarray(arr)                 # monochromatic image
    imrgb = Image.merge('RGB', (im, im, im))  # color image
    imrgb.show()
Android SL4A (Python) Force Stop Packages fails.
Question: I'm trying to terminate a task using code like this:

    import android

    droid = android.Android()
    running = droid.getRunningPackages()[1]
    for task in running:
        if (task.find("skype") != -1):
            droid.forceStopPackages(task)

This works properly except for the killing call

    droid.forceStopPackages(task)
    # When we get here, variable `task` contains the "com.skype.raider" string

nothing happens after that. I expected that the running skype application would be terminated.

Answer: Since Android 2.2 (Froyo) killing apps is not really reliable. Play around with a task killer to see what I mean.
Python Twisted receive command from TCP write to Serial device return response
Question: I've managed to connect to a usb modem, and a client can connect via tcp to my reactor.listenTCP; the data received from the modem will be sent back to the client. I want to take dataReceived from the client and send it to the modem... I'm struggling to get this to work. Any help will be highly appreciated! The code:

    from twisted.internet import win32eventreactor
    win32eventreactor.install()
    from twisted.internet import reactor
    from twisted.internet.serialport import SerialPort
    from twisted.internet.protocol import Protocol, Factory
    from twisted.python import log
    import sys
    log.startLogging(sys.stdout)

    client_list = []  # TCP clients connecting to me

    class USBClient(Protocol):

        def connectionFailed(self):
            print "Connection Failed:", self
            reactor.stop()

        def connectionMade(self):
            print 'Connected to USB modem'
            USBClient.sendLine(self, 'AT\r\n')

        def dataReceived(self, data):
            print "Data received", repr(data)
            print "Data received! with %d bytes!" % len(data)
            # check & perhaps modify response and return to client
            for cli in client_list:
                cli.notifyClient(data)
            pass

        def lineReceived(self, line):
            print "Line received", repr(line)

        def sendLine(self, cmd):
            print cmd
            self.transport.write(cmd + "\r\n")

        def outReceived(self, data):
            print "outReceived! with %d bytes!" % len(data)
            self.data = self.data + data

    class CommandRx(Protocol):

        def connectionMade(self):
            print 'Connection received from tcp..'
            client_list.append(self)

        def dataReceived(self, data):
            print 'Command receive', repr(data)
            # Build command, if ok, send to serial port
            # ????

        def connectionLost(self, reason):
            print 'Connection lost', reason
            if self in client_list:
                print "Removing " + str(self)
                client_list.remove(self)

        def notifyClient(self, data):
            self.transport.write(data)

    class CommandRxFactory(Factory):
        protocol = CommandRx

        def __init__(self):
            client_list = []

    if __name__ == '__main__':
        reactor.listenTCP(8000, CommandRxFactory())
        SerialPort(USBClient(), 'COM8', reactor, baudrate='19200')
        reactor.run()

Answer: Your problem is not about twisted, but about python. Read this FAQ entry:

> [_How do I make input on one connection result in output on another?_](http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#HowdoImakeinputononeconnectionresultinoutputonanother)

Thing is, if you want to send stuff to a TCP-connected client in your serial-connected protocol, just pass to the protocol a reference to the factory, so you can use that reference to make the bridge. Here's some example code that roughly does this:

    class USBClient(Protocol):

        def __init__(self, network):
            self.network = network

        def dataReceived(self, data):
            print "Data received", repr(data)
            # check & perhaps modify response and return to client
            self.network.notifyAll(data)

        #...

    class CommandRx(Protocol):

        def connectionMade(self):
            self.factory.client_list.append(self)

        def connectionLost(self, reason):
            if self in self.factory.client_list:
                self.factory.client_list.remove(self)

    class CommandRxFactory(Factory):
        protocol = CommandRx

        def __init__(self):
            self.client_list = []

        def notifyAll(self, data):
            for cli in self.client_list:
                cli.transport.write(data)

When initializing, pass the reference:

    tcpfactory = CommandRxFactory()
    reactor.listenTCP(8000, tcpfactory)
    SerialPort(USBClient(tcpfactory), 'COM8', reactor, baudrate='19200')
    reactor.run()
testing interactive python programs
Question: I would like to know which testing tools for python support the testing of interactive programs. For example, I have an application launched by:

    $ python dummy_program.py
    >> Hi whats your name? Joseph

I would like to instrument `Joseph` so I can emulate that interactive behaviour.

Answer: Your best bet is probably dependency injection, so that what you'd ordinarily pick up from sys.stdin (for example) is actually an object passed in. So you might do something like this:

    import sys

    def myapp(stdin, stdout):
        print >> stdout, "Hi, what's your name?"
        name = stdin.readline()
        print >> stdout, "Hi,", name

    # This might be in a separate test module
    def test_myapp():
        mock_stdin = [create mock object that has .readline() method]
        mock_stdout = [create mock object that has .write() method]
        myapp(mock_stdin, mock_stdout)

    if __name__ == '__main__':
        myapp(sys.stdin, sys.stdout)

Fortunately, Python makes this pretty easy. Here's a more detailed link for an example of mocking stdin: <http://konryd.blogspot.com/2010/05/mockity-mock-mock-some-love-for-mock.html>
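A runnable variant of that sketch, using StringIO objects as the mock stdin/stdout; this is one way to fill in the bracketed placeholders, not part of the original answer:

    import sys
    from StringIO import StringIO

    def myapp(stdin, stdout):
        print >> stdout, "Hi, what's your name?"
        name = stdin.readline().strip()
        print >> stdout, "Hi,", name

    def test_myapp():
        mock_stdin = StringIO("Joseph\n")   # canned user input
        mock_stdout = StringIO()            # captures everything the app prints
        myapp(mock_stdin, mock_stdout)
        assert "Hi, Joseph" in mock_stdout.getvalue()

    if __name__ == '__main__':
        test_myapp()
        print "ok"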
Markaby/Erector for Python
Question: I like using Python, but hate writing HTML. Is there a Markaby/Erector-like module for Python?

Answer: Seems like you can kind of do this with lxml: <http://lxml.de/lxmlhtml.html#creating-html-with-the-e-factory>

    import lxml.html
    from lxml.html import builder as E
    from lxml.html import usedoctest

    html = E.HTML(
        E.HEAD(
            E.LINK(rel="stylesheet", href="great.css", type="text/css"),
            E.TITLE("Best Page Ever")
        ),
        E.BODY(
            E.H1(E.CLASS("heading"), "Top News"),
            E.P("World News only on this page", style="font-size: 200%"),
            "Ah, and here's some more text, by the way.",
            lxml.html.fromstring("<p>... and this is a parsed fragment ...</p>")
        )
    )

There is also [Mimsy](http://www.hoboes.com/Mimsy/hacks/object-oriented-html/) which seems similar.

    import makeHTML

    pageTitle = 'Hello World'

    pageHead = makeHTML.part('head')
    pageHead.addPart('title', content=pageTitle)

    pageBody = makeHTML.part('body')
    pageBody.addPart('h1', content=pageTitle)
    pageBody.addPart('p', content="Oh no, not again!")
    pageBody.addPart('hr')

    fullPage = makeHTML.part('html')
    fullPage.addPiece(pageHead)
    fullPage.addPiece(pageBody)
    fullPage.make()
python -> combinations of numbers and letters
Question:

    #!/usr/bin/python

    import random

    lower_a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
    upper_a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
    num = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

    all = []
    all = " ".join("".join(lower_a) + "".join(upper_a) + "".join(num))
    all = all.split()

    x = 1
    c = 1
    while x < 10:
        y = []
        for i in range(c):
            a = random.choice(all)
            y.append(a)
        print "".join(y)
        x += 1
        c += 1

what i have now outputs something like the following:

    5
    hE
    HAy
    1kgy
    Pt6JM
    2pFuCb
    Jv5osaX
    5q8PwWAO
    SvHWRKfI5

how can i make it systematically go through every combination of letters (upper and lowercase) for a given length, then add 1 to that length and repeat the process?

Answer: It's best not to recreate functionality that is already in the standard library. Take a look at the standard library module "itertools". Particularly the combinations(), permutations(), and product() functions.

    import itertools

    lower_a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
    upper_a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
    num = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

    all = []
    all = lower_a + upper_a + num

    for r in range(1, 3):
        for s in itertools.product(all, repeat=r):
            print ''.join(s)

If your version of Python is old you may not have access to these functions. However if you take a look in the documentation for Python 2.6, you can see how all of these functions can be implemented in Python. For instance, the implementation of itertools.product is given as:

    def product(*args, **kwds):
        # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
        # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
        pools = map(tuple, args) * kwds.get('repeat', 1)
        result = [[]]
        for pool in pools:
            result = [x+[y] for x in result for y in pool]
        for prod in result:
            yield tuple(prod)

You could also try a recursive solution instead:

    lower_a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
    upper_a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
    num = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

    all = []
    all = lower_a + upper_a + num

    def recursive_product(myList, length, myString = ""):
        if length == 0:
            print myString
            return
        for c in myList:
            recursive_product(myList, length-1, myString + c)

    for r in range(1, 3):
        recursive_product(all, r)
Using Python and Mechanize to submit form data and authenticate
Question: I want to submit login to the website Reddit.com, navigate to a particular area of the page, and submit a comment. I don't see what's wrong with this code, but it is not working in that no change is reflected on the Reddit site.

    import mechanize
    import cookielib

    def main():
        #Browser
        br = mechanize.Browser()

        # Cookie Jar
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)

        # Browser options
        br.set_handle_equiv(True)
        br.set_handle_gzip(True)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)

        # Follows refresh 0 but not hangs on refresh > 0
        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        #Opens the site to be navigated
        r = br.open('http://www.reddit.com')
        html = r.read()

        # Select the second (index one) form
        br.select_form(nr=1)

        # User credentials
        br.form['user'] = 'DUMMYUSERNAME'
        br.form['passwd'] = 'DUMMYPASSWORD'

        # Login
        br.submit()

        #Open up comment page
        r = br.open('http://www.reddit.com/r/PoopSandwiches/comments/f47f8/testing/')
        html = r.read()

        #Text box is the 8th form on the page (which, I believe, is the text area)
        br.select_form(nr=7)

        #Change 'text' value to a testing string
        br.form['text'] = "this is an automated test"

        #Submit the information
        br.submit()

What's wrong with this?

Answer: I would definitely suggest trying to use the API if possible, but this works for me (not for your example post, which has been deleted, but for any active one):

    #!/usr/bin/env python

    import mechanize
    import cookielib
    import urllib
    import logging
    import sys

    def main():
        br = mechanize.Browser()
        cj = cookielib.LWPCookieJar()
        br.set_cookiejar(cj)
        br.set_handle_equiv(True)
        br.set_handle_gzip(True)
        br.set_handle_redirect(True)
        br.set_handle_referer(True)
        br.set_handle_robots(False)
        br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

        r = br.open('http://www.reddit.com')

        # Select the second (index one) form
        br.select_form(nr=1)

        # User credentials
        br.form['user'] = 'user'
        br.form['passwd'] = 'passwd'

        # Login
        br.submit()

        # Open up comment page
        posting = 'http://www.reddit.com/r/PoopSandwiches/comments/f47f8/testing/'
        rval = 'PoopSandwiches'  # you can get the rval in other ways, but this will work for testing
        r = br.open(posting)

        # You need the 'uh' value from the first form
        br.select_form(nr=0)
        uh = br.form['uh']

        br.select_form(nr=7)
        thing_id = br.form['thing_id']
        id = '#' + br.form.attrs['id']
        # The id that gets posted is the form id with a '#' prepended.

        data = {'uh':uh, 'thing_id':thing_id, 'id':id, 'renderstyle':'html', 'r':rval, 'text':"Your text here!"}
        new_data_dict = dict((k, urllib.quote(v).replace('%20', '+')) for k, v in data.iteritems())
        # not sure if the replace needs to happen, I did it anyway
        new_data = 'thing_id=%(thing_id)s&text=%(text)s&id=%(id)s&r=%(r)s&uh=%(uh)s&renderstyle=%(renderstyle)s' % (new_data_dict)

        # not sure which of these headers are really needed, but it works with all
        # of them, so why not just include them.
        req = mechanize.Request('http://www.reddit.com/api/comment', new_data)
        req.add_header('Referer', posting)
        req.add_header('Accept', ' application/json, text/javascript, */*')
        req.add_header('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8')
        req.add_header('X-Requested-With', 'XMLHttpRequest')
        cj.add_cookie_header(req)
        res = mechanize.urlopen(req)

    main()

It would be interesting to turn javascript off and see how the reddit comments are handled then. Right now there is a bunch of `magic` that happens in an onsubmit function called when making your post. This is where the `uh` and `id` value get added.
Extracting Data from a .txt file using python
Question: I have many, many .xml files and I need to extract some co-ordinates from them. Extracting data straight from .xml files seems to be very, very complicated - so I am working around it by saving the .xml files as .txt files and extracting the data that way. However, when I open the .txt file, my data is all bunched together on about 6 lines, and all the scripts I have found so far select the data by reading the first word on each line, but obviously that won't work for me! I need to extract the numbers in between these tags:

    <gml:lowerCorner>137796 483752</gml:lowerCorner>
    <gml:upperCorner>138178 484222</gml:upperCorner>

In the text file they are all grouped together! Does anyone know how to extract this data? Thank you!

Answer: This is absolutely the **wrong approach**. Leave it alone and improve your ways :-)

Seriously, if the file is XML, then just use an XML parser to read it. Learning how to do it in Python isn't hard and will make your life easier now and much easier in the future, when you may find yourself facing more complex parsing needs, and you won't have to re-learn it.

Look at `xml.etree.ElementTree.ElementTree`. Here's some sample code:

    >>> from xml.etree.ElementTree import ElementTree
    >>> tree = ElementTree()
    >>> tree.parse("your_xml_file.xml")

Now just read the documentation of the module and see what you can do with `tree`. You'll be surprised to find out how simple it is to get to information this way. If you have specific questions about extracting data, I suggest you open another question in which you specify the format of the XML file you have to parse, and what data you have to take out of there. I'm sure you will have working code suggested to you in matters of minutes.
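Building on that, a rough sketch of pulling the corner values out with ElementTree; the GML namespace URI used here is an assumption, so check it against the xmlns declaration in your own files:

    from xml.etree.ElementTree import ElementTree

    GML = 'http://www.opengis.net/gml'   # assumed namespace; verify in your files

    tree = ElementTree()
    tree.parse("your_xml_file.xml")

    for tag in ('lowerCorner', 'upperCorner'):
        for elem in tree.findall('.//{%s}%s' % (GML, tag)):
            # elem.text is e.g. "137796 483752"; split it into its two numbers
            x, y = [float(v) for v in elem.text.split()]
            print tag, x, y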
Python Multidimensional Array as a single List
Question: Sure, you can have nested lists to represent multidimensional arrays, but that seems costly...

    [[0, 1], [2, 3]]

Is there some way to "encode" and "decode" the coordinate into a single number, and use that number to lookup the corresponding element?

    [0, 1, 2, 3]

This needs to work with n-dimensions, not just two, and the best I could come up with for encoding is:

    def getcellindex(self, location):
        cindex = 0
        cdrop = self.gridsize  # where self.gridsize is the number of cells
        for index in xrange(self.numdimensions):  # where self.numdimensions is the number of dimensions
            # where self.dimensions is a tuple of the different sizes of the corresponding dimension
            cdrop /= self.dimensions[index]
            cindex += cdrop * location[index]
        return cindex

There're probably ways to optimize this, but more importantly, how do I reverse the process? And, does this function work?

Answer: Are you avoiding the obvious answer (i.e. `[[1, 2], [3, 4]]`) because of concerns about its performance? If so and you're working with numbers, look at [NumPy arrays](http://docs.scipy.org/doc/numpy/reference/arrays.html). The best solution would be to not reinvent your own wheel.

**Edit:** If you do feel the need to do it your own way, you could follow a [strided index scheme](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray) like NumPy, which might go something like this:

    import operator

    def product(lst):
        return reduce(operator.mul, lst, 1)

    class MyArray(object):
        def __init__(self, shape, initval):
            self.shape = shape
            self.strides = [ product(shape[i+1:]) for i in xrange(len(shape)) ]
            self.data = [initval] * product(shape)

        def getindex(self, loc):
            return sum([ x*y for x, y in zip(self.strides, loc) ])

        def getloc(self, index):
            loc = tuple()
            for s in self.strides:
                i = index // s
                index = index % s
                loc += (i,)
            return loc

To be used as:

    arr = MyArray((3, 2), 0)
    arr.getindex((2, 1)) -> 5
    arr.getloc(5) -> (2, 1)
xmpppy and Facebook Chat Integration
Question: I'm trying to create a very simple script that uses python's xmpppy to send a message over facebook chat.

    import xmpp

    FACEBOOK_ID = "username@chat.facebook.com"
    PASS = "password"
    SERVER = "chat.facebook.com"

    jid = xmpp.protocol.JID(FACEBOOK_ID)
    C = xmpp.Client(jid.getDomain(), debug=[])

    if not C.connect((SERVER, 5222)):
        raise IOError('Can not connect to server.')
    if not C.auth(jid.getNode(), PASS):
        raise IOError('Can not auth with server.')

    C.send(xmpp.protocol.Message("friend@chat.facebook.com", "Hello world",))

This code works to send a message via gchat, however when I try with facebook I receive this error:

    An error occurred while looking up _xmpp-client._tcp.chat.facebook.com

When I remove @chat.facebook.com from the FACEBOOK_ID I get this instead:

    File "gtalktest.py", line 11, in
      if not C.connect((SERVER,5222)):
    File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 195, in connect
      if not CommonClient.connect(self,server,proxy,secure,use_srv) or secureNone and not secure: return self.connected
    File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 179, in connect
      if not self.Process(1): return
    File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 302, in dispatch
      handler['func'](session,stanza)
    File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 214, in streamErrorHandler
      raise exc((name,text))
    xmpp.protocol.HostUnknown: (u'host-unknown', '')

I also notice that any time I import xmpp I get the following two messages when running:

    /home/john/xmpppy-0.3.1/xmpp/auth.py:24: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
      import sha,base64,random,dispatcher
    /home/john/xmpppy-0.3.1/xmpp/auth.py:26: DeprecationWarning: the md5 module is deprecated; use hashlib instead
      import md5

I'm fairly new to solving these kinds of problems, and advice or links to resources that could help me move forward in solving these issues would be greatly appreciated. Thanks for reading!

Answer: I also started the same project, and was trapped by the same problem. I found the solution too. You have to write the Facebook UserName (hence you must choose a Username) and it has to be in small caps. This is the most important part. Most probably you, like me, were not writing it in small caps.