Integration issue with PyObjC and Tkinter Question: The following simple code:

    from PyObjCTools import AppHelper
    import AppKit
    import Tkinter

    class App(AppKit.NSApplication):
        def finishLaunching(self):
            self.root = Tkinter.Tk()

    _ = App.sharedApplication()
    AppHelper.runEventLoop()

yields the following exception: `Python[23717:d07] -[App _setup:]: unrecognized selector sent to instance 0x105d05340` What am I doing wrong? Answer: I don't think you can mix the Tkinter and Cocoa toolkits so interchangeably. `self.root` is an attribute on the class `App`, which inherits from `AppKit.NSApplication`. My guess is that the `Tk()` call returns a pointer that is then passed to the Cocoa frameworks, but it points to a Tk data structure that Cocoa can't understand. Also, Tkinter and PyObjC each need their own event loop; I'm not sure you can even mix the two in one process (though I've never tried). My recommendation would be to use one UI toolkit or the other, but not both.
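To illustrate the recommendation, here is a minimal sketch that stays entirely inside Tkinter, so `mainloop()` owns the single event loop (the window title and label text are just placeholders):

    import Tkinter

    root = Tkinter.Tk()                       # Tkinter owns the one event loop
    root.title("Single-toolkit app")
    Tkinter.Label(root, text="Hello").pack()
    root.mainloop()                           # blocks until the window closes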
How to calculate the number of days in the year(s) between 2 dates in Python Question: I'll try to explain what I need with an example:

    date 1: 1 January 2000
    date 2: 17 November 2006

Now I want to know how many days there are between date 1 and date 2 in each of the years 2000, 2001, ..., 2006, so I need something that returns something like this (it doesn't matter if it's a list or something similar):

    2000: 365, 2001: 365, ..., 2006: 320

I've looked for something like this on the internet, but that only turned up ways to calculate the total number of days/months/years between 2 dates. Kind regards, Daquicker Answer: Hm, try something like this:

    import datetime, calendar

    date1 = datetime.date(year1, month1, day1)  # months and days are 1-based
    date2 = datetime.date(year2, month2, day2)

    days_in_first_year = (datetime.date(year1, 12, 31) - date1).days
    days_in_last_year = (date2 - datetime.date(year2, 1, 1)).days

    if year1 != year2:
        n_days_list = [days_in_first_year]
        for year in range(year1 + 1, year2):
            n_days_list.append(365 + (1 * calendar.isleap(year)))
        n_days_list.append(days_in_last_year)
    else:
        n_days_list = [days_in_first_year + days_in_last_year]

I haven't tested this; there might be some off-by-one errors, so make sure it does what you expect. Edit: corrected the boundaries of the `range()` call; correctly handle the `year1 == year2` case.
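A more compact sketch of the same idea, clamping each year's start and end to the two dates and returning a dict. Note the boundary convention: this counts both endpoints inclusively, so leap-year 2000 yields 366; drop the `+ 1` if you want half-open counting like the asker's expected 365/320:

    import datetime

    def days_per_year(date1, date2):
        result = {}
        for year in range(date1.year, date2.year + 1):
            start = max(date1, datetime.date(year, 1, 1))   # clamp to year start
            end = min(date2, datetime.date(year, 12, 31))   # clamp to year end
            result[year] = (end - start).days + 1           # inclusive count
        return result

    print days_per_year(datetime.date(2000, 1, 1), datetime.date(2006, 11, 17))
    # {2000: 366, 2001: 365, ..., 2006: 321}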
Threading issue with instance variables Question: So it seems I'm misunderstanding some basic stuff about Python. Passing an instance variable into a function should only pass a reference to the object itself, so if the object is immutable, doing something like `self.var = "something"; foo(self.var)` shouldn't change `self.var` even if `foo` assigns a new value to its parameter. So far everything's fine and as expected. But now consider this:

    import threading

    class Test():
        def __init__(self):
            self.lock = threading.Lock()
            self.arg = "arg0"

        def foo(self, i):
            with self.lock:
                threading.Thread(target=lambda: bar(self.arg)).start()
                self.arg = "arg" + str(i)

    def bar(arg):
        import time
        time.sleep(1)
        print("Bar: " + arg)

    if __name__ == '__main__':
        t = Test()
        for i in range(1, 6):
            t.foo(i)

I create a thread object with a reference to the current string and afterwards update it, which the thread shouldn't see. Thanks to the lock, the next thread also should only start after the update, so while I can't make any assumptions about the sequence in which arg0-5 will be printed, I'd assume every arg should be printed exactly once. But I get the following output (Win7 x64, Python 3.1 x64):

    Bar: arg0
    Bar: arg2
    Bar: arg2
    Bar: arg5
    Bar: arg3

Edit: Okay, after typing this up I had the glorious idea that the lambda expression probably isn't executed when creating the thread but later on, which would explain the behavior, so the simple workaround would be to just create a local variable and use that. Well, now that was fast help by SO ;) Answer: Since I noticed I still hadn't answered this one, here we go: the lambda creates a closure over `self`, not over `self.arg` itself, which means that when `bar` later executes, it looks up `self.arg` at call time and gets the newest value, not the one at lambda creation time.
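A minimal sketch of the workaround from the edit: bind the value at lambda creation time, for example via a default argument (the name `current` is just illustrative):

    def foo(self, i):
        with self.lock:
            # default arguments are evaluated when the lambda is created,
            # so each thread captures the value self.arg holds right now
            threading.Thread(target=lambda current=self.arg: bar(current)).start()
            self.arg = "arg" + str(i)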
Building an executable using Python, VTK and py2exe Question: Is it possible to create a binary executable with py2exe for a program that uses VTK? Could someone provide a minimum working example, or at least some hints? py2exe is not strictly necessary; if there is a working solution using other similar programs (bbfreeze etc.), I am interested too. Answer: This example uses py2exe. Use `packages` to add any referenced libraries and the `includes` option to add dependencies. I am not too sure about the exact semantics, and I reached this stable configuration after much trial and error. Hopefully you can use this as a template to go ahead.

    from distutils.core import setup
    import py2exe
    import modulefinder
    from iso8601 import iso8601

    setup(name='exeExample',
          version='1.0',
          description='Exe example using py2Exe',
          author='Urjit Singh Bhatia',
          author_email='person@user.com',
          packages=['example', 'someLib'],
          console=['src\\a.py', 'src\\b.py', 'src\\c.py', 'src\\d.py'],
          options={"py2exe": {"includes": ["someLib", "csv", "iso8601",
                                           "pymssql", "uuid", "decimal",
                                           "urllib2", "traceback", "re",
                                           "_mssql", "os"]}})

Keep in mind that the `includes` option sometimes needs to be nested: since pymssql here uses _mssql, I was getting an error saying that _mssql was missing, so I had to explicitly add it as a dependency. I hope someone can improve on and explain this. Edits: 1. Added imports. 2. Simply running this creates a folder called `dist` where you will see the exe(s) and the dependencies.
Does Haskell have a splat operator like Python/Ruby? Question: In Python / Ruby (and others, I'm sure) you can prefix an enumerable with `*` ("splat") to use it as an argument list, e.g. in Python:

    >>> def foo(a,b): return a + b
    >>> foo(1,2)
    3
    >>> tup = (1,2)
    >>> foo(*tup)
    3

Is there something similar in Haskell? I assume it wouldn't work on lists due to their unknown length, but I feel that tuples ought to work. Here's an example of what I'd like:

    ghci> let f a b = a + b
    ghci> :t f
    f :: Num a => a -> a -> a
    ghci> f 1 2
    3
    ghci> let tuple = (1,2)

I'm looking for an operator (or function) that allows me to do:

    ghci> f <op> tuple
    3

I have seen `<*>` being called "splat", but it doesn't seem to refer to the same thing as splat in other languages. I tried it anyway:

    ghci> import Control.Applicative
    ghci> f <*> tuple

    <interactive>:1:7:
        Couldn't match expected type `b0 -> b0'
                    with actual type `(Integer, Integer)'
        In the second argument of `(<*>)', namely `tuple'
        In the expression: f <*> tuple
        In an equation for `it': it = f <*> tuple

Answer: Yes, you can apply functions to tuples using the [tuple](http://hackage.haskell.org/package/tuple) package. Check out, in particular, the [uncurryN](http://hackage.haskell.org/packages/archive/tuple/0.2.0.1/doc/html/Data-Tuple-Curry.html) function:

    Prelude Data.Tuple.Curry> (+) `uncurryN` (1, 2)
    3

(For plain pairs, the Prelude's built-in `uncurry` already does this: `uncurry (+) (1, 2)` also gives `3`; `uncurryN` generalizes it to larger tuples.)
Retrieve plain-text JSON, insert into JavaScript Question: I have a URL `http://myapp.com/get_data` that returns an `application/json` `Content-Type`. When I browse to that URL, I get a plain-text JSON array in my browser window:

    [[key, value], [key, value], [key, value], ...]

I also have a JavaScript function that expects data in JSON array format:

    function process_data() {
        var data = // give me more data in JSON array format...
    }

How do I make my JavaScript browse to `http://myapp.com/get_data` and assign the resulting JSON array to the `data` variable inside `process_data()`? I'm new to JavaScript (coming from a Python background) and I would appreciate it if you could suggest solutions that use the core JavaScript library. Solutions using other libraries are welcome too, preferably those considered best practice.

# UPDATE

It appears I wasn't clear in my question. Let me provide an example from Python. After doing the necessary imports, I can do something like:

    url = "http://myapp.com/get_data"
    page = urllib2.urlopen(url)
    page_source = page.read()

This time, `page_source` is already a Python `str` object that I can easily play with, assign to other variables, etc. If I could mix Python and JavaScript together, for the context of this question, I'd want to do something like:

    function process_data() {
        url = "http://myapp.com/get_data"
        page = urllib2.urlopen(url)
        page_source = page.read()
        var data = convert_str_to_JSON(page_source)
    }

Of course that is just an ugly mishmash of code, but I hope it conveys what I'm trying to get at:

1. JavaScript will `GET` a URL.
2. Read the source.
3. Interpret the source as JSON.
4. Assign it to a variable.

Answer: Newer browsers support JSON parsing natively: you can say `JSON.parse('json data')`. For older browsers (such as IE 7 or 6), you can use this library: <https://github.com/douglascrockford/JSON-js> Use `json2.js` from the above library; it checks whether a native browser implementation is present and, if not, adds one. Do not use `eval` (as [eval is evil](http://blogs.msdn.com/b/ericlippert/archive/2003/11/01/53329.aspx))! **Update:** To get the 'json data', use this:

    var jsonObject = {};
    var xhr = new XMLHttpRequest();
    xhr.open( "GET", url, true ); // true makes this call asynchronous
    xhr.onreadystatechange = function () { // need an event handler since our call is async
        if ( xhr.readyState == 4 && xhr.status == 200 ) { // check for success
            jsonObject = JSON.parse( xhr.responseText );
        }
    };
    xhr.send(null);

Also, I would suggest reading this [article](http://www.ilinsky.com/articles/XMLHttpRequest/) on cross-browser issues and implementation of the `XMLHttpRequest` object.
wxPython: Disable a notebook tab? Question: Is there any way to disable a notebook tab, like you can with the widgets themselves? I have a long process I kick off, and while it should be pretty self-explanatory for those looking at it, I want to be able to prevent the user from mucking around in other tabs until the process it is running is complete. I couldn't seem to find anything in `wx.Notebook` to help with this. Code snippet:

    def __init__(self, parent):
        wx.Notebook.__init__(self, parent, id=wx.ID_ANY, style=wx.BK_DEFAULT)
        self.AddPage(launchTab.LaunchPanel(self), "Launch")
        self.AddPage(scanTab.ScanPanel(self), "Scan")
        self.AddPage(extractTab.ExtractPanel(self), "Extract")
        self.AddPage(virtualsTab.VirtualsPanel(self), "Virtuals")

Answer: It is not doable with `wx.Notebook`, but you can use one of the more advanced widgets such as `wx.lib.agw.aui.AuiNotebook`:

    import wx
    import wx.lib.agw.aui as aui

    class MainWindow(wx.Frame):
        def __init__(self, *args, **kwargs):
            wx.Frame.__init__(self, *args, **kwargs)
            style = aui.AUI_NB_DEFAULT_STYLE ^ aui.AUI_NB_CLOSE_ON_ACTIVE_TAB
            self.notebook = aui.AuiNotebook(self, agwStyle=style)
            self.panel1 = wx.Panel(self.notebook)
            self.panel2 = wx.Panel(self.notebook)
            self.panel3 = wx.Panel(self.notebook)
            self.notebook.AddPage(self.panel1, "First")
            self.notebook.AddPage(self.panel2, "Second")
            self.notebook.AddPage(self.panel3, "Third")
            self.notebook.EnableTab(1, False)
            self.Show()

    app = wx.App(False)
    win = MainWindow(None)
    app.MainLoop()
Accessing an RFID reader via a computer from a server Question: On a computer I want to run a web-based application which is served by a server, and this application has to access an RFID reader. I have set up this computer to connect to the server via wireless LAN and to the RFID reader via an Ethernet cable (I tried both straight-through and crossover cables). The reader cannot connect to the server directly because of the mobility needed. The connection setup is shown in the image below. The server assigns the computer's IP address using DHCP; the connection between the computer and the reader is configured statically. ![enter image description here](http://i.stack.imgur.com/d6OXI.png) The computer can clearly access both the server and the reader, but the server cannot access the reader as I need it to. EDIT: The application is developed in Python with the Django framework. To connect to the reader I simply use `socket`:

    import socket

    HOST = '192.168.1.21'
    PORT = 50007
    soc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    soc.settimeout(2)
    soc.connect((HOST, PORT))

Answer: I would implement a Java applet and upload it to the server. When a user opens a page from this server, the applet loads and starts working. The applet runs in the context of the user's computer, so it can access the RFID reader and pass the information on to the server. If accessing the RFID reader requires DLL libraries or any native OS modules, then the applet has to be signed; use the keytool and jarsigner tools from the Java SDK to sign the applet before uploading it to the server. [Here you can find information on Java networking](http://download.oracle.com/javase/tutorial/networking/index.html). [And here you can find information on Java applet technology](http://download.oracle.com/javase/tutorial/deployment/applet/)
Given two Python lists of the same length, how to return the best matches of similar values? Question: Given two Python lists with strings in them (names of persons):

    list_1 = ['J. Payne', 'George Bush', 'Billy Idol', 'M Stuart', 'Luc van den Bergen']
    list_2 = ['John Payne', 'George W. Bush', 'Billy Idol', 'M. Stuart', 'Luc Bergen']

I want a mapping of the names that are most similar:

    'J. Payne'           -> 'John Payne'
    'George Bush'        -> 'George W. Bush'
    'Billy Idol'         -> 'Billy Idol'
    'M Stuart'           -> 'M. Stuart'
    'Luc van den Bergen' -> 'Luc Bergen'

Is there a neat way to do this in Python? The lists contain 5 or 6 names on average. Sometimes more, but this is seldom. Sometimes it is just one name in each list, which could be spelled slightly differently. Answer: Using the function defined here: <http://hetland.org/coding/python/levenshtein.py>

    >>> for i in list_1:
    ...     print i, '==>', min(list_2, key=lambda j: levenshtein(i, j))
    ...
    J. Payne ==> John Payne
    George Bush ==> George W. Bush
    Billy Idol ==> Billy Idol
    M Stuart ==> M. Stuart
    Luc van den Bergen ==> Luc Bergen

You could use `functools.partial` instead of the lambda:

    >>> from functools import partial
    >>> for i in list_1:
    ...     print i, '==>', min(list_2, key=partial(levenshtein, i))
    ...
    J. Payne ==> John Payne
    George Bush ==> George W. Bush
    Billy Idol ==> Billy Idol
    M Stuart ==> M. Stuart
    Luc van den Bergen ==> Luc Bergen
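In case the linked page goes away, here is a minimal self-contained sketch of the edit-distance function the answer relies on: a standard dynamic-programming Levenshtein, not necessarily identical to Hetland's version:

    def levenshtein(a, b):
        """Edit distance between a and b (insertions, deletions, substitutions)."""
        n = len(b)
        current = range(n + 1)               # row for the empty prefix of a
        for i in range(1, len(a) + 1):
            previous, current = current, [i] + [0] * n
            for j in range(1, n + 1):
                insert, delete = current[j - 1] + 1, previous[j] + 1
                change = previous[j - 1] + (a[i - 1] != b[j - 1])
                current[j] = min(insert, delete, change)
        return current[n]

    print levenshtein('J. Payne', 'John Payne')   # 3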
Eclipse - interactive shell for Python not working Question: I've installed and configured PyDev with Eclipse, but when I try to use the interactive shell I get errors. Can someone have a look and advise? Code for the two functions:

    def convert_milage(t):
        # t - number of miles per 1 gallon
        litersPer100Kilometers = 100 / ((t * 1.609) / 3.785)
        return litersPer100Kilometers

    def liters_needed(t, v):
        # t: distance in kms, v: fuel consumption
        fuelConsumption = convert_milage(v)
        litersNeeded = (t * fuelConsumption) / 100
        return litersNeeded

Error: <http://i54.tinypic.com/2afmu7p.png> Error message:

    >>> convert_milage(100)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'convert_milage' is not defined
    >>>

Answer: You need to open the console for the file you're in, not just use the general Python console, as the latter knows nothing about the functions in your source file. Press `Ctrl-Alt-Enter` and select `Current active editor`. That should import the current module into your shell and make things work better.
Perl Compatible Regular Expressions (PCRE) in Python Question: I have to parse some strings based on PCRE in Python, and I've no idea how to do that. The strings I want to parse look like:

    match mysql m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s p/MySQL/ i/$1/

In this example, I have to get these different items:

    "m/^.\0\0\0\n(4\.[-.\w]+)\0...\0/s" ; "p/MySQL/" ; "i/$1/"

The only thing I've found relating to PCRE manipulation in Python is this module: <http://pydoc.org/2.2.3/pcre.html> (but it says it's a .so ..). Do you know of a Python module that can parse this kind of string? Answer: # Be Especially Careful with non‐ASCII in Python

There are some really subtle issues with how Python deals with, or fails to deal with, non-ASCII in patterns and strings. Worse, these disparities vary substantially according not just to which version of Python you are using, but also whether you have a "wide build". In general, when you're doing Unicode stuff, **Python3 with a wide build works best** and Python2 with a narrow build works worst, but all combinations are still a pretty far cry from how Perl regexes work _vis‐à‐vis_ Unicode. If you're looking for ᴘᴄʀᴇ patterns in Python, you may have to look a bit further afield than its old `re` module.

**EDIT**: The vexing "wide-build" issues have **_finally_ been fixed** once and for all — provided you use a sufficiently advanced release of Python. Here's an excerpt from [the v3.3 release notes](http://docs.python.org/dev/whatsnew/3.3.html):

> ## _Functionality_
>
> Changes introduced by [PEP 393](http://www.python.org/dev/peps/pep-0393) are the following:
>
> * Python now always supports the full range of Unicode codepoints, including non-BMP ones (i.e. from U+0000 to U+10FFFF). The distinction between narrow and wide builds no longer exists and Python now behaves like a wide build, even under Windows.
> * With the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:
>   * `len()` now always returns 1 for non-BMP characters, so `len('\U0010FFFF') == 1`;
>   * surrogate pairs are not recombined in string literals, so `'\uDBFF\uDFFF' != '\U0010FFFF'`;
>   * indexing or slicing non-BMP characters returns the expected value, so `'\U0010FFFF'[0]` now returns `'\U0010FFFF'` and not `'\uDBFF'`;
>   * all other functions in the standard library now correctly handle non-BMP codepoints.
> * The value of `sys.maxunicode` is now always 1114111 (0x10FFFF in hexadecimal). The `PyUnicode_GetMax()` function still returns either 0xFFFF or 0x10FFFF for backward compatibility, and it should not be used with the new Unicode API (see [issue 13054](http://bugs.python.org/issue13054)).
> * The `./configure` flag `--with-wide-unicode` has been removed.

## The Future of Python Regexes

In contrast to what's currently available in the standard Python distribution's `re` library, [Matthew Barnett's `regex` module for both Python 2 and Python 3 alike](http://pypi.python.org/pypi/regex) is much, much better in pretty much all possible ways, and will quite probably replace `re` eventually. Its particular relevance to your question is that his `regex` library is far more ᴘᴄʀᴇ (_i.e._ **it's much more Perl‐compatible**) in every way than `re` now is, which will make porting Perl regexes to Python easier for you. Because it is a ground‐up rewrite (as in from‐scratch, not as in hamburger :), it was written with non-ASCII in mind, which `re` was not.

The `regex` library therefore much more closely follows the (current) recommendations of [UTS#18: Unicode Regular Expressions](http://unicode.org/reports/tr18/) in how it approaches things. It **meets or exceeds the UTS#18 Level 1 requirements** in most if not all regards, something you normally have to use the ICU regex library or Perl itself for — or, if you are especially courageous, the new Java 7 update to its regexes, as that also conforms to the [Level One requirements](http://unicode.org/reports/tr18/#Basic_Unicode_Support) from UTS#18.

Beyond meeting those Level One requirements, which are all absolutely essential for basic Unicode support but which are **not met by Python's current `re` library**, the awesome `regex` library also meets the Level Two requirements for [RL2.5](http://unicode.org/reports/tr18/#Name_Properties) Named Characters (`\N{...}`), [RL2.2](http://unicode.org/reports/tr18/#Default_Grapheme_Clusters) Extended Grapheme Clusters (`\X`), and the new RL2.7 on Full Properties from [revision 14 of UTS#18](http://www.unicode.org/reports/tr18/tr18-14.html). Matthew's `regex` module also does Unicode casefolding, so that case-insensitive matches work reliably on Unicode, **which `re` does not.**

**EDIT**: The following is no longer true, because `regex` now supports full Unicode casefolding, like Perl and Ruby.

> ~~One super‐tiny difference is that for now, Perl's case‐insensitive patterns use full string‐oriented casefolds while his `regex` module still uses simple single‐char‐oriented casefolds, but this is something he's looking into. It's actually a very hard problem, one which, apart from Perl, only Ruby even attempts.~~

Under full casefolding, this means that (for example) `"ß"` now correctly matches `"SS"`, `"ss"`, `"ſſ"`, `"ſs"` (etc.) when case-insensitive matching is selected. (This is admittedly more important in the Greek script than the Latin one.)

See also the slides or doc source code from [my 3rd OSCON2011 talk](http://training.perl.com/OSCON2011/index.html) entitled _"**Unicode Support Shootout: The Good, the Bad, and the (mostly) Ugly**"_ for general issues in Unicode support across Javascripts, PHP, Go, Ruby, Python, Java, and Perl. If you can't use either Perl regexes or possibly the ICU regex library (which doesn't have named captures, alas!), then Matthew's `regex` for Python is probably your best shot.

* * *

Nᴏᴛᴀ Bᴇɴᴇ s.ᴠ.ᴘ. (= _s'il vous plaît, et même s'il ne vous plaît pas_ :) The following unsolicited noncommercial nonadvertisement was _not_ actually put here by the author of the Python `regex` library. :)

## Cool `regex` Features

The Python `regex` library has a **cornucopia of superneat features**, some of which are found in no other regex system anywhere. These make it very much worth checking out no matter whether you happen to be using it for its ᴘᴄʀᴇ‐ness or its stellar Unicode support. A few of this module's outstanding features of interest are:

* **Variable‐width lookbehind**, a feature which is quite rare in regex engines and very frustrating not to have when you really want it. This may well be the most frequently requested feature in regexes.
* Backwards searching, so you don't have to reverse your string yourself first.
* Scoped `ismx`‐type options, so that `(?i:foo)` only casefolds for foo, not overall, or `(?-i:foo)` to turn it off just on foo. This is how Perl works (or can).
* Fuzzy matching based on edit‐distance (which Udi Manber's `agrep` and `glimpse` also have)
* Implicit shortest‐to‐longest sorted named lists via `\L<list>` interpolation
* Metacharacters that specifically match only the start or only the end of a word rather than either side (`\m`, `\M`)
* Support for all Unicode line separators (Java can do this, as can Perl, albeit somewhat begrudgingly, with `\R` per [RL1.6](http://unicode.org/reports/tr18/#Line_Boundaries)).
* Full set operations — union, intersection, difference, and symmetric difference — on bracketed character classes per [RL1.3](http://unicode.org/reports/tr18/#Subtraction_and_Intersection), which is much easier than getting at it in Perl.
* Allows for repeated capture groups like `(\w+\s+)+` where you can get all separate matches of the first group, not just its last match. (I believe C♯ might also do this.)
* A more straightforward way to get at overlapping matches than sneaky capture groups in lookaheads.
* Start and end positions for all groups for later slicing/substring operations, much like Perl's `@+` and `@-` arrays.
* The branch‐reset operator via `(?|...|...|...)` to reset group numbering in each branch the way it works in Perl.
* Can be configured to have your coffee waiting for you in the morning.
* Support for the more sophisticated word boundaries from [RL2.3](http://unicode.org/reports/tr18/#Default_Word_Boundaries).
* Assumes Unicode strings by default, and fully supports [RL1.2a](http://unicode.org/reports/tr18/#Compatibility_Properties) so that `\w`, `\b`, `\s`, and such work on Unicode.
* Supports `\X` for graphemes.
* Supports the `\G` continuation point assertion.
* Works correctly for 64‐bit builds (`re` only has 32‐bit indices).
* Supports multithreading.

Ok, that's enough hype. :)

# Yet Another Fine Alternate Regex Engine

One final alternative that is worth looking at if you are a regex geek is the [Python library bindings](http://pypi.python.org/pypi/re2/) to Russ Cox's awesome [RE2 library](http://swtch.com/~rsc/regexp/). It also supports Unicode natively, including simple char‐based casefolding, and unlike `re` it notably provides for both the Unicode General Category and the Unicode Script character properties, which are the two key properties you most often need for the simpler kinds of Unicode processing.

Although RE2 misses out on a few Unicode features like `\N{...}` named character support found in ICU, Perl, and Python, it has extremely serious computational advantages that make it **the regex engine of choice** whenever you're concerned with starvation‐based denial‐of‐service attacks through regexes in web queries and such. It manages this by forbidding backreferences, which cause a regex to stop being regular and risk super‐exponential explosions in time and space.

Library bindings for RE2 are available not just for C/C++ and Python, but also for Perl and most especially for Go, where it is slated to very shortly replace the standard regex library there.
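For a quick taste of what the `regex` module adds over `re`, here is a small sketch (install it from PyPI as `regex`; behavior assumes a reasonably recent release):

    import regex

    # Variable-width lookbehind, which re rejects outright
    print(regex.search(r'(?<=foo\d+)bar', 'foo123bar').group())      # bar

    # Scoped inline flags: only 'foo' is case-insensitive here
    print(regex.match(r'(?i:foo)bar', 'FOObar').group())             # FOObar
    print(regex.match(r'(?i:foo)bar', 'FOOBAR'))                     # None

    # Fuzzy matching: tolerate up to two edits in the quoted word
    print(bool(regex.search(r'(?:mysql){e<=2}', 'msyql backend')))   # True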
Cannot urlencode() after storing QueryDict in session Question: I tried to post this to the django-users group (<http://groups.google.com/group/django-users/browse_thread/thread/8572d7f4075cfe0e>) but got no responses; maybe here I will get more help. I store `request.GET` in the session:

    request.session['query_string'] = request.GET

then I retrieve the value in another page and try to urlencode the QueryDict:

    context['query_string'] = request.session['query_string'].urlencode()

In my context I get Python's string representation of the QueryDict object instead of the expected `key0=value0&key1=value1&...` string. If, instead of the QueryDict, I store the urlencoded string in the session, everything works, of course:

    request.session['query_string'] = request.GET.urlencode()

Is this a bug? Answer: This is not a bug. If you take a peek at the `QueryDict` definition (see <https://github.com/django/django/blob/master/django/http/__init__.py>), it says explicitly that it's immutable unless you create a copy of it. To demonstrate this, here's what I have in my Python shell:

    >>> from django.http import QueryDict
    >>> q1 = QueryDict('', mutable=False)
    >>> q2 = QueryDict('', mutable=True)
    >>> q1['next'] = '/a&b/'
    Traceback (most recent call last):
      File "<console>", line 1, in <module>
      File "/Users/kenny/Desktop/Kreybits/locker/python/lib/python2.7/site-packages/django/http/__init__.py", line 357, in __setitem__
        self._assert_mutable()
      File "/Users/kenny/Desktop/Kreybits/locker/python/lib/python2.7/site-packages/django/http/__init__.py", line 354, in _assert_mutable
        raise AttributeError("This QueryDict instance is immutable")
    AttributeError: This QueryDict instance is immutable
    >>> q2['next'] = '/a&b/'
    >>> q2.urlencode()
    'next=%2Fa%26b%2F'

The `mutable` argument is set to False by default, and since `request.session['query_string'] = request.GET` initialized it to an empty QueryDict to begin with, calling `urlencode()` only returns an empty str, while `request.session['query_string'] = request.GET.urlencode()` works because you're working with a QueryDict that has been initialized with the appropriate key/values.
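A small self-contained sketch of the safe pattern: keep only the plain encoded string in the session and rebuild a QueryDict later if needed (the explicit `encoding` argument is only there so the snippet runs outside a configured Django project):

    from django.http import QueryDict

    q = QueryDict('key0=value0&key1=value1', encoding='utf-8')  # ~ request.GET
    s = q.urlencode()            # plain str: pickles cleanly into any session
    print(s)                     # key0=value0&key1=value1

    q2 = QueryDict(s, encoding='utf-8')  # re-hydrate when dict access is needed
    print(q2['key1'])                    # value1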
webapp2 + jinja2: How can I get uri_for() working in jinja2 views? Question: How can I pass model-specific URLs to the template? Let's say I want to build an edit link. I would guess that using the `uri_for()` function would be an easy approach, but the following gives me "UndefinedError: 'webapp2' is undefined":

    {% webapp2.uri_for("editGreeting", greeting.key().id()) %}

Or should I prepare these in the MainPage request handler? If so, I don't know how to add them to each greeting. The following code example is taken from: <http://webapp-improved.appspot.com/tutorials/gettingstarted/templates.html> Controller/Handler:

    class MainPage(webapp2.RequestHandler):
        def get(self):
            guestbook_name = self.request.get('guestbook_name')
            greetings_query = Greeting.all().ancestor(
                guestbook_key(guestbook_name)).order('-date')
            greetings = greetings_query.fetch(10)

            if users.get_current_user():
                url = users.create_logout_url(self.request.uri)
                url_linktext = 'Logout'
            else:
                url = users.create_login_url(self.request.uri)
                url_linktext = 'Login'

            template_values = {
                'greetings': greetings,
                'url': url,
                'url_linktext': url_linktext,
            }

            path = os.path.join(os.path.dirname(__file__), 'index.html')
            self.response.out.write(template.render(path, template_values))

Template/View:

    <html>
      <body>
        {% for greeting in greetings %}
          {% if greeting.author %}
            <b>{{ greeting.author.nickname }}</b> wrote:
          {% else %}
            An anonymous person wrote:
          {% endif %}
          <blockquote>{{ greeting.content|escape }}</blockquote>
        {% endfor %}

        <form action="/sign" method="post">
          <div><textarea name="content" rows="3" cols="60"></textarea></div>
          <div><input type="submit" value="Sign Guestbook"></div>
        </form>

        <a href="{{ url }}">{{ url_linktext }}</a>
      </body>
    </html>

The class BaseHandler is the class all handlers inherit from. I tried the following, as @moraes suggested, but I still get:

    value = self.func(obj)
      File "C:\Users\timme04\python\hellowebapp\handlers\basehandler.py", line 23, in jinja2
        return jinja2.get_jinja2(factory=self.jinja2_factory)
      File "C:\Users\timme04\python\hellowebapp\webapp2_extras\jinja2.py", line 212, in get_jinja2
        jinja2 = app.registry[key] = factory(app)
    TypeError: jinja2_factory() takes exactly 1 argument (2 given)

:(

    import webapp2
    from webapp2_extras import jinja2

    class BaseHandler(webapp2.RequestHandler):
        def jinja2_factory(app):
            j = jinja2.Jinja2(app)
            j.environment.filters.update({
                # Set filters.
                # ...
            })
            j.environment.globals.update({
                # Set global variables.
                'uri_for': webapp2.uri_for,
                # ...
            })
            return j

        @webapp2.cached_property
        def jinja2(self):
            # Returns a Jinja2 renderer cached in the app registry.
            return jinja2.get_jinja2(factory=self.jinja2_factory)

        def render_response(self, _template, **context):
            # Renders a template and writes the result to the response.
            rv = self.jinja2.render_template(_template, **context)
            self.response.write(rv)

Answer: You must set `uri_for` as a global variable. One way to do it is to set an initializer for global variables and filters. Note that `jinja2_factory` must be a plain module-level function: defining it as an instance method is what caused the `TypeError` above, since it then receives `self` as an extra argument.

    import webapp2
    from webapp2_extras import jinja2

    def jinja2_factory(app):
        j = jinja2.Jinja2(app)
        j.environment.filters.update({
            # Set filters.
            # ...
        })
        j.environment.globals.update({
            # Set global variables.
            'uri_for': webapp2.uri_for,
            # ...
        })
        return j

    class BaseHandler(webapp2.RequestHandler):
        @webapp2.cached_property
        def jinja2(self):
            # Returns a Jinja2 renderer cached in the app registry.
            return jinja2.get_jinja2(factory=jinja2_factory)

        def render_response(self, _template, **context):
            # Renders a template and writes the result to the response.
            rv = self.jinja2.render_template(_template, **context)
            self.response.write(rv)

Edit: changed the example to use a RequestHandler.
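With `uri_for` registered as a template global, the edit link can be built directly in the template. Note the correct Jinja2 syntax is an expression (`{{ ... }}`), not a statement tag (`{% ... %}`). A sketch, assuming a named route like the one below exists (the route path, name, and `EditGreeting` handler are illustrative, not from the tutorial):

    app = webapp2.WSGIApplication([
        webapp2.Route('/greeting/<greeting_id>/edit', handler=EditGreeting,
                      name='editGreeting'),
    ])

and in the template:

    {% for greeting in greetings %}
      <a href="{{ uri_for('editGreeting', greeting_id=greeting.key().id()) }}">Edit</a>
    {% endfor %}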
rpy2 plot problem Question: I use rpy2 2.0.8, R 2.11.1 and Python 2.6.2 on Windows XP. When I run this script, the output image is filled with a text message; I suppose this message is the function definition of clusplot. [test.py]

    #!/usr/bin/env python
    # -*- mode: python -*- -*- coding: utf-8 -*-
    import rpy2.robjects as ro
    r = ro.r

    # read from csv file
    dataf = r('read.csv("test.csv", header=T, row.names="name")')

    # k-means
    r.library('cluster')
    k = 2
    cluster = r.kmeans(r.cmdscale(r.dist(dataf)), k)

    # plot
    r.jpeg('output.jpg')
    r.clusplot(r.pam(dataf, k))
    r('dev.off()')

[test.csv]

    name,a,b,c,d,e,f,g,e,h,I,j,k
    x1,1421,99.4,19.5,241.4,103.7,18.8,13.4,4.8,76.3,535.6,28.6,10.3
    x2,1495,97.8,22.5,263.3,160.3,9.1,13.7,4.3,93.8,568,33.3,10.4
    x3,2649,95.8,14.6,198.6,94.6,15.9,11.6,11.7,85,521.5,52.7,8.71
    x4,3251,100.2,27.5,240.9,121,28,13.3,18.9,99.1,336.1,5.1,3.03
    x5,2705,100.3,15.3,157.1,95.3,23.4,7.5,17,87.9,366.8,12.1,3.59
    x6,3157,100.3,12.4,164,97.1,10.2,8.8,17.4,98.4,418.5,24.2,4.45
    x7,2045,104.4,25.3,246.3,131,16.6,14,19.1,96.9,584.2,7.8,6.73
    x8,2228,99.1,21.7,246.9,112.2,23.3,15.1,5.3,88.4,415.5,54.2,4.03
    x9,2037,100.1,30,296.6,150.7,31.5,15.4,17.8,93.1,346.8,6.1,3.47
    x10,2336,99.7,17.6,210.8,116.5,21.5,12.6,10.1,69.9,411,63.9,16.5
    x11,1264,101.8,29.3,256.1,126.2,14.3,14.8,5.4,94,540.1,28.5,7.46
    x12,1566,98.8,23.7,285.3,128.6,15.1,15.5,8.5,91.5,549.3,59.2,10.5
    x13,2210,98.8,28.3,234.4,143.2,17.3,12.7,11.2,95.6,492.1,53.5,8.6

But when I use the R console directly (not rpy2), there is no text-overlap problem. Can anyone help me? Answer: I have solved this problem myself. The R plot command fills in the plot title automatically when one is not given, and the R console and rpy2 behave differently in this auto-completion. (I think it is a kind of bug in rpy2.) So set the title explicitly with the `main` argument, like this:

    # plot
    r.jpeg('output.jpg')
    r.clusplot(r.pam(dataf, k), main="result")
    r('dev.off()')
How do closures work in runpy? Question: I get unexpected behaviour when I try to run methods defined in a file loaded using the [runpy](http://docs.python.org/library/runpy.html) module: the methods do not see any variables (including imported modules) defined outside of that method. Here is how I am doing it:

    #test.py
    import runpy
    env = runpy.run_path('test', {'y': 'world'})
    env['fn']()

    #test
    import re
    print(re.compile(r'^hello', re.IGNORECASE).sub('', "hello world"))
    x = "hello"
    print(x)
    print(y)

    def fn():
        try:
            print(re.compile(r'^hello', re.IGNORECASE).sub('', "hello world"))
        except:
            print("No re")
        try:
            print(x)
        except:
            print("No x")
        try:
            print(y)
        except:
            print("No y")

My expected output of test.py would be:

    world
    hello
    world
    world
    hello
    world

because `fn` would form a closure over `re`, `x` and `y`. However, instead I get:

    world
    hello
    world
    No re
    None
    None

It looks like `re` isn't defined within `fn`, even though it should be with normal closure behaviour. `x` and `y` are even stranger, because they appear to be defined but set to `None`. Why is this, and how do closures work with runpy? How can I achieve normal behaviour, such that `fn` can 'see' outside variables? Answer: OK, this is a curiosity of the way Python handles modules, which I know about but don't fully understand. I've come across it while working on IPython, where it's explained in [a comment](https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L894). When Python runs a module, it produces a module object, the attributes of which are the global names in the module. When the module falls out of scope and is being destroyed, these attributes are set to `None`. Code which was defined in a function then sees these as its globals, as you found. You can demonstrate this by adding `def g(): return globals()` to your file, then calling `env["g"]()`. I don't know if there's a way round this with `runpy`. IPython uses some complicated code to reuse a module object for running other files, caching copies of its `__dict__` to keep the references therein alive. Have a look at the [`magic_run` function](https://github.com/ipython/ipython/blob/master/IPython/core/magic.py#L1445) if you're interested.
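One workaround sketch (my own suggestion, not from the answer): rebuild the function so its globals point at the dict that `run_path` returned, which you keep alive, instead of at the cleared dict of the destroyed temporary module:

    import runpy, types

    env = runpy.run_path('test', {'y': 'world'})
    fn = env['fn']
    # same code object, but bound to the surviving copy of the module globals
    fn_alive = types.FunctionType(fn.__code__, env, fn.__name__)
    fn_alive()   # now prints the values of re, x and y as expected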
django-haystack problem when saving query in session Question: I want to save the user's input in my view. I don't know how to do it by redefining the SearchView, so I did this:

    request.session['q'] = request.GET.get('q')

    from haystack.views import SearchView
    search_view = SearchView(template=template_name)
    return search_view(request)

but I got this error:

    Traceback (most recent call last):
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 280, in run
        self.result = application(self.environ, self.start_response)
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/core/servers/basehttp.py", line 674, in __call__
        return self.application(environ, start_response)
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 245, in __call__
        response = middleware_method(request, response)
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/contrib/sessions/middleware.py", line 36, in process_response
        request.session.save()
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/contrib/sessions/backends/db.py", line 57, in save
        session_data = self.encode(self._get_session(no_load=must_create)),
      File "/home/usu/mysites/gondor/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py", line 88, in encode
        pickled = pickle.dumps(session_dict, pickle.HIGHEST_PROTOCOL)
      File "/usr/local/lib/python2.7/dist-packages/haystack/models.py", line 175, in __getstate__
        del(ret_dict['searchsite'])
    KeyError: 'searchsite'

If I remove the line

    request.session['q'] = request.GET.get('q')

the search works fine, so I don't know if there is a problem with haystack or whether I'm doing something wrong. Thanks. Answer: Finally I found the problem. In another view I was saving the result of a `SearchQuerySet` in the session. The error wasn't raised at the time I stored that object; it only surfaced later, when I put something else into the session (which forced the whole session to be pickled again). This was very difficult to track down.
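The takeaway, as a short sketch inside a view (the names are illustrative): keep only plain, picklable values in the session, never haystack result objects, and re-run or re-hydrate the query when you need it again:

    # don't do this: SearchQuerySet results drag unpicklable state into the session
    # request.session['results'] = SearchQuerySet().filter(content=q)

    # store primitives instead
    request.session['q'] = q
    request.session['result_pks'] = [r.pk for r in SearchQuerySet().filter(content=q)[:20]]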
Add django model manager code-completion to Komodo Question: I have been using ActiveState Komodo for a while, and while most of the code completion is spot on, it lacks the completion provided by Django's model manager. I have included the Django directory in my `PYTHONPATH` and get most of the code completion, the notable exception being the models. Assuming I have a model `users`, I would expect the code `users.objects.` to show autocomplete options such as `all()`, `count()`, `filter()` etc.; however, these are added by the model's manager, which does so in a seemingly abnormal way. I am wondering if I can 'force' Komodo to pick up the models. The model manager looks to be included by the following code (taken from manager.py):

    def ensure_default_manager(sender, **kwargs):
        """
        Ensures that a Model subclass contains a default manager and sets the
        _default_manager attribute on the class. Also sets up the _base_manager
        points to a plain Manager instance (which could be the same as
        _default_manager if it's not a subclass of Manager).
        """
        cls = sender
        if cls._meta.abstract:
            return
        if not getattr(cls, '_default_manager', None):
            # Create the default manager, if needed.
            try:
                cls._meta.get_field('objects')
                raise ValueError("Model %s must specify a custom Manager, because it has a field named 'objects'" % cls.__name__)
            except FieldDoesNotExist:
                pass
            cls.add_to_class('objects', Manager())
            cls._base_manager = cls.objects
        ...

Specifically the last two lines. Is there any way to tell Komodo that `<model>.objects = Manager()` so the proper code completion is shown? Answer: Probably the easiest way to get this to work is to add the following to the top of models.py:

    from django.db.models import manager

and then under each model add:

    objects = manager.Manager()

so that, for example, the following:

    class Site(models.Model):
        name = models.CharField(max_length=200)
        prefix = models.CharField(max_length=1)
        secret = models.CharField(max_length=255)

        def __unicode__(self):
            return self.name

becomes:

    class Site(models.Model):
        name = models.CharField(max_length=200)
        prefix = models.CharField(max_length=1)
        secret = models.CharField(max_length=255)
        objects = manager.Manager()

        def __unicode__(self):
            return self.name

This is how you would (explicitly) set your own model manager, and by explicitly setting the model manager (to the default), Komodo picks up the code completion perfectly. Hopefully this will help someone :-)
Using PIL to fill empty image space with nearby colors (aka inpainting) Question: I create an image with PIL: ![example image](http://i.stack.imgur.com/uEPqc.png) I need to fill in the empty space (depicted as black). I could easily fill it with a static color, but what I'd like to do is fill the pixels in with nearby colors. For example, the first pixel after the border might be a Gaussian blur of the filled-in pixels. Or perhaps a push-pull type algorithm as described in [The Lumigraph, Gortler, et al.](http://research.microsoft.com/apps/pubs/default.aspx?id=68168). I need something that is not too slow, because I have to run this on many images. I have access to other libraries, like numpy, and you can assume that I know the borders or a mask of the outside region or inside region. Any suggestions on how to approach this? **UPDATE:** As suggested by belisarius, [opencv's inpaint](http://opencv.willowgarage.com/documentation/cpp/miscellaneous_image_transformations.html#cv-inpaint) method is perfect for this. Here's some Python code that uses opencv to achieve what I wanted:

    import Image, ImageDraw, cv  # could I do it without the Image module?

    im = Image.open("u7XVL.png")
    pix = im.load()

    # create a mask of the background colors
    # this is slow, but easy for example purposes
    mask = Image.new('L', im.size)
    maskdraw = ImageDraw.Draw(mask)
    for x in range(im.size[0]):
        for y in range(im.size[1]):
            if pix[(x,y)] == (0,0,0):
                maskdraw.point((x,y), 255)

    # convert image and mask to opencv format
    cv_im = cv.CreateImageHeader(im.size, cv.IPL_DEPTH_8U, 3)
    cv.SetData(cv_im, im.tostring())
    cv_mask = cv.CreateImageHeader(mask.size, cv.IPL_DEPTH_8U, 1)
    cv.SetData(cv_mask, mask.tostring())

    # do the inpainting
    cv_painted_im = cv.CloneImage(cv_im)
    cv.Inpaint(cv_im, cv_mask, cv_painted_im, 3, cv.CV_INPAINT_NS)

    # convert back to PIL
    painted_im = Image.fromstring("RGB", cv.GetSize(cv_painted_im), cv_painted_im.tostring())
    painted_im.show()

And the resulting image: ![painted image](http://i.stack.imgur.com/Sf7fv.png) Answer: A method with nice results is [**Navier-Stokes Image Restoration**](http://www.dtic.upf.edu/~mbertalmio/final-cvpr.pdf). I know OpenCV has it; I don't know about PIL. Your example: ![enter image description here](http://i.stack.imgur.com/u7XVL.png) ![enter image description here](http://i.stack.imgur.com/IXarX.png) I did it with Mathematica. **Edit** As per your request, the code is:

    i = Import["http://i.stack.imgur.com/uEPqc.png"];
    Inpaint[i, ColorNegate@Binarize@i, Method -> "NavierStokes"]

The `ColorNegate@...` part creates the replacement mask. The filling is done with just the `Inpaint[]` command.
Repeatedly extract a line between two delimiters in a text file, Python Question: I have a text file in the following format:

    DELIMITER1
    extract me
    extract me
    extract me
    DELIMITER2

I'd like to extract every block of `extract me`s between DELIMITER1 and DELIMITER2 in the .txt file. This is my current, non-working code:

    import re

    def GetTheSentences(file):
        fileContents = open(file)
        start_rx = re.compile('DELIMITER')
        end_rx = re.compile('DELIMITER2')
        line_iterator = iter(fileContents)
        start = False
        for line in line_iterator:
            if re.findall(start_rx, line):
                start = True
                break
        while start:
            next_line = next(line_iterator)
            if re.findall(end_rx, next_line):
                break
            print next_line
            continue
            line_iterator.next()

Any ideas? Answer: You can simplify this to one regular expression using `re.S`, the [DOTALL flag](http://docs.python.org/library/re.html#re.DOTALL):

    import re

    def GetTheSentences(infile):
        with open(infile) as fp:
            for result in re.findall('DELIMITER1(.*?)DELIMITER2', fp.read(), re.S):
                print result
    # extract me
    # extract me
    # extract me

This also makes use of the non-greedy operator `.*?`, so multiple non-overlapping blocks of DELIMITER1-DELIMITER2 pairs will all be found.
Create PDF with (resized) PNG images using Pycairo - rescaling Surface issue Question: I have some PNG image links that I want to download, "convert to thumbnails" and save to PDF using Python and Cairo. Now, I have working code, but I don't know how to control the image size on paper. Is there a way to resize a PyCairo Surface to the dimensions I want (which happen to be smaller than the original)? I want the original pixels to be "shrunk" to a higher resolution (on paper). Also, I tried the `Image.rescale()` function from PIL, but it gives me back a 20x20 pixel output (out of a 200x200 pixel original image, which is not the banner example in the code). What I want is a 200x200 pixel image plotted inside a 20x20 mm square on paper (instead of the 200x200 mm square I am getting now). My current code is:

    #!/usr/bin/python

    import cairo, urllib, StringIO, Image  # could I do it without the Image module?

    paper_width = 210
    paper_height = 297
    margin = 20

    point_to_milimeter = 72/25.4
    pdfname = "out.pdf"
    pdf = cairo.PDFSurface(pdfname, paper_width*point_to_milimeter, paper_height*point_to_milimeter)
    cr = cairo.Context(pdf)
    cr.scale(point_to_milimeter, point_to_milimeter)

    f = urllib.urlopen("http://cairographics.org/cairo-banner.png")
    i = StringIO.StringIO(f.read())
    im = Image.open(i)

    # are these StringIO operations really necessary?
    imagebuffer = StringIO.StringIO()
    im.save(imagebuffer, format="PNG")
    imagebuffer.seek(0)
    imagesurface = cairo.ImageSurface.create_from_png(imagebuffer)

    ### EDIT: best answer from Jeremy, and an alternate answer from mine:
    best_answer = True  # put False to use my own alternate answer
    if best_answer:
        cr.save()
        cr.scale(0.5, 0.5)
        cr.set_source_surface(imagesurface, margin, margin)
        cr.paint()
        cr.restore()
    else:
        cr.set_source_surface(imagesurface, margin, margin)
        pattern = cr.get_source()
        scalematrix = cairo.Matrix()  # this can also be used to shear, rotate, etc.
        scalematrix.scale(2, 2)       # matrix numbers seem to be the opposite - the greater the number, the smaller the source
        scalematrix.translate(-margin, -margin)  # this is necessary, don't ask me why - negative values!!
        pattern.set_matrix(scalematrix)
        cr.paint()

    pdf.show_page()

Note that the beautiful Cairo banner does not even fit the page... The ideal result would be that I could control the width and height of this image in user-space units (millimetres, in this case), to create a nice header image, for example. Thanks for reading and for any help or comment!! Answer: Try scaling the context when you draw the image, e.g.:

    cr.save()           # push a new context onto the stack
    cr.scale(0.5, 0.5)  # scale the context by (x, y)
    cr.set_source_surface(imagesurface, margin, margin)
    cr.paint()
    cr.restore()        # pop the context

See <http://cairographics.org/documentation/pycairo/2/reference/context.html> for more details.
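To hit an exact on-paper size, you can derive the scale factors from the surface's pixel dimensions and the target size in user units (millimetres here, since the context is already scaled to mm). A sketch, reusing `cr`, `imagesurface` and `margin` from the question:

    target_mm = 20.0                      # desired size on paper: 20x20 mm
    px_w = imagesurface.get_width()       # e.g. 200 pixels
    px_h = imagesurface.get_height()

    cr.save()
    cr.translate(margin, margin)          # position the image's top-left corner
    cr.scale(target_mm / px_w, target_mm / px_h)  # map the pixel grid onto 20x20 mm
    cr.set_source_surface(imagesurface, 0, 0)
    cr.paint()
    cr.restore()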
PyQt4: Interrupt QThread exec when GUI is closed Question: I have a PyQt4 GUI that has three threads. One thread is a data source: it provides numpy arrays of data. The next thread is a calculation thread: it takes the numpy array (or multiple numpy arrays) via a Python `Queue.Queue` and calculates what will be displayed in the GUI. The calculator then signals the GUI thread (the main thread) via a custom signal, and this tells the GUI to update the matplotlib figure that's displayed. I'm using the "proper" method described [here](http://labs.qt.nokia.com/2010/06/17/youre-doing-it-wrong/) and [here](http://labs.qt.nokia.com/2006/12/04/threading-without-the-headache/). So here's the general layout; I tried to shorten my typing time and used comments instead of the actual code in some parts:

    class Source(QtCore.QObject):
        signal_finished = pyqtSignal(...)

        def __init__(self, window):
            self._exiting = False
            self._window = window

        def do_stuff(self):
            # Start complicated data generator
            for data in generator:
                if not self._exiting:
                    # Get data from generator
                    # Do stuff - add data to Queue
                    # Loop ends when generator ends
                else:
                    break
            # Close complicated data generator

        def prepare_exit(self):
            self._exiting = True

    class Calculator(QtCore.QObject):
        signal_finished = pyqtSignal(...)

        def __init__(self, window):
            self._exiting = False
            self._window = window

        def do_stuff(self):
            while not self._exiting:
                # Get stuff from Queue (with timeout)
                # Calculate stuff
                # Emit signal to GUI
                self._window.signal_for_updating.emit(...)

        def prepare_exit(self):
            self._exiting = True

    class GUI(QtCore.QMainWindow):
        signal_for_updating = pyqtSignal(...)
        signal_closing = pyqtSignal(...)

        def __init__(self):
            self.signal_for_updating.connect(self.update_handler,
                                             type=QtCore.Qt.BlockingQueuedConnection)
            # Other normal GUI stuff

        def update_handler(self, ...):
            # Update GUI

        def closeEvent(self, ce):
            self.fileQuit()

        def fileQuit(self):  # Used by a menu I have: File->Quit
            self.signal_closing.emit()  # Is there a builtin signal for this?

    if __name__ == '__main__':
        app = QtCore.QApplication([])
        gui = GUI()
        gui.show()

        source_thread = QtCore.QThread()  # This assumes that run() defaults to calling exec_()
        source = Source(window)
        source.moveToThread(source_thread)
        calc_thread = QtCore.QThread()
        calc = Calculator(window)
        calc.moveToThread(calc_thread)

        gui.signal_closing.connect(source.prepare_exit)
        gui.signal_closing.connect(calc.prepare_exit)

        source_thread.started.connect(source.do_stuff)
        calc_thread.started.connect(calc.do_stuff)

        source.signal_finished.connect(source_thread.quit)
        calc.signal_finished.connect(calc_thread.quit)

        source_thread.start()
        calc_thread.start()

        app.exec_()
        source_thread.wait()  # Should I do this?
        calc_thread.wait()  # Should I do this?

So, my problems all occur when I try to close the GUI before the sources are complete; when I let the data generators finish, it closes fine:

* While waiting for the threads, the program hangs. As far as I can tell, this is because the closing signal's connected slots never get run by the other threads' event loops (they're stuck on the "infinitely" running do_stuff method).
* When the calc thread emits the updating signal (a BlockingQueuedConnection signal) right after the GUI closes, it seems to hang. I'm guessing this is because the GUI is already closed and isn't there to accept the emitted signal (judging by the print messages I put in my actual code).

I've been looking through tons of tutorials and documentation, and I just feel like I'm doing something stupid. Is it possible to have an event loop and an "infinitely" running loop that end early... and safely (resources closed properly)? I'm also curious about my BlockingQueuedConnection problem (if my description makes sense); however, that problem is probably fixable with a simple redesign that I'm not seeing. Thanks for any help; let me know what doesn't make sense. If it's needed, I can also add more to the code instead of just doing comments (I was kind of hoping that I had done something dumb and it wouldn't be needed).

**Edit:** I found somewhat of a workaround; however, I think I'm just lucky that it has worked every time so far. If I make the prepare_exit and the thread.quit connections DirectConnections, the function calls run in the main thread and the program does not hang.

I also figured I should summarize some questions:

1. **Can a QThread have an event loop (via exec_) and have a long-running loop?**
2. **Does a BlockingQueuedConnection emitter hang if the receiver disconnects the slot (after the signal was emitted, but before it was acknowledged)?**
3. **Should I wait for the QThreads (via thread.wait()) after app.exec_()? Is this needed?**
4. **Is there a Qt-provided signal for when QMainWindow closes, or one from the QApplication?**

**Edit 2/Update on progress:** I have created a runnable example of the problem by adapting [this post](http://stackoverflow.com/questions/6783194/background-thread-with-qthread-in-pyqt/6789205#6789205) to my needs.

    from PyQt4 import QtCore
    import time
    import sys

    class intObject(QtCore.QObject):
        finished = QtCore.pyqtSignal()
        interrupt_signal = QtCore.pyqtSignal()

        def __init__(self):
            QtCore.QObject.__init__(self)
            print "__init__ of interrupt Thread: %d" % QtCore.QThread.currentThreadId()
            QtCore.QTimer.singleShot(4000, self.send_interrupt)

        def send_interrupt(self):
            print "send_interrupt Thread: %d" % QtCore.QThread.currentThreadId()
            self.interrupt_signal.emit()
            self.finished.emit()

    class SomeObject(QtCore.QObject):
        finished = QtCore.pyqtSignal()

        def __init__(self):
            QtCore.QObject.__init__(self)
            print "__init__ of obj Thread: %d" % QtCore.QThread.currentThreadId()
            self._exiting = False

        def interrupt(self):
            print "Running interrupt"
            print "interrupt Thread: %d" % QtCore.QThread.currentThreadId()
            self._exiting = True

        def longRunning(self):
            print "longRunning Thread: %d" % QtCore.QThread.currentThreadId()
            print "Running longRunning"
            count = 0
            while count < 5 and not self._exiting:
                time.sleep(2)
                print "Increasing"
                count += 1
            if self._exiting:
                print "The interrupt ran before longRunning was done"
            self.finished.emit()

    class MyThread(QtCore.QThread):
        def run(self):
            self.exec_()

    def usingMoveToThread():
        app = QtCore.QCoreApplication([])
        print "Main Thread: %d" % QtCore.QThread.currentThreadId()

        # Simulates user closing the QMainWindow
        intobjThread = MyThread()
        intobj = intObject()
        intobj.moveToThread(intobjThread)

        # Simulates a data source thread
        objThread = MyThread()
        obj = SomeObject()
        obj.moveToThread(objThread)

        obj.finished.connect(objThread.quit)
        intobj.finished.connect(intobjThread.quit)
        objThread.started.connect(obj.longRunning)
        objThread.finished.connect(app.exit)
        #intobj.interrupt_signal.connect(obj.interrupt, type=QtCore.Qt.DirectConnection)
        intobj.interrupt_signal.connect(obj.interrupt, type=QtCore.Qt.QueuedConnection)
        objThread.start()
        intobjThread.start()
        sys.exit(app.exec_())

    if __name__ == "__main__":
        usingMoveToThread()

You can see, by running this code and swapping between the two connection types on interrupt_signal, that the DirectConnection works because it runs in a separate thread. **Proper or bad practice?** I feel like it is bad practice, because I am quickly changing something that another thread is reading. The QueuedConnection does not work because the event loop must wait until longRunning is finished before it gets around to the interrupt signal, which is not what I want.

**Edit 3:** I remembered reading that `QtCore.QCoreApplication.processEvents` can be used in cases with long-running calculations, but everything I read said don't use it unless you know what you are doing. Well, here is what I think it does (in a sense), and using it seems to work: when you call processEvents, the caller's event loop halts its current operation and continues processing the pending events in the event loop, eventually resuming the long calculation event. Other recommendations, like in [this email](http://www.mail-archive.com/pyqt@riverbankcomputing.com/msg17349.html), suggest timers or putting the work in other threads; I think this just makes my job even more complicated, especially since I've proven (I think) that timers don't work in my case. If processEvents fixes all my problems, I will answer my own question later.

Answer: After looking through the mailing list archives, google searching, stack overflow searching, and thinking about what my question really was and what its purpose was, I came up with this answer: the short answer is to use processEvents(). The long answer is that all my searching resulted in people saying "be very careful using processEvents()" and "avoid it at all costs". I think it should be avoided if you are using it because you are not seeing results in your GUI main thread fast enough. In that case, instead of using processEvents(), the non-UI work being done in the main thread should be moved to another thread (as my design has done).

The reason my specific problem needs processEvents() is that I want my QThreads to have two-way communication with the GUI thread, which means that my QThreads have to have an event loop (exec_()) to accept signals from the GUI. This two-way communication is what I meant earlier by "the purpose of the question". Since my QThreads are meant to run "concurrently" with the main GUI thread AND they need to update the GUI and be "updated" by the GUI (the exit/closing signal in my first example), they need processEvents(). I think this is what processEvents() is for.

My understanding of processEvents(), as described above, is that when called in a QThread it will block/pause the current event (my longRunning method) while it continues on through the pending events in that QThread's event loop. After going through the pending events, the event loop wraps back around and continues running the event that it paused (my longRunning method). I know I didn't answer all my questions, but the main one is answered.

**PLEASE CORRECT ME IF I AM WRONG IN ANY WAY**

**Edit:** Please read Ed's answer and the comments.
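A minimal sketch of the fix the answer settles on: calling `processEvents()` inside the long-running loop, so the worker thread's own event loop can deliver the queued interrupt signal. Only the loop body changes relative to the runnable example above:

    def longRunning(self):
        count = 0
        while count < 5 and not self._exiting:
            time.sleep(2)
            count += 1
            # let this thread's event loop deliver pending queued signals,
            # e.g. interrupt_signal -> self.interrupt, which sets _exiting
            QtCore.QCoreApplication.processEvents()
        self.finished.emit()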
PYTHON: How to take a .txt file as input from user using raw_input in Python and then read that file row wise? Question: I want to take a .txt file as input from user through `raw_input`. Once the input is taken, I want the txt file name given by user to be accessed and the contents from it written into another file (each in next row). I am having a problem with how to connect a and b above or how to link raw input with reading of file. I need help...the code is as follows (but it is erroneous :S). import sys,os,csv x = raw_input("Enter name of file to be written row wise:") ui = "x" + txt for ui in x: data = open("ui").readlines() outfile = open("myfile.csv","w") out = csv.writer(outfile) for row in data: row = row.strip().split(';') if row: for subrow in row: subrow = subrow.strip().split() if subrow: out.writerow(subrow) outfile.close() I get the error: > cannot concatenate 'str' and 'list' objects Answer: for row in data: row = row.strip().split(';') The fact that you perform `split(';')` has for consequence that a row (you should write: line) split according to ';' gives always a not empty list, even if it is an empty line, and even after having been stripped with **strip()** : `''.split(';')` gives `['']` . So your following condition `if row:` is useless. That means that your code is equivalent to: for row in data: row = row.strip().split(';') for subrow in row: subrow = subrow.split() if subrow: out.writerow(subrow) and then to: for row in data: for subrow in row.strip().split(';'): subrow = subrow.split() if subrow: out.writerow(subrow) . Moreover , the fact that you use **split()** on **subrow** present in the list **row.strip().split(';')** eliminates all the blanks before and after each of the words present in **subrow**. So the first `strip()` in `row.strip().split(';')` is useless too. Your code is then equivalent to: for row in data: for subrow in row.split(';'): subrow = subrow.split() if subrow: out.writerow(subrow) Now , `subrow.split()` can produce a void list when subrow is only blanks, because `split()` without argument has its special algorithm. So the instruction `if subrow` is usefull. . In fact, what your code does is, after having read the content of such a file: Blackcurrant, Redcurrant ; Orange ; Blueberry Pear;Chestnut; Lemon Lime, Grapefruit Apple;Apricot ; Pineapple, Fig; Mulberry, Hedge Apple to record another file like that: Blackcurrant Redcurrant Orange Blueberry Pear Chestnut Lemon Lime Grapefruit Apple Apricot Pineapple Fig Mulberry Hedge Apple I prefer the following code to do that: filename = raw_input("Enter name of file to be written row wise:") + '.txt' filepath = 'I:\\' + filename with open(filepath) as handler,open("myfile.csv","wb") as outfile: out = csv.writer(outfile) for row in handler: gen = ( subrow.split() for subrow in row.split(';') ) out.writerow([x for x in gen if x]) del out . This code will always run, even for files extremely huge whose content can't be held by the memory, because the lines of the file are read one after the other. In case the file isn't enormous like that, it is possible to proceed like you did, with **readlines()** : with open(filepath) as handler: data = handler.readlines() with open("myfile.csv","wb") as outfile: out = csv.writer(outfile) for row in data: gen = ( subrow.split() for subrow in row.split(';') ) out.writerow([x for x in gen if x]) del out But there is no particular interest to proceed so, you can do `for row in handler` as well. . 
Personally, I think it would be better to use writerows(): filename = raw_input("Enter name of file to be written row wise:") + '.txt' filepath = 'I:\\' + filename with open(filepath) as handler,open("myfile.csv","wb") as outfile: out = csv.writer(outfile) gen = ( x for row in handler for x in (subrow.split() for subrow in row.split(';')) ) out.writerows([x for x in gen if x]) del out . I end this answer by informing you that code employing a regex would be far more efficient: import csv, re regx = re.compile('[ ;\r\n]+') filename = raw_input("Enter name of file to be written row wise:") + '.txt' filepath = 'I:\\' + filename with open(filepath) as handler,open("myfile.txt","w") as outfile: outfile.write('\n'.join(x for x in regx.split(handler.read()) if x)) ## Edit 1 handler = open(filepath) outfile = open("myfile.txt","wb") out = csv.writer(outfile) for row in handler: gen = ( subrow.split() for subrow in row.split(';') ) out.writerow([x for x in gen if x]) del out outfile.close() handler.close() or import csv, re regx = re.compile('[ ;\r\n]+') filename = raw_input("Enter name of file to be written row wise:") + '.txt' filepath = 'I:\\' + filename handler = open(filepath) outfile = open("myfile.txt","w") outfile.write('\n'.join(x for x in regx.split(handler.read()) if x)) outfile.close() handler.close()
Windows - running .py directly vs running python blah.py behaves differently Question: I have a python script that uses subprocess: import subprocess print "Running stuff" subprocess.check_call(["do_stuff.bat"]) print "Stuff run" If this was named blah.py, and I run (from a command prompt): python blah.py I will get the output from do_stuff.bat (or whatever I run). If this is run as: blah.py Then I do not get output from do_stuff.bat, only the print statements. So far I've seen this on Windows Server 2003. Python version 2.5.2 (stuck there for various reasons). Looking at the associated file type action I see: Python.File="C:\Python25\python.exe" "%1" %* So can anyone explain the difference? Answer: I had a similar problem using threads, but all of my code was in Python. Threads could not write to standard output using print; only the main thread could do that. I used something like this import sys sys.stdout.write("this was printed by thread") I know this probably won't help you with the bat file...
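If the goal is simply to see what do_stuff.bat prints regardless of how blah.py was launched, one workaround sketch (not a diagnosis of the file-association issue itself) is to capture the batch file's output explicitly and print it from the script:

    from subprocess import Popen, PIPE

    # Capture the batch file's output instead of relying on the
    # inherited console handles.
    p = Popen(["do_stuff.bat"], stdout=PIPE, stderr=PIPE)
    out, err = p.communicate()
    print "Output:", out
    print "Errors:", err

This works on Python 2.5 as well, since Popen and communicate() are available there.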
Easy_install and Pip don't work Question: Easy_install and Pip don't work anymore on python 2.7, when I try to do: sudo easy_install pip I get: Traceback (most recent call last): File "/usr/bin/easy_install", line 5, in <module> from pkg_resources import load_entry_point File "/usr/bin/lib/python2.7/site-packages/distribute-0.6.19-py2.7.egg/pkg_resources.py", line 2713, in <module> parse_requirements(__requires__), Environment() File "/usr/bin/lib/python2.7/site-packages/distribute-0.6.19-py2.7.egg/pkg_resources.py", line 584, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: distribute==0.6.15 And when I try: sudo pip install [package] I get: Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/bin/lib/python2.7/site-packages/distribute-0.6.19-py2.7.egg/pkg_resources.py", line 2713, in <module> parse_requirements(__requires__), Environment() File "/usr/bin/lib/python2.7/site-packages/distribute-0.6.19-py2.7.egg/pkg_resources.py", line 584, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==0.8.2 I've already installed both of them (and yes, deleted them first), but no result... Thanks! [(I tried already this post)](http://stackoverflow.com/questions/5585473/upgraded-python-on- snowleopard-using-homebrew-now-pip-and-easy-install-dont-work) Answer: I had this issue where python's distribute package wasn't installed for some reason. After following the instructions on [python-distribute](https://web.archive.org/web/20100309145033/http://guide.python- distribute.org/installation.html), I got it working. Install the distribute package as follows: $ wget https://web.archive.org/web/20100225231201/http://python-distribute.org/distribute_setup.py $ python distribute_setup.py **EDIT:** ~~<http://python-distribute.org/distribute_setup.py>~~ no longer works. Hopefully this will resolve your problem with running $ sudo easy_install Happy coding!
Python Read Formatted String Question: I have a file with a number of lines formatted with the following syntax: FIELD POSITION DATA TYPE ------------------------------ COOP ID 1-6 Character LATITUDE 8-15 Real LONGITUDE 17-25 Real ELEVATION 27-32 Real STATE 34-35 Character NAME 37-66 Character COMPONENT1 68-73 Character COMPONENT2 75-80 Character COMPONENT3 82-87 Character UTC OFFSET 89-90 Integer The data is all ASCII-formatted. An example of a line is: 011084 31.0581 -87.0547 26.0 AL BREWTON 3 SSE ------ ------ ------ +6 My current thought is that I'd like to read the file one line at a time and _somehow_ have each line broken up into a dictionary so I can refer to the components. Is there some module that does this in Python, or some other clean way? Thanks! Answer: **EDIT**: You can still use the struct module: See the [struct module](http://docs.python.org/library/struct.html) documentation. Looks to me like you want to use `struct.unpack()` What you want is probably something like: import struct with open("filename.txt", "r") as f: for line in f: (coop_id, lat, lon, elev, state, name, c1, c2, c3, utc_offset ) = struct.unpack("6sx8sx9sx6sx2sx30sx6sx6sx6sx2s", line.strip()) (lat, lon, elev) = map(float, (lat, lon, elev)) utc_offset = int(utc_offset)
Can't access collection from the shell - SyntaxError: missing ; before statement (shell):1 Question: I wrote a script that uses mongoimport to load csv files into mongodb. When I run this for two similar csv files (of the same type) both upload fine; however, I can only access one of them from the mongodb shell. Here is a transcript of a mongodb shell session: > show collections 3mLgQAYJCq6_20110802 eTByWMY7zO6_20110802NonUniCode system.indexes > db.3mLgQAYJCq6_20110802 Thu Aug 18 18:44:49 SyntaxError: missing ; before statement (shell):1 > db.eTByWMY7zO6_20110802NonUniCode vh.eTByWMY7zO6_20110802NonUniCode However, I can access both collections from a python script and using mongoexport. I suspect there is a problem with the 3mLgQAYJCq6_20110802 file but I don't know where to start looking. Any ideas? Answer: This works for me when my collection names include special characters: db["3mLgQAYJCq6_20110802"].findOne(); The dot syntax only works for collection names that are valid JavaScript identifiers; a name that starts with a digit (like 3mLgQAYJCq6_20110802) trips up the shell's parser, which is why you see the SyntaxError and why the bracket form is needed.
NotImplementedException in Silverlight and IronPython Question: i'm hosting IronPython Scripts in one Silverlight Application and i want to run the script and get one System.Windows.Controls.TextBlock object. so i use this ironPython code: import clr clr.AddReferenceByName("System.Windows.Controls, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35") from System.Windows.Controls import * tb = TextBlock() i'm being able to add the reference, but, when i import System.Windows.Controls i get a System.NotImplementedException. The same happens if i try with "import wpf" i'm using Silverlight 4 and IronPython 2.7.1 beta2 and this is the code to run the script: Dim engine = IronPython.Hosting.Python.CreateEngine Dim scope = engine.CreateScope() Dim source = engine.CreateScriptSourceFromString(CodeTB.Text) source.Execute(scope) ResultLB.Items.Add(scope.GetVariable("hello")) If scope.ContainsVariable("tb") Then GuiStack.Children.Add(scope.GetVariable("tb")) End If Here is the Stack Trace of the exception: en Microsoft.Scripting.PlatformAdaptationLayer.FileExists(String path) en IronPython.Runtime.Importer.LoadModuleFromSource(CodeContext context, String name, String path) en IronPython.Runtime.Importer.LoadPackageFromSource(CodeContext context, String name, String path) en IronPython.Runtime.Importer.LoadFromDisk(CodeContext context, String name, String fullName, String str) en IronPython.Runtime.Importer.ImportFromPathHook(CodeContext context, String name, String fullName, List path, Func`5 defaultLoader) en IronPython.Runtime.Importer.ImportFromPath(CodeContext context, String name, String fullName, List path) en IronPython.Runtime.Importer.ImportTopAbsolute(CodeContext context, String name) en IronPython.Runtime.Importer.ImportModule(CodeContext context, Object globals, String modName, Boolean bottom, Int32 level) en IronPython.Modules.Builtin.__import__(CodeContext context, String name, Object globals, Object locals, Object fromlist, Int32 level) en Microsoft.Scripting.Interpreter.FuncCallInstruction`7.Run(InterpretedFrame frame) en Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) en Microsoft.Scripting.Interpreter.LightLambda.Run7[T0,T1,T2,T3,T4,T5,T6,TRet](T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6) en IronPython.Runtime.Importer.ImportLightThrow(CodeContext context, String fullName, PythonTuple from, Int32 level) en IronPython.Runtime.Importer.Import(CodeContext context, String fullName, PythonTuple from, Int32 level) en IronPython.Runtime.Operations.PythonOps.ImportStar(CodeContext context, String fullName, Int32 level) en Microsoft.Scripting.Interpreter.ActionCallInstruction`3.Run(InterpretedFrame frame) en Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) en Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1) en IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx) en IronPython.Compiler.PythonScriptCode.Run(Scope scope) en IronPython.Compiler.RuntimeScriptCode.InvokeTarget(Scope scope) en IronPython.Compiler.RuntimeScriptCode.Run(Scope scope) en Microsoft.Scripting.SourceUnit.Execute(Scope scope, ErrorSink errorSink) en Microsoft.Scripting.SourceUnit.Execute(Scope scope) en Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope) en TestApp2.MainPage.ExecuteButton_Click(Object sender, RoutedEventArgs e) en System.Windows.Controls.Primitives.ButtonBase.OnClick() en System.Windows.Controls.Button.OnClick() en 
System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e) en System.Windows.Controls.Control.OnMouseLeftButtonUp(Control ctrl, EventArgs e) en MS.Internal.JoltHelper.FireEvent(IntPtr unmanagedObj, IntPtr unmanagedObjArgs, Int32 argsTypeIndex, Int32 actualArgsTypeIndex, String eventName) And here is all the [source code](http://bit.ly/qIfma3) Thank you for everything :) Answer: It should work without any `AddReference`: import clr from System.Windows.Controls import TextBlock tb = TextBlock()
Python dictionary get method in assignment Question: All, I'm looping over a dictionary and counting the values that occur. To do this, I'm using the get method in the assignment statement for another dictionary. This returns a syntax error "can't assign to function call" counts = {} mydict = {'a':[1,2,5], 'b': [1,2,10]} for key,value in mydict.iteritems(): counts(value[1]) = counts.get(value[1], 0) + 1 Why would the assignment try to point to the function, rather than the return value? Answer: counts = {} mydict = {'a':[1,2,5], 'b': [1,2,10]} for key,value in mydict.iteritems(): counts[value[1]] = counts.get(value[1], 0) + 1 You need brackets, not parentheses, to get an item from a dictionary. Also, you're doing this the hard way. from collections import defaultdict # automatically start each count at zero counts = defaultdict(int) # we only need the values, not the keys for value in mydict.itervalues(): # add one to the count for this item counts[value[1]] += 1 or # only on Python 2.7 or newer from collections import Counter counts = Counter(value[1] for value in mydict.itervalues())
In Python, how can I get the file system of a given file path Question: In python, given a directory or file path like /usr/local, I need to get the file system where it's available. In some systems it could be / (root) itself and in some others it could be /usr. I tried os.statvfs; it doesn't help. Do I have to run the df command with the path name and extract the file system from the output? Is there a better solution? It's for linux/unix platforms only. Thanks Answer: Here is a slightly modified version of a recipe found [here](http://stackoverflow.com/questions/1138383/python-get-mount-point-on- windows-or-linux). `os.path.realpath` was added so symlinks are handled correctly. import os def getmount(path): path = os.path.realpath(os.path.abspath(path)) while path != os.path.sep: if os.path.ismount(path): return path path = os.path.abspath(os.path.join(path, os.pardir)) return path
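For example, a quick check (the exact result depends on how the machine's filesystems are laid out):

    print getmount('/usr/local')   # e.g. '/usr' if /usr is its own mount, otherwise '/'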
python "help" function: printing docstrings Question: Is there an option to print the output of help('myfun'). The behaviour I'm seeing is that output is printed to std.out and the script waits for user input (i.e. type 'q' to continue). There must be a setting to set this to just dump docstrings. Alternatively, if I could just dump the docstring PLUS the "def f(args):" line that would be fine too. Searching for "python help function" is comical. :) Maybe I'm missing some nice pydoc page somewhere out there that explains it all? Answer: To get exactly the help that's printed by `help(str)` into the variable `strhelp`: import pydoc strhelp = pydoc.render_doc(str, "Help on %s") Of course you can then easily print it without paging, etc.
Appending various strings between two delimiters in one string. python Question: I've been stuck here a day. So I thought to ask the experts. I am reading contents from a file which has data in the form |something| |something else| |something_1 something_2 someting_3| |something blah blah| .. and so on.. so as you guys figured it out.. the delimiter is '|' now.. I want the output in the following form |something| |something else| |something_1_something_2_someting_3| |something blah blah| basically everything between a delimiter in one string Any clues how to go about it? Programming language is Python Answer: import re print re.findall(r"\|[^|]*\|", text) If you haven't seen this before, it's a regular expression. Basically you describe a pattern in text that you are looking for. I recommend reading up on it if you don't know them.
Python search and replace not replacing properly Question: I have this script that needs to replace a file extension and it is not doing so properly: import os import sys #directory is the directory we will work from directory = "C:\\Users\\joe\\Desktop" os.chdir(directory) whatToLookFor = ["Ca", "Cb", "Cd", "Ce", "Cf", "Cg", "Ch", "Ci", "Cj", "Ck", "Cl", "Cm", "Cn", "Co", "Fa", "Fb", "Fc", "Fd", "Fe", "Ff", "Fg", "Fh", "Fi", "Fk", "Fl", "Fm", "Fn", "Fo", "Fp", "Ga", "Gb", "Gc", "Gd", "Ge", "Gf", "Gg", "Gh", "Gi", "Gj", "Gk", "Gn", "Ja", "Jb", "Jc", "Jd", "Je", "Jf", "Jg", "Jh", "Jk", "Jl", "Jm", "Fj", "cc", "cb", "cd", "ce", "cf", "cg", "ch", "ci", "cj", "ck", "cl", "cm", "cn", "co", "fa", "fb", "fc", "fd", "fe", "ff", "fg", "fh", "fi", "fk", "fl", "fm", "fn", "fo", "fp", "ga", "gb", "gc", "gd", "ge", "gf", "gg", "gh", "gi", "gj", "gk", "gn", "ja", "jb", "jc", "jd", "je", "jf", "jg", "jh", "jk", "jl", "jm", "fj"] oldFile = open("links.htm", "r") newFile = open("python test.htm", "w") buffer = oldFile.read() for item in whatToLookFor: for x in range(0, 80): if x < 10: buffer = buffer.replace(item + str(x), item.upper() + "-0" + str(x)) else: buffer = buffer.replace(item + str(x), item.upper() + "-" + str(x)) newFile.write(buffer) oldFile.close() newFile.close() The file ff10 is being changed to FF-010 when it should not be. It should be changed to FF-10 Answer: Without knowing what your actual input is, it will be very difficult to help, however, I did notice one thing. It looks like you are trying to make sure you have two digit numbers in your buffer (after the item from whatToLookFor). If that's true, life would probably be easier if you replaced this: if x < 10: buffer = buffer.replace(item + str(x), item.upper() + "-0" + str(x)) else: buffer = buffer.replace(item + str(x), item.upper() + "-" + str(x)) With: sx = str(x) tmp = sx if len(sx) >= 2 else "0" + sx buffer = buffer.replace(item + sx, item.upper()+ "-" + tmp) Or, even better: buffer = buffer.replace(item + str(x), "%s-%02d" % (item.upper(), int(x)) )
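A hedged note on what is probably happening here (inferred from the loop structure, not confirmed against the actual input): since x runs 0, 1, 2, ... in order, the replacement for x = 1 turns the "ff1" prefix of "ff10" into "FF-01" long before x = 10 is reached, leaving the stray trailing "0" that produces FF-010. One possible sketch that avoids partial matches by refusing to match a number followed by another digit (fix_codes is a hypothetical helper, and it assumes each code is immediately followed by its number):

    import re

    def fix_codes(buffer, items):
        # For each code (e.g. "ff"), match 1-2 digits NOT followed by a
        # further digit, so "ff1" can no longer swallow the front of "ff10".
        for item in items:
            pattern = re.compile(re.escape(item) + r'(\d{1,2})(?!\d)')
            buffer = pattern.sub(
                lambda m: "%s-%02d" % (item.upper(), int(m.group(1))),
                buffer)
        return buffer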
python: unpack IBM 32-bit floating point Question: I was reading a binary file in python like this: from struct import unpack ns = 1000 f = open("binary_file", 'rb') while True: data = f.read(ns * 4) if data == '': break unpacked = unpack(">%sf" % ns, data) print str(unpacked) when I realized `unpack(">f", str)` is for unpacking IEEE floating point; my data is IBM 32-bit floating point numbers My question is: How can I implement my `unpack` to unpack IBM 32-bit floating point numbers? I don't mind using something like `ctypes` to extend python to get better performance. EDIT: I did some searching: <http://mail.scipy.org/pipermail/scipy- user/2009-January/019392.html> This looks very promising, but I want it to be more efficient: there are potentially tens of thousands of loops. EDIT: posted answer below. Thanks for the tip. Answer: I think I understood it: first unpack the string to an unsigned 4-byte integer, and then use this function: def ibm2ieee(ibm): """ Converts an IBM floating point number into IEEE format. :param: ibm - 32 bit unsigned integer: unpack('>L', f.read(4)) """ if ibm == 0: return 0.0 sign = ibm >> 31 & 0x01 exponent = ibm >> 24 & 0x7f mantissa = (ibm & 0x00ffffff) / float(pow(2, 24)) return (1 - 2 * sign) * mantissa * pow(16, exponent - 64) Thanks to all who helped! IBM Floating Point Architecture, how to encode and decode: <http://en.wikipedia.org/wiki/IBM_Floating_Point_Architecture> **My solution:** I wrote a class; I think this way it can be a bit faster, because it uses a Struct object, so that the unpack format is compiled only once. EDIT: also because it unpacks size*4 bytes all at once, and unpacking can be an expensive operation. from struct import Struct class StructIBM32(object): """ see example in: http://en.wikipedia.org/wiki/IBM_Floating_Point_Architecture#An_Example >>> import struct >>> c = StructIBM32(1) >>> bit = '11000010011101101010000000000000' >>> c.unpack(struct.pack('>L', int(bit, 2))) [-118.625] """ def __init__(self, size): self.p24 = float(pow(2, 24)) self.unpack32int = Struct(">%sL" % size).unpack def unpack(self, data): int32 = self.unpack32int(data) return [self.ibm2ieee(i) for i in int32] def ibm2ieee(self, int32): if int32 == 0: return 0.0 sign = int32 >> 31 & 0x01 exponent = int32 >> 24 & 0x7f mantissa = (int32 & 0x00ffffff) / self.p24 return (1 - 2 * sign) * mantissa * pow(16, exponent - 64) if __name__ == "__main__": import doctest doctest.testmod()
Standard error in non-linear regression Question: I have been doing some Monte Carlo physics simulations with Python and I am unable to determine the standard error for the coefficients of a non-linear least square fit. Initially, I was using SciPy's `scipy.stats.linregress` for my model since I thought it would be a linear model but noticed it is actually some sort of power function. I then used NumPy's `polyfit` with degree 2 but I can't find any way to determine the standard error of the coefficients. I know gnuplot can determine the errors for me but I need to do fits for over 30 different cases. I was wondering if anyone knows of any way for Python to read the standard error from gnuplot or is there some other library I can use? Answer: Finally found the answer to this long-asked question! I'm hoping this can at least save someone a few hours of hopeless research for this topic. Scipy has a special function called curve_fit under its optimize section. It uses the least square method to determine the coefficients and, best of all, it gives you the covariance matrix. The covariance matrix contains the variance of each coefficient. More exactly, the diagonal of the matrix is the variance and by square rooting the values, the standard error of each coefficient can be determined! Scipy doesn't have much documentation for this so here's a sample code for a better understanding: import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plot def func(x,a,b,c): return a*x**2 + b*x + c #Refer [1] x = np.linspace(0,4,50) y = func(x,2.6,2,3) + 4*np.random.normal(size=len(x)) #Refer [2] coeff, var_matrix = curve_fit(func,x,y) variance = np.diagonal(var_matrix) #Refer [3] SE = np.sqrt(variance) #Refer [4] #======Making a dictionary to print results======== results = {'a':[coeff[0],SE[0]],'b':[coeff[1],SE[1]],'c':[coeff[2],SE[2]]} print "Coeff\tValue\t\tError" for v,c in results.iteritems(): print v,"\t",c[0],"\t",c[1] #========End Results Printing================= y2 = func(x,coeff[0],coeff[1],coeff[2]) #Saves the y values for the fitted model plot.plot(x,y) plot.plot(x,y2) plot.show() 1. What this function returns is critical because it defines what will be used to fit the model 2. Using the function to create some arbitrary data + some noise 3. Saves the covariance matrix's diagonal to a 1D matrix which is just a normal array 4. Square rooting the variance to get the standard error (SE)
Text similarity algorithm, optimization concerns Question: I have a django blog, and I am writing a simple similar text algorithm for it. The code below is the code that I tested with a copy of my blog's database. (Note: code was originally in Turkish, I changed variable names to English for convenience. Therefore, things may look weird.) # -*- coding:utf-8 -*- from django.utils.html import strip_tags import os import sys import math import re PROJECT_FOLDER = os.path.abspath(os.path.dirname(__file__)) UPPER_FOLDER = os.path.abspath(PROJECT_FOLDER + "/../") sys.path.append(UPPER_FOLDER) os.environ["DJANGO_SETTINGS_MODULE"] = "similarity.settings" from blog.models import Post def getWords(post_object): all = post_object.title + " " + post_object.abstract + " " + post_object.post all = strip_tags(all.lower()) regex = re.compile("\W+",flags=re.UNICODE) return re.split(regex,all) def count_things(what_to_count,the_set): num = 0 for the_thing in the_set: if what_to_count in the_thing[1]: num += 1 return num a = Post.objects.all() b = [] for post in a: b.append((post.title,getWords(post))) del(a) def adjustWeight(the_list,the_word): numOccr = the_list.count(the_word) if numOccr == 0: return 0 else: return math.log(numOccr,1.6) results = [] uniques = [] for i in range(0,len(b)): for a_word in b[i][1]: if a_word not in uniques: uniques.append(a_word) for i in range(1,len(b)): for j in range(0,i): upper_part = 0 sum1 = 0 sum2 = 0 for a_word in uniques: adjusted1 = adjustWeight(b[i][1],a_word) adjusted2 = adjustWeight(b[j][1],a_word) upper_part += adjusted1 * adjusted2 * math.log(len(b)/count_things(a_word,b)) sum1 += adjusted1 sum2 += adjusted2 lower_part = math.sqrt(sum1 * sum2) results.append((b[i][0], b[j][0], upper_part/lower_part)) results = sorted(results, key = lambda x: x[2]) results.reverse() print("\n".join(["%s and %s => %f" % (x,c,v) for x,c,v in results]).encode("utf-8")) What it does, in a nutshell, is compare all possible pairs and output a similarity report. Now what I want is to merge this with my blog. However, this is very expensive code, so it needs some optimizing. This is what I have in mind. I will have a cron job for a python file, where it compares newly added or modified texts with all other texts, and stores similarity scores in the database for use. Another thing I have in mind is to open another table and do some indexing on it like this: "post id" "word" "number of occurrence", so instead of reading the post and counting the words every time, I would just read that data from the database, where everything is already done. I was wondering what you think about this. I wanted to get the ideas of others since I am not an expert on the issue. Answer: If you want to do text similarity based searching, you are better off going with a search server like Sphinx: <http://sphinxsearch.com/>
BeautifulSoup installed but not recognized when dev_appserver runs Question: **Update** By adding BeautifulSoup.py to my app source, this error was gone :) Thanks @Ned Deily, that took a long time, but was fruitful _**Ignore from here_** I have just one instance of python 2.5 installed with BeautifulSoup, still no luck! What am I doing wrong? Please help bash-3.2$ ls -ltr /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages total 1096 -rw-r--r-- 1 Harit admin 66866 May 28 2006 BeautifulSoup.py -rw-r--r-- 1 Harit admin 26413 May 28 2006 BeautifulSoupTests.py -rw-rwxr-- 1 root admin 119 Sep 18 2006 README drwxr-xr-x 19 Harit admin 646 Aug 20 23:58 django -rw-r--r-- 1 Harit admin 1228 Aug 20 23:58 Django-1.3-py2.5.egg-info -rw-r--r-- 1 Harit admin 333390 Aug 21 00:17 setuptools-0.6c11-py2.5.egg -rw-r--r-- 1 Harit admin 30 Aug 21 00:17 setuptools.pth -rw-r--r-- 1 Harit admin 215 Aug 21 00:22 easy-install.pth -rw-r--r-- 1 Harit admin 33196 Aug 21 00:23 BeautifulSoupTests.pyc -rw-r--r-- 1 Harit admin 67193 Aug 21 00:23 BeautifulSoup.pyc -rw-r--r-- 1 Harit admin 970 Aug 21 00:23 BeautifulSoup-3.0.0-py2.5.egg-info bash-3.2$ _**Ignore from here_** I removed all versions of python from macport and system and re-installed the python 2.7 version bash-3.2$ python Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import django >>> from BeautifulSoup import BeautifulSoup >>> and all the paths also look good bash-3.2$ echo $PATH /Library/Frameworks/Python.framework/Versions/2.7/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:/opt/local/bin and have just one version of python that has both `Django` and `BeautifulSoup` installed bash-3.2$ cd /Library/Frameworks/Python.framework/Versions/Current/ Headers/ Mac/ Python Resources/ bin/ include/ lib/ share/ bash-3.2$ cd /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/lib lib-dynload/ lib-tk/ lib2to3/ bash-3.2$ cd /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/ Display all 641 possibilities? (y or n) bash-3.2$ ls /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/ BeautifulSoup-3.2.0-py2.7.egg-info BeautifulSoupTests.pyc easy-install.pth BeautifulSoup.py Django-1.3-py2.7.egg-info setuptools-0.6c11-py2.7.egg BeautifulSoup.pyc README setuptools.pth BeautifulSoupTests.py django bash-3.2$ but still when I run `dev_appserver.py project` it says it cannot import the module `BeautifulSoup` Please help Thank you _**Ignore from below_** I have BeautifulSoup installed on my mac and I can do the following: bash-3.2$ python Python 2.6.7 (r267:88850, Jul 27 2011, 11:54:59) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> from BeautifulSoup import BeautifulSoup >>> but when I run my djando app and try to run the code, it fails saying Error was: No module named BeautifulSoup It seems I am using everything correctly bash-3.2$ which python /opt/local/bin/python lrwxr-xr-x 1 root admin 9 Aug 16 13:55 python -> python2.6 bash-3.2$ cd /Library/Python/2.6/site-packages/ BeautifulSoup-3.0.0-py2.6.egg-info ipython-0.11-py2.6.egg/ BeautifulSoup.py mercurial/ BeautifulSoup.pyc mercurial-1.8.3_20110502-py2.6.egg-info/ BeautifulSoupTests.py nose-1.1.2-py2.6.egg/ BeautifulSoupTests.pyc paramiko-1.7.6-py2.6.egg Django-1.3-py2.6.egg-info pip-1.0.2-py2.6.egg/ MySQL_python-1.2.3-py2.6-macosx-10.6-universal.egg pycrypto-2.3-py2.6-macosx-10.6-universal.egg README pysqlite-2.6.3-py2.6.egg-info django/ pysqlite2/ easy-install.pth setuptools-0.6c11-py2.6.egg easy_install setuptools.pth easy_install-2.6 xlrd/ hgext/ xlrd-0.6.1-py2.6.egg-info How can I resolve this issue Thanks Answer: You apparently have installed a second, newer instance of Python 2.6. Chances are that your Django app is installed and being run from another instance of Python 2.6, possibly the system Python 2.6 (`/usr/bin/python` or `/usr/bin/python2.6`). Type `which python` to see the path of the Python which has BeautifulSoup (`/usr/local/bin/python` perhaps?). You'll need to consolidate things; either install BeautifulSoup in the Python with Django or install Django in the Python with BeautifulSoup. UPDATE: Since you are apparently running the Google App Engine dev_server, chances are that it is running under Python 2.5, not Python 2.6; at the moment, GAE is officially supported only with 2.5, as far as I know. Note Apple ships both a Python 2.6 and 2.5 with OS X 10.6. So you probably need to install Beautiful Soup in Python 2.5. Try: easy_install-2.5 -U -v beautifulsoup==3.2 At the moment, you'll need to specify the version as there is currently a newer beta version of Beautiful Soup 4 that appears to be incompatible with Python 2.5. UPDATE: You also seem to have more than one version of Python 2.5 installed. The path you show is for a 3rd-party Python, not the Apple-supplied Python 2.5. _So_ my final suggestion is to try using the easy_install-2.5 for the Apple-supplied Python 2.5: /usr/bin/easy_install-2.5 -U -v beautifulsoup==3.2
Python For Loop Slowing With Time Question: So I'm having a little trouble dealing with for loops in Python - as far as I can tell, they're getting slower with time. I'm looping over a range inside of a range, and as time passes, the loop noticeably slows. This is done inside of a game engine, if it matters. Could anyone tell me what the issue is? Here's a quick example. for x in range(xs): # xs, ys, and zs are all pre-determined size values for z in range(zs): for y in range(ys): vp = [x * vs, y * vs, z * vs] v = Cube(vp) The initial speed of this process is fine, but with time the loop slows. I know it's not anything else like the Rasterizer of the game engine because when the loop is done, the rest of the engine runs at 60 FPS. So what could be the problem? EDIT: I'm using Python 3, so there is no xrange. EDIT 2: For this example, vs is 1.0, and the predetermined size values of xs, ys, and zs are all 20. Answer: This is another case of "need more information". However, Python has a standard way of constructing nested loops like this efficiently, [`itertools.product`](http://docs.python.org/library/itertools.html#itertools.product): from itertools import product for x, y, z in product(range(xs), range(zs), range(ys)): vp = [x * vs, y * vs, z * vs] v = Cube(vp) It doesn't require the construction of `range`s every time in the inner loop. (On Python 2 you would also switch `range` to `xrange` for large ranges, but since you're on Python 3, `range` is already lazy, and this is really irrelevant with `product` anyway.) @JohnZ's question is good -- if your "predetermined size values" are very large, and especially if `vs` is also large, you could be constructing some large values, and it could be taking a long time for `Cube` to process them. I doubt the loop itself is slowing down, but the numbers are getting larger, so your calculations might be.
Average of large number of Dice Rolls in Haskell Question: In an attempt to learn Haskell better, I'm trying to write a program that displays the average value of the sum of 2 die, rolled X number of times. This is fairly simple in C, Java, Python... but I'm stuck in Haskell. Here's a naive attempt: import System.Random main = do g <- getStdGen let trials = 10000000 let rolls = take trials (randomRs (2, 12) g :: [Int]) let average = div (sum rolls) trials print average For low number of trials, the program works. But when I run this code with ten million trials, I get an error: Stack space overflow: current size 8388608 bytes. Use `+RTS -Ksize -RTS' to increase it. There's got to be a better way to write this program. In the C, Java, and Python versions, this is a simple task. I've looked at [this](http://stackoverflow.com/questions/2110535/sampling-sequences-of- random-numbers-in-haskell) post (and understand about 75% of the material), but when I adapt that code to this situation, summing a sequence of `R [Int]` doesn't work (and I'm not sure how to 'unwrap' the [Int]). What am I doing wrong? What's the right way? How do I reach random number enlightenment in Haskell? **Edit:** in addition to the answer selected, as rtperson points out below, the modeling of 2 dice is incorrect; it should really be the sum of two independent rolls from 1 to 6. Answer: `sum` is no good to sum a long list, it runs in linear space. Try this strict version of `sum`: sum' = foldl' (+) 0 `foldl'` is defined in `Data.List`. **EDIT** More information can be found in [this HaskellWiki article](http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27).
Python win32com - Read text in a text box to a cell? Question: I would like to read the text from a text box in an Excel File and save that value to a variable. The problem I am having is with the reading of the TextBox. I have tried several methods, this one showed the most promise, as it does not generate an error, but it does not elicit the desired result either. Any suggestions are appreciated. See code below. import win32com.client as win32 excel = win32.gencache.EnsureDispatch('Excel.Application') wb = excel.Workbooks.Open("C:\\users\\khillstr\\Testing\\Scripts\\Book1.xlsx") excel.Visible = False ws = wb.Worksheets canvas = excel.ActiveSheet.Shapes for shp in canvas.CanvasItems: if shp.TextFrame.Characters: print shp.TextFrame.Characters else: print "no" Answer: Canvas has to do with graphics in excel files. I think you want access to the cells. Below is code that prints out each row as a tuple. import win32com.client as win32 excel = win32.gencache.EnsureDispatch('Excel.Application') wb = excel.Workbooks.Open("C:\\users\\khillstr\\Testing\\Scripts\\Book1.xlsx") excel.Visible = False sheet = wb.Worksheets(1) for row in sheet.UsedRange.Value: print row
program timeout in windows for python Question: Hi I've seen a bunch of questions here on SE that deal with similar issues but I find many of the answers to be unclear and confusing. My question is very simple. How on a Windows platform can I kill a running function after a certain amount of time using Python v2.6? If I have: def my_function(start): x=start while True: print x x=x+1 return x how can I have this stop after X seconds? Please keep your answers clear about where to put this in my function and how to adjust the time limit. Thanks Answer: If you just want to run a normal function with a timeout, try: from datetime import timedelta, datetime from time import sleep endtime = datetime.utcnow() + timedelta(seconds = 2) while True: sleep(1) # just an example if datetime.utcnow() > endtime: # if more than two seconds has elapsed break If you're talking about stopping a thread, there is a blog post about doing this with threads which covers all the bases. [How (not) to set a timeout on a computation in Python](http://eli.thegreenplace.net/2011/08/22/how-not-to-set-a-timeout-on-a- computation-in-python/). The [author](http://stackoverflow.com/users/8206/eli- bendersky) also uses this site. Basically, the answer is that there is no "right way" to do this in Python, though if you're not on Windows `SIGALARM` works.
using python subprocess call to invoke python script Question: I have a python script that needs to invoke another python script in the same directory. I did this: from subprocess import call call('somescript.py') I get the following error call('somescript.py') File "/usr/lib/python2.6/subprocess.py", line 480, in call return Popen(*popenargs, **kwargs).wait() File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory I have the script somescript.py in the same folder though. Am I missing something here. Thanks Answer: If 'somescript.py' isn't something you could normally execute directly from the command line (i.e. `$ somescript.py` works), then you can't call it directly using call. Remember that the way Popen works is that the first argument is the program that it executes, and the rest are the arguments passed to that program. In this case, the program is actually _python_ , not your script. So the following will work as you expect: subprocess.call(['python', 'somescript.py', somescript_arg1, somescript_val1,...]). This correctly calls the python interpreter and tells it to execute your script with the given arguments. Note that this is different from the above suggestion: subprocess.call(['python somescript.py']) That will try to execute the program called _python somescript.py_ , which clearly doesn't exist. call('python somescript.py', shell=True) will also work, but using strings as input to call is not cross platform, is dangerous if you aren't the one building the string, and should generally be avoided if at all possible.
Help calculating Avg and std for excel files saved as CSV Question: I have about 20 excel files saved as CSV in a single folder. Each excel file has numbers saved in the first, second and third columns. I was trying to read the first column for all of the files, the second column for all of the files, and the third column for all of the files using the CSV module in python, and calculate the average and standard deviation for each column and save these results in a single separate excel file. Please help.... This is what I have so far... how can I access each column separately? import csv import os from numpy import array path="A:\\hello\\folder" dirList=os.listdir(path) for file in dirList: fullpath=os.path.join(path,file) ## print fullpath with open(fullpath, 'rb') as f: [[val for val in line.split(',')] for line in f.readlines()] ## print line nums = array([line]) for row in nums: print row.mean() Answer: A [**list comprehension**](http://docs.python.org/tutorial/datastructures.html#list- comprehensions) works kinda like a backwards for-loop that automatically constructs a `list` for you. If you nest these with the "columns" on the inside and the "rows" on the outside you should get a matrix thingy (nested list structure): nums = [[int(val) for val in line.split(',')] for line in my_file.readlines()] Or maybe if you have a csv reader object it might be like this: nums = [[int(val) for val in line] for line in my_csv_reader] And now you've got your matrix in a variable called `nums` thanks to the above _list comprehension_. Then you should probably use [numpy](http://new.scipy.org/download.html) to compute your stats. This is nice because you can access columns of a numpy array very easily and when you do it returns the column in the form of a numpy array. numpy arrays also happen to have built-in methods for mean and standard deviation. You can cast your `nums` to a numpy array just by passing it into the `array()` constructor function: from numpy import array anums = array(nums) Then if you want to iterate through columns, use the array slice notation and the `shape` variable that is a member of every numpy array: # The 1 index of anums.shape should tell you how many columns you have for c in range(anums.shape[1]): column = anums[:,c] col_mean = column.mean() col_std = column.std() # Do something with these variables here
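Putting those pieces together, here is a minimal sketch of the whole task (the folder path and the "summary.csv" output name are placeholders, and it assumes every CSV holds only numeric cells with rows of equal length):

    import csv
    import os
    from numpy import array

    path = "A:\\hello\\folder"

    with open(os.path.join(path, "summary.csv"), "wb") as out:
        writer = csv.writer(out)
        writer.writerow(["file", "column", "mean", "std"])
        for name in os.listdir(path):
            if not name.endswith(".csv") or name == "summary.csv":
                continue
            with open(os.path.join(path, name), "rb") as f:
                # Build the nested list structure, then cast to a numpy array
                nums = array([[float(v) for v in line]
                              for line in csv.reader(f) if line])
            for c in range(nums.shape[1]):
                col = nums[:, c]
                writer.writerow([name, c, col.mean(), col.std()])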
Problem importing matplotlib.mlab and .pyplot in python 2.7 on Mac OSX 10.6 Question: I am trying to plot a histogram using matplotlib in Python 2.7 on OSX 10.6 I have verified that I can import numpy, scipy, and matplotlib into python. A sample script on the matplotlib website does #!/usr/bin/env python import numpy as np import matplotlib.mlab as mlab import matplotlib.pyplot as plt However, I get an error when doing this. Here is what happens when I try to import mlab. Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import matplotlib.mlab as mlab Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/mlab.py", line 151, in <module> import matplotlib.nxutils as nxutils ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/nxutils.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/nxutils.so: no matching architecture in universal wrapper >>> What am I doing wrong that I can't import these as the script does? Answer: For the _ImportError_ : It seems that there is an architecture mismatch. Maybe you have installed a 32-bit version of matplotlib, but are using a 64-bit Python? What does the following shell command print? file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/nxutils.so For the _AttributeError_ : You have to explicitly import `matplotlib.pyplot`, it won't get imported automatically when just importing `matplotlib`. The most common aliasing scheme is: import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt Then you can draw your histogram using the `plt` name: plt.hist(...)
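As a quick sanity check on the Python side (this just reports the interpreter's own architecture, to compare against what `file` says about nxutils.so):

    import platform
    # Prints e.g. '64bit' or '32bit' for the running interpreter
    print platform.architecture()[0]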
Renaming filenames using python Question: I need to simply add the word "_Manual" onto the end of all the files I have in a specific directory Here is the script I am using at the moment - I have no experience with Python so this script is a frankenstein of other scripts I had lying around! It doesn't give any error messages but it also doesn't work.. folder = "C:\Documents and Settings\DuffA\Bureaublad\test" import os, glob for root, dirs, filenames in os.walk(folder): for filename in filenames: filename_split = os.path.splitext(filename) # filename and extensionname (extension in [1]) filename_zero = filename_split[0] os.rename(filename_zero, filename_zero + "_manual") I am now using folder = "C:\Documents and Settings\DuffA\Bureaublad\test" import os # glob is unnecessary for root, dirs, filenames in os.walk(folder): for filename in filenames: fullpath = os.path.join(root, filename) filename_split = os.path.splitext(fullpath) # filename and extensionname (extension in [1]) filename_zero, fileext = filename_split print fullpath, filename_zero + "_manual" + fileext os.rename(fullpath, filename_zero + "_manual" + fileext) but it still doesn't work... it doesn't print anything and nothing gets changed in the folder! Answer: `os.rename` requires a source and destination filename. The variable `filename` contains your current filename (e.g., "something.txt"), whereas your split separates that into `something` and `txt`. As the source file to rename, you then only specify `something`, which fails silently. Instead, you want to rename the file given in `filename`, but as you walk into subfolders as well, you need to make sure to use the absolute path. For this you can use `os.path.join(root, filename)`. So in the end you get something like this: os.rename(os.path.join(root, filename), os.path.join(root, filename_zero + "_manual" + filename_split[1])) This would rename `dir1/something.txt` into `dir1/something_manual.txt`.
pass session cookies in http header with python urllib2? Question: I'm trying to write a simple script to log into Wikipedia and perform some actions on my user page, using the Mediawiki api. However, I never seem to get past the first login request (from this page: <https://secure.wikimedia.org/wikipedia/en/wiki/Wikipedia:Creating_a_bot#Logging_in>). I don't think the session cookie that I set is being sent. This is my code so far: import Cookie, urllib, urllib2, xml.etree.ElementTree url = 'https://secure.wikimedia.org/wikipedia/en/w/api.php?action=login&format=xml' username = 'user' password = 'password' user_data = [('lgname', username), ('lgpassword', password)] #Login step 1 #Make the POST request request = urllib2.Request(url) data = urllib.urlencode(user_data) login_raw_data1 = urllib2.urlopen(request, data).read() #Parse the XML for the login information login_data1 = xml.etree.ElementTree.fromstring(login_raw_data1) login_tag = login_data1.find('login') token = login_tag.attrib['token'] cookieprefix = login_tag.attrib['cookieprefix'] sessionid = login_tag.attrib['sessionid'] #Set the cookies cookie = Cookie.SimpleCookie() cookie[cookieprefix + '_session'] = sessionid #Login step 2 request = urllib2.Request(url) session_cookie_header = cookieprefix+'_session='+sessionid+'; path=/; domain=.wikipedia.org; HttpOnly' request.add_header('Set-Cookie', session_cookie_header) user_data.append(('lgtoken', token)) data = urllib.urlencode(user_data) login_raw_data2 = urllib2.urlopen(request, data).read() I think the problem is somewhere in the `request.add_header('Set-Cookie', session_cookie_header)` line, but I don't know for sure. How do I use these python libraries to send cookies in the header with every request (which is necessary for a lot of API functions). Answer: The latest version of [`requests`](http://docs.python- requests.org/en/latest/index.html) has support for [sessions](http://docs.python-requests.org/en/latest/user/advanced/#session- objects) (as well as being really simple to use and generally great): with requests.session() as s: s.post(url, data=user_data) r = s.get(url_2)
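As a rough sketch of how that applies to the two-step MediaWiki login above (the token handling mirrors the original code; error handling is omitted):

    import requests
    import xml.etree.ElementTree

    url = 'https://secure.wikimedia.org/wikipedia/en/w/api.php'
    user_data = {'action': 'login', 'format': 'xml',
                 'lgname': 'user', 'lgpassword': 'password'}

    with requests.session() as s:
        # Step 1: the session stores the returned cookies automatically.
        reply = s.post(url, data=user_data)
        login = xml.etree.ElementTree.fromstring(reply.content).find('login')
        # Step 2: resend with the token; the cookies go out with this
        # request without any manual header fiddling.
        user_data['lgtoken'] = login.attrib['token']
        s.post(url, data=user_data)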
Using Python Iterparse For Large XML Files Question: I need to write a parser in Python that can process some extremely large files ( > 2 GB ) on a computer without much memory (only 2 GB). I wanted to use iterparse in lxml to do it. My file is of the format: <item> <title>Item 1</title> <desc>Description 1</desc> </item> <item> <title>Item 2</title> <desc>Description 2</desc> </item> and so far my solution is: from lxml import etree context = etree.iterparse( MYFILE, tag='item' ) for event, elem in context : print elem.xpath( 'description/text( )' ) del context Unfortunately though, this solution is still eating up a lot of memory. I think the problem is that after dealing with each "ITEM" I need to do something to cleanup empty children. Can anyone offer some suggestions on what I might do after processing my data to properly cleanup? Answer: Try [Liza Daly's fast_iter](http://www.ibm.com/developerworks/xml/library/x-hiperfparse/). After processing an element, `elem`, it calls `elem.clear()` to remove descendants and also removes preceding siblings. def fast_iter(context, func, *args, **kwargs): """ http://lxml.de/parsing.html#modifying-the-tree Based on Liza Daly's fast_iter http://www.ibm.com/developerworks/xml/library/x-hiperfparse/ See also http://effbot.org/zone/element-iterparse.htm """ for event, elem in context: func(elem, *args, **kwargs) # It's safe to call clear() here because no descendants will be # accessed elem.clear() # Also eliminate now-empty references from the root node to elem for ancestor in elem.xpath('ancestor-or-self::*'): while ancestor.getprevious() is not None: del ancestor.getparent()[0] del context def process_element(elem): print elem.xpath( 'description/text( )' ) context = etree.iterparse( MYFILE, tag='item' ) fast_iter(context,process_element) Daly's article is an excellent read, especially if you are processing large XML files. * * * Edit: The `fast_iter` posted above is a modified version of Daly's `fast_iter`. After processing an element, it is more aggressive at removing other elements that are no longer needed. The script below shows the difference in behavior. Note in particular that `orig_fast_iter` does not delete the `A1` element, while the `mod_fast_iter` does delete it, thus saving more memory. 
import lxml.etree as ET import textwrap import io def setup_ABC(): content = textwrap.dedent('''\ <root> <A1> <B1></B1> <C>1<D1></D1></C> <E1></E1> </A1> <A2> <B2></B2> <C>2<D></D></C> <E2></E2> </A2> </root> ''') return content def study_fast_iter(): def orig_fast_iter(context, func, *args, **kwargs): for event, elem in context: print('Processing {e}'.format(e=ET.tostring(elem))) func(elem, *args, **kwargs) print('Clearing {e}'.format(e=ET.tostring(elem))) elem.clear() while elem.getprevious() is not None: print('Deleting {p}'.format( p=(elem.getparent()[0]).tag)) del elem.getparent()[0] del context def mod_fast_iter(context, func, *args, **kwargs): """ http://www.ibm.com/developerworks/xml/library/x-hiperfparse/ Author: Liza Daly See also http://effbot.org/zone/element-iterparse.htm """ for event, elem in context: print('Processing {e}'.format(e=ET.tostring(elem))) func(elem, *args, **kwargs) # It's safe to call clear() here because no descendants will be # accessed print('Clearing {e}'.format(e=ET.tostring(elem))) elem.clear() # Also eliminate now-empty references from the root node to elem for ancestor in elem.xpath('ancestor-or-self::*'): print('Checking ancestor: {a}'.format(a=ancestor.tag)) while ancestor.getprevious() is not None: print( 'Deleting {p}'.format(p=(ancestor.getparent()[0]).tag)) del ancestor.getparent()[0] del context content = setup_ABC() context = ET.iterparse(io.BytesIO(content), events=('end', ), tag='C') orig_fast_iter(context, lambda elem: None) # Processing <C>1<D1/></C> # Clearing <C>1<D1/></C> # Deleting B1 # Processing <C>2<D/></C> # Clearing <C>2<D/></C> # Deleting B2 print('-' * 80) """ The improved fast_iter deletes A1. The original fast_iter does not. """ content = setup_ABC() context = ET.iterparse(io.BytesIO(content), events=('end', ), tag='C') mod_fast_iter(context, lambda elem: None) # Processing <C>1<D1/></C> # Clearing <C>1<D1/></C> # Checking ancestor: root # Checking ancestor: A1 # Checking ancestor: C # Deleting B1 # Processing <C>2<D/></C> # Clearing <C>2<D/></C> # Checking ancestor: root # Checking ancestor: A2 # Deleting A1 # Checking ancestor: C # Deleting B2 study_fast_iter()
pyodbc returns SQL Server DATE fields as strings Question: I'm using pyodbc to query a SQL Server 2008 database table with columns of DATE type. The resulting rows of data contain date strings rather than python datetime.date or datetime.datetime instances. **This only appears to be an issue for columns of type DATE; columns of type DATETIME are handled correctly and return a datetime.datetime instance.** ## Example import pyodbc from pprint import pformat db = pyodbc.connect("DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;DATABASE=scratch;Trusted_Connection=yes") print pformat(db.cursor().execute("select * from Contract").description) Results: (('id', <type 'int'>, None, 10, 10, 0, False), ('name', <type 'str'>, None, 23, 23, 0, False), ('some_date', <type 'unicode'>, None, 10, 10, 0, True), ('write_time', <type 'datetime.datetime'>, None, 23, 23, 3, False)) Note that the **some_date** column is indicated as type unicode string, however, in the database this column is defined as DATE: CREATE TABLE dbo.Contract( id INT NOT NULL, name VARCHAR(23) NOT NULL, some_date DATE NULL, write_time DATETIME NOT NULL) Is this normal, and how can I best correct it? Answer: Use the SQL Server native client. e.g. Put **Driver={SQL Server Native Client 10.0}** in your connection string,instead of **DRIVER={SQL Server}**. Reproduced your scenario with date being returned as string using SQL Server ODBC driver. When using a 2008+ compatible version of the SQL Server native client, the date type is returned as expected, but it looks like datetime2 gets returned as string (in my limited testing). Table definition: create table dbo.datetest ( [date] date not null, [datetime] datetime not null, [datetime2] datetime2 not null ); insert into dbo.datetest values (CAST(current_timestamp as DATE), CAST(current_timestamp as datetime), CAST(current_timestamp as datetime2)); Example: import pyodbc from pprint import pformat db = pyodbc.connect(driver='{SQL Server Native Client 10.0}', server='TESTSRVR', database='TESTDB', trusted_connection='yes') print pformat(db.cursor().execute("select * from dbo.datetest").description) Results: (('date', <type 'datetime.date'>, None, 10, 10, 0, False), ('datetime', <type 'datetime.datetime'>, None, 23, 23, 3, False), ('datetime2', <type 'unicode'>, None, 27, 27, 0, False))
Python mail: encoded attachments are truncated Question: I'm using the following function to send an email message with two attachments in my python script: import smtplib from email.MIMEMultipart import MIMEMultipart from email.MIMEBase import MIMEBase from email.MIMEText import MIMEText from email.Utils import COMMASPACE, formatdate from email import Encoders ... def sendMail(sender_name, to, subject, text, files=None,server="localhost"): assert type(to)==list if files: assert type(files)==list print "Files: ",files fro = sender_name msg = MIMEMultipart() msg['From'] = fro msg['To'] = COMMASPACE.join(to) msg['Date'] = formatdate(localtime=True) msg['Subject'] = subject msg.attach( MIMEText(text) ) if files: for file in files: # ************** File attaching - Start ************** part = MIMEBase('application', "octet-stream") part.set_payload( open(file,"rb").read() ) Encoders.encode_base64(part) part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(file)) msg.attach(part) # ************** File attaching - End ************** server.set_debuglevel(1) server.ehlo() server.starttls() server.ehlo() server.sendmail(fro, to, msg.as_string()) server.quit() I get the mail, and the attachments are there, but for some reason, they are truncated a bit. My guess is I'm missing something in the encoding process. For example: Attachment 1: Original file byte count is 1433902, while the new byte count is 1433600 Attachment 2: Original file byte count is 2384703, while the new byte count is 2383872 Any ideas? Answer: Found the problem. Turns out I tried sending the files before the buffer of the writing process was fully flushed. So, it was a synchronization issue and not an encoding issue. Sorry about that, and thanks for the help guys!
Using python datetime.datetime.strptime on windows with BST timezone Question: I need to parse many different dates in many different formats. I am having trouble with the following and wondered if anyone could explain why; The following works on a linux system: from datetime import datetime datetime.strptime('Tue 23 Aug 2011 09:00:07 PM BST','%a %d %b %Y %H:%M:%S %p %Z') But running under windows it raises ValueError: time data does not match format However, if I try GMT not BST on windows, it works fine; from datetime import datetime datetime.strptime('Tue 23 Aug 2011 09:00:07 PM GMT','%a %d %b %Y %H:%M:%S %p %Z') Is there a reason python does not understand the BST timezone under windows, but it works fine under Linux? thanks, Matt. Answer: In my opinion, parsing a three-letter time zone code like this is not a good practice (unless of course you have no choice). For example, "EST" is commonly used in the USA for UTC-4/5 and is also commonly used in Australia. So any support for "EST" must therefore be dependent on locale. It would not surprise me if "BST" was similarly ambiguous. I highly recommend using the [`pytz`](http://pypi.python.org/pypi/pytz/) module in which British civil time is given the string identifier `Europe/London` and UTC is called `Etc/UTC`. The `pytz` API will give consistent results regardless of the locale of the user or system running the application. If you are working on a UI that must be tied to locale, or parsing inputs with formats you cannot change, then consider using a dictionary of abbreviations to `pytz` timezone objects. For example: `{'BST': 'Europe/London'}`. Then your application can work with UTC dates and times uniformly, which will greatly reduce the possibility of errors.
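A rough sketch of that dictionary approach (the ZONES mapping here is only an example; you would fill it with whatever abbreviations your inputs actually use):

    from datetime import datetime
    import pytz

    ZONES = {'BST': 'Europe/London', 'GMT': 'Etc/UTC'}

    raw = 'Tue 23 Aug 2011 09:00:07 PM BST'
    # Parse everything except the abbreviation, then attach the zone yourself
    text, abbrev = raw.rsplit(' ', 1)
    naive = datetime.strptime(text, '%a %d %b %Y %H:%M:%S %p')
    aware = pytz.timezone(ZONES[abbrev]).localize(naive)
    utc = aware.astimezone(pytz.utc)

This sidesteps the platform-dependent %Z handling entirely, since strptime never sees the abbreviation.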
Deploying Django app using passenger Question: I can get through everything on their wiki - and then I'm lost. <http://wiki.dreamhost.com/Django> I have a blank Django template, and whenever I try to change anything I get a 500 internal server error. I have completely developed my django app locally and just want to host it online - figured it would be easy but am slowly learning that it is not. I upload my app "videos" to this directory and then put it into the installed apps and ran "python manage.py syncdb", which finds no fixtures (which I found odd). From there, it just gets an internal server error. Here is the error I am getting: <http://tweettune.com/> and here is the error log: [Wed Aug 24 01:49:15 2011] [error] [client 66.212.30.122] Premature end of script headers: [Wed Aug 24 01:49:15 2011] [error] [client 66.212.30.122] Premature end of script headers: internal_error.html [Wed Aug 24 08:16:40 2011] [error] [client 99.229.160.94] Premature end of script headers: [Wed Aug 24 08:16:41 2011] [error] [client 99.229.160.94] Premature end of script headers: internal_error.html [Wed Aug 24 08:21:38 2011] [error] [client 99.229.160.94] Premature end of script headers: [Wed Aug 24 08:21:38 2011] [error] [client 99.229.160.94] Premature end of script headers: internal_error.html [Wed Aug 24 08:27:41 2011] [error] [client 99.229.160.94] Premature end of script headers: [Wed Aug 24 08:27:41 2011] [error] [client 99.229.160.94] Premature end of script headers: internal_error.html I've been trying for 6 hours now and cannot figure out what I am doing wrong. I suppose I just don't understand how to deploy an application at all - my thought process now is to take my locally hosted app and replace all the files in the default django template online. I don't see why this should not work but it's not. I tried the "hello world app" example by using this code in my passenger_wsgi file and it works... def application(environ, start_response): start_response('200 OK', [('Content-type', 'text/plain')]) return ["Hello, world!"] Any direction would be helpful. **EDIT:** Here are the contents of my passenger_wsgi.py file which may be helpful (although it is automatically generated by dreamhost...so figured it would be correct). import sys, os sys.path.append(os.getcwd()) os.environ['DJANGO_SETTINGS_MODULE'] = "sotd.settings" import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() project_path='/home/tweettune.com/sotd/' sys.path.insert(1, project_path) Answer: I had the same problem. The solution was to add the folder of my application in passenger_wsgi.py import sys, os sys.path.append(os.getcwd()) sys.path.append(os.path.join(os.getcwd(), 'include your apps folder here')) os.environ['DJANGO_SETTINGS_MODULE'] = "cpc.settings" import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() This link was very useful to me: <http://discussion.dreamhost.com/thread-128918.html>
psycopg2.ProgrammingError, running script to serialize data of django app from postgres Question: I have a Django web app which stores data about some entries in a PostgreSQL db. To copy the data in the db to JSON files, I generally use `python manage.py shell` and the serialization API as mentioned in the Django tutorial.

    >>>python manage.py shell
    ...
    In[8]:from myapp.models import MyFirstModel
    In[9]:data = serializers.serialize("xml", MyFirstModel.objects.all())
    In[10]:print data

I copy this output to some text file and save it as json. I thought of writing a script to do this and tried datacopy.py:

    ...
    filename = os.path.join(dirpath,basefilename+".json")

    def write_data_to_file():
        from django.core import serializers
        XMLSerializer = serializers.get_serializer("json")
        xml_serializer = XMLSerializer()
        out = open(filename,"a")
        from django.contrib.auth.models import User
        from myapp.models import MyFirstModel
        from myapp.models import MyNextModel
        xml_serializer.serialize(User.objects.all(), stream=out)
        xml_serializer.serialize(MyFirstModel.objects.all(), stream=out)
        xml_serializer.serialize(MyNextModel.objects.all(), stream=out)

    if __name__ == '__main__':
        write_data_to_file()

From the bash shell, I tried

    >>python datacopy.py

But this writes only the User model's data and fails to copy the models which I create in my app. The error message I get:

    Traceback (most recent call last):
      File "datacopy.py", line 29, in <module>
        write_data_to_file()
      File "datacopy.py", line 23, in write_data_to_file
        xml_serializer.serialize(MyFirstModel.objects.all(), stream=out)
      File "/home/me/Django-1.1.1/django/core/serializers/base.py", line 38, in serialize
        for obj in queryset:
      File "/home/me/Django-1.1.1/django/db/models/query.py", line 106, in _result_iter
        self._fill_cache()
      File "/home/me/Django-1.1.1/django/db/models/query.py", line 692, in _fill_cache
        self._result_cache.append(self._iter.next())
      File "/home/me/Django-1.1.1/django/db/models/query.py", line 238, in iterator
        for row in self.query.results_iter():
      File "/home/me/Django-1.1.1/django/db/models/sql/query.py", line 287, in results_iter
        for rows in self.execute_sql(MULTI):
      File "/home/me/Django-1.1.1/django/db/models/sql/query.py", line 2369, in execute_sql
        cursor.execute(sql, params)
      File "/home/me/Django-1.1.1/django/db/backends/util.py", line 19, in execute
        return self.cursor.execute(sql, params)
    psycopg2.ProgrammingError: relation "myapp_myfirstmodel" does not exist

I am able to copy data of all three models when I use `python manage.py shell`. Why does this error happen when I run the script from bash? I have the modules of myapp in PYTHONPATH. Answer: Have you looked into using `./manage.py dumpdata`? You can specify the serialization format. For your script to work, have you set DJANGO_SETTINGS_MODULE to the correct settings module? (A wrong or missing settings module typically points the script at a different database, which is exactly what a `relation ... does not exist` error suggests.) Secondly, is there a reason you are doing your imports inside the function? It is probably better to move them to the head of the file:

    from django.core import serializers
    from django.contrib.auth.models import User
    from myapp.models import MyFirstModel
    from myapp.models import MyNextModel

    def write_data_to_file():
        XMLSerializer = serializers.get_serializer("json")
        xml_serializer = XMLSerializer()
        out = open(filename,"a")
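For reference, the `dumpdata` route from the first suggestion is a one-liner run from the project directory — something like this (app name and flags are placeholders to adjust):

    python manage.py dumpdata myapp --format=json --indent=2 > myapp_data.json

Because it runs through manage.py, it picks up the right settings and database automatically, which sidesteps the problem the standalone script is hitting.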
Can modules with a common package hierarchy be mentioned multiple times in my PYTHONPATH? Question: I have two separate projects that share a package name. They run OK as long as they are not both on the PYTHONPATH, but as soon as they both appear one of them cannot find imports in its own project. Example, two projects like this: Project 1:

    x/
        __init__.py
        test.py
        foo.py

test.py contains the line:

    import x.foo

Project 2:

    x/
        __init__.py
        bar.py

If I run PYTHONPATH=. python x/test.py there is no error. But if I run PYTHONPATH='pathtoproject2:.' python x/test.py I get the error:

    Traceback (most recent call last):
      File "x/test.py", line 1, in <module>
        import x.foo
    ImportError: No module named foo

Is there a way to have different Python projects with a common package share the PYTHONPATH? Or will Python always use only the first path where a package is found? Note: I know if you modify the import from x.foo to import foo then it will work. But I want to know if it is possible to do it without modifying either package. Answer: Currently, Python does not support assembling a single package from multiple directories. A package is a unit, not just a namespace. This is different from Java "packages" or the more appropriately named "namespaces" in .NET. When importing a package, Python will scan `sys.path`, sequentially, and use the first match. If there is another module or package with a matching name in a directory that appears later in the path, it won't be found. As an aside, whether plain `import foo` works depends on how the file is run: executing `python x/test.py` puts the `x` directory itself at the front of `sys.path`, so `import foo` finds `x/foo.py`; and when `x.test` is imported as a package module, Python 2 resolves `foo` as an implicit relative import within the package. Instead of using package names to group modules using a common prefix, think of packages as smallish, self-contained libraries. In Python, [_flat is better than nested_](http://www.python.org/dev/peps/pep-0020/), and it is preferable to have multiple top-level packages, each fulfilling one distinct purpose, than to have one large monolithic package. Instead of `org.example.foo` and `org.example.bar`, just use `foo` and `bar`.
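If renaming is not an option, one partial workaround in Python 2 is `pkgutil.extend_path`, which makes a package's `__path__` span every matching directory on `sys.path`. It only works if every copy of `x/__init__.py` cooperates, so treat this as a sketch to experiment with rather than a guaranteed fix:

    # x/__init__.py -- identical in BOTH projects
    from pkgutil import extend_path

    # Extend this package's search path with every 'x' directory on sys.path,
    # so x.foo and x.bar can be resolved from different project checkouts.
    __path__ = extend_path(__path__, __name__)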
Python dir() not displaying all modules in a package Question: I'm using python 2.6 and I'm seeing somewhat weird behavior with the `dir()` function. I'm trying to import all the modules from a directory/package for a unittest, but when I do a `dir()` on the folder, I don't get _all_ the modules in that directory. **Sample directory structure:**

    |-mod_dir\
    |---__init__.py
    |---modA.py
    |---modB.py
    |---modC.py
    |
    |-mod_tests\
    |---__init__.py
    |---test.py

**Sample test.py:**

    import mod_dir
    for obj in dir(mod_dir):
        print obj

Unfortunately, at this point I only get something like:

    modA
    __all__
    __builtins__
    __doc__
    __file__
    __name__
    __package__
    __path__

Any ideas as to why the others aren't appearing here? I don't think it matters, but the `__init__.py` file in the mod_dir is empty. I've tried setting the `__all__` variable but it has no effect. If it does matter, however, I'm using this in WinXp with pydev in eclipse. **Context:** Each module under mod_dir has a unittest in it and I'm trying to include them in a unittest suite within test.py. I'm aware of nose and other methods like [this](http://stackoverflow.com/questions/555571/getting-a-list-of-all-modules-in-the-current-package/555717#555717) one, but I'm more interested in why dir isn't displaying everything. Answer: I believe it does matter that the `__init__.py` file is empty. Try this in the `__init__.py`:

    import modA
    import modB
    import modC

From the python docs, [`dir` should not be used for this](http://docs.python.org/library/functions.html#dir) (see the Note at the bottom of the `dir` section). It's not rigorous enough, and mainly intended for the interactive prompt.
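If the real goal is to enumerate a package's modules without hand-maintaining the imports, a sketch using the standard-library `pkgutil` reads the package directory directly instead of the module object's namespace:

    import pkgutil
    import mod_dir

    # Scan the package's directory rather than what happens to be imported.
    for importer, name, is_pkg in pkgutil.iter_modules(mod_dir.__path__):
        print name   # modA, modB, modC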
speeding up numpy kronecker products Question: I am working on my first large python project. I have one function which has the following code in it:

    # EXPAND THE EXPECTED VALUE TO APPLY TO ALL STATES,
    # THEN UPDATE fullFnMat
    EV_subset_expand = np.kron(EV_subset, np.ones((nrows, 1)))
    fullFnMat[key] = staticMat[key] + EV_subset_expand

In my code profiler, it seems like this kronecker product is actually taking up a huge amount of time.

    Function                                                                                           was called by...
                                                                                                           ncalls  tottime  cumtime
    /home/stevejb/myhg/dpsolve/ootest/tests/ddw2011/profile_dir/BellmanEquation.py:17(bellmanFn)   <-      19   37.681   38.768  /home/stevejb/myhg/dpsolve/ootest/tests/ddw2011/profile_dir/dpclient.py:467(solveTheModel)
    {numpy.core.multiarray.concatenate}                                                            <-     342   27.319   27.319  /usr/lib/pymodules/python2.7/numpy/lib/shape_base.py:665(kron)
    /home/stevejb/myhg/dpsolve/ootest/tests/ddw2011/profile_dir/dpclient.py:467(solveTheModel)     <-       1   11.041   91.781  <string>:1(<module>)
    {method 'argsort' of 'numpy.ndarray' objects}                                                  <-      19    7.692    7.692  /usr/lib/pymodules/python2.7/numpy/core/fromnumeric.py:597(argsort)
    /usr/lib/pymodules/python2.7/numpy/core/numeric.py:789(outer)                                  <-     171    2.526    2.527  /usr/lib/pymodules/python2.7/numpy/lib/shape_base.py:665(kron)
    {method 'max' of 'numpy.ndarray' objects}                                                      <-     209    2.034    2.034  /home/stevejb/myhg/dpsolve/ootest/tests/ddw2011/profile_dir/dpclient.py:391(getValPolMatrices)

Is there a way to get faster kronecker products in Numpy? It seems like it shouldn't take as long as it does. Answer: You can certainly take a look at the source for `np.kron`. It can be found in `numpy/lib/shape_base.py`, and you can see if there are improvements that can be made or simplifications that might make it more efficient. Alternatively you could write your own using Cython or some other binding to a low level language to try to eke out better performance. Or as @matt suggested something like the following might be natively faster:

    import numpy as np
    nrows = 10
    a = np.arange(100).reshape(10,10)
    b = np.tile(a,nrows).reshape(nrows*a.shape[0],-1) # equiv to np.kron(a,np.ones((nrows,1)))

or:

    b = np.repeat(a,nrows*np.ones(a.shape[0],np.int),axis=0)

Timings:

    In [80]: %timeit np.tile(a,nrows).reshape(nrows*a.shape[0],-1)
    10000 loops, best of 3: 25.5 us per loop

    In [81]: %timeit np.kron(a,np.ones((nrows,1)))
    10000 loops, best of 3: 117 us per loop

    In [91]: %timeit np.repeat(a,nrows*np.ones(a.shape[0],np.int),0)
    100000 loops, best of 3: 12.8 us per loop

Using `np.repeat` for the sized arrays in the above example gives a pretty nice 10x speed-up, which isn't too shabby.
In Python App Engine How Do I Uniquely Identify An Instance Of My App Running On The Dev SDK? Question: My application relies on an external service that it communicates with using urlfetch. I have multiple developers each running their own instance of my application on their development computers while they add features. Each developer instance needs to be able to uniquely identify itself to the external service so that the external service can keep their data separated. I need a way to automatically generate a unique identifier for each developer from within the application. Yes, I could just have each developer put a unique id in a variable in their code but I would much prefer it was automatic. Also, I could probably read some information about the hardware on the computer (like MAC address) and use that but I want this code to use only things that work on the production server so that I can use it there eventually as well. Answer: The only trick I've seen to identify instances is using a global variable address. UNIQUE_INSTANCE_ID = {} # at module level logging.debug("Instance %s." % (str("%X" % id( UNIQUE_INSTANCE_ID )).zfill(16))) That seems to work fairly well to uniquely identify an instance; but it only identifies an instance, not a machine. So if you restart your instance, you get a new identifier. That might be a "feature". You could also use some of the META variables; if developers are all running out of a home directory, you could parse a username out of 'PATH_TRANSLATED'. At the very least, you could make injecting a UUID into the datastore part of the data population; store a metadata kind in the datastore and the cache, and wrap that UUID into the requests. from uuid import uuid4 from google.appengine.ext import db from google.appengine.api import memcache cache = memcache.Client() class InstanceStamp(db.Model): code = db.StringProperty() INSTANCE_STAMP_KEY = "instance_stamp" @classmethod def get_stamp(cls): cache_key = cls.INSTANCE_STAMP_KEY stamp_code = cache.get(cache_key) if stamp_code is None: code = uuid4().hex stamp = cls.get_or_insert('instance_stamp', code=code) if stamp is not None: cache.set(cache_key, stamp.code, 300) stamp_code = stamp.code return stamp_code
hasattr(obj, '__iter__') vs collections Question: I've seen a couple of posts recommending `isinstance(obj, collections.Sequence)` instead of `hasattr(obj, '__iter__')` to determine if something is a list. [len(object) or hasattr(object, __iter__)?](http://stackoverflow.com/questions/1763507/lenobject-or-hasattrobject-iter) [Python: is not sequence](http://stackoverflow.com/questions/2937114/python-is-not-sequence) At first I was excited because testing if an object has `__iter__` always seemed dirty to me. But after further review this still seems to be the best solution, because none of the `isinstance` tests on `collections` yield the same results. `collections.Sequence` is close but it returns `True` for strings.

    hasattr(obj, '__iter__')
      set([]): True
      {}: True
      []: True
      'str': False
      1: False

    isinstance(obj, collections.Iterable)
      set([]): True
      {}: True
      []: True
      'str': True
      1: False

    isinstance(obj, collections.Iterator)
      set([]): False
      {}: False
      []: False
      'str': False
      1: False

    isinstance(obj, collections.Sequence)
      set([]): False
      {}: False
      []: True
      'str': True
      1: False

Here is the code I used to generate this:

    import collections

    testObjs = [
        set(),
        dict(),
        list(),
        'str',
        1
    ]

    print "hasattr(obj, '__iter__')"
    for obj in testObjs:
        print '  %r: %r' % (obj, hasattr(obj, '__iter__'))
    print

    print "isinstance(obj, collections.Iterable)"
    for obj in testObjs:
        print '  %r: %r' % (obj, isinstance(obj, collections.Iterable))
    print

    print "isinstance(obj, collections.Iterator)"
    for obj in testObjs:
        print '  %r: %r' % (obj, isinstance(obj, collections.Iterator))
    print

    print "isinstance(obj, collections.Sequence)"
    for obj in testObjs:
        print '  %r: %r' % (obj, isinstance(obj, collections.Sequence))
    print

Am I missing something or is `hasattr(obj, '__iter__')` still the best option for testing if something is iterable? **EDIT:** I am only interested in detecting the builtin types: `dict`, `list`, and `set`. (**EDIT:** this is foolish :)) **EDIT:** I should have included the use case that got me looking into this. I have a function that takes an arg that can be a single value or a sequence. So I want to detect what it is and turn it into a sequence if it's a single value so I can deal with it as a sequence after that.

    if hasattr(arg, '__iter__'):
        arg= set(arg)
    else:
        arg= set([arg])

One solution to this is just to let it throw an exception if the object cannot be iterated. But that doesn't work in my use case. Another solution is to use something like:

    import collections

    def issequenceforme(obj):
        if isinstance(obj, basestring):
            return False
        return isinstance(obj, collections.Sequence)

_From:_ [Python: is not sequence](http://stackoverflow.com/questions/2937114/python-is-not-sequence) But this requires this function to be defined, which makes me not want to use it. It looks like `hasattr(arg, '__iter__')` is still the best option. Answer: Checking `collections.Iterable` will tell you whether the object is iterable (i.e. usable with `for x in obj`), but checking `__iter__` will not. A string is an iterable datatype, but on Python2.x it doesn't have a `__iter__` method.
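For the normalise-to-a-set use case in the last edit, a duck-typed sketch that avoids both `hasattr` and the abc checks (treating strings as single values is an assumption about your inputs):

    def as_set(arg):
        """Wrap a single value, or consume any iterable, into a set."""
        if isinstance(arg, basestring):
            return set([arg])          # strings count as single values here
        try:
            return set(arg)            # works for any iterable
        except TypeError:
            return set([arg])          # non-iterable single value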
I don't even know what infile > outfile means. How am I supposed to use it? Question: I don't know how to use Python, and I'm trying to use a script on a document. I have no idea how to tell it to do this! If I just run the script, this is the message I get:

    Use: C:\Python27\hun2html.py infile > outfile
    Traceback (most recent call last):
      File "C:\Python27\hun2html.py", line 75, in <module>
        sys.exit(1)
    SystemExit: 1

I'm not sure what info is relevant to anyone who knows about this stuff, but this is the most relevant part of the code, I believe:

    if __name__ == '__main__':
        import sys
        if not sys.argv[1:]:
            print "Use: %s infile > outfile" % sys.argv[0]
            sys.exit(1)
        contents = open(sys.argv[1]).read()
        print hun2html(contents)

It's supposed to change the formatting in a document. If anyone can make sense of this stupid question, I would really appreciate some help! Answer: It means that you should write, where infile is, the path to the file you want to use for input, and where outfile is, the path to the file where you want to store the output. For example,

    C:\Python27\hun2html.py C:\input.txt > C:\output.txt

Note that the input file is being passed as a parameter (accessed in the code by `sys.argv[1]`) and the output is being piped, meaning that Python prints it to standard output, but because you put the `>` character it will be redirected to the file you indicate. If you left off the `> outfile` you would see the output displayed on your terminal.
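As an aside, if the redirection syntax feels awkward, the script could instead take the output file as a second argument. A minimal sketch of that variant (reusing the script's own `hun2html` function):

    import sys

    if len(sys.argv) < 3:
        print "Use: %s infile outfile" % sys.argv[0]
        sys.exit(1)

    contents = open(sys.argv[1]).read()
    out = open(sys.argv[2], 'w')
    out.write(hun2html(contents))   # hun2html is defined in the original script
    out.close()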
Python URL variable int add to string Question: pgno = 1 while pgno < 4304: result = urllib.urlopen("http://www.example.comtraderesourcespincode.aspx?" + "&GridInfo=Pincode0"+ pgno) print pgno html = result.read() parser = etree.HTMLParser() tree = etree.parse(StringIO.StringIO(html), parser) pgno += 1 in `http://.......=Pincode0` I need to add 1..for e.g like 'Pincode01', loop it 01 to 02, 03 .. for which I am using a while loop and the variable assigned is 'pgno'. The problem is the counter is adding 1, but 'Pincode01' is not becoming 'Pincode02' ... therefore it is not opening the 2nd page of the site. I even tried `+str(pgno))` ... no luck. Please show how to do it. I am not able to do this ...and have attempted it several times. Answer: Probably, you want this : from urllib import urlopen import re pgno = 2 url = "http://www.eximguru.com/traderesources/pincode.aspx?&amp;GridInfo=Pincode0%s" %str(pgno) print url +'\n' sock = urlopen(url) htmlcode = sock.read() sock.close() x = re.search('%;"><a href="javascript:__doPostBack',htmlcode).start() pat = ('\t\t\t\t<td style="width:\d+%;">(\d+)</td>' '<td style="width:\d+%;">(.+?)</td>' '<td style="width:\d+%;">(.+?)</td>' '<td style="width:30%;">(.+?)</td>\r\n') regx = re.compile(pat) print '\n'.join(map(repr,regx.findall(htmlcode,x))) result http://www.eximguru.com/traderesources/pincode.aspx?&amp;GridInfo=Pincode02 ('110001', 'New Delhi', 'Delhi', 'Baroda House') ('110001', 'New Delhi', 'Delhi', 'Bengali Market') ('110001', 'New Delhi', 'Delhi', 'Bhagat Singh Market') ('110001', 'New Delhi', 'Delhi', 'Connaught Place') ('110001', 'New Delhi', 'Delhi', 'Constitution House') ('110001', 'New Delhi', 'Delhi', 'Election Commission') ('110001', 'New Delhi', 'Delhi', 'Janpath') ('110001', 'New Delhi', 'Delhi', 'Krishi Bhawan') ('110001', 'New Delhi', 'Delhi', 'Lady Harding Medical College') ('110001', 'New Delhi', 'Delhi', 'New Delhi Gpo') ('110001', 'New Delhi', 'Delhi', 'New Delhi Ho') ('110001', 'New Delhi', 'Delhi', 'North Avenue') ('110001', 'New Delhi', 'Delhi', 'Parliament House') ('110001', 'New Delhi', 'Delhi', 'Patiala House') ('110001', 'New Delhi', 'Delhi', 'Pragati Maidan') ('110001', 'New Delhi', 'Delhi', 'Rail Bhawan') ('110001', 'New Delhi', 'Delhi', 'Sansad Marg Hpo') ('110001', 'New Delhi', 'Delhi', 'Sansadiya Soudh') ('110001', 'New Delhi', 'Delhi', 'Secretariat North') ('110001', 'New Delhi', 'Delhi', 'Shastri Bhawan') ('110001', 'New Delhi', 'Delhi', 'Supreme Court') ('110002', 'New Delhi', 'Delhi', 'Rajghat Power House') ('110002', 'New Delhi', 'Delhi', 'Minto Road') ('110002', 'New Delhi', 'Delhi', 'Indraprastha Hpo') ('110002', 'New Delhi', 'Delhi', 'Darya Ganj') I wrote this code after having studied the structure of the HTML source code with the following code (I think you'll understand it without any more explanations): from urllib2 import Request,urlopen import re pgno = 2 url = "http://www.eximguru.com/traderesources/pincode.aspx?&amp;GridInfo=Pincode0%s" %str(pgno) print url +'\n' sock = urlopen(url) htmlcode = sock.read() sock.close() li = htmlcode.splitlines(True) print '\n'.join(str(i) + ' ' + repr(line)+'\n' for i,line in enumerate(li) if 275<i<300) ch = ''.join(li[0:291]) from collections import defaultdict didi =defaultdict(int) for c in ch: didi[c] += 1 print '\n\n'+repr(li[289]) print '\n'.join('%r -> %s' % (c,didi[c]) for c in li[289] if didi[c]<35) . Now, the problem is that the same HTML is returned for all the values of pgno. The site may detect it is a program that wants to connect and fetch data. 
This problem would have to be treated with the tools in **urllib2**, but I'm not familiar enough with them to go further.
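To answer the literal question in the title: the counter has to be converted with `str()` (or a `%d` format) each time the URL is rebuilt — concatenating a `str` and an `int` raises a `TypeError`. A minimal sketch, with `base_url` standing in for the real address:

    import urllib

    base_url = "http://www.example.com/traderesources/pincode.aspx?&GridInfo="  # placeholder

    pgno = 1
    while pgno < 4304:
        url = base_url + "Pincode0" + str(pgno)   # or: base_url + "Pincode0%d" % pgno
        result = urllib.urlopen(url)
        # ... parse result.read() as before ...
        pgno += 1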
Python create/import custom module in same directory Question: I'm trying to create a simple python script and import a couple of custom classes. I'd like to do this as one module. Here is what I have: point/point.py class Point: """etc.""" point/pointlist.py class PointList: """etc.""" point/__init__.py from . import point, pointlist script.py import sys, point verbose = False pointlist = PointList() When I run `script.py` I get `NameError: name 'PointList' is not defined` What's weird is that in point/, all three of the module files (__init__, pointlist, point) have a `.pyc` version created that was not there before, so it seems like it is finding the files. The class files themselves also compile without any errors. I feel like I'm probably missing something very simple, so please bear with me. Answer: Sorry, I seem to have made a blunder in my earlier answer and comments: The problem here is that you should access the objects in `point` through the module you import: `point/__init__.py`: from point import Point from pointlist import PointList `script.py:` import sys, point verbose = False pointlist = point.PointList() You access `PointList` through the import `point` which imports whatever is in `__init__.py` If you want to access `PointList` and `Point` directly you could use `from point import Point, PointList` in `script.py` or the least preferable `from point import *` Again, sorry for my earlier error.
How to generate a Cocoa-recognized plot using matplotlib in Python on OS X (Leopard preferably) Question: I'm not sure exactly what is going on under the hood, but here is my setup, example code, and problem: ## setup: * snow leopard (10.6.8) * Python 2.7.2 (provided by EPD 7.1-2) * iPython 0.11 (provided by EPD 7.1-2) * matplotlib (provided by EPD 7.1-2) ## example code:

    import numpy as np
    import pylab as pl
    x=np.random.normal(size=(1000,))
    pl.plot(x)

## problem: I can't use the standard Mac OS X shortcuts to access the window generated by the plot command. For example, I can't `Command`-`Tab` to the window. Thus, if the window is behind some other window, I need to _mouse_ over to it! `Command`-`W` doesn't close it. Obviously, this is unacceptable. It seems like perhaps running Lion instead of Leopard might fix this, but I haven't upgraded yet. I feel like the problem has something to do with iPython generating windows that aren't fully Cocoa-aware in some sense, but I really know very little so I'm not particularly confident in this hypothesis. Thus, any ideas on how to either resolve or get around this issue would be much appreciated. Answer: From the description on the [iPython page](http://ipython.org/), it looks like IPython uses **Qt** to generate its UI. This means that the windows it generates are definitely not Cocoa windows and will not act like them. There's not likely to be an easy solution to this issue.
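One thing that may also be worth trying — my suggestion, not part of the answer above — is forcing matplotlib's native Cocoa backend before pylab is first imported; when that backend is available in your build, the windows it opens are real Cocoa windows:

    import matplotlib
    matplotlib.use('MacOSX')   # must run before the first pylab/pyplot import

    import numpy as np
    import pylab as pl

    x = np.random.normal(size=(1000,))
    pl.plot(x)
    pl.show()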
Caught DatabaseError while rendering: no such column: bookmarks_bookmark.title Question: Learning django from "learning website development using django". In chapter 3, building a bookmark data model, I followed the instructions and code given except for--

    from django.contrib.auth.models import User

    class Bookmark(models.Model):
        title = models.CharField(maxlength=200)
        user = models.ForeignKey(User)
        link = models.ForeignKey(Link)

where I changed

    title = models.CharField(maxlength=200)

into

    title = models.CharField(max_length=200)

as I was getting an error message. After that, I ran python manage.py syncdb, then, python manage.py sql bookmarks. When I checked <http://localhost:8000/user/my_username>, I get this error message:

    Request Method: GET
    Request URL: http://127.0.0.1:8000/user/j/
    Django Version: 1.3
    Exception Type: TemplateSyntaxError
    Exception Value: **Caught DatabaseError while rendering: no such column: bookmarks_bookmark.title**

Looking this up, I learned that sqlite3, the database I was using, cannot find bookmarks_bookmark.title. I went back into the book to make sure I have everything copied correctly, and I did except for that part that I changed (max_length). When I ran python manage.py sql bookmarks, it gave me--

    BEGIN;
    CREATE TABLE "bookmarks_bookmark" (
        "id" integer NOT NULL PRIMARY KEY,
        "title" varchar(200) NOT NULL,
        "user_id" integer NOT NULL REFERENCES "auth_user" ("id"),
        "link_id" integer NOT NULL REFERENCES "bookmarks_link" ("id"),
    );
    CREATE TABLE "bookmarks_link" (
        "id" integer NOT NULL PRIMARY KEY,
        "url" varchar(200) NOT NULL UNIQUE
    );
    COMMIT;

How do I fix this? Thanks! Answer: Manual deletion works. The root cause is that `syncdb` only creates tables that do not exist yet — it never adds columns to an existing table — so a `bookmarks_bookmark` table created before the `title` field worked will be missing that column. Dropping the table lets `syncdb` recreate it from the current model:

    sqlite3 yourdb.db
    > drop table bookmarks_bookmark;
    > .quit
    cd yourpythonproj
    python2 manage.py syncdb
    python2 manage.py runserver

bash script

    #!/bin/bash
    sqlite3 ../yourdb.db 'drop table yourtable'
    python2 manage.py syncdb
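As a side note, Django 1.3 also still ships a `reset` management command (deprecated in that release and removed in later versions) that drops and recreates an app's tables for you. It destroys the app's data, so it is only an option on a throwaway development database:

    python manage.py reset bookmarks
    python manage.py runserver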
parallel recursive function in python? Question: How do I parallelize a recursive function in Python? My function looks like this:

    def f(x, depth):
        if x==0:
            return ...
        else:
            return [x] + map(lambda x:f(x, depth-1), list_of_values(x))

    def list_of_values(x):
        # heavy compute, pure function

When trying to parallelize it with `multiprocessing.Pool.map`, Windows opens an infinite number of processes and hangs. What's a good (preferably simple) way to parallelize it (for a single multicore machine)? Here is the code that hangs:

    from multiprocessing import Pool
    pool = Pool(processes=4)

    def f(x, depth):
        if x==0:
            return ...
        else:
            return [x] + pool.map(lambda x:f(x, depth-1), list_of_values(x))

    def list_of_values(x):
        # heavy compute, pure function

Answer: ok, sorry for the problems with this. i'm going to answer a slightly different question where `f()` returns the sum of the values in the list. that is because it's not clear to me from your example what the return type of `f()` would be, and using an integer makes the code simple to understand. this is complex because there are two different things happening in parallel: 1. the calculation of the expensive function in the pool 2. the recursive expansion of `f()` i am very careful to only use the pool to calculate the expensive function. in that way we don't get an "explosion" of processes. but because this is asynchronous we need to postpone a _lot_ of work for the callback that the worker calls once the expensive function is done. more than that, we need to use a countdown latch so that we know when all the separate sub-calls to `f()` are complete. there may be a simpler way (i am pretty sure there is, but i need to do other things), but perhaps this gives you an idea of what is possible:

    from multiprocessing import Pool, Value, RawArray, RLock
    from time import sleep

    class Latch:

        '''A countdown latch that lets us wait for a job of "n" parts'''

        def __init__(self, n):
            self.__counter = Value('i', n)
            self.__lock = RLock()

        def decrement(self):
            with self.__lock:
                self.__counter.value -= 1
                print('dec', self.read())
            return self.read() == 0

        def read(self):
            with self.__lock:
                return self.__counter.value

        def join(self):
            while self.read():
                sleep(1)


    def list_of_values(x):
        '''An expensive function'''
        print(x, ': thinking...')
        sleep(1)
        print(x, ': thought')
        return list(range(x))


    pool = Pool()


    def async_f(x, on_complete=None):
        '''Return the sum of the values in the expensive list'''
        if x == 0:
            on_complete(0)  # no list, return 0
        else:
            n = x  # need to know size of result beforehand
            latch = Latch(n)  # wait for n entries to be calculated
            result = RawArray('i', n+1)  # where we will assemble the map

            def delayed_map(values):
                '''This is the callback for the pool async process - it runs
                   in a separate thread within this process once the
                   expensive list has been calculated and orchestrates the
                   mapping of f over the result.'''
                result[0] = x  # first value in list is x
                for (i, v) in enumerate(values):
                    def callback(fx, i=i):
                        '''This is the callback passed to f() and is called when
                           the function completes. If it is the last of all the
                           calls in the map then it calls on_complete() (ie
                           another instance of this function) for the calling
                           f().'''
                        result[i+1] = fx
                        if latch.decrement():  # have completed list
                            # at this point result contains [x]+map(f, ...)
                            on_complete(sum(result))  # so return sum
                    async_f(v, callback)
            # Ask worker to generate list then call delayed_map
            pool.apply_async(list_of_values, [x], callback=delayed_map)


    def run():
        '''Tie into the same mechanism as above, for the final value.'''
        result = Value('i')
        latch = Latch(1)

        def final_callback(value):
            result.value = value
            latch.decrement()

        async_f(6, final_callback)
        latch.join()  # wait for everything to complete
        return result.value

    print(run())

ps i am using python3.2 and the ugliness above is because we are delaying computation of the final results (going back up the tree) until later. it's possible something like generators or futures could simplify things. also, i suspect you need a cache to avoid needlessly recalculating the expensive function when called with the same argument as earlier. see also yaniv's answer - [parallel recursive function in python?](http://stackoverflow.com/questions/7222570/parallell-recursive-function-in-python/7228414#7228414) - which seems to be an alternative way to reverse the order of the evaluation by being explicit about depth.
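On the cache suggested in the closing paragraph: since every expensive call above is dispatched from the parent process, a plain dict memo wrapped around the dispatch is enough — a sketch (my addition, not part of the answer; duplicate calls already in flight would still be recomputed):

    _cache = {}

    def cached_apply_async(x, callback):
        '''Parent-side memo in front of pool.apply_async.'''
        if x in _cache:
            callback(_cache[x])                    # reuse a finished result
        else:
            def remember(values, x=x):
                _cache[x] = values
                callback(values)
            pool.apply_async(list_of_values, [x], callback=remember)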
Running pdb in daemon mode with WSGI Question: I am running a Python script on Apache 2.2 with mod wsgi. Is it possible to run pdb.set_trace() in a python script using daemon mode in wsgi? **Edit** The reason I want to use daemon mode instead of embedded mode is to have the capability to reload code without having to restart the Apache server every time (which embedded mode requires). I would like to be able to use code reloading without restarting Apache every time and still be able to use pdb... Answer: I had the same need to be able to use the amazingly powerful `pdb`, dropping a `pdb.set_trace()` wherever I wanted to debug some part of the Python server code. Yes, **Apache** spawns the **WSGI** application in a place where it is out of your control [1]. But I found a good compromise is to 1. maintain your Apache `WSGIScriptAlias` 2. and also give yourself the option of starting your Python server in a terminal as well (testing locally and not through **Apache** anymore in this case) So if one uses `WSGIScriptAlias` somewhat like this... pointing to your python WSGI script called `webserver.py`

    <VirtualHost *:443>
        ServerName myawesomeserver
        DocumentRoot /opt/local/apache2/htdocs
        <Directory /opt/local/apache2/htdocs>
           [...]
        </Directory>
        WSGIScriptAlias /myapp /opt/local/apache2/my_wsgi_scripts/webserver.py/
        <Directory /opt/local/apache2/my_wsgi_scripts/>
            [...]
        </Directory>
        [...]
        SSLEngine on
        [...]
    </VirtualHost>

And so your `webserver.py` can have a simple **switch** to go between being used by Apache and getting started up for debugging manually. Keep a flag in your config file such as, in some `settings.py`:

    WEBPY_WSGI_IS_ON = True

And `webserver.py` (note the added `import pdb`, which the set_trace() call needs):

    import web
    import settings
    import pdb

    urls = (
        '/', 'excellentWebClass',
        '/store', 'evenClassier',)

    if settings.WEBPY_WSGI_IS_ON is True:
        # MODE #1: Non-interactive web.py ; using WSGI
        # So whenever true, the Web.py application here will talk wsgi.
        application = web.application(urls, globals()).wsgifunc()

    class excellentWebClass:
        def GET(self, name):
            # Drop a pdb wherever you want only if running manually from terminal.
            pdb.set_trace()
            try:
                f = open (name)
                return f.read()
            except IOError:
                print 'Error: No such file %s' % name

    if __name__ == "__main__":
        # MODE #2: Interactive web.py , for debugging.
        # Here you call it directly.
        app = web.application(urls, globals())
        app.run()

So when you want to test out your webserver interactively, you just run it from a terminal,

    $ python webserver.py 8080
    starting web...
    http://0.0.0.0:8080/

_[1] Footnote: There are some really complex ways of getting Apache child processes under your control, but I think the above is much simpler if you just want to debug your Python server code. And if there are actually easy ways, then I would love to learn about those too._
Invalid Python syntax using file.write Question: Trying to learn some geospatial python. More or less following the class notes [here](http://www.gis.usu.edu/~chrisg/python/2009/lectures/ospy_slides1.pdf). [My Code](http://pastebin.com/pp2b0CvF)

    #!/usr/bin/python
    # import modules
    import ogr, sys, os
    # set working dir
    os.chdir('/home/jacques/misc/pythongis/data')
    # create the text file we're writing to
    file = open('data_export.txt', 'w')
    # import the required driver for .shp
    driver = ogr.GetDriverByName('ESRI Shapefile')
    # open the datasource
    data = driver.Open('road_surveys.shp', 1)
    if data is None:
        print 'Error, could not locate file'
        sys.exit(1)
    # grab the datalayer
    layer = data.GetLayer()
    # loop through the features
    feature = layer.GetNextFeature()
    while feature:
        # acquire attributes
        id = feature.GetFieldAsString('Site_Id')
        date = feature.GetFieldAsString('Date')
        # get coordinates
        geometry = feature.GetGeometryRef()
        x = str(geometry.GetX())
        y = str(geometry.GetY()
        # write to the file
        file.write(id + ' ' + x + ' ' + y + ' ' + cover + '\n')
        # remove the current feature, and get a new one
        feature.Destroy()
        feature = layer.GetNextFeature()
    # close the data source
    datasource.Destroy()
    file.close()

Running that gives me the following:

    File "shape_summary.py", line 38
        file.write(id + ' ' + x + ' ' + y + ' ' + cover + '\n')
    SyntaxError: invalid syntax

Running Python 2.7.1 Any help would be fantastic! Answer: The previous line is missing a close parenthesis:

    y = str(geometry.GetY())

Also, just a style comment: it's a good idea to avoid using the variable name `file` in python because it actually has a meaning. Try opening a new python session and running `help(file)`
Getting Beaker working with GAE Question: I'm trying to port an app I've been running locally to GAE. The app uses the Bottle.py framework. I use Beaker for session management. I'm a bit of a noob and am having trouble getting Beaker imported properly. Help greatly appreciated. I'm running the ported app using GoogleAppEngineLauncher.app under Mac OS X 10.6.7. This runs the app in the simulation environment on my machine, not on Google's servers. For my GAE port, I've put Bottle.py into a directory called 'framework'. This directory has an empty `__init__.py` file. Bottle is working fine and can serve 'hello world'. Beaker exists in its own directory in the root of my app (journal/beaker). Beaker also has an empty `__init__.py`. Relevant code: from framework import bottle from beaker import SessionMiddleware from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app @bottle.route('/') def index(): return "hello, world" def main(): bottle.debug(True) run_wsgi_app(bottle.default_app()) if __name__ == '__main__': main() I get an error message like this: File "/Users/mscantland/code/journal/main.py", line 19, in <module> from beaker import SessionMiddleware ImportError: cannot import name SessionMiddleware Here is what I have tried to get this working so far: * Checked permissions on everything in /beaker to make sure they were executable. * Ran beaker as-is and also re-wrote all import statements so that: from beaker.x import y became: from x import y * Added 'pkg_resources.py' which is not in the standard library for the Python version GAE uses. Answer: SessionMiddleware is in middleware.py. Try: from beaker.middleware import SessionMiddleware
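For wiring it into the Bottle app from the question, a minimal sketch (the session options are placeholders — note that `memory` sessions live per-instance on App Engine, so a datastore- or memcache-backed store would be the serious choice):

    from beaker.middleware import SessionMiddleware

    session_opts = {
        'session.type': 'memory',   # placeholder backend
        'session.auto': True,
    }

    # Wrap the bottle WSGI app; each request then sees environ['beaker.session'].
    app = SessionMiddleware(bottle.default_app(), session_opts)

    def main():
        bottle.debug(True)
        run_wsgi_app(app)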
Django management task won't work on CentOS in crontab or outside project directory Question: On my local machine (Mac OSX 10.6) I wrote a django custom admin command which works great. I can use it both within and outside my project directory just fine. For some reason on my CentOS 5.6 server, it won't work from outside the project directory. This is really annoying since using this custom admin command in a cron job requires it to run from the home directory. In short: When I run "python ./manage.py scrape" or "python manage.py scrape", everything is fine. When I run "python /home/[username]/webapps/myproject/manage.py scrape" or "python myproject/manage.py scrape", I get the following error: > unknown command: 'scrape' > > Type 'manage.py help' for usage. On CentOS, when I run manage.py help inside the project directory, scrape shows up as a command; but if I run it outside the project directory, scrape does not appear as a valid command. On OS-X scrape appears as a valid command regardless of where I run manage.py help from. Any idea how I can fix this? Answer: I know CentOS ships with Python 2.4 — is your code running on 2.4, or are you using a contained environment? This kind of problem is usually fixed by setting your PYTHONPATH correctly. To verify, for starters:

    import sys
    print sys.path
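For the cron side, the simplest workaround is usually to mirror the invocation that already works by changing into the project directory first — a sketch of the crontab entry (the schedule and path are placeholders):

    */30 * * * * cd /home/[username]/webapps/myproject && python manage.py scrape

That way the command resolution sees exactly the same sys.path and settings as your working interactive run.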
Python List: Is this the best way to remove duplicates while preserving order? Question: > **Possible Duplicates:** > [How do you remove duplicates from a list in Python whilst preserving > order?](http://stackoverflow.com/questions/480214/how-do-you-remove- > duplicates-from-a-list-in-python-whilst-preserving-order) > [Algorithm - How to delete duplicate elements in a list > efficiently?](http://stackoverflow.com/questions/1801459/algorithm-how-to- > delete-duplicate-elements-in-a-list-efficiently) I've read a lot of methods for removing duplicates from a python list while preserving the order. All the methods appear to require the creation of a function/sub-routine, which I think is not very computationally efficient. I came up with the following and I would like to know if this is the most computationally efficient method to do so? (My usage for this has to be the most efficient possible due to the need to have fast response time.) Thanks b=[x for i,x in enumerate(a) if i==a.index(x)] Answer: `a.index(x)` itself will be `O(n)` as the list has to be searched for the value `x`. The overall runtime is `O(n^2)`. "Saving" function calls does not make a bad algorithm faster than a good one. More efficient (`O(n)`) would probably be: result = [] seen = set() for i in a: if i not in seen: result.append(i) seen.add(i) Have a look at this question: [How do you remove duplicates from a list in Python whilst preserving order?](http://stackoverflow.com/questions/480214/how-do-you-remove- duplicates-from-a-list-in-python-whilst-preserving-order) _(the top answer also shows how to do this in a list comprehension manner, which will be more efficient than an explicit loop)_ * * * You can easily profile your code yourself using the [`timeit` _[docs]_](http://docs.python.org/library/timeit.html) module. For example, I put your code in `func1` and mine in `func2`. If I repeat this `1000` times with an array with `1000` elements (no duplicates): >>> a = range(1000) >>> timeit.timeit('func1(a)', 'from __main__ import func1, a', number=1000) 11.691882133483887 >>> timeit.timeit('func2(a)', 'from __main__ import func2, a', number=1000) 0.3130321502685547 Now with duplicates (only 100 distinct values): >>> a = [random.randint(0, 99) for _ in range(1000)] >>> timeit.timeit('func1(a)', 'from __main__ import func1, a', number=1000) 2.5020430088043213 >>> timeit.timeit('func2(a)', 'from __main__ import func2, a', number=1000) 0.08332705497741699
Total Python Noob: why doesn't this work? Question: I'm working through Learn Python the Hard Way, and trying to _understand_ it rather than just hammer away. I got stuck on Exercise 16, as discussed already on SO here: [Very basic Python question (strings, formats and escapes)](http://stackoverflow.com/questions/6394170/very-basic-python-question-strings-formats-and-escapes) but I'm still trying to figure out why this approach does not work:

    from sys import argv

    script, filename = argv

    print "Attempting to open the file now."
    print open(filename).read()

    print "We're going to erase %r." % filename
    print "If you don't want that, hit CTRL-C."
    print "If you do want that, hit RETURN."

    raw_input("?")

    print "Opening the file..."
    target = open(filename, 'w')

    print "Truncating the file. Goodbye!"
    target.truncate()

    print "Now I'm going to ask you for three lines."

    line1 = raw_input("line 1: ")
    line2 = raw_input("line 2: ")
    line3 = raw_input("line 3: ")

    print "I'm going to write these to the file."

    linebreak = "\n"

    target.write("%s %s %s %s %s %s") % (line1, linebreak, line2, linebreak, line3, linebreak)
    target.write("the ending line")

    print "And finally, we close it."
    target.close()

I've established a value for linebreak, and am calling the line1, line2 and linebreak values with %s in the target.write command. Shouldn't it parse as "line1 \n line2 \n line3 \n" when it's read? This is probably the equivalent of being asked by a child what keeps the sky up or something, and I apologize for being kind of thick. Thanks! Answer:

    target.write("%s %s %s %s %s %s") % (line1, linebreak, line2, linebreak, line3, linebreak)

should be

    target.write("%s %s %s %s %s %s" % (line1, linebreak, line2, linebreak, line3, linebreak))

(the `%` formatting has to happen inside the call, on the string itself, not on the return value of `write`) but would be better written as:

    target.write(' '.join([line1, linebreak, line2, linebreak, line3, linebreak]))

Note that `join` takes a single iterable — a list here — rather than separate arguments.
csh stdin to Python stdin? Question: How do you redirect the stdin of a csh script to the stdin of a python script? I have a cgi script I'm writing in csh that runs on a Solaris machine. This csh script is a wrapper to a python script that reads from the stdin (I know, scripting in csh is bad but I'm forced to in this case). Thanks for help! (And sorry for the n00b question!) Answer: test.csh #!/bin/env csh python test.py test.py (see [this question](http://stackoverflow.com/questions/1450393/how- do-you-read-from-stdin-in-python)) #!/bin/env python import fileinput if __name__ == '__main__': print "Hi. printing stdin" for line in fileinput.input(): print line print "fin" Then the stdin to `test.csh` is passed in to `test.py` [as Henning said](http://stackoverflow.com/questions/7234640/csh-stdin-to-python- stdin/7234676#7234676). echo "this is stdin" | csh test.csh
HTTPS to HTTP using CherryPy Question: Is it possible for CherryPy to redirect HTTPS to HTTP? Let's say, for example, that the code below serves <http://example.com>. If someone visits via <https://example.com>, I want them to be redirected to the plain HTTP URL (a 301 redirect, maybe?). How do I accomplish this?

    #!/usr/bin/env python
    from pprint import pformat
    from cherrypy import wsgiserver

    def app(environ, start_response):
        status = '200 OK'
        response_headers = [('Content-type', 'text/plain')]
        start_response(status, response_headers)
        return [pformat(environ)]

    server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 80), app)

    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()

Answer: You can check `request.scheme`; if it is "https", then you can raise a redirect. See <http://docs.cherrypy.org/en/latest/refman/_cprequest.html?highlight=request.scheme#cherrypy._cprequest.Request.scheme>
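Since the example serves a bare WSGI callable rather than a full CherryPy application, the WSGI-level equivalent is to inspect `environ['wsgi.url_scheme']`. A sketch — note it assumes something actually sets that scheme, e.g. a second HTTPS listener or a proxy in front (a server listening only on port 80 will always see "http"):

    def app(environ, start_response):
        if environ.get('wsgi.url_scheme') == 'https':
            # Permanent redirect to the plain-HTTP equivalent of this URL.
            location = 'http://' + environ['HTTP_HOST'] + environ.get('PATH_INFO', '/')
            start_response('301 Moved Permanently', [('Location', location)])
            return []
        start_response('200 OK', [('Content-type', 'text/plain')])
        return [pformat(environ)]   # pformat from the question's imports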
Scaling embedded matplotlib widget in qt application written in python problem Question: I am writing a simple digital image processing program. To do this I have embedded a mpl widget in my qt application. The user can perform some simple analysis on the image such as box car filter, FFT etc. Every thing is working fine until I would like to switch from displaying an image to displaying a plot. If I display a plot first, the axis are fine (see bottom plot in image). But if I display an image first, followed by a plot (top plot in image), the scale compresses. <https://picasaweb.google.com/105163945296073520628/Temp> <\-- sorry I can't post images yet The code is hosted here <https://code.launchpad.net/~marrabld/pymi/trunk> I am using imshow() to display the image. and plot(x,y) for the plots. This is the main update method def updateImage(self): self.ui.mplWidget.canvas.PlotTitle = self.plotTitle self.ui.mplWidget.canvas.xtitle = self.xTitle self.ui.mplWidget.canvas.ytitle = self.yTitle #self.ui.mplWidget.canvas.ax.visible(False) self.ui.mplWidget.canvas.format_labels() if self.projectProperty == globals.IMAGE: if self.lastProjectProperty == globals.PLOT: self.myImage = imageFuncs.basic(self.imageFileName) self.imPlot = self.ui.mplWidget.canvas.ax.imshow(self.myImage.image,cmap=matplotlib.cm.gray,origin='upper') elif self.projectProperty == globals.PLOT: if self.lastProjectProperty == globals.IMAGE: # we need to reload the GUI self.ui.mplWidget.canvas.ax.hold(False) self.ui.mplWidget.canvas.ax.plot(self.xData,self.yData) self.ui.mplWidget.canvas.draw() And the mpl widget I am using #!/usr/bin/env python from PyQt4.QtCore import * from PyQt4.QtGui import * from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas #from matplotlib.backends.backend_qt4 import NavigationToolbar2QT as NavigationToolbar from matplotlib.backend_bases import NavigationToolbar2 from matplotlib.figure import Figure from matplotlib import rc import numpy as N class MyMplCanvas(FigureCanvas): def __init__(self, parent=None, width = 10, height = 12, dpi = 125, sharex = None, sharey = None): rc('text', usetex=True) rc('font', family='sans-serif') rc('legend',fontsize='small' ) rc('legend',shadow='true') self.fig = Figure(figsize = (width, height), dpi=dpi, facecolor = '#FFFFFF') self.ax = self.fig.add_subplot(111, sharex = sharex, sharey = sharey) self.fig.subplots_adjust(left=0.15, bottom=0.15, right=0.9, top=0.9) self.fig.add_axes(yscale='symlog') self.xtitle=r"x-Axis" self.ytitle=r"y-Axis" self.PlotTitle = r"Title" self.grid_status = True self.xaxis_style = 'linear' self.yaxis_style = 'linear' #self.fig.yscale = 'log' self.format_labels() self.ax.hold(True) FigureCanvas.__init__(self, self.fig) #self.fc = FigureCanvas(self.fig) #FigureCanvas.setSizePolicy(self, # QSizePolicy.Expanding, # QSizePolicy.Expanding) FigureCanvas.updateGeometry(self) def format_labels(self): self.ax.set_title(self.PlotTitle) self.ax.title.set_fontsize(5) self.ax.set_xlabel(self.xtitle, fontsize = 4) self.ax.set_ylabel(self.ytitle, fontsize = 4) labels_x = self.ax.get_xticklabels() labels_y = self.ax.get_yticklabels() for xlabel in labels_x: xlabel.set_fontsize(4) for ylabel in labels_y: ylabel.set_fontsize(4) ylabel.set_color('b') def sizeHint(self): w, h = self.get_width_height() return QSize(w, h) def minimumSizeHint(self): return QSize(10, 10) def sizeHint(self): w, h = self.get_width_height() return QSize(w, h) def minimumSizeHint(self): return QSize(10, 10) #mouseClick = pyqtProperty("QPoint",mouseClick,click) 
    class mplWidget(QWidget):
        def __init__(self, parent = None):
            QWidget.__init__(self, parent)
            self.canvas = MyMplCanvas()
            #self.toolbar = MyNavigationToolbar(self.canvas, self.canvas, direction = 'v')
            self.hbox = QHBoxLayout()
            #self.hbox.addWidget(self.toolbar)
            self.hbox.addWidget(self.canvas)
            self.setLayout(self.hbox)

        def savePlot(self,filePath):
            self.canvas.fig.savefig(filePath)

        def setLegend(self,handle, label):
            self.canvas.fig.legend(handle,label,'upper right')

        def clearPlot(self):
            self.canvas.fig.clear()
            width = 10
            height = 12
            dpi = 125
            sharex = None
            sharey = None
            self.canvas.fig = Figure(figsize = (width, height), dpi=dpi, facecolor = '#FFFFFF')
            self.canvas.ax = self.canvas.fig.add_subplot(111, sharex = sharex, sharey = sharey)
            self.canvas.fig.subplots_adjust(left=0.15, bottom=0.15, right=0.9, top=0.9)
            self.canvas.fig.add_axes(yscale='symlog')
            self.canvas.xtitle=r"x-Axis"
            self.canvas.ytitle=r"y-Axis"
            self.canvas.PlotTitle = r"Title"
            self.canvas.grid_status = True
            self.canvas.xaxis_style = 'linear'
            self.canvas.yaxis_style = 'linear'
            #self.fig.yscale = 'log'
            self.canvas.format_labels()
            self.canvas.ax.hold(True)
            FigureCanvas.__init__(self.canvas, self.canvas.fig)
            #self.fc = FigureCanvas(self.fig)
            FigureCanvas.setSizePolicy(self,
                QSizePolicy.Expanding,
                QSizePolicy.Expanding)
            FigureCanvas.updateGeometry(self)

Any help would be greatly appreciated. Answer: You can use the aspect parameter of imshow() to adjust the ratio between the height & width:

    from pylab import *
    a = np.zeros((100,10)) # height=100, width=10
    subplot(211)
    imshow(a) # ratio = 10
    subplot(212)
    imshow(a, aspect=0.1) # ratio = 1
    show()

but it will stretch the image. Or you can use xlim(), ylim() to set the range of the x-y axis.

    imshow(a)
    xlim(-50,50)

EDIT: imshow() will set the aspect property of the axes to "equal". You need to reset it before calling plot():

    self.ui.mplWidget.canvas.ax.set_aspect("auto")
    self.ui.mplWidget.canvas.ax.plot(self.xData,self.yData)
How can I use xml.sax module on an executable made with PyInstaller? Question: I want to have my application read a document using xml.sax.parse. Things work fine but when I move the executable to a Windows server 2008 machine things break down. I get an SAXReaderNotAvailable exception with "No parsers found" message. The setup I'm using to build the executable is: * 64 bit windows 7 * Python 2.7.2 32-bit * PyInstaller 1.5.1 Answer: SAX readers seems to be dynamically imported, so the static analysis can't detect them and they can't be embedded with application. To correct this, you'll have to be explicit to force PyInstaller to import those [hidden modules](http://www.pyinstaller.org/export/latest/tags/1.5.1/doc/Manual.html?format=raw#listing- hidden-imports). Try to add this to you .spec (thanks Velociraptors) file : hiddenimports = ['xml.sax.drivers', 'xml.sax.drivers2']
Download file from web in Python 3 Question: I am creating a program that will download a .jar (java) file from a web server, by reading the URL that is specified in the .jad file of the same game/application. I'm using Python 3.2.1 I've managed to extract the URL of the JAR file from the JAD file (every JAD file contains the URL to the JAR file), but as you may imagine, the extracted value is type() string. Here's the relevant function: def downloadFile(URL=None): import httplib2 h = httplib2.Http(".cache") resp, content = h.request(URL, "GET") return content downloadFile(URL_from_file) However I always get an error saying that the type in the function above has to be bytes, and not string. I've tried using the URL.encode('utf-8'), and also bytes(URL,encoding='utf-8'), but I'd always get the same or similar error. So basically my question is how to download a file from a server when the URL is stored in a string type? Answer: If you want to obtain the contents of a web page into a variable, just `read` the response of [**`urllib.request.urlopen`**](http://docs.python.org/dev/library/urllib.request.html#urllib.request.urlopen): import urllib.request ... url = 'http://example.com/' response = urllib.request.urlopen(url) data = response.read() # a `bytes` object text = data.decode('utf-8') # a `str`; this step can't be used if data is binary * * * The easiest way to download and save a file is to use the [**`urllib.request.urlretrieve`**](http://docs.python.org/dev/library/urllib.request.html#urllib.request.urlretrieve) function: import urllib.request ... # Download the file from `url` and save it locally under `file_name`: urllib.request.urlretrieve(url, file_name) import urllib.request ... # Download the file from `url`, save it in a temporary directory and get the # path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable: file_name, headers = urllib.request.urlretrieve(url) But keep in mind that `urlretrieve` is considered [legacy](http://docs.python.org/dev/library/urllib.request.html#legacy- interface) and might become deprecated (not sure why, though). So the most _correct_ way to do this would be to use the [**`urllib.request.urlopen`**](http://docs.python.org/dev/library/urllib.request.html#urllib.request.urlopen) function to return a file-like object that represents an HTTP response and copy it to a real file using [**`shutil.copyfileobj`**](http://docs.python.org/dev/library/shutil.html#shutil.copyfileobj). import urllib.request import shutil ... # Download the file from `url` and save it locally under `file_name`: with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file: shutil.copyfileobj(response, out_file) If this seems too complicated, you may want to go simpler and store the whole download in a `bytes` object and then write it to a file. But this works well only for small files. import urllib.request ... # Download the file from `url` and save it locally under `file_name`: with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file: data = response.read() # a `bytes` object out_file.write(data) * * * It is possible to extract `.gz` (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file. import urllib.request import gzip ... 
# Read the first 64 bytes of the file inside the .gz archive located at `url` url = 'http://example.com/something.gz' with urllib.request.urlopen(url) as response: with gzip.GzipFile(fileobj=response) as uncompressed: file_header = uncompressed.read(64) # a `bytes` object # Or do anything shown above using `uncompressed` instead of `response`.
regex result filter in python Question: I am working on a program with regexes, and I have to filter the match results but can't find out how. I want to match every red,xxxx or xxxx,red expression in my string and put the colors xxxx into a group. Here is my code:

    string = "blue,red red,yellow blue,yellow red,green purple red, ..."
    regex = re.compile('(?:red,(?P<redfirst>\w+)|(?P<othercolorfirst>\w+),red)')

Then I write:

    for match in regex.finditer(string):
        if match.group('redfirst')!= "None":
            print(match.group("redfirst"))

But I still get output like:

    None
    yellow
    green
    None

I don't want the 'None' results to appear; I have to skip them in a smart way if possible. Thanks for the help! **EDIT** None without quotes doesn't work either Answer:

    >>> import re
    >>> regex = re.compile('(?:red,(?P<redfirst>\w+)|(?P<othercolorfirst>\w+),red)')
    >>> string = "blue,red red,yellow blue,yellow red,green purple red, ..."
    >>> for matches in regex.finditer(string):
    ...     if matches.group('redfirst'):
    ...         print matches.group('redfirst')
    ...
    yellow
    green
    >>>
How to build debian package with CPack to execute setup.py? Question: Until now, my project had only **.cpp** files that were compiled into different binaries and I managed to configure **CPack** to build a proper **debian package** without any problems. Recently I wrote a couple of python applications and added them to the project, as well as some custom modules that I would also like to incorporate to the package. After writing a `setup.py` script, I'm wondering how to add these files to the **CPack** configuration in a way that `setup.py` gets executed automatically when the user installs the package on the system with `dpkg -i package.deb`. I'm struggling to find relevant information on how to configure CPack to install custom python applications/modules. Has anyone tried this? Answer: I figured out a way to do it but it's not very simple. I'll do my best to explain the procedure so please be patient. ## The idea of this approach is to use _postinst_ and _prerm_ to install and remove the python application from the system. In the **CMakeLists.txt** that defines the project, you need to state that **CPACK** is going to be used to generate a **.deb package**. There are some variables that need to be filled with info related to the package itself, but one named `CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA` is very important because it's used to specify the location of **postinst** and **prerm**, which are standard scripts of the _debian packaging system_ that are automatically executed by **dpkg** when the package is installed/removed. At some point of your **main** `CMakeLists.txt` you should have something like this:

    add_subdirectory(name_of_python_app)

    set(CPACK_COMPONENTS_ALL_IN_ONE_PACKAGE 1)
    set(CPACK_PACKAGE_NAME "fake-package")
    set(CPACK_PACKAGE_VENDOR "ACME")
    set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "fake-package - brought to you by ACME")
    set(CPACK_PACKAGE_VERSION "1.0.2")
    set(CPACK_PACKAGE_VERSION_MAJOR "1")
    set(CPACK_PACKAGE_VERSION_MINOR "0")
    set(CPACK_PACKAGE_VERSION_PATCH "2")
    SET(CPACK_SYSTEM_NAME "i386")

    set(CPACK_GENERATOR "DEB")
    set(CPACK_DEBIAN_PACKAGE_MAINTAINER "ACME Technology")
    set(CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.3.1-6), libgcc1 (>= 1:3.4.2-12), python2.6, libboost-program-options1.40.0 (>= 1.40.0)")
    set(CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA "${CMAKE_SOURCE_DIR}/name_of_python_app/postinst;${CMAKE_SOURCE_DIR}/name_of_python_app/prerm;")
    set(CPACK_SET_DESTDIR "ON")

    include(CPack)

Some of these variables are **optional**, but I'm filling them with info for educational purposes. Now, let's take a look at the scripts: **postinst**:

    #!/bin/sh
    # postinst script for fake_python_app

    set -e

    cd /usr/share/pyshared/fake_package
    sudo python setup.py install

**prerm**:

    #!/bin/sh
    # prerm script
    #
    # Removes all files installed by: ./setup.py install

    sudo rm -rf /usr/share/pyshared/fake_package
    sudo rm /usr/local/bin/fake_python_app

As you can see, the **postinst** script enters `/usr/share/pyshared/fake_package` and executes the **setup.py** that is lying there to install the app on the system. Where does this file come from and how does it end up there? This file is created by you and will be copied to that location when your package is installed on the system.
This action is configured in `name_of_python_app/CMakeLists.txt`:

install(FILES setup.py
        DESTINATION "/usr/share/pyshared/fake_package"
)
install(FILES __init__.py
        DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_python_app
        DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_module_1.py
        DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)
install(FILES fake_module_2.py
        DESTINATION "/usr/share/pyshared/fake_package/fake_package"
)

As you can probably tell, besides the python application I want to install there are also 2 custom python modules that I wrote which need to be installed as well. Below I describe the contents of the most important files:

**setup.py**:

#!/usr/bin/env python

from distutils.core import setup

setup(name='fake_package',
      version='1.0.5',
      description='Python modules used by fake-package',
      py_modules=['fake_package.fake_module_1', 'fake_package.fake_module_2'],
      scripts=['fake_package/fake_python_app']
)

**__init__.py**: an empty file.

**fake_python_app**: your python application that will be installed in /usr/local/bin

And that's pretty much it!
Problem with C library linked to Python interpreter, on Mac OS X
Question: I'm trying to use a C library that is supposed to be available from Python. The library compiles fine on Mac OS X (10.6.0, i386) with GCC (version: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659)). When I try to import the python module from python, I get the error:

$ python
Enthought Python Distribution -- www.enthought.com
Version: 7.0-2 (64-bit)

Python 2.7.1 |EPD 7.0-2 (64-bit)| (r271:86832, Dec 3 2010, 15:56:20)
[GCC 4.0.1 (Apple Inc. build 5488)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import mymodule
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule/__init__.py", line 2, in <module>
    from mymodule import *
ImportError: dlopen(/Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule/mymodule.so, 2): Symbol not found: _b_char
  Referenced from: /Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule/mymodule.so
  Expected in: flat namespace
 in /Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule/mymodule.so

To respond to Ned's questions, this is the output I get:

$ file $(python -c 'import sys;print(sys.executable)')
/Library/Frameworks/EPD64.framework/Versions/Current/bin/python: Mach-O 64-bit executable x86_64

$ python -c 'import sys;print(sys.maxsize > 2**32)'
True

$ cd /Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule
$ file mymodule.so
mymodule.so: Mach-O 64-bit bundle x86_64

$ otool -L mymodule.so
mymodule.so:
/usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.1)

$ file /usr/lib/libSystem.B.dylib
/usr/lib/libSystem.B.dylib: Mach-O universal binary with 3 architectures
/usr/lib/libSystem.B.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
/usr/lib/libSystem.B.dylib (for architecture i386): Mach-O dynamically linked shared library i386
/usr/lib/libSystem.B.dylib (for architecture ppc7400): Mach-O dynamically linked shared library ppc

$ file /usr/local/lib/libgcc_s.1.dylib
/usr/local/lib/libgcc_s.1.dylib: Mach-O universal binary with 4 architectures
/usr/local/lib/libgcc_s.1.dylib (for architecture i386): Mach-O dynamically linked shared library i386
/usr/local/lib/libgcc_s.1.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
/usr/local/lib/libgcc_s.1.dylib (for architecture ppc): Mach-O dynamically linked shared library ppc
/usr/local/lib/libgcc_s.1.dylib (for architecture ppc64): Mach-O 64-bit dynamically linked shared library ppc64

It seems that there's a common architecture, but I'm unsure whether that's true for the libraries referenced by otool -L -- those seem to have multiple versions.

Another thing I noticed is that when I make this package and compile it and then make the Python module, the "build" directory of the module (i.e. the directory at the same level as the setup.py file) has these Mac OS X _10.5_ files:

$ cd build/
$ ls
lib.macosx-10.5-x86_64-2.7  temp.macosx-10.5-x86_64-2.7

However, I am using Mac OS X 10.6. What controls which version is used to compile a Python package using distutils? I'm afraid this might be causing the problem.

Any idea what could be causing this? Thanks.
Answer: It is hard to know exactly what the problem is without more information but it appears you are using a 64-bit version of Python (from EPD). Is the library that you built also built as a 64-bit library? You should be able to tell by doing something like this: file $(python -c 'import sys;print(sys.executable)') # see archs that Python was built with python -c 'import sys;print(sys.maxsize > 2**32)' ; # see if running as 64-bit (false if 32-bit) cd /Library/Frameworks/EPD64.framework/Versions/7.0/lib/python2.7/site-packages/mymodule file mymodule.so # see what architectures the C extension module is built with otool -L mymodule.so # see what libraries are referenced by the C extension module file /path/to/lib1 # see what archs the referenced lib module(s) are built with There needs to be a common arch among all of them. Update: Based on your additional information, the most suspicious looking item is the library reference to `/usr/local/lib/libgcc_s.1.dylib`. That would seem to indicate you have a local copy of `gcc` or other compiler installed in `/usr/local`. Are you sure you aren't mixing compilers here? Try cleaning the build directory and explicitly setting `export CC=/usr/bin/gcc-4.0` before building your module. Or move that other compiler out of `/usr/local`. (The 10.5 thing should not be an issue. That just indicates that the EPD Python distribution was built to run on 10.5 and later systems.)
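One more data point worth collecting: the `macosx-10.5` suffix on the build directory comes from the platform tag baked into Python when *it* was built, not from the OS you are running. A quick way to inspect it on the EPD 2.7 in question:

import distutils.util
import sysconfig

print(distutils.util.get_platform())                          # e.g. 'macosx-10.5-x86_64'
print(sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET'))   # e.g. '10.5'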
Why aren't breakpoints working on the Swing Event Dispatch Thread in PyDev? Question: I'm using Jython, Swing, and PyDev (Eclipse). Breakpoints are not being hit on any code that runs on the EDT (aka AWT Event Queue?). This includes: * Functions that are invoked from a Swing event (eg JButton click) * Functions that, via a decorator, are run through `SwingUtilities.invokeLater()` (See the last example [here](http://wiki.python.org/jython/ADB%20SwingExamples#Decorator_to_add_a_function_to_SwingUtilities.invokeLater_donated_by_Alex_Gr.2BAPY-nholm). * Functions that registered as hooks to a Java package (socket class), that I'm using. **Swing event code to reproduce:** from javax.swing import JFrame, JButton def TestFunc(event): #breakpoints in this function don't work print "Hey" if __name__ == '__main__': mainWindow = JFrame('Test', defaultCloseOperation = JFrame.EXIT_ON_CLOSE, size = (1024, 600)) mainWindow.add(JButton("Hey", actionPerformed = TestFunc)) mainWindow.visible = True **invokeLater() code to reproduce:** from java.lang import Runnable from javax.swing import SwingUtilities import threading class foo(Runnable): def __init__(self, bar): self.bar = bar def run(self): #breakpoints in this function don't work print threading.currentThread() print self.bar if __name__ == '__main__': myFoo = foo(5) SwingUtilities.invokeLater(myFoo) Answer: It's actually a Jython issue. I.e.: in the code below, when TestFunc is called, the print from the trace_dispatch should be called, but it's not. So, the Jython tracing implementation is not calling the tracing function as it should in that situation. You can 'help' the PyDev debugger by calling `import pydevd;pydevd.settrace(suspend=False)` so that the debugger discovers about that frame (i.e.: in the start of TestFunc add that line of code). Note that if you don't pass the suspend=False, it'll act as a breakpoint in the code and will stop the execution at that line. import sys import threading def trace_dispatch(frame, event, arg): print frame.f_code.co_filename, frame.f_code.co_name sys.settrace(trace_dispatch) threading.settrace(trace_dispatch) from javax.swing import JFrame, JButton def TestFunc(event): print "Hey" if __name__ == '__main__': mainWindow = JFrame('Test', defaultCloseOperation = JFrame.EXIT_ON_CLOSE, size = (1024, 600)) mainWindow.add(JButton("Hey", actionPerformed = TestFunc)) mainWindow.visible = True
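To make the suggested workaround concrete, this is the first repro case with the extra line added (pydevd ships with PyDev's debugger; `suspend=False` registers the frame without pausing in it):

def TestFunc(event):
    import pydevd; pydevd.settrace(suspend=False)  # make this frame known to the debugger
    print "Hey"  # breakpoints here should now be hit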
Instantiate all functions in Python
Question: Is there a way to do something like this:

util.py contains:

def add
def subtract

instantiate.py contains:

def instantiate

where instantiate does:

import util
def instantiate():
    add = util.add
    subtract = util.subtract

So I can skip typing `util` every time I use a function, and I can instantiate them all using one function? I tried, but I get

NameError: global name 'util' is not defined
Answer: You can import specific functions from a module:

from util import add, subtract
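If you really do want a single function that pulls everything in (rather than the usual `from util import *`), something like this works, with the caveat that `globals()` is the namespace of the module where `instantiate` is *defined*, so the function must live in the module that wants the names:

import util

def instantiate():
    # Copy every public name from util into this module's namespace.
    for name in dir(util):
        if not name.startswith('_'):
            globals()[name] = getattr(util, name)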
Regular expressions - C# behaves differently than Perl / Python Question: Under Python: ttsiod@elrond:~$ python >>> import re >>> a='This is a test' >>> re.sub(r'(.*)', 'George', a) 'George' Under Perl: ttsiod@elrond:~$ perl $a="This is a test"; $a=~s/(.*)/George/; print $a; (Ctrl-D) George Under C#: using System; using System.Collections.Generic; using System.Text; using System.Threading; using System.Text.RegularExpressions; namespace IsThisACsharpBug { class Program { static void Main(string[] args) { var matchPattern = "(.*)"; var replacePattern = "George"; var newValue = Regex.Replace("This is nice", matchPattern, replacePattern); Console.WriteLine(newValue); } } } Unfortunately, C# prints: $ csc regexp.cs Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.5420 for Microsoft (R) .NET Framework version 3.5 Copyright (C) Microsoft Corporation. All rights reserved. $ ./regexp.exe GeorgeGeorge Is this a bug in the regular expression library of C# ? Why does it print "George" two times, when Perl and Python just print it once? Answer: In your example the difference seems to be in the semantics of the 'replace' function rather than in the regular expression processing itself. .net is doing a "global" replace, i.e. it is replacing all matches rather than just the first match. **Global Replace in Perl** (_notice the small 'g' at the end of the =~s line_) $a="This is a test"; $a=~s/(.*)/George/g; print $a; which produces GeorgeGeorge **Single Replace in .NET** var re = new Regex("(.*)"); var replacePattern = "George"; var newValue = re.Replace("This is nice", replacePattern, 1) ; Console.WriteLine(newValue); which produces George since it stops after the first replacement.
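One thing worth adding: Python's `re.sub` is also a global replace by default. It prints a single `George` here only because Python, unlike .NET, skips an empty match that is adjacent to a previous match, so the trailing empty match of `(.*)` is not replaced (a behaviour later Python versions changed). If you want first-match-only explicitly, use the `count` argument:

import re
print re.sub(r'(.*)', 'George', 'This is a test')           # George
print re.sub(r'(.*)', 'George', 'This is a test', count=1)  # George, explicitly first match only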
How do I get a reference for the current class object?
Question: In Python, how do I get a reference to the current class object within a class statement? Example:

def setup_class_members(cls, prefix):
    setattr(cls, prefix+"_var1", "hello")
    setattr(cls, prefix+"_var2", "goodbye")

class myclass(object):
    setup_class_members(cls, "coffee")   # How to get "cls"?

    def mytest(self):
        print(self.coffee_var1)
        print(self.coffee_var2)

x = myclass()
x.mytest()

>>> hello
>>> goodbye

Alternatives that I've written off are:

1. Use `locals()`: This gives a dict in a class statement that can be written to. This seems to work for classes, however the documentation tells you not to do this. (I might be tempted to go with this alternative if someone can assure me that this will continue to work for some time.)
2. Add members to the class object after the `class` statement: My actual application is to derive a PyQt4 `QWidget` class with dynamically created `pyqtProperty` class attributes. `QWidget` is unusual in that it has a custom metaclass. Very roughly, the metaclass compiles a list of `pyqtProperties` and stores it as an additional member. For this reason, properties that are added to the class after creation have no effect. An example to clear this up:

from PyQt4 import QtCore, QtGui

# works
class MyWidget1(QtGui.QWidget):
    myproperty = QtCore.pyqtProperty(int)

# doesn't work because QWidget's metaclass doesn't get to "compile" myproperty
class MyWidget2(QtGui.QWidget):
    pass
MyWidget2.myproperty = QtCore.pyqtProperty(int)

Please note that the above will work for most programming cases; my case just happens to be one of those unusual corner cases.
Answer: AFAIK there are two ways to do what you want:

1. Using a [metaclass](http://docs.python.org/reference/datamodel.html#customizing-class-creation); this will create your two variables at class creation time (which I think is what you want):

class Meta(type):
    def __new__(mcs, name, bases, attr):
        prefix = attr.get("prefix")
        if prefix:
            attr[prefix+"_var1"] = "hello"
            attr[prefix+"_var2"] = "goodbye"
        return type.__new__(mcs, name, bases, attr)

class myclass(object):
    __metaclass__ = Meta
    prefix = "coffee"

    def mytest(self):
        print(self.coffee_var1)
        print(self.coffee_var2)

2. Creating your two class variables at instantiation time:

class myclass(object):
    prefix = "coffee"

    def __init__(self):
        setattr(self.__class__, self.prefix+"_var1", "hello")
        setattr(self.__class__, self.prefix+"_var2", "goodbye")

    def mytest(self):
        print(self.coffee_var1)
        print(self.coffee_var2)

N.B.: I'm not sure what you want to achieve, because if you want to create dynamic variables depending on the `prefix` variable, why do you access them by fixed names in your `mytest` method? I hope that was just an example.
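For completeness, a third route is a class decorator. Note that, like adding members after the `class` statement, it runs only once the class object already exists, so it would not help the PyQt corner case from the question; for the ordinary case it is a clean alternative to a metaclass:

def setup_class_members(prefix):
    def decorate(cls):
        setattr(cls, prefix + "_var1", "hello")
        setattr(cls, prefix + "_var2", "goodbye")
        return cls
    return decorate

@setup_class_members("coffee")
class myclass(object):
    def mytest(self):
        print(self.coffee_var1)
        print(self.coffee_var2)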
'No module named' error in Python while importing outside /home directory Question: Probably this is a silly issue, but I haven't been able to figure it out. I'm getting `ImportError: No module named etree.ElementTree` when I write: #!/usr/bin/python3.2 import xml.etree.ElementTree as etree tree = etree.parse('feed.xml') root = tree.getroot() If I run this same script in `/home/` or `/home/<user>/`, it works fine but when my current working directory is `/home/<user>/<some_directory>/<some_subdirectory>`, I get the above mentioned error. What is happening here? Additional info: I'm running Ubuntu 11.04 and Python 3.2 Thanks in advance. Answer: Try running Python in the place where it works and the place where it doesn't work, and compare the values of `sys.path` when running Python in those two locations. My first guess would be that you have `$PYTHONSTARTUP` set to something that depends on the working directory.
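Another classic cause of directory-dependent import failures is shadowing: a file `xml.py` (or an `xml/` package) sitting in the "bad" directory will be found before the standard library, because the script's directory comes first on `sys.path`. A quick check:

import xml
print(xml.__file__)  # should point into the stdlib, not the current directory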
Convert sslsocket python code to ruby
Question: I have this Python code that connects to some software over a socket:

import socket, ssl

host = '127.0.0.1'
port = 8963
sert_key = '../keys/key.pem'

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, True)
sock.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, True)
sock.settimeout(30.5)
sock.connect((host, port))
sock = ssl.wrap_socket(sock, server_side=True, certfile=sert_key,
                       ssl_version=ssl.PROTOCOL_TLSv1)

cert = "hello"
cert = cert.encode('utf-8')
req = ('%08x'%len(cert))+cert
sock.sendall(req)
print sock.recv(4096)

Output: "OK", so it works. I tried to convert this code to Ruby, but it doesn't work:

require 'socket'
require 'openssl'

host = '127.0.0.1'
port = 8963
sert_key = '../keys/key.pem'

socket = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
address = Socket.pack_sockaddr_in(port, host)
socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
socket.setsockopt(Socket::SOL_TCP, Socket::TCP_NODELAY, true)
socket.connect(address)
#socket = TCPSocket.new(host, port) # not help also

ssl_context = OpenSSL::SSL::SSLContext.new(:TLSv1)
ssl_context.cert = OpenSSL::X509::Certificate.new(File.open(sert_key))
ssl_context.key = OpenSSL::PKey::RSA.new(File.open(sert_key))
ssl_context.verify_mode = OpenSSL::SSL::VERIFY_NONE
ssl_socket = OpenSSL::SSL::SSLSocket.new(socket, ssl_context)
ssl_socket.sync_close = true
ssl_socket.connect

sert = "hello"
sert = sert.force_encoding('UTF-8')
req = sprintf("%08x", sert.length) + sert
ssl_socket.write(req)
puts ssl_socket.read(4096)

Instead I get this error:

test1.rb:30:in `connect': SSL_connect returned=1 errno=0 state=SSLv3 read server hello B: bad message type (OpenSSL::SSL::SSLError)

Please help me port this code to Ruby. I don't understand what I'm missing (where the two versions differ).

P.S. Sorry, but the software I want to connect to is not something I can share for now :(
Answer: Seems like you should use something like this:

require 'socket'
require 'openssl'

host = '127.0.0.1'
port = 8963
sert_key = '../keys/key.pem'

socket = TCPSocket.new(host, port) # not help also

ssl_context = OpenSSL::SSL::SSLContext.new(:TLSv1)
ssl_context.cert = OpenSSL::X509::Certificate.new(File.open(sert_key))
ssl_context.key = OpenSSL::PKey::RSA.new(File.open(sert_key))
ssl_context.verify_mode = OpenSSL::SSL::VERIFY_NONE
ssl_socket = OpenSSL::SSL::SSLSocket.new(socket, ssl_context)
ssl_socket.sync_close = true
ssl_socket.accept

sert = "hello"
sert = sert.force_encoding('UTF-8')
req = sprintf("%08x", sert.length) + sert
ssl_socket.write(req)
puts ssl_socket.sysread(4096)

**EDIT:** Updated code yet another time.
Set up Scrapy framework to run on Python 2.7
Question: Is it possible to select which version of Python is used by Scrapy? I am running Scrapy on Ubuntu 10.04, which ships with Python 2.6. I have Python 2.7 installed on my machine and would like to take advantage of some of the features of this later version, but do not know how to set Scrapy to run on 2.7. When I type "python" into the terminal, it runs Python 2.6 ("python2.7" loads Python 2.7). Ideas?
Answer: The right way to do this is to organize things so that your special Python is in its own subdirectory that has a bin and lib subdirectory. Then you put that subdirectory in the `PATH` environment variable before the system binary directories.

For instance, let's say you have a `/python` directory and you put the python binary at `/python/bin/python`. Whether you do that by building python from scratch, copying files, or linking to existing files, is not important. They will all work. Note that it may not be enough to simply link to the existing python2.7 binary, since that will likely expect to find the Python libraries in /python/lib if you run it this way.

The second step is to run `export PATH=/python/bin:$PATH`. You can type that at the shell prompt to experiment, but longer term it should either go in a `~/.profile` file, or in a special shell script used to run your application, such as scrapy.

Note that a very popular way for Python developers to do this is to install and set up virtualenv, but if you aren't going to be changing environments every day, that is probably overkill. If you have this problem on a lot of machines then you might want a custom build of Python that you can use everywhere, such as [the portable python built with this script](https://github.com/wavetossed/pybuild).
Django exception bugging me, don't know how to debug it
Question: I recently upgraded to python2.7 and django1.3, and since then I keep getting this:

Unhandled exception in thread started by <bound method Command.inner_run of <django.core.management.commands.runserver.Command object at 0x109c57490>>
Traceback (most recent call last):
  File "/Users/ApPeL/.virtualenvs/myhunt/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 88, in inner_run
    self.validate(display_num_errors=True)
  File "/Users/ApPeL/.virtualenvs/myhunt/lib/python2.7/site-packages/django/core/management/base.py", line 249, in validate
    num_errors = get_validation_errors(s, app)
  File "/Users/ApPeL/.virtualenvs/myhunt/lib/python2.7/site-packages/django/core/management/validation.py", line 36, in get_validation_errors
    for (app_name, error) in get_app_errors().items():
  File "/Users/ApPeL/.virtualenvs/myhunt/lib/python2.7/site-packages/django/db/models/loading.py", line 146, in get_app_errors
    self._populate()
  File "/Users/ApPeL/.virtualenvs/myhunt/lib/python2.7/site-packages/django/db/models/loading.py", line 67, in _populate
    self.write_lock.release()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 137, in release
    raise RuntimeError("cannot release un-acquired lock")
RuntimeError: cannot release un-acquired lock

Your help would be greatly appreciated.
Answer: A usual first recommendation is to apply the latest updates to gevent or greenlet or whatever you use that is related to threads. The implementation of threading.Thread.start changed between Python 2.6 and 2.7. There are many recipes for how to start green... or green... with Django. Try any recent one for Python 2.7 and send a link to the one that causes the problem.

**Debugging:** Add the following lines to your `manage.py` to enable logging of thread start etc. to stderr:

import threading
setattr(threading, '__debug__', True)

Add the argument `verbose` to `django/db/loading.py` line 39 in order to also see which threads acquire and release the lock:

-        write_lock = threading.RLock(),
+        write_lock = threading.RLock(verbose=True),

Run the development server. With only one thread (no autoreload) you should see something like:

$ python manage.py runserver --noreload
Validating models...
MainThread: <_RLock owner='MainThread' count=1>.acquire(1): initial success
MainThread: <_RLock owner=None count=0>.release(): final release

Notes: `count=1 acquire(1)` -- the first acquire by a blocking lock; `owner=None count=0>.release()` -- the lock is currently being unlocked.

$ python manage.py runserver
Validating models...
Dummy-1: <_RLock owner=-1222960272 count=1>.acquire(1): initial success
Dummy-1: <_RLock owner=None count=0>.release(): final release

This is the same with autoreload. Models are validated by the child process. "Dummy-1" is a symbolic name of the thread. This can be repeated for more threads, but no thread should/can acquire the lock until it is released by the previous thread. We can continue according to the results.
How to clear stdout in Python subprocess?
Question: This snippet pings an IP address on Windows and reads one output line every 2 seconds. However, I found that the memory use of the ping.exe process grows slowly after it is started; if I deploy it to ping 1000 IPs in parallel, it soon hangs the server. I think this may be because of the stdout buffer. How can I clear stdout, or limit its size? Thanks!

...
proc = subprocess.Popen(['c:\windows\system32\ping.exe','127.0.0.1', '-l', '10000', '-t'],stdout=subprocess.PIPE, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
while True:
    time.sleep(2)
    os.kill(proc.pid, signal.CTRL_BREAK_EVENT)
    line = proc.stdout.readline()
Answer: ping is producing many more lines than you're reading due to the 2 second timeout between reads. I'd move the os.kill call into another thread, and use the main thread to read every line from `proc.stdout`:

import sys, os
import subprocess
import threading
import signal
import time

#Use ctrl-c and ctrl-break to terminate the script/ping
def sigbreak(signum, frame):
    import sys
    if proc.poll() is None:
        print('Killing ping...')
        proc.kill()
    sys.exit(0)

signal.signal(signal.SIGBREAK, sigbreak)
signal.signal(signal.SIGINT, sigbreak)

#executes in a separate thread
def run(pid):
    while True:
        time.sleep(2)
        try:
            os.kill(pid, signal.CTRL_BREAK_EVENT)
        except WindowsError:
            #quit the thread if ping is dead
            break

cmd = [r'c:\windows\system32\ping.exe', '127.0.0.1', '-l', '10000', '-t']
flags = subprocess.CREATE_NEW_PROCESS_GROUP
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, creationflags=flags)
threading.Thread(target=run, args=(proc.pid,)).start()

while True:
    line = proc.stdout.readline()
    if b'statistics' in line:
        #I don't know what you're doing with the ping stats.
        #I'll just print them.
        for n in range(4):
            encoding = getattr(sys.stdout, 'encoding', 'ascii')
            print(line.decode(encoding).rstrip())
            line = proc.stdout.readline()
        print()
mutagen: how to detect and embed album art in mp3, flac and mp4 Question: I'd like to be able to detect whether an audio file has embedded album art and, if not, add album art to that file. I'm using mutagen 1) Detecting album art. Is there a simpler method than this pseudo code: from mutagen import File audio = File('music.ext') test each of audio.pictures, audio['covr'] and audio['APIC:'] if doesn't raise an exception and isn't None, we found album art 2) I found this for embedding album art into an mp3 file: [How do you embed album art into an MP3 using Python?](http://stackoverflow.com/questions/409949/how-do-you-embed-album-art- into-an-mp3-using-python) How do I embed album art into other formats? EDIT: embed mp4 audio = MP4(filename) data = open(albumart, 'rb').read() covr = [] if albumart.endswith('png'): covr.append(MP4Cover(data, MP4Cover.FORMAT_PNG)) else: covr.append(MP4Cover(data, MP4Cover.FORMAT_JPEG)) audio.tags['covr'] = covr audio.save() Answer: Embed flac: from mutagen.flac import File, Picture, FLAC def add_flac_cover(filename, albumart): audio = File(filename) image = Picture() image.type = 3 if albumart.endswith('png'): mime = 'image/png' else: mime = 'image/jpeg' image.desc = 'front cover' with open(albumart, 'rb') as f: # better than open(albumart, 'rb').read() ? image.data = f.read() audio.add_picture(image) audio.save() For completeness, detect picture def pict_test(audio): try: x = audio.pictures if x: return True except Exception: pass if 'covr' in audio or 'APIC:' in audio: return True return False
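For MP3 files the cover goes into an ID3 `APIC` frame; a sketch in the same style as the FLAC helper above (`type=3` means front cover):

from mutagen.mp3 import MP3
from mutagen.id3 import ID3, APIC

def add_mp3_cover(filename, albumart):
    audio = MP3(filename, ID3=ID3)
    if audio.tags is None:
        audio.add_tags()   # file had no ID3 tag yet
    mime = 'image/png' if albumart.endswith('png') else 'image/jpeg'
    with open(albumart, 'rb') as f:
        data = f.read()
    audio.tags.add(APIC(encoding=3, mime=mime, type=3,
                        desc=u'front cover', data=data))
    audio.save()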
What does a . in an import statement in Python mean? Question: I'm looking over the code for Python's `multiprocessing` module, and it contains this line: from ._multiprocessing import win32, Connection, PipeConnection instead of from _multiprocessing import win32, Connection, PipeConnection the subtle difference being the period before `_multiprocessing`. What does that mean? Why the period? Answer: That's the new syntax for explicit [relative imports](http://www.python.org/dev/peps/pep-0328/). It means import from the current package.
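For example, given a package layout like this (names are illustrative), the dot refers to the package the importing module lives in:

# mypackage/
#     __init__.py
#     util.py      (defines a function called helper, say)
#     core.py
#
# inside core.py:
from . import util          # the sibling module
from .util import helper    # a name defined in util.py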
Python dynamic class names
Question:
> **Possible Duplicate:**
> [Dynamic loading of python modules](http://stackoverflow.com/questions/951124/dynamic-loading-of-python-modules)
> [python: How to add property to a class dynamically?](http://stackoverflow.com/questions/1325673/python-how-to-add-property-to-a-class-dynamically)

I have a dictionary mapping file names to class names. How can I import these classes by name, and how can I create instances of them? Example:

classNames = { 'MCTest':MCTestClass}

I want to import MCTest and create the MCTestClass.
Answer: You have to use the `__import__` function: <http://docs.python.org/library/functions.html#__import__>

Example from the doc page:

>>> import sys
>>> name = 'foo.bar.baz'
>>> __import__(name)
<module 'foo' from ...>
>>> baz = sys.modules[name]
>>> baz
<module 'foo.bar.baz' from ...>

To instantiate a class from baz you should be able to do:

>>> SomeClass = getattr(baz, 'SomeClass')
>>> obj = SomeClass()
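Putting those pieces together for the question's dictionary -- a sketch, with the class name stored as a string and the `load_class` helper being hypothetical:

def load_class(module_name, class_name):
    module = __import__(module_name)
    # __import__('foo.bar') returns foo, so walk down to the leaf module
    for part in module_name.split('.')[1:]:
        module = getattr(module, part)
    return getattr(module, class_name)

classNames = {'MCTest': 'MCTestClass'}
for module_name, class_name in classNames.items():
    cls = load_class(module_name, class_name)
    instance = cls()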
In Python, how do you find the index of the first value greater than a threshold in a sorted list?
Question: In Python, how do you find the index of the first value greater than a threshold in a sorted list?

I can think of several ways of doing this (linear search, hand-written dichotomy, ...), but I'm looking for a clean and reasonably efficient way of doing it. Since it's probably a pretty common problem, I'm sure experienced SOers can help!

Thanks!
Answer: Have a look at [bisect](http://docs.python.org/library/bisect.html).

import bisect

l = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
bisect.bisect(l, 55) # returns 7

Compare it with linear search:

timeit bisect.bisect(l, 55)
# 375ns

timeit next((i for i,n in enumerate(l) if n > 55), len(l))
# 2.24us

timeit next((l.index(n) for n in l if n > 55), len(l))
# 1.93us
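One subtlety to remember: `bisect.bisect` is an alias for `bisect_right`, which returns the first index whose value is strictly greater than the needle; use `bisect_left` if you want "greater than or equal":

import bisect
l = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
bisect.bisect(l, 49)        # 7 -> first value >  49 sits at index 7
bisect.bisect_left(l, 49)   # 6 -> first value >= 49 sits at index 6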
Control a specific pin on the Arduino Uno board using pyserial
Question: I have Python code that sends a pattern that a light has to blink in (say e.g. 101010; the pattern may vary every time the code is run). While the board is executing this infinitely, I want an interrupt (again sent by the Python code) to save the present condition of the lights (say it is at a 1 of the sequence) and perform a specific task, like turning off the lights for 10 seconds, and then resume the sequence. One way of doing this is to interrupt the program by driving the interrupt pin high. The question is: can this driving of the pin high/low be controlled by pyserial? A simple pseudo code would be:

PYTHON part of the code:

Read the sequence:
Send the sequence to the arduino board using pyserial.
while(1)
{
    Run a timer for 15 seconds.
    When the timer overflows, interrupt the arduino.
}

ARDUINO part of the code:

Read the sequence
while (1)
{
    keep repeating the sequence on the LED.
}

// if interrupted on pin2
// assuming pin2 has the interrupt procedure
// Pyserial has to enable the interrupt WITHOUT using a switch for enabling the pin.
ISR
{
    Save the present state of execution.
    Turn off the LED.
}

# FOR A BETTER UNDERSTANDING: I built small programs to show the doubts I had.

CODE FOR THE ARDUINO:

int ledpin1 = 13;
int speedy;
int patterns;

void setup()
{
  Serial.begin(9600);
  Serial.print("Program Initiated: \n");
  pinMode(ledpin1,OUTPUT);
  //activate the blackout ISR when a interrupt is achieved at a certain pin. In this case pin2 of the arduino
  attachInterrupt(0,blackout,CHANGE);
}

void loop()
{
  if (Serial.available()>1)
  {
    Serial.print("starting loop \n");
    patterns = Serial.read();
    patterns = patterns-48;
    speedy = Serial.read();
    speedy = (speedy-48)*1000;
    while(1)
    {
      patterns = !(patterns);
      Serial.print(patterns);
      digitalWrite(ledpin1,patterns);
      delay(speedy);
    }
  }
}

/*
void blackout()
{
  // ***Save the present state of the LED(on pin13)***
  Serial.print ("The turning off LED's for performing the python code\n");
  digitalWrite(ledpin,LOW);
  //wait for the Python code to complete the task it wants to perform,
  //so got to delay the completion of the ISR
  delay(2000);// delay the whole thing by 2 seconds
  //***Continue with the while loop by setting the condition of the light to the saved condition.***
}
*/

==================================================================================

CODE FOR THE PYTHON FRONT END:

import serial
import time

patterns=1
speedy=1

ser = serial.Serial()
ser.setPort("COM4")
ser.baudrate = 9600
ser.open()

def main():
    if (ser.isOpen()):
        #while(1):
        ser.write(patterns)
        ser.write(speedy)
        blackoutfunc()
        #ser.close()

def blackoutfunc():
    while(1):
        time.sleep(2)
        print "Performing operations as required"

===============================================================================

Now, the questions I had:

1) Is there a way to activate the "blackout ISR" depending on the state of a pin (in this case pin 2, which is the INT0 pin) without using a physical switch on the pin? That is, the pin state has to be manipulated from software.

2) Is it possible to perform the operations mentioned in the comments of the blackout function?

3) In the Python code, is it possible to send the data (i.e. patterns, speedy) only once and have the Arduino repeat the pattern infinitely without the data being sent again via `serial.write` -- hence avoiding the `while(1)` loop after `ser.isOpen()`?
Answer: Have a look at this: <https://github.com/ajfisher/arduino-command-server> It's something I pulled together on the Arduino side to issue arbitrary commands like switch a pin high / low and set PWM levels etc. It works over both serial and network though it's a touch buggy on the network side at the moment. To use it, put the code on your arduino then you just write a python script (or any other language that can use a serial connection) to connect over the serial connection and then tell it what you want to do eg DIGW 1 HIGH etc Also have a look at: <https://github.com/ajfisher/arduino-django-visualiser> which is where I use a variation of this library to control some LEDs based on some things going on in Django - it's more heavily python based.
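On question 1, one more avenue worth knowing about: pyserial can drive the RTS and DTR modem-control lines of the port directly, and if one of those lines is wired to the Arduino's INT0 pin you get a purely software-controlled trigger. A minimal sketch, with the caveat that the logic levels you actually see depend on your USB-serial adapter:

import time
import serial

ser = serial.Serial('COM4', 9600)
ser.setRTS(True)    # drive the RTS line to one level...
time.sleep(0.01)
ser.setRTS(False)   # ...and back, so a CHANGE interrupt fires on the wired pin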
Python regex: how to replace each instance of an occurrence with a different value?
Question: Suppose I have this string:

s = "blah blah blah"

Using Python regex, how can I replace each instance of "blah" with a different value? (e.g. I have a list of values `v = ("1", "2", "3")`.)
Answer: You could use a [`re.sub` callback](http://docs.python.org/library/re.html#re.sub):

import re

def callback(match):
    return next(callback.v)
callback.v = iter(('1','2','3'))

s = "blah blah blah"
print(re.sub(r'blah', callback, s))

yields

1 2 3
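The same trick reads a little shorter as a closure over an iterator (it raises StopIteration if there are more matches than replacement values):

import re

s = "blah blah blah"
v = iter(("1", "2", "3"))
print(re.sub(r'blah', lambda m: next(v), s))  # 1 2 3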
Django South ignoring my custom rules
Question: I am using a custom Django model field and widget to render a Google Map widget in my admin; I also want to use South with my project to handle database migrations. However, after much effort I am unable to generate a custom South rule that fits. These are my custom model field and the last of the many introspection rules that I've tried:

class GoogleMapMarkerField(models.CharField):
    __metaclass__ = models.SubfieldBase
    description = _('Un marcador de Google Maps')
    widget = GoogleMapMarkerWidget

    def __init__(self, center, *args, **kwargs):
        kwargs['max_length'] = 100
        kwargs['help_text'] = _('Arrastre el cursor en el mapa para seleccionar el punto')
        self.center = center
        super(GoogleMapMarkerField, self).__init__(*args, **kwargs)

    def formfield(self, **kwargs):
        defaults = {
            'center': self.center,
            'form_class':GoogleMapMarkerFormField
        }
        defaults.update(kwargs)
        return super(GoogleMapMarkerField, self).formfield(**defaults)

    def to_python(self, value):
        if isinstance(value, GoogleMapMarker):
            return value
        if isinstance(value, list):
            return GoogleMapMarker(*map(float, value))
        elif isinstance(value, basestring):
            try:
                return GoogleMapMarker(*map(float, value.split(',')))
            except ValueError:
                pass

    def get_prep_value(self, value):
        return '%f,%f' % (value.latitude, value.longitude)

add_introspection_rules([
    (
        (GoogleMapMarkerField, ),
        [],
        {
            'center': ('center', {}),
        }
    )
], ["^website\.fields\.GoogleMapMarkerField"])

And this is the traceback that I'm getting:

Traceback (most recent call last):
  File "manage.py", line 14, in <module>
    execute_manager(settings)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
    utility.execute()
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/django/core/management/base.py", line 220, in execute
    output = self.handle(*args, **options)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/management/commands/schemamigration.py", line 97, in handle
    old_orm = last_migration.orm(),
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/utils.py", line 62, in method
    value = function(self)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/migration/base.py", line 422, in orm
    return FakeORM(self.migration_class(), self.app_label())
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/orm.py", line 46, in FakeORM
    _orm_cache[args] = _FakeORM(*args)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/orm.py", line 125, in __init__
    self.models[name] = self.make_model(app_label, model_name, data)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/orm.py", line 321, in make_model
    field = self.eval_in_context(code, app, extra_imports)
  File "/home/armonge/workspace/env/lib/python2.7/site-packages/south/orm.py", line 236, in eval_in_context
    return eval(code, globals(), fake_locals)
  File "<string>", line 1, in <module>
TypeError: __init__() takes at least 2 arguments (1 given)
Answer: `center` isn't a keyword argument, it's a positional argument. You shouldn't use positional arguments with South; it doesn't understand them.
(See [Custom Fields: Keyword Arguments](http://south.aeracode.org/docs/tutorial/part4.html#keyword-arguments)). You could solve this by providing `center` with a default value (`center=None` would be fine) and then following the example code at the link for defining the keyword name as passed to `__init__`, the name as stored in the database, and a dictionary of options (which may be blank, but setting the default value there too helps).
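A sketch of what the fixed field could look like; the `'default'` entry in the rule's options dict is my reading of the South docs, so verify it against the tutorial linked above:

class GoogleMapMarkerField(models.CharField):
    __metaclass__ = models.SubfieldBase

    def __init__(self, center=None, *args, **kwargs):   # keyword argument with a default
        kwargs['max_length'] = 100
        self.center = center
        super(GoogleMapMarkerField, self).__init__(*args, **kwargs)

add_introspection_rules([
    (
        (GoogleMapMarkerField, ),
        [],
        {'center': ('center', {'default': None})},   # kwarg -> (attribute, options)
    )
], ["^website\.fields\.GoogleMapMarkerField"])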
print redirected to file SOMETIMES results in incomplete printout in python Question: I want to save list to a file so I cycle through it and write it to file. Everything's fine. But SOMETIMES(!?!?) the list is not written entirely, it stops rendering in the middle of the item. No error is raised, it silently continues executing rest of the code. I've tried several ways to write it out, several versions of python (2.4, 2.5, 2.7) and it's all the same. It sometimes work, sometimes not. When it's printed out to the terminal window, not to the file, it's working properly without glitches. Am I missing something? this is it ... from bpnn import * ... # save input weights for later use: writewtsi = open("c:/files/wtsi.txt", "w") for i in range(net.ni): print>>writewtsi, net.wi[i] bpnn is neural network module from here: <http://python.ca/nas/python/bpnn.py> Answer: Close the file when done with all the writes to ensure any write-caching is flushed to the drive with: writewtsi.close()
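A `with` block (Python 2.6+) achieves the same thing and also flushes and closes the file if an exception interrupts the loop midway, which matches the "stops in the middle of an item" symptom:

with open("c:/files/wtsi.txt", "w") as writewtsi:
    for i in range(net.ni):
        print >> writewtsi, net.wi[i]
# the file is flushed and closed here, even if an exception was raised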
Python: How can I parse { apple: "1" , orange: "2" } into Dictionary?
Question: I have received some output that looks like this:

{
    orange: '2',
    apple: '1',
    lemon: '3'
}

I know it is not standard JSON format, but is it still possible to parse it into a Python dictionary? Do **orange, apple, lemon** have to be quoted?

Thank you
Answer: This is valid [YAML](http://en.wikipedia.org/wiki/YAML) (a superset of JSON). Use [PyYAML](http://pyyaml.org/) to parse it:

>>> s = '''
... {
...     orange: '2',
...     apple: '1',
...     lemon: '3'
... }'''
>>> import yaml
>>> yaml.load(s)
{'orange': '2', 'lemon': '3', 'apple': '1'}

One more thing: if there is a tab character inside the string s, remove it before parsing, since YAML does not allow tabs for indentation:

s = s.replace('\t','')

Otherwise, the string above cannot be parsed.
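If you'd rather avoid a PyYAML dependency, a quick-and-dirty fallback is to patch the string into real JSON first. This sketch assumes the keys are simple words and the values contain no quotes or colons:

import json
import re

s = "{ orange: '2', apple: '1', lemon: '3' }"
# Quote the bare keys, then swap single quotes for double quotes.
fixed = re.sub(r'(\w+)\s*:', r'"\1":', s).replace("'", '"')
print(json.loads(fixed))  # {u'orange': u'2', u'apple': u'1', u'lemon': u'3'}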
for loop ignores some elements of my list
Question:
> **Possible Duplicate:**
> [Remove items from a list while iterating in Python](http://stackoverflow.com/questions/1207406/remove-items-from-a-list-while-iterating-in-python)

I want to remove all the dates in a list which are later than a given date. I can't see why my method only removes some items of the list. Here's what I've got:

import datetime
import numpy as np
import os

list_dates = [datetime.date(2012,1,3), datetime.date(2012,1,1), datetime.date(2012,1,5), datetime.date(2013,1,3), datetime.date(2013,1,1)]

for item in list_dates:
    if item > datetime.date(2012,1,1):
        list_dates.remove(item)

print list_dates

returns

[datetime.date(2012, 1, 1), datetime.date(2013, 1, 3)]
Answer: James and GWW are right: you can't modify the list while you are iterating over it. Instead, copy the items you want to keep into a new list (note the `<=`: the original removes dates strictly later than the cutoff, so the cutoff date itself is kept):

list_dates = [datetime.date(2012,1,3), datetime.date(2012,1,1), datetime.date(2012,1,5), datetime.date(2013,1,3), datetime.date(2013,1,1)]
new_list_dates = []

for item in list_dates:
    if item <= datetime.date(2012,1,1):
        new_list_dates.append(item)  # keep this item by moving it to the new list

print new_list_dates
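The idiomatic spelling of this filter is a list comprehension, which sidesteps the mutate-while-iterating trap entirely:

cutoff = datetime.date(2012, 1, 1)
list_dates = [d for d in list_dates if d <= cutoff]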
python asynchronization
Question: Imagine a text file of 20 mb. I am reading it char by char and extracting the useful information. I actually have 2 main functions: one reads the file and the second extracts the info. Something like this:

def reader(path):
    f = open(path, 'r')
    source = f.read()
    f.close()
    while True:
        # here is where I read char by char and call the function extractor

def extractor(s):
    # here I extract the useful information

Now, my goal is to continue reading while the extractor is working. So basically, my question is: what is the appropriate way to accomplish this?
Answer: You can use producer/consumer threads. The threads can be synchronized using a `Queue.Queue`.

**EDIT**: an example of a producer/consumer system:

from threading import Thread
from Queue import Queue

def produce(queue, n_items):
    for d in range(n_items):
        queue.put(d)
        print "put {0} in queue".format(d)

def consume(queue, n_items):
    d = 0
    while d != n_items -1: # You need some sort of stop condition
        d = queue.get()
        print "got {0} from queue".format(d)

def start_producer_and_consumer(wait):
    q = Queue()
    consumer_thread = Thread(target = consume, args = (q, 10))
    producer_thread = Thread(target = produce, args = (q, 10))
    producer_thread.start()
    consumer_thread.start()
    if wait:
        producer_thread.join()
        consumer_thread.join()

if __name__ == '__main__':
    start_producer_and_consumer(True)

As you will see if you execute this, everything will be consumed in the correct order.
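A common refinement of that example is a sentinel object, so the consumer's stop condition doesn't depend on knowing the item count in advance:

def produce(queue, items):
    for d in items:
        queue.put(d)
    queue.put(None)          # sentinel: nothing more is coming

def consume(queue):
    while True:
        d = queue.get()
        if d is None:        # producer signalled completion
            break
        print "got {0} from queue".format(d)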
How should I handle working with celeryd_multi from code?
Question: So far, I've been working only with `python manage.py celeryd`, starting it like this:

`python manage.py celeryd -l info --settings=settings`

The code from my view does this:

BinaryExecTask.delay(request.POST["binary_path"])

And the code from my `settings.py` is this:

import djcelery
djcelery.setup_loader()

BROKER_BACKEND = "djkombu.transport.DatabaseTransport"

#celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"

and it will execute some binaries in the background. The thing is, some of the binaries take a pretty short time to run, while others may take up to half an hour. Working with `celeryd`, all my tasks are blocked until the current one finishes its execution.

I saw [here](http://ask.github.com/celery/reference/celery.bin.celeryd_multi.html#examples) some examples of starting celeryd_multi, but running:

`python manage.py celeryd_multi start 3 --settings=settings -l info`

gives this error:

celeryd-multi v2.3.1
> Starting nodes...
    > celery1.x: Traceback (most recent call last):
  File "manage.py", line 14, in <module>
    execute_manager(settings)
  File "c:\code\python27\lib\site-packages\django-1.3-py2.7.egg\django\core\management\_ line 438, in execute_manager
    utility.execute()
  File "c:\code\python27\lib\site-packages\django-1.3-py2.7.egg\django\core\management\_ line 379, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "c:\code\python27\lib\site-packages\django_celery-2.3.3-py2.7.egg\djcelery\manage s\celeryd_multi.py", line 22, in run_from_argv
    ["%s %s" % (argv[0], argv[1])] + argv[2:])
  File "c:\code\python27\lib\site-packages\celery-2.3.1-py2.7.egg\celery\bin\celeryd_mul 172, in execute_from_commandline
    self.commands[argv[0]](argv[1:], cmd)
  File "c:\code\python27\lib\site-packages\celery-2.3.1-py2.7.egg\celery\bin\celeryd_mul 205, in start
    retcode = self.waitexec(argv)
  File "c:\code\python27\lib\site-packages\celery-2.3.1-py2.7.egg\celery\bin\celeryd_mul 354, in waitexec
    pipe = Popen(argstr, env=self.env)
  File "c:\code\python27\lib\subprocess.py", line 672, in __init__
    errread, errwrite)
  File "c:\code\python27\lib\subprocess.py", line 882, in _execute_child
    startupinfo)
WindowsError: [Error 2] The system cannot find the file specified

The `celeryd-multi start 3 -c 3` throws the same error. What should I do so that I can successfully start a celery instance that will allow me to run more tasks in parallel? Also, would I need to do something different in my view?

EDIT: some debugging led me here ( site-packages\celery-2.3.1-py2.7.egg\celery\bin\celeryd_multi.py(354)waitexec ):

351         def waitexec(self, argv, path=sys.executable):
352             args = " ".join([path] + list(argv))
353  ->         argstr = shlex.split(args.encode("utf-8"))
354             pipe = Popen(argstr, env=self.env)
(Pdb) p argstr
['c:codepython27python.exe', 'manage.py', 'celeryd_detach', '-l', 'info', '--pidfile=celeryd@1.pid', '-n', 'celery1.x', '--logfile=celeryd@1.log']
(Pdb) p Popen(argstr, env=self.env)
*** WindowsError: WindowsError(2, 'The system cannot find the file specified')
(Pdb)

So, as we can see, the path to Python gets destroyed :). What should I do next?

EDIT2: I opened an issue [here](https://github.com/ask/celery/issues/472)
Did you try to provide full path to python executable in command line like C:\code\python27\bin\python.exe manage.py celeryd_multi start 3 --settings=settings -l info Also, use full paths for python file can be usefull C:\code\python27\bin\python.exe C:\path\to\your\project\manage.py celeryd_multi start 3 --settings=settings -l info
Iterating a loop with a pause Question: I am working to integrate with an API that has a limit on the number of requests per second. Is there a way, when running a `for` loop in python to delay each cycle? Conceptually, something like -- def function(request): for x in [a,b,c,d,...]: do something wait y seconds Thank you. Answer: import time ... time.sleep(5) This will sleep for 5 seconds. See <http://docs.python.org/library/time.html#time.sleep>
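Dropped into the question's skeleton (names taken from the pseudo code; `do_something` and `y` are placeholders for your API call and delay):

import time

def function(request):
    for x in [a, b, c, d]:
        do_something(x)   # hypothetical per-item API request
        time.sleep(y)     # wait y seconds before the next request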
Python pytz Converting a timestamp (string format) from one timezone to another
Question: I have a timestamp with timezone information in string format and I would like to convert this to display the correct date/time using my local timezone.

So for e.g. I have

timestamp1 = 2011-08-24 13:39:00 +0800

and I would like to convert this to, say, timezone offset +1000 to display

timestamp2 = 2011-08-24 15:39:00 +1000

I have tried using pytz but couldn't find many examples showing how to use the offset information. One other link that I found on stackoverflow which depicts this exact problem is [here](http://stackoverflow.com/questions/79797/how-do-i-convert-local-time-to-utc-in-python). I was hoping there was some better way I could handle this using pytz. Thanks for all suggestions in advance :).

**UPDATE** Thanks Cixate. I just found the solution, which is very similar to yours. Found these links helpful - [LINK1](http://stackoverflow.com/questions/79797/how-do-i-convert-local-time-to-utc-in-python) and [LINK2](http://stackoverflow.com/questions/6729902/python-dateutil-parser-fails)

Posting the solution for everyone's benefit:

from datetime import datetime
import sys, os
import pytz
from dateutil.parser import parse

datestr = "2011-09-09 13:20:00 +0800"
dt = parse(datestr)
print dt

localtime = dt.astimezone(pytz.timezone('Australia/Melbourne'))
print localtime.strftime("%Y-%m-%d %H:%M:%S")

2011-09-09 15:20:00
Answer: [datetime.astimezone](http://docs.python.org/library/datetime.html#datetime.datetime.astimezone) will do your basic conversion once you have a datetime object. If you're trying to get a datetime object from a string, pip install [python-dateutil](http://labix.org/python-dateutil) and it's as simple as:

>>> from dateutil.parser import parse
>>> from dateutil.tz import tzoffset
>>> dt = parse('2011-08-24 13:39:00 +0800')
datetime.datetime(2011, 8, 24, 13, 39, tzinfo=tzoffset(None, 28800))
>>> dt.astimezone(tzoffset(None, 3600))
datetime.datetime(2011, 8, 24, 6, 39, tzinfo=tzoffset(None, 3600))
Python to parse non-standard XML file
Question: My input file is actually multiple XML files appended into one file. (It's from [Google Patents](http://commondatastorage.googleapis.com/patents/grantbib/2011/ipgb20110104_wk01.zip).) It has the structure below:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE us-patent-grant SYSTEM "us-patent-grant.dtd" [ ]>
<root_node>...</root_node>

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE us-patent-grant SYSTEM "us-patent-grant.dtd" [ ]>
<root_node>...</root_node>

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE us-patent-grant SYSTEM "us-patent-grant.dtd" [ ]>
<root_node>...</root_node>

Python's xml.dom.minidom can't parse this non-standard file. What's a better way to parse it? I am also not sure whether the code below has good performance or not:

for line in infile:
    if line == '<?xml version="1.0" encoding="UTF-8"?>':
        xmldoc = minidom.parse(XMLstring)
    else:
        XMLstring += line
Answer: Here's my take on it, using a generator and `lxml.etree`. Extracted information is purely for example:

import urllib2, os, zipfile
from lxml import etree

def xmlSplitter(data,separator=lambda x: x.startswith('<?xml')):
    buff = []
    for line in data:
        if separator(line):
            if buff:
                yield ''.join(buff)
                buff[:] = []
        buff.append(line)
    yield ''.join(buff)

def first(seq,default=None):
    """Return the first item from sequence, seq or the default(None) value"""
    for item in seq:
        return item
    return default

datasrc = "http://commondatastorage.googleapis.com/patents/grantbib/2011/ipgb20110104_wk01.zip"
filename = datasrc.split('/')[-1]

if not os.path.exists(filename):
    with open(filename,'wb') as file_write:
        r = urllib2.urlopen(datasrc)
        file_write.write(r.read())

zf = zipfile.ZipFile(filename)
xml_file = first([ x for x in zf.namelist() if x.endswith('.xml')])
assert xml_file is not None

count = 0
for item in xmlSplitter(zf.open(xml_file)):
    count += 1
    if count > 10: break
    doc = etree.XML(item)
    docID = "-".join(doc.xpath('//publication-reference/document-id/*/text()'))
    title = first(doc.xpath('//invention-title/text()'))
    assignee = first(doc.xpath('//assignee/addressbook/orgname/text()'))
    print "DocID:    {0}\nTitle:    {1}\nAssignee: {2}\n".format(docID,title,assignee)

Yields:

DocID: US-D0629996-S1-20110104
Title: Glove backhand
Assignee: Blackhawk Industries Product Group Unlimited LLC

DocID: US-D0629997-S1-20110104
Title: Belt sleeve
Assignee: None

DocID: US-D0629998-S1-20110104
Title: Underwear
Assignee: X-Technology Swiss GmbH

DocID: US-D0629999-S1-20110104
Title: Portion of compression shorts
Assignee: Nike, Inc.

DocID: US-D0630000-S1-20110104
Title: Apparel
Assignee: None

DocID: US-D0630001-S1-20110104
Title: Hooded shirt
Assignee: None

DocID: US-D0630002-S1-20110104
Title: Hooded shirt
Assignee: None

DocID: US-D0630003-S1-20110104
Title: Hooded shirt
Assignee: None

DocID: US-D0630004-S1-20110104
Title: Headwear cap
Assignee: None

DocID: US-D0630005-S1-20110104
Title: Footwear
Assignee: Vibram S.p.A.