content: string (85 to 101k characters)
title: string (0 to 150 characters)
question: string (15 to 48k characters)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 to 137 characters)
Q: Using os.execvp in Python I have a question about using os.execvp in Python. I have the following bit of code that's used to create a list of arguments: args = [ "java" , classpath , "-Djava.library.path=" + lib_path() , ea , "-Xmx1000m" , "-server" , "code_swarm" , params ] When I output a string using " ".join(args) and paste that into my shell prompt, the JVM launches fine, and everything works. Everything works if I use os.system(" ".join(args)) in my Python script, too. But the following bit of code does not work: os.execvp("java", args) I get the following error: Unrecognized option: -classpath [and then the classpath I created, which looks okay] Could not create the Java virtual machine. So what gives? Why does copying/pasting into the shell or using os.system() work, but not os.execvp()? A: If your "classpath" variable contains for instance "-classpath foo.jar", it will not work, since it is thinking the option name is "-classpath foo.jar". Split it in two arguments: [..., "-classpath", classpath, ...]. The other ways (copy and paste and system()) work because the shell splits the command line at the spaces (unless they are escaped or quoted). The command line is in fact passed down to the called program as an array (unlike on Windows), and the JVM is expecting to find an element with only "-classpath" followed by another element with the classpath. You can see the difference for yourself by calling the following small Python script instead of the JVM: #!/usr/bin/python import sys print sys.argv A: Make sure you aren't relying on shell expansion in your classpath. E.g. "~/my.jar" will get expanded by the shell in an os.system call, but not, I believe in an os.execvp call.
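A minimal sketch of the fix described in the accepted answer above, with a hypothetical classpath and main class: exec-style calls pass the list straight through as argv with no shell word-splitting, so an option and its value must be separate list elements.

import os

classpath = "lib/foo.jar:lib/bar.jar"  # hypothetical classpath value

# Broken: the option and its value travel as ONE argv element, so the
# JVM sees an option literally named "-classpath lib/foo.jar:lib/bar.jar".
# bad_args = ["java", "-classpath " + classpath, "Main"]

# Working: option name and value as separate elements, exactly as the
# shell would have split them.
args = ["java", "-classpath", classpath, "Main"]

os.execvp("java", args)  # replaces the current process; never returns on success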
Using os.execvp in Python
I have a question about using os.execvp in Python. I have the following bit of code that's used to create a list of arguments: args = [ "java" , classpath , "-Djava.library.path=" + lib_path() , ea , "-Xmx1000m" , "-server" , "code_swarm" , params ] When I output a string using " ".join(args) and paste that into my shell prompt, the JVM launches fine, and everything works. Everything works if I use os.system(" ".join(args)) in my Python script, too. But the following bit of code does not work: os.execvp("java", args) I get the following error: Unrecognized option: -classpath [and then the classpath I created, which looks okay] Could not create the Java virtual machine. So what gives? Why does copying/pasting into the shell or using os.system() work, but not os.execvp()?
[ "If your \"classpath\" variable contains for instance \"-classpath foo.jar\", it will not work, since it is thinking the option name is \"-classpath foo.jar\". Split it in two arguments: [..., \"-classpath\", classpath, ...].\nThe other ways (copy and paste and system()) work because the shell splits the command line at the spaces (unless they are escaped or quoted). The command line is in fact passed down to the called program as an array (unlike on Windows), and the JVM is expecting to find an element with only \"-classpath\" followed by another element with the classpath.\nYou can see the difference for yourself by calling the following small Python script instead of the JVM:\n#!/usr/bin/python\nimport sys\nprint sys.argv\n\n", "Make sure you aren't relying on shell expansion in your classpath. E.g. \"~/my.jar\" will get expanded by the shell in an os.system call, but not, I believe in an os.execvp call.\n" ]
[ 11, 0 ]
[]
[]
[ "exec", "python", "shell" ]
stackoverflow_0000210978_exec_python_shell.txt
Q: Starting a new database driven python web application would you use a javascript widget framework? If so which framework? I am starting a new web application project. I want to use python as I am using it at my bread-and-butter-job. However I don't want to reinvent the wheel. Some things I have thought about: AJAX would be nice if it’s not too much of a hassle. It is best if the licensing allows commercialization but is not crucial at this point. It could also be fun to try out the Google App Engine if the tools will let me. Should I be using a javascript UI framework or should I go for standard HTML forms? Which framework would you recommend? A: jQuery? Though its UI components are perhaps not up to the very best (but lots of work appears to be done in that area), jQuery itself seems to be on track to become the de facto JS standard library. It is dual-licensed under the MIT and GPL licenses, so commercial use is OK (and costless). A: I heartily suggest Django + Prototype. I think they cover most of the bases you are looking at and they are very straight-forward to get started with. Also you could use them on the GAE if that is the route you decide to take, although you should keep in mind that the GAE does not support Cron jobs, which can limit your functionality. A: I'd take a look at web2py. It's a full-stack framework that requires no configuration and is easy to try out - everything can be driven via a web interface if you choose. I've dabbled with other frameworks and it's by far the easiest to set up and includes lots of helpful things for free. The documentation is good and there is a howto for getting it to work under Google App Engine. It comes with libraries and a howto for Ajax. As far as I remember the licence doesn't restrict using it in commercial applications. A: Take a look at ExtJS. It's got the best widget library out there. They offer a commercial license and an open-source license. There are several python developers in the community and there is some integration with Google APIs. A: web2py uses jQuery
Starting a new database driven python web application would you use a javascript widget framework? If so which framework?
I am starting a new web application project. I want to use python as I am using it at my bread-and-butter-job. However I don't want to reinvent the wheel. Some things I have thought about: AJAX would be nice if it’s not too much of a hassle. It is best if the licensing allows commercialization but is not crucial at this point. It could also be fun to try out the Google App Engine if the tools will let me. Should I be using a javascript UI framework or should I go for standard HTML forms? Which framework would you recommend?
[ "jQuery? Though its UI components are perhaps not up to the very best (but lots of work appears to be done in that area), jQuery itself seems to be on track to become the de facto JS standard library. It is both MIT or GPL licensed so commercial use is ok (and costless).\n", "I heartily suggest Django + Prototype. I think they cover most of the bases you are looking at and they are very straight-forward to get started with. Also you could use them on the GAE if that is the route you decide to take, although you should keep in mind that the GAE does not support Cron jobs, which can limit your functionality.\n", "I'd take a look at web2py. It's a full-stack framework that requires no configuration and is easy to try out - everything can be driven via a web interface if you choose. I've dabbled with other frameworks and it's by far the easiest to setup and includes lots of helpful things for free. The documentation is good and there is a howto for getting it to work under Google App Engine. It comes with libraries and a howto for Ajax. As far as I remember the licence doesn't restrict using it in commercial applications.\n", "Take a look at ExtJS. It's got the best widget library out there. They offer a commercial license and an open-source license. There are several python developers in the community and there is some integration with Google APIs.\n", "web2py uses jQuery\n" ]
[ 4, 1, 1, 1, 1 ]
[]
[]
[ "frameworks", "javascript", "python" ]
stackoverflow_0000205204_frameworks_javascript_python.txt
Q: Tutorial for Python - Should I use 2.x or 3.0? Python 3.0 is in beta with a final release coming shortly. Obviously it will take some significant time for general adoption and for it to eventually replace 2.x. I am writing a tutorial about certain aspects of programming Python. I'm wondering if I should do it in Python 2.x or 3.0? (not that the difference is huge) a 2.x tutorial is probably more useful now, but it would be nice to start producing 3.0 tutorials. anyone have thoughts? (of course I could do both, but I would prefer to do one or the other) A: Start with 2.x. Most existing libraries will be on 2.x for a long time. Last year, Guido himself said that it would be "two years" until you needed to learn 3.0; there's still another year left. Personally, I think it will be longer. People writing code on 2.x can learn how to use the 2to3 tool and have code that works on both versions. There is no 3to2, so code written for python 3 is significantly less valuable. That's not to mention how disappointing it will be for your students to learn that python 3 is not installed on their Linux computer ("/usr/bin/python" will be python 2.x for the next 5 years, at least), that there is no django for python 3, no wxwindows for python 3, no GTK for python 3, no Twisted for python 3, no PIL for python 3... the real strength of Python has always been in its extensive collection of libraries, and there are very few libraries for python 3 right now. If your tutorial is well written, you should easily be able to update it to python 2.6, 2.7, and eventually python 3. A: Van Rossum (creator of python) explains that "if you're starting a brand new thing, you should use 3.0." So most people looking to get started should even START with 3.0. It will be useful especially since there are probably very few out there now. the article A: Python 2.x has been out long enough to build up quite a few tutorials already, but 3k has far fewer resources available. Some intro level 3k stuff would probably see more general purpose use. So unless you're tailoring this to a specific sub domain that lacks any python resources, 3k would be of greater use. A: Learn Python 3.0, as contagious suggests. Python 2.x is not very different, there seems to be a great deal of FUD about the rather minor differences between them. Sure, the differences are great enough that most programs will need to be modified, but almost all of the modifications are straightforward (like changing print statement to print function). In fact, Python 2.6 can optionally enable all the new syntactic features of Python 3.0. It's a very well-thought-out transition process. A: It depends on your audience. If it's a general audience, and you plan to leave it posted for a long time, I'd suggest looking forward and going with 3.0. On the other hand if it's for a project or group that's going to be doing work in the near future, Python 2 probably makes more sense. A: The differences are small enough that it's really not going to matter much.
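To make concrete how mechanical most of the changes are, here is a minimal sketch of the print change mentioned above; the 2to3 tool performs exactly this rewrite when run as 2to3 -w script.py:

from __future__ import print_function  # makes the call form valid on Python 2.6+ too

# Python 2 statement form, which 2to3 rewrites automatically:
#     print "hello", "world"
# Python 3 function form:
print("hello", "world")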
Tutorial for Python - Should I use 2.x or 3.0?
Python 3.0 is in beta with a final release coming shortly. Obviously it will take some significant time for general adoption and for it to eventually replace 2.x. I am writing a tutorial about certain aspects of programming Python. I'm wondering if I should do it in Python 2.x or 3.0? (not that the difference is huge) a 2.x tutorial is probably more useful now, but it would be nice to start producing 3.0 tutorials. anyone have thoughts? (of course I could do both, but I would prefer to do one or the other)
[ "Start with 2.x. Most existing libraries will be on 2.x for a long time. Last year, Guido himself said that it would be \"two years\" until you needed to learn 3.0; there's still another year left. Personally, I think it will be longer. People writing code on 2.x can learn how to use the 2to3 tool and have code that works on both versions. There is no 3to2, so code written for python 3 is significantly less valuable.\nThats not to mention how disappointing it will be for your students to learn that python 3 is not installed on their Linux computer (\"/usr/bin/python\" will be python 2.x for the next 5 years, at least), that there is no django for python 3, no wxwindows for python 3, no GTK for python 3, no Twisted for python 3, no PIL for python 3... the real strength of Python has always been in its extensive collection of libraries, and there are very few libraries for python 3 right now.\nIf your tutorial is well written, you should easily be able to update it to python 2.6, 2.7, and eventually python 3.\n", "Van Rossum (creator of python) explains that \"if you're starting a brand new thing, you should use 3.0.\" So most people looking to get started should even START with 3.0. It will be useful especially since there are probably very few out there now.\nthe article\n", "Python 2.x has been out long enough to build up quite a few tutorials already, but 3k has much less resources available. Some intro level 3k stuff would probably see more general purpose use. So unless you're tailoring this to a specific sub domain that lacks any python resources, 3k would be of greater use.\n", "Learn Python 3.0, as contagious suggests.\nPython 2.x is not very different, there seems to be a great deal of FUD about the rather minor differences between them. Sure, the differences are great enough that most programs will need to be modified, but almost all of the modifications are straightforward (like changing print statement to print function).\nIn fact, Python 2.6 can optionally enable all the new syntactic features of Python 3.0. It's a very well-thought-out transition process.\n", "It depends on your audience. If it's a general audience, and you plan to leave it posted for a long time, I'd suggest looking forward and going with 3.0. On the other hand if it's for a project or group that's going to be doing work in the near future, Python 2 probably make more sense.\n", "The differences are small enough that it's really not going to matter much.\n" ]
[ 14, 11, 2, 2, 0, 0 ]
[]
[]
[ "python", "python_2.x", "python_3.x" ]
stackoverflow_0000209888_python_python_2.x_python_3.x.txt
Q: Socket programming for mobile phones in Python I've written code for communication between my phone and computer through TCP sockets. When I type out the code line by line in the interactive console it works fine. However, when I try running the script directly through filebrowser.py it just won't work. I'm using Nokia N95. Is there any way I can run this script directly without using filebrowser.py? A: Have you read Hack a Mobile Phone with Linux and Python? It is rather old, but maybe you'll find it helpful. A: If the code is working in the interactive interpreter when typed, but not when run directly then I would suggest seeing if your code has reached a deadlock on the socket, for example both ends are waiting for data from the other. When typing into the interactive interpreter there is a longer delay between the execution of each line of code. A: Well, it doesn't appear to be a deadlock situation. It throws an error saying the remote server refused the connection. However, like I said before, if I type the very same code into the interactive interpreter it works just fine. I'm wondering if the error is because the script is run through filebrowser.py? A: Don't you have the "Run script" menu in your interactive Python shell?
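A generic sketch of the deadlock diagnosis suggested in the answers above, not PyS60-specific (the phone's socket module may differ from desktop Python's) and with a placeholder host and port: a timeout makes a silently blocked recv() fail loudly instead of hanging.

import socket

HOST, PORT = "192.168.1.10", 9000  # placeholder peer address

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)  # fail fast instead of blocking forever on a silent peer
try:
    s.connect((HOST, PORT))
    s.sendall(b"hello\n")   # this side sends first...
    reply = s.recv(1024)    # ...then waits for the reply
    print(repr(reply))
except socket.timeout:
    print("timed out: check which side is supposed to send first")
finally:
    s.close()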
Socket programming for mobile phones in Python
I've written code for communication between my phone and computer through TCP sockets. When I type out the code line by line in the interactive console it works fine. However, when I try running the script directly through filebrowser.py it just won't work. I'm using Nokia N95. Is there any way I can run this script directly without using filebrowser.py?
[ "Have you read Hack a Mobile Phone with Linux and Python? It is rather old, but maybe you find it helpful.\n", "If the code is working in the interactive interpreter when typed, but not when run directly then I would suggest seeing if your code has reached a deadlock on the socket, for example both ends are waiting for data from the other. When typing into the interactive interpreter there is a longer delay between the execution of each line on code.\n", "Well, it doesn't appear to be a deadlock situation. It throws an error saying remote server refused connection. However, like I said before, if i type the very same code into the interactive interpreter it works just fine. I'm wondering if the error is because the script is run through filebrowser.py?\n", "Don't you have the \"Run script\" menu in your interactive Python shell? \n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "mobile", "python", "sockets" ]
stackoverflow_0000141647_mobile_python_sockets.txt
Q: How to create a numpy record array from C On the Python side, I can create new numpy record arrays as follows: numpy.zeros((3,), dtype=[('a', 'i4'), ('b', 'U5')]) How do I do the same from a C program? I suppose I have to call PyArray_SimpleNewFromDescr(nd, dims, descr), but how do I construct a PyArray_Descr that is appropriate for passing as the third argument to PyArray_SimpleNewFromDescr? A: Use PyArray_DescrConverter. Here's an example: #include <Python.h> #include <stdio.h> #include <numpy/arrayobject.h> int main(int argc, char *argv[]) { npy_intp dims[] = { 2, 3 }; PyObject *op, *array; PyArray_Descr *descr; Py_Initialize(); import_array(); op = Py_BuildValue("[(s, s), (s, s)]", "a", "i4", "b", "U5"); PyArray_DescrConverter(op, &descr); Py_DECREF(op); array = PyArray_SimpleNewFromDescr(2, dims, descr); PyObject_Print(array, stdout, 0); printf("\n"); Py_DECREF(array); return 0; } Thanks to Adam Rosenfield for pointing to Section 13.3.10 of the Guide to NumPy. A: See the Guide to NumPy, section 13.3.10. There's lots of different ways to make a descriptor, although it's not nearly as easy as writing [('a', 'i4'), ('b', 'U5')].
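For reference, a small pure-Python sketch of what the C answer above builds: the Py_BuildValue list is exactly the dtype spec that PyArray_DescrConverter consumes, so the result can be checked against NumPy directly.

import numpy as np

# The same descriptor the C code above builds via Py_BuildValue +
# PyArray_DescrConverter:
descr = np.dtype([('a', 'i4'), ('b', 'U5')])

# The same 2x3 record array PyArray_SimpleNewFromDescr creates, except
# that np.zeros also zero-fills the buffer (the C call does not):
arr = np.zeros((2, 3), dtype=descr)
print(arr.dtype)  # [('a', '<i4'), ('b', '<U5')]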
How to create a numpy record array from C
On the Python side, I can create new numpy record arrays as follows: numpy.zeros((3,), dtype=[('a', 'i4'), ('b', 'U5')]) How do I do the same from a C program? I suppose I have to call PyArray_SimpleNewFromDescr(nd, dims, descr), but how do I construct a PyArray_Descr that is appropriate for passing as the third argument to PyArray_SimpleNewFromDescr?
[ "Use PyArray_DescrConverter. Here's an example:\n#include <Python.h>\n#include <stdio.h>\n#include <numpy/arrayobject.h>\n\nint main(int argc, char *argv[])\n{\n int dims[] = { 2, 3 };\n PyObject *op, *array;\n PyArray_Descr *descr;\n\n Py_Initialize();\n import_array();\n op = Py_BuildValue(\"[(s, s), (s, s)]\", \"a\", \"i4\", \"b\", \"U5\");\n PyArray_DescrConverter(op, &descr);\n Py_DECREF(op);\n array = PyArray_SimpleNewFromDescr(2, dims, descr);\n PyObject_Print(array, stdout, 0);\n printf(\"\\n\");\n Py_DECREF(array);\n return 0;\n}\n\nThanks to Adam Rosenfield for pointing to Section 13.3.10 of the Guide to NumPy.\n", "See the Guide to NumPy, section 13.3.10. There's lots of different ways to make a descriptor, although it's not nearly as easy as writing [('a', 'i4'), ('b', 'U5')].\n" ]
[ 11, 6 ]
[]
[]
[ "c", "numpy", "python" ]
stackoverflow_0000214549_c_numpy_python.txt
Q: A good multithreaded python webserver? I am looking for a python webserver which is multithreaded instead of being multi-process (as in case of mod_python for apache). I want it to be multithreaded because I want to have an in memory object cache that will be used by various http threads. My webserver does a lot of expensive stuff and computes some large arrays which need to be cached in memory for future use to avoid recomputing. This is not possible in a multi-process web server environment. Storing this information in memcache is also not a good idea as the arrays are large and storing them in memcache would lead to deserialization of data coming from memcache apart from the additional overhead of IPC. I implemented a simple webserver using BaseHttpServer, it gives good performance but it gets stuck after a few hours. I need a more mature webserver. Is it possible to configure apache to use mod_python under a thread model so that I can do some object caching? A: CherryPy. Features, as listed from the website: A fast, HTTP/1.1-compliant, WSGI thread-pooled webserver. Typically, CherryPy itself takes only 1-2ms per page! Support for any other WSGI-enabled webserver or adapter, including Apache, IIS, lighttpd, mod_python, FastCGI, SCGI, and mod_wsgi Easy to run multiple HTTP servers (e.g. on multiple ports) at once A powerful configuration system for developers and deployers alike A flexible plugin system Built-in tools for caching, encoding, sessions, authorization, static content, and many more A native mod_python adapter A complete test suite Swappable and customizable...everything. Built-in profiling, coverage, and testing support. A: Consider reconsidering your design. Maintaining that much state in your webserver is probably a bad idea. Multi-process is a much better way to go for stability. Is there another way to share state between separate processes? What about a service? Database? Index? It seems unlikely that maintaining a huge array of data in memory and relying on a single multi-threaded process to serve all your requests is the best design or architecture for your app. A: Twisted can serve as such a web server. While not multithreaded itself, there is a (not yet released) multithreaded WSGI container present in the current trunk. You can check out the SVN repository and then run: twistd web --wsgi=your.wsgi.application A: It's hard to give a definitive answer without knowing what kind of site you are working on and what kind of load you are expecting. Sub-second performance may be a serious requirement or it may not. If you really need to save that last millisecond then you absolutely need to keep your arrays in memory. However as others have suggested it is more than likely that you don't and could get by with something else. Your usage pattern of the data in the array may affect what kinds of choices you make. You probably don't need access to the entire set of data from the array all at once so you could break your data up into smaller chunks and put those chunks in the cache instead of the one big lump. Depending on how often your array data needs to get updated you might make a choice between memcached, local db (berkley, sqlite, small mysql installation, etc) or a remote db. I'd say memcached for fairly frequent updates. A local db for something in the frequency of hourly and remote for the frequency of daily. One thing to consider also is what happens after a cache miss. If 50 clients all of a sudden get a cache miss and all of them at the same time decide to start regenerating those expensive arrays your box(es) will quickly be reduced to 8086's. So you have to take into consideration how you will handle that. Many articles out there cover how to recover from cache misses. Hope this is helpful. A: Not multithreaded, but twisted might serve your needs. A: Perhaps you have a problem with your implementation in Python using BaseHttpServer. There's no reason for it to "get stuck", and implementing a simple threaded server using BaseHttpServer and threading shouldn't be difficult. Also, see http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer about implementing a simple multi-threaded server with HTTPServer and ThreadingMixIn A: You could instead use a distributed cache that is accessible from each process, memcached being the example that springs to mind. A: web.py has made me happy in the past. Consider checking it out. But it does sound like an architectural redesign might be the proper, though more expensive, solution. A: I use CherryPy both personally and professionally, and I'm extremely happy with it. I even do the kinds of thing you're describing, such as having global object caches, running other threads in the background, etc. And it integrates well with Apache; simply run CherryPy as a standalone server bound to localhost, then use Apache's mod_proxy and mod_rewrite to have Apache transparently forward your requests to CherryPy. The CherryPy website is http://cherrypy.org/ A: I actually had the same issue recently. Namely: we wrote a simple server using BaseHTTPServer and found that the fact that it's not multi-threaded was a big drawback. My solution was to port the server to Pylons (http://pylonshq.com/). The port was fairly easy and one benefit was it's very easy to create a GUI using Pylons so I was able to throw a status page on top of what's basically a daemon process. I would summarize Pylons this way: it's similar to Ruby on Rails in that it aims to be very easy to deploy web apps its default templating language, Mako, is very nice to work with it uses a system of routing urls that's very convenient for us performance is not an issue, so I can't guarantee that Pylons would perform adequately for your needs you can use it with Apache & Lighthttpd, though I've not tried this We also run an app with Twisted and are happy with it. Twisted has good performance, but I find Twisted's single-threaded/defer-to-thread programming model fairly complicated. It has lots of advantages, but would not be my choice for a simple app. Good luck. A: Just to point out something different from the usual suspects... Some years ago while I was using Zope 2.x I read about Medusa as it was the web server used for the platform. They advertised it to work well under heavy load and it can provide you with the functionality you asked for.
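A minimal sketch of the HTTPServer + ThreadingMixIn route mentioned above: every request is handled in a thread of the same process, so a module-level dict can act as the shared in-memory cache the question asks for. Python 2 module names are used to match the era of the question; in Python 3 they live in http.server and socketserver.

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn

CACHE = {}  # shared by all handler threads in this single process

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # reuse a cached result if one exists, else store a new one
        body = CACHE.setdefault(self.path, "expensive result for %s" % self.path)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """One thread per request; every thread sees the same CACHE."""

ThreadedHTTPServer(("localhost", 8080), Handler).serve_forever()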
A good multithreaded python webserver?
I am looking for a python webserver which is multithreaded instead of being multi-process (as in case of mod_python for apache). I want it to be multithreaded because I want to have an in memory object cache that will be used by various http threads. My webserver does a lot of expensive stuff and computes some large arrays which need to be cached in memory for future use to avoid recomputing. This is not possible in a multi-process web server environment. Storing this information in memcache is also not a good idea as the arrays are large and storing them in memcache would lead to deserialization of data coming from memcache apart from the additional overhead of IPC. I implemented a simple webserver using BaseHttpServer, it gives good performance but it gets stuck after a few hours. I need a more mature webserver. Is it possible to configure apache to use mod_python under a thread model so that I can do some object caching?
[ "CherryPy. Features, as listed from the website:\n\nA fast, HTTP/1.1-compliant, WSGI thread-pooled webserver. Typically, CherryPy itself takes only 1-2ms per page!\nSupport for any other WSGI-enabled webserver or adapter, including Apache, IIS, lighttpd, mod_python, FastCGI, SCGI, and mod_wsgi\nEasy to run multiple HTTP servers (e.g. on multiple ports) at once\nA powerful configuration system for developers and deployers alike\nA flexible plugin system\nBuilt-in tools for caching, encoding, sessions, authorization, static content, and many more\nA native mod_python adapter\nA complete test suite\nSwappable and customizable...everything.\nBuilt-in profiling, coverage, and testing support. \n\n", "Consider reconsidering your design. Maintaining that much state in your webserver is probably a bad idea. Multi-process is a much better way to go for stability. \nIs there another way to share state between separate processes? What about a service? Database? Index? \nIt seems unlikely that maintaining a huge array of data in memory and relying on a single multi-threaded process to serve all your requests is the best design or architecture for your app. \n", "Twisted can serve as such a web server. While not multithreaded itself, there is a (not yet released) multithreaded WSGI container present in the current trunk. You can check out the SVN repository and then run:\ntwistd web --wsgi=your.wsgi.application\n\n", "Its hard to give a definitive answer without knowing what kind of site you are working on and what kind of load you are expecting. Sub second performance may be a serious requirement or it may not. If you really need to save that last millisecond then you absolutely need to keep your arrays in memory. However as others have suggested it is more than likely that you don't and could get by with something else. Your usage pattern of the data in the array may affect what kinds of choices you make. You probably don't need access to the entire set of data from the array all at once so you could break your data up into smaller chunks and put those chunks in the cache instead of the one big lump. Depending on how often your array data needs to get updated you might make a choice between memcached, local db (berkley, sqlite, small mysql installation, etc) or a remote db. I'd say memcached for fairly frequent updates. A local db for something in the frequency of hourly and remote for the frequency of daily. One thing to consider also is what happens after a cache miss. If 50 clients all of a sudden get a cache miss and all of them at the same time decide to start regenerating those expensive arrays your box(es) will quickly be reduced to 8086's. So you have to take in to consideration how you will handle that. Many articles out there cover how to recover from cache misses. Hope this is helpful.\n", "Not multithreaded, but twisted might serve your needs.\n", "Perhaps you have a problem with your implementation in Python using BaseHttpServer. There's no reason for it to \"get stuck\", and implementing a simple threaded server using BaseHttpServer and threading shouldn't be difficult.\nAlso, see http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer about implementing a simple multi-threaded server with HTTPServer and ThreadingMixIn\n", "You could instead use a distributed cache that is accessible from each process, memcached being the example that springs to mind.\n", "web.py has made me happy in the past. 
Consider checking it out.\nBut it does sound like an architectural redesign might be the proper, though more expensive, solution.\n", "I use CherryPy both personally and professionally, and I'm extremely happy with it. I even do the kinds of thing you're describing, such as having global object caches, running other threads in the background, etc. And it integrates well with Apache; simply run CherryPy as a standalone server bound to localhost, then use Apache's mod_proxy and mod_rewrite to have Apache transparently forward your requests to CherryPy.\nThe CherryPy website is http://cherrypy.org/\n", "I actually had the same issue recently. Namely: we wrote a simple server using BaseHTTPServer and found that the fact that it's not multi-threaded was a big drawback.\nMy solution was to port the server to Pylons (http://pylonshq.com/). The port was fairly easy and one benefit was it's very easy to create a GUI using Pylons so I was able to throw a status page on top of what's basically a daemon process. \nI would summarize Pylons this way:\n\nit's similar to Ruby on Rails in that it aims to be very easy to deploy web apps\nit's default templating language, Mako, is very nice to work with\nit uses a system of routing urls that's very convenient\nfor us performance is not an issue, so I can't guarantee that Pylons would perform adequately for your needs\nyou can use it with Apache & Lighthttpd, though I've not tried this\n\nWe also run an app with Twisted and are happy with it. Twisted has good performance, but I find Twisted's single-threaded/defer-to-thread programming model fairly complicated. It has lots of advantages, but would not be my choice for a simple app.\nGood luck.\n", "Just to point out something different from the usual suspects...\nSome years ago while I was using Zope 2.x I read about Medusa as it was the web server used for the platform. They advertised it to work well under heavy load and it can provide you with the functionality you asked. \n" ]
[ 16, 7, 6, 3, 2, 2, 2, 2, 1, 1, 0 ]
[]
[]
[ "apache", "mod_python", "python", "webserver" ]
stackoverflow_0000213483_apache_mod_python_python_webserver.txt
Q: wxpython - Expand list control vertically not horizontally I have a ListCtrl that displays a list of items for the user to select. This works fine except that when the ctrl is not large enough to show all the items, I want it to expand downwards with a vertical scroll bar rather than using a horizontal scroll bar as it expands to the right. The ListCtrl's creation: self.subjectList = wx.ListCtrl(self, self.ID_SUBJECT, style = wx.LC_LIST | wx.LC_SINGLE_SEL | wx.LC_VRULES) Items are inserted using wx.ListItem: item = wx.ListItem() item.SetText(subject) item.SetData(id) item.SetWidth(200) self.subjectList.InsertItem(item) A: Use the wxLC_REPORT style. import wx class Test(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.test = wx.ListCtrl(self, style = wx.LC_REPORT | wx.LC_NO_HEADER) for i in range(5): self.test.InsertColumn(i, 'Col %d' % (i + 1)) self.test.SetColumnWidth(i, 200) for i in range(0, 100, 5): index = self.test.InsertStringItem(self.test.GetItemCount(), "") for j in range(5): self.test.SetStringItem(index, j, str(i+j)*30) self.Show() app = wx.PySimpleApp() app.TopWindow = Test() app.MainLoop() A: Try this: import wx class Test(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.test = wx.ListCtrl(self, style = wx.LC_ICON | wx.LC_AUTOARRANGE) for i in range(100): self.test.InsertStringItem(self.test.GetItemCount(), str(i)) self.Show() app = wx.PySimpleApp() app.TopWindow = Test() app.MainLoop()
wxpython - Expand list control vertically not horizontally
I have a ListCtrl that displays a list of items for the user to select. This works fine except that when the ctrl is not large enough to show all the items, I want it to expand downwards with a vertical scroll bar rather than using a horizontal scroll bar as it expands to the right. The ListCtrl's creation: self.subjectList = wx.ListCtrl(self, self.ID_SUBJECT, style = wx.LC_LIST | wx.LC_SINGLE_SEL | wx.LC_VRULES) Items are inserted using wx.ListItem: item = wx.ListItem() item.SetText(subject) item.SetData(id) item.SetWidth(200) self.subjectList.InsertItem(item)
[ "Use the wxLC_REPORT style.\nimport wx\n\nclass Test(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n self.test = wx.ListCtrl(self, style = wx.LC_REPORT | wx.LC_NO_HEADER)\n\n for i in range(5):\n self.test.InsertColumn(i, 'Col %d' % (i + 1))\n self.test.SetColumnWidth(i, 200)\n\n\n for i in range(0, 100, 5):\n index = self.test.InsertStringItem(self.test.GetItemCount(), \"\")\n for j in range(5):\n self.test.SetStringItem(index, j, str(i+j)*30)\n\n self.Show()\n\napp = wx.PySimpleApp()\napp.TopWindow = Test()\napp.MainLoop()\n\n", "Try this:\nimport wx\n\nclass Test(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n self.test = wx.ListCtrl(self, style = wx.LC_ICON | wx.LC_AUTOARRANGE)\n\n for i in range(100):\n self.test.InsertStringItem(self.test.GetItemCount(), str(i))\n\n self.Show()\n\napp = wx.PySimpleApp()\napp.TopWindow = Test()\napp.MainLoop()\n\n" ]
[ 3, 1 ]
[]
[]
[ "python", "wxpython", "wxwidgets" ]
stackoverflow_0000215132_python_wxpython_wxwidgets.txt
Q: In Django, how could one use Django's update_object generic view to edit forms of inherited models? In Django, given excerpts from an application animals like so: A animals/models.py with: from django.db import models from django.contrib.contenttypes.models import ContentType class Animal(models.Model): content_type = models.ForeignKey(ContentType,editable=False,null=True) name = models.CharField() class Dog(Animal): is_lucky = models.BooleanField() class Cat(Animal): lives_left = models.IntegerField() And an animals/urls.py: from django.conf.urls.default import * from animals.models import Animal, Dog, Cat dict = { 'model' : Animal } urlpatterns = ( url(r'^edit/(?P<object_id>\d+)$', 'create_update.update_object', dict), ) How can one use generic views to edit Dog and/or Cat using the same form? I.e. The form object that is passed to animals/animal_form.html will be Animal, and thus won't contain any of the specifics for the derived classes Dog and Cat. How could I have Django automatically pass a form for the child class to animal/animals_form.html? Incidentally, I'm using Djangosnippets #1031 for ContentType management, so Animal would have a method named as_leaf_class that returns the derived class. Clearly, one could create forms for each derived class, but that's quite a lot of unnecessary duplication (as the templates will all be generic -- essentially {{ form.as_p }}). Incidentally, it's best to assume that Animal will probably be one of several unrelated base classes with the same problem, so an ideal solution would be generic. Thank you in advance for the help. A: Alright, here's what I've done, and it seems to work and be a sensible design (though I stand to be corrected!). In a core library (e.g. mysite.core.views.create_update), I've written a decorator: from django.contrib.contenttypes.models import ContentType from django.views.generic import create_update def update_object_as_child(parent_model_class): """ Given a base models.Model class, decorate a function to return create_update.update_object, on the child class. e.g. @update_object(Animal) def update_object(request, object_id): pass kwargs should have an object_id defined. """ def decorator(function): def wrapper(request, **kwargs): # may raise KeyError id = kwargs['object_id'] parent_obj = parent_model_class.objects.get( pk=id ) # following http://www.djangosnippets.org/snippets/1031/ child_class = parent_obj.content_type.model_class() kwargs['model'] = child_class # rely on the generic code for testing/validation/404 return create_update.update_object(request, **kwargs) return wrapper return decorator And in animals/views.py, I have: from mysite.core.views.create_update import update_object_as_child @update_object_as_child(Animal) def edit_animal(request, object_id): pass And in animals/urls.py, I have: urlpatterns += patterns('animals.views', url(r'^edit/(?P<object_id>\d+)$', 'edit_animal', name="edit_animal"), ) Now I only need a unique edit function for each base class, which is trivial to create with a decorator. Hope someone finds that helpful, and I'd be delighted to have feedback. A: AFAICT, cats and dogs are on different DB tables, and maybe there's no Animal table, but you're using one URL pattern for all. Somewhere you need to choose between each. I'd use a different URL pattern for cats and dogs; both would call 'create_update.update_object', but using a different dict for each: one with 'model':Dog and the other with 'model':Cat. Or maybe you want a single table where each record can be a cat or a dog? I don't think you can use inherited models for that.
In Django, how could one use Django's update_object generic view to edit forms of inherited models?
In Django, given excerpts from an application animals like so: A animals/models.py with: from django.db import models from django.contrib.contenttypes.models import ContentType class Animal(models.Model): content_type = models.ForeignKey(ContentType,editable=False,null=True) name = models.CharField() class Dog(Animal): is_lucky = models.BooleanField() class Cat(Animal): lives_left = models.IntegerField() And an animals/urls.py: from django.conf.urls.default import * from animals.models import Animal, Dog, Cat dict = { 'model' : Animal } urlpatterns = ( url(r'^edit/(?P<object_id>\d+)$', 'create_update.update_object', dict), ) How can one use generic views to edit Dog and/or Cat using the same form? I.e. The form object that is passed to animals/animal_form.html will be Animal, and thus won't contain any of the specifics for the derived classes Dog and Cat. How could I have Django automatically pass a form for the child class to animal/animals_form.html? Incidentally, I'm using Djangosnippets #1031 for ContentType management, so Animal would have a method named as_leaf_class that returns the derived class. Clearly, one could create forms for each derived class, but that's quite a lot of unnecessary duplication (as the templates will all be generic -- essentially {{ form.as_p }}). Incidentally, it's best to assume that Animal will probably be one of several unrelated base classes with the same problem, so an ideal solution would be generic. Thank you in advance for the help.
[ "Alright, here's what I've done, and it seems to work and be a sensible design (though I stand to be corrected!).\nIn a core library (e.g. mysite.core.views.create_update), I've written a decorator:\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.views.generic import create_update\n\ndef update_object_as_child(parent_model_class):\n \"\"\"\n Given a base models.Model class, decorate a function to return \n create_update.update_object, on the child class.\n\n e.g.\n @update_object(Animal)\n def update_object(request, object_id):\n pass\n\n kwargs should have an object_id defined.\n \"\"\"\n\n def decorator(function):\n def wrapper(request, **kwargs):\n # may raise KeyError\n id = kwargs['object_id']\n\n parent_obj = parent_model_class.objects.get( pk=id )\n\n # following http://www.djangosnippets.org/snippets/1031/\n child_class = parent_obj.content_type.model_class()\n\n kwargs['model'] = child_class\n\n # rely on the generic code for testing/validation/404\n return create_update.update_object(request, **kwargs)\n return wrapper\n\n return decorator\n\nAnd in animals/views.py, I have:\nfrom mysite.core.views.create_update import update_object_as_child\n\n@update_object_as_child(Animal)\ndef edit_animal(request, object_id):\n pass\n\nAnd in animals/urls.py, I have:\nurlpatterns += patterns('animals.views',\n url(r'^edit/(?P<object_id>\\d+)$', 'edit_animal', name=\"edit_animal\"),\n)\n\nNow I only need a unique edit function for each base class, which is trivial to create with a decorator.\nHope someone finds that helpful, and I'd be delighted to have feedback.\n", "AFAICT, cats and dogs are on different DB tables, and maybe there's no Animal table. but you're using one URL pattern for all. somewhere you need to choose between each.\nI'd use a different URL patter for cats and dogs, both would call 'create_update.update_object'; but using a different dict for each. one with 'model':Dog and the other with 'model':Cat\nor maybe you want a single table where each record can be a cat or a dog? i don't think you can use inherited models for that.\n" ]
[ 1, 0 ]
[]
[]
[ "decorator", "django", "forms", "inheritance", "python" ]
stackoverflow_0000213237_decorator_django_forms_inheritance_python.txt
Q: How to convert a C string (char array) into a Python string when there are non-ASCII characters in the string? I have embedded a Python interpreter in a C program. Suppose the C program reads some bytes from a file into a char array and learns (somehow) that the bytes represent text with a certain encoding (e.g., ISO 8859-1, Windows-1252, or UTF-8). How do I decode the contents of this char array into a Python string? The Python string should in general be of type unicode—for instance, a 0x93 in Windows-1252 encoded input becomes a u'\u201c'. I have attempted to use PyString_Decode, but it always fails when there are non-ASCII characters in the string. Here is an example that fails: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { char c_string[] = { (char)0x93, 0 }; PyObject *py_string; Py_Initialize(); py_string = PyString_Decode(c_string, 1, "windows_1252", "replace"); if (!py_string) { PyErr_Print(); return 1; } return 0; } The error message is UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 0: ordinal not in range(128), which indicates that the ascii encoding is used even though we specify windows_1252 in the call to PyString_Decode. The following code works around the problem by using PyString_FromString to create a Python string of the undecoded bytes, then calling its decode method: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { char c_string[] = { (char)0x93, 0 }; PyObject *raw, *decoded; Py_Initialize(); raw = PyString_FromString(c_string); printf("Undecoded: "); PyObject_Print(raw, stdout, 0); printf("\n"); decoded = PyObject_CallMethod(raw, "decode", "s", "windows_1252"); Py_DECREF(raw); printf("Decoded: "); PyObject_Print(decoded, stdout, 0); printf("\n"); return 0; } A: PyString_Decode does this: PyObject *PyString_Decode(const char *s, Py_ssize_t size, const char *encoding, const char *errors) { PyObject *v, *str; str = PyString_FromStringAndSize(s, size); if (str == NULL) return NULL; v = PyString_AsDecodedString(str, encoding, errors); Py_DECREF(str); return v; } IOW, it does basically what you're doing in your second example - converts to a string, then decode the string. The problem here arises from PyString_AsDecodedString, rather than PyString_AsDecodedObject. PyString_AsDecodedString does PyString_AsDecodedObject, but then tries to convert the resulting unicode object into a string object with the default encoding (for you, looks like that's ASCII). That's where it fails. I believe you'll need to do two calls - but you can use PyString_AsDecodedObject rather than calling the python "decode" method. Something like: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { char c_string[] = { (char)0x93, 0 }; PyObject *py_string, *py_unicode; Py_Initialize(); py_string = PyString_FromStringAndSize(c_string, 1); if (!py_string) { PyErr_Print(); return 1; } py_unicode = PyString_AsDecodedObject(py_string, "windows_1252", "replace"); Py_DECREF(py_string); return 0; } I'm not entirely sure what the reasoning behind PyString_Decode working this way is. A very old thread on python-dev seems to indicate that it has something to do with chaining the output, but since the Python methods don't do the same, I'm not sure if that's still relevant. A: You don't want to decode the string into a Unicode representation, you just want to treat it as an array of bytes, right? Just use PyString_FromString: char *cstring; PyObject *pystring = PyString_FromString(cstring); That's all. Now you have a Python str() object. See docs here: https://docs.python.org/2/c-api/string.html I'm a little bit confused about how to specify "str" or "unicode." They are quite different if you have non-ASCII characters. If you want to decode a C string and you know exactly what character set it's in, then yes, PyString_DecodeString is a good place to start. A: Try calling PyErr_Print() in the "if (!py_string)" clause. Perhaps the python exception will give you some more information.
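The decode step itself can be sanity-checked in pure Python; a short sketch of what the embedded C code is doing (the underscored codec name resolves to cp1252, as in the question's code):

raw = b'\x93'                      # one Windows-1252-encoded byte
text = raw.decode('windows_1252')  # -> u'\u201c', the left double quotation mark
print(repr(text))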
How to convert a C string (char array) into a Python string when there are non-ASCII characters in the string?
I have embedded a Python interpreter in a C program. Suppose the C program reads some bytes from a file into a char array and learns (somehow) that the bytes represent text with a certain encoding (e.g., ISO 8859-1, Windows-1252, or UTF-8). How do I decode the contents of this char array into a Python string? The Python string should in general be of type unicode—for instance, a 0x93 in Windows-1252 encoded input becomes a u'\u201c'. I have attempted to use PyString_Decode, but it always fails when there are non-ASCII characters in the string. Here is an example that fails: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { char c_string[] = { (char)0x93, 0 }; PyObject *py_string; Py_Initialize(); py_string = PyString_Decode(c_string, 1, "windows_1252", "replace"); if (!py_string) { PyErr_Print(); return 1; } return 0; } The error message is UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 0: ordinal not in range(128), which indicates that the ascii encoding is used even though we specify windows_1252 in the call to PyString_Decode. The following code works around the problem by using PyString_FromString to create a Python string of the undecoded bytes, then calling its decode method: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { char c_string[] = { (char)0x93, 0 }; PyObject *raw, *decoded; Py_Initialize(); raw = PyString_FromString(c_string); printf("Undecoded: "); PyObject_Print(raw, stdout, 0); printf("\n"); decoded = PyObject_CallMethod(raw, "decode", "s", "windows_1252"); Py_DECREF(raw); printf("Decoded: "); PyObject_Print(decoded, stdout, 0); printf("\n"); return 0; }
[ "PyString_Decode does this:\nPyObject *PyString_Decode(const char *s,\n Py_ssize_t size,\n const char *encoding,\n const char *errors)\n{\n PyObject *v, *str;\n\n str = PyString_FromStringAndSize(s, size);\n if (str == NULL)\n return NULL;\n v = PyString_AsDecodedString(str, encoding, errors);\n Py_DECREF(str);\n return v;\n}\n\nIOW, it does basically what you're doing in your second example - converts to a string, then decode the string. The problem here arises from PyString_AsDecodedString, rather than PyString_AsDecodedObject. PyString_AsDecodedString does PyString_AsDecodedObject, but then tries to convert the resulting unicode object into a string object with the default encoding (for you, looks like that's ASCII). That's where it fails.\nI believe you'll need to do two calls - but you can use PyString_AsDecodedObject rather than calling the python \"decode\" method. Something like:\n#include <Python.h>\n#include <stdio.h>\n\nint main(int argc, char *argv[])\n{\n char c_string[] = { (char)0x93, 0 };\n PyObject *py_string, *py_unicode;\n\n Py_Initialize();\n\n py_string = PyString_FromStringAndSize(c_string, 1);\n if (!py_string) {\n PyErr_Print();\n return 1;\n }\n py_unicode = PyString_AsDecodedObject(py_string, \"windows_1252\", \"replace\");\n Py_DECREF(py_string);\n\n return 0;\n}\n\nI'm not entirely sure what the reasoning behind PyString_Decode working this way is. A very old thread on python-dev seems to indicate that it has something to do with chaining the output, but since the Python methods don't do the same, I'm not sure if that's still relevant.\n", "You don't want to decode the string into a Unicode representation, you just want to treat it as an array of bytes, right?\nJust use PyString_FromString:\nchar *cstring;\nPyObject *pystring = PyString_FromString(cstring);\n\nThat's all. Now you have a Python str() object. See docs here: https://docs.python.org/2/c-api/string.html\nI'm a little bit confused about how to specify \"str\" or \"unicode.\" They are quite different if you have non-ASCII characters. If you want to decode a C string and you know exactly what character set it's in, then yes, PyString_DecodeString is a good place to start.\n", "Try calling PyErr_Print() in the \"if (!py_string)\" clause. Perhaps the python exception will give you some more information.\n" ]
[ 6, 3, 2 ]
[]
[]
[ "c", "character_encoding", "embedding", "python" ]
stackoverflow_0000213628_c_character_encoding_embedding_python.txt
Q: Python's unittest logic Can someone explain this result to me. The first test succeeds but the second fails, although the variable tested is changed in the first test. >>> class MyTest(unittest.TestCase): def setUp(self): self.i = 1 def testA(self): self.i = 3 self.assertEqual(self.i, 3) def testB(self): self.assertEqual(self.i, 3) >>> unittest.main() .F ====================================================================== FAIL: testB (__main__.MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<pyshell#61>", line 8, in testB AssertionError: 1 != 3 ---------------------------------------------------------------------- Ran 2 tests in 0.016s A: From http://docs.python.org/lib/minimal-example.html : When a setUp() method is defined, the test runner will run that method prior to each test. So setUp() gets run before both testA and testB, setting i to 1 each time. Behind the scenes, the entire test object is actually being re-instantiated for each test, with setUp() being run on each new instantiation before the test is executed. A: Each test is run using a new instance of the MyTest class. That means if you change self in one test, changes will not carry over to other tests, since self will refer to a different instance. Additionally, as others have pointed out, setUp is called before each test. A: If I recall correctly in that test framework the setUp method is run before each test A: From a methodological point of view, individual tests should be independent, otherwise it can produce more hard-to-find bugs. Imagine for instance that testA and testB would be called in a different order.
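A small sketch demonstrating the behavior this question turns on: each test method runs on a fresh instance of the class, so state set in one test never reaches another.

import unittest

class InstanceDemo(unittest.TestCase):
    def test_a(self):
        self.i = 3  # set on THIS instance only

    def test_b(self):
        # runs on a brand-new instance, so the attribute assigned in
        # test_a does not exist here
        self.assertFalse(hasattr(self, "i"))

if __name__ == "__main__":
    unittest.main()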
Python's unittest logic
Can someone explain this result to me. The first test succeeds but the second fails, although the variable tested is changed in the first test. >>> class MyTest(unittest.TestCase): def setUp(self): self.i = 1 def testA(self): self.i = 3 self.assertEqual(self.i, 3) def testB(self): self.assertEqual(self.i, 3) >>> unittest.main() .F ====================================================================== FAIL: testB (__main__.MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<pyshell#61>", line 8, in testB AssertionError: 1 != 3 ---------------------------------------------------------------------- Ran 2 tests in 0.016s
[ "From http://docs.python.org/lib/minimal-example.html :\n\nWhen a setUp() method is defined, the\n test runner will run that method prior\n to each test.\n\nSo setUp() gets run before both testA and testB, setting i to 1 each time. Behind the scenes, the entire test object is actually being re-instantiated for each test, with setUp() being run on each new instantiation before the test is executed.\n", "Each test is run using a new instance of the MyTest class. That means if you change self in one test, changes will not carry over to other tests, since self will refer to a different instance.\nAdditionally, as others have pointed out, setUp is called before each test.\n", "If I recall correctly in that test framework the setUp method is run before each test\n", "From a methodological point of view, individual tests should be independent, otherwise it can produce more hard-to-find bugs. Imagine for instance that testA and testB would be called in a different order.\n" ]
[ 11, 9, 0, 0 ]
[ "The setUp method, as everyone else has said, runs before every test method you write. So, when testB runs, the value of i is 1, not 3.\nYou can also use a tearDown method which runs after every test method. However if one of your tests crashes, your tearDown method will never run.\n" ]
[ -1 ]
[ "python", "unit_testing" ]
stackoverflow_0000072422_python_unit_testing.txt
Q: Python templates for web designers What are some good templating engines for web designers? I definitely have my preferences as to what I'd prefer to work with as a programmer. But web designers seem to have a different way of thinking about things and thus may prefer a different system. So: Web designers: what templating engine do you prefer to work with? programmers: what templating engines have you worked with that made working with web designers easy? A: Django's templating engine is quite decent. It's pretty robust while not stepping on too many toes. If you're working with Python I would recommend it. I don't know how to divorce it from Django, but I doubt it would be very difficult seeing as Django is quite modular. EDIT: Apparently the mini-guide to using Django's templating engine standalone was sitting in front of me already, thanks insin. A: I had good votes when answering this same question's duplicate. My answer was: Jinja2. Nice syntax, good customization possibilities. Integrates well. Can be sandboxed, so you don't have to trust completely your template authors. (Mako can't). It is also pretty fast, with the bonus of compiling your template to bytecode and caching it, as in the demonstration below: >>> import jinja2 >>> print jinja2.Environment().compile('{% for row in data %}{{ row.name | upper }}{% endfor %}', raw=True) from __future__ import division from jinja2.runtime import LoopContext, Context, TemplateReference, Macro, Markup, TemplateRuntimeError, missing, concat, escape, markup_join, unicode_join name = None def root(context, environment=environment): l_data = context.resolve('data') t_1 = environment.filters['upper'] if 0: yield None for l_row in l_data: if 0: yield None yield unicode(t_1(environment.getattr(l_row, 'name'))) blocks = {} debug_info = '1=9' This code has been generated on the fly by Jinja2. Of course the compiler optimizes it further (e.g. removing if 0: yield None) A: Look at Mako. Here's how I cope with web designers. Ask them to mock up the page. In HTML. Use the HTML as the basis for the template, replacing the mocked-up content with ${...} replacements. Fold in loops to handle repeats. The use of if-statements requires negotiation, since the mock-up is one version of the page, and there are usually some explanations for conditional presentation of some material. A: I personally found Cheetah templates to be very designer-friendly. What needed some time was the idea of templates subclassing, and this was something hard to get at the beginning. But a designer creates a full template, duplicating his code... Then you can go clean things up a bit. A: To add to @Jaime Soriano's comment, Genshi is the template engine used in Trac post-0.11. It can be used as a generic templating solution, but has a focus on HTML/XHTML. It has automatic escaping for reducing XSS vulnerabilities. A: My vote goes to Clearsilver, it is the template engine used in Trac before 0.11, it's also used in pages like Google Groups or Orkut. The main benefits of this template engine are that it's very fast and language-independent. A: I've played both roles and at heart I prefer more of a programmer's templating language. However, I freelance for a few graphic designers doing the "heavy lifting" backend and db programming and can tell you that I've had the best luck with XML templating languages (SimpleTAL, Genshi, etc). When I'm trying to be web designer friendly I look for something that can be loaded into Dreamweaver and see results. This allows me to provide all the hooks in a template and let the designer tweak it without worrying about breaking what I've already written. It allows us to share the code and work better together where we're both comfortable with the format. If the designer codes without a WYSIWYG editor, I think your options are less limited and you could go with your own personal favorite.
Python templates for web designers
What are some good templating engines for web designers? I definitely have my preferences as to what I'd prefer to work with as a programmer. But web designers seem to have a different way of thinking about things and thus may prefer a different system. So: Web designers: what templating engine do you prefer to work with? programmers: what templating engines have you worked with that made working with web designers easy?
[ "Django's templating engine is quite decent. It's pretty robust while not stepping on too many toes. If you're working with Python I would recommend it. I don't know how to divorce it from Django, but I doubt it would be very difficult seeing as Django is quite modular.\nEDIT: Apparently the mini-guide to using Django's templating engine standalone was sitting in front of me already, thanks insin.\n", "I had good votes when answering this same question's duplicate.\nMy answer was:\nJinja2.\nNice syntax, good customization possibilities. \nIntegrates well. Can be sandboxed, so you don't have to trust completely your template authors. (Mako can't).\nIt is also pretty fast, with the bonus of compiling your template to bytecode and cache it, as in the demonstration below:\n>>> import jinja2\n>>> print jinja2.Environment().compile('{% for row in data %}{{ row.name | upper }}{% endfor %}', raw=True) \nfrom __future__ import division\nfrom jinja2.runtime import LoopContext, Context, TemplateReference, Macro, Markup, TemplateRuntimeError, missing, concat, escape, markup_join, unicode_join\nname = None\n\ndef root(context, environment=environment):\n l_data = context.resolve('data')\n t_1 = environment.filters['upper']\n if 0: yield None\n for l_row in l_data:\n if 0: yield None\n yield unicode(t_1(environment.getattr(l_row, 'name')))\n\nblocks = {}\ndebug_info = '1=9'\n\nThis code has been generated on the fly by Jinja2. Of course the compiler optmizes it further (e.g. removing if 0: yield None)\n", "Look at Mako.\nHere's how I cope with web designers.\n\nAsk them to mock up the page. In HTML.\nUse the HTML as the basis for the template, replacing the mocked-up content with ${...} replacements.\nFold in loops to handle repeats.\n\nThe use of if-statements requires negotiation, since the mock-up is one version of the page, and there are usually some explanations for conditional presentation of some material.\n", "I personally found Cheetah templates to be very designer-friendly. What needed some time was the idea of templates subclassing, and this was something hard to get at the beginning. But a designer creates a full template, duplicating his code... Then you can go clean things up a bit.\n", "To add to @Jaime Soriano's comment, Genshi is the template engine used in Trac post- 0.11. It's can be used as a generic templating solution, but has a focus on HTML/XHTML. It has automatic escaping for reducing XSS vulnerabilities.\n", "Mi vote goes to Clearsilver, it is the template engine used in Trac before 0.11, it's also used in pages like Google Groups or Orkut. The main benefits of this template engine is that it's very fast and language-independent.\n", "I've played both roles and at heart I prefer more of a programmer's templating language. However, I freelance for a few graphic designers doing the \"heavy lifting\" backed and db programming and can tell you that I've had the best luck with XML templating languages (SimpleTAL, Genshi, etc). \nWhen I'm trying to be web designer friendly I look for something that can be loaded into Dreamweaver and see results. This allows me to provide all the hooks in a template and let the designer tweak it without worrying about breaking what I've already written. It allows us to share the code and work better together where we're both comfortable with the format. \nIf the designer codes without a WYSIWYG editor, I think you're options are less limited and you could go with your own personal favorite.\n" ]
[ 6, 6, 5, 2, 2, 1, 1 ]
[]
[]
[ "python", "templating" ]
stackoverflow_0000214536_python_templating.txt
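The Jinja2 answer shows the compiled output but not basic usage, so here is a minimal usage sketch; the template string and the name variable are illustrative assumptions, not taken from the thread:

    import jinja2

    template = jinja2.Template(u"Hello {{ name | upper }}!")
    print template.render(name=u"designer")  # prints: Hello DESIGNER!

Constructing a Template and rendering it with keyword arguments is the whole designer-facing workflow; loaders, sandboxing and bytecode caching can be layered on later through jinja2.Environment.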
Q: How do I find all cells with a particular attribute in BeautifulSoup? I am trying to develop a script to pull some data from a large number of html tables. One problem is that the number of rows that contain the information to create the column headings is indeterminate. I have discovered that the last row of the set of header rows has the attribute border-bottom for each cell with a value. Thus I decided to find those cells with the attribute border-bottom. As you can see I initialized a list. I intended to find the parent of each of the cells that end up in the borderCells list. However, when I run this code only one cell, that is the first cell in allCells with the attribute border-bottom is added to the list borderCells. For your information allCells has 193 cells, 9 of them have the attr border-bottom. Thus I was expecting nine members in the borderCells list. Any help is appreciated. borderCells=[] for each in allCells: if each.find(attrs={"style": re.compile("border-bottom")}): borderCells.append(each) A: Is there any reason borderCells = soup.findAll("td", style=re.compile("border-bottom")) wouldn't work? It's kind of hard to figure out exactly what you're asking for, since your description of the original tables is pretty ambiguous, and it's not really clear what allCells is supposed to be either. I would suggest giving a representative sample of the HTML you're working with, along with the "correct" results pulled from that table. A: Well, you know, computers are always right. The answer is that the attrs are on different things in the html. What I was modeling on was some html that looked like this: <TD nowrap align="left" valign="bottom"> <DIV style="border-bottom: 1px solid #000000; width: 1%; padding-bottom: 1px"> <B>Name</B> </DIV> </TD> The other places in the file where style="border-bottom etc. appear look like: <TD colspan="2" nowrap align="center" valign="bottom" style="border-bottom: 1px solid 00000"> <B>Location</B> </TD> so now I have to modify the question to figure out how to identify those cells where the attr is at the td level, not the div level. A: Someone took away one of their answers, though I tested it and it worked for me. Thanks for the help. Both answers worked, and I learned a little bit more about how to post questions; after I stare at the code for a while I might learn more about Python and BeautifulSoup.
How do I find all cells with a particular attribute in BeautifulSoup?
I am trying to develop a script to pull some data from a large number of html tables. One problem is that the number of rows that contain the information to create the column headings is indeterminate. I have discovered that the last row of the set of header rows has the attribute border-bottom for each cell with a value. Thus I decided to find those cells with the attribute border-bottom. As you can see I initialized a list. I intended to find the parent of each of the cells that end up in the borderCells list. However, when I run this code only one cell, that is the first cell in allCells with the attribute border-bottom is added to the list borderCells. For your information allCells has 193 cells, 9 of them have the attr border-bottom. Thus I was expecting nine members in the borderCells list. Any help is appreciated. borderCells=[] for each in allCells: if each.find(attrs={"style": re.compile("border-bottom")}): borderCells.append(each)
[ "Is there any reason \n\nborderCells = soup.findAll(\"td\", style=re.compile(\"border-bottom\")})\n\nwouldn't work? It's kind of hard to figure out exactly what you're asking for, since your description of the original tables is pretty ambiguous, and it's not really clear what allCells is supposed to be either.\nI would suggest giving a representative sample of the HTML you're working with, along with the \"correct\" results pulled from that table.\n", "Well you know computers are always right. The answer is that the attrs are on different things in the html. What I was modeling on what some html that looked like this:\n<TD nowrap align=\"left\" valign=\"bottom\">\n<DIV style=\"border-bottom: 1px solid #000000; width: 1%; padding-bottom: 1px\">\n<B>Name</B>\n</DIV>\n</TD>\n\nThe other places in the file where style=\"border-bottom etc look like:\n<TD colspan=\"2\" nowrap align=\"center\" valign=\"bottom\" style=\"border-bottom: 1px solid 00000\">\n<B>Location</B>\n</TD>\n\nso now I have to modify the question to figure out how identify those cells where the attr is at the td level not the div level\n", "Someone took away one of their answers though I tested it and it worked for me. Thanks for the help. Both answers worked and I learned a little bit more about how to post questions and after I stare at the code for a while I might learn more about Python and BeautifulSoup\n" ]
[ 3, 0, 0 ]
[]
[]
[ "beautifulsoup", "parsing", "python" ]
stackoverflow_0000215667_beautifulsoup_parsing_python.txt
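The follow-up in the second answer (match only cells whose own style attribute carries border-bottom, not cells containing a styled div) can be written directly with the classic BeautifulSoup 3 API used in this thread; the html variable and the surrounding markup are assumptions, not from the question:

    import re
    from BeautifulSoup import BeautifulSoup

    soup = BeautifulSoup(html)  # html holds the table markup
    # findAll's attribute filter tests each <td>'s *own* style attribute,
    # so cells whose styling lives on an inner <div> are not matched:
    border_tds = soup.findAll("td", style=re.compile("border-bottom"))
    # the asker wanted the parent row of each matching cell:
    header_rows = [td.findParent("tr") for td in border_tds]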
Q: Python for web development in Apache I've been playing with mod_python in apache2 which seems to work differently than python does in general - there's a bit different syntax and things you need to do. It's not very well documented and after a few days of playing with it, I'm really not seeing the point of mod_python at all, especially when things like php are so well documented and available. I can see how Python works well for system programming, but can anyone give any information as to why I shouldn't just dump python for a web-based application? A: Don't use mod_python. A common mistake is to take mod_python as "mod_php, but for python", and that is not true. Use mod_wsgi instead. Choose a web framework. CherryPy. Pylons. Django. Look at wsgi.org A: mod_python wasn't really made for doing basic web programming. I suggest you go with a framework: django cherrypy web.py My suggestion is to give python some time. It's easy to get simplicity and lack of power confused. :)
Python for web development in Apache
I've been playing with mod_python in apache2 which seems to work differently than python does in general - there's a bit different syntax and things you need to do. It's not very well documented and after a few days of playing with it, I'm really not seeing the point of mod_python at all, especially when things like php are so well documented and available. I can see how Python works well for system programming, but can anyone give any information as to why I shouldn't just dump python for a web-based application?
[ "\nDon't use mod_python. A common mistake is take mod_python as \"mod_php, but for python\" and that is not true. Use mod_wsgi instead.\nChoose a web framework. CherryPy. Pylons. Django.\nLook at wsgi.org\n\n", "mod_python wasn't really made for doing basic webprogramming. I suggest you go with a framework:\n\ndjango\ncherrypy\nweb.py\n\nMy suggestion is to give python some time. It's easy to get simplicity and lack of power confused. :)\n" ]
[ 29, 6 ]
[]
[]
[ "apache", "mod_python", "python" ]
stackoverflow_0000215815_apache_mod_python_python.txt
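The first answer recommends mod_wsgi without showing what it expects, so here is a minimal sketch of the kind of entry point mod_wsgi hosts; the file name hello.py and the greeting are illustrative assumptions:

    # hello.py -- mod_wsgi is pointed at the module-level "application"
    # callable, typically via a WSGIScriptAlias directive in Apache.
    def application(environ, start_response):
        body = "Hello from Python"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]  # WSGI expects an iterable of byte strings

Django, CherryPy and Pylons all ultimately expose a callable with this same signature, which is why the answers treat "pick a framework" and "use mod_wsgi" as one recommendation.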
Q: What's the difference between a parent and a reference property in Google App Engine? From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate. A: There are several differences: All entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group. All writes to a single entity group are serialized, so throughput is limited. The parent entity is set on creation and is fixed. References can be changed at any time. With reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor. Each entity has only a single parent, but can have multiple reference properties. A: The only purpose of entity groups (defined by the parent attribute) is to enable transactions among different entities. If you don't need the transactions, don't use the entity group relationships. I suggest you re-reading the Keys and Entity Groups section of the docs, it took me quite a few reads to grasp the idea. Also watch these talks, among other things they discuss transactions and entity groups: Building Scalable Web Applications with Google App Engine Under the Covers of the Google App Engine Datastore
What's the difference between a parent and a reference property in Google App Engine?
From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate.
[ "There are several differences:\n\nAll entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group.\nAll writes to a single entity group are serialized, so throughput is limited.\nThe parent entity is set on creation and is fixed. References can be changed at any time.\nWith reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor.\nEach entity has only a single parent, but can have multiple reference properties.\n\n", "The only purpose of entity groups (defined by the parent attribute) is to enable transactions among different entities. If you don't need the transactions, don't use the entity group relationships.\nI suggest you re-reading the Keys and Entity Groups section of the docs, it took me quite a few reads to grasp the idea.\nAlso watch these talks, among other things they discuss transactions and entity groups:\n\nBuilding Scalable Web Applications with Google App Engine\nUnder the Covers of the Google App Engine Datastore\n\n" ]
[ 15, 8 ]
[]
[]
[ "api", "google_app_engine", "python" ]
stackoverflow_0000215570_api_google_app_engine_python.txt
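Since the accepted answer lists the differences in prose, a short sketch of both relationship styles in the old google.appengine.ext.db API may make it concrete; the Album/Photo models are invented for illustration:

    from google.appengine.ext import db

    class Album(db.Model):
        title = db.StringProperty()

    class Photo(db.Model):
        caption = db.StringProperty()
        album = db.ReferenceProperty(Album)  # mutable pointer, queryable directly

    a = Album(title="Holiday")
    a.put()
    # parent is fixed at creation and puts the Photo in the Album's entity
    # group, so the two can participate in the same transaction:
    p = Photo(parent=a, caption="Beach", album=a)
    p.put()
    # ancestor() finds everything (directly or indirectly) under a:
    photos = Photo.all().ancestor(a).fetch(10)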
Q: Python embedded in CPP: how to get data back to CPP While working on a C++ project, I was looking for a third party library for something that is not my core business. I found a really good library, doing exactly what's needed, but it is written in Python. I decided to experiment with embedding Python code in C++, using the Boost.Python library. The C++ code looks something like this: #include <string> #include <iostream> #include <boost/python.hpp> using namespace boost::python; int main(int, char **) { Py_Initialize(); try { object module((handle<>(borrowed(PyImport_AddModule("__main__"))))); object name_space = module.attr("__dict__"); object ignored = exec("from myModule import MyFunc\n" "MyFunc(\"some_arg\")\n", name_space); std::string res = extract<std::string>(name_space["result"]); } catch (error_already_set) { PyErr_Print(); } Py_Finalize(); return 0; } A (very) simplified version of the Python code looks like this: import thirdparty def MyFunc(some_arg): result = thirdparty.go() print result Now the problem is this: 'MyFunc' executes fine, i can see the print of 'result'. What i cannot do is read 'result' back from the C++ code. The extract command never finds 'result' in any namespace. I tried defining 'result' as a global, i even tried returning a tuple, but i cannot get it to work. A: First of all, change your function to return the value. Printing it will complicate things since you want to get the value back. Suppose your MyModule.py looks like this: import thirdparty def MyFunc(some_arg): result = thirdparty.go() return result Now, to do what you want, you have to go beyond basic embedding, as the documentation says. Here is the full code to run your function: #include <Python.h> int main(int argc, char *argv[]) { PyObject *pName, *pModule, *pFunc; PyObject *pArgs, *pArg, *pResult; Py_Initialize(); pName = PyString_FromString("MyModule"); /* Error checking of pName left out as exercise */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, "MyFunc"); /* pFunc is a new reference */ if (pFunc) { pArgs = PyTuple_New(1); pArg = PyString_FromString("some parameter"); /* pArg reference stolen here: */ PyTuple_SetItem(pArgs, 0, pArg); pResult = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pResult != NULL) { printf("Result of call: %s\n", PyString_AsString(pResult)); Py_DECREF(pResult); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); fprintf(stderr,"Call failed\n"); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); fprintf(stderr, "Cannot find function"); } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); fprintf(stderr, "Failed to load module"); return 1; } Py_Finalize(); return 0; } A: Based on ΤΖΩΤΖΙΟΥ, Josh and Nosklo's answers i finally got it to work using boost.python: Python: import thirdparty def MyFunc(some_arg): result = thirdparty.go() return result C++: #include <string> #include <iostream> #include <boost/python.hpp> using namespace boost::python; int main(int, char **) { Py_Initialize(); try { object module = import("__main__"); object name_space = module.attr("__dict__"); exec_file("MyModule.py", name_space, name_space); object MyFunc = name_space["MyFunc"]; object result = MyFunc("some_args"); // result is a dictionary std::string val = extract<std::string>(result["val"]); } catch (error_already_set) { PyErr_Print(); } Py_Finalize(); return 0; } Some important points: I changed 'exec' to 'exec_file' out of convenience, it also works with plain 
'exec'. The main reason it failed is that i did not pass a "local" name_space to 'exec' or 'exec_file' - this is now fixed by passing name_space twice. If the python function returns unicode strings, they are not convertible to 'std::string', so i had to suffix all python strings with '.encode('ASCII', 'ignore')'. A: I think what you need is either PyObject_CallObject(<py function>, <args>), which returns the return value of the function you call as a PyObject, or PyRun_String(<expression>, Py_eval_input, <globals>, <locals>) which evaluates a single expression and returns its result. A: You should be able to return the result from MyFunc, which would then end up in the variable you are currently calling "ignored". This eliminates the need to access it in any other way.
Python embedded in CPP: how to get data back to CPP
While working on a C++ project, I was looking for a third party library for something that is not my core business. I found a really good library, doing exactly what's needed, but it is written in Python. I decided to experiment with embedding Python code in C++, using the Boost.Python library. The C++ code looks something like this: #include <string> #include <iostream> #include <boost/python.hpp> using namespace boost::python; int main(int, char **) { Py_Initialize(); try { object module((handle<>(borrowed(PyImport_AddModule("__main__"))))); object name_space = module.attr("__dict__"); object ignored = exec("from myModule import MyFunc\n" "MyFunc(\"some_arg\")\n", name_space); std::string res = extract<std::string>(name_space["result"]); } catch (error_already_set) { PyErr_Print(); } Py_Finalize(); return 0; } A (very) simplified version of the Python code looks like this: import thirdparty def MyFunc(some_arg): result = thirdparty.go() print result Now the problem is this: 'MyFunc' executes fine, i can see the print of 'result'. What i cannot do is read 'result' back from the C++ code. The extract command never finds 'result' in any namespace. I tried defining 'result' as a global, i even tried returning a tuple, but i cannot get it to work.
[ "First of all, change your function to return the value. printing it will complicate things since you want to get the value back. Suppose your MyModule.py looks like this:\nimport thirdparty\n\ndef MyFunc(some_arg):\n result = thirdparty.go()\n return result\n\nNow, to do what you want, you have to go beyond basic embedding, as the documentation says. Here is the full code to run your function:\n#include <Python.h>\n\nint\nmain(int argc, char *argv[])\n{\n PyObject *pName, *pModule, *pFunc;\n PyObject *pArgs, *pArg, *pResult;\n int i;\n\n Py_Initialize();\n pName = PyString_FromString(\"MyModule.py\");\n /* Error checking of pName left out as exercise */\n\n pModule = PyImport_Import(pName);\n Py_DECREF(pName);\n\n if (pModule != NULL) {\n pFunc = PyObject_GetAttrString(pModule, \"MyFunc\");\n /* pFunc is a new reference */\n\n if (pFunc) {\n pArgs = PyTuple_New(0);\n pArg = PyString_FromString(\"some parameter\")\n /* pArg reference stolen here: */\n PyTuple_SetItem(pArgs, 0, pArg);\n pResult = PyObject_CallObject(pFunc, pArgs);\n Py_DECREF(pArgs);\n if (pResult != NULL) {\n printf(\"Result of call: %s\\n\", PyString_AsString(pResult));\n Py_DECREF(pResult);\n }\n else {\n Py_DECREF(pFunc);\n Py_DECREF(pModule);\n PyErr_Print();\n fprintf(stderr,\"Call failed\\n\");\n return 1;\n }\n }\n else {\n if (PyErr_Occurred())\n PyErr_Print();\n fprintf(stderr, \"Cannot find function\");\n }\n Py_XDECREF(pFunc);\n Py_DECREF(pModule);\n }\n else {\n PyErr_Print();\n fprintf(stderr, \"Failed to load module\");\n return 1;\n }\n Py_Finalize();\n return 0;\n}\n\n", "Based on ΤΖΩΤΖΙΟΥ, Josh and Nosklo's answers i finally got it work using boost.python:\nPython:\nimport thirdparty\n\ndef MyFunc(some_arg):\n result = thirdparty.go()\n return result\n\nC++:\n#include <string>\n#include <iostream>\n#include <boost/python.hpp>\n\nusing namespace boost::python;\n\nint main(int, char **) \n{\n Py_Initialize();\n\n try \n {\n object module = import(\"__main__\");\n object name_space = module.attr(\"__dict__\");\n exec_file(\"MyModule.py\", name_space, name_space);\n\n object MyFunc = name_space[\"MyFunc\"];\n object result = MyFunc(\"some_args\");\n\n // result is a dictionary\n std::string val = extract<std::string>(result[\"val\"]);\n } \n catch (error_already_set) \n {\n PyErr_Print();\n }\n\n Py_Finalize();\n return 0;\n}\n\nSome important points:\n\nI changed 'exec' to 'exec_file' out of\nconvenience, it also works with\nplain 'exec'. \nThe main reason it failed is that i\ndid not pass a \"local\" name_sapce to\n'exec' or 'exec_file' - this is now\nfixed by passing name_space twice.\nIf the python function returns\nunicode strings, they are not\nconvertible to 'std::string', so i\nhad to suffix all python strings\nwith '.encode('ASCII', 'ignore')'.\n\n", "I think what you need is either PyObject_CallObject(<py function>, <args>), which returns the return value of the function you call as a PyObject, or PyRun_String(<expression>, Py_eval_input, <globals>, <locals>) which evaluates a single expression and returns its result.\n", "You should be able to return the result from MyFunc, which would then end up in the variable you are currently calling \"ignored\". This eliminates the need to access it in any other way.\n" ]
[ 10, 4, 1, 0 ]
[]
[]
[ "boost_python", "c++", "python" ]
stackoverflow_0000215752_boost_python_c++_python.txt
Q: Why do I receive an ImportError when running one of the CherryPy tutorials I have installed CherryPy 3.1.0. Here is what happens when I try to run tutorial 9: $ cd /Library/Python/2.5/site-packages/cherrypy/tutorial/ $ python tut09_files.py Traceback (most recent call last): File "tut09_files.py", line 48, in <module> from cherrypy.lib import static ImportError: cannot import name static The previous line in the file: import cherrypy passes without error, so it appears that it can find cherrypy on the path. What am I missing? A: This works for me, and I'm also using CherryPy 3.1.0, so I'm not sure what to tell you. Look in your /Library/Python/2.5/site-packages/cherrypy/lib directory for a file named static.py. If it doesn't exist then something has happened to your CherryPy and I'd advise you to reinstall. If it does then you should check the value of sys.path to make sure it's picking up the right version of CherryPy. You can also try running the python interpreter on the command line and then doing a from cherrypy.lib import static to see if you get the same result. A: I had an old CherryPy-2.3.0-py2.5.egg file in my site-packages. After removing the old .egg I could run the tutorial.
Why do I receive an ImportError when running one of the CherryPy tutorials
I have installed CherryPy 3.1.0. Here is what happens when I try to run tutorial 9: $ cd /Library/Python/2.5/site-packages/cherrypy/tutorial/ $ python tut09_files.py Traceback (most recent call last): File "tut09_files.py", line 48, in <module> from cherrypy.lib import static ImportError: cannot import name static The previous line in the file: import cherrypy passes without error, so it appears that it can find cherrypy on the path. What am I missing?
[ "This works for me, and I'm also using CherryPy 3.1.0, so I'm not sure what to tell you.\nLook in your /Library/Python/2.5/site-packages/cherrypy/lib directory for a file named static.py; if this file exists then I'm not sure what to tell you. If it doesn't then something has happened to your CherryPy and I'd advise you to reinstall. If it does then you should check the value of sys.path to make sure it's detecting the right version of CherryPy.\nYou can also try running the python interpreter on the command line and then doing a from cherrypy.lib import static to see if you get the same result.\n", "I had an old CherryPy-2.3.0-py2.5.egg file in my site-packages. After removing the old .egg I could run the tutorial.\n" ]
[ 1, 1 ]
[]
[]
[ "cherrypy", "python" ]
stackoverflow_0000209429_cherrypy_python.txt
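Both answers boil down to "make sure the cherrypy you import is the one you installed"; a quick interactive check along those lines (the version string shown is whatever your installation actually reports):

    >>> import cherrypy
    >>> cherrypy.__file__       # which copy on sys.path is actually imported?
    >>> cherrypy.__version__    # should say 3.1.0, not a stale 2.x egg
    >>> from cherrypy.lib import static

If __file__ points at an old egg, as in the second answer, removing it (or cleaning sys.path) resolves the ImportError.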
Q: Hooking up GUI interface with asynchronous (s)ftp operation Trying to implement a progress dialog window for file uploads that looks like a cross between IE download dialog and Firefox download dialog with a python GUI library on Windows. What asynchronous (S)FTP libraries are there for python? Ideally I should be able to do file upload resumes and track the progress of each parallel file uploads. If I'm running each file uploads in a separate process, how to get the upload status and display it in a progress bar dialog? A: "ftplib" is the standard ftp library built into Python. In Python 2.6, it had a callback parameter added to the method used for uploading. That callback is a function you provide to the library; it is called once for every block that is completed. Your function can send a message to the GUI (perhaps on a different thread/process, using standard inter-thread or inter-process communications) to tell it to update its progress bar. Reference A: If you want a complete example of how to use threads and events to update your GUI with long running tasks using WxPython, have a look at this page. This tutorial is quite useful and helped me write a program similar to yours. A: If your data transfer runs in a separate thread from the GUI, you can use wx.CallAfter() whenever you have to update your progress bar from the data transfer thread. First, using CallAfter() is mandatory, as wxPython functions cannot be called from child threads. Second, this will decouple the execution of the data transfer from the GUI in the main thread. Note that CallAfter() only works for threads, not for separate processes. In that case, using the multiprocessing package should help. A: If you can't use Python 2.6's ftplib, there is a company offering a commercial solution. Chilkat's CKFTP2 costs several hundred dollars, but promises to work with Python 2.5, and offers a function called get_AsyncBytesSent() which returns the information you need. (I didn't see a callback, but it may offer that too.) I haven't used this product. Also consider that if FTP proves to be too hard/expensive, you could always switch to HTTP uploads instead. Chilkat have a free HTTP/HTTPS upload library.
Hooking up GUI interface with asynchronous (s)ftp operation
Trying to implement a progress dialog window for file uploads that looks like a cross between IE download dialog and Firefox download dialog with a python GUI library on Windows. What asynchronous (S)FTP libraries are there for python? Ideally I should be able to do file upload resumes and track the progress of each parallel file uploads. If I'm running each file uploads in a separate process, how to get the upload status and display it in a progress bar dialog?
[ "\"ftplib\" is the standard ftp library built in to Python. In Python 2.6, it had a callback parameter added to the method used for uploading.\nThat callback is a function you provide to the library; it is called once for every block that is completed.\nYour function can send a message to the GUI (perhaps on a different thread/process, using standard inter-thread or inter-process communications) to tell it to update its progress bar.\nReference\n", "If you want a complete example of how to use threads and events to update your GUI with long running tasks using WxPython have a look at this page. This tutorial is quite useful and helped me perform a similar program than yours.\n", "If you data transfer runs in a separate thread from the GUI, you can use wx.CallAfter() whenever you have to update you progress bar from the data transfer thread. \nFirst, using CallAfter() is mandatory as wxPython function cannot be called from child threads.\nSecond, this will decouple the execution of the data transfer from the GUI in the main thread.\nNote that CallAfter() only works for threads, not for separate processes. In that case, using the multiprocessing package should help.\n", "If you can't use Python 2.6's ftplib, there is a company offering a commercial solution.\nChilkat's CKFTP2 costs several hundreds of dollars, but promises to work with Python 2.5, and offers a function call get_AsyncBytesSent() which returns the information you need. (I didn't see a callback, but it may offer that too.)\nI haven't used this product.\nAlso consider that if FTP proves to be too hard/expensive, you could always switch to HTTP uploads instead. Chilkat have a free HTTP/HTTPS upload library.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "ftp", "python", "sftp", "user_interface", "windows" ]
stackoverflow_0000207230_ftp_python_sftp_user_interface_windows.txt
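The first answer's ftplib callback can be shown in a few lines; a hedged sketch for Python 2.6, where upload() and report_progress are invented names and 8192 is just the ftplib default block size:

    import ftplib

    def report_progress(block):
        # called once per block sent; a GUI would forward len(block)
        # to the main thread (e.g. via wx.CallAfter) to advance the bar
        print len(block), "bytes sent"

    def upload(host, user, password, path):
        ftp = ftplib.FTP(host, user, password)
        f = open(path, 'rb')
        ftp.storbinary('STOR ' + path, f, 8192, callback=report_progress)
        f.close()
        ftp.quit()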
Q: Keyboard interruptable blocking queue in Python It seems import Queue Queue.Queue().get(timeout=10) is keyboard interruptible (ctrl-c) whereas import Queue Queue.Queue().get() is not. I could always create a loop; import Queue q = Queue.Queue() while True: try: q.get(timeout=1000) except Queue.Empty: pass but this seems like a strange thing to do. So, is there a way of getting an indefinitely waiting but keyboard interruptible Queue.get()? A: Queue objects have this behavior because they lock using Condition objects from the threading module. So your solution is really the only way to go. However, if you really want a Queue method that does this, you can monkeypatch the Queue class. For example: def interruptable_get(self): while True: try: return self.get(timeout=1000) except Queue.Empty: pass Queue.Queue.interruptable_get = interruptable_get This would let you say q.interruptable_get() instead of interruptable_get(q) although monkeypatching is generally discouraged by the Python community in cases such as these, since a regular function seems just as good. A: This may not apply to your use case at all. But I've successfully used this pattern in several cases: (sketchy and likely buggy, but you get the point). STOP = object() def consumer(q): while True: x = q.get() if x is STOP: return consume(x) def main(): q = Queue.Queue() c = threading.Thread(target=consumer, args=[q]) c.start() try: run_producer(q) except KeyboardInterrupt: q.put(STOP) c.join()
Keyboard interruptable blocking queue in Python
It seems import Queue Queue.Queue().get(timeout=10) is keyboard interruptible (ctrl-c) whereas import Queue Queue.Queue().get() is not. I could always create a loop; import Queue q = Queue.Queue() while True: try: q.get(timeout=1000) except Queue.Empty: pass but this seems like a strange thing to do. So, is there a way of getting an indefinitely waiting but keyboard interruptible Queue.get()?
[ "Queue objects have this behavior because they lock using Condition objects form the threading module. So your solution is really the only way to go.\nHowever, if you really want a Queue method that does this, you can monkeypatch the Queue class. For example:\ndef interruptable_get(self):\n while True:\n try:\n return self.get(timeout=1000)\n except Queue.Empty:\n pass\nQueue.interruptable_get = interruptable_get\n\nThis would let you say\nq.interruptable_get()\n\ninstead of\ninterruptable_get(q)\n\nalthough monkeypatching is generally discouraged by the Python community in cases such as these, since a regular function seems just as good.\n", "This may not apply to your use case at all. But I've successfully used this pattern in several cases: (sketchy and likely buggy, but you get the point).\nSTOP = object()\n\ndef consumer(q):\n while True:\n x = q.get()\n if x is STOP:\n return\n consume(x)\n\ndef main()\n q = Queue()\n c=threading.Thread(target=consumer,args=[q])\n\n try:\n run_producer(q)\n except KeybordInterrupt:\n q.enqueue(STOP)\n c.join()\n\n" ]
[ 6, 4 ]
[]
[]
[ "concurrency", "multithreading", "python", "python_2.x" ]
stackoverflow_0000212797_concurrency_multithreading_python_python_2.x.txt
Q: Python/editline on OS X: £ sign seems to be bound to ed-prev-word On Mac OS X I can’t enter a pound sterling sign (£) into the Python interactive shell. * Mac OS X 10.5.5 * Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) * European keyboard (£ is shift-3) When I type shift-3 in the Python interactive shell, I seem to invoke the previous word function, i.e. the cursor will move to the start of the last “word” (i.e. space-delimited item) typed on the line. When I’m back in the bash shell, typing shift-3 just produces a £, as expected. This version of Python apparently uses editline for its interactive shell, as opposed to readline. I’m guessing that one of the default editline key bindings binds shift-3 (or whatever editline sees when I type shift-3) to the ed-prev-word command. I’ve tried a few things in my ~/.editrc file to remove this binding, and they don’t have any effect: bind -r £ bind -r \243 bind -r \156 And another that causes a bus error: bind £ \243 Any ideas? A: This may be an editline issue; libedit may not accept UTF-8 characters: http://tracker.firebirdsql.org/browse/CORE-362#action_11593 http://marc.info/?t=119056021900002&r=1&w=2
Python/editline on OS X: £ sign seems to be bound to ed-prev-word
On Mac OS X I can’t enter a pound sterling sign (£) into the Python interactive shell. * Mac OS X 10.5.5 * Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) * European keyboard (£ is shift-3) When I type shift-3 in the Python interactive shell, I seem to invoke the previous word function, i.e. the cursor will move to the start of the last “word” (i.e. space-delimited item) typed on the line. When I’m back in the bash shell, typing shift-3 just produces a £, as expected. This version of Python apparently uses editline for its interactive shell, as opposed to readline. I’m guessing that one of the default editline key bindings binds shift-3 (or whatever editline sees when I type shift-3) to the ed-prev-word command. I’ve tried a few things in my ~/.editrc file to remove this binding, and they don’t have any effect: bind -r £ bind -r \243 bind -r \156 And another that causes a bus error: bind £ \243 Any ideas?
[ "This may be an editline issue; libedit may not accept UTF-8 characters:\n\nhttp://tracker.firebirdsql.org/browse/CORE-362#action_11593\nhttp://marc.info/?t=119056021900002&r=1&w=2\n\n" ]
[ 1 ]
[]
[]
[ "editline", "macos", "python", "terminal", "unix" ]
stackoverflow_0000217020_editline_macos_python_terminal_unix.txt
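A quick way to confirm the diagnosis the question hints at, i.e. that this Python was built against editline (libedit) rather than GNU readline; checking for the 'libedit' marker in the module docstring is a commonly used heuristic on OS X, not an official API:

    >>> import readline
    >>> 'libedit' in (readline.__doc__ or '')
    True

True means the editline backend is in use, so bindings live in ~/.editrc rather than ~/.inputrc.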
Q: How close are development webservers to production webservers? Most python frameworks will have a development webserver of some kind that will have a warning that it isn't for use as production servers. How much different do they tend to be from their production equivalents? I haven't quite decided which framework to go with, much less what production server to use, so it's kinda difficult for me to pin this down to a "compare development server x to production server y." So with that said, let me make the question a little bit more precise: In your past experience with a python framework, how much time did you have to spend getting your application up and running with a production system once its been developed on a development server? Or did you skip the development server and develop your app on a server that's more like what you will use in production? A: The lower environments should try to match the production environment as closely as possible given the resources available. This applies to all development efforts regardless of whether they are python-based or even web-based. In practical terms, most organizations are not willing to spend that type of money. In this case try to make at least the environment that is directly below production as close to production as possible. Some of the variables to keep in mind are: many times there are multiple machines (app server, database server, web server, load balancers, fire walls, etc) in production. Keep these all in mind. Operating Systems number of CPUs. Moving from a one CPU lower environment to a multi core production environment can expose multi-threading issues that were not tested load balancing. Many times lower environments are not load balanced. If you are replicating sessions (for instance) across multiple production app servers, you should try to do the same in a lower environment Software / library versions A: Generally, they are the same in terms of the settings which are required to run the applications, which include the environment setting. However, the clients generally have dev systems which are less powerful in terms of processing power and other h/w resources. I have seen them using virtual servers in the dev environment since they generally have multiple projects going on in parallel, and this helps them reduce cost. A: I develop with django. The production server we have is remote, so it's a pain to be using it for development. Thus, at first, I created a vm and tried to match as closely as I could the environment of the prod server. At some point that vm got hosed (due to an unrelated incident). I took stock of the situation at that time and realized there really was no good reason to be using a customized vm for development. Since the resources available to the app weren't the same as the prod server, it was no good for timing queries anyway (in an absolute sense). That said, I now use django's built in dev server with sqlite for development, and apache/wsgi and postgresql for production. As long as the python dependencies are met on both sides, it's 100% compatible. The only potential problem would be writing raw sql instead of using the orm. A: Ideally, the logical configuration of the development, test, and production server should be the same. They should have the same version of OS, web server, and all other software assets used to run the application. 
However, depending on how strictly your environment is controlled, things will creep in: hand-copied images/scripts etc. on the dev machine that do not make it through test and/or production. To minimize this you probably need some sort of push script that can move you from one stage to the next, i.e. PushVersionDev, PushVersionTest, PushVersionProd. Ideally this should be the same script, with parameters for the target server(s), representing all that you need to move the app through the various stages. I would recommend a read of Theo Schlossnagle's book Scalable Internet Architectures for more ideas on the matter. To answer your question directly: once you get your application tested and implemented, the time to roll to production is not great. Deploy the OS, web server, supporting frameworks if they need installation, and the application, and you are good to go. From bare metal I have seen linux servers go online in 1 hour, windows in about 90 minutes; if you have the OS and web server already going, even less: minutes. A: Your staging environment should mimic your production environment. Development is more like a playground, and the control on the development environment should not be quite so strict. However, the development environment should periodically be refreshed from the production environment (e.g., prod data copied to the dev db, close the ports on dev that are closed on prod, etc.). Ideally, dev, stage, and prod are all on separate machines. The separate machines can be separate physical boxes, or virtual machines on the same physical box, depending on budget/needs.
How close are development webservers to production webservers?
Most python frameworks will have a development webserver of some kind that will have a warning that it isn't for use as production servers. How much different do they tend to be from their production equivalents? I haven't quite decided which framework to go with, much less what production server to use, so it's kinda difficult for me to pin this down to a "compare development server x to production server y." So with that said, let me make the question a little bit more precise: In your past experience with a python framework, how much time did you have to spend getting your application up and running with a production system once its been developed on a development server? Or did you skip the development server and develop your app on a server that's more like what you will use in production?
[ "The lower environments should try to match the production environment as closely as possible given the resources available. This applies to all development efforts regardless of whether they are python-based or even web-based. In practical terms, most organizations are not willing to spend that type of money. In this case try to make at least the environment that is directly below production as close to production as possible.\nSome of the variable to keep in mind are:\n\nmany times there are multiple machines (app server, database server, web server, load balancers, fire walls, etc) in a production. Keep these all in mind.\nOperating Systems\nnumber of CPUs. Moving from a one CPU lower environment to a multi core production environment can expose multi-threading issues that were not tested\nload balancing. Many times lower environments are not load balanced. If you are replicating sessions (for instance) across multiple production app servers, you should try to do the same in a lower environment\nSoftware / library versions\n\n", "Generally, they are same in terms of the settings which are required to run the applications which include the environment setting. \nHowever, the clients genereally have dev systems which are less powerful in terms of the processing power and other h/w resources. I have seen using them virtual servers in dev evironment since they generally have multiple projects going on in parallel an this helps them reducing the cost.\n", "I develop with django. The production server we have is remote, so it's a pain to be using it for development. Thus, at first, I created a vm and tried to match as closely as I could the environment of the prod server. At some point that vm got hosed (due to an unrelated incident). I took stock of the situation at that time and realized there really was no good reason to be using a customized vm for development. Since the resources available to the app weren't the same as the prod server, it was no good for timing queries anyway (in an absolute sense).\nThat said, I now use django's built in dev server with sqlite for development, and apache/wsgi and postgresql for production. As long as the python dependencies are met on both sides, it's 100% compatible. The only potential problem would be writing raw sql instead of using the orm.\n", "Ideally, the logical configuration of the development, test, and production server should be the same. They should have the same version of OS, web server, and all other software assets used to run the application. However, depending on how strong your environment things will crop - hand copied images/scripts etc on the dev machine that do not make it through test and or production.\nto minimize this you probably need some sort of push script that can move you from one stage to the next, ie PushVersionDev, PushVesionTest,PushVersionProd. ideally this should be the same script with parameters for target server(s) representing all that you need to move the app through the various stages. \nI would recommend a read of Theo Schlossnagle's book Scalable Internet Architectures for more ideas on the matter.\nTo answer your question directly....once you get your application tested and implemented, the time to roll to productoin is not great - deploy OS, web server, supporting frameworks if they need installation, application and you are good to go. From bare metal I have seen linux servers go online in 1 hour, windows about 90 minutes. 
if you have the OS and web server going even less..minutes.\n", "Your staging environment should mimic your production environment. Development is more like a playground, and the control on the development environment should not be quite so strict. However, the development environment should periodically be refreshed from the production environment (e.g,. prod data copied to the dev db, close the ports on dev that are closed on prod, etc.).\nIdeally, dev, stage, and prod are all on separate machines. The separate machines can be separate physical boxes, or virtual machines on the same physical box, depending on budget/needs.\n" ]
[ 5, 2, 2, 1, 0 ]
[]
[]
[ "python", "web_frameworks", "webserver" ]
stackoverflow_0000216489_python_web_frameworks_webserver.txt
Q: Smart Sudoku Golf The point of this question is to create the shortest not abusively slow Sudoku solver. This is defined as: don't recurse when there are spots on the board which can only possibly be one digit. Here is the shortest I have so far in python: r=range(81) s=range(1,10) def R(A): bzt={} for i in r: if A[i]!=0: continue; h={} for j in r: h[A[j]if(j/9==i/9 or j%9==i%9 or(j/27==i/27)and((j%9/3)==(i%9/3)))else 0]=1 bzt[9-len(h)]=h,i for l,(h,i)in sorted(bzt.items(),key=lambda x:x[0]): for j in s: if j not in h: A[i]=j if R(A):return 1 A[i]=0;return 0 print A;return 1 R(map(int, "080007095010020000309581000500000300400000006006000007000762409000050020820400060")) The last line I take to be part of the cmd line input, it can be changed to: import sys; R(map(int, sys.argv[1]); This is similar to other sudoku golf challenges, except that I want to eliminate unnecessary recursion. Any language is acceptable. The challenge is on! A: I haven't really made much of a change - the algorithm is identical, but here are a few further micro-optimisations you can make to your python code. No need for !=0, 0 is false in a boolean context. a if c else b is more expensive than using [a,b][c] if you don't need short-circuiting, hence you can use h[ [0,A[j]][j/9.. rest of boolean condition]. Even better is to exploit the fact that you want 0 in the false case, and so multiply by the boolean value (treated as either 0*A[j] (ie. 0) or 1*A[j] (ie. A[j]). You can omit spaces between digits and identifiers. eg "9 or" -> "9or" You can omit the key to sorted(). Since you're sorting on the first element, a normal sort will produce effectively the same order (unless you're relying on stability, which it doesn't look like) You can save a couple of bytes by omitting the .items() call, and just assign h,i in the next line to z[l] You only use s once - no point in using a variable. You can also avoid using range() by selecting the appropriate slice of r instead (r[1:10]) j not in h can become (j in h)-1 (relying on True == 1 in integer context) [Edit] You can also replace the first for loop's construction of h with a dict constructor and a generator expression. This lets you compress the logic onto one line, saving 10 bytes in total. More generally, you probably want to think about ways to change the algorithm to reduce the levels of nesting. Every level gives an additional byte per line within in python, which accumulates. Here's what I've got so far (I've switched to 1 space per indent so that you can get an accurate picture of required characters. Currently it's weighing in at 288 278, which is still pretty big. r=range(81) def R(A): z={} for i in r: if 0==A[i]:h=dict((A[j]*(j/9==i/9or j%9==i%9or j/27==i/27and j%9/3==i%9/3),1)for j in r);z[9-len(h)]=h,i for l in sorted(z): h,i=z[l] for j in r[1:10]: if(j in h)-1: A[i]=j if R(A):return A A[i]=0;return[] return A A: r=range(81) def R(A): if(0in A)-1:yield A;return def H(i):h=set(A[j]for j in r if j/9==i/9or j%9==i%9or j/27==i/27and j%9/3==i%9/3);return len(h),h,i l,h,i=max(H(i)for i in r if not A[i]) for j in r[1:10]: if(j in h)-1: A[i]=j for S in R(A):yield S A[i]=0 269 characters, and it finds all solutions. 
Usage (not counted in char count): sixsol = map(int, "300000080001093000040780003093800012000040000520006790600021040000530900030000051") for S in R(sixsol): print S A: I've just trimmed the python a bit here: r=range(81);s=range(1,10) def R(A): z={} for i in r: if A[i]!=0:continue h={} for j in r:h[A[j]if j/9==i/9 or j%9==i%9 or j/27==i/27 and j%9/3==i%9/3 else 0]=1 z[9-len(h)]=h,i for l,(h,i)in sorted(z.items(),cmp,lambda x:x[0]): for j in s: if j not in h: A[i]=j if R(A):return A A[i]=0;return[] return A print R(map(int, "080007095010020000309581000500000300400000006006000007000762409000050020820400060")) This is a hefty 410 characters, 250 if you don't count whitespace. If you just turn it into perl you'll undoubtedly be better than mine!
Smart Sudoku Golf
The point of this question is to create the shortest not abusively slow Sudoku solver. This is defined as: don't recurse when there are spots on the board which can only possibly be one digit. Here is the shortest I have so far in python: r=range(81) s=range(1,10) def R(A): bzt={} for i in r: if A[i]!=0: continue; h={} for j in r: h[A[j]if(j/9==i/9 or j%9==i%9 or(j/27==i/27)and((j%9/3)==(i%9/3)))else 0]=1 bzt[9-len(h)]=h,i for l,(h,i)in sorted(bzt.items(),key=lambda x:x[0]): for j in s: if j not in h: A[i]=j if R(A):return 1 A[i]=0;return 0 print A;return 1 R(map(int, "080007095010020000309581000500000300400000006006000007000762409000050020820400060")) The last line I take to be part of the cmd line input, it can be changed to: import sys; R(map(int, sys.argv[1]); This is similar to other sudoku golf challenges, except that I want to eliminate unnecessary recursion. Any language is acceptable. The challenge is on!
[ "I haven't really made much of a change - the algorithm is identical, but here are a few further micro-optimisations you can make to your python code.\n\nNo need for !=0, 0 is false in a boolean context.\na if c else b is more expensive than using [a,b][c] if you don't need short-circuiting, hence you can use h[ [0,A[j]][j/9.. rest of boolean condition]. Even better is to exploit the fact that you want 0 in the false case, and so multiply by the boolean value (treated as either 0*A[j] (ie. 0) or 1*A[j] (ie. A[j]).\nYou can omit spaces between digits and identifiers. eg \"9 or\" -> \"9or\"\nYou can omit the key to sorted(). Since you're sorting on the first element, a normal sort will produce effectively the same order (unless you're relying on stability, which it doesn't look like)\nYou can save a couple of bytes by omitting the .items() call, and just assign h,i in the next line to z[l]\nYou only use s once - no point in using a variable. You can also avoid using range() by selecting the appropriate slice of r instead (r[1:10])\nj not in h can become (j in h)-1 (relying on True == 1 in integer context)\n[Edit] You can also replace the first for loop's construction of h with a dict constructor and a generator expression. This lets you compress the logic onto one line, saving 10 bytes in total.\n\nMore generally, you probably want to think about ways to change the algorithm to reduce the levels of nesting. Every level gives an additional byte per line within in python, which accumulates.\nHere's what I've got so far (I've switched to 1 space per indent so that you can get an accurate picture of required characters. Currently it's weighing in at 288 278, which is still pretty big.\nr=range(81)\ndef R(A):\n z={} \n for i in r:\n if 0==A[i]:h=dict((A[j]*(j/9==i/9or j%9==i%9or j/27==i/27and j%9/3==i%9/3),1)for j in r);z[9-len(h)]=h,i\n for l in sorted(z):\n h,i=z[l]\n for j in r[1:10]:\n if(j in h)-1:\n A[i]=j\n if R(A):return A\n A[i]=0;return[]\n return A\n\n", "r=range(81)\ndef R(A):\n if(0in A)-1:yield A;return\n def H(i):h=set(A[j]for j in r if j/9==i/9or j%9==i%9or j/27==i/27and j%9/3==i%9/3);return len(h),h,i\n l,h,i=max(H(i)for i in r if not A[i])\n for j in r[1:10]:\n if(j in h)-1:\n A[i]=j\n for S in R(A):yield S\n A[i]=0\n\n269 characters, and it finds all solutions. Usage (not counted in char count):\nsixsol = map(int, \"300000080001093000040780003093800012000040000520006790600021040000530900030000051\")\nfor S in R(sixsol):\n print S\n\n", "I've just trimmed the python a bit here:\nr=range(81);s=range(1,10)\ndef R(A):\n z={}\n for i in r:\n if A[i]!=0:continue\n h={}\n for j in r:h[A[j]if j/9==i/9 or j%9==i%9 or j/27==i/27 and j%9/3==i%9/3 else 0]=1\n z[9-len(h)]=h,i\n for l,(h,i)in sorted(z.items(),cmp,lambda x:x[0]):\n for j in s:\n if j not in h:\n A[i]=j\n if R(A):return A\n A[i]=0;return[]\n return A\n\nprint R(map(int, \"080007095010020000309581000500000300400000006006000007000762409000050020820400060\"))\n\nThis is a hefty 410 characters, 250 if you don't count whitespace. If you just turn it into perl you'll undoubtedly be better than mine!\n" ]
[ 3, 3, 2 ]
[]
[]
[ "code_golf", "perl", "python", "sudoku" ]
stackoverflow_0000216141_code_golf_perl_python_sudoku.txt
Q: What's a good library to manipulate Apache2 config files? I'd like to create a script to manipulate Apache2 configuration directly, reading and writing its properties (like adding a new VirtualHost, changing settings of one that already exists). Are there any libs out there, for Perl, Python or Java that automates that task? A: Rather than manipulate the config files, you can use mod_perl to embed Perl directly into the config files. This could allow you, for example, to read required vhosts out of a database. See Configure Apache with Perl Example for a quick example and Apache Configuration in Perl for all the details. A: In Perl, you've got at least 2 modules for that: Apache::ConfigFile Apache::Admin::Config A: Look at Augeas; it's not specific to Apache httpd config files, it's a generic config-file "editor" API. Among its major selling points: it keeps comments, is happy for other tools to alter the files, and will refuse to let you save broken files. Also, the fact that you can use the same API in all the languages you asked about, and that you can edit other config files using the same APIs, are both major advantages IMO. A: This is the ultimate Apache configurator: http://perl.apache.org/ exposes many if not all Apache internals to programs written in Perl. For instance: http://perl.apache.org/docs/2.0/api/Apache2/Directive.html (Of course it can do much more than just configure it.) On the other hand, it needs to be loaded and run within Apache; it's not a config file parser/editor. A: Try the Apache::ConfigFile Perl module. A: Also see Config::General, which claims to be fully compatible with Apache configuration files. I use it to parse my Apache configuration files for automatic regression testing after configuration changes.
What's a good library to manipulate Apache2 config files?
I'd like to create a script to manipulate Apache2 configuration directly, reading and writing its properties (like adding a new VirtualHost, changing settings of one that already exists). Are there any libs out there, for Perl, Python or Java that automates that task?
[ "Rather than manipulate the config files, you can use mod_perl to embed Perl directly into the config files. This could allow you, for example, to read required vhosts out of a database.\nSee Configure Apache with Perl Example for quick example and Apache Configuration in Perl for all the details.\n", "In Perl, you've got at least 2 modules for that:\nApache::ConfigFile\nApache::Admin::Config \n", "Look at Augeas, it's not specifically for Apache-httpd config. files it's just a generic config. file \"editor\" API. One of it's major selling points is that it will keep comments/etc. is happy for other tools to alter the files and will refuse to let you save broken files.\nAlso the fact that you can use the same API in all the languages you asked about, and that you can edit other config. files using the same APIs are both major advantages IMO.\n", "This is the ultimate Apache configurator:\nhttp://perl.apache.org/\nexposes many if not all Apache internals to programs written in Perl.\nFor instance: http://perl.apache.org/docs/2.0/api/Apache2/Directive.html\n(Of course that it can do much much more than just configuring it).\nOn the other hand, it needs to be loaded and runs within Apache, it's not a config file parser/editor.\n", "Try the Apache::ConfigFile Perl module.\n", "Also see Config::General, which claims to be fully compatible with Apache configuration files. I use it to parse my Apache configuration files for automatic regression testing after configuration changes.\n" ]
[ 7, 7, 4, 2, 2, 0 ]
[]
[]
[ "apache", "java", "perl", "python" ]
stackoverflow_0000215542_apache_java_perl_python.txt
Q: String Simple Substitution What's the easiest way for me to convert the simpler regex format that most users are used to into the correct re python regex string? As an example, I need to convert this: string = "*abc+de?" to this: string = ".*abc.+de.?" Of course I could loop through the string and build up another string character by character, but that's surely an inefficient way of doing this? A: Those don't look like regexps you're trying to translate, they look more like unix shell globs. Python has a module for doing this already. It doesn't know about the "+" syntax you used, but neither does my shell, and I think the syntax is nonstandard. >>> import fnmatch >>> fnmatch.fnmatch("fooabcdef", "*abcde?") True >>> help(fnmatch.fnmatch) Help on function fnmatch in module fnmatch: fnmatch(name, pat) Test whether FILENAME matches PATTERN. Patterns are Unix shell style: * matches everything ? matches any single character [seq] matches any character in seq [!seq] matches any char not in seq An initial period in FILENAME is not special. Both FILENAME and PATTERN are first case-normalized if the operating system requires it. If you don't want this, use fnmatchcase(FILENAME, PATTERN). >>> A: .replacing() each of the wildcards is the quick way, but what if the wildcarded string contains other regex special characters? eg. someone searching for 'my.thing*' probably doesn't mean for that '.' to match any character. And in the worst case things like match-group-creating parentheses are likely to break your final handling of the regex matches. re.escape can be used to put literal characters into regexes. You'll have to split out the wildcard characters first though. The usual trick for that is to use re.split with a matching bracket, resulting in a list in the form [literal, wildcard, literal, wildcard, literal...]. Example code: wildcards= re.compile('([?*+])') escapewild= {'?': '.', '*': '.*', '+': '.+'} def escapePart((parti, part)): if parti%2==0: # even items are literals return re.escape(part) else: # odd items are wildcards return escapewild[part] def convertWildcardedToRegex(s): parts= map(escapePart, enumerate(wildcards.split(s))) return '^%s$' % (''.join(parts)) A: You'll probably only be doing this substitution occasionally, such as each time a user enters a new search string, so I wouldn't worry about how efficient the solution is. You need to generate a list of the replacements you need to convert from the "user format" to a regex. For ease of maintenance I would store these in a dictionary, and like @Konrad Rudolph I would just use the replace method: def wildcard_to_regex(wildcard): replacements = { '*': '.*', '?': '.?', '+': '.+', } regex = wildcard for (wildcard_pattern, regex_pattern) in replacements.items(): regex = regex.replace(wildcard_pattern, regex_pattern) return regex Note that this only works for simple character replacements, although other complex code can at least be hidden in the wildcard_to_regex function if necessary. (Also, I'm not sure that ? should translate to .? -- I think normal wildcards have ? as "exactly one character", so its replacement should be a simple . -- but I'm following your example.) A: I'd use replace: def wildcard_to_regex(str): return str.replace("*", ".*").replace("?", ".?").replace("#", "\d") This probably isn't the most efficient way but it should be efficient enough for most purposes. Notice that some wildcard formats allow character classes which are more difficult to handle. A: Here is a Perl example of doing this. 
It is simply using a table to replace each wildcard construct with the corresponding regular expression. I've done this myself previously, but in C. It shouldn't be too hard to port to Python.
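One more stdlib trick worth knowing here: fnmatch can also hand you the equivalent regex source directly via fnmatch.translate(), which covers * and ? (though not the non-standard + from the question):

import fnmatch, re

pattern = fnmatch.translate("*abcde?")
print pattern                                # the generated regex source
print bool(re.match(pattern, "fooabcdef"))   # True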
String Simple Substitution
What's the easiest way for me to convert the simpler regex format that most users are used to into the correct re python regex string? As an example, I need to convert this: string = "*abc+de?" to this: string = ".*abc.+de.?" Of course I could loop through the string and build up another string character by character, but that's surely an inefficient way of doing this?
[ "Those don't look like regexps you're trying to translate, they look more like unix shell globs. Python has a module for doing this already. It doesn't know about the \"+\" syntax you used, but neither does my shell, and I think the syntax is nonstandard.\n>>> import fnmatch\n>>> fnmatch.fnmatch(\"fooabcdef\", \"*abcde?\")\nTrue\n>>> help(fnmatch.fnmatch)\nHelp on function fnmatch in module fnmatch:\n\nfnmatch(name, pat)\n Test whether FILENAME matches PATTERN.\n\n Patterns are Unix shell style:\n\n * matches everything\n ? matches any single character\n [seq] matches any character in seq\n [!seq] matches any char not in seq\n\n An initial period in FILENAME is not special.\n Both FILENAME and PATTERN are first case-normalized\n if the operating system requires it.\n If you don't want this, use fnmatchcase(FILENAME, PATTERN).\n\n>>> \n\n", ".replacing() each of the wildcards is the quick way, but what if the wildcarded string contains other regex special characters? eg. someone searching for 'my.thing*' probably doesn't mean that '.' to match any character. And in the worst case things like match-group-creating parentheses are likely to break your final handling of the regex matches.\nre.escape can be used to put literal characters into regexes. You'll have to split out the wildcard characters first though. The usual trick for that is to use re.split with a matching bracket, resulting in a list in the form [literal, wildcard, literal, wildcard, literal...].\nExample code:\nwildcards= re.compile('([?*+])')\nescapewild= {'?': '.', '*': '.*', '+': '.+'}\n\ndef escapePart((parti, part)):\n if parti%2==0: # even items are literals\n return re.escape(part)\n else: # odd items are wildcards\n return escapewild[part]\n\ndef convertWildcardedToRegex(s):\n parts= map(escapePart, enumerate(wildcards.split(s)))\n return '^%s$' % (''.join(parts))\n\n", "You'll probably only be doing this substitution occasionally, such as each time a user enters a new search string, so I wouldn't worry about how efficient the solution is.\nYou need to generate a list of the replacements you need to convert from the \"user format\" to a regex. For ease of maintenance I would store these in a dictionary, and like @Konrad Rudolph I would just use the replace method:\ndef wildcard_to_regex(wildcard):\n replacements = {\n '*': '.*',\n '?': '.?',\n '+': '.+',\n }\n regex = wildcard\n for (wildcard_pattern, regex_pattern) in replacements.items():\n regex = regex.replace(wildcard_pattern, regex_pattern)\n return regex\n\nNote that this only works for simple character replacements, although other complex code can at least be hidden in the wildcard_to_regex function if necessary. \n(Also, I'm not sure that ? should translate to .? -- I think normal wildcards have ? as \"exactly one character\", so its replacement should be a simple . -- but I'm following your example.)\n", "I'd use replace:\ndef wildcard_to_regex(str):\n return str.replace(\"*\", \".*\").replace(\"?\", .?\").replace(\"#\", \"\\d\")\n\nThis probably isn't the most efficient way but it should be efficient enough for most purposes. Notice that some wildcard formats allow character classes which are more difficult to handle.\n", "Here is a Perl example of doing this. It is simply using a table to replace each wildcard construct with the corresponding regular expression. I've done this myself previously, but in C. It shouldn't be too hard to port to Python.\n" ]
[ 5, 2, 1, 0, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0000217881_python_string.txt
Q: Troubleshooting py2exe packaging problem I've written a setup.py script for py2exe, generated an executable for my python GUI application and I have a whole bunch of files in the dist directory, including the app, w9xopen.exe and MSVCR71.dll. When I try to run the application, I get an error message that just says "see the logfile for details". The only problem is, the log file is empty. The closest error I've seen is "The following modules appear to be missing" but I'm not using any of those modules as far as I know (especially since they seem to be for databases I'm not using) but digging up on Google suggests that these are relatively benign warnings. I've written and packaged a console application as well as a wxpython one with py2exe and both applications have compiled and run successfully. I am using a new python toolkit called dabo, which in turn makes use of wxpython modules so I can't figure out what I'm doing wrong. Where do I start investigating the problem since obviously the log file hasn't been too useful? Edit 1: The python version is 2.5. py2exe is 0.6.8. There were no significant build errors. The only one was the bit about "The following modules appear to be missing..." which were non-critical errors since the packages listed were ones I was definitely not using and shouldn't stop the execution of the app either. Running the executable produced a logfile which was completely empty. Previously it had an error about locales which I've since fixed but clearly something is wrong as the executable wasn't running. The setup.py file is based quite heavily on the original setup.py generated by running their "app wizard" and looking at the example that Ed Leafe and some others posted. Yes, I have a log file and it's not printing anything for me to use, which is why I'm asking if there's any other troubleshooting avenue I've missed which will help me find out what's going on. I have even written a bare bones test application which simply produces a bare bones GUI - an empty frame with some default menu options. The code written itself is only 3 lines and the rest is in the 3rd party toolkit. Again, that compiled into an exe (as did my original app) but simply did not run. There was no error output in the run time log file either. Edit 2: It turns out that switching from "windows" to "console" for initial debugging purposes was insightful. I've now got a basic running test app and on to compiling the real app! 
The test app: import dabo app = dabo.dApp() app.start() The setup.py for test app: import os import sys import glob from distutils.core import setup import py2exe import dabo.icons daboDir = os.path.split(dabo.__file__)[0] # Find the location of the dabo icons: iconDir = os.path.split(dabo.icons.__file__)[0] iconSubDirs = [] def getIconSubDir(arg, dirname, fnames): if ".svn" not in dirname and dirname[-1] != "\\": icons = glob.glob(os.path.join(dirname, "*.png")) if icons: subdir = (os.path.join("resources", dirname[len(arg)+1:]), icons) iconSubDirs.append(subdir) os.path.walk(iconDir, getIconSubDir, iconDir) # locales: localeDir = "%s%slocale" % (daboDir, os.sep) locales = [] def getLocales(arg, dirname, fnames): if ".svn" not in dirname and dirname[-1] != "\\": mo_files = tuple(glob.glob(os.path.join(dirname, "*.mo"))) if mo_files: subdir = os.path.join("dabo.locale", dirname[len(arg)+1:]) locales.append((subdir, mo_files)) os.path.walk(localeDir, getLocales, localeDir) data_files=[("resources", glob.glob(os.path.join(iconDir, "*.ico"))), ("resources", glob.glob("resources/*"))] data_files.extend(iconSubDirs) data_files.extend(locales) setup(name="basicApp", version='0.01', description="Test Dabo Application", options={"py2exe": { "compressed": 1, "optimize": 2, "bundle_files": 1, "excludes": ["Tkconstants","Tkinter","tcl", "_imagingtk", "PIL._imagingtk", "ImageTk", "PIL.ImageTk", "FixTk", "kinterbasdb", "MySQLdb", 'Numeric', 'OpenGL.GL', 'OpenGL.GLUT', 'dbGadfly', 'email.Generator', 'email.Iterators', 'email.Utils', 'kinterbasdb', 'numarray', 'pymssql', 'pysqlite2', 'wx.BitmapFromImage'], "includes": ["encodings", "locale", "wx.gizmos","wx.lib.calendar"]}}, zipfile=None, windows=[{'script':'basicApp.py'}], data_files=data_files ) A: You may need to fix log handling first, this URL may help. Later you may look for answer here. My answer is very general because you didn't give any more specific info (like py2exe/python version, py2exe log, other used 3rd party libraries). A: See http://www.wxpython.org/docs/api/wx.App-class.html for wxPyton's App class initializer. If you want to run the app from a console and have stderr print to there, then supply False for the redirect argument. Otherwise, if you just want a window to pop up, set redirect to True and filename to None.
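For reference, the debugging switch from Edit 2 is a one-line change in the setup() call -- build with console= so tracebacks appear in a console window, then swap back to windows= for the release build (the other arguments stay exactly as above):

# debug build: a console window shows stderr/tracebacks immediately
setup(name="basicApp",
      console=[{'script': 'basicApp.py'}],
      # ... same version, options, zipfile and data_files as above ...
      )
# release build: change console= back to windows=[{'script': 'basicApp.py'}]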
Troubleshooting py2exe packaging problem
I've written a setup.py script for py2exe, generated an executable for my python GUI application and I have a whole bunch of files in the dist directory, including the app, w9xopen.exe and MSVCR71.dll. When I try to run the application, I get an error message that just says "see the logfile for details". The only problem is, the log file is empty. The closest error I've seen is "The following modules appear to be missing" but I'm not using any of those modules as far as I know (especially since they seem to be for databases I'm not using) but digging up on Google suggests that these are relatively benign warnings. I've written and packaged a console application as well as a wxpython one with py2exe and both applications have compiled and run successfully. I am using a new python toolkit called dabo, which in turn makes use of wxpython modules so I can't figure out what I'm doing wrong. Where do I start investigating the problem since obviously the log file hasn't been too useful? Edit 1: The python version is 2.5. py2exe is 0.6.8. There were no significant build errors. The only one was the bit about "The following modules appear to be missing..." which were non-critical errors since the packages listed were ones I was definitely not using and shouldn't stop the execution of the app either. Running the executable produced a logfile which was completely empty. Previously it had an error about locales which I've since fixed but clearly something is wrong as the executable wasn't running. The setup.py file is based quite heavily on the original setup.py generated by running their "app wizard" and looking at the example that Ed Leafe and some others posted. Yes, I have a log file and it's not printing anything for me to use, which is why I'm asking if there's any other troubleshooting avenue I've missed which will help me find out what's going on. I have even written a bare bones test application which simply produces a bare bones GUI - an empty frame with some default menu options. The code written itself is only 3 lines and the rest is in the 3rd party toolkit. Again, that compiled into an exe (as did my original app) but simply did not run. There was no error output in the run time log file either. Edit 2: It turns out that switching from "windows" to "console" for initial debugging purposes was insightful. I've now got a basic running test app and on to compiling the real app! 
The test app: import dabo app = dabo.dApp() app.start() The setup.py for test app: import os import sys import glob from distutils.core import setup import py2exe import dabo.icons daboDir = os.path.split(dabo.__file__)[0] # Find the location of the dabo icons: iconDir = os.path.split(dabo.icons.__file__)[0] iconSubDirs = [] def getIconSubDir(arg, dirname, fnames): if ".svn" not in dirname and dirname[-1] != "\\": icons = glob.glob(os.path.join(dirname, "*.png")) if icons: subdir = (os.path.join("resources", dirname[len(arg)+1:]), icons) iconSubDirs.append(subdir) os.path.walk(iconDir, getIconSubDir, iconDir) # locales: localeDir = "%s%slocale" % (daboDir, os.sep) locales = [] def getLocales(arg, dirname, fnames): if ".svn" not in dirname and dirname[-1] != "\\": mo_files = tuple(glob.glob(os.path.join(dirname, "*.mo"))) if mo_files: subdir = os.path.join("dabo.locale", dirname[len(arg)+1:]) locales.append((subdir, mo_files)) os.path.walk(localeDir, getLocales, localeDir) data_files=[("resources", glob.glob(os.path.join(iconDir, "*.ico"))), ("resources", glob.glob("resources/*"))] data_files.extend(iconSubDirs) data_files.extend(locales) setup(name="basicApp", version='0.01', description="Test Dabo Application", options={"py2exe": { "compressed": 1, "optimize": 2, "bundle_files": 1, "excludes": ["Tkconstants","Tkinter","tcl", "_imagingtk", "PIL._imagingtk", "ImageTk", "PIL.ImageTk", "FixTk", "kinterbasdb", "MySQLdb", 'Numeric', 'OpenGL.GL', 'OpenGL.GLUT', 'dbGadfly', 'email.Generator', 'email.Iterators', 'email.Utils', 'kinterbasdb', 'numarray', 'pymssql', 'pysqlite2', 'wx.BitmapFromImage'], "includes": ["encodings", "locale", "wx.gizmos","wx.lib.calendar"]}}, zipfile=None, windows=[{'script':'basicApp.py'}], data_files=data_files )
[ "You may need to fix log handling first, this URL may help.\nLater you may look for answer here.\nMy answer is very general because you didn't give any more specific info (like py2exe/python version, py2exe log, other used 3rd party libraries).\n", "See http://www.wxpython.org/docs/api/wx.App-class.html for wxPyton's App class initializer. If you want to run the app from a console and have stderr print to there, then supply False for the redirect argument. Otherwise, if you just want a window to pop up, set redirect to True and filename to None.\n" ]
[ 1, 1 ]
[]
[]
[ "py2exe", "python", "user_interface" ]
stackoverflow_0000217666_py2exe_python_user_interface.txt
Q: how to generate unit test code for methods I want to write unit-test code to test my application code. I have different methods and now want to test these methods one by one from a Python script, but I do not know how to write such tests. Can anyone give me an example of some small unit-testing code in Python? Thanks. A: Read the unit testing framework section of the Python Library Reference. A basic example from the documentation: import random import unittest class TestSequenceFunctions(unittest.TestCase): def setUp(self): self.seq = range(10) def testshuffle(self): # make sure the shuffled sequence does not lose any elements random.shuffle(self.seq) self.seq.sort() self.assertEqual(self.seq, range(10)) def testchoice(self): element = random.choice(self.seq) self.assert_(element in self.seq) def testsample(self): self.assertRaises(ValueError, random.sample, self.seq, 20) for element in random.sample(self.seq, 5): self.assert_(element in self.seq) if __name__ == '__main__': unittest.main() A: It's probably best to start off with the given unittest example. Some standard best practices: put all your tests in a tests folder at the root of your project. write one test module for each python module you're testing. test modules should start with the word test. test methods should start with the word test. When you've become comfortable with unittest (and it shouldn't take long), there are some nice extensions to it that will make life easier as your tests grow in number and scope: nose -- easily find and run all your tests, and more. testoob -- colorized output (and more, but that's why I use it). pythoscope -- haven't tried it, but this will automatically generate (failing) test stubs for your application. Should save a lot of time writing boilerplate code. A: Here's an example and you might want to read a little more on Python's unit testing.
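A minimal, self-contained illustration of testing one of your own methods (the Calculator class here is invented for the example):

import unittest

class Calculator(object):
    def add(self, a, b):
        return a + b

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def testadd(self):
        # one assertion per behaviour you care about
        self.assertEqual(self.calc.add(2, 3), 5)

if __name__ == '__main__':
    unittest.main()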
how to generate unit test code for methods
I want to write unit-test code to test my application code. I have different methods and now want to test these methods one by one from a Python script, but I do not know how to write such tests. Can anyone give me an example of some small unit-testing code in Python? Thanks.
[ "Read the unit testing framework section of the Python Library Reference.\nA basic example from the documentation:\nimport random\nimport unittest\n\nclass TestSequenceFunctions(unittest.TestCase):\n\n def setUp(self):\n self.seq = range(10)\n\n def testshuffle(self):\n # make sure the shuffled sequence does not lose any elements\n random.shuffle(self.seq)\n self.seq.sort()\n self.assertEqual(self.seq, range(10))\n\n def testchoice(self):\n element = random.choice(self.seq)\n self.assert_(element in self.seq)\n\n def testsample(self):\n self.assertRaises(ValueError, random.sample, self.seq, 20)\n for element in random.sample(self.seq, 5):\n self.assert_(element in self.seq)\n\nif __name__ == '__main__':\n unittest.main()\n\n", "It's probably best to start off with the given unittest example. Some standard best practices: \n\nput all your tests in a tests folder at the root of your project.\nwrite one test module for each python module you're testing.\ntest modules should start with the word test.\ntest methods should start with the word test. \n\nWhen you've become comfortable with unittest (and it shouldn't take long), there are some nice extensions to it that will make life easier as your tests grow in number and scope:\n\nnose -- easily find and run all your tests, and more.\ntestoob -- colorized output (and more, but that's why I use it).\npythoscope -- haven't tried it, but this will automatically generate (failing) test stubs for your application. Should save a lot of time writing boilerplate code.\n\n", "Here's an example and you might want to read a little more on pythons unit testing.\n" ]
[ 7, 4, 1 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0000217900_python_unit_testing.txt
Q: Group by date in a particular format in SQLAlchemy I have a table called logs which has a datetime field. I want to select the date and count of rows based on a particular date format. How do I do this using SQLAlchemy? A: I don't know of a generic SQLAlchemy answer. Most databases support some form of date formatting, typically via functions. SQLAlchemy supports calling functions via sqlalchemy.sql.func. So for example, using SQLAlchemy over a Postgres back end, and a table my_table(foo varchar(30), when timestamp) I might do something like my_table = metadata.tables['my_table'] foo = my_table.c['foo'] the_date = func.date_trunc('month', my_table.c['when']) stmt = select(foo, the_date).group_by(the_date) engine.execute(stmt) To group by date truncated to month. But keep in mind that in that example, date_trunc() is a Postgres datetime function. Other databases will be different. You didn't mention the underlying database. If there's a database independent way to do it I've never found one. In my case I run production and test against Postgres and unit tests against SQLite and have resorted to using SQLite user defined functions in my unit tests to emulate Postgres datetime functions. A: Does counting yield the same result when you just group by the unformatted datetime column? If so, you could just run the query and use Python date's strftime() method afterwards. i.e. query = select([logs.c.datetime, func.count(logs.c.datetime)]).group_by(logs.c.datetime) results = session.execute(query).fetchall() results = [(t[0].strftime("..."), t[1]) for t in results] A: I don't know SQLAlchemy, so I could be off-target. However, I think that all you need is: SELECT date_formatter(datetime_field, "format-specification") AS dt_field, COUNT(*) FROM logs GROUP BY date_formatter(datetime_field, "format-specification") ORDER BY 1; OK, maybe you don't need the ORDER BY, and maybe it would be better to re-specify the date expression. There are likely to be alternatives, such as: SELECT dt_field, COUNT(*) FROM (SELECT date_formatter(datetime_field, "format-specification") AS dt_field FROM logs) AS necessary GROUP BY dt_field ORDER BY dt_field; And so on and so forth. Basically, you format the datetime field and then proceed to do the grouping etc on the formatted value.
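To make the first answer concrete for another backend: generic func. calls emit whatever SQL function you name, so on SQLite you could group on a formatted date string like this (the logs table and engine objects are assumed from the question's setup):

from sqlalchemy import func, select

day = func.strftime('%Y-%m-%d', logs.c.datetime)   # SQLite's strftime()
stmt = select([day, func.count(logs.c.datetime)]).group_by(day)
for row in engine.execute(stmt):
    print row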
Group by date in a particular format in SQLAlchemy
I have a table called logs which has a datetime field. I want to select the date and count of rows based on a particular date format. How do I do this using SQLAlchemy?
[ "I don't know of a generic SQLAlchemy answer. Most databases support some form of date formatting, typically via functions. SQLAlchemy supports calling functions via sqlalchemy.sql.func. So for example, using SQLAlchemy over a Postgres back end, and a table my_table(foo varchar(30), when timestamp) I might do something like\nmy_table = metadata.tables['my_table']\nfoo = my_table.c['foo']\nthe_date = func.date_trunc('month', my_table.c['when'])\nstmt = select(foo, the_date).group_by(the_date)\nengine.execute(stmt)\n\nTo group by date truncated to month. But keep in mind that in that example, date_trunc() is a Postgres datetime function. Other databases will be different. You didn't mention the underlyig database. If there's a database independent way to do it I've never found one. In my case I run production and test aginst Postgres and unit tests aginst SQLite and have resorted to using SQLite user defined functions in my unit tests to emulate Postgress datetime functions. \n", "Does counting yield the same result when you just group by the unformatted datetime column? If so, you could just run the query and use Python date's strftime() method afterwards. i.e.\nquery = select([logs.c.datetime, func.count(logs.c.datetime)]).group_by(logs.c.datetime)\nresults = session.execute(query).fetchall()\nresults = [(t[0].strftime(\"...\"), t[1]) for t in results]\n\n", "I don't know SQLAlchemy, so I could be off-target. However, I think that all you need is:\nSELECT date_formatter(datetime_field, \"format-specification\") AS dt_field, COUNT(*)\n FROM logs\n GROUP BY date_formatter(datetime_field, \"format-specification\")\n ORDER BY 1;\n\nOK, maybe you don't need the ORDER BY, and maybe it would be better to re-specify the date expression. There are likely to be alternatives, such as:\nSELECT dt_field, COUNT(*)\n FROM (SELECT date_formatter(datetime_field, \"format-specification\") AS dt_field\n FROM logs) AS necessary\n GROUP BY dt_field\n ORDER BY dt_field;\n\nAnd so on and so forth. Basically, you format the datetime field and then proceed to do the grouping etc on the formatted value.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "python", "sql", "sqlalchemy" ]
stackoverflow_0000216657_python_sql_sqlalchemy.txt
Q: How can you use BeautifulSoup to get colindex numbers? I had a problem a week or so ago. Since I think the solution was cool I am sharing it here while I am waiting for an answer to the question I posted earlier. I need to know the relative position for the column headings in a table so I know how to match the column heading up with the data in the rows below. I found some of my tables had the following row as the first row in the table <!-- Table Width Row --> <TR style="font-size: 1pt" valign="bottom"> <TD width="60%">&nbsp;</TD> <!-- colindex=01 type=maindata --> <TD width="1%">&nbsp;</TD> <!-- colindex=02 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=02 type=lead --> <TD width="9%" align="right">&nbsp;</TD> <!-- colindex=02 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=02 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=03 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=03 type=lead --> <TD width="4%" align="right">&nbsp;</TD> <!-- colindex=03 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=03 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=04 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=04 type=lead --> <TD width="4%" align="right">&nbsp;</TD> <!-- colindex=04 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=04 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=05 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=05 type=lead --> <TD width="5%" align="right">&nbsp;</TD> <!-- colindex=05 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=05 type=hang1 --> </TR> I thought wow, this will be easy because the data is in the column below type=body. Counting down I knew that in the data rows I would need to get the values in columns [3, 7, 11, 15]. So I set out to accomplish that using this code: indexComment = souptoGetColIndex.findAll(text=re.compile("type=body")) indexRow=indexComment[0].findParent() indexCells=indexRow.findAll(text=re.compile("type=body")) for each in range(len(indexCells)): collist.append(tdlist.index(indexCells[each].previousSibling.previousSibling)) what I got back was collist=[0, 3, 7, 7, 15] It turns out, I think, that because the cells in the 7th and 11th positions looked exactly alike, the same index position was returned. I was trying to figure out how to deal with this, clearly I had to make them look different. So what I did was make them look different by first using readlines to read each line of the file in and change the blank spaces to a random integer. for each in toGetColIndex: newlt.append(each.replace(r"&nbsp;",str(random.randint(1,14567)))) a friend pointed out that I could lower overhead by using this instead for each in toGetColIndex: newlt.append(each.replace(r"&nbsp;",str(toGetColIndex.index(each)))) Nonetheless, each of these approaches gets me a list with the colindex for the location of my headers for each column and to use on the data rows. 
Note that the replace function is missing the blank-space entity since I guess the HTML is causing it to disappear; the actual code uses r"&.n.b.s.p;" without the periods A: The code below produces [3, 7, 11, 15] which is what I understand you seek from BeautifulSoup import BeautifulSoup from re import compile soup = BeautifulSoup( '''<HTML><BODY> <TABLE> <TR style="font-size: 1pt" valign="bottom"> <TD width="60%"> </TD> <!-- colindex=01 type=maindata --> <TD width="1%"> </TD> <!-- colindex=02 type=gutter --> <TD width="1%" align="right"> </TD> <!-- colindex=02 type=lead --> <TD width="9%" align="right"> </TD> <!-- colindex=02 type=body --> <TD width="1%" align="left"> </TD> <!-- colindex=02 type=hang1 --> <TD width="3%"> </TD> <!-- colindex=03 type=gutter --> <TD width="1%" align="right"> </TD> <!-- colindex=03 type=lead --> <TD width="4%" align="right"> </TD> <!-- colindex=03 type=body --> <TD width="1%" align="left"> </TD> <!-- colindex=03 type=hang1 --> <TD width="3%"> </TD> <!-- colindex=04 type=gutter --> <TD width="1%" align="right"> </TD> <!-- colindex=04 type=lead --> <TD width="4%" align="right"> </TD> <!-- colindex=04 type=body --> <TD width="1%" align="left"> </TD> <!-- colindex=04 type=hang1 --> <TD width="3%"> </TD> <!-- colindex=05 type=gutter --> <TD width="1%" align="right"> </TD> <!-- colindex=05 type=lead --> <TD width="5%" align="right"> </TD> <!-- colindex=05 type=body --> <TD width="1%" align="left"> </TD> <!-- colindex=05 type=hang1 --> </TR> </TABLE> </BODY></HTML>''' ) tables = soup.findAll('table') matcher = compile('colindex') def body_cols(row): for i, comment in enumerate(row.findAll(text=matcher)): if 'type=body' in comment: yield i for table in soup.findAll('table'): index_row = table.find('tr') print list(body_cols(index_row))
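The underlying gotcha, boiled down: list.index() always returns the position of the first match, so two identical cells collapse onto one index -- which is exactly what the enumerate() call in the answer above sidesteps:

cells = ['x', 'y', 'y', 'z']
print [cells.index(c) for c in cells]    # [0, 1, 1, 3] -- the repeated cell collapses
print list(enumerate(cells))             # [(0, 'x'), (1, 'y'), (2, 'y'), (3, 'z')]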
How can you use BeautifulSoup to get colindex numbers?
I had a problem a week or so ago. Since I think the solution was cool I am sharing it here while I am waiting for an answer to the question I posted earlier. I need to know the relative position for the column headings in a table so I know how to match the column heading up with the data in the rows below. I found some of my tables had the following row as the first row in the table <!-- Table Width Row --> <TR style="font-size: 1pt" valign="bottom"> <TD width="60%">&nbsp;</TD> <!-- colindex=01 type=maindata --> <TD width="1%">&nbsp;</TD> <!-- colindex=02 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=02 type=lead --> <TD width="9%" align="right">&nbsp;</TD> <!-- colindex=02 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=02 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=03 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=03 type=lead --> <TD width="4%" align="right">&nbsp;</TD> <!-- colindex=03 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=03 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=04 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=04 type=lead --> <TD width="4%" align="right">&nbsp;</TD> <!-- colindex=04 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=04 type=hang1 --> <TD width="3%">&nbsp;</TD> <!-- colindex=05 type=gutter --> <TD width="1%" align="right">&nbsp;</TD> <!-- colindex=05 type=lead --> <TD width="5%" align="right">&nbsp;</TD> <!-- colindex=05 type=body --> <TD width="1%" align="left">&nbsp;</TD> <!-- colindex=05 type=hang1 --> </TR> I thought wow, this will be easy because the data is in the column below type=body. Counting down I knew that in the data rows I would need to get the values in columns [3, 7, 11, 15]. So I set out to accomplish that using this code: indexComment = souptoGetColIndex.findAll(text=re.compile("type=body")) indexRow=indexComment[0].findParent() indexCells=indexRow.findAll(text=re.compile("type=body")) for each in range(len(indexCells)): collist.append(tdlist.index(indexCells[each].previousSibling.previousSibling)) what I got back was collist=[0, 3, 7, 7, 15] It turns out, I think, that because the cells in the 7th and 11th positions looked exactly alike, the same index position was returned. I was trying to figure out how to deal with this, clearly I had to make them look different. So what I did was make them look different by first using readlines to read each line of the file in and change the blank spaces to a random integer. for each in toGetColIndex: newlt.append(each.replace(r"&nbsp;",str(random.randint(1,14567)))) a friend pointed out that I could lower overhead by using this instead for each in toGetColIndex: newlt.append(each.replace(r"&nbsp;",str(toGetColIndex.index(each)))) Nonetheless, each of these approaches gets me a list with the colindex for the location of my headers for each column and to use on the data rows. Note that the replace function is missing the blank-space entity since I guess the HTML is causing it to disappear; the actual code uses r"&.n.b.s.p;" without the periods
[ "The code below produces [3, 7, 11, 15] which is what I understand you seek\nfrom BeautifulSoup import BeautifulSoup\nfrom re import compile\n\nsoup = BeautifulSoup(\n '''<HTML><BODY>\n <TABLE>\n <TR style=\"font-size: 1pt\" valign=\"bottom\">\n <TD width=\"60%\"> </TD> <!-- colindex=01 type=maindata -->\n <TD width=\"1%\"> </TD> <!-- colindex=02 type=gutter -->\n <TD width=\"1%\" align=\"right\"> </TD> <!-- colindex=02 type=lead -->\n <TD width=\"9%\" align=\"right\"> </TD> <!-- colindex=02 type=body -->\n <TD width=\"1%\" align=\"left\"> </TD> <!-- colindex=02 type=hang1 -->\n\n <TD width=\"3%\"> </TD> <!-- colindex=03 type=gutter -->\n <TD width=\"1%\" align=\"right\"> </TD> <!-- colindex=03 type=lead -->\n <TD width=\"4%\" align=\"right\"> </TD> <!-- colindex=03 type=body -->\n <TD width=\"1%\" align=\"left\"> </TD> <!-- colindex=03 type=hang1 -->\n <TD width=\"3%\"> </TD> <!-- colindex=04 type=gutter -->\n <TD width=\"1%\" align=\"right\"> </TD> <!-- colindex=04 type=lead -->\n\n <TD width=\"4%\" align=\"right\"> </TD> <!-- colindex=04 type=body -->\n <TD width=\"1%\" align=\"left\"> </TD> <!-- colindex=04 type=hang1 -->\n <TD width=\"3%\"> </TD> <!-- colindex=05 type=gutter -->\n <TD width=\"1%\" align=\"right\"> </TD> <!-- colindex=05 type=lead -->\n <TD width=\"5%\" align=\"right\"> </TD> <!-- colindex=05 type=body -->\n <TD width=\"1%\" align=\"left\"> </TD> <!-- colindex=05 type=hang1 -->\n\n </TR>\n </TABLE> </BODY></HTML>'''\n)\n\ntables = soup.findAll('table')\nmatcher = compile('colindex')\n\ndef body_cols(row):\n for i, comment in enumerate(row.findAll(text=matcher)):\n if 'type=body' in comment:\n yield i\n\nfor table in soup.findAll('table'):\n index_row = table.find('tr')\n print list(body_cols(index_row))\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "html", "parsing", "python" ]
stackoverflow_0000215702_beautifulsoup_html_parsing_python.txt
Q: How to make Apache/mod_python process collect its zombies? Apache/2.2.6 (Unix) DAV/2 mod_python/3.2.8 Python/2.4.4 configured ... One of the Apache processes spawns some long-running python script asynchronously, and apparently doesn't seem to collect its child process table entry. After that long-running python subprocess finishes, a defunct python process is left behind. # ps -ef | grep httpd root 23911 1 0 Oct15 ? 00:00:01 /usr/sbin/httpd ... qa 23920 23911 0 Oct15 ? 00:00:00 /usr/sbin/httpd # ps -ef | grep python ... qa 28449 23920 0 12:38 ? 00:00:00 [python] <defunct> What is the way to make the Apache process collect its children? Is it possible to do the job via a mod_python request handler (like PythonCleanupHandler, for example)? Thanks. A: File a bug report. EDIT: I'm serious. Leaving zombies behind is a bug, and there is almost certainly nothing you can do from within Python. Upgrade to the latest versions, look for bug reports, post on the mailing list, switch to another product. A: Drop mod_python in favor of mod_wsgi (it is used for WSGI), which does not leave orphaned zombies around afaik.
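For what it's worth, if the long-running script is launched by your own handler code rather than by Apache internals, the classic POSIX workarounds apply (this is general Unix advice, not anything mod_python-specific; long_script.py is a placeholder):

import os, signal

# option 1: tell the kernel you will never wait() for children,
# so they are reaped automatically instead of going defunct
signal.signal(signal.SIGCHLD, signal.SIG_IGN)

# option 2: the double-fork trick -- the intermediate child exits at once
# and the grandchild is inherited (and later reaped) by init
pid = os.fork()
if pid == 0:                                      # first child
    if os.fork() == 0:                            # grandchild does the real work
        os.execvp("python", ["python", "long_script.py"])
    os._exit(0)                                   # first child exits immediately
os.waitpid(pid, 0)                                # parent reaps the first child at once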
How to make Apache/mod_python process collect its zombies?
Apache/2.2.6 (Unix) DAV/2 mod_python/3.2.8 Python/2.4.4 configured ... One of the Apache processes spawns some long-running python script asynchronously, and apparently doesn't seem to collect its child process table entry. After that long-running python subprocess finishes, a defunct python process is left behind. # ps -ef | grep httpd root 23911 1 0 Oct15 ? 00:00:01 /usr/sbin/httpd ... qa 23920 23911 0 Oct15 ? 00:00:00 /usr/sbin/httpd # ps -ef | grep python ... qa 28449 23920 0 12:38 ? 00:00:00 [python] <defunct> What is the way to make the Apache process collect its children? Is it possible to do the job via a mod_python request handler (like PythonCleanupHandler, for example)? Thanks.
[ "File a bug report.\nEDIT: I'm serious. Leaving zombies behind is a bug, and there is almost certainly nothing you can do from within Python.\nUpgrade to the latest versions, look for bug reports, post on the mailing list, switch to another product.\n", "Drop mod_python in favor of mod_wsgi (is used for wsgi), which does not leave orphaned zombies around afaik.\n" ]
[ 1, 1 ]
[]
[]
[ "apache", "apache2", "mod_python", "python" ]
stackoverflow_0000208085_apache_apache2_mod_python_python.txt
Q: How to implement a Decorator with non-local equality? Greetings, currently I am refactoring one of my programs, and I found an interesting problem. I have Transitions in an automata. Transitions always have a start-state and an end-state. Some Transitions have a label, which encodes a certain Action that must be performed upon traversal. No label means no action. Some transitions have a condition, which must be fulfilled in order to traverse this transition; if there is no condition, the transition is basically an epsilon-transition in an NFA and will be traversed without consuming an input symbol. I need the following operations: check if the transition has a label get this label add a label to a transition check if the transition has a condition get this condition check for equality Judging from the first five points, this sounds like a clear decorator, with a base transition and two decorators: Labeled and Condition. However, this approach has a problem: two transitions are considered equal if their start-state and end-state are the same, the labels at both transitions are equal (or not-existing) and both conditions are the same (or not existing). With a decorator, I might have two transitions Labeled("foo", Conditional("bar", Transition("baz", "qux"))) and Conditional("bar", Labeled("foo", Transition("baz", "qux"))) which need a non-local equality, that is, the decorators would need to collect all the data and the Transition must compare this collected data on a set basis: class Transition(object): def __init__(self, start, end): self.start = start self.end = end def get_label(self): return None def has_label(self): return False def collect_decorations(self, decorations): return decorations def internal_equality(self, my_decorations, other): try: return (self.start == other.start and self.end == other.end and my_decorations == other.collect_decorations({})) except AttributeError: return False def __eq__(self, other): return self.internal_equality(self.collect_decorations({}), other) class Labeled(object): def __init__(self, label, base): self.base = base self.label = label def has_label(self): return True def get_label(self): return self.label def collect_decorations(self, decorations): assert 'label' not in decorations decorations['label'] = self.label return self.base.collect_decorations(decorations) def __getattr__(self, attribute): return getattr(self.base, attribute) Is this a clean approach? Am I missing something? 
I am mostly confused, because I can solve this - with longer class names - using cooperative multiple inheritance: class Transition(object): def __init__(self, **kwargs): # init is python's MI-madness ;-) super(Transition, self).__init__(**kwargs) self.start = kwargs['start'] self.end = kwargs['end'] def get_label(self): return None def get_condition(self): return None def __eq__(self, other): try: return self.start == other.start and self.end == other.end except AttributeError: return False class LabeledTransition(Transition): def __init__(self, **kwargs): super(LabeledTransition, self).__init__(**kwargs) self.label = kwargs['label'] def get_label(self): return self.label def __eq__(self, other): super_result = super(LabeledTransition, self).__eq__(other) try: return super_result and self.label == other.label except AttributeError: return False class ConditionalTransition(Transition): def __init__(self, **kwargs): super(ConditionalTransition, self).__init__(**kwargs) self.condition = kwargs['condition'] def get_condition(self): return self.condition def __eq__(self, other): super_result = super(ConditionalTransition, self).__eq__(other) try: return super_result and self.condition == other.condition except AttributeError: return False # ConditionalTransition about the same, with get_condition class LabeledConditionalTransition(LabeledTransition, ConditionalTransition): pass the class LabeledConditionalTransition behaves exactly as expected - and having no code in there is appealing and I do not think MI is confusing at this size. Of course, the third option would be to just hammer everything into a single transition class with a bunch of checks in has_label/has_condition. So... I am confused. Am I missing something? Which implementation looks better? How do you handle similar cases, that is, objects which look like a Decorator could handle them, but then, such a non-local method comes around? EDIT: Added the ConditionalTransition-class. Basically, this kinda behaves like the decorator, minus the order created by the order of creating the decorators, the transition checks for start and end being correct, the LabeledTransition-class checks for label being correct and ConditionalTransition checks for condition being correct. A: I think it's clear that nobody really understands your question. I would suggest putting it in context and making it shorter. As an example, here's one possible implementation of the state pattern in python, please study it to get an idea. 
class State(object): def __init__(self, name): self.name = name def __repr__(self): return self.name class Automaton(object): def __init__(self, instance, start): self._state = start self.transitions = instance.transitions() def get_state(self): return self._state def set_state(self, target): transition = self.transitions.get((self.state, target)) if transition: action, condition = transition if condition: if condition(): if action: action() self._state = target else: self._state = target else: self._state = target state = property(get_state, set_state) class Door(object): open = State('open') closed = State('closed') def __init__(self, blocked=False): self.blocked = blocked def close(self): print 'closing door' def do_open(self): print 'opening door' def not_blocked(self): return not self.blocked def transitions(self): return { (self.open, self.closed):(self.close, self.not_blocked), (self.closed, self.open):(self.do_open, self.not_blocked), } if __name__ == '__main__': door = Door() automaton = Automaton(door, door.open) print 'door is', automaton.state automaton.state = door.closed print 'door is', automaton.state automaton.state = door.open print 'door is', automaton.state door.blocked = True automaton.state = door.closed print 'door is', automaton.state the output of this program would be: door is open closing door door is closed opening door door is open door is open A: From the code that was posted, the only difference between Transition and LabeledTransition is the return of get_label() and has_label(). In which case you can compress these two into a single class that sets a label attribute to None and return self.label is not None in the has_label() function. Can you post the code for the ConditionalTransition class? I think this would make it clearer.
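A small sanity check on the "collect everything, then compare" idea from the question: once both decorator orderings funnel their decorations into a plain dict, ordinary dict equality makes the two nestings compare equal:

# what Labeled("foo", Conditional("bar", ...)) and
# Conditional("bar", Labeled("foo", ...)) would each collect
a = {'label': 'foo', 'condition': 'bar'}
b = {'condition': 'bar', 'label': 'foo'}
print a == b    # True -- dict equality ignores insertion order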
How to implement a Decorator with non-local equality?
Greetings, currently I am refactoring one of my programs, and I found an interesting problem. I have Transitions in an automata. Transitions always have a start-state and an end-state. Some Transitions have a label, which encodes a certain Action that must be performed upon traversal. No label means no action. Some transitions have a condition, which must be fulfilled in order to traverse this transition; if there is no condition, the transition is basically an epsilon-transition in an NFA and will be traversed without consuming an input symbol. I need the following operations: check if the transition has a label get this label add a label to a transition check if the transition has a condition get this condition check for equality Judging from the first five points, this sounds like a clear decorator, with a base transition and two decorators: Labeled and Condition. However, this approach has a problem: two transitions are considered equal if their start-state and end-state are the same, the labels at both transitions are equal (or not-existing) and both conditions are the same (or not existing). With a decorator, I might have two transitions Labeled("foo", Conditional("bar", Transition("baz", "qux"))) and Conditional("bar", Labeled("foo", Transition("baz", "qux"))) which need a non-local equality, that is, the decorators would need to collect all the data and the Transition must compare this collected data on a set basis: class Transition(object): def __init__(self, start, end): self.start = start self.end = end def get_label(self): return None def has_label(self): return False def collect_decorations(self, decorations): return decorations def internal_equality(self, my_decorations, other): try: return (self.start == other.start and self.end == other.end and my_decorations == other.collect_decorations({})) except AttributeError: return False def __eq__(self, other): return self.internal_equality(self.collect_decorations({}), other) class Labeled(object): def __init__(self, label, base): self.base = base self.label = label def has_label(self): return True def get_label(self): return self.label def collect_decorations(self, decorations): assert 'label' not in decorations decorations['label'] = self.label return self.base.collect_decorations(decorations) def __getattr__(self, attribute): return getattr(self.base, attribute) Is this a clean approach? Am I missing something? 
I am mostly confused, because I can solve this - with longer class names - using cooperative multiple inheritance: class Transition(object): def __init__(self, **kwargs): # init is python's MI-madness ;-) super(Transition, self).__init__(**kwargs) self.start = kwargs['start'] self.end = kwargs['end'] def get_label(self): return None def get_condition(self): return None def __eq__(self, other): try: return self.start == other.start and self.end == other.end except AttributeError: return False class LabeledTransition(Transition): def __init__(self, **kwargs): super(LabeledTransition, self).__init__(**kwargs) self.label = kwargs['label'] def get_label(self): return self.label def __eq__(self, other): super_result = super(LabeledTransition, self).__eq__(other) try: return super_result and self.label == other.label except AttributeError: return False class ConditionalTransition(Transition): def __init__(self, **kwargs): super(ConditionalTransition, self).__init__(**kwargs) self.condition = kwargs['condition'] def get_condition(self): return self.condition def __eq__(self, other): super_result = super(ConditionalTransition, self).__eq__(other) try: return super_result and self.condition == other.condition except AttributeError: return False # ConditionalTransition about the same, with get_condition class LabeledConditionalTransition(LabeledTransition, ConditionalTransition): pass the class LabeledConditionalTransition behaves exactly as expected - and having no code in there is appealing and I do not think MI is confusing at this size. Of course, the third option would be to just hammer everything into a single transition class with a bunch of checks in has_label/has_condition. So... I am confused. Am I missing something? Which implementation looks better? How do you handle similar cases, that is, objects which look like a Decorator could handle them, but then, such a non-local method comes around? EDIT: Added the ConditionalTransition-class. Basically, this kinda behaves like the decorator, minus the order created by the order of creating the decorators, the transition checks for start and end being correct, the LabeledTransition-class checks for label being correct and ConditionalTransition checks for condition being correct.
[ "I think its clear that nobody really understands your question. I would suggest putting it in context and making it shorter. As an example, here's one possible implementation of the state pattern in python, please study it to get an idea.\nclass State(object):\n def __init__(self, name):\n self.name = name\n\n def __repr__(self):\n return self.name\n\nclass Automaton(object):\n def __init__(self, instance, start):\n self._state = start\n self.transitions = instance.transitions()\n\n def get_state(self):\n return self._state\n\n def set_state(self, target):\n transition = self.transitions.get((self.state, target))\n if transition:\n action, condition = transition\n if condition:\n if condition():\n if action:\n action()\n self._state = target\n else:\n self._state = target\n else:\n self._state = target\n\n state = property(get_state, set_state)\n\nclass Door(object):\n open = State('open')\n closed = State('closed')\n\n def __init__(self, blocked=False):\n self.blocked = blocked\n\n def close(self):\n print 'closing door'\n\n def do_open(self):\n print 'opening door'\n\n def not_blocked(self):\n return not self.blocked\n\n def transitions(self):\n return {\n (self.open, self.closed):(self.close, self.not_blocked),\n (self.closed, self.open):(self.do_open, self.not_blocked),\n }\n\nif __name__ == '__main__':\n door = Door()\n automaton = Automaton(door, door.open)\n\n print 'door is', automaton.state\n automaton.state = door.closed\n print 'door is', automaton.state\n automaton.state = door.open\n print 'door is', automaton.state\n door.blocked = True\n automaton.state = door.closed\n print 'door is', automaton.state\n\nthe output of this programm would be:\ndoor is open\nclosing door\ndoor is closed\nopening door\ndoor is open\ndoor is open\n\n", "From the code that was posted, the only difference between Transition and Labeled Transition is the return of get_lable() and has_label(). In which case you can compress these two a single class that sets a label attribute to None and \nreturn self.label is not None\n\nin the has_label() function.\nCan you post the code for the ConditionalTransition class? I think this would make it clearer.\n" ]
[ 2, 0 ]
[]
[]
[ "decorator", "multiple_inheritance", "python" ]
stackoverflow_0000127736_decorator_multiple_inheritance_python.txt
Q: How can I port a legacy Java/J2EE website to a modern scripting language (PHP,Python/Django, etc)? I want to move a legacy Java web application (J2EE) to a scripting language - any scripting language - in order to improve programming efficiency. What is the easiest way to do this? Are there any automated tools that can convert the bulk of the business logic? A: Here's what you have to do. First, be sure you can walk before you run. Build something simple, possibly tangentially related to your main project. DO NOT build a piece of the final project and hope it will "evolve" into the final project. This never works out well. Why? You'll make dumb mistakes. But you can't delete or rework them because you're supposed to evolve that mistake into the final project. Next, pick a framework. What? Second? Yes. Second. Until you actually do something with some scripting languages and frameworks, you have no real useful concept of what you're doing. Once you've built something, you now have an informed opinion. "Wait," you say. "To do step 1 I had to pick a framework." True. Step 1, however, contains decisions you're allowed to revoke. Picking the wrong framework for step 1 has no long-term bad effects. It was just learning. Third, with your strategic framework, and some experience, break down your existing site into pieces you can build with your new framework. Prioritize those pieces from most important to least important. DO NOT plan the entire conversion as one massive project. It never works. It makes a big job more complex than necessary. We'll use Django as the example framework. You'll have templates, view functions, model definitions, URL mapping and other details. For each build, do the following: Convert your existing model to a Django model. This won't ever fit your legacy SQL. You'll have to rethink your model, fix old mistakes, correct old bugs that you've always wanted to correct. Write unit tests. Build a conversion utility to export old data and import into the new model. Build Django admin pages to touch and feel the new data. Pick representative pages and rework them into the appropriate templates. You might make use of some legacy JSP pages. However, don't waste too much time with this. Use the HTML to create Django templates. Plan your URL's and view functions. Sometimes, these view functions will leverage legacy action classes. Don't "convert". Rewrite from scratch. Use your new language and framework. The only thing that's worth preserving is the data and the operational concept. Don't try to preserve or convert the code. It's misleading. You might convert unittests from JUnit to Python unittest. I gave this advice a few months ago. I had to do some coaching and review during the process. The revised site is up and running. No conversion from the old technology; they did the suggested rewrite from scratch. Developer happy. Site works well. A: If you already have a large amount of business logic implemented in Java, then I see two possibilities for you. The first is to use a high level language that runs within the JVM and has a web framework, such as Groovy/Grails or JRuby and Rails. This allows you to directly leverage all of the business logic you've implemented in Java without having to re-architect the entire site. You should be able to take advantage of the framework's improved productivity with respect to the web development and still leverage your existing business logic. 
An alternative approach is to turn your business logic layer into a set of services available over a standard RPC mechanism - REST, SOAP, XML-RPC or some other simple XML (YAML or JSON) over HTTP protocol (see also DWR) so that the front end can make these RPC calls to your business logic. The first approach, using a high level language on the JVM is probably less re-architecture than the second. If your goal is a complete migration off of Java, then either of these approaches allow you to do so in smaller steps - you may find that this kind of hybrid is better than wholesale deprecation - the JVM has a lot of libraries and integrates well into a lot of other systems. A: Using an automated tool to "port" the web application will almost certainly guarantee that future programming efficiency will be minimised -- not improved. A good scripting language can help programming efficiency when used by good programmers who understand good coding practices in that language. Automated tools are usually not designed to output code that is elegant or well-written, only code that works. You'll only get an improvement in programming efficiency after you've put in the effort to re-implement the web app -- which, due to the time required for the reimplementation, may or may not result in an improvement overall. A: A lot of the recommendations being given here are assuming you -- and just you -- are doing a full rewrite of the application. This is probably not the case, and it changes the answer quite a bit. If you've already got J2EE kicking around, the correct answer is Grails. It simply is: you probably already have Hibernate and Spring kicking around, and you're going to want the ability to flip back and forth between your old code and your new with a minimum amount of pain. That's exactly Groovy's forte, and it is even smoother than JRuby in this regard. Also, if you've already got a J2EE app kicking around, you've already got Java developers kicking around. In that case, learning Groovy is like falling off a ladder -- literally. With the exception of anonymous inner classes, Groovy is a pure superset of Java, which means that you can write Java code, call it Groovy, and be done with it. As you become increasingly comfortable with the niceties of Groovy, you can integrate them into your Java-ish Groovy code. Before too long, you'll be writing very Groovy code, and not even really have realized the transition.
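To make the "Django model plus admin pages" step above concrete, here is a minimal sketch in Django-1.0-era style (the entity and its fields are hypothetical, not taken from any real legacy site):

from django.contrib import admin
from django.db import models

class Customer(models.Model):
    # a rethought legacy entity; fix old schema mistakes here
    name = models.CharField(max_length=100)
    email = models.EmailField(blank=True)

    def __unicode__(self):
        return self.name

# one line buys the "touch and feel the new data" admin pages
admin.site.register(Customer)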
How can I port a legacy Java/J2EE website to a modern scripting language (PHP,Python/Django, etc)?
I want to move a legacy Java web application (J2EE) to a scripting language - any scripting language - in order to improve programming efficiency. What is the easiest way to do this? Are there any automated tools that can convert the bulk of the business logic?
[ "Here's what you have to do.\nFirst, be sure you can walk before you run. Build something simple, possibly tangentially related to your main project.\nDO NOT build a piece of the final project and hope it will \"evolve\" into the final project. This never works out well. Why? You'll make dumb mistakes. But you can't delete or rework them because you're supposed to evolve that mistake into the final project.\nNext, pick a a framework. What? Second? Yes. Second. Until you actually do something with some scripting languages and frameworks, you have no real useful concept of what you're doing. Once you've built something, you now have an informed opinion.\n\"Wait,\" you say. \"To do step 1 I had to pick a framework.\" True. Step 1, however, contains decisions you're allowed to revoke. Pick the wrong framework for step 1 has no long-term bad effects. It was just learning.\nThird, with your strategic framework, and some experience, break down your existing site into pieces you can build with your new framework. Prioritize those pieces from most important to least important. \nDO NOT plan the entire conversion as one massive project. It never works. It makes a big job more complex than necessary.\nWe'll use Django as the example framework. You'll have templates, view functions, model definitions, URL mapping and other details.\nFor each build, do the following:\n\nConvert your existing model to a Django model. This won't ever fit your legacy SQL. You'll have to rethink your model, fix old mistakes, correct old bugs that you've always wanted to correct.\nWrite unit tests.\nBuild a conversion utility to export old data and import into the new model.\nBuild Django admin pages to touch and feel the new data.\nPick representative pages and rework them into the appropriate templates. You might make use of some legacy JSP pages. However, don't waste too much time with this. Use the HTML to create Django templates.\nPlan your URL's and view functions. Sometimes, these view functions will leverage legacy action classes. Don't \"convert\". Rewrite from scratch. Use your new language and framework.\n\nThe only thing that's worth preserving is the data and the operational concept. Don't try to preserve or convert the code. It's misleading. You might convert unittests from JUnit to Python unittest. \n\nI gave this advice a few months ago. I had to do some coaching and review during the processing. The revised site is up and running. No conversion from the old technology; they did the suggested rewrite from scratch. Developer happy. Site works well.\n", "If you already have a large amount of business logic implemented in Java, then I see two possibilities for you.\nThe first is to use a high level language that runs within the JVM and has a web framework, such as Groovy/Grails or JRuby and Rails. This allows you to directly leverage all of the business logic you've implemented in Java without having to re-architect the entire site. You should be able to take advantage of the framework's improved productivity with respect to the web development and still leverage your existing business logic.\nAn alternative approach is to turn your business logic layer into a set of services available over a standard RPC mechanisim - REST, SOAP, XML-RPC or some other simple XML (YAML or JSON) over HTTP protocol (see also DWR) so that the front end can make these RPC calls to your business logic.\nThe first approach, using a high level language on the JVM is probably less re-architecture than the second. 
\nIf your goal is a complete migration off of Java, then either of these approaches allow you to do so in smaller steps - you may find that this kind of hybrid is better than wholesale deprecation - the JVM has a lot of libraries and integrates well into a lot of other systems.\n", "Using an automated tool to \"port\" the web application will almost certainly guarantee that future programming efficiency will be minimised -- not improved.\nA good scripting language can help programming efficiency when used by good programmers who understand good coding practices in that language. Automated tools are usually not designed to output code that is elegant or well-written, only code that works.\nYou'll only get an improvement in programming efficiency after you've put in the effort to re-implement the web app -- which, due to the time required for the reimplementation, may or may not result in an improvement overall.\n", "A lot of the recommendations being given here are assuming you -- and just you -- are doing a full rewrite of the application. This is probably not the case, and it changes the answer quite a bit.\nIf you've already got J2EE kicking around, the correct answer is Grails. It simply is: you probably already have Hibernate and Spring kicking around, and you're going to want the ability to flip back and forth between your old code and your new with a minimum amount of pain. That's exactly Groovy's forte, and it is even smoother than JRuby in this regard.\nAlso, if you've already got a J2EE app kicking around, you've already got Java developers kicking around. In that case, learning Groovy is like falling off a ladder -- literally. With the exception of anonymous inner classes, Groovy is a pure superset of Java, which means that you can write Java code, call it Groovy, and be done with it. As you become increasingly comfortable with the niceties of Groovy, you can integrate them into your Java-ish Groovy code. Before too long, you'll be writing very Groovy code, and not even really have realized the transition.\n" ]
[ 11, 7, 6, 1 ]
[]
[]
[ "django", "jakarta_ee", "java", "php", "python" ]
stackoverflow_0000199556_django_jakarta_ee_java_php_python.txt
Q: What is the best way on python 2.3 for windows to execute a program like ghostscript with multiple arguments and spaces in paths? Surely there is some kind of abstraction that allows for this? This is essentially the command cmd = self._ghostscriptPath + 'gswin32c -q -dNOPAUSE -dBATCH -sDEVICE=tiffg4 -r196X204 -sPAPERSIZE=a4 -sOutputFile="' + tifDest + " " + pdfSource + '"' os.popen(cmd) this way looks really dirty to me, there must be some pythonic way A: Use subprocess, it supersedes os.popen, though it is not much more of an abstraction: from subprocess import Popen, PIPE output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0] #this is how I'd mangle the arguments together output = Popen([ self._ghostscriptPath, 'gswin32c', '-q', '-dNOPAUSE', '-dBATCH', '-sDEVICE=tiffg4', '-r196X204', '-sPAPERSIZE=a4', '-sOutputFile="%s %s"' % (tifDest, pdfSource), ], stdout=PIPE).communicate()[0] If you have only python 2.3 which has no subprocess module, you can still use os.popen os.popen(' '.join([ self._ghostscriptPath, 'gswin32c', '-q', '-dNOPAUSE', '-dBATCH', '-sDEVICE=tiffg4', '-r196X204', '-sPAPERSIZE=a4', '-sOutputFile="%s %s"' % (tifDest, pdfSource), ]))
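A side note on the list form above: arguments in a Popen list are passed to the program as-is, with no shell re-parsing, so embedded shell quotes travel through literally and the program path should be a single element. A hedged sketch of the fully split call (variable names taken from the question; this assumes _ghostscriptPath is a directory prefix, as the original string concatenation suggests):

from subprocess import Popen, PIPE

args = [
    self._ghostscriptPath + 'gswin32c',  # one element: full path to the executable
    '-q', '-dNOPAUSE', '-dBATCH',
    '-sDEVICE=tiffg4', '-r196X204', '-sPAPERSIZE=a4',
    '-sOutputFile=' + tifDest,           # output path as one argument, no quoting needed
    pdfSource,                           # input file as its own final argument
]
output = Popen(args, stdout=PIPE).communicate()[0]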
What is the best way on python 2.3 for windows to execute a program like ghostscript with multiple arguments and spaces in paths?
Surely there is some kind of abstraction that allows for this? This is essentially the command cmd = self._ghostscriptPath + 'gswin32c -q -dNOPAUSE -dBATCH -sDEVICE=tiffg4 -r196X204 -sPAPERSIZE=a4 -sOutputFile="' + tifDest + " " + pdfSource + '"' os.popen(cmd) this way looks really dirty to me, there must be some pythonic way
[ "Use subprocess, it superseeds os.popen, though it is not much more of an abstraction:\nfrom subprocess import Popen, PIPE\noutput = Popen([\"mycmd\", \"myarg\"], stdout=PIPE).communicate()[0]\n\n#this is how I'd mangle the arguments together\noutput = Popen([\n self._ghostscriptPath, \n 'gswin32c',\n '-q',\n '-dNOPAUSE',\n '-dBATCH',\n '-sDEVICE=tiffg4',\n '-r196X204',\n '-sPAPERSIZE=a4',\n '-sOutputFile=\"%s %s\"' % (tifDest, pdfSource),\n], stdout=PIPE).communicate()[0]\n\nIf you have only python 2.3 which has no subprocess module, you can still use os.popen\nos.popen(' '.join([\n self._ghostscriptPath, \n 'gswin32c',\n '-q',\n '-dNOPAUSE',\n '-dBATCH',\n '-sDEVICE=tiffg4',\n '-r196X204',\n '-sPAPERSIZE=a4',\n '-sOutputFile=\"%s %s\"' % (tifDest, pdfSource),\n]))\n\n" ]
[ 6 ]
[]
[]
[ "ghostscript", "python", "windows" ]
stackoverflow_0000221097_ghostscript_python_windows.txt
Q: How can I write a method within a Django model to retrieve related objects? I have two models. We'll call them object A and object B. Their design looks something like this: class Foo(models.Model): name = models.CharField() class Bar(models.Model): title = models.CharField() Foo= models.ForeignKey('myapp.Foo') Now, suppose I want to make a method within Foo that returns all Bar objects that reference that instance of Foo. How do I do this? class Foo(models.Model): name = models.CharField() def returnBars(self): ???? A: You get this for free: http://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects By default, you can access a Manager which gives you access to related items through a RELATEDCLASSNAME_set attribute: some_foo.bar_set.all() Or you can use the related_name argument to ForeignKey to specify the attribute which should hold the reverse relationship Manager: class Foo(models.Model): name = models.CharField() class Bar(models.Model): title = models.CharField() foo = models.ForeignKey(Foo, related_name='bars') ... some_foo.bars.all()
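Tying the reverse manager back to the method the question actually asks for — a minimal sketch (note that CharField needs a max_length in practice, which the snippets above omit):

class Foo(models.Model):
    name = models.CharField(max_length=100)

    def return_bars(self):
        # bar_set is the reverse-relation manager Django creates for free
        return self.bar_set.all()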
How can I write a method within a Django model to retrieve related objects?
I have two models. We'll call them object A and object B. Their design looks something like this: class Foo(models.Model): name = models.CharField() class Bar(models.Model): title = models.CharField() Foo= models.ForeignKey('myapp.Foo') Now, suppose I want to make a method within Foo that returns all Bar objects that reference that instance of Foo. How do I do this? class Foo(models.Model): name = models.CharField() def returnBars(self): ????
[ "You get this for free:\nhttp://docs.djangoproject.com/en/dev/topics/db/queries/#backwards-related-objects\nBy default, you can access a Manager which gives you access to related items through a RELATEDCLASSNAME_set attribute:\nsome_foo.bar_set.all()\n\nOr you can use the related_name argument to ForeignKey to specify the attribute which should hold the reverse relationship Manager:\nclass Foo(models.Model):\n name = models.CharField()\n\nclass Bar(models.Model):\n title = models.CharField()\n foo = models.ForeignKey(Foo, related_name='bars')\n\n...\n\nsome_foo.bars.all()\n\n" ]
[ 10 ]
[]
[]
[ "django", "frameworks", "model_view_controller", "python" ]
stackoverflow_0000221328_django_frameworks_model_view_controller_python.txt
Q: load dll from python I'm building a python application from some source code I've found Here I've managed to compile and fix some problems by searching the web, but I'm stuck at this point: When running the application this message appears. alt text http://img511.imageshack.us/img511/4481/loadfr0.png This python app uses swig to link to c/c++ code. I have VC++2005 express edition which I used to compile along with scons and Python 2.5 ( and tried 2.4 too ) The dll that is failing to load is "msvcr80.dll", because before the message was "msvcr80.dll" cannot be found or something like that, so I got it and dropped it in the system32 folder. From what I've read here: http://msdn.microsoft.com/en-us/library/ms235591(VS.80).aspx The solution is to run MT with the manifest and the dll file. I did that already and it doesn't work either. Could anyone point me in the correct direction? This is the content of the manifest file: <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> I'm going to try Python 2.6 now. I'm not quite sure I understand the problem, but Python 2.5 and Python 2.5 .exe had the string "MSVCR71.dll" inside the .exe file. But probably this has nothing to do with it. ps. if only everything was as easy as jar files :( This is the stack trace for completeness None INFO:root:Skipping provider enso.platform.osx. INFO:root:Skipping provider enso.platform.linux. INFO:root:Added provider enso.platform.win32. Traceback (most recent call last): File "scripts\run_enso.py", line 24, in <module> enso.run() File "C:\oreyes\apps\enso\enso-read-only\enso\__init__.py", line 40, in run from enso.events import EventManager File "C:\oreyes\apps\enso\enso-read-only\enso\events.py", line 60, in <module> from enso import input File "C:\oreyes\apps\enso\enso-read-only\enso\input\__init__.py", line 3, in <module> _input = enso.providers.getInterface( "input" ) File "C:\oreyes\apps\enso\enso-read-only\enso\providers.py", line 137, in getInterface interface = provider.provideInterface( name ) File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\__init__.py", line 48, in provideInterface import enso.platform.win32.input File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\input\__init__.py", line 3, in <module> from InputManager import * File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\input\InputManager.py", line 7, in <module> import _InputManager ImportError: DLL load failed: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL). A: Looking at your update, it looks like you need to install Pycairo since you're missing the _cairo module installed as part of Pycairo. See the Pycairo downloads page for instructions on how to obtain/install binaries for Windows. A: You probably need to install the VC++ runtime redistributables. The links to them are here. A: I've been able to compile and run Enso by using /LD as a compiler flag. This links dynamically to the MS Visual C++ runtime, and seems to allow you to get away without a manifest. 
If you're using SCons, see the diff file here: http://paste2.org/p/69732 A: update I've downloaded python2.6 and VS C++ express edition 2008 and the problem with the msvcr80.dll is gone ( I assume because Python and VSC++2008xe use msvcr90.dll) I've compiled with /LD and all the changes listed here: http://paste2.org/p/69732 And now the problem follows: INFO:root:Skipping provider enso.platform.osx. INFO:root:Skipping provider enso.platform.linux. INFO:root:Added provider enso.platform.win32. INFO:root:Obtained interface 'input' from provider 'enso.platform.win32'. Traceback (most recent call last): File "scripts\run_enso.py", line 23, in <module> enso.run() File "C:\oreyes\apps\enso\enso-comunity\enso\__init__.py", line 41, in run from enso.quasimode import Quasimode File "C:\oreyes\apps\enso\enso-comunity\enso\quasimode\__init__.py", line 62, in <module> from enso.quasimode.window import TheQuasimodeWindow File "C:\oreyes\apps\enso\enso-comunity\enso\quasimode\window.py", line 65, in <module> from enso.quasimode.linewindows import TextWindow File "C:\oreyes\apps\enso\enso-comunity\enso\quasimode\linewindows.py", line 44, in <module> from enso import cairo File "C:\oreyes\apps\enso\enso-comunity\enso\cairo.py", line 3, in <module> __cairoImpl = enso.providers.getInterface( "cairo" ) File "C:\oreyes\apps\enso\enso-comunity\enso\providers.py", line 137, in getInterface interface = provider.provideInterface( name ) File "C:\oreyes\apps\enso\enso-comunity\enso\platform\win32\__init__.py", line 61, in provideInterface import enso.platform.win32.cairo File "C:\oreyes\apps\enso\enso-comunity\enso\platform\win32\cairo\__init__.py", line 1, in <module> from _cairo import * ImportError: No module named _cairo
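For reference, adding the /LD switch mentioned above to a SCons build is a one-line flag append (a sketch only; CCFLAGS is the standard SCons variable, while the rest of the SConstruct is project-specific and not shown in the linked diff):

env = Environment()
env.Append(CCFLAGS=['/LD'])  # the compiler flag the answer above suggests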
load dll from python
I'm building a python application from some source code I've found Here I've managed to compile and fix some problems by searching the web, but I'm stuck at this point: When running the application this message appears. alt text http://img511.imageshack.us/img511/4481/loadfr0.png This python app uses swig to link to c/c++ code. I have VC++2005 express edition which I used to compile along with scons and Python 2.5 ( and tried 2.4 too ) The dll that is failing to load is "msvcr80.dll", because before the message was "msvcr80.dll" cannot be found or something like that, so I got it and dropped it in the system32 folder. From what I've read here: http://msdn.microsoft.com/en-us/library/ms235591(VS.80).aspx The solution is to run MT with the manifest and the dll file. I did that already and it doesn't work either. Could anyone point me in the correct direction? This is the content of the manifest file: <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> I'm going to try Python 2.6 now. I'm not quite sure I understand the problem, but Python 2.5 and Python 2.5 .exe had the string "MSVCR71.dll" inside the .exe file. But probably this has nothing to do with it. ps. if only everything was as easy as jar files :( This is the stack trace for completeness None INFO:root:Skipping provider enso.platform.osx. INFO:root:Skipping provider enso.platform.linux. INFO:root:Added provider enso.platform.win32. Traceback (most recent call last): File "scripts\run_enso.py", line 24, in <module> enso.run() File "C:\oreyes\apps\enso\enso-read-only\enso\__init__.py", line 40, in run from enso.events import EventManager File "C:\oreyes\apps\enso\enso-read-only\enso\events.py", line 60, in <module> from enso import input File "C:\oreyes\apps\enso\enso-read-only\enso\input\__init__.py", line 3, in <module> _input = enso.providers.getInterface( "input" ) File "C:\oreyes\apps\enso\enso-read-only\enso\providers.py", line 137, in getInterface interface = provider.provideInterface( name ) File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\__init__.py", line 48, in provideInterface import enso.platform.win32.input File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\input\__init__.py", line 3, in <module> from InputManager import * File "C:\oreyes\apps\enso\enso-read-only\enso\platform\win32\input\InputManager.py", line 7, in <module> import _InputManager ImportError: DLL load failed: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
[ "Looking at your update, it looks like you need to install Pycairo since you're missing the _cairo module installed as part of Pycairo. See the Pycairo downloads page for instructions on how to obtain/install binaries for Windows.\n", "You probably need to install the VC++ runtime redistributables. The links to them are here.\n", "I've been able to compile and run Enso by using /LD as a compiler flag. This links dynamically to the MS Visual C++ runtime, and seems to allow you to get away without a manifest.\nIf you're using SCons, see the diff file here: http://paste2.org/p/69732\n", "update\nI've downloaded python2.6 and VS C++ express edition 2008 and the problem with the msvcr80.dll is gone ( I assume because Python and VSC++2008xe use msvscr90.dll) \nI've compile with /LD and all the changes listed here: http://paste2.org/p/69732 \nAnd now the problem follows:\nINFO:root:Skipping provider enso.platform.osx.\nINFO:root:Skipping provider enso.platform.linux.\nINFO:root:Added provider enso.platform.win32.\nINFO:root:Obtained interface 'input' from provider 'enso.platform.win32'.\nTraceback (most recent call last):\n File \"scripts\\run_enso.py\", line 23, in <module>\n enso.run()\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\__init__.py\", line 41, in run\n from enso.quasimode import Quasimode\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\quasimode\\__init__.py\", line 62, in <module>\n from enso.quasimode.window import TheQuasimodeWindow\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\quasimode\\window.py\", line 65, in <module>\n from enso.quasimode.linewindows import TextWindow\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\quasimode\\linewindows.py\", line 44, in <module>\n from enso import cairo\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\cairo.py\", line 3, in <module>\n __cairoImpl = enso.providers.getInterface( \"cairo\" )\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\providers.py\", line 137, in getInterface\n interface = provider.provideInterface( name )\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\platform\\win32\\__init__.py\", line 61, in provideInterface\n import enso.platform.win32.cairo\n File \"C:\\oreyes\\apps\\enso\\enso-comunity\\enso\\platform\\win32\\cairo\\__init__.py\", line 1, in <module>\n from _cairo import *\nImportError: No module named _cairo\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "dynamic_linking", "python", "scons", "swig" ]
stackoverflow_0000220902_dynamic_linking_python_scons_swig.txt
Q: What is the meaning of '(?i)password' in python regular expression? Pexpect can be used to automate tasks in python (does not need TCL to be installed). One of the simplest routines of this class is the 'run()' routine. It accepts a dictionary of expected question patterns as keys and the responses as values. For example pexpect.run ('scp foo myname@host.example.com:.', events={'(?i)password': mypassword}) I know that usually '?' is used to indicate 0 or 1 occurrences of previous literal in the string (for regular expressions that is). However, over here, this does not seem to be the meaning. Can experts comment on what it is? A: https://docs.python.org/library/re.html#regular-expression-syntax (?...) This is an extension notation (a "?" following a "(" is not meaningful otherwise). The first character after the "?" determines what the meaning and further syntax of the construct is. Extensions usually do not create a new group; (?P<name>...) is the only exception to this rule. Following are the currently supported extensions. (?iLmsux) (One or more letters from the set "i", "L", "m", "s", "u", "x".) The group matches the empty string; the letters set the corresponding flags (re.I, re.L, re.M, re.S, re.U, re.X) for the entire regular expression. This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the compile() function. Note that the (?x) flag changes how the expression is parsed. It should be used first in the expression string, or after one or more whitespace characters. If there are non-whitespace characters before the flag, the results are undefined. So in this case the string is a regular expression, and is set to be case-insensitive. A: This is an extension in the regular expression syntax in the re module of Python. The "i" means "ignore case". This means a case insensitive search for "password" is done. from https://docs.python.org/library/re.html#regular-expression-syntax (?iLmsux) (One or more letters from the set "i", "L", "m", "s", "u", "x".) The group matches the empty string; the letters set the corresponding flags (re.I, re.L, re.M, re.S, re.U, re.X) for the entire regular expression. This is useful if you wish to include the flags as part of the regular expression, instead of passing a flag argument to the compile() function. Note that the (?x) flag changes how the expression is parsed. It should be used first in the expression string, or after one or more whitespace characters. If there are non-whitespace characters before the flag, the results are undefined.
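The two spellings are interchangeable; a quick demonstration with the standard re module (Python 2 print statements, matching the era of the question):

import re

print re.search('(?i)password', 'Enter Password:')     # matches despite the capital P
print re.search('password', 'Enter Password:', re.I)   # equivalent flag-argument form
print re.search('password', 'Enter Password:')         # None -- case-sensitive by default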
What is the meaning of '(?i)password' in python regular expression?
Pexpect can be used to automate tasks in python (does not need TCL to be installed). One of the simplest routines of this class is the 'run()' routine. It accepts a dictionary of expected question patterns as keys and the responses as values. For example pexpect.run ('scp foo myname@host.example.com:.', events={'(?i)password': mypassword}) I know that usually '?' is used to indicate 0 or 1 occurrences of previous literal in the string (for regular expressions that is). However, over here, this does not seem to be the meaning. Can experts comment on what it is?
[ "https://docs.python.org/library/re.html#regular-expression-syntax\n\n(?...) This is an extension\n notation (a \"?\" following a \"(\" is not\n meaningful otherwise). The first\n character after the \"?\" determines\n what the meaning and further syntax of\n the construct is. Extensions usually\n do not create a new group;\n (?P...) is the only exception to\n this rule. Following are the currently\n supported extensions. \n(?iLmsux) (One or more letters from\n the set \"i\", \"L\", \"m\", \"s\", \"u\", \"x\".)\n The group matches the empty string;\n the letters set the corresponding\n flags (re.I, re.L, re.M, re.S, re.U,\n re.X) for the entire regular\n expression. This is useful if you wish\n to include the flags as part of the\n regular expression, instead of passing\n a flag argument to the compile()\n function.\nNote that the (?x) flag changes how\n the expression is parsed. It should be\n used first in the expression string,\n or after one or more whitespace\n characters. If there are\n non-whitespace characters before the\n flag, the results are undefined.\n\nSo in this case the string is a regular expression, and is set to be case-insensitive.\n", "This is an extension in the regular expression syntax in the re module of Python. The \"i\" means \"ignore case\". This means a case insensitive search for \"password\" is done.\nfrom https://docs.python.org/library/re.html#regular-expression-syntax\n\n(?iLmsux)\n (One or more letters from the set \"i\", \"L\", \"m\", \"s\", \"u\", \"x\".) The\n group matches the empty string; the\n letters set the corresponding flags\n (re.I, re.L, re.M, re.S, re.U, re.X)\n for the entire regular expression.\n This is useful if you wish to include\n the flags as part of the regular\n expression, instead of passing a flag\n argument to the compile() function.\nNote that the (?x) flag changes how the expression is parsed. It\n should be used first in the expression\n string, or after one or more\n whitespace characters. If there are\n non-whitespace characters before the\n flag, the results are undefined.\n\n" ]
[ 10, 5 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000222536_python_regex.txt
Q: I’m stunned: weird problem with python and sockets + threads I have a python script that is a http-server: http://paste2.org/p/89701, when benchmarking it against ApacheBench (ab) with a concurrency level (-c switch) that is lower than or equal to the value I specified in the socket.listen()-call in the source code everything works fine, but as soon as I put the concurrency level in apache bench above the value in the socket.listen()-call performance drops through the floor, some examples: socket.listen(10) and ab -n 50 -c 10 http://localhost/ = 1200req/s socket.listen(10) and ab -n 50 -c 11 http://localhost/ = 40req/s socket.listen(100) and ab -n 5000 -c 100 http://localhost/ = 1000req/s socket.listen(100) and ab -n 5000 -c 101 http://localhost/ = 32req/s Nothing changes in the code between the two calls, I can’t figure out what is wrong - been at this problem for one day now. Also note that: The multiplexing version of the same code (I wrote to compare to the threaded version) works FINE no matter what socket.listen() is set to or what the concurrency (-c switch) in apache is set to. I've spent a day on IRC/python docs, posted on comp.lang.python and on my blog - I can't find ANYONE that even has an idea what could be wrong. Help me! A: I cannot confirm your results, and your server is coded fishy. I whipped up my own server and do not have this problem either. Let's move the discussion to a simpler level: import thread, socket, Queue connections = Queue.Queue() num_threads = 10 backlog = 10 def request(): while 1: conn = connections.get() data = '' while '\r\n\r\n' not in data: data += conn.recv(4048) conn.sendall('HTTP/1.1 200 OK\r\n\r\nHello World') conn.close() if __name__ == '__main__': for _ in range(num_threads): thread.start_new_thread(request, ()) acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM) acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) acceptor.bind(('', 1234)) acceptor.listen(backlog) while 1: conn, addr = acceptor.accept() connections.put(conn) which on my machine does: ab -n 10000 -c 10 http://127.0.0.1:1234/ --> 8695.03 [#/sec] ab -n 10000 -c 11 http://127.0.0.1:1234/ --> 8529.41 [#/sec] A: For the heck of it I also implemented an asynchronous version: import socket, Queue, select class Request(object): def __init__(self, conn): self.conn = conn self.fileno = conn.fileno self.perform = self._perform().next def _perform(self): data = self.conn.recv(4048) while '\r\n\r\n' not in data: msg = self.conn.recv(4048) if msg: data += msg yield else: break reading.remove(self) writing.append(self) data = 'HTTP/1.1 200 OK\r\n\r\nHello World' while data: sent = self.conn.send(data) data = data[sent:] yield writing.remove(self) self.conn.close() class Acceptor: def __init__(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(('', 1234)) sock.listen(10) self.sock = sock self.fileno = sock.fileno def perform(self): conn, addr = self.sock.accept() reading.append(Request(conn)) if __name__ == '__main__': reading = [Acceptor()] writing = list() while 1: readable, writable, error = select.select(reading, writing, []) for action in readable + writable: try: action.perform() except StopIteration: pass which performs: ab -n 10000 -c 10 http://127.0.0.1:1234/ --> 16822.13 [#/sec] ab -n 10000 -c 11 http://127.0.0.1:1234/ --> 15704.41 [#/sec] A: I found this article on backlog on tomcat / java which gives an interesting insight in the backlog: for example, if all threads are busy in java 
handling requests, the kernel will handle SYN and TCP handshakes until its backlog is full. when the backlog is full, it will simply drop future SYN requests. it will not send a RST, ie causing "Connection refused" on the client, instead the client will assume the package was lost and retransmit the SYN. hopefully, the backlog queue will have cleared up by then. As I interpret it, by asking ab to create more simultaneous connections than your socket is configured to handle, packets get dropped, not refused, and I do not know how ab handles that. It may be that it retransmits the SYN, but possibly after waiting a while. This may even be specced somewhere (TCP protocol?). As said, I do not know but I hope this hints at the cause. Good luck! A: it looks like you're not really getting concurrency. apparently, when you do socket.accept(), the main thread doesn't go immediately back to waiting for the next connection. maybe your connection-handling thread is only python code, so you're getting sequentialized by the GIL (global interpreter lock). if there's not heavy communications between threads, better use a multi-process scheme (with a pool of pre-spawned processes, of course) A: Ok, so I ran the code on a totally different server - (a vps I got at slicehost), not a single problem (everything works as expected) so honestly I think it's something wrong with my laptop now ;p Thanks for everyone's help though!
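The multi-process suggestion in the second-to-last answer can be sketched as a pre-forked pool sharing one listening socket (a Unix-only toy illustration, not the original paste2 server; the request handling mirrors the threaded example above):

import os, socket

acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
acceptor.bind(('', 1234))
acceptor.listen(100)

for _ in range(4):            # pre-spawn four worker processes
    if os.fork() == 0:        # each child inherits the listening socket
        while 1:
            conn, addr = acceptor.accept()
            data = ''
            while '\r\n\r\n' not in data:
                data += conn.recv(4048)
            conn.sendall('HTTP/1.1 200 OK\r\n\r\nHello World')
            conn.close()

os.wait()                     # parent just parks here; children run forever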
I’m stunned: weird problem with python and sockets + threads
I have a python script that is a http-server: http://paste2.org/p/89701, when benchmarking it against ApacheBench (ab) with a concurrency level (-c switch) that is lower than or equal to the value I specified in the socket.listen()-call in the source code everything works fine, but as soon as I put the concurrency level in apache bench above the value in the socket.listen()-call performance drops through the floor, some examples: socket.listen(10) and ab -n 50 -c 10 http://localhost/ = 1200req/s socket.listen(10) and ab -n 50 -c 11 http://localhost/ = 40req/s socket.listen(100) and ab -n 5000 -c 100 http://localhost/ = 1000req/s socket.listen(100) and ab -n 5000 -c 101 http://localhost/ = 32req/s Nothing changes in the code between the two calls, I can’t figure out what is wrong - been at this problem for one day now. Also note that: The multiplexing version of the same code (I wrote to compare to the threaded version) works FINE no matter what socket.listen() is set to or what the concurrency (-c switch) in apache is set to. I've spent a day on IRC/python docs, posted on comp.lang.python and on my blog - I can't find ANYONE that even has an idea what could be wrong. Help me!
[ "I cannot confirm your results, and your server is coded fishy. I whipped up my own server and do not have this problem either. Let's move the discussion to a simpler level:\nimport thread, socket, Queue\n\nconnections = Queue.Queue()\nnum_threads = 10\nbacklog = 10\n\ndef request():\n while 1:\n conn = connections.get()\n data = ''\n while '\\r\\n\\r\\n' not in data:\n data += conn.recv(4048)\n conn.sendall('HTTP/1.1 200 OK\\r\\n\\r\\nHello World')\n conn.close()\n\nif __name__ == '__main__':\n for _ in range(num_threads):\n thread.start_new_thread(request, ())\n\n acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n acceptor.bind(('', 1234))\n acceptor.listen(backlog)\n while 1:\n conn, addr = acceptor.accept()\n connections.put(conn)\n\nwhich on my machine does:\nab -n 10000 -c 10 http://127.0.0.1:1234/ --> 8695.03 [#/sec]\nab -n 10000 -c 11 http://127.0.0.1:1234/ --> 8529.41 [#/sec]\n\n", "For the heck of it I also implemented an asynchronous version:\nimport socket, Queue, select\n\nclass Request(object):\n def __init__(self, conn):\n self.conn = conn\n self.fileno = conn.fileno\n self.perform = self._perform().next\n\n def _perform(self):\n data = self.conn.recv(4048)\n while '\\r\\n\\r\\n' not in data:\n msg = self.conn.recv(4048)\n if msg:\n data += msg\n yield\n else:\n break\n reading.remove(self)\n writing.append(self)\n\n data = 'HTTP/1.1 200 OK\\r\\n\\r\\nHello World'\n while data:\n sent = self.conn.send(data)\n data = data[sent:]\n yield\n writing.remove(self)\n self.conn.close()\n\nclass Acceptor:\n def __init__(self):\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n sock.bind(('', 1234))\n sock.listen(10)\n self.sock = sock\n self.fileno = sock.fileno\n\n def perform(self):\n conn, addr = self.sock.accept()\n reading.append(Request(conn))\n\nif __name__ == '__main__':\n reading = [Acceptor()]\n writing = list()\n\n while 1:\n readable, writable, error = select.select(reading, writing, [])\n for action in readable + writable:\n try: action.perform()\n except StopIteration: pass\n\nwhich performs:\nab -n 10000 -c 10 http://127.0.0.1:1234/ --> 16822.13 [#/sec]\nab -n 10000 -c 11 http://127.0.0.1:1234/ --> 15704.41 [#/sec]\n\n", "I found this article on backlog on tomcat / java which gives an interesting insight in the backlog:\n\nfor example, if all threads are busy\n in java handling requests, the kernel\n will handle SYN and TCP handshakes\n until its backlog is full. when the\n backlog is full, it will simply drop\n future SYN requests. it will not send\n a RST, ie causing \"Connection refused\"\n on the client, instead the client will\n assume the package was lost and\n retransmit the SYN. hopefully, the\n backlog queue will have cleared up by\n then.\n\nAs I interpret it, by asking ab to create more simultaneous connection than your\nsocket is configured to handle packets get dropped, not refused, and I do not know\nhow ab handles that. It may be that it retransmits the SYN, but possibly after waiting\na while. This may even be specced somewhere (TCP protocol?).\nAs said, I do not know but I hope this hints at the cause. \nGood luck!\n", "it looks like you're not really getting concurrency. apparently, when you do socket.accept(), the main thread doesn't go immediately back to waiting for the next connection. 
maybe your connection-handling thread is only python code, so you're getting sequentialized by the GIL (global interpreter lock).\nif there's not heavy communications between threads, better use a multi-process scheme (with a pool of pre-spawned processes, of course)\n", "Ok, so I ran the code on a totally different server - (a vps I got at slicehost), not a single problem (everything works as expected) so honestly I think it's something wrong with my laptop now ;p \nThanks for everyone's help though!\n" ]
[ 7, 4, 0, 0, 0 ]
[]
[]
[ "apache", "multithreading", "python", "sockets" ]
stackoverflow_0000219547_apache_multithreading_python_sockets.txt
Q: Sorting a tuple that contains tuples I have the following tuple, which contains tuples: MY_TUPLE = ( ('A','Apple'), ('C','Carrot'), ('B','Banana'), ) I'd like to sort this tuple based upon the second value contained in inner-tuples (i.e., sort Apple, Carrot, Banana rather than A, B, C). Any thoughts? A: from operator import itemgetter MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=itemgetter(1))) or without itemgetter: MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=lambda item: item[1])) A: From Sorting Mini-HOW TO Often there's a built-in that will match your needs, such as str.lower(). The operator module contains a number of functions useful for this purpose. For example, you can sort tuples based on their second element using operator.itemgetter(): >>> import operator >>> L = [('c', 2), ('d', 1), ('a', 4), ('b', 3)] >>> map(operator.itemgetter(0), L) ['c', 'd', 'a', 'b'] >>> map(operator.itemgetter(1), L) [2, 1, 4, 3] >>> sorted(L, key=operator.itemgetter(1)) [('d', 1), ('c', 2), ('b', 3), ('a', 4)] Hope this helps. A: sorted(my_tuple, key=lambda tup: tup[1]) In other words, when comparing two elements of the tuple you're sorting, sort based on the return value of the function passed as the key parameter.
Sorting a tuple that contains tuples
I have the following tuple, which contains tuples: MY_TUPLE = ( ('A','Apple'), ('C','Carrot'), ('B','Banana'), ) I'd like to sort this tuple based upon the second value contained in inner-tuples (i.e., sort Apple, Carrot, Banana rather than A, B, C). Any thoughts?
[ "from operator import itemgetter\n\nMY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=itemgetter(1)))\n\nor without itemgetter:\nMY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=lambda item: item[1]))\n\n", "From Sorting Mini-HOW TO\n\nOften there's a built-in that will\n match your needs, such as str.lower().\n The operator module contains a number\n of functions useful for this purpose.\n For example, you can sort tuples based\n on their second element using\n operator.itemgetter():\n\n>>> import operator \n>>> L = [('c', 2), ('d', 1), ('a', 4), ('b', 3)]\n>>> map(operator.itemgetter(0), L)\n['c', 'd', 'a', 'b']\n>>> map(operator.itemgetter(1), L)\n[2, 1, 4, 3]\n>>> sorted(L, key=operator.itemgetter(1))\n[('d', 1), ('c', 2), ('b', 3), ('a', 4)]\n\nHope this helps.\n", "sorted(my_tuple, key=lambda tup: tup[1])\n\nIn other words, when comparing two elements of the tuple you're sorting, sort based on the return value of the function passed as the key parameter.\n" ]
[ 25, 7, 2 ]
[ "I achieved the same thing using this code, but your suggestion is great. Thanks!\ntemplist = [ (line[1], line) for line in MY_TUPLE ] \ntemplist.sort()\nSORTED_MY_TUPLE = [ line[1] for line in templist ]\n\n" ]
[ -2 ]
[ "python", "sorting", "tuples" ]
stackoverflow_0000222752_python_sorting_tuples.txt
Q: cherrypy not closing the sockets I am using cherrypy as a webserver. It gives good performance for my application but there is a very big problem with it. cherrypy crashes after a couple of hours stating that it could not create a socket as there are too many files open: [21/Oct/2008:12:44:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down [21/Oct/2008:12:44:25] ENGINE Stopped thread '_TimeoutMonitor'. [21/Oct/2008:12:44:25] ENGINE Stopped thread 'Autoreloader'. [21/Oct/2008:12:44:25] ENGINE Bus STOPPED [21/Oct/2008:12:44:25] ENGINE Bus EXITING [21/Oct/2008:12:44:25] ENGINE Bus EXITED Exception in thread HTTPServer Thread-3: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 436, in __bootstrap self.run() File "/usr/lib/python2.3/threading.py", line 416, in run self.__target(*self.__args, **self.__kwargs) File "/usr/lib/python2.3/site-packages/cherrypy/process/servers.py", line 73, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1388, in start self.tick() File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1417, in tick s, addr = self.socket.accept() File "/usr/lib/python2.3/socket.py", line 167, in accept sock, addr = self._sock.accept() error: (24, 'Too many open files') [21/Oct/2008:12:44:25] ENGINE Waiting for child threads to terminate.. I tried to figure out what was happening. My application does not open any file or any socket etc. My code only opens a couple of Berkeley DBs. I investigated this issue further. I saw the file descriptors used by my cherrypy process with id 4536 in /proc/4536/fd/ Initially there were new sockets created and cleaned up properly but after an hour I found that it had about 509 sockets that were not cleaned. All the sockets were in CLOSE_WAIT state. I got this information using the following command: netstat -ap | grep "4536" | grep CLOSE_WAIT | wc -l CLOSE_WAIT state means that the remote client has closed the connection. Why is cherrypy then not closing the socket and freeing the file descriptors? What can I do to resolve the problem? I tried to play with the following: cherrypy.config.update({'server.socketQueueSize': '10'}) I thought that this would restrict the number of sockets open at any time to 10 but it was not effective at all. This is the only config I have set, so the rest of the configs hold their default values. Could somebody throw light on this? Do you think it's a bug in cherrypy? How can I resolve it? Is there a way I can close these sockets myself? Following is my system info: CherryPy-3.1.0 python 2.3.4 Red Hat Enterprise Linux ES release 4 (Nahant Update 7) Thanks in advance! A: I imagine you're storing (in-memory) some piece of data which has a reference to the socket; if you store the request objects anywhere, for instance, that would likely do it. The last-ditch chance for sockets to be closed is when they're garbage-collected; if you're doing anything that would prevent garbage collection from reaching them, there's your problem. I suggest that you try to reproduce with a Hello World program written in CherryPy; if you can't reproduce there, you know it's in your code -- look for places where you're persisting information which could (directly or otherwise) reference the socket.
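Following the reproduction advice in the answer, the smallest CherryPy 3.x program to test against looks like this (standard quickstart API; if ab against this leaks no CLOSE_WAIT sockets, the leak is in the application code):

import cherrypy

class Root:
    def index(self):
        return "Hello world!"
    index.exposed = True

cherrypy.quickstart(Root())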
cherrypy not closing the sockets
I am using cherrypy as a webserver. It gives good performance for my application but there is a very big problem with it. cherrypy crashes after a couple of hours stating that it could not create a socket as there are too many files open: [21/Oct/2008:12:44:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down [21/Oct/2008:12:44:25] ENGINE Stopped thread '_TimeoutMonitor'. [21/Oct/2008:12:44:25] ENGINE Stopped thread 'Autoreloader'. [21/Oct/2008:12:44:25] ENGINE Bus STOPPED [21/Oct/2008:12:44:25] ENGINE Bus EXITING [21/Oct/2008:12:44:25] ENGINE Bus EXITED Exception in thread HTTPServer Thread-3: Traceback (most recent call last): File "/usr/lib/python2.3/threading.py", line 436, in __bootstrap self.run() File "/usr/lib/python2.3/threading.py", line 416, in run self.__target(*self.__args, **self.__kwargs) File "/usr/lib/python2.3/site-packages/cherrypy/process/servers.py", line 73, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1388, in start self.tick() File "/usr/lib/python2.3/site-packages/cherrypy/wsgiserver/__init__.py", line 1417, in tick s, addr = self.socket.accept() File "/usr/lib/python2.3/socket.py", line 167, in accept sock, addr = self._sock.accept() error: (24, 'Too many open files') [21/Oct/2008:12:44:25] ENGINE Waiting for child threads to terminate.. I tried to figure out what was happening. My application does not open any file or any socket etc. My code only opens a couple of Berkeley DBs. I investigated this issue further. I saw the file descriptors used by my cherrypy process with id 4536 in /proc/4536/fd/ Initially there were new sockets created and cleaned up properly but after an hour I found that it had about 509 sockets that were not cleaned. All the sockets were in CLOSE_WAIT state. I got this information using the following command: netstat -ap | grep "4536" | grep CLOSE_WAIT | wc -l CLOSE_WAIT state means that the remote client has closed the connection. Why is cherrypy then not closing the socket and freeing the file descriptors? What can I do to resolve the problem? I tried to play with the following: cherrypy.config.update({'server.socketQueueSize': '10'}) I thought that this would restrict the number of sockets open at any time to 10 but it was not effective at all. This is the only config I have set, so the rest of the configs hold their default values. Could somebody throw light on this? Do you think it's a bug in cherrypy? How can I resolve it? Is there a way I can close these sockets myself? Following is my system info: CherryPy-3.1.0 python 2.3.4 Red Hat Enterprise Linux ES release 4 (Nahant Update 7) Thanks in advance!
[ "I imagine you're storing (in-memory) some piece of data which has a reference to the socket; if you store the request objects anywhere, for instance, that would likely do it.\nThe last-ditch chance for sockets to be closed is when they're garbage-collected; if you're doing anything that would prevent garbage collection from reaching them, there's your problem. I suggest that you try to reproduce with a Hello World program written in CherryPy; if you can't reproduce there, you know it's in your code -- look for places where you're persisting information which could (directly or otherwise) reference the socket.\n" ]
[ 4 ]
[]
[]
[ "cherrypy", "python", "sockets" ]
stackoverflow_0000222736_cherrypy_python_sockets.txt
Q: Python code generator for Visual Studio? I had an idea: if I add a python .py file to my C# project, and tag the file with a custom generator that would execute the python file and treat the output as the result of the code generation, i.e. put it into a C# file, that would allow me to do quite a lot of code generation as part of the build process. Does anyone know if such a custom generator for Visual Studio 2008 exists? A: I think Cog does what you want. A: I recall that in previous versions of VS, there was a way to add custom build steps to the build process. I used that a lot to do exactly the kind of automated code generation you describe. I imagine the custom build step feature is still there in 2008. A: OK, I see. Well, as far as I know there isn't any code generator for Python. There is a good introduction on how to roll your own here. Actually, that's quite an under-used part of the environment, I suppose it's so because it needs you to use the IDE to compile the project, as it'd seem only the IDE knows about these "generators", but MSBuild ignores them. A: I don't understand what you are trying to do here. Are you trying to execute a Python script that generates a C# file and then compile that with the project? Or are you trying to compile a Python script to C#? A: I dug through my old bookmarks (I love Del.icio.us!) and found this article: Code Generation with Python, Cog, and Nant. Keep in mind that anything you can do in NAnt can probably be done in MSBuild as well. This should be enough to get you started.
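A concrete picture of the idea: the custom tool would run the .py file and capture its stdout into the generated .cs file, so the generator script itself is just Python printing C# (everything below is a hypothetical illustration, not an existing tool):

# enums.py -- its stdout would become enums.cs when run by the custom tool
print 'namespace Generated {'
for name in ['Red', 'Green', 'Blue']:
    print '    public static class %s {}' % name
print '}'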
Python code generator for Visual Studio?
I had an idea: if I add a python .py file to my C# project, and tag the file with a custom generator that would execute the python file and treat the output as the result of the code generation, i.e. put it into a C# file, that would allow me to do quite a lot of code generation as part of the build process. Does anyone know if such a custom generator for Visual Studio 2008 exists?
[ "I think Cog does what you want.\n", "I recall that in previous versions of VS, there was a way to add custom build steps to the build process. I used that a lot to do exactly the kind of automated code generation you describe.\nI imagine the custom build step feature is still there in 2008.\n", "OK, I see. Well, as far as I know there isn't any code generator for Python. There is a good introduction on how to roll your own here.\nActually, that's quite an under-used part of the environment, I suppose it's so because it needs you to use the IDE to compile the project, as it'd seem only the IDE knows about these \"generators\", but MSBuild ignores them.\n", "I don't understand what you are trying to do here. Are you trying to execute a Python script that generates a C# file and then compile that with the project? Or are you trying to compile a Python script to C#?\n", "I dug through my old bookmarks (I love Del.icio.us!) and found this article: Code Generation with Python, Cog, and Nant. Keep in mind that anything you can do in NAnt can probably be done in MSBuild as well. This should be enough to get you started.\n" ]
[ 4, 2, 2, 1, 1 ]
[]
[]
[ "code_generation", "python", "visual_studio_2008" ]
stackoverflow_0000024193_code_generation_python_visual_studio_2008.txt
Q: When does urllib2 actually download a file from a url? url = "http://example.com/file.xml" data = urllib2.urlopen(url) data.read() The question is, when exactly will the file be downloaded from the internet? When I do urlopen or .read()? On my network interface I see high traffic both times. A: Without looking at the code, I'd expect that the following happens: urlopen() opens the connection, and sends the query. Then the server starts feeding the reply. At this point, the data accumulates in buffers until they are full and the operating system tells the server to hold on for a while. Then data.read() empties the buffer, so the operating system tells the server to go on, and the rest of the reply gets downloaded. Naturally, if the reply is short enough, or if the .read() happens quickly enough, then the buffers do not have time to fill up and the download happens in one go. A: I agree with ddaa. However, if you want to understand this sort of thing, you can set up a dummy server using something like nc (in *nix) and then open the URL in the interactive Python interpreter. In one terminal, run nc -l 1234 which will open a socket and listen for connections on port 1234 of the local machine. nc will accept an incoming connection and display whatever it reads from the socket. Anything you type into nc will be sent over the socket to the remote connection, in this case Python's urlopen(). Run Python in another terminal and enter your code, i.e. data = urllib2.urlopen('http://127.0.0.1:1234') data.read() The call to urlopen() will establish the connection to the server, send the request and then block waiting for a response. You will see that nc prints the HTTP request into its terminal. Now type something into the terminal that is running nc. The call to urlopen() will still block until you press ENTER in nc, that is, until it receives a new line character. So urlopen() will not return until it has read at least one new line character. (For those concerned about possible buffering by nc, this is not an issue. urlopen() will block until it sees the first new line character.) So it should be noted that urlopen() will block until the first new line character is received, after which data can be read from the connection. In practice, HTTP responses are short multiline responses, so urlopen() should return quite quickly.
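The buffer-draining behaviour described in the first answer is easiest to observe by reading in fixed-size chunks instead of all at once (same urllib2 API as the question; the chunk size is arbitrary):

import urllib2

data = urllib2.urlopen("http://example.com/file.xml")
while True:
    chunk = data.read(8192)   # each read drains the local buffer, letting more data flow in
    if not chunk:
        break
    # process chunk here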
When does urllib2 actually download a file from a url?
url = "http://example.com/file.xml" data = urllib2.urlopen(url) data.read() The question is, when exactly will the file be downloaded from the internet? When i do urlopen or .read()? On my network interface I see high traffic both times.
[ "Witout looking at the code, I'd expect that the following happens:\n\nurlopen() opens the connection, and sends the query. Then the server starts feeding the reply. At this point, the data accumulates in buffers until they are full and the operating system tells the server to hold on for a while.\nThen data.read() empties the buffer, so the operating system tells the server to go on, and the rest of the reply gets downloaded.\n\nNaturally, if the reply is short enough, or if the .read() happens quickly enough, then the buffers do not have time to fill up and the download happens in one go.\n", "I agree with ddaa. However, if you want to understand this sort of thing, you can set up a dummy server using something like nc (in *nix) and then open the URL in the interactive Python interpreter.\nIn one terminal, run nc -l 1234 which will open a socket and listen for connections on port 1234 of the local machine. nc will accept an incoming connection and display whatever it reads from the socket. Anything you type into nc will be sent over the socket to the remote connection, in this case Python's urlopen().\nRun Python in another terminal and enter your code, i.e.\ndata = urllib2.urlopen('http://127.0.0.1:1234')\ndata.read()\n\nThe call to urlopen() will establish the connection to the server, send the request and then block waiting for a response. You will see that nc prints the HTTP request into it's terminal.\nNow type something into the terminal that is running nc. The call to urlopen() will still block until you press ENTER in nc, that is, until it receives a new line character. So urlopen() will not return until it has read at least one new line character. (For those concerned about possible buffering by nc, this is not an issue. urlopen() will block until it sees the first new line character.)\nSo it should be noted that urlopen() will block until the first new line character is received, after which data can be read from the connection. In practice, HTTP responses are short multiline responses, so urlopen() should return quite quickly.\n" ]
[ 5, 5 ]
[]
[]
[ "python" ]
stackoverflow_0000223356_python.txt
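A minimal sketch of how the two phases described in the answers above could be observed, timing urlopen() (connect, send request, first buffered bytes) separately from read() (draining the rest of the reply); the URL here is just a placeholder:

import time
import urllib2

url = "http://example.com/file.xml"  # placeholder URL

start = time.time()
data = urllib2.urlopen(url)          # opens the connection and sends the request
t_open = time.time() - start

start = time.time()
body = data.read()                   # pulls the remaining bytes out of the buffers
t_read = time.time() - start

print "urlopen: %.3fs, read: %.3fs, %d bytes" % (t_open, t_read, len(body))

For a reply small enough to fit in the buffers, most of the time tends to land in urlopen(), consistent with the buffering explanation above.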
Q: What would be a better implementation of all combinations in lexicographic order of a jagged list? I was put in a position today in which I needed to enumerate all possible combinations of a jagged list. For instance, a naive approach would be: for a in [1,2,3]: for b in [4,5,6,7,8,9]: for c in [1,2]: yield (a,b,c) This is functional, but not general in terms of the number of lists that can be used. Here is a more generalized approach: from numpy import zeros, array, nonzero, max make_subset = lambda x,y: [x[i][j] for i,j in enumerate(y)] def combinations(items): num_items = [len(i) - 1 for i in items] state = zeros(len(items), dtype=int) finished = array(num_items, dtype=int) yield make_subset(items, state) while True: if state[-1] != num_items[-1]: state[-1] += 1 yield make_subset(items, state) else: incrementable = nonzero(state != finished)[0] if not len(incrementable): raise StopIteration rightmost = max(incrementable) state[rightmost] += 1 state[rightmost+1:] = 0 yield make_subset(items, state) Any recommendations on a better approach or reasons against the above approach? A: The naive approach can be written more compactly as a generator expression: ((a,b,c) for a in [1,2,3] for b in [4,5,6,7,8,9] for c in [1,2]) The general approach can be written much more simply using a recursive function: def combinations(*seqs): if not seqs: return (item for item in ()) first, rest = seqs[0], seqs[1:] if not rest: return ((item,) for item in first) return ((item,) + items for item in first for items in combinations(*rest)) Sample usage: >>> for pair in combinations('abc', [1,2,3]): ... print pair ... ('a', 1) ('a', 2) ('a', 3) ('b', 1) ('b', 2) ('b', 3) ('c', 1) ('c', 2) ('c', 3)
What would be a better implementation of all combinations in lexicographic order of a jagged list?
I was put in a position today in which I needed to enumerate all possible combinations of a jagged list. For instance, a naive approach would be: for a in [1,2,3]: for b in [4,5,6,7,8,9]: for c in [1,2]: yield (a,b,c) This is functional, but not general in terms of the number of lists that can be used. Here is a more generalized approach: from numpy import zeros, array, nonzero, max make_subset = lambda x,y: [x[i][j] for i,j in enumerate(y)] def combinations(items): num_items = [len(i) - 1 for i in items] state = zeros(len(items), dtype=int) finished = array(num_items, dtype=int) yield make_subset(items, state) while True: if state[-1] != num_items[-1]: state[-1] += 1 yield make_subset(items, state) else: incrementable = nonzero(state != finished)[0] if not len(incrementable): raise StopIteration rightmost = max(incrementable) state[rightmost] += 1 state[rightmost+1:] = 0 yield make_subset(items, state) Any recommendations on a better approach or reasons against the above approach?
[ "The naive approach can be written more compactly as a generator expression:\n((a,b,c) for a in [1,2,3] for b in [4,5,6,7,8,9] for c in [1,2])\n\nThe general approach can be written much more simply using a recursive function:\ndef combinations(*seqs):\n if not seqs: return (item for item in ())\n first, rest = seqs[0], seqs[1:]\n if not rest: return ((item,) for item in first)\n return ((item,) + items for item in first for items in combinations(*rest))\n\nSample usage:\n>>> for pair in combinations('abc', [1,2,3]):\n... print pair\n... \n('a', 1)\n('a', 2)\n('a', 3)\n('b', 1)\n('b', 2)\n('b', 3)\n('c', 1)\n('c', 2)\n('c', 3)\n\n" ]
[ 6 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0000224145_algorithm_python.txt
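For reference, on Python 2.6 and later the standard library covers this directly: itertools.product yields the tuples in the same lexicographic order, so neither hand-rolled version is needed there.

from itertools import product

for combo in product([1, 2, 3], [4, 5, 6, 7, 8, 9], [1, 2]):
    print combo   # (1, 4, 1), (1, 4, 2), (1, 5, 1), ...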
Q: Including PYDs/DLLs in py2exe builds One of the modules for my app uses functions from a .pyd file. There's an option to exclude dlls (exclude_dlls) but is there one for including them? The build process doesn't seem to be copying the .pyd in my module despite copying the rest of the files (.py). I also need to include a .dll. How do I get py2exe to include both .pyd and .dll files? A: .pyd's and .DLL's are different here, in that a .pyd ought to be automatically found by modulefinder and so included (as long as you have the appropriate "import" statement) without needing to do anything. If one is missed, you do the same thing as if a .py file was missed (they're both just modules): use the "include" option for the py2exe options. Modulefinder will not necessarily find dependencies on .DLLs (py2exe can detect some), so you may need to explicitly include these, with the 'data_files' option. For example, where you had two .DLL's ('foo.dll' and 'bar.dll') to include, and three .pyd's ('module1.pyd', 'module2.pyd', and 'module3.pyd') to include: setup(name='App', # other options, data_files=[('.', ['foo.dll', 'bar.dll'])], options = {"py2exe" : {"includes" : "module1,module2,module3"}} ) A: If they're not being automatically detected, try manually copying them into py2exe's temporary build directory. They will be included in the final executable. A: You can modify the setup script to copy the files explicitly: script = "PyInvaders.py" #name of starting .PY project_name = os.path.splitext(os.path.split(script)[1])[0] setup(name=project_name, scripts=[script]) #this installs the program #also need to hand copy the extra files here def installfile(name): dst = os.path.join('dist', project_name) print 'copying', name, '->', dst if os.path.isdir(name): dst = os.path.join(dst, name) if os.path.isdir(dst): shutil.rmtree(dst) shutil.copytree(name, dst) elif os.path.isfile(name): shutil.copy(name, dst) else: print 'Warning, %s not found' % name pygamedir = os.path.split(pygame.base.__file__)[0] installfile(os.path.join(pygamedir, pygame.font.get_default_font())) installfile(os.path.join(pygamedir, 'pygame_icon.bmp')) for data in extra_data: installfile(data) etc... modify to suit your needs, of course. A: Maybe you could use the data_files option to setup(): import glob setup(name='MyApp', # other options, data_files=[('.', glob.glob('*.dll')), ('.', glob.glob('*.pyd'))], ) data_files should be a list of tuples, where each tuple contains: The target directory. A list of files to copy. This won't put the files into library.zip, which shouldn't be a problem for dlls, but I don't know about pyd files.
Including PYDs/DLLs in py2exe builds
One of the modules for my app uses functions from a .pyd file. There's an option to exclude dlls (exclude_dlls) but is there one for including them? The build process doesn't seem to be copying the .pyd in my module despite copying the rest of the files (.py). I also need to include a .dll. How do I get py2exe to include both .pyd and .dll files?
[ ".pyd's and .DLL's are different here, in that a .pyd ought to be automatically found by modulefinder and so included (as long as you have the appropriate \"import\" statement) without needing to do anything. If one is missed, you do the same thing as if a .py file was missed (they're both just modules): use the \"include\" option for the py2exe options.\nModulefinder will not necessarily find dependencies on .DLLs (py2exe can detect some), so you may need to explicitly include these, with the 'data_files' option.\nFor example, where you had two .DLL's ('foo.dll' and 'bar.dll') to include, and three .pyd's ('module1.pyd', 'module2.pyd', and 'module3.pyd') to include:\nsetup(name='App',\n # other options,\n data_files=[('.', 'foo.dll'), ('.', 'bar.dll')],\n options = {\"py2exe\" : {\"includes\" : \"module1,module2,module3\"}}\n )\n\n", "If they're not being automatically detected, try manually copying them into py2exe's temporary build directory. They will be included in the final executable.\n", "You can modify the setup script to copy the files explicitly:\nscript = \"PyInvaders.py\" #name of starting .PY\nproject_name = os.path.splitext(os.path.split(script)[1])[0]\nsetup(name=project_name, scripts=[script]) #this installs the program\n\n#also need to hand copy the extra files here\ndef installfile(name):\n dst = os.path.join('dist', project_name)\n print 'copying', name, '->', dst\n if os.path.isdir(name):\n dst = os.path.join(dst, name)\n if os.path.isdir(dst):\n shutil.rmtree(dst)\n shutil.copytree(name, dst)\n elif os.path.isfile(name):\n shutil.copy(name, dst)\n else:\n print 'Warning, %s not found' % name\n\npygamedir = os.path.split(pygame.base.__file__)[0]\ninstallfile(os.path.join(pygamedir, pygame.font.get_default_font()))\ninstallfile(os.path.join(pygamedir, 'pygame_icon.bmp'))\nfor data in extra_data:\n installfile(data)\n\netc... modify to suit your needs, of course.\n", "Maybe you could use the data_files option to setup():\nimport glob\nsetup(name='MyApp',\n # other options,\n data_files=[('.', glob.glob('*.dll')),\n ('.', glob.glob('*.pyd'))],\n )\n\ndata_files should be a list of tuples, where each tuple contains:\n\nThe target directory.\nA list of files to copy.\n\nThis won't put the files into library.zip, which shouldn't be a problem for dlls, but I don't know about pyd files.\n" ]
[ 12, 2, 2, 2 ]
[]
[]
[ "dll", "installation", "py2exe", "pyd", "python" ]
stackoverflow_0000220777_dll_installation_py2exe_pyd_python.txt
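A minimal combined setup.py sketch along the lines of the first answer; the entry script name here is hypothetical, and each data_files entry pairs a target directory with a list of file names, as distutils expects:

from distutils.core import setup
import py2exe

setup(
    name='App',
    console=['app.py'],  # hypothetical entry script
    data_files=[('.', ['foo.dll', 'bar.dll'])],
    options={'py2exe': {'includes': ['module1', 'module2', 'module3']}},
)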
Q: Best approach with dynamic classes using Python globals() I'm working on a web application that will return a variable set of modules depending on user input. Each module is a Python class with a constructor that accepts a single parameter and has an '.html' property that contains the output. Pulling the class dynamically from the global namespace works: result = globals()[classname](param).html And it's certainly more succinct than: if classname == 'Foo': result = Foo(param).html elif classname == 'Bar': ... What is considered the best way to write this, stylistically? Are there risks or reasons not to use the global namespace? A: A flaw with this approach is that it may give the user the ability to do more than you want them to. They can call any single-parameter function in that namespace just by providing the name. You can help guard against this with a few checks (eg. isinstance(SomeBaseClass, theClass), but it's probably better to avoid this approach. Another disadvantage is that it constrains your class placement. If you end up with dozens of such classes and decide to group them into modules, your lookup code will stop working. You have several alternative options: Create an explicit mapping: class_lookup = {'Class1' : Class1, ... } ... result = class_lookup[className](param).html though this has the disadvantage that you have to re-list all the classes. Nest the classes in an enclosing scope. Eg. define them within their own module, or within an outer class: class Namespace(object): class Class1(object): ... class Class2(object): ... ... result = getattr(Namespace, className)(param).html You do inadvertently expose a couple of additional class variables here though (__bases__, __getattribute__ etc) - probably not exploitable, but not perfect. Construct a lookup dict from the subclass tree. Make all your classes inherit from a single baseclass. When all classes have been created, examine all baseclasses and populate a dict from them. This has the advantage that you can define your classes anywhere (eg. in separate modules), and so long as you create the registry after all are created, you will find them. def register_subclasses(base): d={} for cls in base.__subclasses__(): d[cls.__name__] = cls d.update(register_subclasses(cls)) return d class_lookup = register_subclasses(MyBaseClass) A more advanced variation on the above is to use self-registering classes - create a metaclass that automatically registers any created classes in a dict. This is probably overkill for this case - it's useful in some "user-plugins" scenarios though. A: First of all, it sounds like you may be reinventing the wheel a little bit... most Python web frameworks (CherryPy/TurboGears is what I know) already include a way to dispatch requests to specific classes based on the contents of the URL, or the user input. There is nothing wrong with the way that you do it, really, but in my experience it tends to indicate some kind of "missing abstraction" in your program. You're basically relying on the Python interpreter to store a list of the objects you might need, rather than storing it yourself. So, as a first step, you might want to just make a dictionary of all the classes that you might want to call: dispatch = {'Foo': Foo, 'Bar': Bar, 'Bizbaz': Bizbaz} Initially, this won't make much of a difference. But as your web app grows, you may find several advantages: (a) you won't run into namespace clashes, (b) using globals() you may have security issues where an attacker can, in essence, access any global symbol in your program if they can find a way to inject an arbitrary classname into your program, (c) if you ever want to have classname be something other than the actual exact classname, using your own dictionary will be more flexible, (d) you can replace the dispatch dictionary with a more-flexible user-defined class that does database access or something like that if you find the need. The security issues are particularly salient for a web app. Doing globals()[variable] where variable is input from a web form is just asking for trouble. A: Another way to build the map between class names and classes: When defining classes, add an attribute to any class that you want to put in the lookup table, e.g.: class Foo: lookup = True def __init__(self, params): # and so on Once this is done, building the lookup map is: class_lookup = dict([(c, globals()[c]) for c in dir() if hasattr(globals()[c], "lookup")])
Best approach with dynamic classes using Python globals()
I'm working on a web application that will return a variable set of modules depending on user input. Each module is a Python class with a constructor that accepts a single parameter and has an '.html' property that contains the output. Pulling the class dynamically from the global namespace works: result = globals()[classname](param).html And it's certainly more succinct than: if classname == 'Foo': result = Foo(param).html elif classname == 'Bar': ... What is considered the best way to write this, stylistically? Are there risks or reasons not to use the global namespace?
[ "A flaw with this approach is that it may give the user the ability to to more than you want them to. They can call any single-parameter function in that namespace just by providing the name. You can help guard against this with a few checks (eg. isinstance(SomeBaseClass, theClass), but its probably better to avoid this approach. Another disadvantage is that it constrains your class placement. If you end up with dozens of such classes and decide to group them into modules, your lookup code will stop working.\nYou have several alternative options:\n\nCreate an explicit mapping:\n class_lookup = {'Class1' : Class1, ... }\n ...\n result = class_lookup[className](param).html\n\nthough this has the disadvantage that you have to re-list all the classes.\nNest the classes in an enclosing scope. Eg. define them within their own module, or within an outer class:\nclass Namespace(object):\n class Class1(object):\n ...\n class Class2(object):\n ...\n...\nresult = getattr(Namespace, className)(param).html\n\nYou do inadvertantly expose a couple of additional class variables here though (__bases__, __getattribute__ etc) - probably not exploitable, but not perfect.\nConstruct a lookup dict from the subclass tree. Make all your classes inherit from a single baseclass. When all classes have been created, examine all baseclasses and populate a dict from them. This has the advantage that you can define your classes anywhere (eg. in seperate modules), and so long as you create the registry after all are created, you will find them.\ndef register_subclasses(base):\n d={}\n for cls in base.__subclasses__():\n d[cls.__name__] = cls\n d.update(register_subclasses(cls))\n return d\n\nclass_lookup = register_subclasses(MyBaseClass)\n\nA more advanced variation on the above is to use self-registering classes - create a metaclass than automatically registers any created classes in a dict. This is probably overkill for this case - its useful in some \"user-plugins\" scenarios though.\n\n", "First of all, it sounds like you may be reinventing the wheel a little bit... most Python web frameworks (CherryPy/TurboGears is what I know) already include a way to dispatch requests to specific classes based on the contents of the URL, or the user input.\nThere is nothing wrong with the way that you do it, really, but in my experience it tends to indicate some kind of \"missing abstraction\" in your program. You're basically relying on the Python interpreter to store a list of the objects you might need, rather than storing it yourself.\nSo, as a first step, you might want to just make a dictionary of all the classes that you might want to call:\ndispatch = {'Foo': Foo, 'Bar': Bar, 'Bizbaz': Bizbaz}\n\nInitially, this won't make much of a difference. But as your web app grows, you may find several advantages: (a) you won't run into namespace clashes, (b) using globals() you may have security issues where an attacker can, in essence, access any global symbol in your program if they can find a way to inject an arbitrary classname into your program, (c) if you ever want to have classname be something other than the actual exact classname, using your own dictionary will be more flexible, (d) you can replace the dispatch dictionary with a more-flexible user-defined class that does database access or something like that if you find the need.\nThe security issues are particularly salient for a web app. 
Doing globals()[variable] where variable is input from a web form is just asking for trouble.\n", "Another way to build the map between class names and classes:\nWhen defining classes, add an attribute to any class that you want to put in the lookup table, e.g.:\nclass Foo:\n lookup = True\n def __init__(self, params):\n # and so on\n\nOnce this is done, building the lookup map is:\nclass_lookup = zip([(c, globals()[c]) for c in dir() if hasattr(globals()[c], \"lookup\")])\n\n" ]
[ 6, 4, 0 ]
[]
[]
[ "coding_style", "namespaces", "python" ]
stackoverflow_0000222133_coding_style_namespaces_python.txt
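A rough sketch of the self-registering variant mentioned in the first answer, assuming Python 2 metaclass syntax; the class and attribute names here are illustrative:

registry = {}

class AutoRegister(type):
    def __init__(cls, name, bases, attrs):
        super(AutoRegister, cls).__init__(name, bases, attrs)
        if bases != (object,):        # skip the abstract base itself
            registry[name] = cls

class Module(object):
    __metaclass__ = AutoRegister

class Foo(Module):
    def __init__(self, param):
        self.html = '<p>%s</p>' % param

result = registry['Foo']('some param').html

Every subclass of Module is recorded at class-creation time, so the lookup dict stays current without any explicit registration step.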
Q: How do you use the cursor for reading multiple files in database in python In python how do you read multiple files from a mysql database using the cursor or loop one by one and store the output in a separate table? A: I don't understand your question (what are files?, what's your table structure?), but here goes a simple sample: >>> import MySQLdb >>> conn = MySQLdb.connect(host="localhost", user="root", passwd="merlin", db="files") >>> cursor = conn.cursor() >>> cursor.execute("SELECT * FROM files") 5L >>> rows = cursor.fetchall() >>> cursor.execute("CREATE TABLE destination (file varchar(255))") 0L >>> for row in rows: ... cursor.execute("INSERT INTO destination VALUES (%s)" % row[0]) ... 1L 1L 1L 1L 1L A: Here is an example, assuming you have created the table you want to move to, with descriptive names: >>> import MySQLdb >>> conn = MySQLdb.connect(user='username', db='dbname') >>> cur = conn.cursor() >>> cur.execute('select files from old_table where conditions=met') >>> a = cur.fetchall() >>> for item in a: ... cur.execute('update new_table set new_field = %s' % item) # `item` should be tuple with one value, else use "(item,)" with comma
How do you use the cursor for reading multiple files in database in python
In python how do you read multiple files from a mysql database using the cursor or loop one by one and store the output in a separate table?
[ "I don't understand your question (what are files?, what's your table structure?), but here goes a simple sample:\n>>> import MySQLdb\n>>> conn = MySQLdb.connect(host=\"localhost\",\n user=\"root\",\n password=\"merlin\",\n db=\"files\")\n>>> cursor = conn.cursor()\n>>> cursor.execute(\"SELECT * FROM files\")\n5L\n>>> rows = cursor.fetchall()\n>>> cursor.execute(\"CREATE TABLE destination (file varchar(255))\")\n0L\n>>> for row in rows:\n... cursor.execute(\"INSERT INTO destination VALUES (%s)\" % row[0])\n...\n1L\n1L\n1L\n1L\n1L\n\n", "Here is an example, assuming you have created the table you want to move to, with descriptive names:\n>>> import MySQLdb\n>>> conn = MySQLdb.connect(user='username', db='dbname')\n>>> cur = conn.cursor()\n>>> cur.execute('select files from old_table where conditions=met')\n>>> a = cur.fetchall()\n>>> for item in a:\n... cur.execute('update new_table set new_field = %s' % item) # `item` should be tuple with one value, else use \"(item,)\" with comma\n\n" ]
[ 1, 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0000224771_mysql_python.txt
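The answers above interpolate values into the SQL with %, which breaks on quoting and invites injection; MySQLdb can instead pass the parameters separately, as in this sketch (table and column names are illustrative):

import MySQLdb

conn = MySQLdb.connect(user='username', db='dbname')
cur = conn.cursor()
cur.execute('SELECT file FROM old_table')
for (filename,) in cur.fetchall():
    # the driver escapes and quotes the value itself
    cur.execute('INSERT INTO new_table (file) VALUES (%s)', (filename,))
conn.commit()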
Q: Incoming poplib refactoring using windows python 2.3 Hi Guys could you please help me refactor this so that it is sensibly pythonic. import sys import poplib import string import StringIO, rfc822 import datetime import logging def _dump_pop_emails(self): self.logger.info("open pop account %s with username: %s" % (self.account[0], self.account[1])) self.popinstance = poplib.POP3(self.account[0]) self.logger.info(self.popinstance.getwelcome()) self.popinstance.user(self.account[1]) self.popinstance.pass_(self.account[2]) try: (numMsgs, totalSize) = self.popinstance.stat() for thisNum in range(1, numMsgs+1): (server_msg, body, octets) = self.popinstance.retr(thisNum) text = string.join(body, '\n') mesg = StringIO.StringIO(text) msg = rfc822.Message(mesg) name, email = msg.getaddr("From") emailpath = str(self._emailpath + self._inboxfolder + "\\" + email + "_" + msg.getheader("Subject") + ".eml") emailpath = self._replace_whitespace(emailpath) file = open(emailpath,"wb") file.write(text) file.close() self.popinstance.dele(thisNum) finally: self.logger.info(self.popinstance.quit()) def _replace_whitespace(self,name): name = str(name) return name.replace(" ", "_") Also in the _replace_whitespace method I would like to have some kind of cleaning routine which takes out all illegal characters which could cause processing issues. Basically I want to write the email to the inbox directory in a standard way. Am I doing something wrong here? A: I don't see anything significant wrong with that code -- is it behaving incorrectly, or are you just looking for general style guidelines? A few notes: Instead of logger.info ("foo %s %s" % (bar, baz)), use "foo %s %s", bar, baz. This avoids the overhead of string formatting if the message won't be printed. Put a try...finally around opening emailpath. Use '\n'.join (body), instead of string.join (body, '\n'). Instead of msg.getaddr("From"), just msg.From. A: This isn't refactoring (it doesn't need refactoring as far as I can see), but some suggestions: You should use the email package rather than rfc822. Replace rfc822.Message with email.Message, and use email.Utils.parseaddr(msg["From"]) to get the name and email address, and msg["Subject"] to get the subject. Use os.path.join to create the path. This: emailpath = str(self._emailpath + self._inboxfolder + "\\" + email + "_" + msg.getheader("Subject") + ".eml") Becomes: emailpath = os.path.join(self._emailpath + self._inboxfolder, email + "_" + msg.getheader("Subject") + ".eml") (If self._inboxfolder starts with a slash or self._emailpath ends with one, you could replace the first + with a comma also). It doesn't really hurt anything, but you should probably not use "file" as a variable name, since it shadows a built-in type (checkers like pylint or pychecker would warn you about that). If you're not using self.popinstance outside of this function (seems unlikely given that you connect and quit within the function), then there's no point making it an attribute of self. Just use "popinstance" by itself. Use xrange instead of range. Instead of just importing StringIO, do this: try: import cStringIO as StringIO except ImportError: import StringIO If this is a POP mailbox that can be accessed by more than one client at a time, you might want to put a try/except around the RETR call to continue on if you can't retrieve one message. As John said, use "\n".join rather than string.join, use try/finally to only close the file if it is opened, and pass the logging parameters separately. The one refactoring issue I could think of would be that you don't really need to parse the whole message, since you're just dumping a copy of the raw bytes, and all you want is the From and Subject headers. You could instead use popinstance.top(0) to get the headers, create the message (blank body) from that, and use that for the headers. Then do a full RETR to get the bytes. This would only be worth doing if your messages were large (and so parsing them took a long time). I would definitely measure before I made this optimisation. For your function to sanitise for the names, it depends how nice you want the names to be, and how certain you are that the email and subject make the filename unique (seems fairly unlikely). You could do something like: emailpath = "".join([c for c in emailpath if c in (string.letters + string.digits + "_ ")]) And you'd end up with just alphanumeric characters and the underscore and space, which seems like a readable set. Given that your filesystem (with Windows) is probably case insensitive, you could lowercase that also (add .lower() to the end). You could use emailpath.translate if you want something more complex. A: Further to my comment on John's answer I found out what the issue was, there were illegal characters in the name field and Subject field, which caused python to get the hiccups, as it tried to write the email as a directory, after seeing ":" and "/". John's point number 4 doesn't work! so I left it as before. Also is point no 1 correct, have I implemented your suggestion correctly? def _dump_pop_emails(self): self.logger.info("open pop account %s with username: %s", self.account[0], self.account[1]) self.popinstance = poplib.POP3(self.account[0]) self.logger.info(self.popinstance.getwelcome()) self.popinstance.user(self.account[1]) self.popinstance.pass_(self.account[2]) try: (numMsgs, totalSize) = self.popinstance.stat() for thisNum in range(1, numMsgs+1): (server_msg, body, octets) = self.popinstance.retr(thisNum) text = '\n'.join(body) mesg = StringIO.StringIO(text) msg = rfc822.Message(mesg) name, email = msg.getaddr("From") emailpath = str(self._emailpath + self._inboxfolder + "\\" + self._sanitize_string(email + " " + msg.getheader("Subject") + ".eml")) emailpath = self._replace_whitespace(emailpath) print emailpath file = open(emailpath,"wb") file.write(text) file.close() self.popinstance.dele(thisNum) finally: self.logger.info(self.popinstance.quit()) def _replace_whitespace(self,name): name = str(name) return name.replace(" ", "_") def _sanitize_string(self,name): illegal_chars = ":", "/", "\\" name = str(name) for item in illegal_chars: name = name.replace(item, "_") return name
Incoming poplib refactoring using windows python 2.3
Hi Guys could you please help me refactor this so that it is sensibly pythonic. import sys import poplib import string import StringIO, rfc822 import datetime import logging def _dump_pop_emails(self): self.logger.info("open pop account %s with username: %s" % (self.account[0], self.account[1])) self.popinstance = poplib.POP3(self.account[0]) self.logger.info(self.popinstance.getwelcome()) self.popinstance.user(self.account[1]) self.popinstance.pass_(self.account[2]) try: (numMsgs, totalSize) = self.popinstance.stat() for thisNum in range(1, numMsgs+1): (server_msg, body, octets) = self.popinstance.retr(thisNum) text = string.join(body, '\n') mesg = StringIO.StringIO(text) msg = rfc822.Message(mesg) name, email = msg.getaddr("From") emailpath = str(self._emailpath + self._inboxfolder + "\\" + email + "_" + msg.getheader("Subject") + ".eml") emailpath = self._replace_whitespace(emailpath) file = open(emailpath,"wb") file.write(text) file.close() self.popinstance.dele(thisNum) finally: self.logger.info(self.popinstance.quit()) def _replace_whitespace(self,name): name = str(name) return name.replace(" ", "_") Also in the _replace_whitespace method I would like to have some kind of cleaning routine which takes out all illegal characters which could cause processing issues. Basically I want to write the email to the inbox directory in a standard way. Am I doing something wrong here?
[ "I don't see anything significant wrong with that code -- is it behaving incorrectly, or are you just looking for general style guidelines?\nA few notes:\n\nInstead of logger.info (\"foo %s %s\" % (bar, baz)), use \"foo %s %s\", bar, baz. This avoids the overhead of string formatting if the message won't be printed.\nPut a try...finally around opening emailpath.\nUse '\\n'.join (body), instead of string.join (body, '\\n').\nInstead of msg.getaddr(\"From\"), just msg.From.\n\n", "This isn't refactoring (it doesn't need refactoring as far as I can see), but some suggestions:\nYou should use the email package rather than rfc822. Replace rfc822.Message with email.Message, and use email.Utils.parseaddr(msg[\"From\"]) to get the name and email address, and msg[\"Subject\"] to get the subject.\nUse os.path.join to create the path. This:\nemailpath = str(self._emailpath + self._inboxfolder + \"\\\\\" + email + \"_\" + msg.getheader(\"Subject\") + \".eml\")\n\nBecomes:\nemailpath = os.path.join(self._emailpath + self._inboxfolder, email + \"_\" + msg.getheader(\"Subject\") + \".eml\")\n\n(If self._inboxfolder starts with a slash or self._emailpath ends with one, you could replace the first + with a comma also).\nIt doesn't really hurt anything, but you should probably not use \"file\" as a variable name, since it shadows a built-in type (checkers like pylint or pychecker would warn you about that).\nIf you're not using self.popinstance outside of this function (seems unlikely given that you connect and quit within the function), then there's no point making it an attribute of self. Just use \"popinstance\" by itself.\nUse xrange instead of range.\nInstead of just importing StringIO, do this:\ntry:\n import cStringIO as StringIO\nexcept ImportError:\n import StringIO\n\nIf this is a POP mailbox that can be accessed by more than one client at a time, you might want to put a try/except around the RETR call to continue on if you can't retrieve one message.\nAs John said, use \"\\n\".join rather than string.join, use try/finally to only close the file if it is opened, and pass the logging parameters separately.\nThe one refactoring issue I could think of would be that you don't really need to parse the whole message, since you're just dumping a copy of the raw bytes, and all you want is the From and Subject headers. You could instead use popinstance.top(0) to get the headers, create the message (blank body) from that, and use that for the headers. Then do a full RETR to get the bytes. This would only be worth doing if your messages were large (and so parsing them took a long time). I would definitely measure before I made this optimisation.\nFor your function to sanitise for the names, it depends how nice you want the names to be, and how certain you are that the email and subject make the filename unique (seems fairly unlikely). You could do something like:\nemailpath = \"\".join([c for c in emailpath if c in (string.letters + string.digits + \"_ \")])\n\nAnd you'd end up with just alphanumeric characters and the underscore and space, which seems like a readable set. Given that your filesystem (with Windows) is probably case insensitive, you could lowercase that also (add .lower() to the end). 
You could use emailpath.translate if you want something more complex.\n", "Further to my comment on John's answer\nI found out what the issue was, there were illegal characters in the name field and Subject field, which caused python to get the hiccups, as it tried to write the email as a directory, after seeing \":\" and \"/\".\nJohn point number 4 doesnt work! so I left it as before.\nAlso is point no 1 correct, have I implemented your suggestion correctly?\ndef _dump_pop_emails(self):\n self.logger.info(\"open pop account %s with username: %s\", self.account[0], self.account[1])\n self.popinstance = poplib.POP3(self.account[0])\n self.logger.info(self.popinstance.getwelcome()) \n self.popinstance.user(self.account[1])\n self.popinstance.pass_(self.account[2])\n try:\n (numMsgs, totalSize) = self.popinstance.stat()\n for thisNum in range(1, numMsgs+1):\n (server_msg, body, octets) = self.popinstance.retr(thisNum)\n text = '\\n'.join(body)\n mesg = StringIO.StringIO(text) \n msg = rfc822.Message(mesg)\n name, email = msg.getaddr(\"From\")\n emailpath = str(self._emailpath + self._inboxfolder + \"\\\\\" + self._sanitize_string(email + \" \" + msg.getheader(\"Subject\") + \".eml\"))\n emailpath = self._replace_whitespace(emailpath)\n print emailpath\n file = open(emailpath,\"wb\")\n file.write(text)\n file.close()\n self.popinstance.dele(thisNum)\n finally:\n self.logger.info(self.popinstance.quit())\n\ndef _replace_whitespace(self,name):\n name = str(name)\n return name.replace(\" \", \"_\") \n\ndef _sanitize_string(self,name):\n illegal_chars = \":\", \"/\", \"\\\\\"\n name = str(name)\n for item in illegal_chars:\n name = name.replace(item, \"_\")\n return name\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "email", "poplib", "python", "refactoring" ]
stackoverflow_0000224660_email_poplib_python_refactoring.txt
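A short sketch of the email-package substitution suggested in the second answer (the email package ships with Python 2.2 and later); text here is assumed to be the joined message body built in the loop above:

import email
from email.Utils import parseaddr

msg = email.message_from_string(text)
name, addr = parseaddr(msg['From'])       # splits "Name <addr>" cleanly
subject = msg['Subject'] or 'no-subject'  # guard against a missing header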
Q: Parsing different date formats from feedparser in python? I'm trying to get the dates from entries in two different RSS feeds through feedparser. Here is what I'm doing: import feedparser as fp reddit = fp.parse("http://www.reddit.com/.rss") cc = fp.parse("http://contentconsumer.com/feed") print reddit.entries[0].date print cc.entries[0].date And here's how they come out: 2008-10-21T22:23:28.033841+00:00 Wed, 15 Oct 2008 10:06:10 +0000 I want to get to the point where I can find out which is newer easily. I've tried using the datetime module of Python and searching through the feedparser documentation, but I can't get past this problem. Any help would be much appreciated. A: Parsing of dates is a pain with RSS feeds in-the-wild, and that's where feedparser can be a big help. If you use the *_parsed properties (like updated_parsed), feedparser will have done the work and will return a 9-tuple Python date in UTC. See http://packages.python.org/feedparser/date-parsing.html for more gory details.
Parsing different date formats from feedparser in python?
I'm trying to get the dates from entries in two different RSS feeds through feedparser. Here is what I'm doing: import feedparser as fp reddit = fp.parse("http://www.reddit.com/.rss") cc = fp.parse("http://contentconsumer.com/feed") print reddit.entries[0].date print cc.entries[0].date And here's how they come out: 2008-10-21T22:23:28.033841+00:00 Wed, 15 Oct 2008 10:06:10 +0000 I want to get to the point where I can find out which is newer easily. I've tried using the datetime module of Python and searching through the feedparser documentation, but I can't get past this problem. Any help would be much appreciated.
[ "Parsing of dates is a pain with RSS feeds in-the-wild, and that's where feedparser can be a big help.\nIf you use the *_parsed properties (like updated_parsed), feedparser will have done the work and will return a 9-tuple Python date in UTC.\nSee http://packages.python.org/feedparser/date-parsing.html for more gory details.\n" ]
[ 17 ]
[]
[]
[ "datetime", "feedparser", "parsing", "python", "rss" ]
stackoverflow_0000225274_datetime_feedparser_parsing_python_rss.txt
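A minimal comparison using the parsed forms, which sidesteps the format mismatch entirely: the *_parsed values are time.struct_time 9-tuples in UTC, and tuples compare field by field, so max() picks the newer entry directly:

import feedparser as fp

reddit = fp.parse("http://www.reddit.com/.rss")
cc = fp.parse("http://contentconsumer.com/feed")

# struct_time instances compare chronologically, year field first
newest = max(reddit.entries[0].updated_parsed, cc.entries[0].updated_parsed)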
Q: Cursor event handling in python+Tkinter I'm writing code in which I'd like to be able to generate an event when the user changes the focus of the cursor from an Entry widget to anywhere, for example another entry widget, a button... So far I have only come up with the idea of binding to TAB and mouse click, although if I bind the mouse click to the Entry widget I only get mouse events when inside the Entry widget. How can I generate events for when a widget loses cursor focus? Thanks in advance!
Cursor event handling in python+Tkinter
I'm writing code in which I'd like to be able to generate an event when the user changes the focus of the cursor from an Entry widget to anywhere, for example another entry widget, a button... So far I have only come up with the idea of binding to TAB and mouse click, although if I bind the mouse click to the Entry widget I only get mouse events when inside the Entry widget. How can I generate events for when a widget loses cursor focus? Thanks in advance!
[ "The events <FocusIn> and <FocusOut> are what you want. Run the following example and you'll see you get focus in and out bindings whether you click or press tab (or shift-tab) when focus is in one of the entry widgets.\nfrom Tkinter import *\n\ndef main():\n global text\n\n root=Tk()\n\n l1=Label(root,text=\"Field 1:\")\n l2=Label(root,text=\"Field 2:\")\n t1=Text(root,height=4,width=40)\n e1=Entry(root)\n e2=Entry(root)\n l1.grid(row=0,column=0,sticky=\"e\")\n e1.grid(row=0,column=1,sticky=\"ew\")\n l2.grid(row=1,column=0,sticky=\"e\")\n e2.grid(row=1,column=1,sticky=\"ew\")\n t1.grid(row=2,column=0,columnspan=2,sticky=\"nw\")\n\n root.grid_columnconfigure(1,weight=1)\n root.grid_rowconfigure(2,weight=1)\n\n root.bind_class(\"Entry\",\"<FocusOut>\",focusOutHandler)\n root.bind_class(\"Entry\",\"<FocusIn>\",focusInHandler)\n\n text = t1\n root.mainloop()\n\ndef focusInHandler(event):\n text.insert(\"end\",\"FocusIn %s\\n\" % event.widget)\n text.see(\"end\")\n\ndef focusOutHandler(event):\n text.insert(\"end\",\"FocusOut %s\\n\" % event.widget)\n text.see(\"end\")\n\n\nif __name__ == \"__main__\":\n main();\n\n", "This isn't specific to tkinter, and it's not focus based, but I got an answer to a similar question here:\nDetecting Mouse clicks in windows using python\nI haven't done any tkinter in quite a while, but there seems to be \"FocusIn\" and \"FocusOut\" events. You might be able to bind and track these to solve your issue.\nFrom:\nhttp://effbot.org/tkinterbook/tkinter-events-and-bindings.htm\n" ]
[ 5, 0 ]
[]
[]
[ "events", "mouse_cursor", "python", "tkinter" ]
stackoverflow_0000210522_events_mouse_cursor_python_tkinter.txt
Q: Help with event in python Entry widget I'm writing some code in python and I'm having trouble when trying to retrieve content of an Entry widget. The thing is: I want to limit the characters that can be typed, so I'm trying to clear the Entry widget when I reach the specific number of characters (2 in this case), but it looks like I always miss the last typed character. I added the lost character in a print to show. Here's the sample code: from Tkinter import * class sampleFrame: def __init__(self, master): self.__frame = Frame(master) self.__frame.pack() def get_frame(self): return self.__frame class sampleClass: def __init__(self, master): self.__aLabel = Label(master,text="aLabel", width=10) self.__aLabel.pack(side=LEFT) self.__aEntry = Entry (master, width=2) self.__aEntry.bind('<Key>', lambda event: self.callback(event, self.__aEntry)) self.__aEntry.pack(side=LEFT) def callback(self, event, widgetName): self.__value = widgetName.get()+event.char print self.__value if len(self.__value)>2: widgetName.delete(2,4) root = Tk() aSampleFrame = sampleFrame(root) aSampleClass = sampleClass(aSampleFrame.get_frame()) root.mainloop() Any help will be much appreciated! Thanks in advance A: At first, after you do the deletion, the event goes on with its normal processing, i.e. the character gets inserted. You need to signal to Tkinter that the event should be ignored. So in your code above, add the marked line: if len(self.__value) > 2: widgetName.delete(2,4) return "break" # add this line On the other hand, why do you go through the lambda? An event has a .widget attribute which you can use. So you can change your code into: self.__aEntry.bind('<Key>', self.callback) # ※ here! self.__aEntry.pack(side=LEFT) def callback(self, event): self.__value = event.widget.get()+event.char # ※ here! print self.__value if len(self.__value)>2: event.widget.delete(2,4) # ※ here! return "break" All the changed lines are marked with "here!" A: To be a bit more specific, Tk widgets have what are called "bindtags". When an event is processed, each bindtag on the widget is considered in order to see if it has a binding. A widget by default will have as its bindtags the widget, the widget class, the root widget, and "all". Thus, bindings to the widget will occur before the default bindings. Once your binding has been processed you can prevent any further bindtags from being considered by returning a "break". The ramifications are this: if you make a binding on the widget, the class, root window and "all" bindings may fire as well. In addition, any binding you attach to the widget fires before the class binding which is where the default behavior (eg: the insertion of a character) happens. It is important to be aware of that in situations where you may want to handle the event after the default behavior rather than before.
Help with event in python Entry widget
I'm writing some code in python and I'm having trouble when trying to retrieve content of an Entry widget. The thing is: I want to limit the characters that can be typed, so I'm trying to clear the Entry widget when I reach the specific number of characters (2 in this case), but it looks like I always miss the last typed character. I added the lost character in a print to show. Here's the sample code: from Tkinter import * class sampleFrame: def __init__(self, master): self.__frame = Frame(master) self.__frame.pack() def get_frame(self): return self.__frame class sampleClass: def __init__(self, master): self.__aLabel = Label(master,text="aLabel", width=10) self.__aLabel.pack(side=LEFT) self.__aEntry = Entry (master, width=2) self.__aEntry.bind('<Key>', lambda event: self.callback(event, self.__aEntry)) self.__aEntry.pack(side=LEFT) def callback(self, event, widgetName): self.__value = widgetName.get()+event.char print self.__value if len(self.__value)>2: widgetName.delete(2,4) root = Tk() aSampleFrame = sampleFrame(root) aSampleClass = sampleClass(aSampleFrame.get_frame()) root.mainloop() Any help will be much appreciated! Thanks in advance
[ "At first, after you do the deletion, the event goes on with its normal processing, i.e. the character gets inserted. You need to signal to Tkinter that the event should be ignored.\nSo in your code above, add the marked line:\nif len(self.__value) > 2:\n widgetName.delete(2,4)\n return \"break\" # add this line\n\nOn the other hand, why do you go through the lambda? An event has a .widget attribute which you can use. So you can change your code into:\n self.__aEntry.bind('<Key>', self.callback) # ※ here!\n self.__aEntry.pack(side=LEFT)\n\ndef callback(self, event):\n self.__value = event.widget.get()+event.char # ※ here!\n print self.__value\n if len(self.__value)>2:\n event.widget.delete(2,4) # ※ here!\n return \"break\"\n\nAll the changed lines are marked with \"here!\"\n", "To be a bit more specific, Tk widgets have what are called \"bindtags\". When an event is processed, each bindtag on the widget is considered in order to see if it has a binding. A widget by default will have as its bindtags the widget, the widget class, the root widget, and \"all\". Thus, bindings to the widget will occur before the default bindings. Once your binding has been processed you can prevent any further bindtags from being considered by returning a \"break\".\nThe ramifications are this: if you make a binding on the widget, the class, root window and \"all\" bindings may fire as well. In addition, any binding you attach to the widget fires before the class binding which is where the default behavior (eg: the insertion of a character) happens. It is important to be aware of that in situations where you may want to handle the event after the default behavior rather than before.\n" ]
[ 3, 1 ]
[]
[]
[ "events", "python", "tkinter", "widget" ]
stackoverflow_0000206916_events_python_tkinter_widget.txt
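An alternative sketch using Tk's built-in key validation, which rejects an oversized edit before it happens and so avoids the delete/"break" handling entirely (available in Tk 8.3 and later):

from Tkinter import *

root = Tk()

def limit(new_text):
    return len(new_text) <= 2            # allow the edit only if the result fits

vcmd = (root.register(limit), '%P')      # %P is the value the entry would have
entry = Entry(root, width=2, validate='key', validatecommand=vcmd)
entry.pack()
root.mainloop()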
Q: Get Bound Event Handler in Tkinter After binding a method to an event of a Tkinter element, is there a way to get the method back? >>> root = Tkinter.Tk() >>> frame = Tkinter.Frame(root, width=100, height=100) >>> frame.bind('<Button-1>', lambda e: pprint('Click')) # function needed >>> frame.pack() >>> bound_event_method = frame.??? A: The standard way to do this in Tcl/Tk is trivial: you use the same bind command but without the final argument. bind .b <Button-1> doSomething puts "the function is [bind .b <Button-1>]" => the function is doSomething You can do something similar with Tkinter but the results are, unfortunately, not quite as usable: e1.bind("<Button-1>",doSomething) e1.bind("<Button-1>") => 'if {"[-1208974516doSomething %# %b %f %h %k %s %t %w %x %y %A %E %K %N %W %T %X %Y %D]" == "break"} break\n' Obviously, Tkinter is doing a lot of juggling below the covers. One solution would be to write a little helper procedure that remembers this for you: def bindWidget(widget,event,func=None): '''Set or retrieve the binding for an event on a widget''' if not widget.__dict__.has_key("bindings"): widget.bindings=dict() if func: widget.bind(event,func) widget.bindings[event] = func else: return(widget.bindings.setdefault(event,None)) You would use it like this: e1=Entry() print "before, binding for <Button-1>: %s" % bindWidget(e1,"<Button-1>") bindWidget(e1,"<Button-1>",doSomething) print " after, binding for <Button-1>: %s" % bindWidget(e1,"<Button-1>") When I run the above code I get: before, binding for <Button-1>: None after, binding for <Button-1>: <function doSomething at 0xb7f2e79c> As a final caveat, I don't use Tkinter much so I'm not sure what the ramifications are of dynamically adding an attribute to a widget instance. It seems to be harmless, but if not you can always create a global dictionary to keep track of the bindings. A: The associated call to do that for the tk C API would be Tcl_GetCommandInfo which places information about the command in the Tcl_CmdInfo structure pointed to by infoPtr However this function is not used anywhere in _tkinter.c which is the binding for tk used by python through Tkinter.py. Therefore it is impossible to get the bound function out of tkinter. You need to remember that function yourself. A: Doesn't appear to be... why not just save it yourself if you're going to need it, or use a non-anonymous function? Also, your code doesn't work as written: lambda functions can only contain expressions, not statements, so print is a no-go (this will change in Python 3.0 when print() becomes a function).
Get Bound Event Handler in Tkinter
After binding a method to an event of a Tkinter element, is there a way to get the method back? >>> root = Tkinter.Tk() >>> frame = Tkinter.Frame(root, width=100, height=100) >>> frame.bind('<Button-1>', lambda e: pprint('Click')) # function needed >>> frame.pack() >>> bound_event_method = frame.???
[ "The standard way to do this in Tcl/Tk is trivial: you use the same bind command but without the final argument. \nbind .b <Button-1> doSomething\nputs \"the function is [bind .b <Button-1>]\"\n=> the function is doSomething\n\nYou can do something similar with Tkinter but the results are, unfortunately, not quite as usable:\ne1.bind(\"<Button-1>\",doSomething)\ne1.bind(\"<Button-1>\")\n=> 'if {\"[-1208974516doSomething %# %b %f %h %k %s %t %w %x %y %A %E %K %N %W %T %X %Y %D]\" == \"break\"} break\\n'\n\nObviously, Tkinter is doing a lot of juggling below the covers. One solution would be to write a little helper procedure that remembers this for you:\ndef bindWidget(widget,event,func=None):\n '''Set or retrieve the binding for an event on a widget'''\n\n if not widget.__dict__.has_key(\"bindings\"): widget.bindings=dict()\n\n if func:\n widget.bind(event,func)\n widget.bindings[event] = func\n else:\n return(widget.bindings.setdefault(event,None))\n\nYou would use it like this:\ne1=Entry()\nprint \"before, binding for <Button-1>: %s\" % bindWidget(e1,\"<Button-1>\")\nbindWidget(e1,\"<Button-1>\",doSomething)\nprint \" after, binding for <Button-1>: %s\" % bindWidget(e1,\"<Button-1>\")\n\nWhen I run the above code I get:\nbefore, binding for <Button-1>: None\n after, binding for <Button-1>: <function doSomething at 0xb7f2e79c>\n\nAs a final caveat, I don't use Tkinter much so I'm not sure what the ramifications are of dynamically adding an attribute to a widget instance. It seems to be harmless, but if not you can always create a global dictionary to keep track of the bindings.\n", "The associated call to do that for the tk C API would be Get_GetCommandInfo which\n\nplaces information about the command\n in the Tcl_CmdInfo structure pointed\n to by infoPtr\n\nHowever this function is not used anywhere in _tkinter.c which is the binding for tk used by python trough Tkinter.py.\nTherefore it is impossible to get the bound function out of tkinter. You need to remember that function yourself.\n", "Doesn't appear to be... why not just save it yourself if you're going to need it, or use a non-anonymous function?\nAlso, your code doesn't work as written: lambda functions can only contain expressions, not statements, so print is a no-go (this will change in Python 3.0 when print() becomes a function).\n" ]
[ 3, 2, 0 ]
[]
[]
[ "events", "python", "tkinter", "user_interface" ]
stackoverflow_0000138029_events_python_tkinter_user_interface.txt
Q: How can I generate a report file (ODF, PDF) from a django view I would like to generate a report file from a view&template in django. Preferred file formats would be OpenOffice/ODF or PDF. What is the best way to do this? I do want to reuse the page layout defined in the template, possibly by redefining some blocks in a derived template. Ideally, the report should be inserted into an existing template file so I can provide the overall page layout, headers and footer in the generated output format. A: pisa/xhtml2pdf should get you covered for PDF. It even includes an example Django project. A: Try ReportLab for PDF output: http://www.reportlab.org/
How can I generate a report file (ODF, PDF) from a django view
I would like to generate a report file from a view&template in django. Preferred file formats would be OpenOffice/ODF or PDF. What is the best way to do this? I do want to reuse the page layout defined in the template, possibly by redefining some blocks in a derived template. Ideally, the report should be inserted into an existing template file so I can provide the overall page layout, headers and footer in the generated output format.
[ "pisa/xhtml2pdf should get you covered for PDF. It even includes an example Django project.\n", "Try ReportLab for PDF output:\nhttp://www.reportlab.org/\n" ]
[ 4, 3 ]
[]
[]
[ "django", "pdf", "pdf_generation", "python" ]
stackoverflow_0000224796_django_pdf_pdf_generation_python.txt
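A minimal sketch of the ReportLab route in a Django view of that era (the view name is illustrative); ReportLab's canvas writes straight into the file-like HttpResponse:

from reportlab.pdfgen import canvas
from django.http import HttpResponse

def report(request):
    response = HttpResponse(mimetype='application/pdf')
    response['Content-Disposition'] = 'attachment; filename=report.pdf'
    p = canvas.Canvas(response)        # HttpResponse supports write()
    p.drawString(100, 750, "Hello report")
    p.showPage()
    p.save()
    return response

Page layout, headers, and footers then come from ReportLab's own document templates rather than from the Django HTML template, which pisa/xhtml2pdf handles instead if HTML reuse is the priority.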
Q: Solving an inequality for minimum value I'm working on a programming problem which boils down to a set of an equation and inequality: x[0]*a[0] + x[1]*a[1] + ... x[n]*a[n] >= D x[0]*b[0] + x[1]*b[1] + ... x[n]*b[n] = C I want to solve for the values of X that will give the absolute minimum of C, given the input D and lists A and B consisting of a[0 - n] and b[0 - n]. I'm doing the problem at the moment in Python, but the problem in general is language-agnostic. CLARIFICATION UPDATE: the coefficients x[0 - n] are restricted to the set of non-negative integers. A: This looks like a linear programming problem. The Simplex algorithm normally gives good results. It basically walks the boundaries of the subspace delimited by the inequalities, looking for the optimum. Think of it visually: each inequality denotes a half-space, a plane in n-dimensional space that you have to be on the right side of. Your utility function is what you're trying to optimize. If the space is closed, the optimum is going to be at one of the apexes of the closed space; if it's open, it's possible that the optimum is infinite. A: Have a look at the wikipedia entry on linear programming. The integer programming section is what you're searching for (the constraint of the x[i] being integers is not an easy one). Search python libraries for branch&bound, branch&cut and the like (I don't think they have been implemented in scipy yet). Other interesting links: GNU Linear Programming Kit IBM article on GLPK A: It looks like this is a linear programming problem. A: You might want to use Matlab or Mathematica or look at code from Numerical Recipes in C for ideas on how to implement minimization functions. The link provided is to the 1992 version of the book. Newer versions are available at Amazon. A: This company has a tool to do that sort of thing.
Solving an inequality for minimum value
I'm working on a programming problem which boils down to a set of an equation and inequality: x[0]*a[0] + x[1]*a[1] + ... x[n]*a[n] >= D x[0]*b[0] + x[1]*b[1] + ... x[n]*b[n] = C I want to solve for the values of X that will give the absolute minimum of C, given the input D and lists A and B consisting of a[0 - n] and b[0 - n]. I'm doing the problem at the moment in Python, but the problem in general is language-agnostic. CLARIFICATION UPDATE: the coefficients x[0 - n] are restricted to the set of non-negative integers.
[ "This looks like a linear programming problem. The Simplex algorithm normally gives good results. It basically walks the boundaries of the subspace delimited by the inequalities, looking for the optimum.\nThink of it visually: each inequality denotes a half-space, a plane in n-dimensional space that you have to be on the right side of. Your utility function is what you're trying to optimize. If the space is closed, the optimum is going to be at one of the apexes of the closed space; if it's open, it's possible that the optimum is infinite.\n", "Have a look at the wikipedia entry on linear programming. The integer programming section is what you're searching for (the constraint of the x[i] being integers is not an easy one). \nSearch python libraries for branch&bound, branch&cut and the like (I don't think they have been implemented in scipy yet).\nOther interesting links:\n\nGNU Linear Programming Kit\nIBM article on GLPK\n\n", "It looks like this is a linear programming problem.\n", "You might want to use Matlab or Mathematica or look at code from Numerical Recipes in C for ideas on how to implement minimization functions. The link provided is to the 1992 version of the book. Newer versions are available at Amazon.\n", "This company has a tool to do that sort of thing.\n" ]
[ 11, 3, 2, 1, 0 ]
[]
[]
[ "equation", "inequality", "language_agnostic", "linear_programming", "python" ]
stackoverflow_0000227282_equation_inequality_language_agnostic_linear_programming_python.txt
Q: What's a good resource for learning CGI programming in Python? I need to write a browser interface for an application running embedded on a single board computer (Gumstix Verdex for anyone who's interested), so I won't be able to use any web frameworks due to space and processor constraints (and availability for the environment I'm running in). I'm limited to the core Python and cgi modules to create pages that will communicate with a C++ application. Can anyone recommend a good resource (web or book form, but books are preferred) for learning CGI programming in Python? What I need the application to do is fairly simple. I have a C++ program running on the same device and I need to create a browser based user interface so the configuration settings of that application can be changed. The UI needs to communicate with the C++ application, where the final data validation will be done. Preliminary validation can be done on the UI using Javascript, then again on the server using Python, but the final validation has to be done in the application itself, since it's getting its initial config from a file anyway. The configuration data takes all forms (booleans, ints, floats, and strings). A: One of the biggest resources for CGI programming is the CGI homepage. Once you're done with that, familiarizing yourself with the cgi and cgitb modules should be your next task. But don't discount learning WSGI (libref) and using a CGI-to-WSGI adaptor such as flup. A: http://www.cs.virginia.edu/~lab2q/ http://www.devshed.com/c/a/Python/Writing-CGI-Programs-in-Python/ http://gnosis.cx/publish/programming/feature_5min_python.html All found via google... And take a look at pyblosxom as well: http://pyblosxom.svn.sourceforge.net/viewvc/pyblosxom/ it's a weblog system written in python, uses CGI. A: What I don't understand is why you insist on CGI, because that's a Common Gateway Interface meant to be used in conjunction with a webserver like apache, which you surely do not have on that device. I would suggest you use wsgiref.simple_server which is a single threaded built-in webserver shipped with python 2.5 and up (if you have 2.4 or below you can d/l wsgiref from pypi, it is a pure python package). That way you can also sidestep messy CGI programming and write a wsgi application: from wsgiref.simple_server import make_server def application(environ, start_response): start_response('200 OK', [ ('Content-Type', 'text/plain'), ]) return ['Hello World!'] httpd = make_server('', 8000, application) httpd.serve_forever()
What's a good resource for learning CGI programming in Python?
I need to write a browser interface for an application running embedded on a single board computer (Gumstix Verdex for anyone who's interested), so I won't be able to use any web frameworks due to space and processor constraints (and availability for the environment I'm running in). I'm limited to the core Python and cgi modules to create pages that will communicate with a C++ application. Can anyone recommend a good resource (web or book form, but books are preferred) for learning CGI programming in Python? What I need the application to do is fairly simple. I have a C++ program running on the same device and I need to create a browser based user interface so the configuration settings of that application can be changed. The UI needs to communicate with the C++ application, where the final data validation will be done. Preliminary validation can be done on the UI using Javascript, then again on the server using Python, but the final validation has to be done in the application itself, since it's getting its initial config from a file anyway. The configuration data takes all forms (booleans, ints, floats, and strings).
[ "One of the biggest resources for CGI programming is the CGI homepage. Once you're done with that, familiarizing yourself with the cgi and cgitb modules should be your next task.\nBut don't discount learning WSGI (libref) and using a CGI-to-WSGI adaptor such as flup.\n", "\nhttp://www.cs.virginia.edu/~lab2q/\nhttp://www.devshed.com/c/a/Python/Writing-CGI-Programs-in-Python/\nhttp://gnosis.cx/publish/programming/feature_5min_python.html\n\nAll found via google...\nAnd take a look at pyblosxom as well: http://pyblosxom.svn.sourceforge.net/viewvc/pyblosxom/ it's a weblog system written in python, uses CGI.\n", "What I don't understand is why you insist on CGI, because that's a Common Gateway Interface meant to be used in conjunction with a webserver like apache, which you surely do not have on that device.\nI would suggest you use wsgiref.simple_server which is a single threaded buildin webserver shipped with python 2.5 and up (if you have 2.4 or below you can d/l wsgiref from pypi, it is a pure python package). That way you can also sidestep messy CGI programming and write a wsgi application:\nfrom wsgiref.simple_server import make_server\n\ndef application(environ, start_response):\n start_response('200 OK', [\n ('Content-Type', 'text/plain'),\n ])\n return ['Hello World!']\n\nhttpd = make_server('', 8000, application)\nhttpd.serve_forever()\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "cgi", "python" ]
stackoverflow_0000227318_cgi_python.txt
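For the cgi-module route the answers above recommend, a minimal form-handling script might look like the sketch below. The field names and defaults are invented; cgi.FieldStorage and cgitb are the standard-library pieces mentioned in the first answer (both were removed from the stdlib in Python 3.13, so this matches the era of the thread rather than current Python).

#!/usr/bin/env python
# Minimal CGI sketch; meant to be run by a web server, not by hand.
import cgi
import cgitb
cgitb.enable()                    # show tracebacks in the browser while debugging

form = cgi.FieldStorage()
threshold = form.getfirst("threshold", "0.5")   # values arrive as strings
verbose = form.getfirst("verbose", "false")     # validate before trusting them

print("Content-Type: text/plain")
print("")
print("threshold=%s verbose=%s" % (threshold, verbose))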
Q: Python Find Question I am using Python to extract the filename from a link using rfind like below: url = "http://www.google.com/test.php" print url[url.rfind("/") +1 : ] This works ok with links without a / at the end of them and returns "test.php". I have encountered links with / at the end like so "http://www.google.com/test.php/". I am having trouble getting the page name when there is a "/" at the end, can anyone help? Cheers A: Just removing the slash at the end won't work, as you can probably have a URL that looks like this: http://www.google.com/test.php?filepath=tests/hey.xml ...in which case you'll get back "hey.xml". Instead of manually checking for this, you can use urlparse to get rid of the parameters, then do the check other people suggested: from urlparse import urlparse url = "http://www.google.com/test.php?something=heyharr/sir/a.txt" f = urlparse(url)[2].rstrip("/") print f[f.rfind("/")+1:] A: Use [r]strip to remove trailing slashes: url.rstrip('/').rsplit('/', 1)[-1] If a wider range of possible URLs is possible, including URLs with ?queries, #anchors or without a path, do it properly with urlparse: path= urlparse.urlparse(url).path return path.rstrip('/').rsplit('/', 1)[-1] or '(root path)' A: Filenames with a slash at the end are technically still path definitions and indicate that the index file is to be read. If you actually have one that ends in test.php/, I would consider that an error. In any case, you can strip the / from the end before running your code as follows: url = url.rstrip('/') A: There is a library called urlparse that will parse the url for you, but still doesn't remove the / at the end so one of the above will be the best option A: Just for fun, you can use a Regexp: import re print re.search('/([^/]+)/?$', url).group(1)
Python Find Question
I am using Python to extract the filename from a link using rfind like below: url = "http://www.google.com/test.php" print url[url.rfind("/") +1 : ] This works ok with links without a / at the end of them and returns "test.php". I have encountered links with / at the end like so "http://www.google.com/test.php/". I am having trouble getting the page name when there is a "/" at the end, can anyone help? Cheers
[ "Just removing the slash at the end won't work, as you can probably have a URL that looks like this:\nhttp://www.google.com/test.php?filepath=tests/hey.xml\n\n...in which case you'll get back \"hey.xml\". Instead of manually checking for this, you can use urlparse to get rid of the parameters, then do the check other people suggested:\nfrom urlparse import urlparse\nurl = \"http://www.google.com/test.php?something=heyharr/sir/a.txt\"\nf = urlparse(url)[2].rstrip(\"/\")\nprint f[f.rfind(\"/\")+1:]\n\n", "Use [r]strip to remove trailing slashes:\nurl.rstrip('/').rsplit('/', 1)[-1]\n\nIf a wider range of possible URLs is possible, including URLs with ?queries, #anchors or without a path, do it properly with urlparse:\npath= urlparse.urlparse(url).path\nreturn path.rstrip('/').rsplit('/', 1)[-1] or '(root path)'\n\n", "Filenames with a slash at the end are technically still path definitions and indicate that the index file is to be read. If you actually have one that' ends in test.php/, I would consider that an error. In any case, you can strip the / from the end before running your code as follows:\nurl = url.rstrip('/')\n\n", "There is a library called urlparse that will parse the url for you, but still doesn't remove the / at the end so one of the above will be the best option\n", "Just for fun, you can use a Regexp:\nimport re\nprint re.search('/([^/]+)/?$', url).group(1)\n\n" ]
[ 9, 4, 1, 0, 0 ]
[ "You could use\nprint url[url.rstrip(\"/\").rfind(\"/\") +1 : ]\n\n", "filter(None, url.split('/'))[-1]\n\n(But urlparse is probably more readable, even if more verbose.)\n" ]
[ -1, -1 ]
[ "python", "url" ]
stackoverflow_0000229352_python_url.txt
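One way to fold the answers above into a single helper (the sample URLs are invented; urlparse moved to urllib.parse in Python 3, hence the guarded import):

try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2, as in the answers above

def page_name(url):
    # Last path component, ignoring query strings and trailing slashes.
    path = urlparse(url).path
    return path.rstrip("/").rsplit("/", 1)[-1]

assert page_name("http://www.google.com/test.php") == "test.php"
assert page_name("http://www.google.com/test.php/") == "test.php"
assert page_name("http://www.google.com/test.php?f=a/b.txt") == "test.php"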
Q: Python debugger: Stepping into a function that you have called interactively Python is quite cool, but unfortunately, its debugger is not as good as perl -d. One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so: # NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO() ~> cat -n /tmp/show_perl.pl 1 #!/usr/local/bin/perl 2 3 sub foo { 4 print "hi\n"; 5 print "bye\n"; 6 } 7 8 exit 0; ~> perl -d /tmp/show_perl.pl Loading DB routines from perl5db.pl version 1.28 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(/tmp/show_perl.pl:8): exit 0; # MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY DB<1> s foo() main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3): 3: foo(); DB<<2>> s main::foo(/tmp/show_perl.pl:4): print "hi\n"; DB<<2>> n hi main::foo(/tmp/show_perl.pl:5): print "bye\n"; DB<<2>> n bye DB<2> n Debugged program terminated. Use q to quit or R to restart, use O inhibit_exit to avoid stopping after program termination, h q, h R or h O to get additional info. DB<2> q This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump). So my question is twofold: Am I missing something? Is there a python debugger that would indeed let me do this? Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments. A: And I've answered my own question! It's the "debug" command in pydb: ~> cat -n /tmp/test_python.py 1 #!/usr/local/bin/python 2 3 def foo(): 4 print "hi" 5 print "bye" 6 7 exit(0) 8 ~> pydb /tmp/test_python.py (/tmp/test_python.py:7): <module> 7 exit(0) (Pydb) debug foo() ENTERING RECURSIVE DEBUGGER ------------------------Call level 11 (/tmp/test_python.py:3): foo 3 def foo(): ((Pydb)) s (/tmp/test_python.py:4): foo 4 print "hi" ((Pydb)) s hi (/tmp/test_python.py:5): foo 5 print "bye" ((Pydb)) s bye ------------------------Return from level 11 (<type 'NoneType'>) ----------------------Return from level 10 (<type 'NoneType'>) LEAVING RECURSIVE DEBUGGER (/tmp/test_python.py:7): <module> A: You can interactively debug a function with pdb as well, provided the script you want to debug does not exit() at the end: $ cat test.py #!/usr/bin/python def foo(f, g): h = f+g print h return 2*f To debug, start an interactive python session and import pdb: $ python Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import pdb >>> import test >>> pdb.runcall(test.foo, 1, 2) > /Users/simon/Desktop/test.py(4)foo() -> h = f+g (Pdb) n > /Users/simon/Desktop/test.py(5)foo() -> print h (Pdb) The pdb module comes with python and is documented in the modules docs at http://docs.python.org/modindex.html A: There is a python debugger that is part of the core distribution of python called 'pdb'. I rarely use it myself, but find it useful sometimes. Given this program: def foo(): a = 0 print "hi" a += 1 print "bye" foo() Here is a session debugging it: $ python /usr/lib/python2.5/pdb.py /var/tmp/pdbtest.py ~ > /var/tmp/pdbtest.py(2)<module>() -> def foo(): (Pdb) s > /var/tmp/pdbtest.py(10)<module>() -> foo() (Pdb) s --Call-- > /var/tmp/pdbtest.py(2)foo() -> def foo(): (Pdb) s > /var/tmp/pdbtest.py(3)foo() -> a = 0 (Pdb) s > /var/tmp/pdbtest.py(4)foo() -> print "hi" (Pdb) print a 0 (Pdb) s hi > /var/tmp/pdbtest.py(6)foo() -> a += 1 (Pdb) s > /var/tmp/pdbtest.py(8)foo() -> print "bye" (Pdb) print a 1 (Pdb) s bye --Return-- > /var/tmp/pdbtest.py(8)foo()->None -> print "bye" (Pdb) s --Return-- > /var/tmp/pdbtest.py(10)<module>()->None -> foo() (Pdb) s A: For interactive work on code I'm developing, I usually find it more efficient to set a programmatic "break point" in the code itself with pdb.set_trace. This makes it easier to break on the program's state deep in a loop, too: if <state>: pdb.set_trace() A: If you're more familiar with a GUI debugger, there's winpdb ('win' in this case does not refer to Windows). I actually use it on Linux. On debian/ubuntu: sudo aptitude install winpdb Then just put this in your code where you want it to break: import rpdb2; rpdb2.start_embedded_debugger_interactive_password() Then start winpdb and attach to your running script.
Python debugger: Stepping into a function that you have called interactively
Python is quite cool, but unfortunately, its debugger is not as good as perl -d. One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so: # NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO() ~> cat -n /tmp/show_perl.pl 1 #!/usr/local/bin/perl 2 3 sub foo { 4 print "hi\n"; 5 print "bye\n"; 6 } 7 8 exit 0; ~> perl -d /tmp/show_perl.pl Loading DB routines from perl5db.pl version 1.28 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(/tmp/show_perl.pl:8): exit 0; # MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY DB<1> s foo() main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3): 3: foo(); DB<<2>> s main::foo(/tmp/show_perl.pl:4): print "hi\n"; DB<<2>> n hi main::foo(/tmp/show_perl.pl:5): print "bye\n"; DB<<2>> n bye DB<2> n Debugged program terminated. Use q to quit or R to restart, use O inhibit_exit to avoid stopping after program termination, h q, h R or h O to get additional info. DB<2> q This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump). So my question is twofold: Am I missing something? Is there a python debugger that would indeed let me do this? Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments.
[ "And I've answered my own question! It's the \"debug\" command in pydb:\n~> cat -n /tmp/test_python.py\n 1 #!/usr/local/bin/python\n 2\n 3 def foo():\n 4 print \"hi\"\n 5 print \"bye\"\n 6\n 7 exit(0)\n 8\n\n~> pydb /tmp/test_python.py\n(/tmp/test_python.py:7): <module>\n7 exit(0)\n\n\n(Pydb) debug foo()\nENTERING RECURSIVE DEBUGGER\n------------------------Call level 11\n(/tmp/test_python.py:3): foo\n3 def foo():\n\n((Pydb)) s\n(/tmp/test_python.py:4): foo\n4 print \"hi\"\n\n((Pydb)) s\nhi\n(/tmp/test_python.py:5): foo\n5 print \"bye\"\n\n\n((Pydb)) s\nbye\n------------------------Return from level 11 (<type 'NoneType'>)\n----------------------Return from level 10 (<type 'NoneType'>)\nLEAVING RECURSIVE DEBUGGER\n(/tmp/test_python.py:7): <module>\n\n", "You can interactively debug a function with pdb as well, provided the script you want to debug does not exit() at the end:\n$ cat test.py\n#!/usr/bin/python\n\ndef foo(f, g):\n h = f+g\n print h\n return 2*f\n\nTo debug, start an interactive python session and import pdb:\n$ python\nPython 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) \n[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import pdb\n>>> import test\n>>> pdb.runcall(test.foo, 1, 2)\n> /Users/simon/Desktop/test.py(4)foo()\n-> h = f+g\n(Pdb) n\n> /Users/simon/Desktop/test.py(5)foo()\n-> print h\n(Pdb) \n\nThe pdb module comes with python and is documented in the modules docs at http://docs.python.org/modindex.html\n", "There is a python debugger that is part of the core distribution of python called 'pdb'. I rarely use it myself, but find it useful sometimes.\nGiven this program:\ndef foo():\n a = 0\n print \"hi\"\n\n a += 1\n\n print \"bye\"\n\nfoo()\n\nHere is a session debugging it:\n$ python /usr/lib/python2.5/pdb.py /var/tmp/pdbtest.py ~\n> /var/tmp/pdbtest.py(2)<module>()\n-> def foo():\n(Pdb) s\n> /var/tmp/pdbtest.py(10)<module>()\n-> foo()\n(Pdb) s\n--Call--\n> /var/tmp/pdbtest.py(2)foo()\n-> def foo():\n(Pdb) s\n> /var/tmp/pdbtest.py(3)foo()\n-> a = 0\n(Pdb) s\n> /var/tmp/pdbtest.py(4)foo()\n-> print \"hi\"\n(Pdb) print a\n0\n(Pdb) s\nhi\n> /var/tmp/pdbtest.py(6)foo()\n-> a += 1\n(Pdb) s\n> /var/tmp/pdbtest.py(8)foo()\n-> print \"bye\"\n(Pdb) print a\n1\n(Pdb) s\nbye\n--Return--\n> /var/tmp/pdbtest.py(8)foo()->None\n-> print \"bye\"\n(Pdb) s\n--Return--\n> /var/tmp/pdbtest.py(10)<module>()->None\n-> foo()\n(Pdb) s\n\n", "For interactive work on code I'm developing, I usually find it more efficient to set a programmatic \"break point\" in the code itself with pdb.set_trace. This makes it easir to break on the program's state deep in a a loop, too: if <state>: pdb.set_trace()\n", "If you're more familiar with a GUI debugger, there's winpdb ('win' in this case does not refer to Windows). I actually use it on Linux.\nOn debian/ubuntu:\nsudo aptitude install winpdb\n\nThen just put this in your code where you want it to break:\nimport rpdb2; rpdb2.start_embedded_debugger_interactive_password()\n\nThen start winpdb and attach to your running script.\n" ]
[ 47, 25, 4, 2, 2 ]
[]
[]
[ "debugging", "pdb", "python" ]
stackoverflow_0000228642_debugging_pdb_python.txt
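For what it's worth, the standard pdb has (at least in the versions I have checked) a debug command much like pydb's, so the same workflow is possible without pydb. A minimal sketch, with an invented file name and a deliberate absence of exit(0):

# test_pdb.py
def foo():
    print("hi")
    print("bye")

if __name__ == "__main__":
    import pdb
    pdb.set_trace()
    # At the (Pdb) prompt:
    #   (Pdb) debug foo()   # enters a recursive debugger, like pydb's "debug"
    #   ((Pdb)) s           # steps into foo()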
Q: Is there a free python debugger that has watchpoints? pdb and winpdb both seem to be missing this essential (to me) feature. I saw something suggesting WingIDE has it but I'd prefer a solution that is free, and if I do have to pay, I'd prefer to pay for something that is better than Wing. A: You should check out Eric4 It's a very good Python IDE with a built-in debugger. The debugger has views for global variables, local variables and watchpoints. A: Please look at what pydev in eclipse offers... A: Take a look at PyScripter. It has an integrated debugger, watch windows and much more. It's open source and is developed here. HTH A: It's too bad that the standard pdb module that comes with python itself does not yet support watchpoints. Described here: http://wiki.python.org/moin/PdbImprovments A: This reimplementation of the built-in pdb.py has watchpoints. http://morepypy.blogspot.com/2008/06/pdb-and-rlcompleterng.html I tried it but, in a few cursory tries, was not able to get it to work.
Is there a free python debugger that has watchpoints?
pdb and winpdb both seem to be missing this essential (to me) feature. I saw something suggesting WingIDE has it but I'd prefer a solution that is free, and if I do have to pay, I'd prefer to pay for something that is better than Wing.
[ "You should check out Eric4\nIt's a very good Python IDE with a builtin debugger.\nThe debugger has views for global variables, local variables and watchpoints.\n", "Please look what pydev in eclipse offers...\n", "Take a look at PyScripter. It has an integrated debugger, watch windows and much more.\nIt's open source and is developed here.\nHTH\n", "It's too bad that the standard pdb module that comes with python itself does not yet support watchpoints.\nDescribed here: http://wiki.python.org/moin/PdbImprovments\n", "This reimplementation of the built-in pdb.py has watchpoints.\nhttp://morepypy.blogspot.com/2008/06/pdb-and-rlcompleterng.html\nI tried it but, in cursory tries was not able to get it to work.\n" ]
[ 4, 2, 1, 1, 1 ]
[]
[]
[ "debugging", "pdb", "python", "watchpoint" ]
stackoverflow_0000207904_debugging_pdb_python_watchpoint.txt
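Lacking built-in watchpoints, one stopgap is a hand-rolled watcher on top of sys.settrace. The sketch below only watches module globals and reports on the line after the assignment, but it needs nothing outside the standard library; the names x and demo are invented:

import sys

x = 0
WATCHED = {"x": x}          # name -> last value seen

def tracer(frame, event, arg):
    if event in ("call", "line"):
        for name in WATCHED:
            new = frame.f_globals.get(name)
            if new != WATCHED[name]:
                print("watch: %s changed %r -> %r (line %d)"
                      % (name, WATCHED[name], new, frame.f_lineno))
                WATCHED[name] = new   # could drop into pdb here instead
    return tracer

def demo():
    global x
    x = 1
    x = 2
    return x

sys.settrace(tracer)
demo()
sys.settrace(None)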
Q: Given a list of variable names in Python, how do I create a dictionary with the variable names as keys (to the variables' values)? I have a list of variable names, like this: ['foo', 'bar', 'baz'] (I originally asked how I convert a list of variables. See Greg Hewgill's answer below.) How do I convert this to a dictionary where the keys are the variable names (as strings) and the values are the values of the variables? {'foo': foo, 'bar': bar, 'baz': baz} Now that I'm re-asking the question, I came up with: d = {} for name in list_of_variable_names: d[name] = eval(name) Can that be improved upon? Update, responding to the question (in a comment) of why I'd want to do this: I often find myself applying the % operator to strings with a dictionary of names and values to interpolate. Often the names in the string are just the names of local variables. So (with the answer below) I can do something like this: message = '''Name: %(name)s ZIP: %(zip)s Dear %(name)s, ...''' % dict((x, locals()[x]) for x in ['name', 'zip']) A: Forget filtering locals()! The dictionary you give to the formatting string is allowed to contain unused keys: >>> name = 'foo' >>> zip = 123 >>> unused = 'whoops!' >>> locals() {'name': 'foo', 'zip': 123, ... 'unused': 'whoops!', ...} >>> '%(name)s %(zip)i' % locals() 'foo 123' With the new f-string feature in Python 3.6, using locals() is no longer necessary: >>> name = 'foo' >>> zip = 123 >>> unused = 'whoops!' >>> f'{zip: >5} {name.upper()}' ' 123 FOO' A: You can use list or generator comprehensions to build a list of key, value tuples used to directly instantiate a dict. The best way is below: dict((name, eval(name)) for name in list_of_variable_names) In addition, if you know, for example, that the variables exist in the local symbol table you can save yourself from the dangerous eval by looking the variable up directly in locals: dict((name, locals()[name]) for name in list_of_variable_names) After your final update, I think the answer below is really what you want. If you're just using this for string expansion with strings that you control, just pass locals() directly to the string expansion and it will cherry-pick out the desired values If, however, these strings could ever come from an outside source (e.g. translation files), then it's a good idea to filter locals() A: Your original list [foo, bar, baz] doesn't contain the variable names, it just contains elements that refer to the same values as the variables you listed. This is because you can have two different variable names that refer to the same value. So, the list by itself doesn't contain information about what other names refer to the objects. The first element in your array has the name foo but it also has the name a[0] (assuming your array is called a). After executing the following code, quux also refers to the same object: quux = a[0] Update: You're right that you can use eval() for that, but its use is generally discouraged. Python provides a special member named __dict__ that contains the symbol table for the current module. So you can: import __main__ d = dict((x, __main__.__dict__[x]) for x in list_of_variable_names) Having to import __main__ when your code is in the unnamed main module is a quirk of Python. A: Not efficient, but without invoking eval: dict((k,v) for (k,v) in globals().iteritems() if k in list_of_variable_names) or dict((k,v) for (k,v) in vars().iteritems() if k in list_of_variable_names) depending on what you want.
Given a list of variable names in Python, how do I create a dictionary with the variable names as keys (to the variables' values)?
I have a list of variable names, like this: ['foo', 'bar', 'baz'] (I originally asked how I convert a list of variables. See Greg Hewgill's answer below.) How do I convert this to a dictionary where the keys are the variable names (as strings) and the values are the values of the variables? {'foo': foo, 'bar': bar, 'baz': baz} Now that I'm re-asking the question, I came up with: d = {} for name in list_of_variable_names: d[name] = eval(name) Can that be improved upon? Update, responding to the question (in a comment) of why I'd want to do this: I often find myself applying the % operator to strings with a dictionary of names and values to interpolate. Often the names in the string are just the names of local variables. So (with the answer below) I can do something like this: message = '''Name: %(name)s ZIP: %(zip)s Dear %(name)s, ...''' % dict((x, locals()[x]) for x in ['name', 'zip'])
[ "Forget filtering locals()! The dictionary you give to the formatting string is allowed to contain unused keys:\n>>> name = 'foo'\n>>> zip = 123\n>>> unused = 'whoops!'\n>>> locals()\n{'name': 'foo', 'zip': 123, ... 'unused': 'whoops!', ...}\n>>> '%(name)s %(zip)i' % locals()\n'foo 123'\n\nWith the new f-string feature in Python 3.6, using locals() is no longer necessary:\n>>> name = 'foo'\n>>> zip = 123\n>>> unused = 'whoops!'\n>>> f'{zip: >5} {name.upper()}'\n' 123 FOO'\n\n", "You can use list or generator comprehensions to build a list of key, value tuples used to directly instantiate a dict. The best way is below:\ndict((name, eval(name)) for name in list_of_variable_names)\n\nIn addition, if you know, for example, that the variables exist in the local symbol table you can save yourself from the dangerous eval by looking the variable directly from locals:\ndict((name, locals()[name]) for name in list_of_variable_names)\n\nAfter your final update, I think the answer below is really what you want. If you're just using this for string expansion with strings that you control, just pass locals() directly to the string expansion and it will cherry-pick out the desired values\nIf, however, these strings could ever come from an outside source (e.g. translation files), than it's a good idea to filter locals()\n", "Your original list [foo, bar, baz] doesn't contain the variable names, it just contains elements that refer to the same values as the variables you listed. This is because you can have two different variable names that refer to the same value.\nSo, the list by itself doesn't contain information about what other names refer to the objects. The first element in your array has the name foo but it also has the name a[0] (assuming your array is called a). After executing the following code, quux also refers to the same object:\nquux = a[0]\n\nUpdate: You're right that you can use eval() for that, but its use is generally discouraged. Python provides a special member named __dict__ that contains the symbol table for the current module. So you can:\nimport __main__\nd = dict((x, __main__.__dict__[x]) for x in list_of_variable_names)\n\nHaving to import __main__ when your code is in the unnamed main module is a quirk of Python.\n", "Not efficient, but without invoking eval:\ndict((k,v) for (k,v) in globals().iteritems() if k in list_of_variable_names)\n\nor\ndict((k,v) for (k,v) in vars().iteritems() if k in list_of_variable_names)\n\ndepending on what you want.\n" ]
[ 16, 5, 4, 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0000230896_dictionary_list_python.txt
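The same filtering as a dict comprehension (Python 2.7+), next to the pass-locals()-directly shortcut from the accepted answer; the variable names here are invented:

name, zip_code = "foo", 123
wanted = ["name", "zip_code"]

d = {k: v for k, v in locals().items() if k in wanted}
print("Name: %(name)s ZIP: %(zip_code)s" % d)

# For % formatting the filtering is optional -- unused keys are ignored:
print("Name: %(name)s ZIP: %(zip_code)s" % locals())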
Q: Unexpected list comprehension behaviour in Python I believe I'm getting bitten by some combination of nested scoping rules and list comprehensions. Jeremy Hylton's blog post is suggestive about the causes, but I don't really understand CPython's implementation well-enough to figure out how to get around this. Here is an (overcomplicated?) example. If people have a simpler one that demos it, I'd like to hear it. The issue: the list comprehensions using next() are filled with the result from the last iteration. edit: The Problem: What exactly is going on with this, and how do I fix this? Do I have to use a standard for loop? Clearly the function is running the correct number of times, but the list comprehensions end up with the final value instead of the result of each loop. Some hypotheses: generators? lazy filling of list comprehensions? code import itertools def digit(n): digit_list = [ (x,False) for x in xrange(1,n+1)] digit_list[0] = (1,True) return itertools.cycle ( digit_list) >>> D = digit(5) >>> [D.next() for x in range(5)] ## This list comprehension works as expected [(1, True), (2, False), (3, False), (4, False), (5, False)] class counter(object): def __init__(self): self.counter = [ digit(4) for ii in range(2) ] self.totalcount=0 self.display = [0,] * 2 def next(self): self.totalcount += 1 self.display[-1] = self.counter[-1].next()[0] print self.totalcount, self.display return self.display def next2(self,*args): self._cycle(1) self.totalcount += 1 print self.totalcount, self.display return self.display def _cycle(self,digit): d,first = self.counter[digit].next() #print digit, d, first #print self._display self.display[digit] = d if first and digit > 0: self._cycle(digit-1) C = counter() [C.next() for x in range(5)] [C.next2() for x in range(5)] OUTPUT In [44]: [C.next() for x in range(6)] 1 [0, 1] 2 [0, 2] 3 [0, 3] 4 [0, 4] 5 [0, 1] 6 [0, 2] Out[44]: [[0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2]] In [45]: [C.next2() for x in range(6)] 7 [0, 3] 8 [0, 4] 9 [1, 1] 10 [1, 2] 11 [1, 3] 12 [1, 4] Out[45]: [[1, 4], [1, 4], [1, 4], [1, 4], [1, 4], [1, 4]] # this should be: [[0,3],[0,4]....[1,4]] or similar A: The problem is that with return self.display you return a reference to this list (not a copy). So what you end up with is a list where each element is a reference to self.display. To illustrate, look at the following: >>> a = [1,2] >>> b = [a,a] >>> b [[1, 2], [1, 2]] >>> a.append(3) >>> b [[1, 2, 3], [1, 2, 3]] You probably want to use something like return self.display[:]. A: Mind if i refactor this a bit? def digit(n): for i in itertools.count(): yield (i%n+1, not i%n) But actually you don't need that one, if you implement the whole thing as a simple iterator: def counter(digits, base): counter = [0] * digits def iterator(): for total in itertools.count(1): for i in range(len(counter)): counter[i] = (counter[i] + 1) % base if counter[i]: break print total, list(reversed(counter)) yield list(reversed(counter)) return iterator() c = counter(2, 4) print list(itertools.islice(c, 10)) If you want to get rid of the print (debugging, is it?), go with a while-loop. This incindentally also solves your initial problem, because reversed returns a copy of the list. Oh, and it's zero-based now ;)
Unexpected list comprehension behaviour in Python
I believe I'm getting bitten by some combination of nested scoping rules and list comprehensions. Jeremy Hylton's blog post is suggestive about the causes, but I don't really understand CPython's implementation well-enough to figure out how to get around this. Here is an (overcomplicated?) example. If people have a simpler one that demos it, I'd like to hear it. The issue: the list comprehensions using next() are filled with the result from the last iteration. edit: The Problem: What exactly is going on with this, and how do I fix this? Do I have to use a standard for loop? Clearly the function is running the correct number of times, but the list comprehensions end up with the final value instead of the result of each loop. Some hypotheses: generators? lazy filling of list comprehensions? code import itertools def digit(n): digit_list = [ (x,False) for x in xrange(1,n+1)] digit_list[0] = (1,True) return itertools.cycle ( digit_list) >>> D = digit(5) >>> [D.next() for x in range(5)] ## This list comprehension works as expected [(1, True), (2, False), (3, False), (4, False), (5, False)] class counter(object): def __init__(self): self.counter = [ digit(4) for ii in range(2) ] self.totalcount=0 self.display = [0,] * 2 def next(self): self.totalcount += 1 self.display[-1] = self.counter[-1].next()[0] print self.totalcount, self.display return self.display def next2(self,*args): self._cycle(1) self.totalcount += 1 print self.totalcount, self.display return self.display def _cycle(self,digit): d,first = self.counter[digit].next() #print digit, d, first #print self._display self.display[digit] = d if first and digit > 0: self._cycle(digit-1) C = counter() [C.next() for x in range(5)] [C.next2() for x in range(5)] OUTPUT In [44]: [C.next() for x in range(6)] 1 [0, 1] 2 [0, 2] 3 [0, 3] 4 [0, 4] 5 [0, 1] 6 [0, 2] Out[44]: [[0, 2], [0, 2], [0, 2], [0, 2], [0, 2], [0, 2]] In [45]: [C.next2() for x in range(6)] 7 [0, 3] 8 [0, 4] 9 [1, 1] 10 [1, 2] 11 [1, 3] 12 [1, 4] Out[45]: [[1, 4], [1, 4], [1, 4], [1, 4], [1, 4], [1, 4]] # this should be: [[0,3],[0,4]....[1,4]] or similar
[ "The problem is that with return self.display you return a reference to this list (not a copy). So what you end up with is a list where each element is a reference to self.display. To illustrate, look at the following:\n>>> a = [1,2]\n>>> b = [a,a]\n>>> b\n[[1, 2], [1, 2]]\n>>> a.append(3)\n>>> b\n[[1, 2, 3], [1, 2, 3]]\n\nYou probably want to use something like return self.display[:].\n", "Mind if i refactor this a bit?\ndef digit(n):\n for i in itertools.count():\n yield (i%n+1, not i%n)\n\nBut actually you don't need that one, if you implement the whole thing as a simple iterator:\ndef counter(digits, base):\n counter = [0] * digits\n\n def iterator():\n for total in itertools.count(1):\n for i in range(len(counter)):\n counter[i] = (counter[i] + 1) % base\n if counter[i]:\n break\n print total, list(reversed(counter))\n yield list(reversed(counter))\n\n return iterator()\n\nc = counter(2, 4)\nprint list(itertools.islice(c, 10))\n\nIf you want to get rid of the print (debugging, is it?), go with a while-loop.\nThis incindentally also solves your initial problem, because reversed returns a copy of the list.\nOh, and it's zero-based now ;)\n" ]
[ 15, 4 ]
[]
[]
[ "language_implementation", "list_comprehension", "python" ]
stackoverflow_0000225675_language_implementation_list_comprehension_python.txt
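A stripped-down reproduction of the aliasing issue and the [:] fix from the first answer; Counter here is a stand-in, not the original class:

class Counter(object):
    def __init__(self):
        self.display = [0, 0]

    def next_aliased(self):
        self.display[1] += 1
        return self.display          # every caller shares one list

    def next_copied(self):
        self.display[1] += 1
        return self.display[:]       # snapshot taken at this instant

c = Counter()
print([c.next_aliased() for _ in range(3)])   # [[0, 3], [0, 3], [0, 3]]
c = Counter()
print([c.next_copied() for _ in range(3)])    # [[0, 1], [0, 2], [0, 3]]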
Q: How to do Makefile dependencies for python code I have a bunch of C files that are generated by a collection of python programs that have a number of shared python modules and I need to account for this in my make system. It is easy enough to enumerate which python program need to be run to generate each C file. What I can't find a good solution for is determining which other python files those programs depend on. I need this so make will know what needs regenerating if one of the shared python files changes. Is there a good system for producing make style dependency rules from a collection of python sources? A: modulefinder can be used to get the dependency graph. A: The import statements are pretty much all the dependencies there are. There are are two relevant forms for the import statements: import x, y, z from x import a, b, c You'll also need the PYTHONPATH and sites information that is used to build sys.path. This shows the physical locations of the modules and packages. That's kind of painful to process, since you have to do the transitive closure of all imports in all modules you import. As an alternative approach, you can use the -v option to get the complete list of imports and physical files. This produces a log that you can edit into a flat list of dependencies. For instance, when I do >>> import math I see this in the log dlopen("/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload/math.so", 2); import math # dynamically loaded from /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload/math.so
How to do Makefile dependencies for python code
I have a bunch of C files that are generated by a collection of python programs that have a number of shared python modules and I need to account for this in my make system. It is easy enough to enumerate which python programs need to be run to generate each C file. What I can't find a good solution for is determining which other python files those programs depend on. I need this so make will know what needs regenerating if one of the shared python files changes. Is there a good system for producing make style dependency rules from a collection of python sources?
[ "modulefinder can be used to get the dependency graph.\n", "The import statements are pretty much all the dependencies there are. There are are two relevant forms for the import statements:\nimport x, y, z\nfrom x import a, b, c\n\nYou'll also need the PYTHONPATH and sites information that is used to build sys.path. This shows the physical locations of the modules and packages.\nThat's kind of painful to process, since you have to do the transitive closure of all imports in all modules you import.\nAs an alternative approach, you can use the -v option to get the complete list of imports and physical files. This produces a log that you can edit into a flat list of dependencies.\nFor instance, when I do \n>>> import math\n\nI see this in the log\ndlopen(\"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload/math.so\", 2);\nimport math # dynamically loaded from /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload/math.so\n\n" ]
[ 3, 1 ]
[]
[]
[ "dependencies", "makefile", "python" ]
stackoverflow_0000232162_dependencies_makefile_python.txt
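A sketch of turning the modulefinder suggestion above into a make-style dependency line; the target and script names are invented, and in practice you would probably also filter out standard-library paths:

from modulefinder import ModuleFinder

def make_deps(target, script):
    # Run the generator script under ModuleFinder and collect every
    # pure-Python file it imports, directly or transitively.
    finder = ModuleFinder()
    finder.run_script(script)
    deps = sorted(m.__file__ for m in finder.modules.values()
                  if m.__file__ and m.__file__.endswith(".py"))
    return "%s: %s" % (target, " ".join(deps))

print(make_deps("generated.c", "generator.py"))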
Q: Default parameters to actions with Django Is there a way to have a default parameter passed to an action in the case where the regex didn't match anything using django? urlpatterns = patterns('',(r'^test/(?P<name>.*)?$','myview.displayName')) #myview.py def displayName(request,name): # write name to response or something I have tried setting the third parameter in the urlpatterns to a dictionary of defaults, and giving the name parameter a default value on the method, none of which worked. The name parameter always seems to be None. I really don't want to code a check for None if I could set a default value. Clarification: here is an example of what I was changing it to. def displayName(request,name='Steve'): return HttpResponse(name) #i also tried urlpatterns = patterns('', (r'^test/(?P<name>.*)?$', 'myview.displayName', dict(name='Test') ) ) when I point my browser at the view it displays the text 'None' Any ideas? A: The problem is that when the pattern is matched against 'test/' the groupdict captured by the regex contains the mapping 'name' => None: >>> url.match("test/").groupdict() {'name': None} This means that when the view is invoked using something that I expect is similar to the following: view(request, *groups, **groupdict) which is equivalent to: view(request, name = None) for 'test/', meaning that name is assigned None rather than not assigned. This leaves you with two options. You can: Explicitly check for None in the view code which is kind of hackish. Rewrite the url dispatch rule to make the name capture non-optional and introduce a second rule to capture when no name is provided. For example: urlpatterns = patterns('', (r'^test/(?P<name>.+)$','myview.displayName'), # note the '+' instead of the '*' (r'^test/$','myview.displayName'), ) When taking the second approach, you can simply call the method without the capture pattern, and let python handle the default parameter or you can call a different view which delegates. A: I thought you could def displayName(request, name=defaultObj); that's what I've done in the past, at least. What were you setting the default value to?
Default parameters to actions with Django
Is there a way to have a default parameter passed to an action in the case where the regex didn't match anything using django? urlpatterns = patterns('',(r'^test/(?P<name>.*)?$','myview.displayName')) #myview.py def displayName(request,name): # write name to response or something I have tried setting the third parameter in the urlpatterns to a dictionary of defaults, and giving the name parameter a default value on the method, none of which worked. The name parameter always seems to be None. I really don't want to code a check for None if I could set a default value. Clarification: here is an example of what I was changing it to. def displayName(request,name='Steve'): return HttpResponse(name) #i also tried urlpatterns = patterns('', (r'^test/(?P<name>.*)?$', 'myview.displayName', dict(name='Test') ) ) when I point my browser at the view it displays the text 'None' Any ideas?
[ "The problem is that when the pattern is matched against 'test/' the groupdict captured by the regex contains the mapping 'name' => None:\n>>> url.match(\"test/\").groupdict()\n{'name': None}\n\nThis means that when the view is invoked, using something I expect that is similar to below:\nview(request, *groups, **groupdict)\n\nwhich is equivalent to:\nview(request, name = None)\n\nfor 'test/', meaning that name is assigned None rather than not assigned.\nThis leaves you with two options. You can:\n\nExplicitly check for None in the view code which is kind of hackish.\nRewrite the url dispatch rule to make the name capture non-optional and introduce a second rule to capture when no name is provided. \n\nFor example:\nurlpatterns = patterns('',\n (r'^test/(?P<name>.+)$','myview.displayName'), # note the '+' instead of the '*'\n (r'^test/$','myview.displayName'),\n)\n\nWhen taking the second approach, you can simply call the method without the capture pattern, and let python handle the default parameter or you can call a different view which delegates.\n", "I thought you could def displayName(request, name=defaultObj); that's what I've done in the past, at least. What were you setting the default value to?\n" ]
[ 9, 0 ]
[]
[]
[ "django", "django_urls", "python" ]
stackoverflow_0000234695_django_django_urls_python.txt
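The two-pattern approach from the accepted answer, spelled out with a single view whose default supplies the missing name ('Steve' is the question's own example; old-style patterns() syntax to match the Django version discussed here). This is a sketch and needs a Django project around it to run:

# urls.py
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^test/(?P<name>.+)$', 'myview.displayName'),   # name captured
    (r'^test/$',             'myview.displayName'),   # no name -> default used
)

# myview.py
from django.http import HttpResponse

def displayName(request, name='Steve'):
    return HttpResponse(name)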
Q: how do I implement a custom code page used by a serial device so I can convert text to it in Python? I have a scrolling LED sign that takes messages in either ASCII or (using some specific code) characters from a custom code page. For example, the euro sign should be sent as <U00> and ä is <U64> (You can find the full code page in the documentation) My question is, what is the most pythonic way to implement this custom code page, and to have a codec that can convert UTF strings to my custom code page? A: Pick a name for your encoding, maybe "led_display", whatever. Implement and register a codec with the standard library. Pythonic profit!
how do I implement a custom code page used by a serial device so I can convert text to it in Python?
I have a scrolling LED sign that takes messages in either ASCII or (using some specific code) characters from a custom code page. For example, the euro sign should be sent as <U00> and ä is <U64> (You can find the full code page in the documentation) My question is, what is the most pythonic way to implement this custom code page, and to have a codec that can convert UTF strings to my custom code page?
[ "\nPick a name for your encoding, maybe \"led_display\", whatever.\nImplement and register a codec with the standard library.\nPythonic profit!\n\n" ]
[ 3 ]
[]
[]
[ "encoding", "python", "utf" ]
stackoverflow_0000235416_encoding_python_utf.txt
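A minimal Python 3 sketch of the register-a-codec approach from the answer above. Only the euro and a-umlaut mappings come from the question; the rest of the table would be filled in from the sign's documentation, and the codec name 'led_display' is invented:

import codecs

_ENC = {'\u20ac': 0x00,    # euro sign -> <U00>  (from the question)
        '\xe4':   0x64}    # a-umlaut  -> <U64>  (from the question)
_DEC = dict((v, k) for (k, v) in _ENC.items())

def _encode(text, errors='strict'):
    out = bytearray()
    for i, ch in enumerate(text):
        if ch in _ENC:
            out.append(_ENC[ch])
        elif ord(ch) < 128:
            out.append(ord(ch))            # plain ASCII passes through
        else:
            raise UnicodeEncodeError('led_display', text, i, i + 1,
                                     'character not on the sign code page')
    return bytes(out), len(text)

def _decode(data, errors='strict'):
    return ''.join(_DEC.get(b, chr(b)) for b in bytes(data)), len(data)

def _search(name):
    if name == 'led_display':
        return codecs.CodecInfo(_encode, _decode, name='led_display')
    return None

codecs.register(_search)

print('20\u20ac'.encode('led_display'))    # b'20\x00'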
Q: How can I use Python for large scale development? I would be interested to learn about large scale development in Python and especially in how you maintain a large code base? When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you, how do you do it in Python? When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to lookup? How do you handle/prevent typing errors (typos)? Are UnitTest's used as a substitute for static type checking? As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hand at Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed. A: Don't use a screw driver as a hammer Python is not a statically typed language, so don't try to use it that way. When you use a specific tool, you use it for what it has been built. For Python, it means: Duck typing: no type checking. Only behavior matters. Therefore your code must be designed to use this feature. A good design means generic signatures, no dependencies between components, high abstraction levels. So if you change anything, you won't have to change the rest of the code. Python will not complain either, that's what it has been built for. Types are not an issue. Huge standard library. You do not need to change all your calls in the program if you use standard features you haven't coded yourself. And Python comes with batteries included. I keep discovering them everyday. I had no idea of the number of modules I could use when I started and tried to rewrite existing stuff like everybody. It's OK, you can't get it all right from the beginning. You don't write Java, C++, Python, PHP, Erlang, whatever, the same way. There are good reasons why there is room for each of so many different languages; they do not do the same things. Unit tests are not a substitute Unit tests must be performed with any language. The most famous unit test library (JUnit) is from the Java world! This has nothing to do with types. You check behaviors, again. You avoid trouble with regression. You ensure your customer you are on track. Python for large scale projects Languages, libraries and frameworks don't scale. Architectures do. If you design a solid architecture, if you are able to make it evolve quickly, then it will scale. Unit tests help, automatic code checks as well. But they are just safety nets. And small ones. Python is especially suitable for large projects because it enforces some good practices and has a lot of usual design patterns built-in. But again, do not use it for what it is not designed. E.g.: Python is not a technology for CPU intensive tasks. In a huge project, you will most likely use several different technologies anyway. As an SGBD (French for DBMS) and a templating language, for example. Python is no exception. You will probably want to use C/C++ for the part of your code you need to be fast. Or Java to fit in a Tomcat environment. Don't know, don't care. Python can play well with these. As a conclusion My answer may feel a bit rude, but don't get me wrong: this is a very good question. A lot of people come to Python with old habits. I screwed myself trying to code Java like Python. You can, but will never get the best of it. If you have played / want to play with Python, it's great! It's a wonderful tool. But just a tool, really. A: I had some experience with modifying "Frets On Fire", an open source python "Guitar Hero" clone. As I see it, python is not really suitable for a really large scale project. I found myself spending a large part of the development time debugging issues related to assignment of incompatible types, things that statically typed languages will reveal effortlessly at compile-time. Also, since types are determined at run-time, trying to understand existing code becomes harder, because you have no idea of the type of the parameter you are currently looking at. In addition to that, calling functions using their name string with the __getattr__ built in function is generally more common in Python than in other programming languages, thus making it somewhat hard to get the call graph to a certain function (although you can call functions with their name in some statically typed languages as well). I think that Python really shines in small scale software, rapid prototype development, and gluing existing programs together, but I would not use it for large scale software projects, since in those types of programs maintainability becomes the real issue, and in my opinion python is relatively weak there. A: Since nobody pointed out pychecker, pylint and similar tools, I will: pychecker and pylint are tools that can help you find incorrect assumptions (about function signatures, object attributes, etc.) They won't find everything that a compiler might find in a statically typed language -- but they can find problems that such compilers for such languages can't find, too. Python (and any dynamically typed language) is fundamentally different in terms of the errors you're likely to cause and how you would detect and fix them. It has definite downsides as well as upsides, but many (including me) would argue that in Python's case, the ease of writing code (and the ease of making it structurally sound) and of modifying code without breaking API compatibility (adding new optional arguments, providing different objects that have the same set of methods and attributes) make it suitable just fine for large codebases. A: my 0.10 EUR: i have several python applications in 'production' state. our company uses java, c++ and python. we develop with the eclipse ide (pydev for python) unittests are the key solution for the problem. (also for c++ and java) the less secure world of "dynamic typing" will make you less careless about your code quality BY THE WAY: large scale development doesn't mean that you use one single language! large scale development often uses a handful of languages specific to the problem. so i agree to the-hammer-problem :-) PS: static-typing & python A: Here are some items that have helped me maintain a fairly large system in python. Structure your code in layers, i.e. separate biz logic, presentation logic and your persistence layers. Invest a bit of time in defining these layers and make sure everyone on the project is brought in. For large systems creating a framework that forces you into a certain way of development can be key as well. Tests are key; without unit tests you will likely end up with an unmanageable code base several times quicker than with other languages. Keep in mind that unit tests are often not sufficient; make sure to have several integration/acceptance tests you can run quickly after any major change. Use the Fail Fast principle. Add assertions for cases you feel your code may be vulnerable. Have standard logging/error handling that will help you quickly navigate to the issue Use an IDE (pyDev works for me) that provides type ahead, pyLint/Checker integration to help you detect common typos right away and promote some coding standards Careful about your imports, never do from x import * or do relative imports without use of . Do refactor; a search/replace tool with regular expressions is often all you need to do move-method/class-type refactorings. A: Incompatible changes to the signature of a method. This doesn't happen as much in Python as it does in Java and C++. Python has optional arguments, default values, and far more flexibility in defining method signatures. Also, duck typing means that -- for example -- you don't have to switch from some class to an interface as part of a significant software change. Things just aren't as complex. How do you find all the places where that method is being called? grep works for dynamic languages. If you need to know every place a method is used, grep (or equivalent IDE-supported search) works great. How do you find out what operations an instance provides, since you don't have a static type to lookup? a. Look at the source. You don't have the Java/C++ problem of object libraries and jar files to contend with. You don't need all the elaborate aids and tools that those languages require. b. An IDE can provide signature information under many common circumstances. You can, easily, defeat your IDE's reasoning powers. When that happens, you should probably review what you're doing to be sure it makes sense. If your IDE can't reason out your type information, perhaps it's too dynamic. c. In Python, you often work through the interactive interpreter. Unlike Java and C++, you can explore your instances directly and interactively. You don't need a sophisticated IDE. Example: >>> x= SomeClass() >>> dir(x) How do you handle/prevent typing errors? Same as static languages: you don't prevent them. You find and correct them. Java can only find a certain class of typos. If you have two similar class or variable names, you can wind up in deep trouble, even with static type checking. Example: class MyClass { } class MyClassx extends MyClass { } A typo with these two class names can cause havoc. ["But I wouldn't put myself in that position with Java," folks say. Agreed. I wouldn't put myself in that position with Python, either; you make classes that are profoundly different, and will fail early if they're misused.] Are UnitTest's used as a substitute for static type checking? Here's the other point of view: static type checking is a substitute for clear, simple design. I've worked with programmers who weren't sure why an application worked. They couldn't figure out why things didn't compile; they didn't know the difference between an abstract superclass and an interface, and they couldn't figure out why a change in one place makes a bunch of other modules in a separate JAR file crash. The static type checking gave them false confidence in a flawed design. Dynamic languages allow programs to be simple. Simplicity is a substitute for static type checking. Clarity is a substitute for static type checking. A: My general rule of thumb is to use dynamic languages for small non-mission-critical projects and statically-typed languages for big projects. I find that code written in a dynamic language such as python gets "tangled" more quickly. Partly that is because it is much quicker to write code in a dynamic language and that leads to shortcuts and worse design, at least in my case. Partly it's because I have IntelliJ for quick and easy refactoring when I use Java, which I don't have for python. A: The usual answer to that is testing testing testing. You're supposed to have an extensive unit test suite and run it often, particularly before a new version goes online. Proponents of dynamically typed languages make the case that you have to test anyway because even in a statically typed language conformance to the crude rules of the type system covers only a small part of what can potentially go wrong.
How can I use Python for large scale development?
I would be interested to learn about large scale development in Python and especially in how you maintain a large code base? When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you, how do you do it in Python? When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to lookup? How do you handle/prevent typing errors (typos)? Are UnitTest's used as a substitute for static type checking? As you can guess I almost only worked with statically typed languages (C++/Java), but I would like to try my hand at Python for larger programs. But I had a very bad experience, a long time ago, with the clipper (dBase) language, which was also dynamically typed.
[ "Don't use a screw driver as a hammer\nPython is not a statically typed language, so don't try to use it that way.\nWhen you use a specific tool, you use it for what it has been built. For Python, it means:\n\nDuck typing : no type checking. Only behavior matters. Therefore your code must be designed to use this feature. A good design means generic signatures, no dependences between components, high abstraction levels.. So if you change anything, you won't have to change the rest of the code. Python will not complain either, that what it has been built for. Types are not an issue.\nHuge standard library. You do not need to change all your calls in the program if you use standard features you haven't coded yourself. And Python come with batteries included. I keep discovering them everyday. I had no idea of the number of modules I could use when I started and tried to rewrite existing stuff like everybody. It's OK, you can't get it all right from the beginning.\n\nYou don't write Java, C++, Python, PHP, Erlang, whatever, the same way. They are good reasons why there is room for each of so many different languages, they do not do the same things.\nUnit tests are not a substitute\nUnit tests must be performed with any language. The most famous unit test library (JUnit) is from the Java world!\nThis has nothing to do with types. You check behaviors, again. You avoid trouble with regression. You ensure your customer you are on tracks.\nPython for large scale projects\n\nLanguages, libraries and frameworks\n don't scale. Architectures do.\n\nIf you design a solid architecture, if you are able to make it evolves quickly, then it will scale. Unit tests help, automatic code check as well. But they are just safety nets. And small ones.\nPython is especially suitable for large projects because it enforces some good practices and has a lot of usual design patterns built-in. But again, do not use it for what it is not designed. E.g : Python is not a technology for CPU intensive tasks.\nIn a huge project, you will most likely use several different technologies anyway. As a SGBD (French for DBMS) and a templating language, or else. Python is no exception.\nYou will probably want to use C/C++ for the part of your code you need to be fast. Or Java to fit in a Tomcat environment. Don't know, don't care. Python can play well with these.\nAs a conclusion\nMy answer may feel a bit rude, but don't get me wrong: this is a very good question.\nA lot of people come to Python with old habits. I screwed myself trying to code Java like Python. You can, but will never get the best of it.\nIf you have played / want to play with Python, it's great! It's a wonderful tool. 
But just a tool, really.\n", "I had some experience with modifying \"Frets On Fire\", an open source python \"Guitar Hero\" clone.\nas I see it, python is not really suitable for a really large scale project.\nI found myself spending a large part of the development time debugging issues related to assignment of incompatible types, things that static typed laguages will reveal effortlessly at compile-time.\nalso, since types are determined on run-time, trying to understand existing code becomes harder, because you have no idea what's the type of that parameter you are currently looking at.\nin addition to that, calling functions using their name string with the __getattr__ built in function is generally more common in Python than in other programming languages, thus getting the call graph to a certain function somewhat hard (although you can call functions with their name in some statically typed languages as well).\nI think that Python really shines in small scale software, rapid prototype development, and gluing existing programs together, but I would not use it for large scale software projects, since in those types of programs maintainability becomes the real issue, and in my opinion python is relatively weak there.\n", "Since nobody pointed out pychecker, pylint and similar tools, I will: pychecker and pylint are tools that can help you find incorrect assumptions (about function signatures, object attributes, etc.) They won't find everything that a compiler might find in a statically typed language -- but they can find problems that such compilers for such languages can't find, too.\nPython (and any dynamically typed language) is fundamentally different in terms of the errors you're likely to cause and how you would detect and fix them. It has definite downsides as well as upsides, but many (including me) would argue that in Python's case, the ease of writing code (and the ease of making it structurally sound) and of modifying code without breaking API compatibility (adding new optional arguments, providing different objects that have the same set of methods and attributes) make it suitable just fine for large codebases.\n", "my 0.10 EUR:\ni have several python application in 'production'-state. our company use java, c++ and python. we develop with the eclipse ide (pydev for python)\nunittests are the key-solution for the problem. (also for c++ and java)\nthe less secure world of \"dynamic-typing\" will make you less careless about your code quality\nBY THE WAY:\nlarge scale development doesn't mean, that you use one single language!\nlarge scale development often uses a handful of languages specific to the problem.\nso i agree to the-hammer-problem :-)\n\nPS: static-typing & python\n", "Here are some items that have helped me maintain a fairly large system in python.\n\nStructure your code in layers. i.e separate biz logic, presentation logic and your persistence layers. Invest a bit of time in defining these layers and make sure everyone on the project is brought in. For large systems creating a framework that forces you into a certain way of development can be key as well.\nTests are key, without unit tests you will likely end up with an unmanagable code base several times quicker than with other languages. Keep in mind that unit tests are often not sufficient, make sure to have several integration/acceptance tests you can run quickly after any major change.\nUse Fail Fast principle. 
Add assertions for cases where you feel your code may be vulnerable.\nHave standard logging/error handling that will help you quickly navigate to the issue.\nUse an IDE (PyDev works for me) that provides type-ahead and pyLint/Checker integration to help you detect common typos right away and promote some coding standards.\nBe careful about your imports: never do from x import *, and don't do relative imports without use of .\nDo refactor; a search/replace tool with regular expressions is often all you need to do move-method/rename-class type refactorings. \n\n", "Incompatible changes to the signature of a method. This doesn't happen as much in Python as it does in Java and C++.\nPython has optional arguments, default values, and far more flexibility in defining method signatures. Also, duck typing means that -- for example -- you don't have to switch from some class to an interface as part of a significant software change. Things just aren't as complex.\nHow do you find all the places where that method is being called? grep works for dynamic languages. If you need to know every place a method is used, grep (or an equivalent IDE-supported search) works great.\nHow do you find out what operations an instance provides, since you don't have a static type to look up?\na. Look at the source. You don't have the Java/C++ problem of object libraries and jar files to contend with. You don't need all the elaborate aids and tools that those languages require.\nb. An IDE can provide signature information under many common circumstances. You can, easily, defeat your IDE's reasoning powers. When that happens, you should probably review what you're doing to be sure it makes sense. If your IDE can't reason out your type information, perhaps it's too dynamic.\nc. In Python, you often work through the interactive interpreter. Unlike Java and C++, you can explore your instances directly and interactively. You don't need a sophisticated IDE.\nExample:\n >>> x = SomeClass()\n >>> dir(x)\n\nHow do you handle/prevent typing errors? Same as static languages: you don't prevent them. You find and correct them. Java can only find a certain class of typos. If you have two similar class or variable names, you can wind up in deep trouble, even with static type checking.\nExample:\nclass MyClass { }\nclass MyClassx extends MyClass { }\n\nA typo with these two class names can cause havoc. [\"But I wouldn't put myself in that position with Java,\" folks say. Agreed. I wouldn't put myself in that position with Python, either; you make classes that are profoundly different, and they will fail early if they're misused.]\nAre unit tests used as a substitute for static type checking? Here's the other point of view: static type checking is a substitute for clear, simple design.\nI've worked with programmers who weren't sure why an application worked. They couldn't figure out why things didn't compile; they didn't know the difference between an abstract superclass and an interface, and they couldn't figure out why a change in one place makes a bunch of other modules in a separate JAR file crash. The static type checking gave them false confidence in a flawed design.\nDynamic languages allow programs to be simple. Simplicity is a substitute for static type checking. Clarity is a substitute for static type checking.\n", "My general rule of thumb is to use dynamic languages for small non-mission-critical projects and statically-typed languages for big projects. I find that code written in a dynamic language such as Python gets \"tangled\" more quickly. 
Partly that is because it is much quicker to write code in a dynamic language, and that leads to shortcuts and worse design, at least in my case. Partly it's because I have IntelliJ for quick and easy refactoring when I use Java, which I don't have for Python.\n", "The usual answer to that is testing, testing, testing. You're supposed to have an extensive unit test suite and run it often, particularly before a new version goes online.\nProponents of dynamically typed languages make the case that you have to test anyway, because even in a statically typed language conformance to the crude rules of the type system covers only a small part of what can potentially go wrong.\n" ]
[ 69, 40, 24, 16, 16, 8, 3, 2 ]
[]
[]
[ "development_environment", "python" ]
stackoverflow_0000236407_development_environment_python.txt
Q: How to disable HTML encoding when using Context in django In my django application I am using a template to construct an email body; one of the parameters is a URL. Note there are two parameters separated by an ampersand in the URL.
t = loader.get_template("sometemplate")
c = Context({
    'foo': 'bar',
    'url': 'http://127.0.0.1/test?a=1&b=2',
})
print t.render(c)
After rendering it produces: http://127.0.0.1/test?a=1&amp;b=2 Note the ampersand is HTML encoded as "&amp;". One way around the problem is to pass each parameter separately to my template and construct the URL in the template; however, I'd like to avoid doing that. Is there a way to disable HTML encoding of context parameters, or at the very least avoid encoding of ampersands? A: To turn it off for a single variable, use mark_safe:
from django.utils.safestring import mark_safe

t = loader.get_template("sometemplate")
c = Context({
    'foo': 'bar',
    'url': mark_safe('http://127.0.0.1/test?a=1&b=2'),
})
print t.render(c)
Alternatively, to totally turn autoescaping off from your Python code, use the autoescape argument when initialising a Context:
c = Context({
    'foo': 'bar',
    'url': 'http://127.0.0.1/test?a=1&b=2',
}, autoescape=False)
The How to turn [Automatic HTML escaping] off section of the documentation covers some of the in-template options if you'd rather do it there. A: Or just use the "safe" filter in your template. Also, I cannot stress enough how important it is to be familiar with Django's documentation; many common questions like this have easy-to-find answers and explanations (like this one), and reading through the docs and getting a feel for how everything works will drastically decrease the amount of time you need to spend asking "why did it do that" and increase the amount of time you spend building things that work the way you want.
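For completeness, the two in-template routes mentioned in the answers would look roughly like this (a minimal sketch using Template directly; both the safe filter and the autoescape tag are standard Django features, and the expected output assumes default autoescaping everywhere else):
from django.template import Template, Context

t = Template('{{ url|safe }}')
print t.render(Context({'url': 'http://127.0.0.1/test?a=1&b=2'}))
# -> http://127.0.0.1/test?a=1&b=2

t = Template('{% autoescape off %}{{ url }}{% endautoescape %}')
print t.render(Context({'url': 'http://127.0.0.1/test?a=1&b=2'}))
# -> http://127.0.0.1/test?a=1&b=2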
How to disable HTML encoding when using Context in django
In my django application I am using a template to construct an email body; one of the parameters is a URL. Note there are two parameters separated by an ampersand in the URL.
t = loader.get_template("sometemplate")
c = Context({
    'foo': 'bar',
    'url': 'http://127.0.0.1/test?a=1&b=2',
})
print t.render(c)
After rendering it produces: http://127.0.0.1/test?a=1&amp;b=2 Note the ampersand is HTML encoded as "&amp;". One way around the problem is to pass each parameter separately to my template and construct the URL in the template; however, I'd like to avoid doing that. Is there a way to disable HTML encoding of context parameters, or at the very least avoid encoding of ampersands?
[ "To turn it off for a single variable, use mark_safe:\nfrom django.utils.safestring import mark_safe\n\nt = loader.get_template(\"sometemplate\")\nc = Context({\n 'foo': 'bar',\n 'url': mark_safe('http://127.0.0.1/test?a=1&b=2'),\n})\nprint t.render(c)\n\nAlternatively, to totally turn autoescaping off from your Python code, use the autoescape argument when initialising a Context:\nc = Context({\n 'foo': 'bar',\n 'url': 'http://127.0.0.1/test?a=1&b=2',\n}, autoescape=False)\n\nThe How to turn [Automatic HTML escaping] off section of the documentation covers some of the in-template options if you'd rather do it there.\n", "Or just use the \"safe\" filter in your template.\nAlso, I cannot stress enough how important it is to be familiar with Django's documentation; many common questions like this have easy-to-find answers and explanations (like this one), and reading through the docs and getting a feel for how everything works will drastically decrease the amount of time you need to spend ask \"why did it do that\" and increase the amount of time you spend building things that work the way you want.\n" ]
[ 24, 9 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0000237235_django_django_templates_python.txt
Q: Daylight savings time change affecting the outcome of saving and loading an icalendar file? I have some unit tests that started failing today after a switch in daylight savings time. We're using the iCalendar python module to load and save ics files. The following script is a simplified version of our test. The script works fine in 'summer' and fails in 'winter', as of this morning. The failure can be reproduced by setting the clock back manually. Here's the output of the script:
[root@ana icalendar]# date 10250855
Sat Oct 25 08:55:00 CEST 2008
[root@ana icalendar]# python dst.py
DTSTART should represent datetime.datetime(2015, 4, 4, 8, 0, tzinfo=tzfile('/usr/share/zoneinfo/Europe/Brussels')) Brussels time
DTSTART should represent datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x956b5cc>) UTC
DTSTART represents datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x956b5cc>) Brussels time
[root@ana icalendar]# date 10260855
Sun Oct 26 08:55:00 CET 2008
[root@ana icalendar]# python dst.py
DTSTART should represent datetime.datetime(2015, 4, 4, 8, 0, tzinfo=tzfile('/usr/share/zoneinfo/Europe/Brussels')) Brussels time
DTSTART should represent datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) UTC
DTSTART represents datetime.datetime(2015, 4, 4, 7, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) Brussels time
Traceback (most recent call last):
  File "dst.py", line 58, in <module>
    start.dt, startUTCExpected)
AssertionError: calendar's datetime.datetime(2015, 4, 4, 7, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) != expected datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>)
And here is the whole script. So, questions: - why would my current time (and which part of DST I'm in) affect the loading/saving/parsing of timestamps? I would expect it not to. - how would you unit test this kind of bug, if it is a bug? Obviously, I don't want my unit tests to reset the clock on my computer. A: Without looking at your code (and the quoted test-run script, which my brain fails to understand right now), I notice that you try to get a time that is in a different timezone than the one you are at. (Think of DST as another TIMEZONE instead of +-1 hour from the current timezone.) This could (depending on how you do it) lead to a gain or loss of hours. (Like when you're flying: you start at one time and get to your location before you started, all in local time.)
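One way to make such an assertion independent of the machine's clock and DST state is to build both sides of the comparison from explicit tzinfo objects instead of the local zone, e.g. with pytz (a sketch; pytz here is an assumption — the traceback suggests the original script uses dateutil's tzfile, which would work the same way):
from datetime import datetime
import pytz

brussels = pytz.timezone('Europe/Brussels')
expected_local = brussels.localize(datetime(2015, 4, 4, 8, 0))
expected_utc = expected_local.astimezone(pytz.utc)
print expected_utc  # 2015-04-04 06:00:00+00:00, whatever the host clock says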
Daylight savings time change affecting the outcome of saving and loading an icalendar file?
I have some unit tests that started failing today after a switch in daylight savings time. We're using the iCalendar python module to load and save ics files. The following script is a simplified version of our test. The script works fine in 'summer' and fails in 'winter', as of this morning. The failure can be reproduced by setting the clock back manually. Here's the output of the script:
[root@ana icalendar]# date 10250855
Sat Oct 25 08:55:00 CEST 2008
[root@ana icalendar]# python dst.py
DTSTART should represent datetime.datetime(2015, 4, 4, 8, 0, tzinfo=tzfile('/usr/share/zoneinfo/Europe/Brussels')) Brussels time
DTSTART should represent datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x956b5cc>) UTC
DTSTART represents datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x956b5cc>) Brussels time
[root@ana icalendar]# date 10260855
Sun Oct 26 08:55:00 CET 2008
[root@ana icalendar]# python dst.py
DTSTART should represent datetime.datetime(2015, 4, 4, 8, 0, tzinfo=tzfile('/usr/share/zoneinfo/Europe/Brussels')) Brussels time
DTSTART should represent datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) UTC
DTSTART represents datetime.datetime(2015, 4, 4, 7, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) Brussels time
Traceback (most recent call last):
  File "dst.py", line 58, in <module>
    start.dt, startUTCExpected)
AssertionError: calendar's datetime.datetime(2015, 4, 4, 7, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>) != expected datetime.datetime(2015, 4, 4, 6, 0, tzinfo=<icalendar.prop.UTC object at 0x96615cc>)
And here is the whole script. So, questions: - why would my current time (and which part of DST I'm in) affect the loading/saving/parsing of timestamps? I would expect it not to. - how would you unit test this kind of bug, if it is a bug? Obviously, I don't want my unit tests to reset the clock on my computer.
[ "Without looking at your code (and the quoted test-run-script my brain fails to understand right now)\nI notice that you try to get a time that is in a different timezone than the one you are at.\n(Think of DST as a another TIMEZONE instead of +-1 hour from current timezone). \nThis could (depending on how you do it) lead to a gain or loss of hours.\n(Like when your flying, you start at one time and getting to your location before you started, all in local time)\n" ]
[ 1 ]
[]
[]
[ "icalendar", "python", "unit_testing" ]
stackoverflow_0000237731_icalendar_python_unit_testing.txt
Q: What is your convention to distinguish between object methods to be called by the outside, and object methods to be called by a subclass? I know most of the ins and outs of Python's approach to private variables/members/functions/... However, I can't make up my mind on how to distinguish between methods for external use and methods for subclassing use. Consider the following example:
class EventMixin(object):
    def subscribe(self, **kwargs):
        '''kwargs should be a dict of event -> callable, to be specialized in the subclass'''

    def event(self, name, *args, **kwargs):
        ...

    def _somePrivateMethod(self):
        ...
In this example, I want to make it clear that subscribe is a method to be used by external users of the class/object, while event is a method that should not be called from the outside, but rather by subclass implementations. Right now, I consider both part of the public API, hence don't use any underscores. However, for this particular situation, it would feel cleaner to, for example, use no underscores for the external API, one underscore for the subclassable API, and two underscores for the private/internal API. However, that would become unwieldy, because then the internal API would need to be invoked as self._EventMixin__somePrivateMethod(). So, what are your conventions, coding-wise, documentation-wise, or otherwise? A: use no underscores for the external API, one underscore for the subclassable API, and two underscores for the private/internal API This is a reasonable and relatively common way of doing it, yes. The double-underline-for-actually-private (as opposed to ‘protected’ in C++ terms) is in practice pretty rare. You never really know what behaviours a subclass might want to override, so assuming ‘protected’ is generally a good bet unless there's a really good reason why messing with a member might be particularly dangerous. However, that would become unwieldy because then the internal API would need to be invoked as self._EventMixin__somePrivateMethod() Nope, you can just use the double-underlined version and it will be munged automatically. It's ugly but it works. A: I generally find using double __ to be more trouble than it is worth, as it makes unit testing very painful. Using a single _ as a convention for methods/attributes that are not intended to be part of the public interface of a particular class/module is my preferred approach. A: I'd like to make the suggestion that when you find yourself encountering this kind of distinction, it may be a good idea to consider using composition instead of inheritance; in other words, instantiating EventMixin (presumably the name would change) instead of inheriting it.
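The automatic munging the first answer refers to can be seen directly in the interpreter (a minimal sketch, reusing the question's class name):
class EventMixin(object):
    def __somePrivateMethod(self):
        return 42
    def event(self, name):
        # inside the class body this name is rewritten to
        # _EventMixin__somePrivateMethod automatically
        return self.__somePrivateMethod()

e = EventMixin()
print e.event('x')                        # 42
print e._EventMixin__somePrivateMethod()  # 42, via the munged external name
# e.__somePrivateMethod() would raise AttributeError from outside the class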
What is your convention to distinguish between object methods to be called by the outside, and object methods to be called by a subclass?
I know most of the ins and outs of Python's approach to private variables/members/functions/... However, I can't make up my mind on how to distinguish between methods for external use and methods for subclassing use. Consider the following example:
class EventMixin(object):
    def subscribe(self, **kwargs):
        '''kwargs should be a dict of event -> callable, to be specialized in the subclass'''

    def event(self, name, *args, **kwargs):
        ...

    def _somePrivateMethod(self):
        ...
In this example, I want to make it clear that subscribe is a method to be used by external users of the class/object, while event is a method that should not be called from the outside, but rather by subclass implementations. Right now, I consider both part of the public API, hence don't use any underscores. However, for this particular situation, it would feel cleaner to, for example, use no underscores for the external API, one underscore for the subclassable API, and two underscores for the private/internal API. However, that would become unwieldy, because then the internal API would need to be invoked as self._EventMixin__somePrivateMethod(). So, what are your conventions, coding-wise, documentation-wise, or otherwise?
[ "use no underscores for the external API,\none underscore for the subclassable API,\nand two underscores for the private/internal API\n\nThis is a reasonable and relatively common way of doing it, yes. The double-underline-for-actually-private (as opposed to ‘protected’ in C++ terms) is in practice pretty rare. You never really know what behaviours a subclass might want to override, so assuming ‘protected’ is generally a good bet unless there's a really good reason why messing with a member might be particularly dangerous.\nHowever, that would become unwieldy because then the internal API would\nneed to be invoked as self._EventMixin__somePrivateMethod()\n\nNope, you can just use the double-underlined version and it will be munged automatically. It's ugly but it works.\n", "I generally find using double __ to be more trouble that they are worth, as it makes unit testing very painful. using single _ as convention for methods/attributes that are not intended to be part of the public interface of a particular class/module is my preferred approach. \n", "I'd like to make the suggestion that when you find yourself encountering this kind of distinction, it may be a good idea to consider using composition instead of inheritance; in other words, instantiating EventMixin (presumably the name would change) instead of inheriting it.\n" ]
[ 3, 2, 2 ]
[]
[]
[ "private", "python", "subclass" ]
stackoverflow_0000236359_private_python_subclass.txt
Q: Splitting strings in python I have a string which is like this:
this is [bracket test] "and quotes test "
I'm trying to write something in Python to split it up by space while ignoring spaces within square braces and quotes. The result I'm looking for is:
['this','is','bracket test','and quotes test ']
A: Here's a simplistic solution that works with your test input:
import re
re.findall('\[[^\]]*\]|\"[^\"]*\"|\S+',s)
This will return any token that matches either
an open bracket followed by zero or more non-close-bracket characters followed by a close bracket,
a double-quote followed by zero or more non-quote characters followed by a quote,
any group of non-whitespace characters
This works with your example, but might fail for many real-world strings you may encounter. For example, you didn't say what you expect with unbalanced brackets or quotes, or how you want single quotes or escape characters to work. For simple cases, though, the above might be good enough. A: To complete Bryan's post and match the answer exactly:
>>> import re
>>> txt = 'this is [bracket test] "and quotes test "'
>>> [x[1:-1] if x[0] in '["' else x for x in re.findall('\[[^\]]*\]|\"[^\"]*\"|\S+', txt)]
['this', 'is', 'bracket test', 'and quotes test ']
Don't misunderstand the whole syntax used: this is not several statements on a single line but a single functional statement (more bugproof). A: Here's a simplistic parser (tested against your example input) that introduces the State design pattern. In the real world, you probably want to build a real parser using something like PLY.
class SimpleParser(object):

    def __init__(self):
        self.mode = None
        self.result = None

    def parse(self, text):
        self.initial_mode()
        self.result = []
        for word in text.split(' '):
            self.mode.handle_word(word)
        return self.result

    def initial_mode(self):
        self.mode = InitialMode(self)

    def bracket_mode(self):
        self.mode = BracketMode(self)

    def quote_mode(self):
        self.mode = QuoteMode(self)


class InitialMode(object):

    def __init__(self, parser):
        self.parser = parser

    def handle_word(self, word):
        if word.startswith('['):
            self.parser.bracket_mode()
            self.parser.mode.handle_word(word[1:])
        elif word.startswith('"'):
            self.parser.quote_mode()
            self.parser.mode.handle_word(word[1:])
        else:
            self.parser.result.append(word)


class BlockMode(object):

    end_marker = None

    def __init__(self, parser):
        self.parser = parser
        self.result = []

    def handle_word(self, word):
        if word.endswith(self.end_marker):
            self.result.append(word[:-1])
            self.parser.result.append(' '.join(self.result))
            self.parser.initial_mode()
        else:
            self.result.append(word)

class BracketMode(BlockMode):
    end_marker = ']'

class QuoteMode(BlockMode):
    end_marker = '"'
A: Here's a more procedural approach:
#!/usr/bin/env python

a = 'this is [bracket test] "and quotes test "'

words = a.split()
wordlist = []

while True:
    try:
        word = words.pop(0)
    except IndexError:
        break
    if word[0] in '"[':
        buildlist = [word[1:]]
        while True:
            try:
                word = words.pop(0)
            except IndexError:
                break
            if word[-1] in '"]':
                buildlist.append(word[:-1])
                break
            buildlist.append(word)
        wordlist.append(' '.join(buildlist))
    else:
        wordlist.append(word)

print wordlist
A: Well, I've encountered this problem quite a few times, which led me to write my own system for parsing any kind of syntax. The result of this can be found here; note that this may be overkill, and it will provide you with something that lets you parse statements with both brackets and parentheses, single and double quotes, as nested as you want.
For example, you could parse something like this (example written in Common Lisp):
(defun hello_world (&optional (text "Hello, World!"))
  (format t text))
You can use nesting, brackets (square) and parentheses (round), single- and double-quoted strings, and it's very extensible.
The idea is basically a configurable implementation of a Finite State Machine which builds up an abstract syntax tree character-by-character. I recommend you look at the source code (see link above), so that you can get an idea of how to do it. It's doable via regular expressions, but try writing a system using REs and then trying to extend it (or even understand it) later.
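As an aside, the quoting half of this problem is already handled by the standard library's shlex module (a sketch; it honors quotes but has no notion of square brackets, so it only covers part of the question):
import shlex
print shlex.split('this is "and quotes test "')
# ['this', 'is', 'and quotes test ']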
Splitting strings in python
I have a string which is like this: this is [bracket test] "and quotes test " I'm trying to write something in Python to split it up by space while ignoring spaces within square braces and quotes. The result I'm looking for is: ['this','is','bracket test','and quotes test ']
[ "Here's a simplistic solution that works with your test input:\nimport re\nre.findall('\\[[^\\]]*\\]|\\\"[^\\\"]*\\\"|\\S+',s)\n\nThis will return any code that matches either \n\na open bracket followed by zero or more non-close-bracket characters followed by a close bracket, \na double-quote followed by zero or more non-quote characters followed by a quote,\nany group of non-whitespace characters\n\nThis works with your example, but might fail for many real-world strings you may encounter. For example, you didn't say what you expect with unbalanced brackets or quotes,or how you want single quotes or escape characters to work. For simple cases, though, the above might be good enough.\n", "To complete Bryan post and match exactly the answer :\n>>> import re\n>>> txt = 'this is [bracket test] \"and quotes test \"'\n>>> [x[1:-1] if x[0] in '[\"' else x for x in re.findall('\\[[^\\]]*\\]|\\\"[^\\\"]*\\\"|\\S+', txt)]\n['this', 'is', 'bracket test', 'and quotes test ']\n\nDon't misunderstand the whole syntax used : This is not several statments on a single line but a single functional statment (more bugproof).\n", "Here's a simplistic parser (tested against your example input) that introduces the State design pattern.\nIn real world, you probably want to build a real parser using something like PLY.\nclass SimpleParser(object):\n\n def __init__(self):\n self.mode = None\n self.result = None\n\n def parse(self, text):\n self.initial_mode()\n self.result = []\n for word in text.split(' '):\n self.mode.handle_word(word)\n return self.result\n\n def initial_mode(self):\n self.mode = InitialMode(self)\n\n def bracket_mode(self):\n self.mode = BracketMode(self)\n\n def quote_mode(self):\n self.mode = QuoteMode(self)\n\n\nclass InitialMode(object):\n\n def __init__(self, parser):\n self.parser = parser\n\n def handle_word(self, word):\n if word.startswith('['):\n self.parser.bracket_mode()\n self.parser.mode.handle_word(word[1:])\n elif word.startswith('\"'):\n self.parser.quote_mode()\n self.parser.mode.handle_word(word[1:])\n else:\n self.parser.result.append(word)\n\n\nclass BlockMode(object):\n\n end_marker = None\n\n def __init__(self, parser):\n self.parser = parser\n self.result = []\n\n def handle_word(self, word):\n if word.endswith(self.end_marker):\n self.result.append(word[:-1])\n self.parser.result.append(' '.join(self.result))\n self.parser.initial_mode()\n else:\n self.result.append(word)\n\nclass BracketMode(BlockMode):\n end_marker = ']'\n\nclass QuoteMode(BlockMode):\n end_marker = '\"'\n\n", "Here's a more procedural approach:\n#!/usr/bin/env python\n\na = 'this is [bracket test] \"and quotes test \"'\n\nwords = a.split()\nwordlist = []\n\nwhile True:\n try:\n word = words.pop(0)\n except IndexError:\n break\n if word[0] in '\"[':\n buildlist = [word[1:]]\n while True:\n try:\n word = words.pop(0)\n except IndexError:\n break\n if word[-1] in '\"]':\n buildlist.append(word[:-1])\n break\n buildlist.append(word)\n wordlist.append(' '.join(buildlist))\n else:\n wordlist.append(word)\n\nprint wordlist\n\n", "Well, I've encountered this problem quite a few times, which led me to write my own system for parsing any kind of syntax.\nThe result of this can be found here; note that this may be overkill, and it will provide you with something that lets you parse statements with both brackets and parentheses, single and double quotes, as nested as you want. 
For example, you could parse something like this (example written in Common Lisp):\n(defun hello_world (&optional (text \"Hello, World!\"))\n (format t text))\n\nYou can use nesting, brackets (square) and parentheses (round), single- and double-quoted strings, and it's very extensible.\nThe idea is basically a configurable implementation of a Finite State Machine which builds up an abstract syntax tree character-by-character. I recommend you look at the source code (see link above), so that you can get an idea of how to do it. It's doable via regular expressions, but try writing a system using REs and then trying to extend it (or even understand it) later.\n" ]
[ 8, 5, 1, 0, 0 ]
[ "Works for quotes only. \nrrr = []\nqqq = s.split('\\\"')\n[ rrr.extend( qqq[x].split(), [ qqq[x] ] )[ x%2]) for x in range( len( qqq ) )]\nprint rrr\n\n" ]
[ -2 ]
[ "parsing", "python", "split", "string", "tokenize" ]
stackoverflow_0000234512_parsing_python_split_string_tokenize.txt
Q: Is there a windows implementation to python libsvn? Because windows is case-insensitive, and because SVN is case-sensitive, and because VS2005 tends to rename files giving them the lower-case form, which messes up my repositories' history, I've tried to add the pre-commit hook script from http://svn.collab.net/repos/svn/trunk/contrib/hook-scripts/case-insensitive.py. Sure enough, the script uses classes from python's libsvn ("from svn import repos, fs") which I fail to find compiled for Windows. Is there an alternative? To libsvn or to the hook script? A: There are two alternative Python bindings for libsvn: pysvn. subvertpy. Subvertpy is quite new and is written by the author of bzr-svn: the transparent svn inter-operation bridge for bzr. For a while, bzr-svn used the upstream SWIG Python bindings, and the author contributed a lot of bug fixes. It helped move the upstream python support from "horribly broken" to "painfully aggravating and unpythonic". So after wasting too many hours of his life on SWIG, the author decided to make his own bindings. A: The Tigris.org's pre-compiled python bindings for libsvn are a separate download. The latest as of Oct 27 can be found here. There are other binary SVN distributions listed here, and they probably have different policies for bundling the python bindings.
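For a sense of what the pysvn binding mentioned above looks like in use, a trivial sketch (the working-copy path is hypothetical, and pysvn's Client/info API is the assumed entry point):
import pysvn

client = pysvn.Client()
entry = client.info(r'C:\path\to\working\copy')  # hypothetical path
print entry.url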
Is there a windows implementation to python libsvn?
Because windows is case-insensitive, and because SVN is case-sensitive, and because VS2005 tends to rename files giving them the lower-case form, which messes up my repositories' history, I've tried to add the pre-commit hook script from http://svn.collab.net/repos/svn/trunk/contrib/hook-scripts/case-insensitive.py. Sure enough, the script uses classes from python's libsvn ("from svn import repos, fs") which I fail to find compiled for Windows. Is there an alternative? To libsvn or to the hook script?
[ "There are two alternative Python bindings for libsvn:\n\npysvn.\nsubvertpy. \n\nSubvertpy is quite new and is written by the author of bzr-svn: the transparent svn inter-operation bridge for bzr.\nFor a while, bzr-svn used the upstream SWIG Python bindings, and the author contributed a lot of bug fixes. It helped move the upstream python support for \"horribly broken\" to \"painfully aggravating and unpythonic\". So after wasting too many hours of his life to SWIG, the author decided to make his own bindings.\n", "The Tigris.org's pre-complied python bindings for libsvn are a separate download. The latest as of Oct 27 could be found here.\nThere are other binary SVN distributions listed here, and they probably have different policy for bundling the python bindings.\n" ]
[ 4, 3 ]
[]
[]
[ "hook", "pre_commit", "python", "svn" ]
stackoverflow_0000238151_hook_pre_commit_python_svn.txt
Q: Solving the shared-server security problem for Python So my group is trying to set up a shared-server environment for various and sundry web services. I think we've settled on setting disable_functions and disable_classes site wide in php.ini and php_admin_value to force open_basedir in each app's httpd.conf for php scripts, and passenger's user switching for ruby scripts. We still need to find something for python though. Passenger does support python, but not for per-application security for specific sub-directories (it's all or nothing at the domain level). Any suggestions? (And if any of the previous doesn't make sense - well, I'm the guy who's supposed to set up the python support, not the guy who set up the php or ruby support, so there's still some "and then some magic happens" steps in there from my perspective). A: Well, there is a system called virtualenv which allows you to run Python in a sort of safe environment, and configure/load/shutdown these environments on the fly. I don't know much about it, but you should take a serious look into it; here is the description from its web page (just Google it and you'll find it): The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.4/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded. Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application. Also, what if you can't install packages into the global site-packages directory? For instance, on a shared host. In all these cases, virtualenv can help you. It creates an environment that has its own installation directories, that doesn't share libraries with other virtualenv environments (and optionally doesn't use the globally installed libraries either).
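For scale, the per-application setup the virtualenv description implies is only a couple of commands (a sketch; paths are hypothetical, and --no-site-packages is the flag that isolates the environment from the global site-packages):
$ easy_install virtualenv
$ virtualenv --no-site-packages /srv/app1/env
$ /srv/app1/env/bin/python myapp.py
Note that this isolates dependencies rather than sandboxing untrusted code, so it complements, rather than replaces, OS-level permissions for per-application security.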
Solving the shared-server security problem for Python
So my group is trying to set up a shared-server environment for various and sundry web services. I think we've settled on setting disable_functions and disable_classes site wide in php.ini and php_admin_value to force open_basedir in each app's httpd.conf for php scripts, and passenger's user switching for ruby scripts. We still need to find something for python though. Passenger does support python, but not for per-application security for specific sub-directories (it's all or nothing at the domain level). Any suggestions? (And if any of the previous doesn't make sense - well, I'm the guy who's supposed to set up the python support, not the guy who set up the php or ruby support, so there's still some "and then some magic happens" steps in there from my perspective).
[ "Well, there is a system called virtualenv which allows you to run Python in a sort of safe environment, and configure/load/shutdown these environments on the fly. I don't know much about it, but you should take a serious look into it; here is the description from its web page (just Google it and you'll find it):\n\nThe basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.4/site-packages (or whatever your platform's standard location is), it's easy to end up in a situation where you unintentionally upgrade an application that shouldn't be upgraded.\nOr more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.\nAlso, what if you can't install packages into the global site-packages directory? For instance, on a shared host.\nIn all these cases, virtualenv can help you. It creates an environment that has its own installation directories, that doesn't share libraries with other virtualenv environments (and optionally doesn't use the globally installed libraries either).\n\n" ]
[ 3 ]
[]
[]
[ "mysql", "php", "python", "ruby", "security" ]
stackoverflow_0000234590_mysql_php_python_ruby_security.txt
Q: How can I support wildcards in user-defined search strings in Python? Is there a simple way to support wildcards ("*") when searching strings - without using RegEx? Users are supposed to enter search terms using wildcards, but should not have to deal with the complexity of RegEx: "foo*" => str.startswith("foo") "*foo" => str.endswith("foo") "*foo*" => "foo" in str (it gets more complicated when there are multiple search terms though, e.g. "foobarbaz") This seems like a common issue, so I wonder whether there's a ready-made solution for it. Any help would be greatly appreciated! A: You could try the fnmatch module, it's got a shell-like wildcard syntax.
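A concrete sketch of the fnmatch suggestion from the answer (fnmatch and fnmatch.translate are in the standard library; the sample terms come from the question):
import fnmatch

print fnmatch.fnmatch('foobar', 'foo*')   # True
print fnmatch.fnmatch('myfoo', '*foo')    # True
print fnmatch.fnmatch('xfooy', '*foo*')   # True

# when matching many candidate strings, translate the pattern to a regex once:
import re
matcher = re.compile(fnmatch.translate('*foo*'))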
How can I support wildcards in user-defined search strings in Python?
Is there a simple way to support wildcards ("*") when searching strings - without using RegEx? Users are supposed to enter search terms using wildcards, but should not have to deal with the complexity of RegEx: "foo*" => str.startswith("foo") "*foo" => str.endswith("foo") "*foo*" => "foo" in str (it gets more complicated when there are multiple search terms though, e.g. "foobarbaz") This seems like a common issue, so I wonder whether there's a ready-made solution for it. Any help would be greatly appreciated!
[ "You could try the fnmatch module, it's got a shell-like wildcard syntax.\n" ]
[ 14 ]
[]
[]
[ "parsing", "python", "search", "string", "wildcard" ]
stackoverflow_0000238600_parsing_python_search_string_wildcard.txt
Q: How do I install plpython on MacOs X 10.5? I have just installed PostgreSQL 8.3.4 on Mac OS X 10.5 (using ports), but I cannot figure out how to enable PL/Python. When I run CREATE LANGUAGE plpythonu I get the following errors:
ERROR: could not access file "$libdir/plpython": No such file or directory
STATEMENT: CREATE LANGUAGE plpythonu;
psql:<stdin>:18: ERROR: could not access file "$libdir/plpython": No such file or directory
How can I fix it? Ideally I would prefer to avoid compiling Postgres without ports or something like that. This is the output of running pg_config:
BINDIR = /opt/local/lib/postgresql83/bin
DOCDIR =
INCLUDEDIR = /opt/local/include/postgresql83
PKGINCLUDEDIR = /opt/local/include/postgresql83
INCLUDEDIR-SERVER = /opt/local/include/postgresql83/server
LIBDIR = /opt/local/lib/postgresql83
PKGLIBDIR = /opt/local/lib/postgresql83
LOCALEDIR =
MANDIR = /opt/local/share/man
SHAREDIR = /opt/local/share/postgresql83
SYSCONFDIR = /opt/local/etc/postgresql83
PGXS = /opt/local/lib/postgresql83/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/opt/local' '--sysconfdir=/opt/local/etc/postgresql83' '--bindir=/opt/local/lib/postgresql83/bin' '--libdir=/opt/local/lib/postgresql83' '--includedir=/opt/local/include/postgresql83' '--datadir=/opt/local/share/postgresql83' '--mandir=/opt/local/share/man' '--without-docdir' '--with-includes=/opt/local/include' '--with-libraries=/opt/local/lib' '--with-openssl' '--with-bonjour' '--with-readline' '--with-zlib' '--with-libxml' '--with-libxslt' '--enable-thread-safety' '--enable-integer-datetimes' '--with-ossp-uuid' 'CC=/usr/bin/gcc-4.0' 'CFLAGS=-O2' 'CPPFLAGS=-I/opt/local/include -I/opt/local/include/ossp' 'CPP=/usr/bin/cpp-4.0' 'LDFLAGS=-L/opt/local/lib'
CC = /usr/bin/gcc-4.0 -no-cpp-precomp
CPPFLAGS = -I/opt/local/include -I/opt/local/include/ossp -I/opt/local/include/libxml2 -I/opt/local/include
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
CFLAGS_SL =
LDFLAGS = -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib
LDFLAGS_SL =
LIBS = -lpgport -lxslt -lxml2 -lssl -lcrypto -lz -lreadline -lm
VERSION = PostgreSQL 8.3.4
(I've just switched from Linux to Mac a couple of days ago... In Ubuntu stuff like that used to just work, so I am pretty lost.) A: Silly me:
[lib/postgresql83] > variants postgresql83
postgresql83 has the variants:
universal
python: add support for python
krb5: add support for Kerberos 5 authentication
perl: add Perl support
(I'd had universal.) This means that you have to install the right variant of PostgreSQL to make your python functions fly.
$ sudo port install postgresql83 +python postgresql-server +python
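Once the +python variant is in place, registering and smoke-testing the language in psql looks roughly like this (a sketch following the standard PL/Python documentation; pymax is just an illustrative function name):
CREATE LANGUAGE plpythonu;
CREATE FUNCTION pymax (a integer, b integer) RETURNS integer AS $$
  if a > b:
    return a
  return b
$$ LANGUAGE plpythonu;
SELECT pymax(1, 2);  -- 2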
How do I install plpython on MacOs X 10.5?
I have just installed PostgreSQL 8.3.4 on Mac OS X 10.5 (using ports), but I cannot figure out how to enable PL/Python. When I run CREATE LANGUAGE plpythonu I get the following errors:
ERROR: could not access file "$libdir/plpython": No such file or directory
STATEMENT: CREATE LANGUAGE plpythonu;
psql:<stdin>:18: ERROR: could not access file "$libdir/plpython": No such file or directory
How can I fix it? Ideally I would prefer to avoid compiling Postgres without ports or something like that. This is the output of running pg_config:
BINDIR = /opt/local/lib/postgresql83/bin
DOCDIR =
INCLUDEDIR = /opt/local/include/postgresql83
PKGINCLUDEDIR = /opt/local/include/postgresql83
INCLUDEDIR-SERVER = /opt/local/include/postgresql83/server
LIBDIR = /opt/local/lib/postgresql83
PKGLIBDIR = /opt/local/lib/postgresql83
LOCALEDIR =
MANDIR = /opt/local/share/man
SHAREDIR = /opt/local/share/postgresql83
SYSCONFDIR = /opt/local/etc/postgresql83
PGXS = /opt/local/lib/postgresql83/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/opt/local' '--sysconfdir=/opt/local/etc/postgresql83' '--bindir=/opt/local/lib/postgresql83/bin' '--libdir=/opt/local/lib/postgresql83' '--includedir=/opt/local/include/postgresql83' '--datadir=/opt/local/share/postgresql83' '--mandir=/opt/local/share/man' '--without-docdir' '--with-includes=/opt/local/include' '--with-libraries=/opt/local/lib' '--with-openssl' '--with-bonjour' '--with-readline' '--with-zlib' '--with-libxml' '--with-libxslt' '--enable-thread-safety' '--enable-integer-datetimes' '--with-ossp-uuid' 'CC=/usr/bin/gcc-4.0' 'CFLAGS=-O2' 'CPPFLAGS=-I/opt/local/include -I/opt/local/include/ossp' 'CPP=/usr/bin/cpp-4.0' 'LDFLAGS=-L/opt/local/lib'
CC = /usr/bin/gcc-4.0 -no-cpp-precomp
CPPFLAGS = -I/opt/local/include -I/opt/local/include/ossp -I/opt/local/include/libxml2 -I/opt/local/include
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
CFLAGS_SL =
LDFLAGS = -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib
LDFLAGS_SL =
LIBS = -lpgport -lxslt -lxml2 -lssl -lcrypto -lz -lreadline -lm
VERSION = PostgreSQL 8.3.4
(I've just switched from Linux to Mac a couple of days ago... In Ubuntu stuff like that used to just work, so I am pretty lost.)
[ "Silly me:\n[lib/postgresql83] > variants postgresql83\n postgresql83 has the variants:\n universal\n python: add support for python\n krb5: add support for Kerberos 5 authentication\n perl: add Perl support\n\n(I'd had universal.)\nThis means that you have to install the right variant of PostgreSQL to make your python functions fly.\n$ sudo port install postgresql83 +python postgresql-server +python\n\n" ]
[ 3 ]
[]
[]
[ "macos", "postgresql", "python" ]
stackoverflow_0000238882_macos_postgresql_python.txt
Q: Refactoring "to hit" values for a game I'm making a game and one of the methods calculates a character's base hit numbers based on skill values. The method currently calculates each value individually, since each skill can be used at short, medium, and long range. I originally thought I could combine the skills into a tuple and iterate over it, dynamically creating each hit number. But I don't know if it's actually possible, since I currently have each hit number assigned to it's own variable. I also thought about creating a method for each range, and passing the tuple as an argument. I could create a new tuple or list with the resulting values and then assign them to the individual variables, but I don't see how it would be any better than do it this way, except that it won't look so copy & pasted. Here's what I currently have: def calcBaseHitNumbers(self, dict): """Calculate character's base hit numbers depending on skill level.""" self.skill_dict = dict self.rifle = self.skill_dict.get('CRM', 0) self.pistol = self.skill_dict.get('PST', 0) self.big_gun = self.skill_dict.get('LCG', 0) self.heavy_weapon = self.skill_dict.get('HW', 0) self.bow = self.skill_dict.get('LB', 0) #self.skill_tuple = (self.rifle, self.pistol, self.big_gun, self.heavy_weapon, # self.bow) #---Short range ## for skill in self.skill_tuple: ## self.base_hit_short = skill * 0.6 self.charAttribs.bhCRM_short = self.rifle * 0.6 self.charAttribs.bhPST_short = self.pistol * 0.6 self.charAttribs.bhHW_short = self.heavy_weapon * 0.6 self.charAttribs.bhLCG_short = self.big_gun * 0.6 self.charAttribs.bhLB_short = self.bow * 0.6 #---Med range self.charAttribs.bhCRM_med = self.rifle * 0.3 self.charAttribs.bhPST_med = self.pistol * 0.3 self.charAttribs.bhHW_med = self.heavy_weapon * 0.3 self.charAttribs.bhLCG_med = self.big_gun * 0.3 self.charAttribs.bhLB_med = self.bow * 0.3 #---Long range self.charAttribs.bhCRM_long = self.rifle * 0.1 self.charAttribs.bhPST_long = self.pistol * 0.1 self.charAttribs.bhHW_long = self.heavy_weapon * 0.1 self.charAttribs.bhLCG_long = self.big_gun * 0.1 self.charAttribs.bhLB_long = self.bow * 0.1 How would you refactor this so it's more dynamic? Edit: I guess what I want to do is something like this: Have a tuple (like the one I commented out) and iterate over it 3 times, each time making a new value (for each skill) based on the modifier for each particular range. The resulting value is then automatically assigned to it's respective variable. In my head, it makes sense. But when I actually try to code it, I get lost. The problem, I think, is that this is the first "real" program I've written; all I've done before are small scripts. This is only the 0.1 version of my program, so it's not critical to refactor it now. However, it seems very un-Pythonic to do this manually and I also want to "future-proof" this in case things change down the road. A: It feels like what you really want is a class representing the weapon, with attributes to handle the base values and calculate hit values with various modifiers. 
Here's a simple example:
SHORT_RANGE = 'S'
MEDIUM_RANGE = 'M'
LONG_RANGE = 'L'
SHORT_RANGE_MODIFIER = 0.6
MEDIUM_RANGE_MODIFIER = 0.3
LONG_RANGE_MODIFIER = 0.1

class Weapon(object):
    def __init__(self, code_name, full_name, base_hit_value,
                 short_range_modifier=None, medium_range_modifier=None,
                 long_range_modifier=None):
        self.code_name, self.full_name = code_name, full_name
        self.base_hit_value = base_hit_value
        self.range_modifiers = {
            SHORT_RANGE: short_range_modifier or SHORT_RANGE_MODIFIER,
            MEDIUM_RANGE: medium_range_modifier or MEDIUM_RANGE_MODIFIER,
            LONG_RANGE: long_range_modifier or LONG_RANGE_MODIFIER,
        }

    def hit_value(self, range, modifier=1):
        return self.base_hit_value * self.range_modifiers[range] * modifier
From there, you might create instances of Weapon inside your Character object like so:
self.rifle = Weapon('CRM', 'rifle', 5)
self.pistol = Weapon('PST', 'pistol', 10)
And then if, say, the character fires the pistol at short range:
hit_value = self.pistol.hit_value(SHORT_RANGE)
The extra argument to the hit_value() method can be used to pass in character- or situation-specific modifications. Of course, the next step beyond this would be to directly model the weapons as subclasses of Weapon (perhaps breaking down into specific types of weapons, like guns, bows, grenades, etc., each with their own base values) and add an Inventory class to represent the weapons a character is carrying. All of this is pretty standard, boring object-oriented design procedure, but for plenty of situations this type of thinking will get you off the ground quickly and provide at least a little bit of basic flexibility. A: Let's see if I understand your scenario: each weapon has its own distinct hit point, so a rifle may have 1, a heavy weapon may have 2, etc. Then each character has a short, medium and long value to be multiplied by the hit point of the weapon. You should consider using a Strategy design. That is, create a weapon superclass with a hit point property. Create subclass weapons for rifle, pistol, bow, etc. I am sure that the differences between the weapons are more than just the hit points. Then the Character has one or more weapons depending on your gameplay. Calculating the hit point for a particular weapon is as simple as
current_weapon * self.medium
If you decide to add more weapons later on, then you do not have to edit your Character code, because your character can handle any weapon. In pseudo-Python:
class Weapon
    hit = 1
    #other properties of weapon

class Rifle(Weapon)
    #other properties of Rifle

class Pistol(Weapon)
    #other properties of Pistol

class Character
    weapon = Rifle()
    long = 0.6
    def calcHit()
        return self.long * weapon.hit

john = Character()
john.weapon = Rifle()
john.calcHit
A: @Vinko: perhaps make calcBaseHitNumbers do the "if not self.calculatedBase:" check internally, and just no-op if it's been done before. That said, I can't see the pressing need for precalculating this information. But I'm no Python performance expert. A: What sense of dynamic do you mean? What is likely to vary - the number of skills, or the weighting factors, the number of ranges (short, med, long), or all of these? What happens to the (e.g.) bhPST_* values afterwards - do they get combined into one number? One thing that leaps out is that the list of skills is hardwired in the code - I would be inclined to replace the bh variables with a method. So (please take into account I don't know the first thing about Python :) )
def bh_short(self, key)
    skill = self.skill_dict.get(key, 0)
    return skill * 0.6
Now you can keep a list of skills that contribute to hit points and iterate over that calling bh_short etc. Possibly also pass the range (long, med, short) into the function, or return all three values - this all depends on what you're going to do next with the calculated hit points. Basically, we need more information about the context this is to be used in. A: I would have a class for the character's attributes (so you don't have heaps of things in the character class) and a class for a weapon's attributes:
class WeaponAttribute(object):

    short_mod = 0.6
    med_mod = 0.3
    long_mod = 0.1

    def __init__(self, base):
        self.base = base

    @property
    def short(self):
        return self.base * self.short_mod

    @property
    def med(self):
        return self.base * self.med_mod

    @property
    def long(self):
        return self.base * self.long_mod


class CharacterAttributes(object):

    def __init__(self, attributes):
        for weapon, base in attributes.items():
            setattr(self, weapon, WeaponAttribute(base))
Have a CharacterAttributes object in the character class and use it like this:
# Initialise
self.charAttribs = CharacterAttributes(self.skill_dict)
# Get some values
print self.charAttribs.CRM.short
print self.charAttribs.PST.med
print self.charAttribs.LCG.long
Refactoring "to hit" values for a game
I'm making a game and one of the methods calculates a character's base hit numbers based on skill values. The method currently calculates each value individually, since each skill can be used at short, medium, and long range. I originally thought I could combine the skills into a tuple and iterate over it, dynamically creating each hit number. But I don't know if it's actually possible, since I currently have each hit number assigned to its own variable. I also thought about creating a method for each range, and passing the tuple as an argument. I could create a new tuple or list with the resulting values and then assign them to the individual variables, but I don't see how it would be any better than doing it this way, except that it won't look so copy & pasted. Here's what I currently have:
def calcBaseHitNumbers(self, dict):
    """Calculate character's base hit numbers depending on skill level."""
    self.skill_dict = dict
    self.rifle = self.skill_dict.get('CRM', 0)
    self.pistol = self.skill_dict.get('PST', 0)
    self.big_gun = self.skill_dict.get('LCG', 0)
    self.heavy_weapon = self.skill_dict.get('HW', 0)
    self.bow = self.skill_dict.get('LB', 0)

    #self.skill_tuple = (self.rifle, self.pistol, self.big_gun, self.heavy_weapon,
    #                    self.bow)

    #---Short range
    ## for skill in self.skill_tuple:
    ##     self.base_hit_short = skill * 0.6
    self.charAttribs.bhCRM_short = self.rifle * 0.6
    self.charAttribs.bhPST_short = self.pistol * 0.6
    self.charAttribs.bhHW_short = self.heavy_weapon * 0.6
    self.charAttribs.bhLCG_short = self.big_gun * 0.6
    self.charAttribs.bhLB_short = self.bow * 0.6

    #---Med range
    self.charAttribs.bhCRM_med = self.rifle * 0.3
    self.charAttribs.bhPST_med = self.pistol * 0.3
    self.charAttribs.bhHW_med = self.heavy_weapon * 0.3
    self.charAttribs.bhLCG_med = self.big_gun * 0.3
    self.charAttribs.bhLB_med = self.bow * 0.3

    #---Long range
    self.charAttribs.bhCRM_long = self.rifle * 0.1
    self.charAttribs.bhPST_long = self.pistol * 0.1
    self.charAttribs.bhHW_long = self.heavy_weapon * 0.1
    self.charAttribs.bhLCG_long = self.big_gun * 0.1
    self.charAttribs.bhLB_long = self.bow * 0.1
How would you refactor this so it's more dynamic? Edit: I guess what I want to do is something like this: Have a tuple (like the one I commented out) and iterate over it 3 times, each time making a new value (for each skill) based on the modifier for each particular range. The resulting value is then automatically assigned to its respective variable. In my head, it makes sense. But when I actually try to code it, I get lost. The problem, I think, is that this is the first "real" program I've written; all I've done before are small scripts. This is only the 0.1 version of my program, so it's not critical to refactor it now. However, it seems very un-Pythonic to do this manually, and I also want to "future-proof" this in case things change down the road.
[ "It feels like what you really want is a class representing the weapon, with attributes to handle the base values and calculate hit values with various modifiers. Here's a simple example:\nSHORT_RANGE = 'S'\nMEDIUM_RANGE = 'M'\nLONG_RANGE = 'L'\nSHORT_RANGE_MODIFIER = 0.6\nMEDIUM_RANGE_MODIFIER = 0.3\nLONG_RANGE_MODIFIER = 0.1\n\nclass Weapon(object):\n def __init__(self, code_name, full_name, base_hit_value,\n short_range_modifier=None, medium_range_modifier=None,\n long_range_modifier=None):\n self.code_name, self.full_name = code_name, full_name\n self.base_hit_value = base_hit_value\n self.range_modifiers = {\n SHORT_RANGE: short_range_modifier or SHORT_RANGE_MODIFIER,\n MEDIUM_RANGE: medium_range_modifier or MEDIUM_RANGE_MODIFIER,\n LONG_RANGE: long_range_modifier or LONG_RANGE_MODIFIER,\n }\n\n def hit_value(self, range, modifier=1):\n return self.base_hit_value * self.range_modifiers[range] * modifier\n\nFrom there, you might create instances of Weapon inside your Character object like so:\n self.rifle = Weapon('CRM', 'rifle', 5)\n self.pistol = Weapon('PST', 'pistol', 10)\n\nAnd then if, say, the character fires the pistol at short range:\n hit_value = self.pistol.hit_value(SHORT_RANGE)\n\nThe extra argument to the hit_value() method can be used to pass in character- or situation-specific modifications.\nOf course, the next step beyond this would be to directly model the weapons as subclasses of Weapon (perhaps breaking down into specific types of weapons, like guns, bows, grenades, etc., each with their own base values) and add an Inventory class to represent the weapons a character is carrying.\nAll of this is pretty standard, boring object-oriented design procedure, but for plenty of situations this type of thinking will get you off the ground quickly and provide at least a little bit of basic flexibility.\n", "Lets see if I understand you scenario: each weapon has its own distinct hit point so a rifle may have 1, a heavy weapon may have 2 etc. Then each character has a short, medium and long value to be multiplied by the hit point of the weapon.\nYou should consider using a Strategy design. That is create a weapon superclass with a hit point property. Create sub class weapons for rifle, pistol, bow etc. I am sure that the differences between the weapons are more than just the hit points.\nThen the Character has one or more weapons depending on your gameplay. To calculate the hit point for a particular weapon is as simple as\ncurrent_weapon * self.medium\n\nIf you decide to add more weapons later on then you do not have to edit your Character code because your character can handle any weapon.\nIn Pseudo Python\nclass Weapon\n hit = 1\n #other properties of weapon\n\nclass Rifle(Weapon)\n #other properties of Rifle\n\nclass Pistol(Weapon)\n #other properties of Pistol\n\nclass Character\n weapon = Rifle()\n long=0.6\n def calcHit()\n return self.long*weapon.hit\n\njohn = Character()\njohn.weapon= Rifle()\njohn.calcHit\n\n", "@Vinko: perhaps make calcBaseHitNumbers, do the \"if not self.calculatedBase:\" check internally, and just no-op if it's been done before. That said, I can't see the pressing need for precalculating this information. But I'm no Python performance expert.\n", "What sense of dynamic do you mean? What is likely to vary - the number of skills, or the weighting factors, the number of ranges (short, med, long) or all of these?\nWhat happens to the (e.g.) 
bhPST_* values afterwards - do they get combined into one number?\nOne thing that leaps out is that the list of skills is hardwired in the code - I would be inclined to replace the bh variables with a method.\nSo (please take into account I don't know the first thing about Python :) )\ndef bh_short(self, key)\n skill = self.skill_dict.get(key, 0)\n return skill * 0.6\n\nNow you can keep a list of skills that contribute to hit points and iterate over that calling bh_short etc.\nPossibly also pass the range (long, med, short) into the function, or return all three values - this all depends on what you're going to do next with the calculated hit points.\nBasically, we need more information about the context this is to be used in.\n", "I would have a class for the character's attributes (so you don't have heaps of things in the character class) and a class for a weapon's attributes:\nclass WeaponAttribute(object):\n\n short_mod = 0.6\n med_mod = 0.3\n long_mod = 0.1\n\n def __init__(self, base):\n self.base = base\n\n @property\n def short(self):\n return self.base * self.short_mod\n\n @property\n def med(self):\n return self.base * self.med_mod\n\n @property\n def long(self):\n return self.base * self.long_mod\n\n\nclass CharacterAttributes(object):\n\n def __init__(self, attributes):\n for weapon, base in attributes.items():\n setattr(self, weapon, WeaponAttribute(base))\n\nHave a CharacterAttributes object in the character class and use it like this:\n# Initialise\nself.charAttribs = CharacterAttributes(self.skill_dict)\n# Get some values\nprint self.charAttribs.CRM.short\nprint self.charAttribs.PST.med\nprint self.charAttribs.LCG.long\n\n" ]
[ 6, 1, 0, 0, 0 ]
[]
[]
[ "python", "refactoring" ]
stackoverflow_0000237876_python_refactoring.txt
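To tie the strategy-pattern suggestions above together, here is a minimal, runnable sketch; the class and attribute names are illustrative rather than taken from any particular game engine:

SHORT, MEDIUM, LONG = 'S', 'M', 'L'

class Weapon(object):
    def __init__(self, name, base_hit, range_mods=None):
        self.name = name
        self.base_hit = base_hit
        # multiplier applied to the base hit value at each range
        self.range_mods = range_mods or {SHORT: 0.6, MEDIUM: 0.3, LONG: 0.1}

    def hit_value(self, rng, modifier=1.0):
        return self.base_hit * self.range_mods[rng] * modifier

class Character(object):
    def __init__(self, weapon):
        self.weapon = weapon  # swapping this swaps the whole strategy

    def attack(self, rng):
        return self.weapon.hit_value(rng)

john = Character(Weapon('rifle', 5))
print john.attack(SHORT)  # 5 * 0.6 -> 3.0

Because the range modifiers live on the weapon instance, adding a new weapon type is just constructing another Weapon (or subclass) and assigning it to the character.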
Q: getting pywin32 to work inside open office 2.4 built in python 2.3 interpreter I need to update data to an MSSQL 2005 database so I have decided to use adodbapi, which is supposed to come built into the standard installation of python 2.1.1 and greater. It needs pywin32 to work correctly and the open office python 2.3 installation does not have pywin32 built into it. It also seems like this built-in Python installation does not have adodbapi, as I get an error when I try to import adodbapi. Any suggestions on how to get both pywin32 and adodbapi installed into this open office 2.4 python installation? Thanks. Oh yeah, I tried those ways; annoyingly, nothing worked. So I have reverted to Jython; that way I can access Open Office for its conversion capabilities along with decent database access. Thanks for the help. A: maybe the best way to install pywin32 is to place it in (openofficedir)\program\python-core-2.3.4\lib\site-packages it is easy if you have a python 2.3 installation (with pywin installed) under C:\python2.3 move the C:\python2.3\Lib\site-packages\ to your (openofficedir)\program\python-core-2.3.4\lib\site-packages A: http://www.time-travellers.org/shane/howtos/MS-SQL-Express-Python-HOWTO.html use an alternative? A: I don't know about open office python. I suggest trying the standard windows python installation followed by Pywin32. Alternatively, there is a single installer containing both at activestate. In the pythonwin IDE, select menu item tools / COM Makepy utility. The libraries you need to build with makepy are (or similar versions): Microsoft ActiveX Data Objects 2.8 Library (2.8) Microsoft ActiveX Data Objects Recordset 2.8 Library (2.8) After makepy is done, you can use the COM object to access ADODB: from win32com import client conn=client.Dispatch('adodb.connection') conn.Open(connection_string) resultset,x=conn.Execute('select * from mytable') resultset.MoveFirst() record_fields=resultset.Fields (etc.)
getting pywin32 to work inside open office 2.4 built in python 2.3 interpreter
I need to update data to an MSSQL 2005 database so I have decided to use adodbapi, which is supposed to come built into the standard installation of python 2.1.1 and greater. It needs pywin32 to work correctly and the open office python 2.3 installation does not have pywin32 built into it. It also seems like this built-in Python installation does not have adodbapi, as I get an error when I try to import adodbapi. Any suggestions on how to get both pywin32 and adodbapi installed into this open office 2.4 python installation? Thanks. Oh yeah, I tried those ways; annoyingly, nothing worked. So I have reverted to Jython; that way I can access Open Office for its conversion capabilities along with decent database access. Thanks for the help.
[ "maybe the best way to install pywin32 is to place it in \n(openofficedir)\\program\\python-core-2.3.4\\lib\\site-packages\nit is easy if you have a python 2.3 installation (with pywin installed) under \nC:\\python2.3 \nmove the C:\\python2.3\\Lib\\site-packages\\ to your\n(openofficedir)\\program\\python-core-2.3.4\\lib\\site-packages\n", "http://www.time-travellers.org/shane/howtos/MS-SQL-Express-Python-HOWTO.html\nuse an alternative?\n", "I don't know about open office python.\nI suggest trying the standard windows python installation followed by Pywin32. Alternatively, there is a single installer containing both at activestate. In the pythonwin IDE, select menu item tools / COM Makepy utility. The libraries you need to build with makepy are (or similar versions):\nMicrosoft ActiveX Data Objects 2.8 Library (2.8)\nMicrosoft ActiveX Data Objects Recordset 2.8 Library (2.8)\n\nAfter makepy is done, you can use the COM object to access ADODB:\nfrom win32com import client\nconn=client.Dispatch('adodb.connection')\nconn.Open(connection_string)\nresultset,x=e.Execute('select * from mytable')\nresultset.MoveFirst()\nrecord_fields=resultset.Fields\n(etc.)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "adodbapi", "openoffice.org", "python", "pywin32" ]
stackoverflow_0000239009_adodbapi_openoffice.org_python_pywin32.txt
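When debugging this kind of embedded-interpreter problem, it helps to confirm which interpreter and site-packages directory the OpenOffice Python is actually using. The following uses only the standard library, so it runs even before pywin32 is installed; the exact python-core directory name is just what this question reports and may differ between OpenOffice releases:

import sys

print sys.version     # should report 2.3.x for the OpenOffice build
print sys.executable  # path of the interpreter actually running
for p in sys.path:
    print p           # look for ...\python-core-2.3.4\lib\site-packages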
Q: Python file interface for strings Is there a Python class that wraps the file interface (read, write etc.) around a string? I mean something like the stringstream classes in C++. I was thinking of using it to redirect the output of print into a string, like this sys.stdout = string_wrapper() print "foo", "bar", "baz" s = sys.stdout.to_string() #now s == "foo bar baz" EDIT: This is a duplicate of How do I wrap a string in a file in Python? A: Yes, there is StringIO: import StringIO import sys sys.stdout = StringIO.StringIO() print "foo", "bar", "baz" s = sys.stdout.getvalue() A: For better performance, note that you can also use cStringIO. But also note that this isn't very portable to python 3.
Python file interface for strings
Is there a Python class that wraps the file interface (read, write etc.) around a string? I mean something like the stringstream classes in C++. I was thinking of using it to redirect the output of print into a string, like this sys.stdout = string_wrapper() print "foo", "bar", "baz" s = sys.stdout.to_string() #now s == "foo bar baz" EDIT: This is a duplicate of How do I wrap a string in a file in Python?
[ "Yes, there is StringIO:\nimport StringIO\nimport sys\n\n\nsys.stdout = StringIO.StringIO()\nprint \"foo\", \"bar\", \"baz\"\ns = sys.stdout.getvalue()\n\n", "For better performance, note that you can also use cStringIO. But also note that this isn't very portable to python 3.\n" ]
[ 12, 2 ]
[]
[]
[ "file", "python", "string" ]
stackoverflow_0000239912_file_python_string.txt
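For later Python versions the same trick uses the io module; io.StringIO (available from Python 2.6, and the only option in Python 3, where the StringIO and cStringIO modules were removed) plays the same role. A sketch of the Python 3 equivalent:

import io
import sys

buf = io.StringIO()
old_stdout = sys.stdout
sys.stdout = buf
print("foo", "bar", "baz")  # print is a function in Python 3
sys.stdout = old_stdout
s = buf.getvalue()          # 'foo bar baz\n'

Note that under Python 2, io.StringIO accepts only unicode strings, so the classic StringIO module is still the simpler choice there.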
Q: How do you programmatically reorder children of an ATFolder subclass? I have a Plone product that uses a custom folder type for containing a set of custom content objects. The folder type was created by subclassing BaseFolder and it has a schema with a couple of text fields. Currently, when custom objects are added to the custom folder, the objects are sorted alphabetically by their id. How can I override this behavior and allow my users to sort the custom folders manually, say through the "Contents" view? A: Quickest solution: subclass from ATFolder instead of BaseFolder. That gives you all the "normal" reordering and other common Plone folder capabilities (which I suspect you also want). If you want to be more selective, look into Products/ATContentTypes/content/base.py: ATCTOrderedFolder and OrderedBaseFolder.
How do you programmatically reorder children of an ATFolder subclass?
I have a Plone product that uses a custom folder type for containing a set of custom content objects. The folder type was created by subclassing BaseFolder and it has a schema with a couple of text fields. Currently, when custom objects are added to the custom folder, the objects are sorted alphabetically by their id. How can I override this behavior and allow my users to sort the custom folders manually, say through the "Contents" view?
[ "Quickest solution: subclass from ATFolder instead of BaseFolder. That gives you all the \"normal\" reordering and other commmon plone folder capabilities (which I suspect you also want).\nIf you want to be more selective, look into Products/ATContentTypes/content/base.py: ATCTOrderedFolder and OrderedBaseFolder.\n" ]
[ 4 ]
[]
[]
[ "archetypes", "plone", "python", "zope" ]
stackoverflow_0000237211_archetypes_plone_python_zope.txt
Q: How can I call a DLL from a scripting language? I have a third-party product, a terminal emulator, which provides a DLL that can be linked to a C program to basically automate the driving of this product (send keystrokes, detect what's on the screen and so forth). I want to drive it from a scripting language (I'm comfortable with Python and slightly less so with Perl) so that we don't have to compile and send out executables to our customers whenever there's a problem found. We also want the customers to be able to write their own scripts using ours as baselines and they won't entertain the idea of writing and compiling C code. What's a good way of getting Python/Perl to interface to a Windows DLL? My first thought was to write a server program and have a Python script communicate with it via TCP but there's got to be an easier solution. A: One way to call C libraries from Python is to use ctypes: >>> from ctypes import * >>> windll.user32.MessageBoxA(None, "Hello world", "ctypes", 0); A: In Perl, Win32::API is an easy way to do some interfacing with DLLs. There is also Inline::C, if you have access to a compiler and the windows headers. Perl XSUBs can also create an interface between Perl and C. A: In Perl, P5NCI will also do that, at least in some cases. But it seems to me that anything you use that directly manages interfacing with the dll is going to be user-unfriendly, and if you are going to have a user (scriptor?) friendly wrapper, it might as well be an XS module. I guess I don't see a meaningful distinction between "compile and send out executables" and "compile and send out scripts". A: For Python, you could compile an extension which links to the DLL, so that in Python you could just import it like a normal module. You could do this by hand, by using a library like Boost.Python, or by using a tool such as SWIG (which also supports Perl and other scripting languages) to generate a wrapper automatically. A: The Python Py_InitModule API function allows you to create a module from c/c++ functions which can then be called from Python. It takes about a dozen or so lines of c/c++ code to achieve but it is pretty easy code to write: https://python.readthedocs.org/en/v2.7.2/extending/extending.html#the-module-s-method-table-and-initialization-function The Zeus editor that I wrote uses this approach to allow Zeus macros to be written in Python and it works very well.
How can I call a DLL from a scripting language?
I have a third-party product, a terminal emulator, which provides a DLL that can be linked to a C program to basically automate the driving of this product (send keystrokes, detect what's on the screen and so forth). I want to drive it from a scripting language (I'm comfortable with Python and slightly less so with Perl) so that we don't have to compile and send out executables to our customers whenever there's a problem found. We also want the customers to be able to write their own scripts using ours as baselines and they won't entertain the idea of writing and compiling C code. What's a good way of getting Python/Perl to interface to a Windows DLL? My first thought was to write a server program and have a Python script communicate with it via TCP but there's got to be an easier solution.
[ "One way to call C libraries from Python is to use ctypes:\n>>> from ctypes import *\n>>> windll.user32.MessageBoxA(None, \"Hello world\", \"ctypes\", 0);\n\n", "In Perl, Win32::API is an easy way to some interfacing to DLLs. There is also Inline::C, if you have access to a compiler and the windows headers.\nPerl XSUBs can also create an interface between Perl and C. \n", "In Perl, P5NCI will also do that, at least in some cases. But it seems to me that anything you use that directly manages interfacing with the dll is going to be user-unfriendly, and if you are going to have a user (scriptor?) friendly wrapper, it might as well be an XS module.\nI guess I don't see a meaningful distinction between \"compile and send out executables\" and \"compile and send out scripts\".\n", "For Python, you could compile an extension which links to the DLL, so that in Python you could just import it like a normal module. You could do this by hand, by using a library like Boost.Python, or by using a tool such as SWIG (which also supports Perl and other scripting languages) to generate a wrapper automatically.\n", "The Python Py_InitModule API function allows you to create a module from c/c++ functions which can then be call from Python. \nIt takes about a dozen or so lines of c/c++ code to achieve but it is pretty easy code to write:\nhttps://python.readthedocs.org/en/v2.7.2/extending/extending.html#the-module-s-method-table-and-initialization-function\nThe Zeus editor that I wrote, uses this appoach to allow Zeus macros to be written in Python and it works very well.\n" ]
[ 15, 12, 5, 4, 3 ]
[]
[]
[ "dll", "perl", "python" ]
stackoverflow_0000239020_dll_perl_python.txt
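To make the ctypes suggestion concrete for the terminal-emulator case: assuming, purely for illustration, that the vendor DLL is called termemu.dll and exports a SendKeys function taking a C string and returning an int (both names are invented here, not taken from any real product), the Python side could look roughly like this:

import ctypes

# WinDLL assumes the stdcall convention common for Windows DLLs;
# use ctypes.CDLL instead if the vendor documents the cdecl convention.
emu = ctypes.WinDLL('termemu.dll')          # hypothetical vendor DLL
emu.SendKeys.argtypes = [ctypes.c_char_p]   # invented export, for illustration
emu.SendKeys.restype = ctypes.c_int

rc = emu.SendKeys('LOGON USER1\r')
if rc != 0:
    raise RuntimeError('SendKeys failed with code %d' % rc)

Customers could then write their own scripts against a thin Python wrapper like this without ever touching a C compiler.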
Q: I want a program that writes every possible combination to a different line of a text file I want to write a program that would print every combination of a set of variables to a text file, creating a word list. Each answer should be written on a separate line, with all of the results for 1 digit, 2 digits, and 3 digits written to a single text file. Is there a simple way I can write a python program that can accomplish this? Here is an example of the output I am expecting when printing all the binary number combinations possible for 1, 2, and 3 digits: Output: 0 1 00 01 10 11 000 001 010 011 100 101 110 111 A: # Given two lists of strings, return a list of all ways to concatenate # one from each. def combos(xs, ys): return [x + y for x in xs for y in ys] digits = ['0', '1'] for c in combos(digits, combos(digits, digits)): print c #. 000 #. 001 #. 010 #. 011 #. 100 #. 101 #. 110 #. 111 A: A naïve solution which solves the problem and is general enough for any application you might have is this: def combinations(words, length): if length == 0: return [] result = [[word] for word in words] while length > 1: new_result = [] for combo in result: new_result.extend(combo + [word] for word in words) result = new_result[:] length -= 1 return result Basically, this gradually builds up a tree in memory of all the combinations, and then returns them. It is memory-intensive, however, and so is impractical for large-scale combinations. Another solution for the problem is, indeed, to use counting, but then to transform the numbers generated into a list of words from the wordlist. To do so, we first need a function (called number_to_list()): def number_to_list(number, words): list_out = [] while number: list_out = [number % len(words)] + list_out number = number // len(words) return [words[n] for n in list_out] This is, in fact, a system for converting decimal numbers to other bases. We then write the counting function; this is relatively simple, and will make up the core of the application: def combinations(words, length): numbers = xrange(len(words)**length) for number in numbers: combo = number_to_list(number, words) if len(combo) < length: combo = [words[0]] * (length - len(combo)) + combo yield combo This is a Python generator; making it a generator allows it to use up less RAM. There is a little work to be done after turning the number into a list of words; this is because these lists will need padding so that they are at the requested length. It would be used like this: >>> list(combinations('01', 3)) [['0', '0', '0'], ['0', '0', '1'], ['0', '1', '0'], ['0', '1', '1'], ['1', '0', '0'], ['1', '0', '1'], ['1', '1', '0'], ['1', '1', '1']] As you can see, you get back a list of lists. Each of these sub-lists contains a sequence of the original words; you might then do something like map(''.join, list(combinations('01', 3))) to retrieve the following result: ['000', '001', '010', '011', '100', '101', '110', '111'] You could then write this to disk; a better idea, however, would be to use the built-in optimizations that generators have and do something like this: fileout = open('filename.txt', 'w') fileout.writelines( ''.join(combo) for combo in combinations('01', 3)) fileout.close() This will only use as much RAM as necessary (enough to store one combination). I hope this helps. A: It shouldn't be too hard in most languages. Does the following pseudo-code help? for(int i=0; i < 2^digits; i++) { WriteLine(ToBinaryString(i)); } A: A basic function to produce all the permutations of a list is given below. 
In this approach, permutations are created lazily by using generators. def perms(seq): if seq == []: yield [] else: for index,item in enumerate(seq): rest = seq[:index] + seq[index+1:] for restperm in perms(rest): yield [item] + restperm alist = [1,1,0] for permutation in perms(alist): print permutation
I want a program that writes every possible combination to a different line of a text file
I want to write a program that would print every combination of a set of variables to a text file, creating a word list. Each answer should be written on a separate line, with all of the results for 1 digit, 2 digits, and 3 digits written to a single text file. Is there a simple way I can write a python program that can accomplish this? Here is an example of the output I am expecting when printing all the binary number combinations possible for 1, 2, and 3 digits: Output: 0 1 00 01 10 11 000 001 010 011 100 101 110 111
[ "# Given two lists of strings, return a list of all ways to concatenate\n# one from each.\ndef combos(xs, ys):\n return [x + y for x in xs for y in ys]\n\ndigits = ['0', '1']\nfor c in combos(digits, combos(digits, digits)):\n print c\n\n#. 000\n#. 001\n#. 010\n#. 011\n#. 100\n#. 101\n#. 110\n#. 111\n\n", "A naïve solution which solves the problem and is general enough for any application you might have is this:\ndef combinations(words, length):\n if length == 0:\n return []\n result = [[word] for word in words]\n while length > 1:\n new_result = []\n for combo in result:\n new_result.extend(combo + [word] for word in words)\n result = new_result[:]\n length -= 1\n return result\n\nBasically, this gradually builds up a tree in memory of all the combinations, and then returns them. It is memory-intensive, however, and so is impractical for large-scale combinations.\nAnother solution for the problem is, indeed, to use counting, but then to transform the numbers generated into a list of words from the wordlist. To do so, we first need a function (called number_to_list()):\ndef number_to_list(number, words):\n list_out = []\n while number:\n list_out = [number % len(words)] + list_out\n number = number // len(words)\n return [words[n] for n in list_out]\n\nThis is, in fact, a system for converting decimal numbers to other bases. We then write the counting function; this is relatively simple, and will make up the core of the application:\ndef combinations(words, length):\n numbers = xrange(len(words)**length)\n for number in numbers:\n combo = number_to_list(number, words)\n if len(combo) < length:\n combo = [words[0]] * (length - len(combo)) + combo\n yield combo\n\nThis is a Python generator; making it a generator allows it to use up less RAM. There is a little work to be done after turning the number into a list of words; this is because these lists will need padding so that they are at the requested length. It would be used like this:\n>>> list(combinations('01', 3))\n[['0', '0', '0'], ['0', '0', '1'],\n['0', '1', '0'], ['0', '1', '1'],\n['1', '0', '0'], ['1', '0', '1'],\n['1', '1', '0'], ['1', '1', '1']]\n\nAs you can see, you get back a list of lists. Each of these sub-lists contains a sequence of the original words; you might then do something like map(''.join, list(combinations('01', 3))) to retrieve the following result:\n['000', '001', '010', '011', '100', '101', '110', '111']\n\nYou could then write this to disk; a better idea, however, would be to use the built-in optimizations that generators have and do something like this:\nfileout = open('filename.txt', 'w')\nfileout.writelines(\n ''.join(combo) for combo in combinations('01', 3))\nfileout.close()\n\nThis will only use as much RAM as necessary (enough to store one combination). I hope this helps.\n", "It shouldn't be too hard in most languages. Does the following pseudo-code help?\nfor(int i=0; i < 2^digits; i++)\n{\n WriteLine(ToBinaryString(i));\n}\n\n", "A basic function to produce all the permutations of a list is given below. In this approach, permutations are created lazily by using generators.\ndef perms(seq):\n if seq == []:\n yield []\n else:\n res = []\n for index,item in enumerate(seq):\n rest = seq[:index] + seq[index+1:]\n for restperm in perms(rest):\n yield [item] + restperm\n\nalist = [1,1,0]\nfor permuation in perms(alist):\n print permuation\n\n" ]
[ 3, 3, 2, 2 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0000241533_python_recursion.txt
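For what it's worth, on Python 2.6 and later the standard library already covers this pattern: itertools.product with the repeat argument enumerates exactly the fixed-length combinations discussed above, so the whole wordlist can be produced in a few lines:

import itertools

fileout = open('wordlist.txt', 'w')
for length in (1, 2, 3):
    for combo in itertools.product('01', repeat=length):
        fileout.write(''.join(combo) + '\n')
fileout.close()

Like the generator answer above, this never holds more than one combination in memory at a time.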
Q: Environment Variables in Python on Linux Python's access to environment variables does not accurately reflect the operating system's view of the process's environment. os.getenv and os.environ do not function as expected in particular cases. Is there a way to properly get the running process' environment? To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in python): #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char *argv[]){ char *env; for(;;){ env = getenv("SOME_VARIABLE"); if(env) puts(env); sleep(5); } } import os import time while True: env = os.getenv("SOME_VARIABLE") if env is not None: print env time.sleep(5) Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this: (gdb) print setenv("SOME_VARIABLE", "my value", 1) [Switching to Thread -1208600896 (LWP 16163)] $1 = 0 (gdb) print (char *)getenv("SOME_VARIABLE") $2 = 0x8293126 "my value" then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned python program, however, will not. Is there a way to get the python program to function like the C program in this case? (Yes, I realize this is a very obscure and potentially damaging action to perform on a running process) Also, I'm currently using python 2.4; this may have been fixed in a later version of python. A: That's a very good question. It turns out that the os module initializes os.environ to the value of posix.environ, which is set on interpreter start up. In other words, the standard library does not appear to provide access to the getenv function. That is a case where it would probably be safe to use ctypes on unix. Since you would be calling an ultra-standard libc function. A: You can use ctypes to do this pretty simply: >>> from ctypes import CDLL, c_char_p >>> getenv = CDLL("libc.so.6").getenv >>> getenv.restype = c_char_p >>> getenv("HOME") '/home/glyph' A: Another possibility is to use pdb, or some other python debugger instead, and change os.environ at the python level, rather than the C level. Here's a small recipe I posted to interrupt a running python process and provide access to a python console on receiving a signal. Alternatively, just stick a pdb.set_trace() at some point in your code you want to interrupt. In either case, just run the statement "import os; os.environ['SOME_VARIABLE']='my_value'" and you should be updated as far as python is concerned. I'm not sure if this will also update the C environment with setenv, so if you have C modules using getenv directly you may have to do some more work to keep this in sync. A: I don't believe many programs EVER expect to have their environment externally modified, so loading a copy of the passed environment at startup is equivalent. You have simply stumbled on an implementation choice. If you are seeing all the set-at-startup values and putenv/setenv from within your program works, I don't think there's anything to be concerned about. There are far cleaner ways to pass updated information to running executables. A: Looking at the Python source code (2.4.5): Modules/posixmodule.c gets the environ in convertenviron() which gets run at startup (see INITFUNC) and stores the environment in a platform-specific module (nt, os2, or posix) Lib/os.py looks at sys.builtin_module_names, and imports all symbols from either posix, nt, or os2 So yes, it gets decided at startup. 
os.environ is not going to be helpful here. If you really want to do this, then the most obvious approach that comes to mind is to create your own custom C-based python module, with a getenv that always invokes the system call.
Environment Variables in Python on Linux
Python's access to environment variables does not accurately reflect the operating system's view of the process's environment. os.getenv and os.environ do not function as expected in particular cases. Is there a way to properly get the running process' environment? To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in python): #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char *argv[]){ char *env; for(;;){ env = getenv("SOME_VARIABLE"); if(env) puts(env); sleep(5); } } import os import time while True: env = os.getenv("SOME_VARIABLE") if env is not None: print env time.sleep(5) Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this: (gdb) print setenv("SOME_VARIABLE", "my value", 1) [Switching to Thread -1208600896 (LWP 16163)] $1 = 0 (gdb) print (char *)getenv("SOME_VARIABLE") $2 = 0x8293126 "my value" then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned python program, however, will not. Is there a way to get the python program to function like the C program in this case? (Yes, I realize this is a very obscure and potentially damaging action to perform on a running process) Also, I'm currently using python 2.4; this may have been fixed in a later version of python.
[ "That's a very good question.\nIt turns out that the os module initializes os.environ to the value of posix.environ, which is set on interpreter start up. In other words, the standard library does not appear to provide access to the getenv function.\nThat is a case where it would probably be safe to use ctypes on unix. Since you would be calling an ultra-standard libc function.\n", "You can use ctypes to do this pretty simply:\n>>> from ctypes import CDLL, c_char_p\n>>> getenv = CDLL(\"libc.so.6\").getenv\n>>> getenv.restype = c_char_p\n>>> getenv(\"HOME\")\n'/home/glyph'\n\n", "Another possibility is to use pdb, or some other python debugger instead, and change os.environ at the python level, rather than the C level. Here's a small recipe I posted to interrupt a running python process and provide access to a python console on receiving a signal. Alternatively, just stick a pdb.set_trace() at some point in your code you want to interrupt. In either case, just run the statement \"import os; os.environ['SOME_VARIABLE']='my_value'\" and you should be updated as far as python is concerned. \nI'm not sure if this will also update the C environment with setenv, so if you have C modules using getenv directly you may have to do some more work to keep this in sync.\n", "I don't believe many programs EVER expect to have their environment externally modified, so loading a copy of the passed environment at startup is equivalent. You have simply stumbled on an implementation choice.\nIf you are seeing all the set-at-startup values and putenv/setenv from within your program works, I don't think there's anything to be concerned about. There are far cleaner ways to pass updated information to running executables.\n", "Looking at the Python source code (2.4.5):\n\nModules/posixmodule.c gets the environ in convertenviron() which gets run at startup (see INITFUNC) and stores the environment in a platform-specific module (nt, os2, or posix)\nLib/os.py looks at sys.builtin_module_names, and imports all symbols from either posix, nt, or os2\n\nSo yes, it gets decided at startup. os.environ is not going to be helpful here.\nIf you really want to do this, then the most obvious approach that comes to mind is to create your own custom C-based python module, with a getenv that always invokes the system call.\n" ]
[ 16, 12, 4, 3, 1 ]
[]
[]
[ "environment_variables", "gdb", "python" ]
stackoverflow_0000235435_environment_variables_gdb_python.txt
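Putting the ctypes answer above into the shape of the original C loop, a Python program that sees gdb-injected setenv() calls might look like this (Linux-specific, assuming glibc is available as libc.so.6):

import time
from ctypes import CDLL, c_char_p

libc = CDLL('libc.so.6')
libc.getenv.restype = c_char_p

while True:
    env = libc.getenv('SOME_VARIABLE')  # reads the live C environment
    if env is not None:
        print env
    time.sleep(5)

Because this bypasses os.environ entirely and calls libc's getenv() on every iteration, it behaves like the C program rather than the snapshot-at-startup Python version.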
Q: Reading collections of extended elements in an RSS feed with Universal Feed Parser Is there any way to read a collection of extension elements with Universal Feed Parser? This is just a short snippet from the Kuler RSS feed: <channel> <item> <!-- snip: regular RSS elements --> <kuler:themeItem> <kuler:themeID>123456</kuler:themeID> <!-- snip --> <kuler:themeSwatches> <kuler:swatch> <kuler:swatchHexColor>FFFFFF</kuler:swatchHexColor> <!-- snip --> </kuler:swatch> <kuler:swatch> <kuler:swatchHexColor>000000</kuler:swatchHexColor> <!-- snip --> </kuler:swatch> </kuler:themeSwatches> </kuler:themeItem> </item> </channel> I tried the following: >>> feed = feedparser.parse(url) >>> feed.channel.title u'kuler highest rated themes' >>> feed.entries[0].title u'Foobar' >>> feed.entries[0].kuler_themeid u'123456' >>> feed.entries[0].kuler_swatch u'' feed.entries[0].kuler_swatchhexcolor returns only the last kuler:swatchHexColor. Is there any way to retrieve all elements with feedparser? I have already worked around the issue by using minidom, but I would like to use Universal Feed Parser if possible (due to its very simple API). Can it be extended? I haven't found anything about that in the documentation, so if someone has more experience with the library, please advise me. A: Universal Feed Parser is really nice for most feeds, but for extended feeds, you might wanna try something called BeautifulSoup. It's an XML/HTML/XHTML parsing library which is originally designed for screenscraping; turns out it's also brilliant for this sort of thing. The documentation is pretty good, and it's got a self-explanatory API, so if you're thinking of using anything else, that's what I'd recommend. I'd probably use it like this: >>> import BeautifulSoup >>> import urllib2 # Fetch HTML data from url >>> connection = urllib2.urlopen('http://kuler.adobe.com/path/to/rss.xml') >>> html_data = connection.read() >>> connection.close() # Create and search the soup >>> soup = BeautifulSoup.BeautifulSoup(html_data) >>> themes = soup.findAll('kuler:themeitem') # Note: all lower-case element names # Get the ID of the first theme >>> themes[0].find('kuler:themeid').contents[0] u'123456' # Get an ordered list of the hex colors for the first theme >>> themeswatches = themes[0].find('kuler:themeswatches') >>> colors = [color.contents[0] for color in ... themeswatches.findAll('kuler:swatchhexcolor')] >>> colors [u'FFFFFF', u'000000'] So you can probably get the idea that this is a very cool library. It wouldn't be too good if you were parsing any old RSS feed, but because the data is from Adobe Kuler, you can be pretty sure that it's not going to vary enough to break your app (i.e. it's a trusted enough source). Even worse is trying to parse Adobe's goddamn .ASE format. I tried writing a parser for it and it got really horrible, really quickly. Ug. So, yeah, the RSS feeds are probably the easiest way of interfacing with Kuler.
Reading collections of extended elements in an RSS feed with Universal Feed Parser
Is there any way to read a collection of extension elements with Universal Feed Parser? This is just a short snippet from the Kuler RSS feed: <channel> <item> <!-- snip: regular RSS elements --> <kuler:themeItem> <kuler:themeID>123456</kuler:themeID> <!-- snip --> <kuler:themeSwatches> <kuler:swatch> <kuler:swatchHexColor>FFFFFF</kuler:swatchHexColor> <!-- snip --> </kuler:swatch> <kuler:swatch> <kuler:swatchHexColor>000000</kuler:swatchHexColor> <!-- snip --> </kuler:swatch> </kuler:themeSwatches> </kuler:themeItem> </item> </channel> I tried the following: >>> feed = feedparser.parse(url) >>> feed.channel.title u'kuler highest rated themes' >>> feed.entries[0].title u'Foobar' >>> feed.entries[0].kuler_themeid u'123456' >>> feed.entries[0].kuler_swatch u'' feed.entries[0].kuler_swatchhexcolor returns only the last kuler:swatchHexColor. Is there any way to retrieve all elements with feedparser? I have already worked around the issue by using minidom, but I would like to use Universal Feed Parser if possible (due to its very simple API). Can it be extended? I haven't found anything about that in the documentation, so if someone has more experience with the library, please advise me.
[ "Universal Feed Parser is really nice for most feeds, but for extended feeds, you might wanna try something called BeautifulSoup. It's an XML/HTML/XHTML parsing library which is originally designed for screenscraping; turns out it's also brilliant for this sort of thing. The documentation is pretty good, and it's got a self-explanatory API, so if you're thinking of using anything else, that's what I'd recommend.\nI'd probably use it like this:\n>>> import BeautifulSoup\n>>> import urllib2\n\n# Fetch HTML data from url\n>>> connection = urllib2.urlopen('http://kuler.adobe.com/path/to/rss.xml')\n>>> html_data = connection.read()\n>>> connection.close()\n\n# Create and search the soup\n>>> soup = BeautifulSoup.BeautifulSoup(html_data)\n>>> themes = soup.findAll('kuler:themeitem') # Note: all lower-case element names\n\n# Get the ID of the first theme\n>>> themes[0].find('kuler:themeid').contents[0]\nu'123456'\n\n# Get an ordered list of the hex colors for the first theme\n>>> themeswatches = themes[0].find('kuler:themeswatches')\n>>> colors = [color.contents[0] for color in\n... themeswatches.findAll('kuler:swatchhexcolor')]\n>>> colors\n[u'FFFFFF', u'000000']\n\nSo you can probably get the idea that this is a very cool library. It wouldn't be too good if you were parsing any old RSS feed, but because the data is from Adobe Kuler, you can be pretty sure that it's not going to vary enough to break your app (i.e. it's a trusted enough source).\nEven worse is trying to parse Adobe's goddamn .ASE format. I tried writing a parser for it and it got really horrible, really quickly. Ug. So, yeah, the RSS feeds are probably the easiest way of interfacing with Kuler.\n" ]
[ 3 ]
[]
[]
[ "adobe", "feed", "python", "rss" ]
stackoverflow_0000241503_adobe_feed_python_rss.txt
Q: WindowsError: priveledged instruction when saving a FreeImagePy Image in script, works in IDLE I'm working on a program to do some image wrangling in Python for work. I'm using FreeImagePy because PIL doesn't support multi-page TIFFs. Whenever I try to save a file with it from my program I get this error message (or something similar depending on which way I try to save): Error returned. TIFF FreeImage_Save: failed to open file C:/OCRtmp/ocr page0 Traceback (most recent call last): File "C:\Python25\Projects\OCRPageUnzipper\PageUnzipper.py", line 102, in <module> OCRBox.convertToPages("C:/OCRtmp/ocr page",FIPY.FIF_TIFF) File "C:\Python25\lib\site-packages\FreeImagePy\FreeImagePy\FreeImagePy.py", line 2080, in convertToPages self.Save(FIF, dib, fileNameOut, flags) File "C:\Python25\lib\site-packages\FreeImagePy\FreeImagePy\FreeImagePy.py", line 187, in Save return self.__lib.Save(typ, bitmap, fileName, flags) WindowsError: exception: priviledged instruction When I try and do the same things from IDLE, it works fine. A: Looks like a permission issue, make sure you don't have the file open in another application, and that you have write permissions to the file location you're trying to write to. A: That's what I thought too, but I figured it out a couple hours ago. Apparently if the directory/file I'm trying to write to doesn't exist, FreeImagePy isn't smart enough to create it (most of the time. Creating a new multipage image seems to work fine) but I guess running it within IDLE, IDLE figures it out and takes care of it or something. I managed to work around it by using os.mkdir to explicitly make sure things that I need exist.
WindowsError: priveledged instruction when saving a FreeImagePy Image in script, works in IDLE
I'm working on a program to do some image wrangling in Python for work. I'm using FreeImagePy because PIL doesn't support multi-page TIFFs. Whenever I try to save a file with it from my program I get this error message (or something similar depending on which way I try to save): Error returned. TIFF FreeImage_Save: failed to open file C:/OCRtmp/ocr page0 Traceback (most recent call last): File "C:\Python25\Projects\OCRPageUnzipper\PageUnzipper.py", line 102, in <module> OCRBox.convertToPages("C:/OCRtmp/ocr page",FIPY.FIF_TIFF) File "C:\Python25\lib\site-packages\FreeImagePy\FreeImagePy\FreeImagePy.py", line 2080, in convertToPages self.Save(FIF, dib, fileNameOut, flags) File "C:\Python25\lib\site-packages\FreeImagePy\FreeImagePy\FreeImagePy.py", line 187, in Save return self.__lib.Save(typ, bitmap, fileName, flags) WindowsError: exception: priviledged instruction When I try and do the same things from IDLE, it works fine.
[ "Looks like a permission issues, make sure you don't have the file open in another application, and that you have write permissions to the file location your trying to write to.\n", "That's what I thought too, but I figured it out a couple hours ago. Apparently if the directory/file I'm trying to write to doesn't exist, FreeImagePy isn't smart enough to create it (most of the time. Creating a new multipage image seems to work fine) but i guess running it within IDLE, IDLE figures it out and takes care of it or something. I managed to work around it by using os.mkdir to explicitly make sure things that I need exist.\n" ]
[ 1, 0 ]
[]
[]
[ "exception", "python", "windowserror" ]
stackoverflow_0000240031_exception_python_windowserror.txt
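A sketch of the workaround described in the second answer: create the output directory before asking FreeImagePy to save into it (os.makedirs, unlike os.mkdir, also creates any missing intermediate directories):

import os

out_dir = 'C:/OCRtmp'
if not os.path.exists(out_dir):
    os.makedirs(out_dir)
# now files like 'C:/OCRtmp/ocr page0' can be opened for writing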
Q: What would you recommend for a high traffic ajax intensive website? For a website like reddit with lots of up/down votes and lots of comments per topic, what should I go with? Lighttpd/Php or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, what would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL? A: I can't speak to the MySQL/PostgreSQL question as I have limited experience with Postgres, but my Masters research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware. Of course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes. A: The ideal setup would be close to this: In short, nginx is a fast and light webserver/front-proxy with a unique module that lets it fetch data directly from memcached's RAM store, without hitting the disk, or any dynamic webapp. Of course, if the request's URL wasn't already cached (or if it has expired), the request proceeds to the webapp as usual. The genius part is that when the webapp has generated the response, a copy of it goes to memcached, ready to be reused. All this is perfectly applicable not only to webpages, but to AJAX query/responses. In the article the 'back' servers are http, and specifically talk about mongrel. It would be even better if the back were FastCGI and other (faster?) framework; but it's a lot less critical, since the nginx/memcached team absorb the biggest part of the load. Note that if your URL scheme for the AJAX traffic is well designed (REST is best, IMHO), you can put most of the DB right in memcached, and any POST (which WILL pass to the app) can preemptively update the cache. A: On the DB question, I'd say PostgreSQL scales better and has better data integrity than MySQL. For a small site MySQL might be faster, but from what I've heard it slows significantly as the size of the database grows. (Note: I've never used MySQL for a large database, so you should probably get a second opinion about its scalability.) But PostgreSQL definitely scales well, and would be a good choice for a high traffic site. A: Going to need more data. Jeff had a few articles on the same problems and the answer was to wait till you hit a performance issue. To start with - who is hosting and what do they have available? What are your in-house talent skill sets? Are you going to be hiring an outside firm? What do they recommend? Brand new project w/ a team willing to learn a new framework? 2nd thing is to do some mockups - how is the interface going to work? What data does it need to load and persist? The idea is to keep your traffic between the web and db side down. e.g. no chatty pages with lots of queries. etc. Once you have a better idea of the data requirements and flow - then work on the database design. There are plenty of rules to follow but one of the better ones is to follow normalization rules (yea, I'm a db guy, why?) Now you have a couple of pages built - run your tests. Are you having a problem? Yes, now look at what it is. Page serving or db pulls? Measure, then pick a course of action. A: I would go with nginx + php + xcache + postgresql
What would you recommend for a high traffic ajax intensive website?
For a website like reddit with lots of up/down votes and lots of comments per topic, what should I go with? Lighttpd/Php or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, what would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL?
[ "I can't speak to the MySQL/PostgreSQL question as I have limited experience with Postgres, but my Masters research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware.\nOf course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes.\n", "The ideal setup would be close to this:\n\nIn short, nginx is a fast and light webserver/front-proxy with a unique module that let's it fetch data directly from memcached's RAM store, without hitting the disk, or any dynamic webapp. Of course, if the request's URL wasn't already cached (or if it has expired), the request proceeds to the webapp as usual. The genius part is that when the webapp has generated the response, a copy of it goes to memcached, ready to be reused.\nAll this is perfectly applicable not only to webpages, but to AJAX query/responses.\nin the article the 'back' servers are http, and specifically talk about mongrel. It would be even better if the back were FastCGI and other (faster?) framework; but it's a lot less critical, since the nginx/memcached team absorb the biggest part of the load.\nnote that if your url scheme for the AJAX traffic is well designed (REST is best, IMHO), you can put most of the DB right in memcached, and any POST (which WILL pass to the app) can preemptively update the cache.\n", "On the DB question, I'd say PostgreSQL scales better and has better data integrity than MySQL. For a small site MySQL might be faster, but from what I've heard it slows significantly as the size of the database grows. (Note: I've never used MySQL for a large database, so you should probably get a second opinion about its scalability.) But PostgreSQL definitely scales well, and would be a good choice for a high traffic site.\n", "Going to need more data. Jeff had a few articles on the same problems and the answer was to wait till you hit a performance issue.\nto start with - who is hosting and what do they have available ? what's your in house talent skill sets ? Are you going to be hiring an outside firm ? what do they recommend ? brand new project w/ a team willing to learn a new framework ?\n2nd thing is to do some mockups - how is the interface going to work. what data does it need to load and persist ? the idea is to keep your traffic between the web and db side down. e.g. no chatty pages with lots of queries. etc.\nOnce you have a better idea of the data requirements and flow - then work on the database design. there are plenty of rules to follow but one of the better ones is to follow normalization rules (yea i'm a db guy why ?)\nNow you have a couple of pages build - run your tests. are you having a problem ? Yes, now look at what is it. Page serving or db pulls ? Measure then pick a course of action.\n", "I would go with nginx + php + xcache + postgresql\n" ]
[ 8, 8, 3, 2, 2 ]
[]
[]
[ "cherrypy", "high_load", "lighttpd", "php", "python" ]
stackoverflow_0000204802_cherrypy_high_load_lighttpd_php_python.txt
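On the Python side of the nginx/memcached setup sketched above, the application can pre-populate the cache whenever it generates a page or AJAX response. A rough sketch using the python-memcached client; the server address, key scheme, and the build_topic_html helper are illustrative placeholders, not part of any framework:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def render_topic(topic_id):
    html = build_topic_html(topic_id)  # placeholder for the app's renderer
    # store under the URL nginx will look up; expire after 60 seconds
    mc.set('/topic/%d' % topic_id, html, time=60)
    return html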
Q: How does one add a svn repository build number to Python code? EDIT: This question duplicates How to access the current Subversion build number? (Thanks for the heads up, Charles!) Hi there, This question is similar to Getting the subversion repository number into code The differences being: I would like to add the revision number to Python I want the revision of the repository (not the checked out file) I.e. I would like to extract the Revision number from the return from 'svn info', like so: $ svn info Path: . URL: svn://localhost/B/trunk Repository Root: svn://localhost/B Revision: 375 Node Kind: directory Schedule: normal Last Changed Author: bmh Last Changed Rev: 375 Last Changed Date: 2008-10-27 12:09:00 -0400 (Mon, 27 Oct 2008) I want a variable with 375 (the Revision). It's easy enough to put $Rev$ into a variable to keep track of changes on a file. However, I would like to keep track of the repository's version, and I understand (and it seems based on my tests) that $Rev$ only updates when the file changes. My initial thoughts turn to using the svn/libsvn module built in to Python, though I can't find any documentation on or examples of how to use them. Alternatively, I've thought of calling 'svn info' and regex'ing the code out, though that seems rather brutal. :) Help would be most appreciated. Thanks & Cheers. A: There is a command called svnversion which comes with subversion and is meant to solve exactly that kind of problem. A: Stolen directly from django: def get_svn_revision(path=None): rev = None if path is None: path = MODULE.__path__[0] entries_path = '%s/.svn/entries' % path if os.path.exists(entries_path): entries = open(entries_path, 'r').read() # Versions >= 7 of the entries file are flat text. The first line is # the version number. The next set of digits after 'dir' is the revision. if re.match('(\d+)', entries): rev_match = re.search('\d+\s+dir\s+(\d+)', entries) if rev_match: rev = rev_match.groups()[0] # Older XML versions of the file specify revision as an attribute of # the first entries node. else: from xml.dom import minidom dom = minidom.parse(entries_path) rev = dom.getElementsByTagName('entry')[0].getAttribute('revision') if rev: return u'SVN-%s' % rev return u'SVN-unknown' Adapt as appropriate. You might want to change MODULE to the name of one of your code modules. This code has the advantage of working even if the destination system does not have subversion installed. A: Python has direct bindings to libsvn, so you don't need to invoke the command line client at all. See this blog post for more details. EDIT: You can basically do something like this: from svn import fs, repos, core repository = repos.open(root_path) fs_ptr = repos.fs(repository) youngest_revision_number = fs.youngest_rev(fs_ptr) A: I use a technique very similar to this in order to show the current subversion revision number in my shell: svnRev=$(echo "$(svn info)" | grep "^Revision" | awk -F": " '{print $2};') echo $svnRev It works very well for me. Why do you want the python files to change every time the version number of the entire repository is incremented? This will make doing things like doing a diff between two files annoying if one is from the repo, and the other is from a tarball. A: If you want to have a variable in one source file that can be set to the current working copy revision, and does not rely on subversion and a working copy being actually available at the time you run your program, then SubWCRev may be your solution. 
There also seems to be a Linux port called SVNWCRev. Both perform substitution of $WCREV$ with the highest commit level of the working copy. Other information may also be provided. A: Based on CesarB's response and the link Charles provided, I've done the following: try: from subprocess import Popen, PIPE _p = Popen(["svnversion", "."], stdout=PIPE) REVISION= _p.communicate()[0] _p = None # otherwise we get a wild exception when Django auto-reloads except Exception, e: print "Could not get revision number: ", e REVISION="Unknown" Golly Python is cool. :)
How does one add a svn repository build number to Python code?
EDIT: This question duplicates How to access the current Subversion build number? (Thanks for the heads up, Charles!) Hi there, This question is similar to Getting the subversion repository number into code The differences being: I would like to add the revision number to Python I want the revision of the repository (not the checked out file) I.e. I would like to extract the Revision number from the return from 'svn info', like so: $ svn info Path: . URL: svn://localhost/B/trunk Repository Root: svn://localhost/B Revision: 375 Node Kind: directory Schedule: normal Last Changed Author: bmh Last Changed Rev: 375 Last Changed Date: 2008-10-27 12:09:00 -0400 (Mon, 27 Oct 2008) I want a variable with 375 (the Revision). It's easy enough to put $Rev$ into a variable to keep track of changes on a file. However, I would like to keep track of the repository's version, and I understand (and it seems based on my tests) that $Rev$ only updates when the file changes. My initial thoughts turn to using the svn/libsvn module built in to Python, though I can't find any documentation on or examples of how to use them. Alternatively, I've thought of calling 'svn info' and regex'ing the code out, though that seems rather brutal. :) Help would be most appreciated. Thanks & Cheers.
[ "There is a command called svnversion which comes with subversion and is meant to solve exactly that kind of problem.\n", "Stolen directly from django:\ndef get_svn_revision(path=None):\n rev = None\n if path is None:\n path = MODULE.__path__[0]\n entries_path = '%s/.svn/entries' % path\n\n if os.path.exists(entries_path):\n entries = open(entries_path, 'r').read()\n # Versions >= 7 of the entries file are flat text. The first line is\n # the version number. The next set of digits after 'dir' is the revision.\n if re.match('(\\d+)', entries):\n rev_match = re.search('\\d+\\s+dir\\s+(\\d+)', entries)\n if rev_match:\n rev = rev_match.groups()[0]\n # Older XML versions of the file specify revision as an attribute of\n # the first entries node.\n else:\n from xml.dom import minidom\n dom = minidom.parse(entries_path)\n rev = dom.getElementsByTagName('entry')[0].getAttribute('revision')\n\n if rev:\n return u'SVN-%s' % rev\n return u'SVN-unknown'\n\nAdapt as appropriate. YOu might want to change MODULE for the name of one of your codemodules.\nThis code has the advantage of working even if the destination system does not have subversion installed.\n", "Python has direct bindings to libsvn, so you don't need to invoke the command line client at all. See this blog post for more details.\nEDIT: You can basically do something like this:\nfrom svn import fs, repos, core\nrepository = repos.open(root_path)\nfs_ptr = repos.fs(repository)\nyoungest_revision_number = fs.youngest_rev(fs_ptr)\n\n", "I use a technique very similar to this in order to show the current subversion revision number in my shell:\nsvnRev=$(echo \"$(svn info)\" | grep \"^Revision\" | awk -F\": \" '{print $2};')\necho $svnRev\n\nIt works very well for me.\nWhy do you want the python files to change every time the version number of the entire repository is incremented? This will make doing things like doing a diff between two files annoying if one is from the repo, and the other is from a tarball..\n", "If you want to have a variable in one source file that can be set to the current working copy revision, and does not replay on subversion and a working copy being actually available at the time you run your program, then SubWCRev my be your solution.\nThere also seems to be a linux port called SVNWCRev\nBoth perform substitution of $WCREV$ with the highest commit level of the working copy. Other information may also be provided.\n", "Based on CesarB's response and the link Charles provided, I've done the following:\ntry:\n from subprocess import Popen, PIPE\n _p = Popen([\"svnversion\", \".\"], stdout=PIPE)\n REVISION= _p.communicate()[0]\n _p = None # otherwise we get a wild exception when Django auto-reloads\nexcept Exception, e:\n print \"Could not get revision number: \", e\n REVISION=\"Unknown\"\n\nGolly Python is cool. :)\n" ]
[ 3, 3, 2, 1, 1, 0 ]
[]
[]
[ "python", "svn" ]
stackoverflow_0000242295_python_svn.txt
Q: How do I get the name of a function or method from within a Python function or method? I feel like I should know this, but I haven't been able to figure it out... I want to get the name of a method--which happens to be an integration test--from inside it so it can print out some diagnostic text. I can, of course, just hard-code the method's name in the string, but I'd like to make the test a little more DRY if possible. A: This seems to be the simplest way, using the inspect module: import inspect def somefunc(a,b,c): print "My name is: %s" % inspect.stack()[0][3] You could generalise this with: def funcname(): return inspect.stack()[1][3] def somefunc(a,b,c): print "My name is: %s" % funcname() Credit to Stefaan Lippens; this was found via Google. A: The answers involving introspection via inspect and the like are reasonable. But there may be another option, depending on your situation: If your integration test is written with the unittest module, then you could use self.id() within your TestCase. A: This decorator makes the name of the method available inside the function by passing it as a keyword argument. from functools import wraps def pass_func_name(func): "Name of decorated function will be passed as keyword arg _func_name" @wraps(func) def _pass_name(*args, **kwds): kwds['_func_name'] = func.func_name return func(*args, **kwds) return _pass_name You would use it this way: @pass_func_name def sum(a, b, _func_name): print "running function %s" % _func_name return a + b print sum(2, 4) But maybe you'd want to write what you want directly inside the decorator itself. Then the code is an example of a way to get the function name in a decorator. If you give more details about what you want to do in the function that requires the name, maybe I can suggest something else. A: # file "foo.py" import sys import os def LINE( back = 0 ): return sys._getframe( back + 1 ).f_lineno def FILE( back = 0 ): return sys._getframe( back + 1 ).f_code.co_filename def FUNC( back = 0): return sys._getframe( back + 1 ).f_code.co_name def WHERE( back = 0 ): frame = sys._getframe( back + 1 ) return "%s/%s %s()" % ( os.path.basename( frame.f_code.co_filename ), frame.f_lineno, frame.f_code.co_name ) def testit(): print "Here in %s, file %s, line %s" % ( FUNC(), FILE(), LINE() ) print "WHERE says '%s'" % WHERE() testit() Output: $ python foo.py Here in testit, file foo.py, line 17 WHERE says 'foo.py/18 testit()' Use "back = 1" to find info regarding two levels back down the stack, etc. A: I think the traceback module might have what you're looking for. In particular, the extract_stack function looks like it will do the job.
How do I get the name of a function or method from within a Python function or method?
I feel like I should know this, but I haven't been able to figure it out... I want to get the name of a method--which happens to be an integration test--from inside it so it can print out some diagnostic text. I can, of course, just hard-code the method's name in the string, but I'd like to make the test a little more DRY if possible.
[ "This seems to be the simplest way using module inspect:\nimport inspect\ndef somefunc(a,b,c):\n print \"My name is: %s\" % inspect.stack()[0][3]\n\nYou could generalise this with:\ndef funcname():\n return inspect.stack()[1][3]\n\ndef somefunc(a,b,c):\n print \"My name is: %s\" % funcname()\n\nCredit to Stefaan Lippens which was found via google.\n", "The answers involving introspection via inspect and the like are reasonable. But there may be another option, depending on your situation:\nIf your integration test is written with the unittest module, then you could use self.id() within your TestCase.\n", "This decorator makes the name of the method available inside the function by passing it as a keyword argument.\nfrom functools import wraps\ndef pass_func_name(func):\n \"Name of decorated function will be passed as keyword arg _func_name\"\n @wraps(func)\n def _pass_name(*args, **kwds):\n kwds['_func_name'] = func.func_name\n return func(*args, **kwds)\n return _pass_name\n\nYou would use it this way:\n@pass_func_name\ndef sum(a, b, _func_name):\n print \"running function %s\" % _func_name\n return a + b\n\nprint sum(2, 4)\n\nBut maybe you'd want to write what you want directly inside the decorator itself. Then the code is an example of a way to get the function name in a decorator. If you give more details about what you want to do in the function, that requires the name, maybe I can suggest something else.\n", "# file \"foo.py\" \nimport sys\nimport os\n\ndef LINE( back = 0 ):\n return sys._getframe( back + 1 ).f_lineno\ndef FILE( back = 0 ):\n return sys._getframe( back + 1 ).f_code.co_filename\ndef FUNC( back = 0):\n return sys._getframe( back + 1 ).f_code.co_name\ndef WHERE( back = 0 ):\n frame = sys._getframe( back + 1 )\n return \"%s/%s %s()\" % ( os.path.basename( frame.f_code.co_filename ), \n frame.f_lineno, frame.f_code.co_name )\n\ndef testit():\n print \"Here in %s, file %s, line %s\" % ( FUNC(), FILE(), LINE() )\n print \"WHERE says '%s'\" % WHERE()\n\ntestit()\n\nOutput:\n$ python foo.py\nHere in testit, file foo.py, line 17\nWHERE says 'foo.py/18 testit()'\n\nUse \"back = 1\" to find info regarding two levels back down the stack, etc.\n", "I think the traceback module might have what you're looking for. In particular, the extract_stack function looks like it will do the job.\n" ]
[ 57, 24, 16, 10, 3 ]
[]
[]
[ "python" ]
stackoverflow_0000245304_python.txt
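The unittest suggestion in the record above comes without code, so here is a minimal sketch of how self.id() reads inside a TestCase. The class and method names are made up for illustration; the exact dotted name self.id() returns depends on how the test is loaded.

import unittest

class IntegrationTest(unittest.TestCase):
    def test_connect(self):
        # self.id() gives the running test's full dotted name,
        # e.g. "__main__.IntegrationTest.test_connect"
        print "Diagnostics for %s" % self.id()

if __name__ == "__main__":
    unittest.main()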
Q: Upload a file in Django and then send it somewhere else through REST? I am building a simple Django app that will use scribd to display documents. I would like to have a page where the administrator can upload documents to scribd through the website, since I need to know a few things about it before it gets to scribd. What is the best/easiest way to do this: display an upload page and then take the file that is uploaded and send it to scribd through the docs.upload method of their API? I'm a little new at this Python/Django/REST API thing, so sorry if this is too many questions at once. A: That is quite a few questions. Handling the file upload is pretty straightforward with Django; see the File Uploads documentation for examples. In short, you can access the uploaded file via request.FILES['file']. To call the scribd API you can use urllib2; see this Hackoarama page for instructions. urllib2 can be a little convoluted but it works once you get the hang of it. You can call the scribd API directly from within your Django view, but it'd be better practice to separate it out: from within your Django view, save the file somewhere on disk and put an "upload this" message on a messaging system (e.g. beanstalkd). Have a separate process pick up the message and upload the file to scribd. That way you shield your HTTP process and user from any issues accessing the API and the associated delays. A: What you want to do (at least from what I read here and on the Django documentation site) is create a custom storage system. This should give you exactly what you need - it's the motivation they use to start the example, after all!
Upload a file in Django and then send it somewhere else through REST?
I am building a simple Django app that will use scribd to display documents. I would like to have a page where the administrator can upload documents to scribd through the website, since I need to know a few things about it before it gets to scribd. What is the best/easiest way to do this: display an upload page and then take the file that is uploaded and send it to scribd through the docs.upload method of their API? I'm a little new at this Python/Django/REST API thing, so sorry if this is too many questions at once.
[ "That is quite a few questions. \nHandling the file upload is pretty straight-forward with Django, see the File Uploads documentation for examples. In short you can access the uploaded file via request.FILES['file'].\nTo call the scribd api you can use urllib2; see this Hackoarama page for instructions. urllib2 can be a little convoluted but it works once you get a hang of it.\nYou can call the scribd api directly from within your Django view, but it'd be better practice to separate it out: from within your Django view save the file somewhere on disk and put an \"upload this\" message on messaging system (eg. beanstalkd). Have a separate process pick up the message and upload the file to scribd. That way you shield your http process and user from any issues accessing the API and the associated delays.\n", "What you want to do (at least from what I read here and on the Django documentation site) is create a custom storage system.\nThis should give you exactly what you need - it's the motivation they use to start the example, after all!\n" ]
[ 3, 1 ]
[]
[]
[ "api", "django", "python", "rest", "scribd" ]
stackoverflow_0000245725_api_django_python_rest_scribd.txt
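A rough sketch of the "save it locally first" view described in the record above, assuming Django's file upload API. The scribd hand-off is deliberately left as a stub: docs.upload needs a multipart POST, and the endpoint and parameter names should be taken from the scribd documentation, not from this sketch. enqueue_for_scribd is a hypothetical stand-in for whatever queues the work.

# views.py -- a sketch only
from django.http import HttpResponse
import os

def upload_document(request):
    if request.method == 'POST':
        uploaded = request.FILES['file']
        path = os.path.join('/tmp', uploaded.name)
        destination = open(path, 'wb')
        for chunk in uploaded.chunks():  # avoids slurping large files into memory
            destination.write(chunk)
        destination.close()
        # record whatever metadata you need about the document here,
        # then hand off to a worker that calls scribd's docs.upload
        enqueue_for_scribd(path)  # hypothetical
        return HttpResponse('uploaded')
    return HttpResponse('POST a file', status=405)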
Q: Porting MATLAB functions to Scilab. How do I use symbolic? I'm porting some MATLAB functions to Scilab. The cool thing is that there is a conversion toolbox that makes things very easy. The problem is I did not find the counterpart to the syms function, and the symbolic toolbox in general. (I'd like a port of the Control System Toolbox too, and I'm still searching for some functions I may need). The only thing about symbolic toolbox I've found is this, but it was a little tricky and not so easy (actually I was not able to set it up correctly in 30 minutes, and I gave up for now. I'm going to try later), and it needs Maxima to be installed. Does anyone know anything about that? Scilab is not exactly a must. The project aims to give a free and open-source alternative to MATLAB. I saw there is SymPy for Python, and I could just use it with SciPy, but I'd lose the conversion toolbox thing :\ That said, what would be better? Getting Scilab and Maxima to work together, or moving to Python & co.? This is the start of the project, so the earlier I choose this, the better. A: See Bye MATLAB, hello Python, thanks Sage for a first-hand experience of migrating from MATLAB to Python. A: Not to discourage your project, but if you just want a free and open-source alternative to MATLAB, have you looked at the Octave project? Contributing there might be more productive than building your own MATLAB alternative. If your project requires the functionality of MATLAB's Symbolic, then take a look at http://wiki.octave.org/wiki.pl?CategorySymbolic From my quick Google search I didn't find anything comparable to MATLAB's Simulink. Also, Python and SciPy do have most of the functionality of MATLAB, and I guess Scilab's conversion utility would be useful in porting your own M-Files into Scilab code. Your question seems to imply you want to port over MATLAB Toolboxes: The only thing about symbolic toolbox I've found is this... I hope I am just misinterpreting you. If you are, then there might be licensing issues if you were to distribute your system because of the MATLAB Toolbox. Just a thought. But perhaps you wish to port your MATLAB code, so that it does not have the MATLAB dependency. Update For Control System functionality in Octave, I just found that Octave does have a toolbox, see: Octave Control Systems Toolbox It has some of the functionality of Simulink, but it doesn't seem to have the graphical interface for building block diagrams.
Porting MATLAB functions to Scilab. How do I use symbolic?
I'm porting some MATLAB functions to Scilab. The cool thing is that there is a conversion toolbox that makes things very easy. The problem is I did not find the counterpart to the syms function, and the symbolic toolbox in general. (I'd like a port of the Control System Toolbox too, and I'm still searching for some functions I may need). The only thing about symbolic toolbox I've found is this, but it was a little tricky and not so easy (actually I was not able to set it up correctly in 30 minutes, and I gave up for now. I'm going to try later), and it needs Maxima to be installed. Does anyone know anything about that? Scilab is not exactly a must. The project aims to give a free and open-source alternative to MATLAB. I saw there is SymPy for Python, and I could just use it with SciPy, but I'd lose the conversion toolbox thing :\ That said, what would be better? Getting Scilab and Maxima to work together, or moving to Python & co.? This is the start of the project, so the earlier I choose this, the better.
[ "See Bye MATLAB, hello Python, thanks Sage for a first-hand experience of migrating from MATLAB to Python.\n", "Not to discourage your project, but if you just want a free and open source alternative to MATLAB, have you looked at the Octave project? Contributing there might be more productive than building your own MATLAB alternative.\nIf your project requires the functionality of MATLAB's Symbolic then take a look at\n\nhttp://wiki.octave.org/wiki.pl?CategorySymbolic\n\nFrom my quick Google search I didn't find anything comparable to MATLAB's Simulink.\nAlso, Python and SciPy do have most of the functionality of MATLAB, and I guess Scilab's conversion utility would be useful in porting your own M-Files into Scilab code.\nYour question seems to imply you want to port over MATLAB Toolboxes\n\nThe only thing about symbolic toolbox I've found is this...\n\nI hope I am just misinterpreting you. If you are then there might be licensing issues if you were to distribute your system because the MATLAB Toolbox. Just a thought. But perhaps you wish to port your MATLAB code to, so that it doesn't not have the MATLAB dependency.\nUpdate\nFor Control System functionality Octave, I just found that Octave does have a toolbox, see:\n\nOctave Control Systems Toolbox\n\nWhich has some of the functionality of Simulink, but it doesn't seem to have the graphical interface for building block diagrams.\n" ]
[ 3, 1 ]
[]
[]
[ "matlab", "porting", "python", "scilab", "sympy" ]
stackoverflow_0000244803_matlab_porting_python_scilab_sympy.txt
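Since the record above mentions SymPy without an example, here is roughly what the MATLAB syms workflow looks like in it. A minimal sketch, assuming SymPy is installed; the expression is arbitrary.

from sympy import Symbol, diff, integrate, sin

x = Symbol('x')            # roughly the counterpart of MATLAB's `syms x`
expr = x**2 * sin(x)
print diff(expr, x)        # symbolic derivative
print integrate(expr, x)   # symbolic antiderivative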
Q: Microphone access in Python Can I access a user's microphone in Python? Sorry, I forgot not everyone is a mind reader: Windows, XP at minimum, but Vista support would be very good. A: I got the job done with pyaudio. It comes with a binary installer for Windows and there's even an example of how to record through the microphone and save to a wave file. Nice! I used it on Windows XP; not sure how it will do on Vista though, sorry. A: The best way to go about it would be to use the ctypes library and use WinMM from that. mixerOpen will open a microphone device and you can read the data easily from there. Should be very straightforward. A: You might try SWMixer.
Microphone access in Python
Can I access a user's microphone in Python? Sorry, I forgot not everyone is a mind reader: Windows, XP at minimum, but Vista support would be very good.
[ "I got the job done with pyaudio\nIt comes with a binary installer for windows and there's even an example on how to record through the microphone and save to a wave file. Nice! I used it on Windows XP, not sure how it will do on Vista though, sorry.\n", "Best way to go about it would be to use the ctypes library and use WinMM from that. mixerOpen will open a microphone device and you can read the data easily from there. Should be very straightforward.\n", "You might try SWMixer.\n" ]
[ 17, 4, 2 ]
[]
[]
[ "microphone", "python", "windows" ]
stackoverflow_0000193789_microphone_python_windows.txt
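The PyAudio answer above points at the bundled example without showing it; here is a condensed record-a-few-seconds sketch along the same lines. The sample rate, chunk size, and output filename are arbitrary choices, and the default input device is assumed.

import pyaudio, wave

CHUNK, RATE, SECONDS = 1024, 44100, 5
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
# integer division keeps this at roughly SECONDS of audio
frames = [stream.read(CHUNK) for _ in range(RATE / CHUNK * SECONDS)]
stream.stop_stream()
stream.close()

out = wave.open('recording.wav', 'wb')
out.setnchannels(1)
out.setsampwidth(pa.get_sample_size(pyaudio.paInt16))
out.setframerate(RATE)
out.writeframes(''.join(frames))
out.close()
pa.terminate()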
Q: Alternatives to a wizard I'm making a program that fits the wizard concept ideally; the user is walked through the steps to create a character for a game. However, I'm realizing that the limitations of the wizard are making it difficult to design "elegant" logic flow. For example, because all pages of the wizard are initalized at the same time, I can't have the values entered in one page available to the next one. I have to put a button on each page to get the values from a previous page rather than simply having fields auto-populated. I've thought about alternatives to using the wizard. I think the best idea is to have some buttons on one panel that change the information on another panel, e.g. a splitter window. However, I can't find any documentation in wxPython on how to dynamically change the panel. Everything I've found so far is really pretty static, hence the use of the wizard. Even the "wxPython in Action" book doesn't mention it. Are there any tutorials for making "dynamic panels" or better management of a wizard? A: Here is a simple example. This way you can make your "wizard" work like a finite state machine where states are different pages that are initialized on demand. Also, the data is shared between pages. import wx import wx.lib.newevent (PageChangeEvent, EVT_PAGE_CHANGE) = wx.lib.newevent.NewEvent() class Data: foo = None bar = None class Page1(wx.Panel): def __init__(self, parent, data): wx.Panel.__init__(self, parent) self.parent = parent self.data = data sizer = wx.BoxSizer(wx.VERTICAL) self.SetSizer(sizer) label = wx.StaticText(self, label="Page 1 - foo") self.foo = wx.TextCtrl(self) goto_page2 = wx.Button(self, label="Go to page 2") for c in (label, self.foo, goto_page2): sizer.Add(c, 0, wx.TOP, 5) goto_page2.Bind(wx.EVT_BUTTON, self.OnPage2) def OnPage2(self, event): self.data.foo = self.foo.Value wx.PostEvent(self.parent, PageChangeEvent(page=Page2)) class Page2(wx.Panel): def __init__(self, parent, data): wx.Panel.__init__(self, parent) self.parent = parent self.data = data sizer = wx.BoxSizer(wx.VERTICAL) self.SetSizer(sizer) label = wx.StaticText(self, label="Page 2 - bar") self.bar = wx.TextCtrl(self) goto_finish = wx.Button(self, label="Finish") for c in (label, self.bar, goto_finish): sizer.Add(c, 0, wx.TOP, 5) goto_finish.Bind(wx.EVT_BUTTON, self.OnFinish) def OnFinish(self, event): self.data.bar = self.bar.Value wx.PostEvent(self.parent, PageChangeEvent(page=finish)) def finish(parent, data): wx.MessageBox("foo = %s\nbar = %s" % (data.foo, data.bar)) wx.GetApp().ExitMainLoop() class Test(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.data = Data() self.current_page = None self.Bind(EVT_PAGE_CHANGE, self.OnPageChange) wx.PostEvent(self, PageChangeEvent(page=Page1)) def OnPageChange(self, event): page = event.page(self, self.data) if page == None: return if self.current_page: self.current_page.Destroy() self.current_page = page page.Layout() page.Fit() page.Refresh() app = wx.PySimpleApp() app.TopWindow = Test() app.TopWindow.Show() app.MainLoop() A: The wxPython demo has an example of a "dynamic" wizard. Pages override GetNext() and GetPrev() to show pages dynamically. This shows the basic technique; you can extend it to add and remove pages, change pages on the fly, and rearrange pages dynamically. The wizard class is just a convenience, though. You can modify it, or create your own implementation. 
A style that seems popular nowadays is to use an HTML-based presentation; you can emulate this with the wxHtml control, or the IEHtmlWindow control if your app is Windows only. A: You could try using a workflow engine like WFTK. In this particular case author has done some work on wx-based apps using WFTK and can probably direct you to examples. A: I'd get rid of wizard in whole. They are the most unpleasant things I've ever used. The problem that requires a wizard-application where you click 'next' is perhaps a problem where you could apply a better user interface in a bit different manner. Instead of bringing up a dialog with annoying 'next' -button. Do this: Bring up a page. When the user inserts the information to the page, extend or shorten it according to the input. If your application needs to do some processing to continue, and it's impossible to revert after that, write a new page or disable the earlier section of the current page. When you don't need any input from the user anymore or the app is finished, you can show a button or enable an existing such. I don't mean you should implement it all in browser. Make simply a scrolling container that can contain buttons and labels in a flat list. Benefit: The user can just click a tab, and you are encouraged to put all the processing into the end of filling the page. A: It should be noted that a Wizard should be the interface for mutli-step, infrequently-performed tasks. The wizard is used to guide the user through something they don't really understand, because they almost never do it. And if some users might do the task frequently, you want to give those power users a lightweight interface to do the same thing - even if it less self explanatory. See: Windows Vista User Experience Guidelines - Top Violations Wizards Consider lightweight alternatives first, such as dialog boxes, task panes, or single pages. Wizards are a heavy UI, best used for multi-step, infrequently performed task. You don't have to use wizards—you can provide helpful information and assistance in any UI.
Alternatives to a wizard
I'm making a program that fits the wizard concept ideally; the user is walked through the steps to create a character for a game. However, I'm realizing that the limitations of the wizard are making it difficult to design "elegant" logic flow. For example, because all pages of the wizard are initialized at the same time, I can't have the values entered in one page available to the next one. I have to put a button on each page to get the values from a previous page rather than simply having fields auto-populated. I've thought about alternatives to using the wizard. I think the best idea is to have some buttons on one panel that change the information on another panel, e.g. a splitter window. However, I can't find any documentation in wxPython on how to dynamically change the panel. Everything I've found so far is really pretty static, hence the use of the wizard. Even the "wxPython in Action" book doesn't mention it. Are there any tutorials for making "dynamic panels" or better management of a wizard?
[ "Here is a simple example. This way you can make your \"wizard\" work like a finite state machine where states are different pages that are initialized on demand. Also, the data is shared between pages.\nimport wx\nimport wx.lib.newevent\n\n\n(PageChangeEvent, EVT_PAGE_CHANGE) = wx.lib.newevent.NewEvent()\n\n\nclass Data:\n foo = None\n bar = None\n\n\nclass Page1(wx.Panel):\n def __init__(self, parent, data):\n wx.Panel.__init__(self, parent)\n self.parent = parent\n self.data = data\n\n sizer = wx.BoxSizer(wx.VERTICAL)\n self.SetSizer(sizer)\n label = wx.StaticText(self, label=\"Page 1 - foo\")\n self.foo = wx.TextCtrl(self)\n goto_page2 = wx.Button(self, label=\"Go to page 2\")\n\n for c in (label, self.foo, goto_page2):\n sizer.Add(c, 0, wx.TOP, 5)\n\n goto_page2.Bind(wx.EVT_BUTTON, self.OnPage2)\n\n def OnPage2(self, event):\n self.data.foo = self.foo.Value\n wx.PostEvent(self.parent, PageChangeEvent(page=Page2))\n\n\nclass Page2(wx.Panel):\n def __init__(self, parent, data):\n wx.Panel.__init__(self, parent)\n self.parent = parent\n self.data = data\n\n sizer = wx.BoxSizer(wx.VERTICAL)\n self.SetSizer(sizer)\n label = wx.StaticText(self, label=\"Page 2 - bar\")\n self.bar = wx.TextCtrl(self)\n goto_finish = wx.Button(self, label=\"Finish\")\n\n for c in (label, self.bar, goto_finish):\n sizer.Add(c, 0, wx.TOP, 5)\n\n goto_finish.Bind(wx.EVT_BUTTON, self.OnFinish)\n\n def OnFinish(self, event):\n self.data.bar = self.bar.Value\n wx.PostEvent(self.parent, PageChangeEvent(page=finish))\n\n\ndef finish(parent, data):\n wx.MessageBox(\"foo = %s\\nbar = %s\" % (data.foo, data.bar))\n wx.GetApp().ExitMainLoop()\n\n\nclass Test(wx.Frame):\n def __init__(self):\n wx.Frame.__init__(self, None)\n self.data = Data()\n self.current_page = None\n\n self.Bind(EVT_PAGE_CHANGE, self.OnPageChange)\n wx.PostEvent(self, PageChangeEvent(page=Page1))\n\n def OnPageChange(self, event):\n page = event.page(self, self.data)\n if page == None:\n return\n if self.current_page:\n self.current_page.Destroy()\n self.current_page = page\n page.Layout()\n page.Fit()\n page.Refresh()\n\n\napp = wx.PySimpleApp()\napp.TopWindow = Test()\napp.TopWindow.Show()\napp.MainLoop()\n\n", "The wxPython demo has an example of a \"dynamic\" wizard. Pages override GetNext() and GetPrev() to show pages dynamically. This shows the basic technique; you can extend it to add and remove pages, change pages on the fly, and rearrange pages dynamically.\nThe wizard class is just a convenience, though. You can modify it, or create your own implementation. A style that seems popular nowadays is to use an HTML-based presentation; you can emulate this with the wxHtml control, or the IEHtmlWindow control if your app is Windows only.\n", "You could try using a workflow engine like WFTK. In this particular case author has done some work on wx-based apps using WFTK and can probably direct you to examples.\n", "I'd get rid of wizard in whole. They are the most unpleasant things I've ever used.\nThe problem that requires a wizard-application where you click 'next' is perhaps a problem where you could apply a better user interface in a bit different manner. Instead of bringing up a dialog with annoying 'next' -button. Do this:\nBring up a page. When the user inserts the information to the page, extend or shorten it according to the input. If your application needs to do some processing to continue, and it's impossible to revert after that, write a new page or disable the earlier section of the current page. 
When you don't need any input from the user anymore or the app is finished, you can show a button or enable an existing such.\nI don't mean you should implement it all in browser. Make simply a scrolling container that can contain buttons and labels in a flat list.\nBenefit: The user can just click a tab, and you are encouraged to put all the processing into the end of filling the page.\n", "It should be noted that a Wizard should be the interface for mutli-step, infrequently-performed tasks. The wizard is used to guide the user through something they don't really understand, because they almost never do it.\nAnd if some users might do the task frequently, you want to give those power users a lightweight interface to do the same thing - even if it less self explanatory.\nSee: Windows Vista User Experience Guidelines - Top Violations\n\nWizards\nConsider lightweight alternatives first, such as dialog boxes, task\n panes, or single pages. Wizards are\n a heavy UI, best used for multi-step,\n infrequently performed task. You don't\n have to use wizards—you can provide\n helpful information and assistance in\n any UI.\n\n" ]
[ 5, 1, 0, 0, 0 ]
[]
[]
[ "python", "wizard", "wxpython" ]
stackoverflow_0000224337_python_wizard_wxpython.txt
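The second answer in the record above mentions overriding GetNext() and GetPrev() without showing it. A bare-bones page in the spirit of the wxPython demo might look like the following sketch; the point is that the linkage is computed when asked for, which is what makes the wizard dynamic. Class and attribute names here are illustrative, not from the demo itself.

import wx
import wx.wizard as wiz

class DynamicPage(wiz.PyWizardPage):
    """A wizard page whose neighbours are decided at ask-time."""
    def __init__(self, parent, title):
        wiz.PyWizardPage.__init__(self, parent)
        self.next_page = None
        self.prev_page = None
        wx.StaticText(self, -1, title)

    def GetNext(self):
        # Compute this from the user's input so far to skip,
        # insert, or rearrange pages on the fly.
        return self.next_page

    def GetPrev(self):
        return self.prev_page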
Q: Map two lists into one single list of dictionaries Imagine I have these python lists: keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] Is there a direct or at least a simple way to produce the following list of dictionaries ? [ {'name': 'Monty', 'age': 42}, {'name': 'Matt', 'age': 28}, {'name': 'Frank', 'age': 33} ] A: Here is the zip way def mapper(keys, values): n = len(keys) return [dict(zip(keys, values[i:i + n])) for i in range(0, len(values), n)] A: It's not pretty but here's a one-liner using a list comprehension, zip and stepping: [dict(zip(keys, a)) for a in zip(values[::2], values[1::2])] A: Dumb way, but one that comes immediately to my mind: def fields_from_list(keys, values): iterator = iter(values) while True: yield dict((key, iterator.next()) for key in keys) list(fields_from_list(keys, values)) # to produce a list. A: zip nearly does what you want; unfortunately, rather than cycling the shorter list, it breaks. Perhaps there's a related function that cycles? $ python >>> keys = ['name', 'age'] >>> values = ['Monty', 42, 'Matt', 28, 'Frank', 33] >>> dict(zip(keys, values)) {'age': 42, 'name': 'Monty'} /EDIT: Oh, you want a list of dict. The following works (thanks to Peter, as well): from itertoos import cycle keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] x = zip(cycle(keys), values) map(lambda a: dict(a), zip(x[::2], x[1::2])) A: In the answer by Konrad Rudolph zip nearly does what you want; unfortunately, rather than cycling the shorter list, it breaks. Perhaps there's a related function that cycles? Here's a way: keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] iter_values = iter(values) [dict(zip(keys, iter_values)) for _ in range(len(values) // len(keys))] I will not call it Pythonic (I think it's too clever), but it might be what are looking for. There is no benefit in cycling the keys list using itertools.cycle(), because each traversal of keys corresponds to the creation of one dictionnary. EDIT: Here's another way: def iter_cut(seq, size): for i in range(len(seq) / size): yield seq[i*size:(i+1)*size] keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] [dict(zip(keys, some_values)) for some_values in iter_cut(values, len(keys))] This is much more pythonic: there's a readable utility function with a clear purpose, and the rest of the code flows naturally from it. A: Here's my simple approach. It seems to be close to the idea that @Cheery had except that I destroy the input list. def pack(keys, values): """This function destructively creates a list of dictionaries from the input lists.""" retval = [] while values: d = {} for x in keys: d[x] = values.pop(0) retval.append(d) return retval A: Yet another try, perhaps dumber than the first one: def split_seq(seq, count): i = iter(seq) while True: yield [i.next() for _ in xrange(count)] >>> [dict(zip(keys, rec)) for rec in split_seq(values, len(keys))] [{'age': 42, 'name': 'Monty'}, {'age': 28, 'name': 'Matt'}, {'age': 33, 'name': 'Frank'}] But it's up to you to decide whether it's dumber. A: [dict(zip(keys,values[n:n+len(keys)])) for n in xrange(0,len(values),len(keys)) ] UG-LEEE. I'd hate to see code that looks like that. But it looks right. def dictizer(keys, values): steps = xrange(0,len(values),len(keys)) bites = ( values[n:n+len(keys)] for n in steps) return ( dict(zip(keys,bite)) for bite in bites ) Still a little ugly, but the names help make sense of it.
Map two lists into one single list of dictionaries
Imagine I have these python lists: keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] Is there a direct or at least a simple way to produce the following list of dictionaries ? [ {'name': 'Monty', 'age': 42}, {'name': 'Matt', 'age': 28}, {'name': 'Frank', 'age': 33} ]
[ "Here is the zip way\ndef mapper(keys, values):\n n = len(keys)\n return [dict(zip(keys, values[i:i + n]))\n for i in range(0, len(values), n)]\n\n", "It's not pretty but here's a one-liner using a list comprehension, zip and stepping:\n[dict(zip(keys, a)) for a in zip(values[::2], values[1::2])]\n\n", "Dumb way, but one that comes immediately to my mind:\ndef fields_from_list(keys, values):\n iterator = iter(values)\n while True:\n yield dict((key, iterator.next()) for key in keys)\n\nlist(fields_from_list(keys, values)) # to produce a list.\n\n", "zip nearly does what you want; unfortunately, rather than cycling the shorter list, it breaks. Perhaps there's a related function that cycles?\n$ python\n>>> keys = ['name', 'age']\n>>> values = ['Monty', 42, 'Matt', 28, 'Frank', 33]\n>>> dict(zip(keys, values))\n{'age': 42, 'name': 'Monty'}\n\n/EDIT: Oh, you want a list of dict. The following works (thanks to Peter, as well):\nfrom itertoos import cycle\n\nkeys = ['name', 'age']\nvalues = ['Monty', 42, 'Matt', 28, 'Frank', 33]\n\nx = zip(cycle(keys), values)\nmap(lambda a: dict(a), zip(x[::2], x[1::2]))\n\n", "In the answer by Konrad Rudolph\n\nzip nearly does what you want; unfortunately, rather than cycling the shorter list, it breaks. Perhaps there's a related function that cycles?\n\nHere's a way:\nkeys = ['name', 'age']\nvalues = ['Monty', 42, 'Matt', 28, 'Frank', 33]\niter_values = iter(values)\n[dict(zip(keys, iter_values)) for _ in range(len(values) // len(keys))]\n\nI will not call it Pythonic (I think it's too clever), but it might be what are looking for.\nThere is no benefit in cycling the keys list using itertools.cycle(), because each traversal of keys corresponds to the creation of one dictionnary.\nEDIT: Here's another way:\ndef iter_cut(seq, size):\n for i in range(len(seq) / size):\n yield seq[i*size:(i+1)*size]\n\nkeys = ['name', 'age']\nvalues = ['Monty', 42, 'Matt', 28, 'Frank', 33]\n[dict(zip(keys, some_values)) for some_values in iter_cut(values, len(keys))]\n\nThis is much more pythonic: there's a readable utility function with a clear purpose, and the rest of the code flows naturally from it.\n", "Here's my simple approach. It seems to be close to the idea that @Cheery had except that I destroy the input list.\ndef pack(keys, values):\n \"\"\"This function destructively creates a list of dictionaries from the input lists.\"\"\"\n retval = []\n while values:\n d = {}\n for x in keys:\n d[x] = values.pop(0)\n retval.append(d)\n return retval\n\n", "Yet another try, perhaps dumber than the first one:\ndef split_seq(seq, count):\n i = iter(seq)\n while True:\n yield [i.next() for _ in xrange(count)]\n\n>>> [dict(zip(keys, rec)) for rec in split_seq(values, len(keys))]\n[{'age': 42, 'name': 'Monty'},\n {'age': 28, 'name': 'Matt'},\n {'age': 33, 'name': 'Frank'}]\n\nBut it's up to you to decide whether it's dumber.\n", "[dict(zip(keys,values[n:n+len(keys)])) for n in xrange(0,len(values),len(keys)) ]\n\nUG-LEEE. I'd hate to see code that looks like that. But it looks right.\ndef dictizer(keys, values):\n steps = xrange(0,len(values),len(keys))\n bites = ( values[n:n+len(keys)] for n in steps)\n return ( dict(zip(keys,bite)) for bite in bites )\n\nStill a little ugly, but the names help make sense of it.\n" ]
[ 14, 3, 2, 2, 2, 1, 1, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0000244438_dictionary_list_python.txt
Q: HTTP compliance testing What would you use to perform compliance testing of an HTTP proxy? I've seen two projects so far: Web Polygraph (the feedback I got from a coworker is mostly negative) Funkload A: Take a look here: http://www.measurement-factory.com/ The Co-Advisor product might be what you are after. Note that this is by the same mob that created Web-Polygraph.
HTTP compliance testing
What would you use to perform compliance testing of an HTTP proxy? I've seen two projects so far: Web Polygraph (the feedback I got from a coworker is mostly negative) Funkload
[ "Take a look here: http://www.measurement-factory.com/\nThe Co-Advisor product might be what you are after. Note that this is by the same mob that created Web-Polygraph/\n" ]
[ 1 ]
[]
[]
[ "http", "python", "standards_compliance", "testing" ]
stackoverflow_0000246123_http_python_standards_compliance_testing.txt
Q: Is there something between a normal user account and root? I'm developing an application that manages network interfaces on behalf of the user and it calls out to several external programs (such as ifconfig) that requires root to make changes. (Specifically, changing the IP address of a local interface, etc.) During development, I have been running the IDE as root (ugh) and the debugger as root (double-ugh). Is there a nice way for the end-user to run these under a non-root account? I strongly dislike the size of the attack surface presented by GTK, wxPython, Python, and my application when it runs as root. I have looked into capabilities, but they look half-baked and I'm not sure if I'd be able to use them in Python, especially if they are on a thread basis. The only option I haven't explored is a daemon that has the setuid bit set and does all the root-type stuff on behalf of the UI. I'm hesitant to introduce that complexity this early in the project, as running as root is not a dealbreaker for the users. A: Your idea about the daemon has much merit, despite the complexity it introduces. As long as the actions don't require some user interface interaction as root, a daemon allows you to control what operations are allowed and disallowed. However, you can use SUDO to create a controlled compromise between ROOT and normal users... simply grant SUDO access to the users in question for the specific tools they need. That reduces the attack surface by allowing only "permitted" root launches. A: What you want is a "Group" You create a group, specify that the account wanting to do the action belongs to the group, then you specify that the resource you want access to is a member of that group. Sometimes group management can be kind of irritating, but it should allow you to do anything you want, and it's the user that is authorized, not your program. (If you want your program authorized, you can create a specific user to run it as and give that user the proper group membership, then su to that group within your program to execute the operation without giving the running user the ability.) A: You could create and distribute a selinux policy for your application. Selinux allows the kind of fine-grained access that you need. If you can't or won't use selinux, then the daemon is the way to go. A: I would not run the application full time as root, but you might want to explore making your application setuid root, or setuid to some id that can become root using something like sudo for particular applications. You might be able to set up an account that cannot login, use setuid to change your program's id (temporarily when needed) and have sudo set up to not prompt for password, but always allow access to that account for specific tasks. This way your program has no special privileges when running normally, only elevates it's privileges when needed, and is restricted by sudo to only running certain programs. It's been awhile since I've done much Unix development, so I'm not really sure whether it's possible to set up sudo to not prompt for a password (or even if there is an API for it), but as a fallback you could enable setuid to root only when needed. [EDIT] Looks like sudo has a NOPASSWD mode so I think it should work since you're running the programs as external commands. A: The traditional way would be to create and use a setuid helper to do whatever you need. Note that, however, properly writing a setuid helper is tricky (there are several attack vectors you have to protect against). 
The modern way would be to use a daemon (running as root, started on boot) which listens to requests from the rest of the application. This way, your attack surface is mostly limited to whichever IPC you chose (I'd suggest d-bus, which seems to be the modern way). Finally, if you are managing network interfaces, what you doing is very similar to what network-manager does on a modern distribution. It would be a good idea to either try to somehow integrate what you are doing with network-manager (so it will not conflict with your manipulations), or at least looks at how it works. A: There's no single user that is halfway between a "normal" user and root. You have root, and then you have users; users can have differing levels of capabilities. If you want something that's more powerful than a "normal" user but not as powerful as root, you just create a new user with the capabilities you want, but don't give it the privileges you don't want it to have. A: I'm not familiar enough with Python to tell you what the necessary commands would be in that language, but you should be able to accomplish this by forking and using a pipe to communicate between the parent and child processes. Something along the lines of: Run the program as root via sudo or suid On startup, the program immediately forks and establishes a pipe for communication between the parent and child processes The child process retains root power, but just sits there waiting for input from the pipe The parent process drops root (changes its uid back to that of the user running it), then displays the GUI, interacts with the user, and handles all operations which are available to a non-privileged user When an operation is to be performed which requires root privileges, the (non-root) parent process sends a command down the pipe to the (root) child process which executes it and optionally reports back to the parent This is likely to be a bit easier to write than an independent daemon, as well as more convenient to run (since you don't need to worry about whether the daemon is running or not), while also allowing the GUI and other things which don't need root powers to be run as non-root.
Is there something between a normal user account and root?
I'm developing an application that manages network interfaces on behalf of the user and it calls out to several external programs (such as ifconfig) that require root to make changes. (Specifically, changing the IP address of a local interface, etc.) During development, I have been running the IDE as root (ugh) and the debugger as root (double-ugh). Is there a nice way for the end-user to run these under a non-root account? I strongly dislike the size of the attack surface presented by GTK, wxPython, Python, and my application when it runs as root. I have looked into capabilities, but they look half-baked and I'm not sure if I'd be able to use them in Python, especially if they are on a per-thread basis. The only option I haven't explored is a daemon that has the setuid bit set and does all the root-type stuff on behalf of the UI. I'm hesitant to introduce that complexity this early in the project, as running as root is not a dealbreaker for the users.
[ "Your idea about the daemon has much merit, despite the complexity it introduces. As long as the actions don't require some user interface interaction as root, a daemon allows you to control what operations are allowed and disallowed.\nHowever, you can use SUDO to create a controlled compromise between ROOT and normal users... simply grant SUDO access to the users in question for the specific tools they need. That reduces the attack surface by allowing only \"permitted\" root launches.\n", "What you want is a \"Group\"\nYou create a group, specify that the account wanting to do the action belongs to the group, then you specify that the resource you want access to is a member of that group.\nSometimes group management can be kind of irritating, but it should allow you to do anything you want, and it's the user that is authorized, not your program.\n(If you want your program authorized, you can create a specific user to run it as and give that user the proper group membership, then su to that group within your program to execute the operation without giving the running user the ability.)\n", "You could create and distribute a selinux policy for your application. Selinux allows the kind of fine-grained access that you need. If you can't or won't use selinux, then the daemon is the way to go.\n", "I would not run the application full time as root, but you might want to explore making your application setuid root, or setuid to some id that can become root using something like sudo for particular applications. You might be able to set up an account that cannot login, use setuid to change your program's id (temporarily when needed) and have sudo set up to not prompt for password, but always allow access to that account for specific tasks.\nThis way your program has no special privileges when running normally, only elevates it's privileges when needed, and is restricted by sudo to only running certain programs.\nIt's been awhile since I've done much Unix development, so I'm not really sure whether it's possible to set up sudo to not prompt for a password (or even if there is an API for it), but as a fallback you could enable setuid to root only when needed.\n[EDIT] Looks like sudo has a NOPASSWD mode so I think it should work since you're running the programs as external commands.\n", "The traditional way would be to create and use a setuid helper to do whatever you need. Note that, however, properly writing a setuid helper is tricky (there are several attack vectors you have to protect against).\nThe modern way would be to use a daemon (running as root, started on boot) which listens to requests from the rest of the application. This way, your attack surface is mostly limited to whichever IPC you chose (I'd suggest d-bus, which seems to be the modern way).\nFinally, if you are managing network interfaces, what you doing is very similar to what network-manager does on a modern distribution. It would be a good idea to either try to somehow integrate what you are doing with network-manager (so it will not conflict with your manipulations), or at least looks at how it works.\n", "There's no single user that is halfway between a \"normal\" user and root. You have root, and then you have users; users can have differing levels of capabilities. 
If you want something that's more powerful than a \"normal\" user but not as powerful as root, you just create a new user with the capabilities you want, but don't give it the privileges you don't want it to have.\n", "I'm not familiar enough with Python to tell you what the necessary commands would be in that language, but you should be able to accomplish this by forking and using a pipe to communicate between the parent and child processes. Something along the lines of:\n\nRun the program as root via sudo or suid\nOn startup, the program immediately forks and establishes a pipe for communication between the parent and child processes\nThe child process retains root power, but just sits there waiting for input from the pipe\nThe parent process drops root (changes its uid back to that of the user running it), then displays the GUI, interacts with the user, and handles all operations which are available to a non-privileged user\nWhen an operation is to be performed which requires root privileges, the (non-root) parent process sends a command down the pipe to the (root) child process which executes it and optionally reports back to the parent\n\nThis is likely to be a bit easier to write than an independent daemon, as well as more convenient to run (since you don't need to worry about whether the daemon is running or not), while also allowing the GUI and other things which don't need root powers to be run as non-root.\n" ]
[ 7, 3, 1, 1, 1, 0, 0 ]
[]
[]
[ "linux", "python", "root" ]
stackoverflow_0000248730_linux_python_root.txt
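The fork-and-pipe pattern from the last answer above, sketched in Python. Everything here is illustrative: handle_privileged_request is a hypothetical dispatcher, there is no framing or validation on the pipe protocol (a real helper must validate every request it receives), and how you recover the unprivileged uid depends on whether the process was started via sudo or a setuid wrapper.

import os

def start_root_helper():
    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: keeps root, executes vetted requests from the pipe.
        os.close(write_end)
        requests = os.fdopen(read_end)
        for line in requests:
            handle_privileged_request(line.strip())  # hypothetical
        os._exit(0)
    # Parent: drop root before doing any GUI work, keep the write end.
    os.close(read_end)
    unprivileged_uid = int(os.environ.get('SUDO_UID', os.getuid()))
    os.setuid(unprivileged_uid)
    return os.fdopen(write_end, 'w', 0)  # unbuffered command channel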
Q: Translate SVN path to local file system path in Python I'm writing a utility in Python that will attach changed files in Subversion to an email and send it when a subset of folders that are under source control in SVN have been changed. I am using the pysvn library to access the repository. I have a copy of the files on my local file system and I do an update to check if the files have changed since the last time the utility was run. I am at the point where I am translating the path names in SVN to the path names on my local copy. Currently I have written the following to do the job: def formatPaths(self, paths): newPaths = list() for path in paths: path = path[len(self.basePath):] path = path.replace("/", "\\") newPaths.append(path) return newPaths self.basePath would be something like "/trunk/project1" and I'm looking to just get the relative path of a subset of folders (I.e. folder1 under "/trunk/project1"). Is this a good way to solve this problem or is there some magical function I missed? A: Stay with the slice operator, but do not change the loop variable inside the loop. for fun, try the generator expression (or keep the listcomp). baselen = len(self.basePath) return (path[baselen:].replace("/", "\\") for path in paths) Edit: `lstrip()' is not relevant here. From the manual: str.lstrip([chars]) Return a copy of the string with leading characters removed. If chars is omitted or None, whitespace characters are removed. If given and not None, chars must be a string; the characters in the string will be stripped from the beginning of the string this method is called on. A: Hm... That would do it: baselen = len(self.basePath) for path in paths: path = path[baselen:].replace("/", "\\") newPaths.append(path) return newPaths If you like, you can do it like this: baselen = len(self.basePath) return (path[baselen:].replace("/", "\\") for path in paths) Not calculating baselen in every loop iteration is also good practice. A: Your specific solution to the path name copy is reasonable, but your general solution to the entire problem could be improved. I would easy_install anyvc, a library developed for the PIDA IDE which is a uniform python interface into version control systems, and use it instead: from anyvc import Subversion vc = Subversion('/trunk') modified = [f.relpath for f in vc.list() if f.state != 'clean'] for f in modified: print f.relpath # the relative path of the file to the source root Additionally, I would probably attach a diff to an email rather than the actual file. But I guess that's your choice.
Translate SVN path to local file system path in Python
I'm writing a utility in Python that will attach changed files in Subversion to an email and send it when a subset of folders that are under source control in SVN have been changed. I am using the pysvn library to access the repository. I have a copy of the files on my local file system and I do an update to check if the files have changed since the last time the utility was run. I am at the point where I am translating the path names in SVN to the path names on my local copy. Currently I have written the following to do the job: def formatPaths(self, paths): newPaths = list() for path in paths: path = path[len(self.basePath):] path = path.replace("/", "\\") newPaths.append(path) return newPaths self.basePath would be something like "/trunk/project1" and I'm looking to just get the relative path of a subset of folders (I.e. folder1 under "/trunk/project1"). Is this a good way to solve this problem or is there some magical function I missed?
[ "Stay with the slice operator, but do not change the loop variable inside the loop. for fun, try the generator expression (or keep the listcomp).\nbaselen = len(self.basePath)\nreturn (path[baselen:].replace(\"/\", \"\\\\\") for path in paths)\n\nEdit: `lstrip()' is not relevant here. From the manual:\n\nstr.lstrip([chars])\nReturn a copy of the string with leading characters removed. If chars is omitted or\n None, whitespace characters are removed. If given and not None, chars must be a\n string; the characters in the string will be stripped from the beginning of the \n string this method is called on.\n\n", "Hm... That would do it:\nbaselen = len(self.basePath)\nfor path in paths:\n path = path[baselen:].replace(\"/\", \"\\\\\")\n newPaths.append(path)\nreturn newPaths\n\nIf you like, you can do it like this:\nbaselen = len(self.basePath)\nreturn (path[baselen:].replace(\"/\", \"\\\\\") for path in paths)\n\nNot calculating baselen in every loop iteration is also good practice.\n", "Your specific solution to the path name copy is reasonable, but your general solution to the entire problem could be improved.\nI would easy_install anyvc, a library developed for the PIDA IDE which is a uniform python interface into version control systems, and use it instead:\nfrom anyvc import Subversion\nvc = Subversion('/trunk')\n\nmodified = [f.relpath for f in vc.list() if f.state != 'clean']\n\nfor f in modified:\n print f.relpath # the relative path of the file to the source root\n\nAdditionally, I would probably attach a diff to an email rather than the actual file. But I guess that's your choice.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "python", "svn" ]
stackoverflow_0000249330_python_svn.txt
Q: How to determine if a directory is on same partition Say I have an input file, and a target directory. How do I determine if the input file is on the same hard-drive (or partition) as the target directory? What I want to do is copy the file if it's on a different partition, but move it if it's on the same one. For example: target_directory = "/Volumes/externalDrive/something/" input_foldername, input_filename = os.path.split(input_file) if same_partition(input_foldername, target_directory): move(input_file, target_directory) else: copy(input_file, target_directory) A: In C, you would use stat() and compare the st_dev field. In Python, os.stat should do the same. import os def same_partition(f1, f2): return os.stat(f1).st_dev == os.stat(f2).st_dev A: Another way is the “better to ask forgiveness than permission” approach: just try to rename it, and if that fails, catch the appropriate OSError and try the copy approach. i.e.: import errno try: os.rename(source, dest) except OSError, ex: if ex.errno == errno.EXDEV: # perform the copy instead. This has the advantage that it will also work on Windows, where st_dev is always 0 for all partitions. Note that if you actually want to copy and then delete the source file (i.e. perform a move), rather than just copy, then shutil.move will already do what you want: Help on function move in module shutil: move(src, dst) Recursively move a file or directory to another location. If the destination is on our current filesystem, then simply use rename. Otherwise, copy src to the dst and then remove src.
How to determine if a directory is on same partition
Say I have an input file, and a target directory. How do I determine if the input file is on the same hard-drive (or partition) as the target directory? What I want to do is copy the file if it's on a different partition, but move it if it's on the same one. For example: target_directory = "/Volumes/externalDrive/something/" input_foldername, input_filename = os.path.split(input_file) if same_partition(input_foldername, target_directory): move(input_file, target_directory) else: copy(input_file, target_directory)
[ "In C, you would use stat() and compare the st_dev field. In python, os.stat should do the same.\nimport os\ndef same_partition(f1, f2):\n return os.stat(f1).st_dev == os.stat(f2).st_dev\n\n", "Another way is the “better to ask forgiveness than permission” approach—just try to rename it, and if that fails, catch the appropriate OSError and try the copy approach. ie:\nimport errno\ntry:\n os.rename(source, dest):\nexcept IOError, ex:\n if ex.errno == errno.EXDEV:\n # perform the copy instead.\n\nThis has the advantage that it will also work on Windows, where st_dev is always 0 for all partitions.\nNote that if you actually want to copy and then delete the source file (ie. perform a move), rather than just copy, then shutil.move will already do what you want:\n\nHelp on function move in module shutil:\n\nmove(src, dst)\n Recursively move a file or directory to another location.\n\n If the destination is on our current filesystem, then simply use\n rename. Otherwise, copy src to the dst and then remove src.\n\n" ]
[ 13, 3 ]
[]
[]
[ "filesystems", "linux", "macos", "python" ]
stackoverflow_0000249775_filesystems_linux_macos_python.txt
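Putting the question's pseudocode together with the two answers above, a sketch: note that if you always want move semantics, shutil.move alone already does the rename-or-copy-and-delete fallback, and the explicit st_dev check is only needed because the question wants the source kept (a copy) when the partitions differ.

import os, shutil

def same_partition(f1, f2):
    return os.stat(f1).st_dev == os.stat(f2).st_dev

def place_file(input_file, target_directory):
    target = os.path.join(target_directory, os.path.basename(input_file))
    if same_partition(os.path.dirname(input_file), target_directory):
        shutil.move(input_file, target)   # same device: a cheap rename
    else:
        shutil.copy(input_file, target)   # different device: source stays put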
Q: What is the best way to serve static web pages from within a Django application? I am building a relatively simple Django application and apart from the main page where most of the dynamic parts of the application are, there are a few pages that I will need that will not be dynamic at all (About, FAQ, etc.). What is the best way to integrate these into Django, ideally still using the Django template engine? Should I just create a template for each and then have a view that simply renders that template? A: Have you looked at flat pages in Django? It probably does everything you're looking for. A: If you want to just create a template for each of them, you could use the direct_to_template generic view to serve it up. Another option would be the django.contrib.flatpages app, which would let you configure the static URLs and content via the database.
What is the best way to serve static web pages from within a Django application?
I am building a relatively simple Django application and apart from the main page where most of the dynamic parts of the application are, there are a few pages that I will need that will not be dynamic at all (About, FAQ, etc.). What is the best way to integrate these into Django, ideally still using the Django template engine? Should I just create a template for each and then have a view that simply renders that template?
[ "Have you looked at flat pages in Django? It probably does everything you're looking for.\n", "If you want to just create a template for each of them, you could use the direct_to_template generic view to serve it up.\nAnother option would be the django.contrib.flatpages app, which would let you configure the static URLs and content via the database.\n" ]
[ 7, 6 ]
[]
[]
[ "django", "python", "static", "templates" ]
stackoverflow_0000252035_django_python_static_templates.txt
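For the direct_to_template option mentioned above, the wiring is just a couple of URL patterns. A sketch in the Django 1.0-era generic-view style; the URL paths and template names are placeholders.

# urls.py
from django.conf.urls.defaults import *
from django.views.generic.simple import direct_to_template

urlpatterns = patterns('',
    (r'^about/$', direct_to_template, {'template': 'about.html'}),
    (r'^faq/$', direct_to_template, {'template': 'faq.html'}),
)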
Q: Is it possible to communicate with a sub subprocess with subprocess.Popen? I'm trying to write a python script that packages our software. This script needs to build our product, and package it. Currently we have other scripts that do each piece individually which include csh, and perl scripts. One such script is run like: sudo mod args where mod is a perl script; so in python I would do proc = Popen(['sudo', 'mod', '-p', '-c', 'noresource', '-u', 'dtt', '-Q'], stderr=PIPE, stdout=PIPE, stdin=PIPE) The problem is that this mod script needs a few questions answered. For this I thought that the traditional (stdout, stderr) = proc.communicate(input='y') would work. I don't think it's working because the process that Popen is controlling is sudo, not the mod script that is asking the question. Is there any way to communicate with the mod script and still run it through sudo? A: I think you should remove the sudo in your Popen call and require the user of your script to type sudo. This additionally makes more explicit the need for elevated privileges in your script, instead of hiding it inside Popen. A: I would choose to go with Pexpect. import pexpect child = pexpect.spawn ('sudo mod -p -c noresource -u dtt -Q') child.expect ('First question:') child.sendline ('Y') child.expect ('Second question:') child.sendline ('Yup') A: The simplest thing to do would be the run the controlling script (the Python script) via sudo. Are you able to do that, or is that not an option? A: We need more information. Is sudo asking you for a password? What kind of interface does the mod script have for asking questions? Because these kind of things are not handled as normal over the pipe. A solution for both of these might be Pexpect, which is rather expert at handling funny scripts that ask for passwords, and various other input issues.
Is it possible to communicate with a sub subprocess with subprocess.Popen?
I'm trying to write a python script that packages our software. This script needs to build our product, and package it. Currently we have other scripts that do each piece individually which include csh, and perl scripts. One such script is run like: sudo mod args where mod is a perl script; so in python I would do proc = Popen(['sudo', 'mod', '-p', '-c', 'noresource', '-u', 'dtt', '-Q'], stderr=PIPE, stdout=PIPE, stdin=PIPE) The problem is that this mod script needs a few questions answered. For this I thought that the traditional (stdout, stderr) = proc.communicate(input='y') would work. I don't think it's working because the process that Popen is controlling is sudo, not the mod script that is asking the question. Is there any way to communicate with the mod script and still run it through sudo?
[ "I think you should remove the sudo in your Popen call and require the user of your script to type sudo.\nThis additionally makes more explicit the need for elevated privileges in your script, instead of hiding it inside Popen.\n", "I would choose to go with Pexpect. \nimport pexpect\nchild = pexpect.spawn ('sudo mod -p -c noresource -u dtt -Q')\nchild.expect ('First question:')\nchild.sendline ('Y')\nchild.expect ('Second question:')\nchild.sendline ('Yup')\n\n", "The simplest thing to do would be the run the controlling script (the Python script) via sudo. Are you able to do that, or is that not an option?\n", "We need more information.\n\nIs sudo asking you for a password?\nWhat kind of interface does the mod script have for asking questions?\n\nBecause these kind of things are not handled as normal over the pipe.\nA solution for both of these might be Pexpect, which is rather expert at handling funny scripts that ask for passwords, and various other input issues.\n" ]
[ 4, 4, 1, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0000250700_python_subprocess.txt
Q: Python: using a recursive algorithm as a generator Recently I wrote a function to generate certain sequences with nontrivial constraints. The problem came with a natural recursive solution. Now it happens that, even for relatively small input, the sequences are several thousands, thus I would prefer to use my algorithm as a generator instead of using it to fill a list with all the sequences. Here is an example. Suppose we want to compute all the permutations of a string with a recursive function. The following naive algorithm takes an extra argument 'storage' and appends a permutation to it whenever it finds one: def getPermutations(string, storage, prefix=""): if len(string) == 1: storage.append(prefix + string) # <----- else: for i in range(len(string)): getPermutations(string[:i]+string[i+1:], storage, prefix+string[i]) storage = [] getPermutations("abcd", storage) for permutation in storage: print permutation (Please don't care about inefficiency, this is only an example.) Now I want to turn my function into a generator, i.e. to yield a permutation instead of appending it to the storage list: def getPermutations(string, prefix=""): if len(string) == 1: yield prefix + string # <----- else: for i in range(len(string)): getPermutations(string[:i]+string[i+1:], prefix+string[i]) for permutation in getPermutations("abcd"): print permutation This code does not work (the function behaves like an empty generator). Am I missing something? Is there a way to turn the above recursive algorithm into a generator without replacing it with an iterative one? A: def getPermutations(string, prefix=""): if len(string) == 1: yield prefix + string else: for i in xrange(len(string)): for perm in getPermutations(string[:i] + string[i+1:], prefix+string[i]): yield perm Or without an accumulator: def getPermutations(string): if len(string) == 1: yield string else: for i in xrange(len(string)): for perm in getPermutations(string[:i] + string[i+1:]): yield string[i] + perm A: This avoids the len(string)-deep recursion, and is in general a nice way to handle generators-inside-generators: from types import GeneratorType def flatten(*stack): stack = list(stack) while stack: try: x = stack[0].next() except StopIteration: stack.pop(0) continue if isinstance(x, GeneratorType): stack.insert(0, x) else: yield x def _getPermutations(string, prefix=""): if len(string) == 1: yield prefix + string else: yield (_getPermutations(string[:i]+string[i+1:], prefix+string[i]) for i in range(len(string))) def getPermutations(string): return flatten(_getPermutations(string)) for permutation in getPermutations("abcd"): print permutation flatten allows us to continue progress in another generator by simply yielding it, instead of iterating through it and yielding each item manually. Python 3.3 will add yield from to the syntax, which allows for natural delegation to a sub-generator: def getPermutations(string, prefix=""): if len(string) == 1: yield prefix + string else: for i in range(len(string)): yield from getPermutations(string[:i]+string[i+1:], prefix+string[i]) A: The interior call to getPermutations -- it's a generator, too. def getPermutations(string, prefix=""): if len(string) == 1: yield prefix + string else: for i in range(len(string)): getPermutations(string[:i]+string[i+1:], prefix+string[i]) # <----- You need to iterate through that with a for-loop (see @MizardX posting, which edged me out by seconds!)
Python: using a recursive algorithm as a generator
Recently I wrote a function to generate certain sequences with nontrivial constraints. The problem came with a natural recursive solution. Now it happens that, even for relatively small input, there are several thousand sequences, thus I would prefer to use my algorithm as a generator instead of using it to fill a list with all the sequences.
Here is an example. Suppose we want to compute all the permutations of a string with a recursive function. The following naive algorithm takes an extra argument 'storage' and appends a permutation to it whenever it finds one:
def getPermutations(string, storage, prefix=""):
    if len(string) == 1:
        storage.append(prefix + string)   # <-----
    else:
        for i in range(len(string)):
            getPermutations(string[:i]+string[i+1:], storage, prefix+string[i])

storage = []
getPermutations("abcd", storage)
for permutation in storage: print permutation

(Please don't care about inefficiency, this is only an example.)
Now I want to turn my function into a generator, i.e. to yield a permutation instead of appending it to the storage list:
def getPermutations(string, prefix=""):
    if len(string) == 1:
        yield prefix + string             # <-----
    else:
        for i in range(len(string)):
            getPermutations(string[:i]+string[i+1:], prefix+string[i])

for permutation in getPermutations("abcd"): print permutation

This code does not work (the function behaves like an empty generator). Am I missing something? Is there a way to turn the above recursive algorithm into a generator without replacing it with an iterative one?
[ "def getPermutations(string, prefix=\"\"):\n if len(string) == 1:\n yield prefix + string\n else:\n for i in xrange(len(string)):\n for perm in getPermutations(string[:i] + string[i+1:], prefix+string[i]):\n yield perm\n\nOr without an accumulator:\ndef getPermutations(string):\n if len(string) == 1:\n yield string\n else:\n for i in xrange(len(string)):\n for perm in getPermutations(string[:i] + string[i+1:]):\n yield string[i] + perm\n\n", "This avoids the len(string)-deep recursion, and is in general a nice way to handle generators-inside-generators:\nfrom types import GeneratorType\n\ndef flatten(*stack):\n stack = list(stack)\n while stack:\n try: x = stack[0].next()\n except StopIteration:\n stack.pop(0)\n continue\n if isinstance(x, GeneratorType): stack.insert(0, x)\n else: yield x\n\ndef _getPermutations(string, prefix=\"\"):\n if len(string) == 1: yield prefix + string\n else: yield (_getPermutations(string[:i]+string[i+1:], prefix+string[i])\n for i in range(len(string)))\n\ndef getPermutations(string): return flatten(_getPermutations(string))\n\nfor permutation in getPermutations(\"abcd\"): print permutation\n\nflatten allows us to continue progress in another generator by simply yielding it, instead of iterating through it and yielding each item manually.\n\nPython 3.3 will add yield from to the syntax, which allows for natural delegation to a sub-generator:\ndef getPermutations(string, prefix=\"\"):\n if len(string) == 1:\n yield prefix + string\n else:\n for i in range(len(string)):\n yield from getPermutations(string[:i]+string[i+1:], prefix+string[i])\n\n", "The interior call to getPermutations -- it's a generator, too.\ndef getPermutations(string, prefix=\"\"):\n if len(string) == 1:\n yield prefix + string \n else:\n for i in range(len(string)):\n getPermutations(string[:i]+string[i+1:], prefix+string[i]) # <-----\n\nYou need to iterate through that with a for-loop (see @MizardX posting, which edged me out by seconds!)\n" ]
[ 118, 29, 20 ]
[]
[]
[ "generator", "python", "recursion" ]
stackoverflow_0000248830_generator_python_recursion.txt
Q: How do I coherently organize modules for a PyGTK desktop application?
I am working on a desktop application in PyGTK and seem to be bumping up against some limitations of my file organization. Thus far I've structured my project this way:

application.py - holds the primary application class (most functional routines)
gui.py - holds a loosely coupled GTK gui implementation. Handles signal callbacks, etc.
command.py - holds command line automation functions not dependent on data in the application class
state.py - holds the state data persistence class

This has served fairly well so far, but at this point application.py is starting to get rather long. I have looked at numerous other PyGTK applications and they seem to have similar structural issues. At a certain point the primary module starts to get very long and there is no obvious way of breaking the code out into narrower modules without sacrificing clarity and object orientation.
I have considered making the GUI the primary module and having separate modules for the toolbar routines, the menu routines, etc, but at that point I believe I will lose most of the benefits of OOP and end up with an everything-references-everything scenario.
Should I just deal with having a very long central module or is there a better way of structuring the project so that I don't have to rely on the class browser so much?
EDIT I
Ok, so point taken regarding all the MVC stuff. I do have a rough approximation of MVC in my code, but admittedly I could probably gain some mileage by further segregating the model and controller. However, I am reading over python-gtkmvc's documentation (which is a great find by the way, thank you for referencing it) and my impression is that it's not going to solve my problem so much as just formalize it. My application is a single glade file, generally a single window. So no matter how tightly I define the MVC roles of the modules I'm still going to have one controller module doing most everything, which is pretty much what I have now. Admittedly I'm a little fuzzy on proper MVC implementation and I'm going to keep researching, but it doesn't look to me like this architecture is going to get any more stuff out of my main file, it's just going to rename that file to controller.py.
Should I be thinking about separate Controller/View pairs for separate sections of the window (the toolbar, the menus, etc)? Perhaps that is what I'm missing here. It seems that this is what S. Lott is referring to in his second bullet point.
Thanks for the responses so far.
A: In the project Wader we use python gtkmvc, which makes it much easier to apply the MVC patterns when using pygtk and glade; you can see the file organization of our project in the svn repository:
wader/
 cli/
 common/
 contrib/
 gtk/
 controllers/
 models/
 views/
 test/
 utils/

A: This has likely nothing to do with PyGTK, but rather a general code organization issue. You would probably benefit from applying some MVC (Model-View-Controller) design patterns. See Design Patterns, for example.
A: "holds the primary application class (most functional routines)"
As in singular -- one class?
I'm not surprised that the One Class Does Everything design isn't working. It might not be what I'd call object-oriented. It doesn't sound like it follows the typical MVC design pattern if your functionality is piling up in a single class.
What's in this massive class? I suggest that you can probably refactor this into pieces. You have two candidate dimensions for refactoring your application class -- if, indeed, I've guessed right that you've put everything into a single class.

Before doing anything else, refactor into components that parallel the Real World Entities. It's not clear what's in your "state.py" -- whether this is a proper model of real-world entities, or just mappings between persistent storage and some murky data structure in the application. Most likely you'd move processing out of your application and into your model (possibly state.py, possibly a new module that is a proper model.)
Break your model into pieces. It will help organize the control and view elements. The most common MVC mistake is to put too much in control and nothing in the model.

Later, once your model is doing most of the work, you can look at refactoring into components that parallel the GUI presentation. Various top-level frames, for example, should probably have separate control objects. It's not clear what's in "GUI.py" -- this might be a proper view. What appears to be missing is a Control component.

A: Sorry to answer so late. Kiwi seems to me a far better solution than gtkmvc. It is my first dependency for any pygtk project.
A: Python 2.6 supports explicit relative imports, which make using packages even easier than previous versions.
I suggest you look into breaking your app into smaller modules inside a package.
You can organize your application like this:
myapp/
 application/
 gui/
 command/
 state/

Where each directory has its own __init__.py. You can have a look at any python app or even standard library modules for examples.
A: So having not heard back regarding my edit to the original question, I have done some more research and the conclusion I seem to be coming to is that yes, I should break the interface out into several views, each with its own controller. Python-gtkmvc provides the ability to do this by providing a glade_top_widget_name parameter to the View constructor. This all seems to make a good deal of sense although it is going to require a large refactoring of my existing codebase which I may or may not be willing to undertake in the near-term (I know, I know, I should.) Moreover, I'm left to wonder whether I should just have a single Model object (my application is fairly simple--no more than twenty-five state vars) or if I should break it out into multiple models and have to deal with controllers observing multiple models and chaining notifications across them. (Again, I know I really should do the latter.)
If anyone has any further insight, I still don't really feel like I've gotten an answer to the original question, although I have a direction to head in now.
(Moreover it seems like there ought to be other architectural choices at hand, given that up until now I had not seen a single Python application coded in the MVC style, but then again many Python applications tend to have the exact problem I've described above.)
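A: To make the model/controller split above concrete, here is a minimal, framework-free sketch of the observer mechanism that python-gtkmvc formalizes for you. All names in it (Model, register, model_changed) are invented for illustration and are not gtkmvc's actual API:
class Model(object):
    """Holds application state; controllers subscribe to be told of changes."""
    def __init__(self):
        self._observers = []
        self._state = {}

    def register(self, observer):
        self._observers.append(observer)

    def set(self, key, value):
        self._state[key] = value
        for observer in self._observers:
            observer.model_changed(key, value)

class ToolbarController(object):
    """One small controller per GUI section, instead of one giant module."""
    def __init__(self, model):
        model.register(self)

    def model_changed(self, key, value):
        print "toolbar noticed:", key, "=", value

model = Model()
ToolbarController(model)
model.set("filename", "example.txt")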
How do I coherently organize modules for a PyGTK desktop application?
I am working on a desktop application in PyGTK and seem to be bumping up against some limitations of my file organization. Thus far I've structured my project this way:

application.py - holds the primary application class (most functional routines)
gui.py - holds a loosely coupled GTK gui implementation. Handles signal callbacks, etc.
command.py - holds command line automation functions not dependent on data in the application class
state.py - holds the state data persistence class

This has served fairly well so far, but at this point application.py is starting to get rather long. I have looked at numerous other PyGTK applications and they seem to have similar structural issues. At a certain point the primary module starts to get very long and there is no obvious way of breaking the code out into narrower modules without sacrificing clarity and object orientation.
I have considered making the GUI the primary module and having separate modules for the toolbar routines, the menu routines, etc, but at that point I believe I will lose most of the benefits of OOP and end up with an everything-references-everything scenario.
Should I just deal with having a very long central module or is there a better way of structuring the project so that I don't have to rely on the class browser so much?
EDIT I
Ok, so point taken regarding all the MVC stuff. I do have a rough approximation of MVC in my code, but admittedly I could probably gain some mileage by further segregating the model and controller. However, I am reading over python-gtkmvc's documentation (which is a great find by the way, thank you for referencing it) and my impression is that it's not going to solve my problem so much as just formalize it. My application is a single glade file, generally a single window. So no matter how tightly I define the MVC roles of the modules I'm still going to have one controller module doing most everything, which is pretty much what I have now. Admittedly I'm a little fuzzy on proper MVC implementation and I'm going to keep researching, but it doesn't look to me like this architecture is going to get any more stuff out of my main file, it's just going to rename that file to controller.py.
Should I be thinking about separate Controller/View pairs for separate sections of the window (the toolbar, the menus, etc)? Perhaps that is what I'm missing here. It seems that this is what S. Lott is referring to in his second bullet point.
Thanks for the responses so far.
[ "In the project Wader we use python gtkmvc, that makes much easier to apply the MVC patterns when using pygtk and glade, you can see the file organization of our project in the svn repository:\nwader/\n cli/\n common/\n contrib/\n gtk/\n controllers/\n models/\n views/\n test/\n utils/\n\n", "This has likely nothing to do with PyGTK, but rather a general code organization issue. You would probably benefit from applying some MVC (Model-View-Controller) design patterns. See Design Patterns, for example.\n", "\"holds the primary application class (most functional routines)\"\nAs in singular -- one class?\nI'm not surprised that the One Class Does Everything design isn't working. It might not be what I'd call object-oriented. It doesn't sound like it follows the typical MVC design pattern if your functionality is piling up in a single class.\nWhat's in this massive class? I suggest that you can probably refactor this into pieces. You have two candidate dimensions for refactoring your application class -- if, indeed, I've guessed right that you've put everything into a single class.\n\nBefore doing anything else, refactor into components that parallel the Real World Entities. It's not clear what's in your \"state.py\" -- wether this is a proper model of real-world entities, or just mappings between persistent storage and some murky data structure in the application. Most likely you'd move processing out of your application and into your model (possibly state.py, possibly a new module that is a proper model.)\nBreak your model into pieces. It will help organize the control and view elements. The most common MVC mistake is to put too much in control and nothing in the model.\n\nLater, once your model is doing most of the work, you can look at refactor into components that parallel the GUI presentation. Various top-level frames, for example, should probably have separate cotrol objects. It's not clear what's in \"GUI.py\" -- this might be a proper view. What appears to be missing is a Control component.\n\n\n", "Sorry to answer so late. Kiwi seems to me a far better solution than gtkmvc. It is my first dependency for any pygtk project.\n", "Python 2.6 supports explicit relative imports, which make using packages even easier than previous versions.\nI suggest you look into breaking your app into smaller modules inside a package.\nYou can organize your application like this:\nmyapp/\n application/\n gui/\n command/\n state/\n\nWhere each directory has its own __init__.py. You can have a look at any python app or even standard library modules for examples.\n", "So having not heard back regarding my edit to the original question, I have done some more research and the conclusion I seem to be coming to is that yes, I should break the interface out into several views, each with its own controller. Python-gtkmvc provides the ability to this by providing a glade_top_widget_name parameter to the View constructor. This all seems to make a good deal of sense although it is going to require a large refactoring of my existing codebase which I may or may not be willing to undertake in the near-term (I know, I know, I should.) Moreover, I'm left to wonder whether should just have a single Model object (my application is fairly simple--no more than twenty-five state vars) or if I should break it out into multiple models and have to deal with controllers observing multiple models and chaining notifications across them. (Again, I know I really should do the latter.) 
If anyone has any further insight, I still don't really feel like I've gotten an answer to the original question, although I have a direction to head in now.\n(Moreover it seems like their ought to be other architectural choices at hand, given that up until now I had not seen a single Python application coded in the MVC style, but then again many Python applications tend to have the exact problem I've described above.)\n" ]
[ 7, 2, 2, 2, 0, 0 ]
[]
[]
[ "gtk", "module", "organization", "pygtk", "python" ]
stackoverflow_0000216093_gtk_module_organization_pygtk_python.txt
Q: script languages on windows mobile - something similar to python @ nokia s60
I'm trying to find something similar to Nokia's Python for Windows Mobile based devices - a script interpreter [in this case also able to create standalone apps] with easy access to all phone interfaces - ability to make a phone call, send SMS, take a photo, send a file over GPRS, etc...
While there is 2.5 PythonCE available for Windows Mobile, it is a pure Python interpreter, and what I'm looking for are all those "libraries" that Nokia's Python has, like "import camera", "import messaging", the ability to control the phone programmatically. Also the bluetooth console of Nokia's Python is great.
I do not want to use .NET CF as even there (AFAIK) to control the camera you need to use some indirect methods (for example: http://blogs.msdn.com/marcpe/archive/2006/03/03/542941.aspx).
Appreciate any help you can provide, thanks in advance. I hope there is something I was unable to locate via google.
A: It sounds as if this is an opportunity for you to develop some C extension modules for the PythonCE project.
A: Well, there is MortScript, a widely used scripting language for Windows Mobile. Not sure if it can access all the phone's functions. I believe there is TCL for Windows Mobile as well.
A: IronPython?
script languages on windows mobile - something similar to python @ nokia s60
I'm trying to find something similar to Nokia's Python for Windows Mobile based devices - a script interpreter [in this case also able to create standalone apps] with easy access to all phone interfaces - ability to make a phone call, send SMS, take a photo, send a file over GPRS, etc...
While there is 2.5 PythonCE available for Windows Mobile, it is a pure Python interpreter, and what I'm looking for are all those "libraries" that Nokia's Python has, like "import camera", "import messaging", the ability to control the phone programmatically. Also the bluetooth console of Nokia's Python is great.
I do not want to use .NET CF as even there (AFAIK) to control the camera you need to use some indirect methods (for example: http://blogs.msdn.com/marcpe/archive/2006/03/03/542941.aspx).
Appreciate any help you can provide, thanks in advance. I hope there is something I was unable to locate via google.
[ "It sounds as if this is an opportunity for you to develop some C extension modules for the PythonCE project.\n", "Well there is Mortscript. a widely used scripting for Windows Mobile. Not sure if it can access all the phones functions. I believe there is TCL for Windows Mobile as well.\n", "IronPython?\n" ]
[ 3, 0, 0 ]
[]
[]
[ "mobile", "python", "windows_mobile" ]
stackoverflow_0000251506_mobile_python_windows_mobile.txt
Q: Why won't Django 1.0 admin application work?
I've just started playing with Django and am loosely following the tutorial with my own set of basic requirements. The models I've sketched out so far are a lot more comprehensive than the tutorial, but they compile fine. Otherwise, everything should have been the same.
My problem is with the admin application. I can log into it, and view the editable models, but when I click on a model or any of the change/add buttons, I get a 404.
This is the exact error I get:

Page not found (404)
Request Method: GET
Request URL:    http://localhost:8000/admin/auth/user/add/
App u'', model u'auth', not found.

These are the relevant files and what is in them:
urls.py
from django.conf.urls.defaults import *

# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Example:
    # (r'^daso/', include('daso.foo.urls')),

    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    #(r'^admin/doc/', include('django.contrib.admindocs.urls')),

    # Uncomment the next line to enable the admin:
    (r'^admin(.*)', admin.site.root)
)

admin.py
from daso.clients.models import Person, Client, Contact
from django.contrib import admin

admin.site.register(Person)
admin.site.register(Client)
admin.site.register(Contact)

models.py - I'll just show one model
class Client(Person):
    relationships = models.ManyToManyField("Contact", through="Relationship", null=True)
    disabilities = models.ManyToManyField("Disability", related_name="disability", null=True)
    medical_issues = models.ManyToManyField("MedicalIssue", related_name="medical_issue", null=True)
    medicare_num = models.CharField(max_length=15, blank=True)
    insurance = models.OneToOneField("Insurance", null=True, blank=True)
    medications = models.ManyToManyField("Medication", through="Medication_Details", null=True)

    def __unicode__(self):
        client = u"[Client[id: %s name: %s %s]" % (self.id, self.first_name, self.last_name)
        return client

settings.py
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'daso.clients',
)

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)

Those should be the relevant files/sections of files. If anyone has an idea about WHY I'm getting a 404, please enlighten me?
Note, when pasting in here, installed apps had the last 2 apps tabbed instead of spaced*4, and when reloading the admin page it worked for half a second then 404'd again. Strange. Ideas?
A: It's because you left out a / in urls.py. Change the admin line to the following:
(r'^admin/(.*)', admin.site.root),

I checked this on my server and got the same error with your line from urls.py.
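A: A follow-up note for readers on later Django versions: as of Django 1.1, the recommended way to hook up the admin is include(admin.site.urls) rather than admin.site.root, which makes this class of missing-slash mistake harder to hit. A sketch of the 1.1+ form, assuming the same project layout as the question:
from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Django 1.1+ form; replaces the (r'^admin/(.*)', admin.site.root) line.
    (r'^admin/', include(admin.site.urls)),
)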
Why won't Django 1.0 admin application work?
I've just started playing with Django and am loosely following the tutorial with my own set of basic requirements. The models I've sketched out so far are a lot more comprehensive than the tutorial, but they compile fine. Otherwise, everything should have been the same.
My problem is with the admin application. I can log into it, and view the editable models, but when I click on a model or any of the change/add buttons, I get a 404.
This is the exact error I get:

Page not found (404)
Request Method: GET
Request URL:    http://localhost:8000/admin/auth/user/add/
App u'', model u'auth', not found.

These are the relevant files and what is in them:
urls.py
from django.conf.urls.defaults import *

# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    # Example:
    # (r'^daso/', include('daso.foo.urls')),

    # Uncomment the admin/doc line below and add 'django.contrib.admindocs'
    # to INSTALLED_APPS to enable admin documentation:
    #(r'^admin/doc/', include('django.contrib.admindocs.urls')),

    # Uncomment the next line to enable the admin:
    (r'^admin(.*)', admin.site.root)
)

admin.py
from daso.clients.models import Person, Client, Contact
from django.contrib import admin

admin.site.register(Person)
admin.site.register(Client)
admin.site.register(Contact)

models.py - I'll just show one model
class Client(Person):
    relationships = models.ManyToManyField("Contact", through="Relationship", null=True)
    disabilities = models.ManyToManyField("Disability", related_name="disability", null=True)
    medical_issues = models.ManyToManyField("MedicalIssue", related_name="medical_issue", null=True)
    medicare_num = models.CharField(max_length=15, blank=True)
    insurance = models.OneToOneField("Insurance", null=True, blank=True)
    medications = models.ManyToManyField("Medication", through="Medication_Details", null=True)

    def __unicode__(self):
        client = u"[Client[id: %s name: %s %s]" % (self.id, self.first_name, self.last_name)
        return client

settings.py
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'daso.clients',
)

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)

Those should be the relevant files/sections of files. If anyone has an idea about WHY I'm getting a 404, please enlighten me?
Note, when pasting in here, installed apps had the last 2 apps tabbed instead of spaced*4, and when reloading the admin page it worked for half a second then 404'd again. Strange. Ideas?
[ "It's because you left out a / in urls.py. Change the admin line to the following:\n(r'^admin/(.*)', admin.site.root),\n\nI checked this on my server and got the same error with your line from urls.py.\n" ]
[ 12 ]
[]
[]
[ "admin", "django", "python" ]
stackoverflow_0000252531_admin_django_python.txt
Q: Finding invocations of a certain function in a c++ file using python
I need to find all occurrences of a function call in a C++ file using python, and extract the arguments for each call.
I'm playing with the pygccxml package, and extracting the arguments given a string with the function call is extremely easy:
from pygccxml.declarations import call_invocation

def test_is_call_invocation(call):
    if call_invocation.is_call_invocation(call):
        print call_invocation.name(call)
        for arg in call_invocation.args(call):
            print "    ", arg
    else:
        print "not a function invocation"

What I couldn't find is a way of getting the calls parsing a file:
from pygccxml import parser
from pygccxml import declarations

decls = parser.parse( ['main.cpp'] )
# ...

Is there a way to find the calls to a certain function using the pygccxml package?
Or maybe that package is overkill for what I'm trying to do :) and there's a much simpler way? Finding the function calls with a regular expression is, I'm afraid, much trickier than it might look at first sight...
A: GCC-XML can't do that, because it only reports the data types (and function signatures). It ignores the function bodies. To see that, create a.cc:
void foo()
{}

void bar()
{
  foo();
}

and then run gccxml a.cc -fxml=a.xml. Look at the generated a.xml, to see that the only mention of foo (or its id) is in the declaration of foo.
An alternative might be available in codeviz (http://www.csn.ul.ie/~mel/projects/codeviz/). It consists of a patch to gcc 3.4.6 that generates call dependency information - plus some perl scripts that generate graphviz input; the latter you can safely ignore.
As yet another alternative (which doesn't need gcc modifications) you could copy the approach from egypt (http://www.gson.org/egypt/); this parses GCC RTL dumps. It should work with any recent GCC; however, it might be that you don't get calls to inline functions.
In any case, with these approaches, you won't get "calls" to macros, but that might actually be the better choice.
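A: To make the egypt approach a little more concrete, here is a rough Python sketch. It is an approximation rather than egypt itself: the dump option and dump file naming vary between GCC versions (older releases spell it -dr instead of -fdump-rtl-expand), and this naive per-line scan collects every function symbol referenced in the dump instead of pairing each call with its caller the way egypt's perl script does:
import glob
import re
import subprocess

# Ask GCC to write its RTL "expand" dump next to the object file.
subprocess.call(["g++", "-fdump-rtl-expand", "-c", "main.cpp"])

called = set()
for path in glob.glob("main.cpp*.expand"):   # exact dump file name varies by GCC version
    for line in open(path):
        # RTL symbol references look roughly like: (symbol_ref:SI ("foo") ...)
        m = re.search(r'symbol_ref[^(]*\("([^"]+)"', line)
        if m:
            called.add(m.group(1))
print sorted(called)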
Finding invocations of a certain function in a c++ file using python
I need to find all occurrences of a function call in a C++ file using python, and extract the arguments for each call.
I'm playing with the pygccxml package, and extracting the arguments given a string with the function call is extremely easy:
from pygccxml.declarations import call_invocation

def test_is_call_invocation(call):
    if call_invocation.is_call_invocation(call):
        print call_invocation.name(call)
        for arg in call_invocation.args(call):
            print "    ", arg
    else:
        print "not a function invocation"

What I couldn't find is a way of getting the calls parsing a file:
from pygccxml import parser
from pygccxml import declarations

decls = parser.parse( ['main.cpp'] )
# ...

Is there a way to find the calls to a certain function using the pygccxml package?
Or maybe that package is overkill for what I'm trying to do :) and there's a much simpler way? Finding the function calls with a regular expression is, I'm afraid, much trickier than it might look at first sight...
[ "XML-GCC can't do that, because it only reports the data types (and function signatures). It ignores the function bodies. To see that, create a.cc:\nvoid foo()\n{}\n\nvoid bar()\n{\n foo();\n}\n\nand then run gccxml a.cc -fxml=a.xml. Look at the generated a.xml, to see that the only mentioning of foo (or its id) is in the declaration of foo.\nAn alternative might be available in codeviz (http://www.csn.ul.ie/~mel/projects/codeviz/). It consists of a patch to gcc 3.4.6 that generates call dependency information - plus some perl scripts that generate graphviz input; the latter you can safely ignore.\nAs yet another alternative (which doesn't need gcc modifications) you could copy the approach from egypt (http://www.gson.org/egypt/); this parses GCC RTL dumps. It should work with any recent GCC, however, it might be that you don't get calls to inline functions.\nIn any case, with these approaches, you won't get \"calls\" to macros, but that might be actually the better choice.\n" ]
[ 2 ]
[]
[]
[ "c++", "parsing", "python" ]
stackoverflow_0000252951_c++_parsing_python.txt
Q: How to configure the import path in Visual Studio IronPython projects
I have built the IronPythonIntegration solution that comes with the Visual Studio 2005 SDK (as explained at http://www.izume.com/2007/10/13/integrating-ironpython-with-visual-studio-2005), and I can now use IronPython projects inside Visual Studio 2005.
However, to let a Python file import from the standard library I need to include these two lines first:
import sys
sys.path.append('c:\Python24\lib')

and similarly for any other folders I want to be able to import from. Does anyone know a way to set up import paths so that all IronPython projects automatically pick them up?
A: Set the environment variable IRONPYTHONPATH in your operating system to 'c:\Python24\lib'. (Or anywhere else you need).
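A: To check that IRONPYTHONPATH was actually picked up (set it via System Properties → Environment Variables, or with set IRONPYTHONPATH=c:\Python24\lib in the console that launches Visual Studio), print the interpreter's search path from any project. This is only a diagnostic sketch; the entries you see will depend on your installation:
import sys

for entry in sys.path:
    print entry   # c:\Python24\lib should show up here if the variable took effect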
How to configure the import path in Visual Studio IronPython projects
I have built the IronPythonIntegration solution that comes with the Visual Studio 2005 SDK (as explained at http://www.izume.com/2007/10/13/integrating-ironpython-with-visual-studio-2005), and I can now use IronPython projects inside Visual Studio 2005.
However, to let a Python file import from the standard library I need to include these two lines first:
import sys
sys.path.append('c:\Python24\lib')

and similarly for any other folders I want to be able to import from. Does anyone know a way to set up import paths so that all IronPython projects automatically pick them up?
[ "Set the environment variable IRONPYTHONPATH in your operating system to 'c:\\Python24\\lib'. (Or anywhere else you need).\n" ]
[ 2 ]
[]
[]
[ "ironpython", "ironpython_studio", "python", "visual_studio" ]
stackoverflow_0000253018_ironpython_ironpython_studio_python_visual_studio.txt
Q: Looking for a generic Python script to add a field and populate the field with conditions
I am looking for a script to allow users to add a text field to a .dbf table (e.g. landuse categories) and allow them to input/update the rows based on what values in the GRIDCODE (numeric categories) field they think should be assigned to text categories, i.e. if the GRIDCODE value is 4, the corresponding field value of landuse/landclass is “forest”, etc.
Is there such a script in existence? Or do you have something similar that I can customise to create a new script? The script will accept users' interactive input as parameters passed into the script.
Sincerely,
David
A: When you say dbf table, are you referring to ESRI shape file dbf files, which are in fact dbase files? If so you could implement such a thing pretty easily with the python wrapper for shapelib, which also supports dbf files.
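A: The core of such a script is just a lookup table from GRIDCODE values to landuse labels. The sketch below shows only that classification logic; the mapping values and the record-access style are invented for illustration, and the actual read/write calls depend on whichever dbf library you use (e.g. the shapelib Python wrapper mentioned above):
# Hypothetical user-supplied mapping from numeric GRIDCODE to landuse label.
gridcode_to_landuse = {
    1: "water",
    2: "urban",
    4: "forest",
}

def classify(records):
    # 'records' stands in for whatever row objects your dbf library returns;
    # here each record is assumed to behave like a dict.
    for record in records:
        code = record["GRIDCODE"]
        record["LANDUSE"] = gridcode_to_landuse.get(code, "unknown")
    return records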
Looking for a generic Python script to add a field and populate the field with conditions
I am looking for a script to allow users to add a text field to a .dbf table (e.g. landuse categories) and allow them to input/update the rows based on what values in the GRIDCODE (numeric categories) field they think should be assigned to text categories, i.e. if the GRIDCODE value is 4, the corresponding field value of landuse/landclass is “forest”, etc.
Is there such a script in existence? Or do you have something similar that I can customise to create a new script? The script will accept users' interactive input as parameters passed into the script.
Sincerely,
David
[ "When you say dbf table, are you referring to ESRI shape file dbf files, which are in fact dbase files? If so you could implement such a thing pretty easily with the python wrapper for shapelib, which also supports dbf files.\n" ]
[ 2 ]
[]
[]
[ "python", "sql", "sql_update" ]
stackoverflow_0000253761_python_sql_sql_update.txt
Q: What Python bindings are there for CVS or SVN? I once did a cursory search and found no good CVS bindings for Python. I wanted to be able to write helper scripts to do some fine-grained manipulation of the repository and projects in it. I had to resort to using popen and checking stdout and stderr and then parsing those. It was messy and error-prone. Are there any good quality modules for CVS integration for Python? Which module do you prefer and why? While I am at it, is there a good Subversion integration module for Python? My understanding is that Subversion has a great API for such things. A: For cvs, pyCVS may be worth a look. For svn, there is pysvn, which is pretty good. A: Tailor, a Python program which lets different version control systems interoperate, simply calls the external programs cvs and svn when working with repositories of those formats. This seems pretty ugly, but reduces Tailor's dependencies from "requires ____ bindings" to "requires working system".
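A: To give a feel for the Subversion side, pysvn's Client object wraps the common operations. This is only a sketch: the repository URL is invented, and error handling (pysvn.ClientError) is omitted for brevity:
import pysvn

client = pysvn.Client()
# Check out a working copy, then bring it up to date later on.
client.checkout('http://svn.example.com/repos/trunk', './working-copy')
client.update('./working-copy')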
What Python bindings are there for CVS or SVN?
I once did a cursory search and found no good CVS bindings for Python. I wanted to be able to write helper scripts to do some fine-grained manipulation of the repository and projects in it. I had to resort to using popen and checking stdout and stderr and then parsing those. It was messy and error-prone. Are there any good quality modules for CVS integration for Python? Which module do you prefer and why? While I am at it, is there a good Subversion integration module for Python? My understanding is that Subversion has a great API for such things.
[ "For cvs, pyCVS may be worth a look.\nFor svn, there is pysvn, which is pretty good.\n", "Tailor, a Python program which lets different version control systems interoperate, simply calls the external programs cvs and svn when working with repositories of those formats. This seems pretty ugly, but reduces Tailor's dependencies from \"requires ____ bindings\" to \"requires working system\".\n" ]
[ 8, 1 ]
[]
[]
[ "cvs", "python", "svn", "version_control" ]
stackoverflow_0000253375_cvs_python_svn_version_control.txt
Q: Bizarre python ImportError
Here's my setup: a Mac, running OS X Tiger. Windows XP running in a virtual machine (Parallels). Windows XP has my Mac home directory mapped as a network drive.
I have two files in a directory of my Mac home directory:

foo.py
pass

test.py
import foo

If I run test.py from within my virtual machine by typing 'python test.py', I get this:

Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import foo
ImportError: No module named foo

If I try to import foo from the console (running python under Windows from the same directory), all is well:

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo
>>>

If I run test.py with Mac python, all is well.
If I copy test.py and foo.py to a different directory, I can run test.py under Windows without problems.
There is an __init__.py in the original directory, but it is empty. Furthermore, copying it with the other files doesn't break anything in the previous paragraph.
There are no python-related environment variables set.
Any ideas?
A: Add import sys; print sys.path to the start of test.py. See what it prints out in the failing case. If "." isn't on the list, that may be your problem.
A: As a random guess: are the permissions on foo.py accessible from the windows client? (eg try opening with notepad from the virtual machine).
If that's OK, try running:
python -v -v test.py

and looking at the output (alternatively, set PYTHONVERBOSE=2). This should list all the places it tries to import foo from. Comparing it with a similar trace on the working machine may give some further clues.
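A: A small diagnostic along the lines of the first answer, combining the checks into one script you can run in both the working and failing setups (plain CPython 2.x; nothing here is specific to the question's files):
import os
import sys

print "cwd:       ", os.getcwd()
print "sys.path:  ", sys.path
print "PYTHONPATH:", os.environ.get("PYTHONPATH")
print "files here:", os.listdir(".")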
Bizarre python ImportError
Here's my setup: a Mac, running OS X Tiger. Windows XP running in a virtual machine (Parallels). Windows XP has my Mac home directory mapped as a network drive.
I have two files in a directory of my Mac home directory:

foo.py
pass

test.py
import foo

If I run test.py from within my virtual machine by typing 'python test.py', I get this:

Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import foo
ImportError: No module named foo

If I try to import foo from the console (running python under Windows from the same directory), all is well:

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo
>>>

If I run test.py with Mac python, all is well.
If I copy test.py and foo.py to a different directory, I can run test.py under Windows without problems.
There is an __init__.py in the original directory, but it is empty. Furthermore, copying it with the other files doesn't break anything in the previous paragraph.
There are no python-related environment variables set.
Any ideas?
[ "Add import sys; print sys.path to the start of test.py. See what it prints out in the failing case. If \".\" isn't on the list, that may be your problem.\n", "As a random guess: are the permissions on foo.py accessable from the windows client? (eg try opening with notepad from the virtual machine).\nIf that's OK, try running:\npython -v -v test.py\n\nand looking at the output (alternatively, set PYTHONVERBOSE=2). This should list all the places it tries to import foo from. Comparing it with a similar trace on the working machine may give some further clues.\n" ]
[ 2, 1 ]
[]
[]
[ "import", "python", "windows_xp" ]
stackoverflow_0000252287_import_python_windows_xp.txt