content | title | question | answers | answers_scores | non_answers | non_answers_scores | tags | name
---|---|---|---|---|---|---|---|---
stringlengths 85–101k | stringlengths 0–150 | stringlengths 15–48k | sequence | sequence | sequence | sequence | sequence | stringlengths 35–137
Q:
Why is "if not someobj:" better than "if someobj == None:" in Python?
I've seen several examples of code like this:
if not someobj:
#do something
But I'm wondering, why not do this:
if someobj == None:
#do something
Is there any difference? Does one have an advantage over the other?
A:
In the first test, Python tries to convert the object to a bool value if it is not already one. Roughly, we are asking the object: are you meaningful or not? This is done using the following algorithm:
If the object has a __nonzero__ special method (as do numeric built-ins, int and float), it calls this method. It must either return a bool value which is then directly used, or an int value that is considered False if equal to zero.
Otherwise, if the object has a __len__ special method (as do container built-ins, list, dict, set, tuple, ...), it calls this method, considering a container False if it is empty (length is zero).
Otherwise, the object is considered True, unless it is None, in which case it is considered False.
In the second test, the object is compared for equality to None. Here, we are asking the object, "Are you equal to this other value?" This is done using the following algorithm:
If the object has an __eq__ method, it is called, and the return value is then converted to a bool value and used to determine the outcome of the if.
Otherwise, if the object has a __cmp__ method, it is called. This function must return an int indicating the order of the two objects (-1 if self < other, 0 if self == other, +1 if self > other).
Otherwise, the objects are compared for identity (i.e., whether they are references to the same object, as can be tested by the is operator).
There is another test possible using the is operator. We would be asking the object, "Are you this particular object?"
Generally, I would recommend using the first test with non-numerical values, using the test for equality when you want to compare objects of the same nature (two strings, two numbers, ...), and checking for identity only when using sentinel values (None meaning "not initialized" for a member field, for example, or when using the getattr or __getitem__ methods).
To summarize, we have :
>>> class A(object):
... def __repr__(self):
... return 'A()'
... def __nonzero__(self):
... return False
>>> class B(object):
... def __repr__(self):
... return 'B()'
... def __len__(self):
... return 0
>>> class C(object):
... def __repr__(self):
... return 'C()'
... def __cmp__(self, other):
... return 0
>>> class D(object):
... def __repr__(self):
... return 'D()'
... def __eq__(self, other):
... return True
>>> for obj in ['', (), [], {}, 0, 0., A(), B(), C(), D(), None]:
... print '%4s: bool(obj) -> %5s, obj == None -> %5s, obj is None -> %5s' % \
... (repr(obj), bool(obj), obj == None, obj is None)
'': bool(obj) -> False, obj == None -> False, obj is None -> False
(): bool(obj) -> False, obj == None -> False, obj is None -> False
[]: bool(obj) -> False, obj == None -> False, obj is None -> False
{}: bool(obj) -> False, obj == None -> False, obj is None -> False
0: bool(obj) -> False, obj == None -> False, obj is None -> False
0.0: bool(obj) -> False, obj == None -> False, obj is None -> False
A(): bool(obj) -> False, obj == None -> False, obj is None -> False
B(): bool(obj) -> False, obj == None -> False, obj is None -> False
C(): bool(obj) -> True, obj == None -> True, obj is None -> False
D(): bool(obj) -> True, obj == None -> True, obj is None -> False
None: bool(obj) -> False, obj == None -> True, obj is None -> True
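(A side note added for modern readers, not part of the original answer: in Python 3 the __nonzero__ special method was renamed to __bool__, so class A above would be spelled like this.)
>>> class A3(object):
...     def __repr__(self):
...         return 'A3()'
...     def __bool__(self):   # Python 3 name for __nonzero__
...         return False
>>> bool(A3())
False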
A:
These are actually both poor practices. Once upon a time, it was considered OK to casually treat None and False as similar. However, since Python 2.2 this is not the best policy.
First, when you do an if x or if not x kind of test, Python has to implicitly convert x to boolean. The rules for the bool function describe a raft of things which are False; everything else is True. If the value of x wasn't properly boolean to begin with, this implicit conversion isn't really the clearest way to say things.
Before Python 2.2, there was no bool function, so it was even less clear.
Second, you shouldn't really test with == None. You should use is None and is not None.
See PEP 8, Style Guide for Python Code.
- Comparisons to singletons like None should always be done with
'is' or 'is not', never the equality operators.
Also, beware of writing "if x" when you really mean "if x is not None"
-- e.g. when testing whether a variable or argument that defaults to
None was set to some other value. The other value might have a type
(such as a container) that could be false in a boolean context!
How many singletons are there? Five: None, True, False, NotImplemented and Ellipsis. Since you're really unlikely to use NotImplemented or Ellipsis, and you would never say if x is True (because simply if x is a lot clearer), you'll only ever test None.
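(An added sketch illustrating the PEP 8 caveat quoted above; the function name is illustrative.)
def append_item(item, target=None):
    # "if not target:" would wrongly treat a caller's empty list as "no list given"
    if target is None:
        target = []
    target.append(item)
    return target
With this pattern, passing an existing empty list works as expected: append_item(1, existing) appends to the caller's list instead of silently creating a new one.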
A:
Because None is not the only thing that is considered false.
if not False:
print "False is false."
if not 0:
print "0 is false."
if not []:
print "An empty list is false."
if not ():
print "An empty tuple is false."
if not {}:
print "An empty dict is false."
if not "":
print "An empty string is false."
False, 0, (), [], {} and "" are all different from None, so your two code snippets are not equivalent.
Moreover, consider the following:
>>> False == 0
True
>>> False == ()
False
if object: is not an equality check. 0, (), [], None, {}, etc. are all different from each other, but they all evaluate to False.
This is the "magic" behind short circuiting expressions like:
foo = bar and spam or eggs
which is shorthand for:
if bar:
foo = spam
else:
foo = eggs
although you really should write:
foo = spam if bar else eggs
(Note that the and/or idiom above also misbehaves when spam itself is falsy, which is another reason to prefer the conditional expression.)
A:
PEP 8 -- Style Guide for Python Code recommends using is or is not if you are testing for None-ness:
- Comparisons to singletons like None should always be done with
'is' or 'is not', never the equality operators.
On the other hand if you are testing for more than None-ness, you should use the boolean operator.
A:
If you ask
if not spam:
print "Sorry. No SPAM."
the __nonzero__ method of spam gets called. From the Python manual:
__nonzero__(self)
Called to implement truth value testing, and the built-in operation bool(); should return False or True, or their integer equivalents 0 or 1. When this method is not defined, __len__() is called, if it is defined (see below). If a class defines neither __len__() nor __nonzero__(), all its instances are considered true.
If you ask
if spam == None:
print "Sorry. No SPAM here either."
the __eq__ method of spam gets called with the argument None.
For more information on the customization possibilities, have a look at the Python documentation at https://docs.python.org/reference/datamodel.html#basic-customization
A:
These two comparisons serve different purposes. The former checks the boolean value of something; the latter checks for identity with the None value.
A:
For one thing, the first example is shorter and looks nicer. As the other posts note, what you choose also depends on what you really want to do with the comparison.
A:
The answer is "it depends".
I use the first example if I consider 0, "", [] and False (list not exhaustive) to be equivalent to None in this context.
A:
Personally, I choose a consistent approach across languages: I do if (var) (or equivalent) only if var is declared as boolean (or defined as such; in C we don't have a specific type). I even prefix these variables with a b (so it would be bVar, actually) to be sure I won't accidentally use another type here.
I don't really like implicit casting to boolean, even less so when there are numerous, complex rules.
Of course, people will disagree. Some go farther: I see if (bVar == true) in the Java code at my work (too redundant for my taste!), while others love overly compact syntax, going while (line = getNextLine()) (too ambiguous for me).
| Why is "if not someobj:" better than "if someobj == None:" in Python? | I've seen several examples of code like this:
if not someobj:
#do something
But I'm wondering why not doing:
if someobj == None:
#do something
Is there any difference? Does one have an advantage over the other?
| [
"In the first test, Python try to convert the object to a bool value if it is not already one. Roughly, we are asking the object : are you meaningful or not ? This is done using the following algorithm :\n\nIf the object has a __nonzero__ special method (as do numeric built-ins, int and float), it calls this method. It must either return a bool value which is then directly used, or an int value that is considered False if equal to zero.\nOtherwise, if the object has a __len__ special method (as do container built-ins, list, dict, set, tuple, ...), it calls this method, considering a container False if it is empty (length is zero).\nOtherwise, the object is considered True unless it is None in which case, it is considered False.\n\nIn the second test, the object is compared for equality to None. Here, we are asking the object, \"Are you equal to this other value?\" This is done using the following algorithm :\n\nIf the object has a __eq__ method, it is called, and the return value is then converted to a boolvalue and used to determine the outcome of the if.\nOtherwise, if the object has a __cmp__ method, it is called. This function must return an int indicating the order of the two object (-1 if self < other, 0 if self == other, +1 if self > other).\nOtherwise, the object are compared for identity (ie. they are reference to the same object, as can be tested by the is operator).\n\nThere is another test possible using the is operator. We would be asking the object, \"Are you this particular object?\"\nGenerally, I would recommend to use the first test with non-numerical values, to use the test for equality when you want to compare objects of the same nature (two strings, two numbers, ...) and to check for identity only when using sentinel values (None meaning not initialized for a member field for exemple, or when using the getattr or the __getitem__ methods).\nTo summarize, we have :\n>>> class A(object):\n... def __repr__(self):\n... return 'A()'\n... def __nonzero__(self):\n... return False\n\n>>> class B(object):\n... def __repr__(self):\n... return 'B()'\n... def __len__(self):\n... return 0\n\n>>> class C(object):\n... def __repr__(self):\n... return 'C()'\n... def __cmp__(self, other):\n... return 0\n\n>>> class D(object):\n... def __repr__(self):\n... return 'D()'\n... def __eq__(self, other):\n... return True\n\n>>> for obj in ['', (), [], {}, 0, 0., A(), B(), C(), D(), None]:\n... print '%4s: bool(obj) -> %5s, obj == None -> %5s, obj is None -> %5s' % \\\n... (repr(obj), bool(obj), obj == None, obj is None)\n '': bool(obj) -> False, obj == None -> False, obj is None -> False\n (): bool(obj) -> False, obj == None -> False, obj is None -> False\n []: bool(obj) -> False, obj == None -> False, obj is None -> False\n {}: bool(obj) -> False, obj == None -> False, obj is None -> False\n 0: bool(obj) -> False, obj == None -> False, obj is None -> False\n 0.0: bool(obj) -> False, obj == None -> False, obj is None -> False\n A(): bool(obj) -> False, obj == None -> False, obj is None -> False\n B(): bool(obj) -> False, obj == None -> False, obj is None -> False\n C(): bool(obj) -> True, obj == None -> True, obj is None -> False\n D(): bool(obj) -> True, obj == None -> True, obj is None -> False\nNone: bool(obj) -> False, obj == None -> True, obj is None -> True\n\n",
"These are actually both poor practices. Once upon a time, it was considered OK to casually treat None and False as similar. However, since Python 2.2 this is not the best policy.\nFirst, when you do an if x or if not x kind of test, Python has to implicitly convert x to boolean. The rules for the bool function describe a raft of things which are False; everything else is True. If the value of x wasn't properly boolean to begin with, this implicit conversion isn't really the clearest way to say things. \nBefore Python 2.2, there was no bool function, so it was even less clear.\nSecond, you shouldn't really test with == None. You should use is None and is not None.\nSee PEP 8, Style Guide for Python Code. \n\n- Comparisons to singletons like None should always be done with\n 'is' or 'is not', never the equality operators.\n\n Also, beware of writing \"if x\" when you really mean \"if x is not None\"\n -- e.g. when testing whether a variable or argument that defaults to\n None was set to some other value. The other value might have a type\n (such as a container) that could be false in a boolean context!\n\n\nHow many singletons are there? Five: None, True, False, NotImplemented and Ellipsis. Since you're really unlikely to use NotImplemented or Ellipsis, and you would never say if x is True (because simply if x is a lot clearer), you'll only ever test None.\n",
"Because None is not the only thing that is considered false.\nif not False:\n print \"False is false.\"\nif not 0:\n print \"0 is false.\"\nif not []:\n print \"An empty list is false.\"\nif not ():\n print \"An empty tuple is false.\"\nif not {}:\n print \"An empty dict is false.\"\nif not \"\":\n print \"An empty string is false.\"\n\nFalse, 0, (), [], {} and \"\" are all different from None, so your two code snippets are not equivalent.\nMoreover, consider the following:\n>>> False == 0\nTrue\n>>> False == ()\nFalse\n\nif object: is not an equality check. 0, (), [], None, {}, etc. are all different from each other, but they all evaluate to False.\nThis is the \"magic\" behind short circuiting expressions like:\nfoo = bar and spam or eggs\n\nwhich is shorthand for:\nif bar:\n foo = spam\nelse:\n foo = eggs\n\nalthough you really should write:\nfoo = spam if bar else egg\n\n",
"PEP 8 -- Style Guide for Python Code recommends to use is or is not if you are testing for None-ness\n\n- Comparisons to singletons like None should always be done with\n 'is' or 'is not', never the equality operators.\n\n\nOn the other hand if you are testing for more than None-ness, you should use the boolean operator.\n",
"If you ask\nif not spam:\n print \"Sorry. No SPAM.\"\n\nthe __nonzero__ method of spam gets called. From the Python manual:\n\n__nonzero__(self)\n Called to implement truth value testing, and the built-in operation bool(); should return False or True, or their integer equivalents 0 or 1. When this method is not defined, __len__() is called, if it is defined (see below). If a class defines neither __len__() nor __nonzero__(), all its instances are considered true.\n\nIf you ask\nif spam == None:\n print \"Sorry. No SPAM here either.\"\n\nthe __eq__ method of spam gets called with the argument None.\nFor more information of the customization possibilities have a look at the Python documenation at https://docs.python.org/reference/datamodel.html#basic-customization\n",
"These two comparisons serve different purposes. The former checks for boolean value of something, the second checks for identity with None value.\n",
"For one the first example is shorter and looks nicer. As per the other posts what you choose also depends on what you really want to do with the comparison.\n",
"The answer is \"it depends\".\nI use the first example if I consider 0, \"\", [] and False (list not exhaustive) to be equivalent to None in this context.\n",
"Personally, I chose a consistent approach across languages: I do if (var) (or equivalent) only if var is declared as boolean (or defined as such, in C we don't have a specific type). I even prefix these variables with a b (so it would be bVar actually) to be sure I won't accidentally use another type here.\nI don't really like implicit casting to boolean, even less when there are numerous, complex rules.\nOf course, people will disagree. Some go farther, I see if (bVar == true) in the Java code at my work (too redundant for my taste!), others love too much compact syntax, going while (line = getNextLine()) (too ambiguous for me).\n"
] | [
207,
56,
39,
6,
3,
2,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0000100732_python.txt |
Q:
Building Python C extension modules for Windows
I have a C extension module and it would be nice to distribute built binaries. Setuptools makes it easy to build extension modules on OS X and GNU/Linux, since those OSs come with GCC, but I don't know how to do it in Windows.
Would I need to buy a copy of Visual Studio, or does Visual Studio Express work? Can I just use Cygwin or MinGW?
A:
You can use both MinGW and VC++ Express (free, no need to buy it).
See:
http://eli.thegreenplace.net/2008/06/28/compiling-python-extensions-with-distutils-and-mingw/
http://eli.thegreenplace.net/2008/06/27/creating-python-extension-modules-in-c/
A:
Setuptools and distutils don't come with gcc, but they use the same compiler Python was built with. The difference is mostly that on the typical UNIX system that compiler is 'gcc' and you have it installed.
In order to compile extension modules on Windows, you need a compiler for Windows. MSVS will do, even the Express version I believe, but it does have to be the same MSVC++ version as Python was built with. Or you can use Cygwin or MinGW; See the appropriate section of Installing Python Modules.
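(An added illustration, not from the original answers: with a minimal distutils setup.py, choosing MinGW is just a command-line switch. The module name and source file below are hypothetical.)
# setup.py -- minimal build script for a C extension module
from distutils.core import setup, Extension

setup(name='example',
      ext_modules=[Extension('example', sources=['example.c'])])
You would then build with MinGW selected explicitly:
python setup.py build_ext --compiler=mingw32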
| Building Python C extension modules for Windows | I have a C extension module and it would be nice to distribute built binaries. Setuptools makes it easy to build extensions modules on OS X and GNU/Linux, since those OSs come with GCC, but I don't know how to do it in Windows.
Would I need to buy a copy of Visual Studio, or does Visual Studio Express work? Can I just use Cygwin or MinGW?
| [
"You can use both MinGW and VC++ Express (free, no need to buy it).\nSee:\n\nhttp://eli.thegreenplace.net/2008/06/28/compiling-python-extensions-with-distutils-and-mingw/\nhttp://eli.thegreenplace.net/2008/06/27/creating-python-extension-modules-in-c/\n\n",
"Setuptools and distutils don't come with gcc, but they use the same compiler Python was built with. The difference is mostly that on the typical UNIX system that compiler is 'gcc' and you have it installed.\nIn order to compile extension modules on Windows, you need a compiler for Windows. MSVS will do, even the Express version I believe, but it does have to be the same MSVC++ version as Python was built with. Or you can use Cygwin or MinGW; See the appropriate section of Installing Python Modules.\n"
] | [
16,
3
] | [] | [] | [
"python",
"windows"
] | stackoverflow_0000101061_python_windows.txt |
Q:
Is there a zip-like method in .Net?
In Python there is a really neat function called zip which can be used to iterate through two lists at the same time:
list1 = [1, 2, 3]
list2 = ["a", "b", "c"]
for v1, v2 in zip(list1, list2):
print v1 + " " + v2
The above code should produce the following:
1 a
2 b
3 c
I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available.
A:
Update: It is built-in in C# 4 as System.Linq.Enumerable.Zip Method
Here is a C# 3 version:
IEnumerable<TResult> Zip<TResult,T1,T2>
(IEnumerable<T1> a,
IEnumerable<T2> b,
Func<T1,T2,TResult> combine)
{
using (var f = a.GetEnumerator())
using (var s = b.GetEnumerator())
{
while (f.MoveNext() && s.MoveNext())
yield return combine(f.Current, s.Current);
}
}
Dropped the C# 2 version as it was showing its age.
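(An added aside: like Python's zip, the loop above stops as soon as the shorter sequence is exhausted.)
>>> zip([1, 2, 3], ['a', 'b'])
[(1, 'a'), (2, 'b')]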
A:
As far as I know there is not. I wrote one for myself (as well as a few other useful extensions) and put them in a project called NExtension on Codeplex.
Apparently the Parallel extensions for .NET have a Zip function.
Here's a simplified version from NExtension (but please check it out for more useful extension methods):
public static IEnumerable<TResult> Zip<T1, T2, TResult>(this IEnumerable<T1> source1, IEnumerable<T2> source2, Func<T1, T2, TResult> combine)
{
using (IEnumerator<T1> data1 = source1.GetEnumerator())
using (IEnumerator<T2> data2 = source2.GetEnumerator())
while (data1.MoveNext() && data2.MoveNext())
{
yield return combine(data1.Current, data2.Current);
}
}
Usage:
int[] list1 = new int[] {1, 2, 3};
string[] list2 = new string[] {"a", "b", "c"};
foreach (var result in list1.Zip(list2, (i, s) => i.ToString() + " " + s))
Console.WriteLine(result);
A:
Nope, there is no such function in .NET. You have to roll your own. Note that C# doesn't support tuples, so Python-like syntax sugar is missing too.
You can use something like this:
class Pair<T1, T2>
{
public T1 First { get; set;}
public T2 Second { get; set;}
}
static IEnumerable<Pair<T1, T2>> Zip<T1, T2>(IEnumerable<T1> first, IEnumerable<T2> second)
{
if (first.Count() != second.Count())
throw new ArgumentException("Blah blah");
using (IEnumerator<T1> e1 = first.GetEnumerator())
using (IEnumerator<T2> e2 = second.GetEnumerator())
{
while (e1.MoveNext() && e2.MoveNext())
{
yield return new Pair<T1, T2>() {First = e1.Current, Second = e2.Current};
}
}
}
...
var ints = new int[] {1, 2, 3};
var strings = new string[] {"A", "B", "C"};
foreach (var pair in Zip(ints, strings))
{
Console.WriteLine(pair.First + ":" + pair.Second);
}
A:
There's also one in F#:
let zipped = Seq.zip firstEnumeration secondEnumeration
| Is there a zip-like method in .Net? | In Python there is a really neat function called zip which can be used to iterate through two lists at the same time:
list1 = [1, 2, 3]
list2 = ["a", "b", "c"]
for v1, v2 in zip(list1, list2):
print v1 + " " + v2
The above code should produce the following:
1 a
2 b
3 c
I wonder if there is a method like it available in .Net? I'm thinking about writing it myself, but there is no point if it's already available.
| [
"Update: It is built-in in C# 4 as System.Linq.Enumerable.Zip Method\nHere is a C# 3 version:\nIEnumerable<TResult> Zip<TResult,T1,T2>\n (IEnumerable<T1> a,\n IEnumerable<T2> b,\n Func<T1,T2,TResult> combine)\n{\n using (var f = a.GetEnumerator())\n using (var s = b.GetEnumerator())\n {\n while (f.MoveNext() && s.MoveNext())\n yield return combine(f.Current, s.Current);\n }\n}\n\nDropped the C# 2 version as it was showing its age.\n",
"As far as I know there is not. I wrote one for myself (as well as a few other useful extensions and put them in a project called NExtension on Codeplex.\nApparently the Parallel extensions for .NET have a Zip function.\nHere's a simplified version from NExtension (but please check it out for more useful extension methods):\npublic static IEnumerable<TResult> Zip<T1, T2, TResult>(this IEnumerable<T1> source1, IEnumerable<T2> source2, Func<T1, T2, TResult> combine)\n{\n using (IEnumerator<T1> data1 = source1.GetEnumerator())\n using (IEnumerator<T2> data2 = source2.GetEnumerator())\n while (data1.MoveNext() && data2.MoveNext())\n {\n yield return combine(data1.Current, data2.Current);\n }\n}\n\nUsage:\nint[] list1 = new int[] {1, 2, 3};\nstring[] list2 = new string[] {\"a\", \"b\", \"c\"};\n\nforeach (var result in list1.Zip(list2, (i, s) => i.ToString() + \" \" + s))\n Console.WriteLine(result);\n\n",
"Nope, there is no such function in .NET. You have roll out your own. Note that C# doesn't support tuples, so python-like syntax sugar is missing too.\nYou can use something like this:\nclass Pair<T1, T2>\n{\n public T1 First { get; set;}\n public T2 Second { get; set;}\n}\n\nstatic IEnumerable<Pair<T1, T2>> Zip<T1, T2>(IEnumerable<T1> first, IEnumerable<T2> second)\n{\n if (first.Count() != second.Count())\n throw new ArgumentException(\"Blah blah\");\n\n using (IEnumerator<T1> e1 = first.GetEnumerator())\n using (IEnumerator<T2> e2 = second.GetEnumerator())\n {\n while (e1.MoveNext() && e2.MoveNext())\n {\n yield return new Pair<T1, T2>() {First = e1.Current, Second = e2.Current};\n }\n }\n}\n\n...\n\nvar ints = new int[] {1, 2, 3};\nvar strings = new string[] {\"A\", \"B\", \"C\"};\n\nforeach (var pair in Zip(ints, strings))\n{\n Console.WriteLine(pair.First + \":\" + pair.Second);\n}\n\n",
"There's also one in F#:\nlet zipped = Seq.zip firstEnumeration secondEnumation\n"
] | [
26,
8,
6,
2
] | [] | [] | [
".net",
"iteration",
"list",
"python"
] | stackoverflow_0000101174_.net_iteration_list_python.txt |
Q:
Python module functions used in unexpected ways
Based on "Split a string by spaces in Python", which uses shlex.split to split a string with quotes smartly, I would be interested in hearing about other common tasks solved by non-obvious standard library functions.
If this turns into Module of The Week, that's fine too.
A:
I was quite surprised to learn that you could use the bisect module to do a very fast binary search in a sequence. Its documentation doesn't say anything about it:
This module provides support for maintaining a list in sorted order without having to sort the list after each insertion.
The usage is very simple:
>>> import bisect
>>> lst = [4, 7, 10, 23, 25, 100, 103, 201, 333]
>>> bisect.bisect_left(lst, 23)
3
You have to remember, though, that for a one-off lookup it's quicker to scan the list linearly, item by item, than to sort the list and then do a binary search on it: the first option is O(n), the second is O(n log n). Sorting only pays off once you search the same list repeatedly.
A:
Oft overlooked modules, uses and tricks:
collections.defaultdict(): for when you want missing keys in a dict to have a default value.
functools.wraps(): for writing decorators that play nicely with introspection.
posixpath: the os.path module for POSIX systems. You can use it for manipulating POSIX paths (including URI elements) even on Windows and other non-POSIX systems.
ntpath: the os.path module for Windows; usable for manipulation of Windows paths on non-Windows systems.
(also: macpath, for MacOS 9 and earlier, os2emxpath for OS/2 EMX, but I'm not sure if anyone still cares.)
pprint: more structured printing of the repr() of containers makes debugging much easier.
imp: all the tools you need to write your own plugin system or make Python import modules from arbitrary archives.
rlcompleter: getting tab-completion in the normal interactive interpreter. Just do "import readline, rlcompleter; readline.parse_and_bind('tab: complete')"
the PYTHONSTARTUP environment variable: can be set to the path to a file that will be executed (in the main namespace) when entering the interactive interpreter; useful for putting things in like the rlcompleter recipe above.
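(To make the first item above concrete, an added example that is not part of the original answer:)
>>> from collections import defaultdict
>>> counts = defaultdict(int)   # missing keys default to int() == 0
>>> for word in ['spam', 'eggs', 'spam']:
...     counts[word] += 1
...
>>> counts['spam']
2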
A:
I use itertools (especially cycle, repeat, chain) to make Python behave more like R in functional/vector applications. Often this lets me avoid the overhead and complication of NumPy.
# in R, shorter iterables are automatically cycled
# and all functions "apply" in a "map"-like way over lists
> 0:10 + 0:2
[1] 0 2 4 3 5 7 6 8 10 9 11
Python
#Normal python
In [1]: range(10) + range(3)
Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2]
## this code is terrible, but it demos the idea.
from itertools import cycle
def addR(L1,L2):
n = max( len(L1), len(L2))
out = [None,]*n
gen1,gen2 = cycle(L1), cycle(L2)
ii = 0
while ii < n:
out[ii] = gen1.next() + gen2.next()
ii += 1
return out
In [21]: addR(range(10), range(3))
Out[21]: [0, 2, 4, 3, 5, 7, 6, 8, 10, 9]
A:
I found struct.unpack to be a godsend for unpacking binary data formats after I learned of it!
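(An added sketch; the format string and byte string are illustrative.)
>>> import struct
>>> struct.unpack('<IH', '\x01\x00\x00\x00\x02\x00')   # little-endian: one unsigned int, one unsigned short
(1, 2)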
A:
getpass is useful for determining the login name of the current user.
grp allows you to lookup Unix group IDs by name, and vice versa.
dircache might be useful in situations where you're repeatedly polling the contents of a directory.
glob can find filenames matching wildcards like a Unix shell does.
shutil is useful when you need to copy, delete or rename a file.
csv can simplify parsing of delimited text files.
optparse provides a reliable way to parse command line options.
bz2 comes in handy when you need to manipulate a bzip2-compressed file.
urlparse will save you the hassle of breaking up a URL into component parts.
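(One added example from the list above; the file names shown are hypothetical.)
>>> import glob
>>> glob.glob('*.txt')
['notes.txt', 'todo.txt']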
A:
I've found the sched module to be helpful in cron-like activities. It simplifies things a lot. Unfortunately, I found it too late.
A:
Most of the other examples are merely overlooked modules, not unexpected uses of a module.
fnmatch, like shlex, can be applied in unexpected ways. fnmatch is a kind of poor-person's RE, and can be used for more than matching files, it can compare strings with the simplified wild-card patterns.
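(An added illustration of matching plain strings rather than file names:)
>>> import fnmatch
>>> fnmatch.fnmatch('DEBUG: disk full', 'DEBUG:*')
True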
A:
One function I've come to appreciate is string.translate. It's very fast at what it does, and useful anywhere you want to alter or remove characters in a string. I've just used it in a seemingly inapplicable problem and found it beat all the other solutions handily.
The downside is that its API is a bit clunky, but this is improving in Py2.6 / Py3.0.
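(An added sketch of the Python 2-era API the answer refers to; the deletion form with a None table is the 2.6 str API.)
>>> from string import maketrans
>>> 'spam and eggs'.translate(maketrans('ae', 'AE'))   # alter characters
'spAm And Eggs'
>>> 'spam and eggs'.translate(None, 'aeiou')           # remove characters
'spm nd ggs'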
A:
The pickle module is pretty awesome
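(An added round-trip example:)
>>> import pickle
>>> data = {'spam': [1, 2, 3]}
>>> pickle.loads(pickle.dumps(data)) == data
True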
A:
Complex numbers. (complexobject.c defines a class, so technically it's not a module.) Great for 2D coordinates, with easy translations/rotations, etc.
e.g.
TURN_LEFT_90= 1j
TURN_RIGHT_90= -1j
coord= 5+4j # x=5 y=4
print coord*TURN_LEFT_90
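# (added note) prints (-4+5j): multiplying by 1j rotates the point 90 degrees counter-clockwise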
| Python module functions used in unexpected ways | Based on "Split a string by spaces in Python", which uses shlex.split to split a string with quotes smartly, I would be interested in hearing about other common tasks solved by non-obvious standard library functions.
If this turns into Module of The Week, that's fine too.
| [
"I was quite surprised to learn that you could use the bisect module to do a very fast binary search in a sequence. It's documentation doesn't say anything about it:\n\nThis module provides support for maintaining a list in sorted order without having to sort the list after each insertion.\n\nThe usage is very simple:\n>>> import bisect\n>>> lst = [4, 7, 10, 23, 25, 100, 103, 201, 333]\n>>> bisect.bisect_left(lst, 23)\n3\n\nYou have to remember though, that it's quicker to linearly look for something in a list goes item by item, than sorting the list and then doing a binary search on it. The first option is O(n), the second is O(nlogn).\n",
"Oft overlooked modules, uses and tricks:\ncollections.defaultdict(): for when you want missing keys in a dict to have a default value.\nfunctools.wraps(): for writing decorators that play nicely with introspection.\nposixpath: the os.path module for POSIX systems. You can use it for manipulating POSIX paths (including URI elements) even on Windows and other non-POSIX systems.\nntpath: the os.path module for Windows; usable for manipulation of Windows paths on non-Windows systems.\n(also: macpath, for MacOS 9 and earlier, os2emxpath for OS/2 EMX, but I'm not sure if anyone still cares.)\npprint: more structured printing of the repr() of containers makes debugging much easier.\nimp: all the tools you need to write your own plugin system or make Python import modules from arbitrary archives.\nrlcompleter: getting tab-completion in the normal interactive interpreter. Just do \"import readline, rlcompleter; readline.parse_and_bind('tab: complete')\"\nthe PYTHONSTARTUP environment variable: can be set to the path to a file that will be executed (in the main namespace) when entering the interactive interpreter; useful for putting things in like the rlcompleter recipe above.\n",
"I use itertools (especially cycle, repeat, chain) to make python behave more like R and in other functional / vector applications. Often this lets me avoid the overhead and complication of Numpy. \n# in R, shorter iterables are automatically cycled\n# and all functions \"apply\" in a \"map\"-like way over lists\n> 0:10 + 0:2\n [1] 0 2 4 3 5 7 6 8 10 9 11\n\nPython\n #Normal python\n In [1]: range(10) + range(3)\n Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2]\n## this code is terrible, but it demos the idea.\nfrom itertools import cycle\ndef addR(L1,L2):\n n = max( len(L1), len(L2))\n out = [None,]*n\n gen1,gen2 = cycle(L1), cycle(L2)\n ii = 0\n while ii < n:\n out[ii] = gen1.next() + gen2.next()\n ii += 1\n return out\n\nIn [21]: addR(range(10), range(3))\nOut[21]: [0, 2, 4, 3, 5, 7, 6, 8, 10, 9]\n\n",
"I found struct.unpack to be a godsend for unpacking binary data formats after I learned of it!\n",
"getpass is useful for determining the login name of the current user.\ngrp allows you to lookup Unix group IDs by name, and vice versa.\ndircache might be useful in situations where you're repeatedly polling the contents of a directory.\nglob can find filenames matching wildcards like a Unix shell does.\nshutil is useful when you need to copy, delete or rename a file.\ncsv can simplify parsing of delimited text files.\noptparse provides a reliable way to parse command line options.\nbz2 comes in handy when you need to manipulate a bzip2-compressed file.\nurlparse will save you the hassle of breaking up a URL into component parts.\n",
"I've found sched module to be helpful in cron-like activities. It simplifies things a lot. Unfortunately I found it too late. \n",
"Most of the other examples are merely overlooked, not unexpected uses for module.\nfnmatch, like shlex, can be applied in unexpected ways. fnmatch is a kind of poor-person's RE, and can be used for more than matching files, it can compare strings with the simplified wild-card patterns.\n",
"One function I've come to appreciate is string.translate. Its very fast at what it does, and useful anywhere you want to alter or remove characters in a string. I've just used it in a seemingly inapplicable problem and found it beat all the other solutions handily.\nThe downside is that its API is a bit clunky, but this is improving in Py2.6 / Py3.0.\n",
"The pickle module is pretty awesome\n",
"complex numbers. (The complexobject.c defines a class, so technically it's not a module). Great for 2d coordinates, with easy translation/rotations etc\neg.\nTURN_LEFT_90= 1j\nTURN_RIGHT_90= -1j\n\ncoord= 5+4j # x=5 y=4\nprint coord*TURN_LEFT_90\n\n"
] | [
7,
4,
4,
2,
2,
1,
1,
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000092533_python.txt |
Q:
What's a good way to find relative paths in Google App Engine?
So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:
siteroot/
models/
controllers/
controller1/
controller2/
...
templates/
template1/
template2/
...
..etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:
path = os.path.join(os.path.dirname(__file__), 'myPage.html')
...the __file__ property resolves to the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.
The solution I've hacked together feels... hacky:
base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.
I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?
A:
You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.
The correct solution is to either use os.path.split, as you are, or to use something like:
path = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html')
My usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a "getTemplatePath" method that takes the provided filename and joins it with the basename.
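(A minimal sketch of that approach, added here; the class and method names are illustrative, not from the original answer.)
import os

class Controller(object):
    def __init__(self):
        # resolved once, relative to this module's location
        self.template_dir = os.path.join(
            os.path.dirname(__file__), '..', 'templates')

    def getTemplatePath(self, filename):
        return os.path.join(self.template_dir, filename)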
A:
The dirname function returns an absolute path; use relative paths instead. See what the current directory is when your controllers are executed with os.path.abspath(os.path.curdir), and build a path to the templates relative to that location (without the os.path.abspath part, of course).
This will only work if the current directory is somewhere inside siteroot, else you could do something like this:
template_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, "templates")
| What's a good way to find relative paths in Google App Engine? | So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:
siteroot/
models/
controllers/
controller1/
controller2/
...
templates/
template1/
template2/
...
..etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:
path = os.path.join(os.path.dirname(__file__), 'myPage.html')
...the __ file __ property resolves to the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then the 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.
The solution I've hacked together feels... hacky:
base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.
I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?
| [
"You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.\nThe correct solution is to either use os.path.split, as you are, or to use something like:\npath = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html')\n\nMy usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a \"getTemplatePath\" method that takes the provided filename and joins it with the basename.\n",
"The dirname function returns an absolute path, use relative paths. See what is the current directory when your controllers are executed with os.path.abspath(os.path.curdir) and build a path to the templates relative to that location (without the os.path.abspath part of course).\nThis will only work if the current directory is somewhere inside siteroot, else you could do something like this:\ntemplate_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, \"templates\")\n\n"
] | [
4,
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0000061894_google_app_engine_python.txt |
Q:
Sorting a dict on __iter__
I am trying to sort a dict based on its key and return an iterator to the values from within an overridden __iter__ method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys?
A:
How about something like this:
def itersorted(d):
for key in sorted(d):
yield d[key]
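(An added usage example for the generator above:)
>>> d = {'b': 2, 'a': 1, 'c': 3}
>>> list(itersorted(d))
[1, 2, 3]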
A:
By far the easiest approach, and almost certainly the fastest, is something along the lines of:
def sorted_dict(d):
keys = d.keys()
keys.sort()
for key in keys:
yield d[key]
You can't sort without fetching all keys. Fetching all keys into a list and then sorting that list is the most efficient way to do that; list sorting is very fast, and fetching the keys list like that is as fast as it can be. You can then either create a new list of values or yield the values as the example does. Keep in mind that you can't modify the dict if you are iterating over it (the next iteration would fail) so if you want to modify the dict before you're done with the result of sorted_dict(), make it return a list.
A:
def sortedDict(dictobj):
return (value for key, value in sorted(dictobj.iteritems()))
This will create a single intermediate list, the 'sorted()' method returns a real list. But at least it's only one.
| Sorting a dict on __iter__ | I am trying to sort a dict based on its key and return an iterator to the values from within an overridden iter method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys?
| [
"How about something like this:\ndef itersorted(d):\n for key in sorted(d):\n yield d[key]\n\n",
"By far the easiest approach, and almost certainly the fastest, is something along the lines of:\ndef sorted_dict(d):\n keys = d.keys()\n keys.sort()\n for key in keys:\n yield d[key]\n\nYou can't sort without fetching all keys. Fetching all keys into a list and then sorting that list is the most efficient way to do that; list sorting is very fast, and fetching the keys list like that is as fast as it can be. You can then either create a new list of values or yield the values as the example does. Keep in mind that you can't modify the dict if you are iterating over it (the next iteration would fail) so if you want to modify the dict before you're done with the result of sorted_dict(), make it return a list.\n",
"def sortedDict(dictobj):\n return (value for key, value in sorted(dictobj.iteritems()))\n\nThis will create a single intermediate list, the 'sorted()' method returns a real list. But at least it's only one.\n"
] | [
9,
3,
3
] | [
"Assuming you want a default sort order, you can used sorted(list) or list.sort(). If you want your own sort logic, Python lists support the ability to sort based on a function you pass in. For example, the following would be a way to sort numbers from least to greatest (the default behavior) using a function.\ndef compareTwo(a, b):\n if a > b:\n return 1\n if a == b:\n return 0\n if a < b:\n return -1\n\nList.Sort(compareTwo)\nprint a\n\nThis approach is conceptually a bit cleaner than manually creating a new list and appending the new values and allows you to control the sort logic.\n"
] | [
-1
] | [
"optimization",
"python",
"refactoring"
] | stackoverflow_0000102394_optimization_python_refactoring.txt |
Q:
Python implementation of Parsec?
I recently wrote a parser in Python using Ply (it's a python reimplementation of yacc). When I was almost done with the parser I discovered that the grammar I need to parse requires me to do some look up during parsing to inform the lexer. Without doing a look up to inform the lexer I cannot correctly parse the strings in the language.
Given that I can control the state of the lexer from the grammar rules, I think I'll solve my use case using a look-up table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options.
In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python?
EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here.
A:
I believe that pyparsing is based on the same principles as parsec.
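(For a flavor of the combinator style, an added sketch based on pyparsing's canonical hello-world example:)
>>> from pyparsing import Word, alphas
>>> greet = Word(alphas) + ',' + Word(alphas) + '!'
>>> print greet.parseString('Hello, World!')
['Hello', ',', 'World', '!']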
A:
PySec is another monadic parser; I don't know much about it, but it's worth looking at here.
A:
An option you may consider, if an LL parser is OK for you, is to give ANTLR a try; it can generate Python too (actually it is LL(*), as they call it; the * stands for the amount of lookahead it can cope with).
A:
Nothing prevents you from diverting your parser from the "context free" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure that you can parse anything you want with PLY this way.
For a hands-on example, consider a parser for ANSI C written in Python with PLY. It solves the classic C typedef-identifier problem (which makes C's grammar context-sensitive) by populating a symbol table in the parser that is then used by the lexer to resolve symbol names as either types or not.
A:
There's ANTLR, which is LL(*), there's PyParsing, which is more object friendly and is sort of like a DSL, and then there's Parsing which is like OCaml's Menhir.
A:
ANTLR is great and has the added benefit of working across multiple languages.
| Python implementation of Parsec? | I recently wrote a parser in Python using Ply (it's a python reimplementation of yacc). When I was almost done with the parser I discovered that the grammar I need to parse requires me to do some look up during parsing to inform the lexer. Without doing a look up to inform the lexer I cannot correctly parse the strings in the language.
Given than I can control the state of the lexer from the grammar rules I think I'll be solving my use case using a look up table in the parser module, but it may become too difficult to maintain/test. So I want to know about some of the other options.
In Haskell I would use Parsec, a library of parsing functions (known as combinators). Is there a Python implementation of Parsec? Or perhaps some other production quality library full of parsing functionality so I can build a context sensitive parser in Python?
EDIT: All my attempts at context free parsing have failed. For this reason, I don't expect ANTLR to be useful here.
| [
"I believe that pyparsing is based on the same principles as parsec.\n",
"PySec is another monadic parser, I don't know much about it, but it's worth looking at here\n",
"An option you may consider, if an LL parser is ok to you, is to give ANTLR a try, it can generate python too (actually it is LL(*) as they name it, * stands for the quantity of lookahead it can cope with).\n",
"Nothing prevents you for diverting your parser from the \"context free\" path using PLY. You can pass information to the lexer during parsing, and in this way achieve full flexibility. I'm pretty sure that you can parse anything you want with PLY this way.\nFor a hands-on example, consider - it is a parser for ANSI C written in Python with PLY. It solves the classic C typedef - identifier problem (that makes C's grammar non context-sensitive) by populating a symbol table in the parser that is being used in the lexer to resolve symbol names as either types or not.\n",
"There's ANTLR, which is LL(*), there's PyParsing, which is more object friendly and is sort of like a DSL, and then there's Parsing which is like OCaml's Menhir.\n",
"ANTLR is great and has the added benefit of working across multiple languages.\n"
] | [
9,
6,
5,
2,
1,
0
] | [] | [] | [
"combinators",
"parsec",
"parsing",
"python"
] | stackoverflow_0000094952_combinators_parsec_parsing_python.txt |
Q:
Anyone used Dabo for a medium-big project?
We're at the beginning of a new ERP-ish client-server application, developed as a Python rich client. We're currently evaluating Dabo as our main framework and it looks quite nice and easy to use, but I was wondering, has anyone used it for medium-to-big sized projects?
Thanks for your time!
A:
I'm one of the authors of the Dabo framework. One of our users pointed out to me the extremely negative answer you received, and so I thought I had better chime in and clear up some of the incorrect assumptions in the first reply.
Dabo is indeed well-known in the Python community. I have presented it at 3 of the last 4 US PyCons, and we have several hundred users who subscribe to our email lists. Our website (http://dabodev.com) has not had any service interruptions; I don't know why the first responder claimed to have trouble. Support is through our email lists, and we pride ourselves on helping people quickly and efficiently. Many of the newbie questions help us to identify places where our docs are lacking, so we strongly encourage newcomers to ask questions!
Dabo has been around for 4 years. The fact that it is still a few days away from a 0.9 release is more of a reflection of the rather conservative version numbering of my partner, Paul McNett, than any instabilities in the framework. I know of Dabo apps that have been in production since 2006; I have used it for my own projects since 2004. Whatever importance you attach to release numbers, we are at revision 4522, with consistent work being done to add more and more stuff to the framework; refactor and streamline some of the older code, and yes, clean up some bugs.
Please sign up for our free email support list:
http://leafe.com/mailman/listinfo/dabo-users
...and ask any questions you may have about Dabo there. Not many people have discovered Stack Overflow yet, so I wouldn't expect very informed answers here yet. There are several regular contributors there who use Dabo on a daily basis, and are usually more than happy to offer their opinions and their help.
A:
I have no Dabo experience at all, but this question has been at the top of the list for such a long time that I decided to give it a shot:
Framework selection
Assumptions:
medium-to-big project: we're talking about a team of more than 20 people working on something for about a year for the first phase. This is usually an expensive and very important effort for the client.
this project will have significant amount of users (around a hundred) so performance is essential
it's an ERP project so the application will work with large amounts of information
you have no prior Dabo experience in your team
Considerations:
I could not open Dabo project site right now. There seems to be some server problem. That alone would make me think twice about using it for a big project.
It's not a well-known framework. Typing Dabo in Google returns almost no useful results, it does not have a Wikipedia page, all-in-all it's quite obscure. It means that when you will have problems with it (and you will have problems with it) you will have almost no place to go. Your question was unanswered for 8 days on SO, this alone would make me re-consider. If you base your project on an obscure technology you have no previous experience with - it's a huge risk.
You don't have people who know that framework in your team. It means that you have to learn it to get any results at all and to master it will require quite significant amount of time. You will have to factor that time into your project plan. Do you really need it?
What does this framework give you that you cannot do yourself? Quite a lot of time my team tried to use some third-party component or tool only to find that building a custom one would be faster than dealing with third-party problems and limitations. There are brilliant tools available to people nowadays and we would be lost without them - but you have to carefully consider if this tool is one of them
Dabo project version is 0.84. Do you know if they spend time optimising their code for performance at this stage? Did you run any tests to see it will sustain the load you have in your NFRs.
Hope that helps :) Good luck with your project
| Anyone used Dabo for a medium-big project? | We're at the beginning of a new ERP-ish client-server application, developed as a Python rich client. We're currently evaluating Dabo as our main framework and it looks quite nice and easy to use, but I was wondering, has anyone used it for medium-to-big sized projects?
Thanks for your time!
| [
"I'm one of the authors of the Dabo framework. One of our users pointed out to me the extremely negative answer you received, and so I thought I had better chime in and clear up some of the incorrect assumptions in the first reply.\nDabo is indeed well-known in the Python community. I have presented it at 3 of the last 4 US PyCons, and we have several hundred users who subscribe to our email lists. Our website (http://dabodev.com) has not had any service interruptions; I don't know why the first responder claimed to have trouble. Support is through our email lists, and we pride ourselves on helping people quickly and efficiently. Many of the newbie questions help us to identify places where our docs are lacking, so we strongly encourage newcomers to ask questions!\nDabo has been around for 4 years. The fact that it is still a few days away from a 0.9 release is more of a reflection of the rather conservative version numbering of my partner, Paul McNett, than any instabilities in the framework. I know of Dabo apps that have been in production since 2006; I have used it for my own projects since 2004. Whatever importance you attach to release numbers, we are at revision 4522, with consistent work being done to add more and more stuff to the framework; refactor and streamline some of the older code, and yes, clean up some bugs.\nPlease sign up for our free email support list:\nhttp://leafe.com/mailman/listinfo/dabo-users\n...and ask any questions you may have about Dabo there. Not many people have discovered Stack Overflow yet, so I wouldn't expect very informed answers here yet. There are several regular contributors there who use Dabo on a daily basis, and are usually more than happy to offer their opinions and their help.\n",
"I have no Dabo experience at all but this question is on the top of the list fo such a long time that I decided to give it a shot:\nFramework selection\nAssumptions:\n\nmedium-to-big project: we're talking about a team of more than 20 people working on something for about a year for the first phase. This is usually an expensive and very important effort for the client.\nthis project will have significant amount of users (around a hundred) so performance is essential\nit's an ERP project so the application will work with large amounts of information\nyou have no prior Dabo experience in your team\n\nConsiderations:\n\nI could not open Dabo project site right now. There seems to be some server problem. That alone would make me think twice about using it for a big project.\nIt's not a well-known framework. Typing Dabo in Google returns almost no useful results, it does not have a Wikipedia page, all-in-all it's quite obscure. It means that when you will have problems with it (and you will have problems with it) you will have almost no place to go. Your question was unanswered for 8 days on SO, this alone would make me re-consider. If you base your project on an obscure technology you have no previous experience with - it's a huge risk.\nYou don't have people who know that framework in your team. It means that you have to learn it to get any results at all and to master it will require quite significant amount of time. You will have to factor that time into your project plan. Do you really need it?\nWhat does this framework give you that you cannot do yourself? Quite a lot of time my team tried to use some third-party component or tool only to find that building a custom one would be faster than dealing with third-party problems and limitations. There are brilliant tools available to people nowadays and we would be lost without them - but you have to carefully consider if this tool is one of them\nDabo project version is 0.84. Do you know if they spend time optimising their code for performance at this stage? Did you run any tests to see it will sustain the load you have in your NFRs.\n\nHope that helps :) Good luck with your project\n"
] | [
25,
2
] | [] | [] | [
"dabo",
"erp",
"python"
] | stackoverflow_0000056417_dabo_erp_python.txt |
Q:
Are there any "nice to program" GUI toolkits for Python?
I've played around with GTK, TK, wxPython, Cocoa, curses and others. They are all fairly horrible to use.. GTK/TK/wx/curses all seem to basically be direct ports of the appropriate C libraries, and Cocoa basically mandates using both PyObjC and Interface Builder, both of which I dislike..
The Shoes GUI library for Ruby is great.. It's very sensibly designed, and very "rubyish", and borrows some nice-to-use things from web development (like using hex colours codes, or :color => rgb(128,0,0))
As the title says: are there any nice, "Pythonic" GUI toolkits?
A:
Have you looked at Qt/PyQt? Although PyQt is a direct port from the C++ library, I find it much more pythonic and nice to program with compared to the others you listed. It also has very good documentation.
Dabo has a nice ui library implemented on top of wxPython. It's a framework intended mostly for database-centric applications, but the ui library can be used separately.
There are/were several other attempts to create a very pythonic gui as a layer on top of PyGtk or wxPython, such as wax and PyGui, which seem to be "stuck" at various degrees of being complete.
Also, an exhaustive list of Python GUI toolkits can be found here.
A:
Please check out Dabo, our framework for desktop applications. http://dabodev.com
We have wrapped the wxPython toolkit for the UI classes, and replaced their ugly C++ style functions with simple properties. You mentioned assigning color: in Dabo, you would do it very simply, using your choice of:
obj.BackColor = "red"
obj.BackColor = (255, 0, 0)
obj.BackColor = "FF0000"
obj.BackColor = "#FF0000"
Dabo understands all of these, and handles the differences for you automatically.
I am one of the authors of Dabo, and would be happy to answer any other questions that you may have.
--- Ed Leafe
A:
Seconding PyQt. Coupled with the book Rapid GUI Programming with Python and Qt, it's really easy to learn.
A:
I've used Glade with some success, though I didn't manage to wrap my head around creating anything really complex. It has a nice GUI builder and stores the forms as XML files that are loaded dynamically. Kind of like XAML, afaik.
A:
I use pyGtk. I think wxPython is nice but it's too limited, and PyQt is, well, Qt. =)
| Are there any "nice to program" GUI toolkits for Python? | I've played around with GTK, TK, wxPython, Cocoa, curses and others. They are fairly horrible to use. GTK/TK/wx/curses all seem to basically be direct ports of the appropriate C libraries, and Cocoa basically mandates using both PyObjC and Interface Builder, both of which I dislike.
The Shoes GUI library for Ruby is great. It's very sensibly designed, very "rubyish", and borrows some nice-to-use things from web development (like using hex colour codes, or :color => rgb(128,0,0))
As the title says: are there any nice, "Pythonic" GUI toolkits?
| [
"Have you looked at Qt/PyQt? Although PyQt is a direct port from the C++ library, I find it much more pythonic and nice to program with compared to the others you listed. It also has very good documentation.\nDabo has a nice ui library implemented on top of wxPython. It's a framework intended mostly for database-centric applications, but the ui library can be used separately. \nThere are/were several other attempts to create a very pythonic gui as a layer on top of PyGtk or wxPython, such as wax and PyGui, which seem to be \"stuck\" at various degrees of being complete.\nAlso, an exhaustive list of Python GUI toolkits can be found here.\n",
"Please check out Dabo, our framework for desktop applications. http://dabodev.com\nWe have wrapped the wxPython toolkit for the UI classes, and replaced their ugly C++ style functions with simple properties. You mentioned assigning color: in Dabo, you would do it very simply, using your choice of:\nobj.BackColor = \"red\"\nobj.BackColor = (255, 0, 0)\nobj.BackColor = \"FF0000\"\nobj.BackColor = \"#FF0000\"\n\nDabo understands all of these, and handles the differences for you automatically.\nI am one of the authors of Dabo, and would be happy to answer any other questions that you may have.\n--- Ed Leafe\n",
"Seconding PyQt. Coupled with the book Rapid GUI Programming with Python and Qt, it's really easy to learn.\n",
"I've used Glade with some success, though I didn't manage to wrap my head around creating anything really complex. It has a nice GUI builder and stores the forms as xml files that are loaded dynamically. Kind of like XAML afiak.\n",
"I use pyGtk. I think wxPython is nice but it's too limited, and PyQt is, well, Qt. =)\n"
] | [
14,
14,
2,
1,
1
] | [] | [] | [
"python",
"user_interface"
] | stackoverflow_0000035922_python_user_interface.txt |
Q:
Why unicode() uses str() on my object only with no encoding given?
I start by creating a string variable with some non-ascii utf-8 encoded data on it:
>>> text = 'á'
>>> text
'\xc3\xa1'
>>> text.decode('utf-8')
u'\xe1'
Using unicode() on it raises errors...
>>> unicode(text)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
...but if I know the encoding I can use it as second parameter:
>>> unicode(text, 'utf-8')
u'\xe1'
>>> unicode(text, 'utf-8') == text.decode('utf-8')
True
Now if I have a class that returns this text in the __str__() method:
>>> class ReturnsEncoded(object):
... def __str__(self):
... return text
...
>>> r = ReturnsEncoded()
>>> str(r)
'\xc3\xa1'
unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above:
>>> unicode(r)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
Until now everything is as planned!
But as no one would ever expect, unicode(r, 'utf-8') won't even try:
>>> unicode(r, 'utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found
Why? Why this inconsistent behavior? Is it a bug? Is it intended? Very awkward.
A:
The behaviour does seem confusing, but intentional. I reproduce here the entirety of the unicode documentation from the Python Built-In Functions documentation (for version 2.5.2, as I write this):
unicode([object[, encoding [, errors]]])
Return the Unicode string version of object using one of the following modes:
If encoding and/or errors are given, unicode() will decode the
object which can either be an 8-bit string or a character buffer
using the codec for encoding. The encoding parameter is a string
giving the name of an encoding; if the encoding is not known,
LookupError is raised. Error handling is done according to
errors; this specifies the treatment of characters which are
invalid in the input encoding. If errors is 'strict' (the
default), a ValueError is raised on errors, while a value of
'ignore' causes errors to be silently ignored, and a value of
'replace' causes the official Unicode replacement character,
U+FFFD, to be used to replace input characters which cannot be
decoded. See also the codecs module.
If no optional parameters are given, unicode() will mimic the
behaviour of str() except that it returns Unicode strings
instead of 8-bit strings. More precisely, if object is a Unicode
string or subclass it will return that Unicode string without
any additional decoding applied.
For objects which provide a __unicode__() method, it will call
this method without arguments to create a Unicode string. For
all other objects, the 8-bit string version or representation is
requested and then converted to a Unicode string using the codec
for the default encoding in 'strict' mode.
New in version 2.0. Changed in version 2.2: Support for __unicode__() added.
So, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the __str__() method, and attempts to decode that using the utf-8 codec. Without the utf-8, the unicode() function looks for a __unicode__() method on your object, and not finding it, calls the __str__() method, as you suggested, attempting to use the default codec to convert to unicode.
A:
unicode does not guess the encoding of your text. If your object can print itself as unicode, define the __unicode__() method that returns a Unicode string.
The secret is that unicode(r) is not actually calling __str__() itself. Instead, it's looking for a __unicode__() method. The default implementation of __unicode__() will call __str__() and then attempt to decode it using the ascii charset. When you pass the encoding, unicode() expects the first object to be something that can be decoded -- that is, an instance of basestring.
"Behavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error..."
That's because when you specify "utf-8", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode.
I do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.
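To make that concrete, here is a minimal Python 2 sketch of the fix; text is the same UTF-8 byte string from the question:
text = '\xc3\xa1'  # UTF-8 bytes for the accented character in the question

class ReturnsEncoded(object):
    def __str__(self):
        return text                  # 8-bit UTF-8 string, as before
    def __unicode__(self):
        return text.decode('utf-8')  # a real unicode object

r = ReturnsEncoded()
print repr(unicode(r))  # u'\xe1' -- no UnicodeDecodeError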
| Why unicode() uses str() on my object only with no encoding given? | I start by creating a string variable with some non-ascii utf-8 encoded data on it:
>>> text = 'á'
>>> text
'\xc3\xa1'
>>> text.decode('utf-8')
u'\xe1'
Using unicode() on it raises errors...
>>> unicode(text)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
...but if I know the encoding I can use it as second parameter:
>>> unicode(text, 'utf-8')
u'\xe1'
>>> unicode(text, 'utf-8') == text.decode('utf-8')
True
Now if I have a class that returns this text in the __str__() method:
>>> class ReturnsEncoded(object):
... def __str__(self):
... return text
...
>>> r = ReturnsEncoded()
>>> str(r)
'\xc3\xa1'
unicode(r) seems to use str() on it, since it raises the same error as unicode(text) above:
>>> unicode(r)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0:
ordinal not in range(128)
Until now everything is as planned!
But as no one would ever expect, unicode(r, 'utf-8') won't even try:
>>> unicode(r, 'utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, ReturnsEncoded found
Why? Why this inconsistent behavior? Is it a bug? is it intended? Very awkward.
| [
"The behaviour does seem confusing, but intensional. I reproduce here the entirety of the unicode documentation from the Python Built-In Functions documentation (for version 2.5.2, as I write this):\n\nunicode([object[, encoding [, errors]]])\nReturn the Unicode string version of object using one of the following modes:\nIf encoding and/or errors are given, unicode() will decode the\n object which can either be an 8-bit string or a character buffer\n using the codec for encoding. The encoding parameter is a string\n giving the name of an encoding; if the encoding is not known,\n LookupError is raised. Error handling is done according to\n errors; this specifies the treatment of characters which are\n invalid in the input encoding. If errors is 'strict' (the\n default), a ValueError is raised on errors, while a value of\n 'ignore' causes errors to be silently ignored, and a value of\n 'replace' causes the official Unicode replacement character,\n U+FFFD, to be used to replace input characters which cannot be\n decoded. See also the codecs module.\nIf no optional parameters are given, unicode() will mimic the\n behaviour of str() except that it returns Unicode strings\n instead of 8-bit strings. More precisely, if object is a Unicode\n string or subclass it will return that Unicode string without\n any additional decoding applied.\nFor objects which provide a __unicode__() method, it will call\n this method without arguments to create a Unicode string. For\n all other objects, the 8-bit string version or representation is\n requested and then converted to a Unicode string using the codec\n for the default encoding in 'strict' mode.\nNew in version 2.0. Changed in version 2.2: Support for __unicode__() added. \n\nSo, when you call unicode(r, 'utf-8'), it requires an 8-bit string or a character buffer as the first argument, so it coerces your object using the __str__() method, and attempts to decode that using the utf-8 codec. Without the utf-8, the unicode() function looks for a for a __unicode__() method on your object, and not finding it, calls the __str__() method, as you suggested, attempting to use the default codec to convert to unicode.\n",
"unicode does not guess the encoding of your text. If your object can print itself as unicode, define the __unicode__() method that returns a Unicode string.\n\nThe secret is that unicode(r) is not actually calling __str__() itself. Instead, it's looking for a __unicode__() method. The default implementation of __unicode__() will call __str__() and then attempt to decode it using the ascii charset. When you pass the encoding, unicode() expects the first object to be something that can be decoded -- that is, an instance of basestring.\n\n\nBehavior is weird because it tries to decode as ascii if I don't pass 'utf-8'. But if I pass 'utf-8' it gives a different error...\n\nThat's because when you specify \"utf-8\", it treats the first parameter as a string-like object to be decoded. Without it, it treats the parameter as an object to be coerced to unicode.\nI do not understand the confusion. If you know that the object's text attribute will always be UTF-8 encoded, just define __unicode__() and then everything will work fine.\n"
] | [
8,
5
] | [] | [] | [
"encoding",
"python",
"unicode"
] | stackoverflow_0000106630_encoding_python_unicode.txt |
Q:
PLY: Token shifting problem in C parser
I'm writing a C parser using PLY, and recently ran into a problem.
This code:
typedef int my_type;
my_type x;
Is correct C code, because my_type is defined as a type before
being used as one. I handle it by filling a type symbol table in the
parser that gets used by the lexer to differentiate between types and
simple identifiers.
However, while the type declaration rule ends with SEMI (the ';' token), PLY shifts the token my_type from the second line before deciding it's done with the first one. Because of this, I have no chance to pass the update in the type symbol table to the lexer and it
sees my_type as an identifier and not a type.
Any ideas for a fix?
The full code is at: http://code.google.com/p/pycparser/source/browse/trunk/src/c_parser.py
Not sure how I can create a smaller example out of this.
Edit:
Problem solved. See my solution below.
A:
Not sure why you're doing that level of analysis in your lexer.
Lexical analysis should probably be used to separate the input stream into lexical tokens (number, line-change, keyword and so on). It's the parsing phase that should be doing that level of analysis, including table lookups for typedefs and such.
That's the way I've always separated the duties between lexx and yacc, my tools of choice.
A:
With some help from Dave Beazley (PLY's creator), my problem was solved.
The idea is to use special sub-rules and do the actions in them. In my case, I split the declaration rule to:
def p_decl_body(self, p):
""" decl_body : declaration_specifiers init_declarator_list_opt
"""
# <<Handle the declaration here>>
def p_declaration(self, p):
""" declaration : decl_body SEMI
"""
p[0] = p[1]
decl_body is always reduced before the token after SEMI is shifted in, so my action gets executed at the correct time.
A:
I think you need to move the check for whether an ID is a TYPEID from c_lexer.py to c_parser.py.
As you said, since the parser is looking ahead 1 token, you can't make that decision in the lexer.
Instead, alter your parser to check ID's to see if they are TYPEID's in declarations, and, if they aren't, generate an error.
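A hedged sketch of what that check might look like (the rule name, the typedef table and the error helper below are invented for illustration, not taken from pycparser):
def p_type_specifier_typeid(self, p):
    """ type_specifier : ID """
    if p[1] not in self.typedefs:  # hypothetical typedef symbol table
        self._parse_error(         # hypothetical error helper
            '%s is not a type name' % p[1], p.lineno(1))
    p[0] = ('typeid', p[1])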
As Pax Diablo said in his excellent answer, the lexer/tokenizer's job isn't to make those kinds of decisions about tokens. That's the parser's job.
| PLY: Token shifting problem in C parser | I'm writing a C parser using PLY, and recently ran into a problem.
This code:
typedef int my_type;
my_type x;
Is correct C code, because my_type is defined as a type before
being used as one. I handle it by filling a type symbol table in the
parser that gets used by the lexer to differentiate between types and
simple identifiers.
However, while the type declaration rule ends with SEMI (the ';' token), PLY shifts the token my_type from the second line before deciding it's done with the first one. Because of this, I have no chance to pass the update in the type symbol table to the lexer and it
sees my_type as an identifier and not a type.
Any ideas for a fix?
The full code is at: http://code.google.com/p/pycparser/source/browse/trunk/src/c_parser.py
Not sure how I can create a smaller example out of this.
Edit:
Problem solved. See my solution below.
| [
"Not sure why you're doing that level of analysis in your lexer.\nLexical analysis should probably be used to separate the input stream into lexical tokens (number, line-change, keyword and so on). It's the parsing phase that should be doing that level of analysis, including table lookups for typedefs and such.\nThat's the way I've always separated the duties between lexx and yacc, my tools of choice.\n",
"With some help from Dave Beazley (PLY's creator), my problem was solved.\nThe idea is to use special sub-rules and do the actions in them. In my case, I split the declaration rule to:\ndef p_decl_body(self, p):\n \"\"\" decl_body : declaration_specifiers init_declarator_list_opt\n \"\"\"\n # <<Handle the declaration here>> \n\ndef p_declaration(self, p):\n \"\"\" declaration : decl_body SEMI \n \"\"\"\n p[0] = p[1]\n\ndecl_body is always reduced before the token after SEMI is shifted in, so my action gets executed at the correct time.\n",
"I think you need to move the check for whether an ID is a TYPEID from c_lexer.py to c_parser.py.\nAs you said, since the parser is looking ahead 1 token, you can't make that decision in the lexer.\nInstead, alter your parser to check ID's to see if they are TYPEID's in declarations, and, if they aren't, generate an error.\nAs Pax Diablo said in his excellent answer, the lexer/tokenizer's job isn't to make those kinds of decisions about tokens. That's the parser's job.\n"
] | [
3,
2,
1
] | [] | [] | [
"parsing",
"ply",
"python",
"yacc"
] | stackoverflow_0000108009_parsing_ply_python_yacc.txt |
Q:
Topological sort, recursive, using generators
Data: a dependency list, already verified to be acyclic. So here, 'a' depends on 'b','c' (c depends on d), etc...
A = { 'a' : dict(b=1, c=1),
'c' : dict(d=1),
'd' : dict(e=1,f=1,g=1),
'h' : dict(j=1)
}
I'd like to have a top-down, recursive solution to let's say, find the chain starting at
'a': a, c, d, e, g, f, b
So, right now (a non-generator solution):
def get_all(D,k):
L = []
def get2(D,k):
L.append(k)
for ii in D.get(k,[]):
get2(D, ii)
get2(D,k)
return L
Obviously, this is pretty weak :) I've been banging my head about how to get yields inside there, and I'd appreciate any py-foo y'all can bring to this.
A:
Both answers give the same result, but if my reading of the question is correct, both give the wrong answer to a simple alteration of the given graph - if you add a dependency on 'c' from 'b' (which doesn't introduce a cycle, as the graph is directed) the output is:
a
c
d
e
g
f
b
d
e
g
f
which isn't totally helpful. Try this small variation, which keeps track of which nodes of the graph have already been visited:
def get_all(D, k, seen=None):
if not seen:
seen = set( )
if k not in seen:
seen.add(k)
yield k
for ii in D.get(k, []):
for jj in get_all(D, ii, seen):
yield jj
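A quick sanity check of this version, using the graph from the question plus the extra 'b' -> 'c' dependency discussed above:
A = { 'a' : dict(b=1, c=1),
      'b' : dict(c=1),  # the added dependency
      'c' : dict(d=1),
      'd' : dict(e=1,f=1,g=1),
      'h' : dict(j=1)
    }

print list(get_all(A, 'a'))  # every reachable node now appears exactly once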
A:
Try this:
#!/usr/bin/env python
def get_all(D, k):
yield k
for ii in D.get(k, []):
for jj in get_all(D, ii):
yield jj
A = { 'a' : dict(b=1, c=1),
'c' : dict(d=1),
'd' : dict(e=1,f=1,g=1),
'h' : dict(j=1)
}
for ii in get_all(A,'a'):
print ii
Gives me
steve@rei:~/code/tmp
$ python recur.py
a
c
d
e
g
f
b
| Topological sort, recursive, using generators | Data: a dependency list, already verified to be acyclic. So here, 'a' depends on 'b','c' (c depends on d), etc...
A = { 'a' : dict(b=1, c=1),
'c' : dict(d=1),
'd' : dict(e=1,f=1,g=1),
'h' : dict(j=1)
}
I'd like to have a top-down, recursive solution to let's say, find the chain starting at
'a': a, c, d, e, g, f, b
So, right now (a non-generator solution):
def get_all(D,k):
L = []
def get2(D,k):
L.append(k)
for ii in D.get(k,[]):
get2(D, ii)
get2(D,k)
return L
Obviously, this is pretty weak :) I've been banging my head about how to get yields inside there, and I'd appreciate any py-foo y'all can bring to this.
| [
"Both answers give the same result, but if my reading of the question is correct give the wrong answer to a simple alteration to the given graph - if you add a dependency on 'c' from 'b' (which doesn't introduce a cycle as the graph is directed) the output is: \na\nc\nd\ne\ng\nf\nb\nd\ne\ng\nf\n\nwhich isn't totally helpful. Try this small variation, which keeps track of which nodes of the graph have already been visited:\ndef get_all(D, k, seen=None):\n if not seen:\n seen = set( )\n if k not in seen:\n seen.add(k)\n yield k\n for ii in D.get(k, []):\n for jj in get_all(D, ii, seen):\n yield jj\n\n",
"Try this:\n#!/usr/bin/env python\n\ndef get_all(D, k):\n yield k\n for ii in D.get(k, []):\n for jj in get_all(D, ii):\n yield jj\n\nA = { 'a' : dict(b=1, c=1),\n 'c' : dict(d=1),\n 'd' : dict(e=1,f=1,g=1),\n 'h' : dict(j=1)\n }\n\nfor ii in get_all(A,'a'):\n print ii\n\nGives me \n\nsteve@rei:~/code/tmp\n$ python recur.py\na\nc\nd\ne\ng\nf\nb\n\n"
] | [
6,
4
] | [] | [] | [
"generator",
"python",
"recursion",
"topology"
] | stackoverflow_0000108586_generator_python_recursion_topology.txt |
Q:
Python signal woes: SIGQUIT handler delays execution if SIGQUIT received during execution of another signal handler?
The following program is very simple: it outputs a single dot each half a second. If it receives a SIGQUIT, it proceeds to output ten Qs. If it receives a SIGTSTP (Ctrl-Z), it outputs ten Zs.
If it receives a SIGTSTP while printing Qs, it will print ten Zs after it's done with the ten Qs. This is a good thing.
However, if it receives a SIGQUIT while printing Zs, it fails to print Qs after them. Instead, it prints them out only after I manually terminate execution via a KeyboardInterrupt. I want the Qs to be printed immediately after the Zs.
This happens using Python 2.3.
What am I doing wrong?
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout
def write(text):
stdout.write(text)
stdout.flush()
def process_quit(signum, frame):
for i in range(10):
write("Q")
sleep(0.5)
def process_tstp(signum, frame):
for i in range(10):
write("Z")
sleep(0.5)
signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)
while 1:
write('.')
sleep(0.5)
A:
Your larger problem is blocking in signal handlers.
This is usually discouraged since it can lead to strange timing conditions. But it's not quite the cause of your problem since the timing condition you're vulnerable to exists because of your choice of signal handlers.
Anyway, here's how to at least minimize the timing condition by only setting flags in your handlers and leaving the main while loop to do the actual work. The explanation for why your code is behaving strangely is described after the code.
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout
print_Qs = 0
print_Zs = 0
def write(text):
stdout.write(text)
stdout.flush()
def process_quit(signum, frame):
global print_Qs
print_Qs = 10
def process_tstp(signum, frame):
global print_Zs
print_Zs = 10
signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)
while 1:
if print_Zs:
print_Zs -= 1
c = 'Z'
elif print_Qs:
print_Qs -= 1
c = 'Q'
else:
c = '.'
write(c)
sleep(0.5)
Anyway, here's what's going on.
SIGTSTP is more special than SIGQUIT.
SIGTSTP masks the other signals from being delivered while its signal handler is running. When the kernel goes to deliver SIGQUIT and sees that SIGTSTP's handler is still running, it simply saves it for later. Once another signal comes through for delivery, such as SIGINT when you CTRL+C (aka KeyboardInterrupt), the kernel remembers that it never delivered SIGQUIT and delivers it now.
You will notice that if you change while 1: to for i in range(60): in the main loop and do your test case again, the program will exit without running the SIGTSTP handler, since exit doesn't re-trigger the kernel's signal delivery mechanism.
Good luck!
A:
On Python 2.5.2 on Linux 2.6.24, your code works exactly as you describe your desired results (if a signal is received while still processing a previous signal, the new signal is processed immediately after the first one is finished).
On Python 2.4.4 on Linux 2.6.16, I see the problem behavior you describe.
I don't know whether this is due to a change in Python or in the Linux kernel.
| Python signal woes: SIGQUIT handler delays execution if SIGQUIT received during execution of another signal handler? | The following program is very simple: it outputs a single dot each half a second. If it receives a SIGQUIT, it proceeds to output ten Qs. If it receives a SIGTSTP (Ctrl-Z), it outputs ten Zs.
If it receives a SIGTSTP while printing Qs, it will print ten Zs after it's done with the ten Qs. This is a good thing.
However, if it receives a SIGQUIT while printing Zs, it fails to print Qs after them. Instead, it prints them out only after I manually terminate execution via a KeyboardInterrupt. I want the Qs to be printed immediately after the Zs.
This happens using Python 2.3.
What am I doing wrong?
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout
def write(text):
stdout.write(text)
stdout.flush()
def process_quit(signum, frame):
for i in range(10):
write("Q")
sleep(0.5)
def process_tstp(signum, frame):
for i in range(10):
write("Z")
sleep(0.5)
signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)
while 1:
write('.')
sleep(0.5)
| [
"Your larger problem is blocking in signal handlers.\nThis is usually discouraged since it can lead to strange timing conditions. But it's not quite the cause of your problem since the timing condition you're vulnerable to exists because of your choice of signal handlers.\nAnyway, here's how to at least minimize the timing condition by only setting flags in your handlers and leaving the main while loop to do the actual work. The explanation for why your code is behaving strangely is described after the code.\n#!/usr/bin/python\n\nfrom signal import *\nfrom time import sleep\nfrom sys import stdout\n\nprint_Qs = 0\nprint_Zs = 0\n\ndef write(text):\n stdout.write(text)\n stdout.flush()\n\ndef process_quit(signum, frame):\n global print_Qs\n print_Qs = 10\n\ndef process_tstp(signum, frame):\n global print_Zs\n print_Zs = 10\n\nsignal(SIGQUIT, process_quit)\nsignal(SIGTSTP, process_tstp)\n\nwhile 1:\n if print_Zs:\n print_Zs -= 1\n c = 'Z'\n elif print_Qs:\n print_Qs -= 1\n c = 'Q'\n else:\n c = '.'\n write(c)\n sleep(0.5)\n\nAnyway, here's what's going on.\nSIGTSTP is more special than SIGQUIT.\nSIGTSTP masks the other signals from being delivered while its signal handler is running. When the kernel goes to deliver SIGQUIT and sees that SIGTSTP's handler is still running, it simply saves it for later. Once another signal comes through for delivery, such as SIGINT when you CTRL+C (aka KeyboardInterrupt), the kernel remembers that it never delivered SIGQUIT and delivers it now.\nYou will notice if you change while 1: to for i in range(60): in the main loop and do your test case again, the program will exit without running the SIGTSTP handler since exit doesn't re-trigger the kernel's signal delivery mechanism.\nGood luck!\n",
"On Python 2.5.2 on Linux 2.6.24, your code works exactly as you describe your desired results (if a signal is received while still processing a previous signal, the new signal is processed immediately after the first one is finished).\nOn Python 2.4.4 on Linux 2.6.16, I see the problem behavior you describe.\nI don't know whether this is due to a change in Python or in the Linux kernel.\n"
] | [
6,
1
] | [] | [] | [
"python",
"signals"
] | stackoverflow_0000109705_python_signals.txt |
Q:
How does one decrypt a PDF with an owner password, but no user password?
Although the PDF specification is available from Adobe, it's not exactly the simplest document to read through. PDF allows documents to be encrypted so that either a user password and/or an owner password is required to do various things with the document (display, print, etc). A common use is to lock a PDF so that end users can read it without entering any password, but a password is required to do anything else.
I'm trying to parse PDFs that are locked in this way (to get the same privileges as you would get opening them in any reader). Using an empty string as the user password doesn't work, but it seems (section 3.5.2 of the spec) that there has to be a user password to create the hash for the owner password.
What I would like is either an explanation of how to do this, or any code that I can read (ideally Python, C, or C++, but anything readable will do) that does this so that I can understand what I'm meant to be doing. Standalone code, rather than reading through (e.g.) the gsview source, would be best.
A:
A plugin for GSview for viewing encrypted PDFs is here.
If this works for you, you may be able to look at the source.
A:
If I remember correctly, there is a fixed padding string of 32 bytes to apply to any password. All passwords need to be exactly 32 bytes at the start of computing the encryption key, either by truncating or by appending some of those padding bytes.
If no user password was set you simply have to pad with all 32 bytes of the string, i.e. use the 32 padding bytes as the starting point for computing the encryption key.
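A minimal sketch of that padding step; PAD below is the fixed 32-byte padding string given in the PDF Reference (Algorithm 3.2), so double-check the bytes against the spec before relying on them:
PAD = ('\x28\xbf\x4e\x5e\x4e\x75\x8a\x41\x64\x00\x4e\x56\xff\xfa\x01\x08'
       '\x2e\x2e\x00\xb6\xd0\x68\x3e\x80\x2f\x0c\xa9\xfe\x64\x53\x69\x7a')

def pad_password(password=''):
    # A missing user password is treated as the empty string, so the
    # result is simply the padding string itself.
    return (password + PAD)[:32]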
I have to admit it's been a while since I've done this, I do remember that the encryption part of the PDF is an absolute mess as it got changed significantly in nearly every revision, requiring you to cope with a lot of cases to handle all PDF's.
Good luck.
A:
xpdf is probably a good reference implementation for this sort of problem. I have successfully used it to open encrypted PDFs before.
| How does one decrypt a PDF with an owner password, but no user password? | Although the PDF specification is available from Adobe, it's not exactly the simplest document to read through. PDF allows documents to be encrypted so that either a user password and/or an owner password is required to do various things with the document (display, print, etc). A common use is to lock a PDF so that end users can read it without entering any password, but a password is required to do anything else.
I'm trying to parse PDFs that are locked in this way (to get the same privileges as you would get opening them in any reader). Using an empty string as the user password doesn't work, but it seems (section 3.5.2 of the spec) that there has to be a user password to create the hash for the owner password.
What I would like is either an explanation of how to do this, or any code that I can read (ideally Python, C, or C++, but anything readable will do) that does this so that I can understand what I'm meant to be doing. Standalone code, rather than reading through (e.g.) the gsview source, would be best.
| [
"A plugin for GSview for viewing encrypted PDFs is here.\nIf this works for you, you may be able to look at the source.\n",
"If I remember correctly, there is a fixed padding string of 32 (?) bytes to apply to any password. All passwords need to be 32 bytes at the start of computing the encryption key, either by truncating or adding some of those padding bytes.\nIf no user password was set you simply have to pad with all 32 bytes of the string, i.e. use the 32 padding bytes as the starting point for computing the encryption key.\nI have to admit it's been a while since I've done this, I do remember that the encryption part of the PDF is an absolute mess as it got changed significantly in nearly every revision, requiring you to cope with a lot of cases to handle all PDF's.\nGood luck.\n",
"xpdf is probably a good reference implementation for this sort of problem. I have successfully used them to open encrypted pdfs before.\n"
] | [
1,
1,
0
] | [] | [] | [
"c++",
"encryption",
"passwords",
"pdf",
"python"
] | stackoverflow_0000049455_c++_encryption_passwords_pdf_python.txt |
Q:
How do you load an embedded icon from an exe file with PyWin32?
I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe:
windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ...
I tried loading the icon using:
hinst = win32api.GetModuleHandle(None)
hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
But this produces a (very unspecific) error:
pywintypes.error: (0, 'LoadImage', 'No error message is available')
If I try specifying 0 as a string
hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
then I get the error:
pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')
So, what's the correct method/syntax to load the icon?
Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.
A:
@efotinis: You're right.
Here is a workaround until py2exe gets fixed and you don't want to include the same icon twice:
hicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True)
Be aware that 1 is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID automatically assigned by py2exe to each icon in each icon group. At least that's how I understand it.
If you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32:
icon_res = win32api.LoadResource(None, win32con.RT_ICON, 1)
hicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True,
0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR)
A:
If you're using wxPython, you can use the following simple code:
wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
I usually have code that checks whether it's running from an EXE or not, and acts accordingly:
def get_app_icon():
if hasattr(sys, "frozen") and getattr(sys, "frozen") == "windows_exe":
return wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
else:
return wx.Icon("gfx/myapp.ico", wx.BITMAP_TYPE_ICO)
A:
Well, well... I installed py2exe and I think it's a bug. In py2exe_util.c they should init rt_icon_id to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage.
I'll notify the developers about this if it's not already a known issue.
A workaround, in the meantime, would be to include the same icon twice in your setup.py:
'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')]
You can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)
A:
You should set the icon ID to something other than 0:
'icon_resources': [(42, 'my_icon.ico')]
Windows resource IDs must be between 1 and 32767.
| How do you load an embedded icon from an exe file with PyWin32? | I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe:
windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ...
I tried loading the icon using:
hinst = win32api.GetModuleHandle(None)
hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
But this produces a (very unspecific) error:
pywintypes.error: (0, 'LoadImage', 'No error message is available')
If I try specifying 0 as a string
hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
then I get the error:
pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')
So, what's the correct method/syntax to load the icon?
Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.
| [
"@efotinis: You're right. \nHere is a workaround until py2exe gets fixed and you don't want to include the same icon twice:\nhicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True)\n\nBe aware that 1 is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID automatically assigned by py2exe to each icon in each icon group. At least that's how I understand it.\nIf you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32:\nicon_res = win32api.LoadResource(None, win32con.RT_ICON, 1)\nhicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True,\n 0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR)\n\n",
"If you're using wxPython, you can use the following simple code:\nwx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)\n\nI usually have code that checks whether it's running from an EXE or not, and acts accordingly:\ndef get_app_icon():\n if hasattr(sys, \"frozen\") and getattr(sys, \"frozen\") == \"windows_exe\":\n return wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)\n else:\n return wx.Icon(\"gfx/myapp.ico\", wx.BITMAP_TYPE_ICO)\n\n",
"Well, well... I installed py2exe and I think it's a bug. In py2exe_util.c they should init rt_icon_id to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage.\nI'll notify the developers about this if it's not already a known issue.\nA workaround, in the meantime, would be to include the same icon twice in your setup.py:\n'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')]\n\nYou can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)\n",
"You should set the icon ID to something other than 0:\n'icon_resources': [(42, 'my_icon.ico')]\n\nWindows resource IDs must be between 1 and 32767.\n"
] | [
5,
1,
1,
0
] | [] | [] | [
"exe",
"icons",
"python",
"pywin32"
] | stackoverflow_0000090775_exe_icons_python_pywin32.txt |
Q:
Report generation
I am writing a web app using TurboGears, and in that app the users must be able to generate different reports.
The data the reports need is stored in a database (MySQL).
The reports must be returned either as an easily printable HTML document, or a PDF file.
I have previously used jasper and iReport for creating the documents, but if I can avoid having to fire up Java to create the report I would be happy.
Ideally I would like to specify reports in an easily readable markup language and then just feed the template data with some kind of library that produces the report.
I gladly accept any kind of hints on how I should go about generating these reports!
A:
Pod is my favorite solution to your problem.
A:
You can build some fancy PDFs from Python with the ReportLab toolkit.
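A minimal ReportLab sketch (the file name, text and coordinates are placeholders):
from reportlab.pdfgen import canvas

c = canvas.Canvas('report.pdf')
c.drawString(72, 720, 'Monthly report')  # x, y are points from the bottom-left
c.showPage()
c.save()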
A:
A partial answer: the easily readable format you are looking for might be DocBook. From there it is very easy to go to PDF, html, RTF, etc. etc.
| Report generation | I am writing a web app using TurboGears, and in that app the users must be able to generate different reports.
The data the reports need is stored in a database (MySQL).
The reports must be returned either as an easily printable HTML document, or a PDF file.
I have previously used jasper and iReport for creating the documents, but if I can avoid having to fire up Java to create the report I would be happy.
Ideally I would like to specify reports in an easily readable markup language and then just feed the template data with some kind of library that produces the report.
I gladly accept any kind of hints on how I should go about generating these reports!
| [
"Pod is my favorite solution to your problem.\n",
"You can build some fancy PDFs from Python with the ReportLab toolkit.\n",
"A partial answer: the easily readable format you are looking for might be DocBook. From there it is very easy to go to PDF, html, RTF, etc. etc.\n"
] | [
5,
2,
1
] | [] | [] | [
"python",
"report"
] | stackoverflow_0000110760_python_report.txt |
Q:
Distributed python
What is the best python framework to create distributed applications? For example to build a P2P app.
A:
I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want:
Twisted
A:
You probably want Twisted. There is a P2P framework for Twisted called "Vertex". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained.
A:
You could check out pyprocessing, which will be included in the standard library (as multiprocessing) as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading.
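A small sketch of that threading-like API (shown with the multiprocessing name it takes in the 2.6 standard library):
from multiprocessing import Process

def work(n):
    print 'worker', n

if __name__ == '__main__':
    workers = [Process(target=work, args=(i,)) for i in range(4)]
    for p in workers:
        p.start()   # each runs in its own OS process
    for p in workers:
        p.join()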
A:
You could download the source of BitTorrent for starters and see how they did it.
http://download.bittorrent.com/dl/
A:
If it's something where you're going to need tons of threads and need better concurrent performance, check out Stackless Python. Otherwise you could just use the SOAP or XML-RPC protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on the BitTorrent protocol.
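To make the XML-RPC suggestion concrete, here is a bare-bones Python 2 stdlib sketch (the host, port and function are arbitrary placeholders):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def ping():
    return 'pong'

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(ping)
server.serve_forever()

A client would then call it with xmlrpclib.ServerProxy('http://localhost:8000').ping().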
| Distributed python | What is the best python framework to create distributed applications? For example to build a P2P app.
| [
"I think you mean \"Networked Apps\"? Distributed means an app that can split its workload among multiple worker clients over the network.\nYou probably want.\nTwisted\n",
"You probably want Twisted. There is a P2P framework for Twisted called \"Vertex\". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained.\n",
"You could checkout pyprocessing which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading.\n",
"You could download the source of BitTorrent for starters and see how they did it.\nhttp://download.bittorrent.com/dl/\n",
"If it's something where you're going to need tons of threads and need better concurrent performance, check out Stackless Python. Otherwise you could just use the SOAP or XML-RPC protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on the BitTorrent protocol.\n"
] | [
9,
3,
2,
1,
1
] | [] | [] | [
"distributed",
"python"
] | stackoverflow_0000094334_distributed_python.txt |
Q:
How do I use genshi.builder to programmatically build an HTML document?
I recently discovered the genshi.builder module. It reminds me of Divmod Nevow's Stan module. How would one use genshi.builder.tag to build an HTML document with a particular doctype? Or is this even a good thing to do? If not, what is the right way?
A:
It's not possible to build an entire page using just genshi.builder.tag -- you would need to perform some surgery on the resulting stream to insert the doctype. Besides, the resulting code would look horrific. The recommended way to use Genshi is to use a separate template file, generate a stream from it, and then render that stream to the output type you want.
genshi.builder.tag is mostly useful for when you need to generate simple markup from within Python, such as when you're building a form or doing some sort of logic-heavy modification of the output.
See documentation for:
Creating and using templates
The XML-based template language
genshi.builder API docs
If you really want to generate a full document using only builder.tag, this (completely untested) code could be a good starting point:
from itertools import chain
from genshi.core import DOCTYPE, Stream
from genshi.output import DocType
from genshi.builder import tag as t
# Build the page using `genshi.builder.tag`
page = t.html (t.head (t.title ("Hello world!")), t.body (t.div ("Body text")))
# Convert the page element into a stream
stream = page.generate ()
# Chain the page stream with a stream containing only an HTML4 doctype declaration
stream = Stream (chain ([(DOCTYPE, DocType.get ('html4'), None)], stream))
# Convert the stream to text using the "html" renderer (could also be xml, xhtml, text, etc)
text = stream.render ('html')
The resulting page will have no whitespace in it -- it'll look normal, but you'll have a hard time reading the source code because it will be entirely on one line. Implementing appropriate filters to add whitespace is left as an exercise to the reader.
A:
Genshi.builder is for "programmatically generating markup streams"[1]. I believe the purpose of it is as a backend for the templating language. You're probably looking for the templating language for generating a whole page.
You can, however do the following:
>>> import genshi.output
>>> genshi.output.DocType('html')
('html', '-//W3C//DTD HTML 4.01//EN', 'http://www.w3.org/TR/html4/strict.dtd')
See other Doctypes here: http://genshi.edgewall.org/wiki/ApiDocs/genshi.output#genshi.output:DocType
[1] genshi.builder.__doc__
| How do I use genshi.builder to programmatically build an HTML document? | I recently discovered the genshi.builder module. It reminds me of Divmod Nevow's Stan module. How would one use genshi.builder.tag to build an HTML document with a particular doctype? Or is this even a good thing to do? If not, what is the right way?
| [
"It's not possible to build an entire page using just genshi.builder.tag -- you would need to perform some surgery on the resulting stream to insert the doctype. Besides, the resulting code would look horrific. The recommended way to use Genshi is to use a separate template file, generate a stream from it, and then render that stream to the output type you want.\ngenshi.builder.tag is mostly useful for when you need to generate simple markup from within Python, such as when you're building a form or doing some sort of logic-heavy modification of the output.\nSee documentation for:\n\nCreating and using templates\nThe XML-based template language\ngenshi.builder API docs\n\nIf you really want to generate a full document using only builder.tag, this (completely untested) code could be a good starting point:\nfrom itertools import chain\nfrom genshi.core import DOCTYPE, Stream\nfrom genshi.output import DocType\nfrom genshi.builder import tag as t\n\n# Build the page using `genshi.builder.tag`\npage = t.html (t.head (t.title (\"Hello world!\")), t.body (t.div (\"Body text\")))\n\n# Convert the page element into a stream\nstream = page.generate ()\n\n# Chain the page stream with a stream containing only an HTML4 doctype declaration\nstream = Stream (chain ([(DOCTYPE, DocType.get ('html4'), None)], stream))\n\n# Convert the stream to text using the \"html\" renderer (could also be xml, xhtml, text, etc)\ntext = stream.render ('html')\n\nThe resulting page will have no whitespace in it -- it'll look normal, but you'll have a hard time reading the source code because it will be entirely on one line. Implementing appropriate filters to add whitespace is left as an exercise to the reader.\n",
"Genshi.builder is for \"programmatically generating markup streams\"[1]. I believe the purpose of it is as a backend for the templating language. You're probably looking for the templating language for generating a whole page.\nYou can, however do the following:\n>>> import genshi.output\n>>> genshi.output.DocType('html')\n('html', '-//W3C//DTD HTML 4.01//EN', 'http://www.w3.org/TR/html4/strict.dtd')\n\nSee other Doctypes here: http://genshi.edgewall.org/wiki/ApiDocs/genshi.output#genshi.output:DocType\n[1] genshi.builder.__doc__\n\n"
] | [
6,
3
] | [] | [] | [
"genshi",
"html",
"python",
"templates"
] | stackoverflow_0000112564_genshi_html_python_templates.txt |
Q:
How would one log into a phpBB3 forum through a Python script using urllib, urllib2 and ClientCookie?
(ClientCookie is a module for (automatic) cookie-handling: http://wwwsearch.sourceforge.net/ClientCookie)
# I encode the data I'll be sending:
data = urllib.urlencode({'username': 'mandark', 'password': 'deedee'})
# And I send it and read the page:
page = ClientCookie.urlopen('http://www.forum.com/ucp.php?mode=login', data)
output = page.read()
The script doesn't log in, but rather seems to get redirected back to the same login page asking it for a username and password. What am I doing wrong?
Any help would be greatly appreciated! Thanks!
A:
Have you tried fetching the login page first?
I would suggest using Tamper Data to have a peek at exactly what's being sent when you request the login page and then log in normally using a web browser from a fresh start, with no initial cookies in place, so that your script can replicate it exactly.
That's the approach I used when writing the following, extracted from a script which needs to log in to an Invision Power Board forum, using cookielib and urllib2 - you may find it useful as a reference.
import cookielib
import logging
import sys
import urllib
import urllib2
cookies = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))
urllib2.install_opener(opener)
headers = {
'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-GB; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12',
'Accept': 'text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5',
'Accept-Language': 'en-gb,en;q=0.5',
'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',
}
# Fetch the login page to set initial cookies
urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=00', None, headers))
# Login so we can access the Off Topic forum
login_headers = headers.copy()
login_headers.update({
'Referer': 'http://www.rllmukforum.com/index.php?act=Login&CODE=00',
'Content-Type': 'application/x-www-form-urlencoded',
})
html = urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=01',
urllib.urlencode({
'referer': 'http://www.rllmukforum.com/index.php?',
'UserName': RLLMUK_USERNAME,
'PassWord': RLLMUK_PASSWORD,
}),
login_headers)).read()
if 'The following errors were found' in html:
logging.error('RLLMUK login failed')
logging.info(html)
sys.exit(1)
A:
I'd recommend taking a look at the mechanize library; it's designed for precisely this type of task. It's also far easier than doing it by hand.
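A rough mechanize sketch (the form index and field names are assumptions -- inspect the actual phpBB login form for the real ones):
import mechanize

br = mechanize.Browser()
br.open('http://www.forum.com/ucp.php?mode=login')
br.select_form(nr=0)  # assume the login form is the first form on the page
br['username'] = 'mandark'
br['password'] = 'deedee'
br.submit()
print br.geturl()     # should no longer be the login page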
| How would one log into a phpBB3 forum through a Python script using urllib, urllib2 and ClientCookie? | (ClientCookie is a module for (automatic) cookie-handling: http://wwwsearch.sourceforge.net/ClientCookie)
# I encode the data I'll be sending:
data = urllib.urlencode({'username': 'mandark', 'password': 'deedee'})
# And I send it and read the page:
page = ClientCookie.urlopen('http://www.forum.com/ucp.php?mode=login', data)
output = page.read()
The script doesn't log in, but rather seems to get redirected back to the same login page asking it for a username and password. What am I doing wrong?
Any help would be greatly appreciated! Thanks!
| [
"Have you tried fetching the login page first?\nI would suggest using Tamper Data to have a peek at exactly what's being sent when you request the login page and then log in normally using a web browser from a fresh start, with no initial cookies in place, so that your script can replicate it exactly.\nThat's the approach I used when writing the following, extracted from a script which needs to login to an Invision Power Board forum, using cookielib and urllib2 - you may find it useful as a reference.\nimport cookielib\nimport logging\nimport sys\nimport urllib\nimport urllib2\n\ncookies = cookielib.LWPCookieJar()\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))\nurllib2.install_opener(opener)\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.0; en-GB; rv:1.8.1.12) Gecko/20080201 Firefox/2.0.0.12',\n 'Accept': 'text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5',\n 'Accept-Language': 'en-gb,en;q=0.5',\n 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',\n}\n\n# Fetch the login page to set initial cookies\nurllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=00', None, headers))\n\n# Login so we can access the Off Topic forum\nlogin_headers = headers.copy()\nlogin_headers.update({\n 'Referer': 'http://www.rllmukforum.com/index.php?act=Login&CODE=00',\n 'Content-Type': 'application/x-www-form-urlencoded',\n})\nhtml = urllib2.urlopen(urllib2.Request('http://www.rllmukforum.com/index.php?act=Login&CODE=01',\n urllib.urlencode({\n 'referer': 'http://www.rllmukforum.com/index.php?',\n 'UserName': RLLMUK_USERNAME,\n 'PassWord': RLLMUK_PASSWORD,\n }),\n login_headers)).read()\nif 'The following errors were found' in html:\n logging.error('RLLMUK login failed')\n logging.info(html)\n sys.exit(1)\n\n",
"I'd recommend taking a look at the mechanize library; it's designed for precisely this type of task. It's also far easier than doing it by hand.\n"
] | [
2,
0
] | [] | [] | [
"post",
"python",
"urllib"
] | stackoverflow_0000112768_post_python_urllib.txt |
Q:
Writing to the windows logs in Python
Is it possible to write to the windows logs in python?
A:
Yes, just use the Python for Windows Extensions (pywin32), as stated here.
import win32evtlogutil
win32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory,
EventType, Inserts, Data, SID)
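A concrete example call (the application name, event ID and message strings are made up for illustration):
import win32evtlog
import win32evtlogutil

win32evtlogutil.ReportEvent(
    'MyApplication',  # event source shown in the Event Viewer
    1,                # event ID
    eventCategory=0,
    eventType=win32evtlog.EVENTLOG_INFORMATION_TYPE,
    strings=['Something noteworthy happened'])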
| Writing to the windows logs in Python | Is it possible to write to the windows logs in python?
| [
"Yes, just use Windows Python Extension, as stated here.\nimport win32evtlogutil\nwin32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory,\n EventType, Inserts, Data, SID)\n\n"
] | [
20
] | [] | [] | [
"logging",
"python",
"windows"
] | stackoverflow_0000113007_logging_python_windows.txt |
Q:
Python - When to use file vs open
What's the difference between file and open in Python? When should I use which one? (Say I'm in 2.5)
A:
You should always use open().
As the documentation states:
When opening a file, it's preferable
to use open() instead of invoking this
constructor directly. file is more
suited to type testing (for example,
writing "isinstance(f, file)").
Also, file() has been removed since Python 3.0.
A:
Two reasons: The python philosophy of "There ought to be one way to do it" and file is going away.
file is the actual type (using e.g. file('myfile.txt') is calling its constructor). open is a factory function that will return a file object.
In python 3.0 file is going to move from being a built-in to being implemented by multiple classes in the io library (somewhat similar to Java with buffered readers, etc.)
A:
file() is a type, like an int or a list. open() is a function for opening files, and will return a file object.
This is an example of when you should use open:
f = open(filename, 'r')
for line in f:
process(line)
f.close()
This is an example of when you should use file:
class LoggingFile(file):
def write(self, data):
sys.stderr.write("Wrote %d bytes\n" % len(data))
super(LoggingFile, self).write(data)
As you can see, there's a good reason for both to exist, and a clear use-case for both.
A:
Functionally, the two are the same; open will call file anyway, so currently the difference is a matter of style. The Python docs recommend using open.
When opening a file, it's preferable to use open() instead of invoking the file constructor directly.
The reason is that in future versions the two are not guaranteed to be the same (open will become a factory function, which returns objects of different types depending on the path it's opening).
A:
Only ever use open() for opening files. file() is actually being removed in 3.0, and it's deprecated at the moment. They've had a sort of strange relationship, but file() is going now, so there's no need to worry anymore.
The following is from the Python 2.6 docs. [bracket stuff] added by me.
When opening a file, it’s preferable to use open() instead of invoking this [file()] constructor directly. file is more suited to type testing (for example, writing isinstance(f, file)).
A:
According to Mr Van Rossum, although open() is currently an alias for file(), you should use open() because this might change in the future.
| Python - When to use file vs open | What's the difference between file and open in Python? When should I use which one? (Say I'm in 2.5)
| [
"You should always use open().\nAs the documentation states:\n\nWhen opening a file, it's preferable\n to use open() instead of invoking this\n constructor directly. file is more\n suited to type testing (for example,\n writing \"isinstance(f, file)\").\n\nAlso, file() has been removed since Python 3.0.\n",
"Two reasons: The python philosophy of \"There ought to be one way to do it\" and file is going away.\nfile is the actual type (using e.g. file('myfile.txt') is calling its constructor). open is a factory function that will return a file object.\nIn python 3.0 file is going to move from being a built-in to being implemented by multiple classes in the io library (somewhat similar to Java with buffered readers, etc.)\n",
"file() is a type, like an int or a list. open() is a function for opening files, and will return a file object.\nThis is an example of when you should use open:\nf = open(filename, 'r')\nfor line in f:\n process(line)\nf.close()\n\nThis is an example of when you should use file:\nclass LoggingFile(file):\n def write(self, data):\n sys.stderr.write(\"Wrote %d bytes\\n\" % len(data))\n super(LoggingFile, self).write(data)\n\nAs you can see, there's a good reason for both to exist, and a clear use-case for both.\n",
"Functionally, the two are the same; open will call file anyway, so currently the difference is a matter of style. The Python docs recommend using open. \n\nWhen opening a file, it's preferable to use open() instead of invoking the file constructor directly. \n\nThe reason is that in future versions they is not guaranteed to be the same (open will become a factory function, which returns objects of different types depending on the path it's opening).\n",
"Only ever use open() for opening files. file() is actually being removed in 3.0, and it's deprecated at the moment. They've had a sort of strange relationship, but file() is going now, so there's no need to worry anymore.\nThe following is from the Python 2.6 docs. [bracket stuff] added by me.\n\nWhen opening a file, it’s preferable to use open() instead of invoking this [file()] constructor directly. file is more suited to type testing (for example, writing isinstance(f, file)\n\n",
"According to Mr Van Rossum, although open() is currently an alias for file() you should use open() because this might change in the future.\n"
] | [
157,
33,
19,
7,
4,
2
] | [] | [] | [
"file",
"python"
] | stackoverflow_0000112970_file_python.txt |
Q:
What are the Python equivalents of the sighold and sigrelse functions found in C?
It appears the Python signal module doesn't have anything similar to the sighold and sigrelse functions found in C, using signal.h. Are there Python equivalents of any sort?
Many thanks!
A:
There is no way to "block" signals temporarily from critical sections (since this is not supported by all Unix flavors).
https://docs.python.org/library/signal.html
A:
There are no direct bindings for this in Python. Accessing them through ctypes is easy enough; here is an example.
import ctypes, signal
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.sighold(signal.SIGKILL)
libc.sigrelse(signal.SIGKILL)
I'm not familiar with the use of these calls, but be aware that Python's signal handlers work differently than C's. When Python code is attached to a signal callback, the signal is caught on the C side of the interpreter and queued. The interpreter is occasionally interrupted for internal housekeeping (and thread switching, etc.). It is during that interrupt that the Python handler for the signal is called.
All that to say, just be aware that Python's signal handling is a little less asynchronous than normal C signal handlers.
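If the goal is only to keep a critical section from being interrupted by Python-level handlers, a pure-Python workaround is to swap in a handler that merely records deliveries, then redeliver afterwards. This is a sketch, not sighold/sigrelse proper; the hold/release names are illustrative, and note that SIGKILL (used in the ctypes example above) can never actually be caught or held, so SIGTERM is used here instead:
import os
import signal

_pending = []

def _record(signum, frame):
    _pending.append(signum)

def hold(signum=signal.SIGTERM):
    # Swap in a handler that just records deliveries.
    global _old_handler
    _old_handler = signal.signal(signum, _record)

def release(signum=signal.SIGTERM):
    # Restore the old handler, then redeliver anything recorded.
    signal.signal(signum, _old_handler)
    while _pending:
        os.kill(os.getpid(), _pending.pop(0))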
| What are the Python equivalents of the sighold and sigrelse functions found in C? | It appears the Python signal module doesn't have anything similar to the sighold and sigrelse functions found in C, using signal.h. Are there Python equivalents of any sort?
Many thanks!
| [
"There is no way to ``block'' signals temporarily from critical sections (since this is not supported by all Unix flavors).\nhttps://docs.python.org/library/signal.html\n",
"There are no direct bindings for this in Python. Accessing them through ctypes is easy enough; here is an example.\nimport ctypes, signal\nlibc = ctypes.cdll.LoadLibrary(\"libc.so.6\")\nlibc.sighold(signal.SIGKILL)\nlibc.sigrelse(signal.SIGKILL)\n\nI'm not familiar with the use of these calls, but be aware that Python's signal handlers work differently than C. When Python code is attached to a signal callback, the signal is caught on the C side of the interpreter and queued. The interpreter is occasionally interrupted for internal housekeeping (and thread switching, etc). It is during that interrupt the Python handler for the signal will be called.\nAll that to say, just be aware that Python's signal handling is a little less asynchronous than normal C signal handlers.\n"
] | [
2,
2
] | [] | [] | [
"python",
"signals"
] | stackoverflow_0000113170_python_signals.txt |
Q:
How can I unit test responses from the webapp WSGI application in Google App Engine?
I'd like to unit test responses from the Google App Engine webapp.WSGIApplication, for example request the url '/' and test that the response's status code is 200, using GAEUnit. How can I do this?
I'd like to use the webapp framework and GAEUnit, which runs within the App Engine sandbox (unfortunately WebTest does not work within the sandbox).
A:
I have added a sample application to the GAEUnit project which demonstrates how to write and execute a web test using GAEUnit. The sample includes a slightly modified version of the 'webtest' module ('import webbrowser' is commented out, as recommended by David Coffin).
Here's the 'web_tests.py' file from the sample application 'test' directory:
import unittest
from webtest import TestApp
from google.appengine.ext import webapp
import index
class IndexTest(unittest.TestCase):
def setUp(self):
self.application = webapp.WSGIApplication([('/', index.IndexHandler)], debug=True)
def test_default_page(self):
app = TestApp(self.application)
response = app.get('/')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, World!' in response)
def test_page_with_param(self):
app = TestApp(self.application)
response = app.get('/?name=Bob')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, Bob!' in response)
A:
Actually WebTest does work within the sandbox, as long as you comment out
import webbrowser
in webtest/__init__.py
| How can I unit test responses from the webapp WSGI application in Google App Engine? | I'd like to unit test responses from the Google App Engine webapp.WSGIApplication, for example request the url '/' and test that the responses status code is 200, using GAEUnit. How can I do this?
I'd like to use the webapp framework and GAEUnit, which runs within the App Engine sandbox (unfortunately WebTest does not work within the sandbox).
| [
"I have added a sample application to the GAEUnit project which demonstrates how to write and execute a web test using GAEUnit. The sample includes a slightly modified version of the 'webtest' module ('import webbrowser' is commented out, as recommended by David Coffin).\nHere's the 'web_tests.py' file from the sample application 'test' directory:\nimport unittest\nfrom webtest import TestApp\nfrom google.appengine.ext import webapp\nimport index\n\nclass IndexTest(unittest.TestCase):\n\n def setUp(self):\n self.application = webapp.WSGIApplication([('/', index.IndexHandler)], debug=True)\n\n def test_default_page(self):\n app = TestApp(self.application)\n response = app.get('/')\n self.assertEqual('200 OK', response.status)\n self.assertTrue('Hello, World!' in response)\n\n def test_page_with_param(self):\n app = TestApp(self.application)\n response = app.get('/?name=Bob')\n self.assertEqual('200 OK', response.status)\n self.assertTrue('Hello, Bob!' in response)\n\n",
"Actually WebTest does work within the sandbox, as long as you comment out \nimport webbrowser\n\nin webtest/__init__.py \n"
] | [
13,
2
] | [] | [] | [
"google_app_engine",
"python",
"unit_testing"
] | stackoverflow_0000107675_google_app_engine_python_unit_testing.txt |
Q:
What is the fastest way to scale and display an image in Python?
I am required to display a two dimensional numpy.array of int16 at 20fps or so. Using Matplotlib's imshow chokes on anything above 10fps. There obviously are some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred.
These are data from a sensor that are supposed to have a real-time display, so the data has to be re-sampled on the fly.
A:
The fastest way to display 30x400 data points is to:
Use OpenGL color arrays
If you can quickly transform your data to what OpenGL understands as color array, you could create a vertex array describing quads, one for each sensor, then update your color array and draw this orthographically on screen.
Use OpenGL textures
If you can quickly transform your datapoints to an opengl texture you can draw one quad with fixed UV coordinates that is bound to this texture.
Use pygame
Pygame has support for conversion of Numpy/Numarray to surfaces, Pygame can then transform such surfaces which involves resampling, after resampling you can blit it on screen.
Misc
pyglet makes dealing with opengl very easy
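To make the pygame route concrete, here is a rough sketch (untested; it assumes pygame and numpy are installed and that screen is an existing display surface) that maps an int16 frame to 8-bit grayscale, wraps it in a surface, then scales and blits it:
import numpy
import pygame

def show_frame(screen, data):
    # Normalize int16 data into the 0-255 range.
    lo, hi = data.min(), data.max()
    span = max(int(hi - lo), 1)
    gray = ((data - lo) * (255.0 / span)).astype('uint8')
    rgb = numpy.dstack([gray] * 3)               # grayscale -> RGB
    surf = pygame.surfarray.make_surface(rgb)    # expects a (w, h, 3) array
    surf = pygame.transform.scale(surf, screen.get_size())
    screen.blit(surf, (0, 0))
    pygame.display.flip()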
| What is the fastest way to scale and display an image in Python? | I am required to display a two dimensional numpy.array of int16 at 20fps or so. Using Matplotlib's imshow chokes on anything above 10fps. There obviously are some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred.
These are data from a sensor that are supposed to have a real-time display, so the data has to be re-sampled on the fly.
| [
"The fastest way to display 30x400 data points is to:\nUse OpenGL color arrays\nIf you can quickly transform your data to what OpenGL understands as color array, you could create a vertex array describing quads, one for each sensor, then update your color array and draw this orthographically on screen.\nUse OpenGL textures\nIf you can quickly transform your datapoints to an opengl texture you can draw one quad with fixed UV coordinates that is bound to this texture.\nUse pygame\nPygame has support for conversion of Numpy/Numarray to surfaces, Pygame can then transform such surfaces which involves resampling, after resampling you can blit it on screen.\nMisc\npyglet makes dealing with opengl very easy\n"
] | [
6
] | [] | [] | [
"animation",
"image_scaling",
"matplotlib",
"python"
] | stackoverflow_0000114597_animation_image_scaling_matplotlib_python.txt |
Q:
How do I overlap widgets with the Tkinter pack geometry manager?
I want to put a Canvas with an image in my window, and then I want to pack widgets on top of it, so the Canvas acts as a background.
Is it possible to have two states for the pack manager: one for one set of widgets and another for another set?
A:
The answer to your specific question is no. You can't have two states or otherwise use pack two different ways in the same parent.
However, what I think you want to accomplish is simple. Use the built-in features of the canvas to create an image item that is part of the canvas, then pack things into the canvas as if it were a frame.
You can accomplish a similar thing by creating a label widget with an image, then pack your other widgets into the label.
One advantage to using a canvas is you can easily tile an image to fill the whole canvas with a repeating background image so as the window grows the image will continue to fill the window (of course you can just use a sufficiently large original image...)
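A minimal sketch of that approach (Python 2 Tkinter; the image path is illustrative, and the PhotoImage reference must be kept alive or the picture disappears when it is garbage-collected):
import Tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack(fill='both', expand=True)

background = tk.PhotoImage(file='background.gif')   # hypothetical file
canvas.create_image(0, 0, image=background, anchor='nw')

# Widgets created with the canvas as their master pack on top of the image.
tk.Button(canvas, text='Click me').pack(pady=20)

root.mainloop()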
A:
I believe that Bryan's answer is probably the best general solution. However, you may also want to look at the place geometry manager. The place geometry manager lets you specify the exact size and position of the widget... which can get tedious quickly, but will get the job done.
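For example, reusing the canvas from the sketch above, place pins a widget at exact coordinates over the background:
tk.Label(canvas, text='Pinned label').place(x=20, y=40)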
A:
... turned out to be unworkable because I wanted to add labels and more canvases to it, but I can't find any way to make their backgrounds transparent
If it is acceptable to load an additional extension, take a look at Tkzinc. From the web site,
Tkzinc (historically called Zinc) widget is very similar to the Tk Canvas in that they both support structured graphics. Like the Canvas, Tkzinc implements items used to display graphical entities. Those items can be manipulated and bindings can be associated with them to implement interaction behaviors. But unlike the Canvas, Tkzinc can structure the items in a hierarchy, has support for scaling and rotation, clipping can be set for sub-trees of the item hierarchy, supports multi-contour curves. It also provides advanced rendering with the help of OpenGL, such as color gradient, antialiasing, transparencies and a triangles item.
I'm currently using it on a tcl project and am quite pleased with the results. Extensions for tcl, perl, and python are available.
A:
Not without swapping widget trees in and out, which I don't think can be done cleanly with Tk. Other toolkits can do this a little more elegantly.
COM/VB/MFC can do this with an ActiveX control - you can hide/show multiple ActiveX controls in the same region. Any of the containers will let you do this by changing the child around. If you're doing a windows-specific program you may be able to accomplish it this way.
QT will also let you do this in a similar manner.
GTK is slightly harder.
| How do I overlap widgets with the Tkinter pack geometry manager? | I want to put a Canvas with an image in my window, and then I want to pack widgets on top of it, so the Canvas acts as a background.
Is it possible to have two states for the pack manager: one for one set of widgets and another for another set?
| [
"The answer to your specific question is no. You can't have two states or otherwise use pack two different ways in the same parent. \nHowever, what I think you want to accomplish is simple. Use the built-in features of the canvas to create an image item that is part of the canvas, then pack things into the canvas as if it were a frame. \nYou can accomplish a similar thing by creating a label widget with an image, then pack your other widgets into the label.\nOne advantage to using a canvas is you can easily tile an image to fill the whole canvas with a repeating background image so as the window grows the image will continue to fill the window (of course you can just use a sufficiently large original image...)\n",
"I believe that Bryan's answer is probably the best general solution. However, you may also want to look at the place geometry manager. The place geometry manager lets you specify the exact size and position of the widget... which can get tedious quickly, but will get the job done.\n",
"\n... turned out to be unworkable because I wanted to add labels and more canvases to it, but I can't find any way to make their backgrounds transparent\n\nIf it is acceptable to load an additional extension, take a look at Tkzinc. From the web site, \n\nTkzinc (historically called Zinc) widget is very similar to the Tk Canvas in that they both support structured graphics. Like the Canvas, Tkzinc implements items used to display graphical entities. Those items can be manipulated and bindings can be associated with them to implement interaction behaviors. But unlike the Canvas, Tkzinc can structure the items in a hierarchy, has support for scaling and rotation, clipping can be set for sub-trees of the item hierarchy, supports muti-contour curves. It also provides advanced rendering with the help of OpenGL, such as color gradient, antialiasing, transparencies and a triangles item. \n\nI'm currently using it on a tcl project and am quite pleased with the results. Extensions for tcl, perl, and python are available.\n",
"Not without swapping widget trees in and out, which I don't think can be done cleanly with Tk. Other toolkits can do this a little more elegantly.\n\nCOM/VB/MFC can do this with an ActiveX control - you can hide/show multiple ActiveX controls in the same region. Any of the containers will let you do this by changing the child around. If you're doing a windows-specific program you may be able to accomplish it this way.\nQT will also let you do this in a similar manner.\nGTK is slightly harder.\n\n"
] | [
2,
1,
1,
0
] | [] | [] | [
"geometry",
"pack",
"python",
"tkinter"
] | stackoverflow_0000112263_geometry_pack_python_tkinter.txt |
Q:
Which is more pythonic, factory as a function in a module, or as a method on the class it creates?
I have some Python code that creates a Calendar object based on parsed VEvent objects from an iCalendar file.
The calendar object just has a method that adds events as they get parsed.
Now I want to create a factory function that creates a calendar from a file object, path, or URL.
I've been using the iCalendar python module, which implements a factory function as a class method directly on the Class that it returns an instance of:
cal = icalendar.Calendar.from_string(data)
From what little I know about Java, this is a common pattern in Java code, though I seem to find more references to a factory method being on a different class than the class you actually want to instantiate instances from.
The question is, is this also considered Pythonic ? Or is it considered more pythonic to just create a module-level method as the factory function ?
A:
[Note. Be very cautious about separating "Calendar" a collection of events, and "Event" - a single event on a calendar. In your question, it seems like there could be some confusion.]
There are many variations on the Factory design pattern.
A stand-alone convenience function (e.g., calendarMaker(data))
A separate class (e.g., CalendarParser) which builds your target class (Calendar).
A class-level method (e.g. Calendar.from_string) method.
These have different purposes. All are Pythonic, the questions are "what do you mean?" and "what's likely to change?" Meaning is everything; change is important.
Convenience functions are Pythonic. Languages like Java can't have free-floating functions; you must wrap a lonely function in a class. Python allows you to have a lonely function without the overhead of a class. A function is relevant when your constructor has no state changes or alternate strategies or any memory of previous actions.
Sometimes folks will define a class and then provide a convenience function that makes an instance of the class, sets the usual parameters for state and strategy and any other configuration, and then calls the single relevant method of the class. This gives you both the statefulness of class plus the flexibility of a stand-alone function.
The class-level method pattern is used, but it has limitations. One, it's forced to rely on class-level variables. Since these can be confusing, a complex constructor as a static method runs into problems when you need to add features (like statefulness or alternative strategies.) Be sure you're never going to expand the static method.
Two, it's more-or-less irrelevant to the rest of the class methods and attributes. This kind of from_string is just one of many alternative encodings for your Calendar objects. You might have a from_xml, from_JSON, from_YAML and on and on. None of this has the least relevance to what a Calendar IS or what it DOES. These methods are all about how a Calendar is encoded for transmission.
What you'll see in the mature Python libraries is that factories are separate from the things they create. Encoding (as strings, XML, JSON, YAML) is subject to a great deal of more-or-less random change. The essential thing, however, rarely changes.
Separate the two concerns. Keep encoding and representation as far away from state and behavior as you can.
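A sketch of that separation, with hypothetical names (the actual parsing is elided):
class Calendar(object):
    def __init__(self):
        self.events = []
    def add_event(self, event):
        self.events.append(event)

def calendar_from_string(data):
    # Encoding concerns live here, outside the Calendar class.
    cal = Calendar()
    for event in parse_vevents(data):   # parse_vevents is assumed, not shown
        cal.add_event(event)
    return cal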
A:
It's Pythonic not to fixate on esoteric differences in some pattern you read somewhere and now want to use everywhere, like the factory pattern.
Most of the time when you would reach for a @staticmethod it's probably better to use a module function, except when you put multiple classes in one module and each has a different implementation of the same interface; then a @staticmethod is better.
Ultimately, whether you create your instances with a @staticmethod or a module function makes little difference.
I'd probably use the initializer ( __init__ ) of a class because one of the more accepted "patterns" in python is that the factory for a class is the class initialization.
A:
IMHO a module-level method is a cleaner solution. It hides behind the Python module system that gives it a unique namespace prefix, something the "factory pattern" is commonly used for.
A:
The factory pattern has its own strengths and weaknesses. However, choosing one way to create instances usually has little pragmatic effect on your code.
A:
A staticmethod rarely has value, but a classmethod may be useful. It depends on what you want the class and the factory function to actually do.
A factory function in a module would always make an instance of the 'right' type (where 'right' in your case is the 'Calendar' class always, but you might also make it dependant on the contents of what it is creating the instance out of.)
Use a classmethod if you wish to make it dependant not on the data, but on the class you call it on. A classmethod is like a staticmethod in that you can call it on the class, without an instance, but it receives the class it was called on as first argument. This allows you to actually create an instance of that class, which may be a subclass of the original class. An example of a classmethod is dict.fromkeys(), which creates a dict from a list of keys and a single value (defaulting to None.) Because it's a classmethod, when you subclass dict you get the 'fromkeys' method entirely for free. Here's an example of how one could write dict.fromkeys() oneself:
class dict_with_fromkeys(dict):
@classmethod
def fromkeys(cls, keys, value=None):
self = cls()
for key in keys:
self[key] = value
return self
| Which is more pythonic, factory as a function in a module, or as a method on the class it creates? | I have some Python code that creates a Calendar object based on parsed VEvent objects from and iCalendar file.
The calendar object just has a method that adds events as they get parsed.
Now I want to create a factory function that creates a calendar from a file object, path, or URL.
I've been using the iCalendar python module, which implements a factory function as a class method directly on the Class that it returns an instance of:
cal = icalendar.Calendar.from_string(data)
From what little I know about Java, this is a common pattern in Java code, though I seem to find more references to a factory method being on a different class than the class you actually want to instantiate instances from.
The question is, is this also considered Pythonic ? Or is it considered more pythonic to just create a module-level method as the factory function ?
| [
"[Note. Be very cautious about separating \"Calendar\" a collection of events, and \"Event\" - a single event on a calendar. In your question, it seems like there could be some confusion.]\nThere are many variations on the Factory design pattern.\n\nA stand-alone convenience function (e.g., calendarMaker(data))\nA separate class (e.g., CalendarParser) which builds your target class (Calendar).\nA class-level method (e.g. Calendar.from_string) method.\n\nThese have different purposes. All are Pythonic, the questions are \"what do you mean?\" and \"what's likely to change?\" Meaning is everything; change is important.\nConvenience functions are Pythonic. Languages like Java can't have free-floating functions; you must wrap a lonely function in a class. Python allows you to have a lonely function without the overhead of a class. A function is relevant when your constructor has no state changes or alternate strategies or any memory of previous actions. \nSometimes folks will define a class and then provide a convenience function that makes an instance of the class, sets the usual parameters for state and strategy and any other configuration, and then calls the single relevant method of the class. This gives you both the statefulness of class plus the flexibility of a stand-alone function.\nThe class-level method pattern is used, but it has limitations. One, it's forced to rely on class-level variables. Since these can be confusing, a complex constructor as a static method runs into problems when you need to add features (like statefulness or alternative strategies.) Be sure you're never going to expand the static method.\nTwo, it's more-or-less irrelevant to the rest of the class methods and attributes. This kind of from_string is just one of many alternative encodings for your Calendar objects. You might have a from_xml, from_JSON, from_YAML and on and on. None of this has the least relevance to what a Calendar IS or what it DOES. These methods are all about how a Calendar is encoded for transmission.\nWhat you'll see in the mature Python libraries is that factories are separate from the things they create. Encoding (as strings, XML, JSON, YAML) is subject to a great deal of more-or-less random change. The essential thing, however, rarely changes.\nSeparate the two concerns. Keep encoding and representation as far away from state and behavior as you can.\n",
"It's pythonic not to think about esoteric difference in some pattern you read somewhere and now want to use everywhere, like the factory pattern.\nMost of the time you would think of a @staticmethod as a solution it's probably better to use a module function, except when you stuff multiple classes in one module and each has a different implementation of the same interface, then it's better to use a @staticmethod\nUltimately weather you create your instances by a @staticmethod or by module function makes little difference.\nI'd probably use the initializer ( __init__ ) of a class because one of the more accepted \"patterns\" in python is that the factory for a class is the class initialization.\n",
"IMHO a module-level method is a cleaner solution. It hides behind the Python module system that gives it a unique namespace prefix, something the \"factory pattern\" is commonly used for. \n",
"The factory pattern has its own strengths and weaknesses. However, choosing one way to create instances usually has little pragmatic effect on your code.\n",
"A staticmethod rarely has value, but a classmethod may be useful. It depends on what you want the class and the factory function to actually do.\nA factory function in a module would always make an instance of the 'right' type (where 'right' in your case is the 'Calendar' class always, but you might also make it dependant on the contents of what it is creating the instance out of.)\nUse a classmethod if you wish to make it dependant not on the data, but on the class you call it on. A classmethod is like a staticmethod in that you can call it on the class, without an instance, but it receives the class it was called on as first argument. This allows you to actually create an instance of that class, which may be a subclass of the original class. An example of a classmethod is dict.fromkeys(), which creates a dict from a list of keys and a single value (defaulting to None.) Because it's a classmethod, when you subclass dict you get the 'fromkeys' method entirely for free. Here's an example of how one could write dict.fromkeys() oneself:\nclass dict_with_fromkeys(dict):\n @classmethod\n def fromkeys(cls, keys, value=None):\n self = cls()\n for key in keys:\n self[key] = value\n return self\n\n"
] | [
13,
6,
2,
0,
0
] | [] | [] | [
"factory",
"python"
] | stackoverflow_0000115764_factory_python.txt |
Q:
How do you create an osx application/dmg from a python package?
I want to create a mac osx application from python package and then put it in a disk image.
Because I load some resources out of the package, the package should not reside in a zip file.
The resulting disk image should display the background picture to "drag here -> applications" for installation.
A:
I don't know the correct way to do it, but this manual method is the approach I've used for simple scripts, which seems to have performed suitably.
I'll assume that whatever directory I'm in, the Python files for my program are in the relative src/ directory, and that the file I want to execute (which has the proper shebang and execute permissions) is named main.py.
$ mkdir -p MyApplication.app/Contents/MacOS
$ mv src/* MyApplication.app/Contents/MacOS
$ cd MyApplication.app/Contents/MacOS
$ mv main.py MyApplication
At this point we have an application bundle which, as far as I know, should work on any Mac OS system with Python installed (which I believe it ships with by default). It doesn't have an icon or anything; that requires adding more metadata to the package, which is unnecessary for my purposes and which I'm not familiar with.
To create the drag-and-drop installer is quite simple. Use Disk Utility to create a New Disk Image of approximately the size you require to store your application. Open it up, copy your application and an alias of /Applications to the drive, then use View Options to position them as you want.
The drag-and-drop message is just a background of the disk image, which you can also specify in View Options. I haven't done it before, but I'd assume that after you whip up an image in your editor of choice you could copy it over, set it as the background and then use chflags hidden to prevent it from cluttering up your nice window.
I know these aren't the clearest, simplest or most detailed instructions out there, but I hope somebody may find them useful.
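For a less manual route, py2app (a third-party tool) builds .app bundles from Python packages; a minimal setup script looks roughly like this (run with python setup.py py2app -- the entry-point and resource names here are illustrative):
from setuptools import setup

setup(
    app=['main.py'],                                   # your entry point
    options={'py2app': {'resources': ['data/']}},      # resources kept outside any zip
    setup_requires=['py2app'],
)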
| How do you create an osx application/dmg from a python package? | I want to create a mac osx application from python package and then put it in a disk image.
Because I load some resources out of the package, the package should not reside in a zip file.
The resulting disk image should display the background picture to "drag here -> applications" for installation.
| [
"I don't know the correct way to do it, but this manual method is the approach I've used for simple scripts which seems to have preformed suitably.\nI'll assume that whatever directory I'm in, the Python files for my program are in the relative src/ directory, and that the file I want to execute (which has the proper shebang and execute permissions) is named main.py.\n\n$ mkdir -p MyApplication.app/Contents/MacOS\n$ mv src/* MyApplication.app/Contents/MacOS\n$ cd MyApplication.app/Contents/MacOS\n$ mv main.py MyApplication\n\nAt this point we have an application bundle which, as far as I know, should work on any Mac OS system with Python installed (which I think it has by default). It doesn't have an icon or anything, that requires adding some more metadata to the package which is unnecessary for my purposes and I'm not familiar with.\nTo create the drag-and-drop installer is quite simple. Use Disk Utility to create a New Disk Image of approximately the size you require to store your application. Open it up, copy your application and an alias of /Applications to the drive, then use View Options to position them as you want.\nThe drag-and-drop message is just a background of the disk image, which you can also specify in View Options. I haven't done it before, but I'd assume that after you whip up an image in your editor of choice you could copy it over, set it as the background and then use chflags hidden to prevent it from cluttering up your nice window.\nI know these aren't the clearest, simplest or most detailed instructions out there, but I hope somebody may find them useful.\n"
] | [
7
] | [] | [] | [
"macos",
"packaging",
"python"
] | stackoverflow_0000116657_macos_packaging_python.txt |
Q:
Python and POST data
In PHP I simply write:
$bob = $_POST['bob'];
How do I do the same in Python?
And yes, I do normally check that it exists etc, I'm just stripping it down specifically to the functionality I am after.
Edit: I am not using a framework
A:
The simplest method is the 'cgi' module:
import cgi
data = cgi.FieldStorage()
data['bob']
But the context you are executing in (frameworks you're using, WSGI or even (heaven forbid) mod_python) may have different, more efficient or more direct methods of access.
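To get the PHP-like behavior of a default when the field is missing (rather than a KeyError from data['bob']), FieldStorage also provides getvalue:
import cgi

form = cgi.FieldStorage()
bob = form.getvalue('bob')        # None if 'bob' was not posted
bob = form.getvalue('bob', '')    # or supply an explicit default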
| Python and POST data | In PHP I simply write:
$bob = $_POST['bob'];
How do I do the same in Python?
And yes, I do normally check that it exists etc, I'm just stripping it down specifically to the functionality I am after.
Edit: I am not using a framework
| [
"The simplest method is the 'cgi' module:\nimport cgi\ndata = cgi.FieldStorage()\ndata['bob']\n\nBut the context you are executing in (frameworks you're using, WSGI or even (heaven forbid) mod_python) may have different, more efficient or more direct methods of access.\n"
] | [
10
] | [] | [] | [
"http",
"post",
"python"
] | stackoverflow_0000117167_http_post_python.txt |
Q:
What is the scope for imported classes in python?
Please excuse the vague title. If anyone has a suggestion, please let me know! Also please retag with more appropriate tags!
The Problem
I want to have an instance of an imported class be able to view things in the scope (globals, locals) of the importer. Since I'm not sure of the exact mechanism at work here, I can describe it much better with snippets than words.
## File 1
def f1(): print "go f1!"
class C1(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
Then run this code from an iteractive session, there there will be lots of NameErrors
## interactive
class C2(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
def f2():
print "go f2!"
from file1 import C1
import file1
C1().do_eval('file1.f1()')
C1().do_eval('f1()')
C1().do_eval('f2()')
file1.C1().do_eval('file1.f1()')
file1.C1().do_eval('f1()')
file1.C1().do_eval('f2()')
C2().do_eval('f2()')
C2().do_eval('file1.f1()')
C2().do_eval('f1()')
Is there a common idiom / pattern for this sort of task? Am I barking up the wrong tree entirely?
A:
In this example, you can simply hand over functions as objects to the methods in C1:
>>> class C1(object):
>>> def eval(self, x):
>>> x()
>>>
>>> def f2(): print "go f2"
>>> c = C1()
>>> c.eval(f2)
go f2
In Python, you can pass functions and classes to other methods and invoke/create them there.
If you want to actually evaluate a code string, you have to specify the environment, as already mentioned by Thomas.
Your module from above, slightly changed:
## File 1
def f1(): print "go f1!"
class C1(object):
def do_eval(self, x, e_globals = globals(), e_locals = locals()):
eval(x, e_globals, e_locals)
Now, in the interactive interpreter:
>>> def f2():
>>> print "go f2!"
>>> from file1 import * # 1
>>> C1().do_eval("f2()") # 2
NameError: name 'f2' is not defined
>>> C1().do_eval("f2()", globals(), locals()) #3
go f2!
>>> C1().do_eval("f1()", globals(), locals()) #4
go f1!
Some annotations
Here, we insert all objects from file1 into this module's namespace
f2 is not in the namespace of file1, therefore we get a NameError
Now we pass the environment explictly, and the code can be evaluated
f1 is in the namespace of this module, because we imported it
Edit: Added code sample on how to explicitly pass environment for eval.
A:
Functions are always executed in the scope they are defined in, as are methods and class bodies. They are never executed in another scope. Because importing is just another assignment statement, and everything in Python is a reference, the functions, classes and modules don't even know where they are imported to.
You can do two things: explicitly pass the 'environment' you want them to use, or use stack hackery to access their caller's namespace. The former is vastly preferred over the latter, as it's not as implementation-dependent and fragile as the latter.
You may wish to look at the string.Template class, which tries to do something similar.
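For completeness, the "stack hackery" route looks roughly like this (a sketch -- CPython-specific and fragile, as warned above):
import sys

def eval_in_caller(expr):
    frame = sys._getframe(1)   # the caller's frame
    return eval(expr, frame.f_globals, frame.f_locals)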
| What is the scope for imported classes in python? | Please excuse the vague title. If anyone has a suggestion, please let me know! Also please retag with more appropriate tags!
The Problem
I want to have an instance of an imported class be able to view things in the scope (globals, locals) of the importer. Since I'm not sure of the exact mechanism at work here, I can describe it much better with snippets than words.
## File 1
def f1(): print "go f1!"
class C1(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
Then run this code from an iteractive session, there there will be lots of NameErrors
## interactive
class C2(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
def f2():
print "go f2!"
from file1 import C1
import file1
C1().do_eval('file1.f1()')
C1().do_eval('f1()')
C1().do_eval('f2()')
file1.C1().do_eval('file1.f1()')
file1.C1().do_eval('f1()')
file1.C1().do_eval('f2()')
C2().do_eval('f2()')
C2().do_eval('file1.f1()')
C2().do_eval('f1()')
Is there a common idiom / pattern for this sort of task? Am I barking up the wrong tree entirely?
| [
"In this example, you can simply hand over functions as objects to the methods in C1:\n>>> class C1(object):\n>>> def eval(self, x):\n>>> x()\n>>>\n>>> def f2(): print \"go f2\"\n>>> c = C1()\n>>> c.eval(f2)\ngo f2\n\nIn Python, you can pass functions and classes to other methods and invoke/create them there.\nIf you want to actually evaluate a code string, you have to specify the environment, as already mentioned by Thomas.\nYour module from above, slightly changed:\n## File 1\ndef f1(): print \"go f1!\"\n\nclass C1(object):\n def do_eval(self, x, e_globals = globals(), e_locals = locals()):\n eval(x, e_globals, e_locals)\n\nNow, in the interactive interpreter:\n>>> def f2():\n>>> print \"go f2!\"\n>>> from file1 import * # 1\n>>> C1().do_eval(\"f2()\") # 2\nNameError: name 'f2' is not defined\n\n>>> C1().do_eval(\"f2()\", globals(), locals()) #3\ngo f2!\n>>> C1().do_eval(\"f1()\", globals(), locals()) #4\ngo f1!\n\nSome annotations\n\nHere, we insert all objects from file1 into this module's namespace\nf2 is not in the namespace of file1, therefore we get a NameError\nNow we pass the environment explictly, and the code can be evaluated\nf1 is in the namespace of this module, because we imported it\n\nEdit: Added code sample on how to explicitly pass environment for eval.\n",
"Functions are always executed in the scope they are defined in, as are methods and class bodies. They are never executed in another scope. Because importing is just another assignment statement, and everything in Python is a reference, the functions, classes and modules don't even know where they are imported to.\nYou can do two things: explicitly pass the 'environment' you want them to use, or use stack hackery to access their caller's namespace. The former is vastly preferred over the latter, as it's not as implementation-dependent and fragile as the latter.\nYou may wish to look at the string.Template class, which tries to do something similar.\n"
] | [
2,
1
] | [] | [] | [
"eval",
"import",
"python",
"scope"
] | stackoverflow_0000117127_eval_import_python_scope.txt |
Q:
What's the Name of the Python Module that Formats arbitrary Text to nicely looking HTML?
A while ago I came across a Python library that formats regular text to HTML similar to Markdown, reStructuredText and Textile, just that it had no syntax at all. It detected indentations, quotes, links and newlines/paragraphs only.
Unfortunately I lost the name of the library and was unable to Google it. Anyone any ideas?
Edit: reStructuredText aka rst == docutils. That's not what I'm looking for :)
A:
Okay. I found it now. It's called PottyMouth.
A:
Markdown in python is a python implementation of the perl based markdown utility.
Markdown converts various forms of structured text to valid html, and one of the supported forms is just plain ascii. Use is pretty straight forward.
python markdown.py input_file.txt > output_file.html
Markdown can be easily called as a module too:
import markdown
html = markdown.markdown(your_text_string)
A:
Sphinx is a documentation generator using reStructuredText. It's quite nice, although I haven't used it personally.
The website Hazel Tree, which compiles python text uses Sphinx, and so does the new Python documentation.
| What's the Name of the Python Module that Formats arbitrary Text to nicely looking HTML? | A while ago I came across a Python library that formats regular text to HTML similar to Markdown, reStructuredText and Textile, just that it had no syntax at all. It detected indentatations, quotes, links and newlines/paragraphs only.
Unfortunately I lost the name of the library and was unable to Google it. Anyone any ideas?
Edit: reStructuredText aka rst == docutils. That's not what I'm looking for :)
| [
"Okay. I found it now. It's called PottyMouth.\n",
"Markdown in python is a python implementation of the perl based markdown utility.\nMarkown converts various forms of structured text to valid html, and one of the supported forms is just plain ascii. Use is pretty straight forward.\npython markdown.py input_file.txt > output_file.html\n\nMarkdown can be easily called as a module too:\nimport markdown\nhtml = markdown.markdown(your_text_string)\n\n",
"Sphinx is a documentation generator using reStructuredText. It's quite nice, although I haven't used it personally.\nThe website Hazel Tree, which compiles python text uses Sphinx, and so does the new Python documentation.\n"
] | [
14,
1,
0
] | [] | [] | [
"formatting",
"markup",
"python"
] | stackoverflow_0000117477_formatting_markup_python.txt |
Q:
Best practices for manipulating database result sets in Python?
I am writing a simple Python web application that consists of several pages of business data formatted for the iPhone. I'm comfortable programming Python, but I'm not very familiar with Python "idiom," especially regarding classes and objects. Python's object oriented design differs somewhat from other languages I've worked with. So, even though my application is working, I'm curious whether there is a better way to accomplish my goals.
Specifics: How does one typically implement the request-transform-render database workflow in Python? Currently, I am using pyodbc to fetch data, copying the results into attributes on an object, performing some calculations and merges using a list of these objects, then rendering the output from the list of objects. (Sample code below, SQL queries redacted.) Is this sane? Is there a better way? Are there any specific "gotchas" I've stumbled into in my relative ignorance of Python? I'm particularly concerned about how I've implemented the list of rows using the empty "Record" class.
class Record(object):
pass
def calculate_pnl(records, node_prices):
for record in records:
try:
# fill RT and DA prices from the hash retrieved above
if hasattr(record, 'sink') and record.sink:
record.da = node_prices[record.sink][0] - node_prices[record.id][0]
record.rt = node_prices[record.sink][1] - node_prices[record.id][1]
else:
record.da = node_prices[record.id][0]
record.rt = node_prices[record.id][1]
# calculate dependent values: RT-DA and PNL
record.rtda = record.rt - record.da
record.pnl = record.rtda * record.mw
except:
print sys.exc_info()
def map_rows(cursor, mappings, callback=None):
records = []
for row in cursor:
record = Record()
for field, attr in mappings.iteritems():
setattr(record, attr, getattr(row, field, None))
if not callback or callback(record):
records.append(record)
return records
def get_positions(cursor):
# get the latest position time
cursor.execute("SELECT latest data time")
time = cursor.fetchone().time
hour = eelib.util.get_hour_ending(time)
# fetch the current positions
cursor.execute("SELECT stuff FROM atable", (hour))
# read the rows
nodes = {}
def record_callback(record):
if abs(record.mw) > 0:
if record.id: nodes[record.id] = None
return True
else:
return False
records = util.map_rows(cursor, {
'id': 'id',
'name': 'name',
'mw': 'mw'
}, record_callback)
# query prices
for node_id in nodes:
# RT price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, time, time)).fetchone()
rt5 = row.lmp if row else None
# DA price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, hour, hour)).fetchone()
da = row.da_lmp if row else None
# update the hash value
nodes[node_id] = (da, rt5)
# calculate the position pricing
calculate_pnl(records, nodes)
# sort
records.sort(key=lambda r: r.name)
# return the records
return records
A:
The empty Record class and the free-floating function that (generally) applies to an individual Record is a hint that you haven't designed your class properly.
class Record( object ):
"""Assuming rtda and pnl must exist."""
def __init__( self ):
self.da= 0
self.rt= 0
self.rtda= 0 # or whatever
self.pnl= None #
self.sink = None # Not clear what this is
def setPnl( self, node_prices ):
# fill RT and DA prices from the hash retrieved above
# calculate dependent values: RT-DA and PNL
Now, your calculate_pnl( records, node_prices ) is simpler and uses the object properly.
def calculate_pnl( records, node_prices ):
for record in records:
record.setPnl( node_prices )
The point isn't to trivially refactor the code in small ways.
The point is this: A Class Encapsulates Responsibility.
Yes, an empty-looking class is usually a problem. It means the responsibilities are scattered somewhere else.
A similar analysis holds for the collection of records. This is more than a simple list, since the collection -- as a whole -- has operations it performs.
The "Request-Transform-Render" isn't quite right. You have a Model (the Record class). Instances of the Model get built (possibly because of a Request.) The Model objects are responsible for their own state transformations and updates. Perhaps they get displayed (or rendered) by some object that examines their state.
It's that "Transform" step that often violates good design by scattering responsibility all over the place. "Transform" is a hold-over from non-object design, where responsibility was a nebulous concept.
A:
Have you considered using an ORM? SQLAlchemy is pretty good, and Elixir makes it beautiful. It can really reduce the ammount of boilerplate code needed to deal with databases. Also, a lot of the gotchas mentioned have already shown up and the SQLAlchemy developers dealt with them.
A:
Depending on how much you want to do with the data you may not need to populate an intermediate object. The cursor's header data structure will let you get the column names - a bit of introspection will let you make a dictionary with col-name:value pairs for the row.
You can pass the dictionary to the % operator. The docs for the odbc module will explain how to get at the column metadata.
This snippet of code to shows the application of the % operator in this manner.
>>> a={'col1': 'foo', 'col2': 'bar', 'col3': 'wibble'}
>>> 'Col1=%(col1)s, Col2=%(col2)s, Col3=%(col3)s' % a
'Col1=foo, Col2=bar, Col3=wibble'
>>>
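Putting that introspection together, here is a sketch that turns DB-API rows into dictionaries without an intermediate Record class (cursor.description is standard DB-API 2.0 metadata):
def rows_as_dicts(cursor):
    columns = [d[0] for d in cursor.description]   # column names
    return [dict(zip(columns, row)) for row in cursor]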
| Best practices for manipulating database result sets in Python? | I am writing a simple Python web application that consists of several pages of business data formatted for the iPhone. I'm comfortable programming Python, but I'm not very familiar with Python "idiom," especially regarding classes and objects. Python's object oriented design differs somewhat from other languages I've worked with. So, even though my application is working, I'm curious whether there is a better way to accomplish my goals.
Specifics: How does one typically implement the request-transform-render database workflow in Python? Currently, I am using pyodbc to fetch data, copying the results into attributes on an object, performing some calculations and merges using a list of these objects, then rendering the output from the list of objects. (Sample code below, SQL queries redacted.) Is this sane? Is there a better way? Are there any specific "gotchas" I've stumbled into in my relative ignorance of Python? I'm particularly concerned about how I've implemented the list of rows using the empty "Record" class.
class Record(object):
pass
def calculate_pnl(records, node_prices):
for record in records:
try:
# fill RT and DA prices from the hash retrieved above
if hasattr(record, 'sink') and record.sink:
record.da = node_prices[record.sink][0] - node_prices[record.id][0]
record.rt = node_prices[record.sink][1] - node_prices[record.id][1]
else:
record.da = node_prices[record.id][0]
record.rt = node_prices[record.id][1]
# calculate dependent values: RT-DA and PNL
record.rtda = record.rt - record.da
record.pnl = record.rtda * record.mw
except:
print sys.exc_info()
def map_rows(cursor, mappings, callback=None):
records = []
for row in cursor:
record = Record()
for field, attr in mappings.iteritems():
setattr(record, attr, getattr(row, field, None))
if not callback or callback(record):
records.append(record)
return records
def get_positions(cursor):
# get the latest position time
cursor.execute("SELECT latest data time")
time = cursor.fetchone().time
hour = eelib.util.get_hour_ending(time)
# fetch the current positions
cursor.execute("SELECT stuff FROM atable", (hour))
# read the rows
nodes = {}
def record_callback(record):
if abs(record.mw) > 0:
if record.id: nodes[record.id] = None
return True
else:
return False
records = util.map_rows(cursor, {
'id': 'id',
'name': 'name',
'mw': 'mw'
}, record_callback)
# query prices
for node_id in nodes:
# RT price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, time, time)).fetchone()
rt5 = row.lmp if row else None
# DA price
row = cursor.execute("SELECT price WHERE ? ? ?", (node_id, hour, hour)).fetchone()
da = row.da_lmp if row else None
# update the hash value
nodes[node_id] = (da, rt5)
# calculate the position pricing
calculate_pnl(records, nodes)
# sort
records.sort(key=lambda r: r.name)
# return the records
return records
| [
"The empty Record class and the free-floating function that (generally) applies to an individual Record is a hint that you haven't designed your class properly.\nclass Record( object ):\n \"\"\"Assuming rtda and pnl must exist.\"\"\"\n def __init__( self ):\n self.da= 0\n self.rt= 0\n self.rtda= 0 # or whatever\n self.pnl= None # \n self.sink = None # Not clear what this is\n def setPnl( self, node_prices ):\n # fill RT and DA prices from the hash retrieved above\n # calculate dependent values: RT-DA and PNL\n\nNow, your calculate_pnl( records, node_prices ) is simpler and uses the object properly.\ndef calculate_pnl( records, node_prices ):\n for record in records:\n record.setPnl( node_prices )\n\nThe point isn't to trivially refactor the code in small ways.\nThe point is this: A Class Encapsulates Responsibility.\nYes, an empty-looking class is usually a problem. It means the responsibilities are scattered somewhere else.\nA similar analysis holds for the collection of records. This is more than a simple list, since the collection -- as a whole -- has operations it performs.\nThe \"Request-Transform-Render\" isn't quite right. You have a Model (the Record class). Instances of the Model get built (possibly because of a Request.) The Model objects are responsible for their own state transformations and updates. Perhaps they get displayed (or rendered) by some object that examines their state.\nIt's that \"Transform\" step that often violates good design by scattering responsibility all over the place. \"Transform\" is a hold-over from non-object design, where responsibility was a nebulous concept.\n",
"Have you considered using an ORM? SQLAlchemy is pretty good, and Elixir makes it beautiful. It can really reduce the ammount of boilerplate code needed to deal with databases. Also, a lot of the gotchas mentioned have already shown up and the SQLAlchemy developers dealt with them.\n",
"Depending on how much you want to do with the data you may not need to populate an intermediate object. The cursor's header data structure will let you get the column names - a bit of introspection will let you make a dictionary with col-name:value pairs for the row.\nYou can pass the dictionary to the % operator. The docs for the odbc module will explain how to get at the column metadata.\nThis snippet of code to shows the application of the % operator in this manner.\n>>> a={'col1': 'foo', 'col2': 'bar', 'col3': 'wibble'}\n>>> 'Col1=%(col1)s, Col2=%(col2)s, Col3=%(col3)s' % a\n'Col1=foo, Col2=bar, Col3=wibble'\n>>> \n\n"
] | [
2,
1,
0
] | [
"Using a ORM for an iPhone app might be a bad idea because of performance issues, you want your code to be as fast as possible. So you can't avoid boilerplate code. If you are considering a ORM, besides SQLAlchemy I'd recommend Storm.\n"
] | [
-2
] | [
"database",
"python"
] | stackoverflow_0000116894_database_python.txt |
Q:
How do I create a non-standard type with SOAPpy?
I am calling a WSDL web service from Python using SOAPpy. The call I need to make is to the method Auth_login. This has 2 arguments - the first, a string being the API key; the second, a custom type containing username and password. The custom type is called Auth_credentialsData which contains 2 values as strings - one for the username and one for the password. How can I create this custom type using SOAPpy? I tried passing a list and a dictionary, none of which work.
Code so far:
from SOAPpy import WSDL
wsdlUrl = 'https://ws.pingdom.com/soap/PingdomAPI.wsdl'
client = WSDL.Proxy(wsdlUrl)
Tried both:
credentials = ['email@example.com', 'password']
client.Auth_login('key', credentials)
and
credentials = {'username': 'email@example.com', 'password': 'passsword'}
client.Auth_login('key', credentials)
both of which give an authentication failed error.
A:
The better method is to use the ZSI soap module which allows you to take a WDSL file and turn it into classes and methods that you can then use to call it. The online documentation is on their website but the latest documentation is more easily found in the source package. If you install in Debian/Ubuntu (package name python-zsi) the documentation is in /usr/share/doc/python-zsi in a pair of PDFs you can find in there.
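If you want to stay with SOAPpy, its structType is the usual way to build a named compound argument; a sketch (untested against the live Pingdom service, so treat the element name as an assumption):
from SOAPpy import Types

credentials = Types.structType(name='credentials')
credentials.username = 'email@example.com'
credentials.password = 'password'
client.Auth_login('key', credentials)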
| How do I create a non-standard type with SOAPpy? | I am calling a WSDL web service from Python using SOAPpy. The call I need to make is to the method Auth_login. This has 2 arguments - the first, a string being the API key; the second, a custom type containing username and password. The custom type is called Auth_credentialsData which contains 2 values as stings - one for the username and one for the password. How can I create this custom type using SOAPpy? I tried passing a list and a dictionary, none of which work.
Code so far:
from SOAPpy import WSDL
wsdlUrl = 'https://ws.pingdom.com/soap/PingdomAPI.wsdl'
client = WSDL.Proxy(wsdlUrl)
Tried both:
credentials = ['email@example.com', 'password']
client.Auth_login('key', credentials)
and
credentials = {'username': 'email@example.com', 'password': 'passsword'}
client.Auth_login('key', credentials)
both of which give an authentication failed error.
| [
"The better method is to use the ZSI soap module which allows you to take a WDSL file and turn it into classes and methods that you can then use to call it. The online documentation is on their website but the latest documentation is more easily found in the source package. If you install in Debian/Ubuntu (package name python-zsi) the documentation is in /usr/share/doc/python-zsi in a pair of PDFs you can find in there.\n"
] | [
0
] | [] | [] | [
"python",
"soap",
"web_services"
] | stackoverflow_0000118467_python_soap_web_services.txt |
Q:
How to express this Bash command in pure Python
I have this line in a useful Bash script that I haven't managed to translate into Python, where 'a' is a user-input number of days' worth of files to archive:
find ~/podcasts/current -mindepth 2 -mtime +"$a" -exec mv {} ~/podcasts/old \;
I am familiar with the os.name and getpass.getuser for the most general cross-platform elements. I also have this function to generate a list of the full names of all the files in the equivalent of ~/podcasts/current:
def AllFiles(filepath, depth=1, flist=[]):
fpath=os.walk(filepath)
fpath=[item for item in fpath]
while depth < len(fpath):
for item in fpath[depth][-1]:
flist.append(fpath[depth][0]+os.sep+item)
depth+=1
return flist
First off, there must be a better way to do that, any suggestion welcome. Either way, for example, "AllFiles('/users/me/music/itunes/itunes music/podcasts')" gives the relevant list, on Windows. Presumably I should be able to go over this list and call os.stat(list_member).st_mtime and move all the stuff older than a certain number in days to the archive; I am a little stuck on that bit.
Of course, anything with the concision of the bash command would also be illuminating.
A:
import os
import shutil
from os import path
from os.path import join, getmtime
from time import time
archive = "bak"
current = "cur"
def archive_old_versions(days = 3):
for root, dirs, files in os.walk(current):
for name in files:
fullname = join(root, name)
if (getmtime(fullname) < time() - days * 60 * 60 * 24):
shutil.move(fullname, join(archive, name))
A:
import os
import subprocess
# Pass the list directly (no shell=True): with shell=True only the first
# list element would reach the shell, and the tilde is expanded here instead.
subprocess.call(['find', os.path.expanduser('~/podcasts/current'),
                 '-mindepth', '2', '-mtime', '+5',
                 '-exec', 'mv', '{}', os.path.expanduser('~/podcasts/old'), ';'])
That is not a joke. This python script will do exactly what the bash one does.
EDIT: Dropped the backslash on the last param because it is not needed.
A:
That's not a Bash command, it's a find command. If you really want to port it to Python it's possible, but you'll never be able to write a Python version that's as concise. find has been optimized over 20 years to be excellent at manipulating filesystems, while Python is a general-purpose programming language.
A:
import os, stat
os.stat("test")[stat.ST_MTIME]
Will give you the mtime. I suggest fixing those in walk_results[2], and then recursing, calling the function for each dir in walk_results[1].
| How to express this Bash command in pure Python | I have this line in a useful Bash script that I haven't managed to translate into Python, where 'a' is a user-input number of days' worth of files to archive:
find ~/podcasts/current -mindepth 2 -mtime +$a -exec mv {} ~/podcasts/old \;
I am familiar with the os.name and getpass.getuser for the most general cross-platform elements. I also have this function to generate a list of the full names of all the files in the equivalent of ~/podcasts/current:
def AllFiles(filepath, depth=1, flist=[]):
fpath=os.walk(filepath)
fpath=[item for item in fpath]
while depth < len(fpath):
for item in fpath[depth][-1]:
flist.append(fpath[depth][0]+os.sep+item)
depth+=1
return flist
First off, there must be a better way to do that, any suggestion welcome. Either way, for example, "AllFiles('/users/me/music/itunes/itunes music/podcasts')" gives the relevant list, on Windows. Presumably I should be able to go over this list and call os.stat(list_member).st_mtime and move all the stuff older than a certain number in days to the archive; I am a little stuck on that bit.
Of course, anything with the concision of the bash command would also be illuminating.
| [
"import os\nimport shutil\nfrom os import path\nfrom os.path import join, getmtime\nfrom time import time\n\narchive = \"bak\"\ncurrent = \"cur\"\n\ndef archive_old_versions(days = 3):\n for root, dirs, files in os.walk(current):\n for name in files:\n fullname = join(root, name)\n if (getmtime(fullname) < time() - days * 60 * 60 * 24):\n shutil.move(fullname, join(archive, name))\n\n",
"import subprocess\nsubprocess.call(['find', '~/podcasts/current', '-mindepth', '2', '-mtime', '+5',\n '-exec', 'mv', '{}', '~/podcasts/old', ';'], shell=True)\n\nThat is not a joke. This python script will do exactly what the bash one does.\nEDIT: Dropped the backslash on the last param because it is not needed.\n",
"That's not a Bash command, it's a find command. If you really want to port it to Python it's possible, but you'll never be able to write a Python version that's as concise. find has been optimized over 20 years to be excellent at manipulating filesystems, while Python is a general-purpose programming language.\n",
"import os, stat\nos.stat(\"test\")[stat.ST_MTIME]\n\nWill give you the mtime. I suggest fixing those in walk_results[2], and then recursing, calling the function for each dir in walk_results[1].\n"
] | [
5,
3,
2,
0
] | [] | [] | [
"language_comparisons",
"python",
"shell"
] | stackoverflow_0000118591_language_comparisons_python_shell.txt |
Q:
What is wrong with my snap to grid code?
First of all, I'm fairly sure snapping to grid is quite easy; however, I've run into some odd trouble in this situation, and my maths are too weak to work out specifically what is wrong.
Here's the situation
I have an abstract concept of a grid, with Y steps exactly Y_STEP apart (the x steps are working fine so ignore them for now)
The grid is in an abstract coordinate space, and to get things to line up I've got a magic offset in there, let's call it Y_OFFSET
to snap to the grid I've got the following code (python)
def snapToGrid(originalPos, offset, step):
index = int((originalPos - offset) / step) #truncates the remainder away
    return index * step + offset
so I pass the cursor position, Y_OFFSET and Y_STEP into that function and it returns me the nearest floored y position on the grid
That appears to work fine in the original scenario, however when I take into account the fact that the view is scrollable things get a little weird.
Scrolling is made as basic as I can get it, I've got a viewPort that keeps count of the distance scrolled along the Y Axis and just offsets everything that goes through it.
Here's a snippet of the cursor's mouseMotion code:
def mouseMotion(self, event):
pixelPos = event.pos[Y]
odePos = Scroll.pixelPosToOdePos(pixelPos)
self.tool.positionChanged(odePos)
So there's two things to look at there, first the Scroll module's translation from pixel position to the abstract coordinate space, then the tool's positionChanged function which takes the abstract coordinate space value and snaps to the nearest Y step.
Here's the relevant Scroll code
def pixelPosToOdePos(pixelPos):
offsetPixelPos = pixelPos - self.viewPortOffset
return pixelsToOde(offsetPixelPos)
def pixelsToOde(pixels):
return float(pixels) / float(pixels_in_an_ode_unit)
And the tools update code
def positionChanged(self, newPos):
self.snappedPos = snapToGrid(originalPos, Y_OFFSET, Y_STEP)
The last relevant chunk is when the tool goes to render itself. It goes through the Scroll object, which transforms the tool's snapped coordinate space position into an onscreen pixel position, here's the code:
#in Tool
def render(self, screen):
Scroll.render(screen, self.image, self.snappedPos)
#in Scroll
def render(self, screen, image, odePos):
pixelPos = self.odePosToPixelPos(odePos)
screen.blit(image, pixelPos) # screen is a surface from pygame for the curious
def odePosToPixelPos(self, odePos):
offsetPos = odePos + self.viewPortOffset
return odeToPixels(offsetPos)
def odeToPixels(odeUnits):
return int(odeUnits * pixels_in_an_ode_unit)
Whew, that was a long explanation. Hope you're still with me...
The problem I'm now getting is that when I scroll up the drawn image loses alignment with the cursor.
It starts snapping to the Y step exactly 1 step below the cursor.
Additionally it appears to phase in and out of alignment.
At some scrolls it is out by 1 and other scrolls it is spot on.
It's never out by more than 1 and it's always snapping to a valid grid location.
Best guess I can come up with is that somewhere I'm truncating some data in the wrong spot, but no idea where or how it ends up with this behavior.
Anyone familiar with coordinate spaces, scrolling and snapping?
A:
Ok, I'm answering my own question here, as alexk mentioned, using int to truncate was my mistake.
The behaviour I'm after is best modeled by math.floor().
Apologies, the original question does not contain enough information to really work out what the problem is. I didn't have the extra bit of information at that point.
With regards to the typo note, I think I may be using the context in a confusing manner... From the perspective of the positionChanged() function, the parameter is a new position coming in.
From the perspective of the snapToGrid() function the parameter is an original position which is being changed to a snapped position.
The language is like that because part of it is in my event handling code and the other part is in my general services code. I should have changed it for the example
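For concreteness, here is the corrected function described above; math.floor() rounds toward negative infinity, which keeps small and negative positions on the correct grid line:
import math

def snapToGrid(originalPos, offset, step):
    # floor, not int(): int() truncates toward zero, so positions on the
    # negative side of a grid line snapped one whole step too high
    index = int(math.floor((originalPos - offset) / step))
    return index * step + offset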
A:
Do you have a typo in positionChanged() ?
def positionChanged(self, newPos):
self.snappedPos = snapToGrid(newPos, Y_OFFSET, Y_STEP)
I guess you are off by one pixel because of the accuracy problems during float division. Try changing your snapToGrid() to this:
def snapToGrid(originalPos, offset, step):
EPS = 1e-6
index = int((originalPos - offset) / step + EPS) #truncates the remainder away
    return index * step + offset
A:
Thanks for the answer, there may be a typo, but I can't see it...
Unfortunately the change to snapToGrid didn't make a difference, so I don't think that's the issue.
It's not off by one pixel, but rather it's off by Y_STEP. Playing around with it some more I've found that I can't get it to be exact at any point that the screen is scrolled up and also that it happens towards the top of the screen, which I suspect is ODE position zero, so I'm guessing my problem is around small or negative values.
| What is wrong with my snap to grid code? | First of all, I'm fairly sure snapping to grid is fairly easy, however I've run into some odd trouble in this situation and my maths are too weak to work out specifically what is wrong.
Here's the situation
I have an abstract concept of a grid, with Y steps exactly Y_STEP apart (the x steps are working fine so ignore them for now)
The grid is in an abstract coordinate space, and to get things to line up I've got a magic offset in there, let's call it Y_OFFSET
to snap to the grid I've got the following code (python)
def snapToGrid(originalPos, offset, step):
index = int((originalPos - offset) / step) #truncates the remainder away
    return index * step + offset
so I pass the cursor position, Y_OFFSET and Y_STEP into that function and it returns me the nearest floored y position on the grid
That appears to work fine in the original scenario, however when I take into account the fact that the view is scrollable things get a little weird.
Scrolling is made as basic as I can get it, I've got a viewPort that keeps count of the distance scrolled along the Y Axis and just offsets everything that goes through it.
Here's a snippet of the cursor's mouseMotion code:
def mouseMotion(self, event):
pixelPos = event.pos[Y]
odePos = Scroll.pixelPosToOdePos(pixelPos)
self.tool.positionChanged(odePos)
So there's two things to look at there, first the Scroll module's translation from pixel position to the abstract coordinate space, then the tool's positionChanged function which takes the abstract coordinate space value and snaps to the nearest Y step.
Here's the relevant Scroll code
def pixelPosToOdePos(pixelPos):
offsetPixelPos = pixelPos - self.viewPortOffset
return pixelsToOde(offsetPixelPos)
def pixelsToOde(pixels):
return float(pixels) / float(pixels_in_an_ode_unit)
And the tools update code
def positionChanged(self, newPos):
self.snappedPos = snapToGrid(originalPos, Y_OFFSET, Y_STEP)
The last relevant chunk is when the tool goes to render itself. It goes through the Scroll object, which transforms the tool's snapped coordinate space position into an onscreen pixel position, here's the code:
#in Tool
def render(self, screen):
Scroll.render(screen, self.image, self.snappedPos)
#in Scroll
def render(self, screen, image, odePos):
pixelPos = self.odePosToPixelPos(odePos)
screen.blit(image, pixelPos) # screen is a surface from pygame for the curious
def odePosToPixelPos(self, odePos):
offsetPos = odePos + self.viewPortOffset
return odeToPixels(offsetPos)
def odeToPixels(odeUnits):
return int(odeUnits * pixels_in_an_ode_unit)
Whew, that was a long explanation. Hope you're still with me...
The problem I'm now getting is that when I scroll up the drawn image loses alignment with the cursor.
It starts snapping to the Y step exactly 1 step below the cursor.
Additionally it appears to phase in and out of alignment.
At some scrolls it is out by 1 and other scrolls it is spot on.
It's never out by more than 1 and it's always snapping to a valid grid location.
Best guess I can come up with is that somewhere I'm truncating some data in the wrong spot, but no idea where or how it ends up with this behavior.
Anyone familiar with coordinate spaces, scrolling and snapping?
| [
"Ok, I'm answering my own question here, as alexk mentioned, using int to truncate was my mistake. \nThe behaviour I'm after is best modeled by math.floor().\nApologies, the original question does not contain enough information to really work out what the problem is. I didn't have the extra bit of information at that point.\nWith regards to the typo note, I think I may be using the context in a confusing manner... From the perspective of the positionChanged() function, the parameter is a new position coming in.\nFrom the perspective of the snapToGrid() function the parameter is an original position which is being changed to a snapped position.\nThe language is like that because part of it is in my event handling code and the other part is in my general services code. I should have changed it for the example\n",
"Do you have a typo in positionChanged() ?\ndef positionChanged(self, newPos):\n self.snappedPos = snapToGrid(newPos, Y_OFFSET, Y_STEP)\n\nI guess you are off by one pixel because of the accuracy problems during float division. Try changing your snapToGrid() to this:\ndef snapToGrid(originalPos, offset, step):\n EPS = 1e-6\n index = int((originalPos - offset) / step + EPS) #truncates the remainder away\n return index * gap + offset\n\n",
"Thanks for the answer, there may be a typo, but I can't see it...\nUnfortunately the change to snapToGrid didn't make a difference, so I don't think that's the issue.\nIt's not off by one pixel, but rather it's off by Y_STEP. Playing around with it some more I've found that I can't get it to be exact at any point that the screen is scrolled up and also that it happens towards the top of the screen, which I suspect is ODE position zero, so I'm guessing my problem is around small or negative values.\n"
] | [
1,
0,
0
] | [] | [] | [
"graphics",
"grid",
"python"
] | stackoverflow_0000118540_graphics_grid_python.txt |
Q:
Python-passing variable between classes
I'm trying to create a character generation wizard for a game. In one class I calculate the attributes of the character. In a different class, I'm displaying to the user which specialties are available based on the attributes of the character. However, I can't remember how to pass variables between different classes.
Here is an example of what I have:
class BasicInfoPage(wx.wizard.WizardPageSimple):
def __init__(self, parent, title):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
<---snip--->
self.intelligence = self.genAttribs()
class MOS(wx.wizard.WizardPageSimple):
def __init__(self, parent, title):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
def eligibleMOS(self, event):
if self.intelligence >= 12:
self.MOS_list.append("Analyst")
The problem is that I can't figure out how to use the "intelligence" variable from the BasicInfoPage class in the MOS class. I've tried several different things from around the Internet, but nothing seems to work. What am I missing?
Edit I realized after I posted this that I didn't explain it that well. I'm trying to create a computer version of the Twilight 2000 RPG from the 1980s.
I'm using wxPython to create a wizard; the parent class of my classes is the Wizard from wxPython. That wizard will walk a user through the creation of a character, so the Basic Information page (class BasicInfoPage) lets the user give the character's name and "roll" for the character's attributes. That's where the "self.intelligence" comes from.
I'm trying to use the attributes created here for a page further on in the wizard, where the user selects the speciality of the character. The specialities that are available depend on the attributes the character has, e.g. if the intelligence is high enough, the character can be an Intel Analyst.
It's been several years since I've programmed, especially with OOP ideas. That's why I'm confused on how to create what's essentially a global variable with classes and methods.
A:
You may have "Class" and "Instance" confused. It's not clear from your example, so I'll presume that you're using a lot of class definitions and don't have appropriate object instances of those classes.
Classes don't really have usable attribute values. A class is just a common set of definitions for a collection of objects. You should think of of classes as definitions, not actual things.
Instances of classes, "objects", are actual things that have actual attribute values and execute method functions.
You don't pass variables among classes. You pass variables among instances. As a practical matter only instance variables matter. [Yes, there are class variables, but they're a fairly specialized and often confusing thing, best avoided.]
When you create an object (an instance of a class)
b= BasicInfoPage(...)
Then b.intelligence is the value of intelligence for the b instance of BasicInfoPage.
A really common thing is
class MOS( wx.wizard.PageSimple ):
def __init__( self, parent, title, basicInfoPage ):
<snip>
self.basicInfo= basicInfoPage
Now, within MOS methods, you can say self.basicInfo.intelligence because MOS has an object that's a BasicInfoPage available to it.
When you build MOS, you provide it with the instance of BasicInfoPage that it's supposed to use.
someBasicInfoPage= BasicInfoPage( ... )
m= MOS( ..., someBasicInfoPage )
Now, the object m can examine someBasicInfoPage.intelligence
A:
Each page of a Wizard -- by itself -- shouldn't actually be the container for the information you're gathering.
Read up on the Model-View-Control design pattern. Your pages have the View and Control parts of the design. They aren't the data model, however.
You'll be happier if you have a separate object that is "built" by the pages. Each page will set some attributes of that underlying model object. Then, the pages are independent of each other, since the pages all get and set values of this underlying model object.
Since you're building a character, you'd have some class like this
class Character( object ):
def __init__( self ):
self.intelligence= 10
<default values for all attributes.>
Then your various Wizard instances just need to be given the underlying Character object as a place to put and get values.
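A minimal sketch of the wiring, with constructor signatures assumed on top of the wizard code from the question:
character = Character()

# every page gets the same model instance and reads/writes it
page2 = BasicInfoPage(wizard, "Basic Info", character)
page4 = MOS(wizard, "Military Occupational Specialty", character)

# inside BasicInfoPage: self.character.intelligence = self.genAttribs()
# inside MOS:           if self.character.intelligence >= 12: ...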
A:
My problem was indeed the confusion of classes vs. instances. I was trying to do everything via classes without ever creating an actual instance. Plus, I was forcing the "BasicInfoPage" class to do too much work.
Ultimately, I created a new class (BaseAttribs) to hold all the variables I need. I then created an instance of that class when I run the wizard and pass that instance as an argument to the classes that need it, as shown below:
#---Run the wizard
if __name__ == "__main__":
app = wx.PySimpleApp()
wizard = wiz.Wizard(None, -1, "TW2K Character Creation")
    attribs = BaseAttribs()
#---Create each page
page1 = IntroPage(wizard, "Introduction")
page2 = BasicInfoPage(wizard, "Basic Info", attribs)
page3 = Ethnicity(wizard, "Ethnicity")
page4 = MOS(wizard, "Military Occupational Specialty", attribs)
I then used the information S.Lott provided and created individual instances (if that's what it's called) within each class; each class is accessing the same variables though.
Everything works, as far as I can tell. Thanks.
A:
All you need is a reference. It's not really a simple problem that I can give some one-line solution to (other than a simple ugly global that would probably break something else), but one of program structure. You don't magically get access to a variable that was created on another instance of another class. You have to either give the intelligence reference to MOS, or take it from BasicInfoPage, however that might happen. It seems to me that the classes are designed rather oddly-- an information page, for one thing, should not generate anything, and if it does, it should give it back to whatever needs to know-- some sort of central place, which should have been the one generating it in the first place. Ordinarily, you'd set the variables there, and get them from there. Or at least, I would.
If you want the basic answer of "how do I pass variables between different classes", then here you go, but I doubt it's exactly what you want, as you look to be using some sort of controlling framework:
class Foo(object):
def __init__(self, var):
self.var = var
class Bar(object):
def do_something(self, var):
print var*3
if __name__ == '__main__':
f = Foo(3)
b = Bar()
# look, I'm using the variable from one instance in another!
b.do_something(f.var)
A:
If I understood you correctly, then the answer is: You can't.
intelligence should be an attribute of WizardPageSimple, if you'd want both classes to inherit it.
Depending on your situation, you might try to extract intelligence and related attributes into another baseclass. Then you could inherit from both:
class MOS(wiz.WizardPageSimple, wiz.IntelligenceAttributes): # Or something like that.
In that case you must use the co-operative super. In fact, you should be using it already. Instead of calling
wiz.WizardPageSimple.__init__(self, parent)
call
super(MOS, self).__init__(parent)
| Python-passing variable between classes | I'm trying to create a character generation wizard for a game. In one class I calculate the attributes of the character. In a different class, I'm displaying to the user which specialties are available based on the attributes of the character. However, I can't remember how to pass variables between different classes.
Here is an example of what I have:
class BasicInfoPage(wx.wizard.WizardPageSimple):
def __init__(self, parent, title):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
<---snip--->
self.intelligence = self.genAttribs()
class MOS(wx.wizard.WizardPageSimple):
def __init__(self, parent, title):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
def eligibleMOS(self, event):
if self.intelligence >= 12:
self.MOS_list.append("Analyst")
The problem is that I can't figure out how to use the "intelligence" variable from the BasicInfoPage class to the MOS class. I've tried several different things from around the Internet but nothing seems to work. What am I missing?
Edit I realized after I posted this that I didn't explain it that well. I'm trying to create a computer version of the Twilight 2000 RPG from the 1980s.
I'm using wxPython to create a wizard; the parent class of my classes is the Wizard from wxPython. That wizard will walk a user through the creation of a character, so the Basic Information page (class BasicInfoPage) lets the user give the character's name and "roll" for the character's attributes. That's where the "self.intelligence" comes from.
I'm trying to use the attributes created here for a page further on in the wizard, where the user selects the speciality of the character. The specialities that are available depend on the attributes the character has, e.g. if the intelligence is high enough, the character can be an Intel Analyst.
It's been several years since I've programmed, especially with OOP ideas. That's why I'm confused on how to create what's essentially a global variable with classes and methods.
| [
"You may have \"Class\" and \"Instance\" confused. It's not clear from your example, so I'll presume that you're using a lot of class definitions and don't have appropriate object instances of those classes.\nClasses don't really have usable attribute values. A class is just a common set of definitions for a collection of objects. You should think of of classes as definitions, not actual things.\nInstances of classes, \"objects\", are actual things that have actual attribute values and execute method functions.\nYou don't pass variables among classes. You pass variables among instances. As a practical matter only instance variables matter. [Yes, there are class variables, but they're a fairly specialized and often confusing thing, best avoided.]\nWhen you create an object (an instance of a class)\nb= BasicInfoPage(...)\n\nThen b.intelligence is the value of intelligence for the b instance of BasicInfoPage.\nA really common thing is \nclass MOS( wx.wizard.PageSimple ):\n def __init__( self, parent, title, basicInfoPage ):\n <snip>\n self.basicInfo= basicInfoPage\n\nNow, within MOS methods, you can say self.basicInfo.intelligence because MOS has an object that's a BasicInfoPage available to it.\nWhen you build MOS, you provide it with the instance of BasicInfoPage that it's supposed to use.\nsomeBasicInfoPage= BasicInfoPage( ... ) \nm= MOS( ..., someBasicInfoPage )\n\nNow, the object m can examine someBasicInfoPage.intelligence \n",
"Each page of a Wizard -- by itself -- shouldn't actually be the container for the information you're gathering.\nRead up on the Model-View-Control design pattern. Your pages have the View and Control parts of the design. They aren't the data model, however.\nYou'll be happier if you have a separate object that is \"built\" by the pages. Each page will set some attributes of that underlying model object. Then, the pages are independent of each other, since the pages all get and set values of this underlying model object.\nSince you're building a character, you'd have some class like this\nclass Character( object ):\n def __init__( self ):\n self.intelligence= 10\n <default values for all attributes.>\n\nThen your various Wizard instances just need to be given the underlying Character object as a place to put and get values.\n",
"My problem was indeed the confusion of classes vs. instances. I was trying to do everything via classes without ever creating an actual instance. Plus, I was forcing the \"BasicInfoPage\" class to do too much work.\nUltimately, I created a new class (BaseAttribs) to hold all the variables I need. I then created in instance of that class when I run the wizard and pass that instance as an argument to the classes that need it, as shown below:\n#---Run the wizard\nif __name__ == \"__main__\":\n app = wx.PySimpleApp()\n wizard = wiz.Wizard(None, -1, \"TW2K Character Creation\")\n attribs = BaseAttribs\n\n#---Create each page\n page1 = IntroPage(wizard, \"Introduction\")\n page2 = BasicInfoPage(wizard, \"Basic Info\", attribs)\n page3 = Ethnicity(wizard, \"Ethnicity\")\n page4 = MOS(wizard, \"Military Occupational Specialty\", attribs)\n\nI then used the information S.Lott provided and created individual instances (if that's what it's called) within each class; each class is accessing the same variables though.\nEverything works, as far as I can tell. Thanks.\n",
"All you need is a reference. It's not really a simple problem that I can give some one-line solution to (other than a simple ugly global that would probably break something else), but one of program structure. You don't magically get access to a variable that was created on another instance of another class. You have to either give the intelligence reference to MOS, or take it from BasicInfoPage, however that might happen. It seems to me that the classes are designed rather oddly-- an information page, for one thing, should not generate anything, and if it does, it should give it back to whatever needs to know-- some sort of central place, which should have been the one generating it in the first place. Ordinarily, you'd set the variables there, and get them from there. Or at least, I would.\nIf you want the basic answer of \"how do I pass variables between different classes\", then here you go, but I doubt it's exactly what you want, as you look to be using some sort of controlling framework:\nclass Foo(object):\n def __init__(self, var):\n self.var = var\n\nclass Bar(object):\n def do_something(self, var):\n print var*3\n\nif __name__ == '__main__':\n f = Foo(3)\n b = Bar()\n # look, I'm using the variable from one instance in another!\n b.do_something(f.var)\n\n",
"If I understood you correctly, then the answer is: You can't.\nintelligence should be an attribute of WizardPageSimple, if you'd want both classes to inherit it.\nDepending on your situation, you might try to extract intelligence and related attributes into another baseclass. Then you could inherit from both:\nclass MOS(wiz.WizardPageSimple, wiz.IntelligenceAttributes): # Or something like that.\n\nIn that case you must use the co-operative super. In fact, you should be using it already. Instead of calling \nwiz.WizardPageSimple.__init__(self, parent)\n\ncall\nsuper(MOS, self).__init__(self, parent)\n\n"
] | [
8,
4,
2,
1,
0
] | [] | [] | [
"oop",
"python",
"variables",
"wxpython"
] | stackoverflow_0000113341_oop_python_variables_wxpython.txt |
Q:
Dynamically create variables inside function
I want to create variables inside a function from a dictionary.
Let's say I have a dictionary
bar = {'a': 1, 'b': 2, 'c': 3}
and function
def foo():
pass
What I want to do is to create, inside the function "foo", variables named after each dictionary key, holding the corresponding dictionary values
So in the end it should be similar to
def foo():
a = 1
b = 2
c = 3
Is it possible at all? And if it does, how to do such thing?
A:
From your comment, perhaps what you're really looking for is something like a bunch object:
class Bunch(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
b=Bunch(**form.cleaned_data)
print b.first_name, b.last_name
(The ** syntax is because Bunch-type objects are usually used like Bunch(foo=12, bar='blah') - not used in your case but I've left it for consistency with normal usage)
This does require a "b." prefix to access your variables, but if you think about it, this is no bad thing. Consider what would happen if someone crafted a POST request to overwrite variables you aren't expecting to be overwritten - it makes it easy to produce crashes and DOS attacks, and could easily introduce more serious security vulnerabilities.
A:
Your question is not clear.
If you want to "set" said variables when foo is not running, no, you can't. There is no frame object yet to "set" the local variables in.
If you want to do that in the function body, you shouldn't (check the python documentation for locals()).
However, you could do a foo.__dict__.update(bar), and then you could access those variables even from inside the function as foo.a, foo.b and foo.c. The question is: why do you want to do that, and why isn't a class more suitable for your purposes?
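A quick demonstration of that function-attribute approach:
bar = {'a': 1, 'b': 2, 'c': 3}

def foo():
    pass

foo.__dict__.update(bar)   # same effect as setattr(foo, key, val) per key
print foo.a, foo.b, foo.c  # prints: 1 2 3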
A:
Why would you want to do such a thing? Unless you actually do anything with the variables inside the function, a function that just assigns several variables and then discards them is indistinguishable from def foo(): pass (an optimiser would be justified in generating exactly the same bytecode).
If you also want to dynamically append code that uses the values, then you could do this by using exec (though unless this is really user-input code, there are almost certainly better ways to do what you want). eg:
some_code = ' return a+b+c'
exec "def foo():\n " + '\n '.join('%s = %s' % (k, v) for k, v in bar.items()) + '\n' + some_code
(Note that your code must be indented to the same level.)
On the other hand, if you want to actually assign these values to the function object (so you can do foo.a and get 1 - note that your sample code doesn't do this), you can do this by:
for key, val in bar.items():
setattr(foo, key, val)
A:
Thanks guys, I got the point. I should not do such a thing. But if you're curious, what I tried to do is to somehow shorten the number of lines in my view function in django. I have a form with many fields, and instead of receiving every field in the form of:
first_name = form.cleaned_data['first_name']
last_name = form.cleaned_data['last_name'] ..
I was thinking of taking every attribute name of my form class and looping over it. Like so:
for name in ProfileRegistration.base_fields.__dict__['keyOrder']:
# and here the variables that i tried to assign
| Dynamically create variables inside function | I want to create variables inside a function from a dictionary.
Let's say I have a dictionary
bar = {'a': 1, 'b': 2, 'c': 3}
and function
def foo():
pass
What I want to do is to create inside function "foo" variables with names of each dictionary item name and values as dictionary item values
So in the end it should be similar to
def foo():
a = 1
b = 2
c = 3
Is it possible at all? And if it does, how to do such thing?
| [
"From your comment, perhaps what you're really looking for is something like a bunch object:\nclass Bunch(object):\n def __init__(self, **kwargs):\n self.__dict__.update(kwargs)\n\nb=Bunch(**form.cleaned_data)\n\nprint b.first_name, b.last_name\n\n(The ** syntax is because Bunch-type objects are usually used like Bunch(foo=12, bar='blah') - not used in your case but I've left it for consistency with normal usage)\nThis does require a \"b.\" prefix to access your variables, but if you think about it, this is no bad thing. Consider what would happen if someone crafted a POST request to overwrite variables you aren't expecting to be overwritten - it makes it easy to produce crashes and DOS attacks, and could easily introduce more serious security vulnerabilities.\n",
"Your question is not clear.\nIf you want to \"set\" said variables when foo is not running, no, you can't. There is no frame object yet to \"set\" the local variables in.\nIf you want to do that in the function body, you shouldn't (check the python documentation for locals()).\nHowever, you could do a foo.__dict__.update(bar), and then you could access those variables even from inside the function as foo.a, foo.b and foo.c. The question is: why do you want to do that, and why isn't a class more suitable for your purposes?\n",
"Why would you want to do such a thing? Unless you actually do anything with the variables inside the function, a function that just assigns several variables and then discards them is indistinguishable to def foo(): pass (An optimiser would be justified in generating exactly the same bytecode).\nIf you also want to dynamically append code that uses the values, then you could do this by using exec (though unless this is really user-input code, there are almost certainly better ways to do what you want). eg:\nsome_code = ' return a+b+c'\nexec \"def foo():\\n \" + '\\n '.join('%s = %s' for k,v in bar.items()) + '\\n' + some_code\n\n(Note that your code must be indented to the same level.)\nOn the other hand, if you want to actually assign these values to the function object (so you can do foo.a and get 1 - note that your sample code doesn't do this), you can do this by:\nfor key, val in bar.items():\n setattr(foo, key, val)\n\n",
"Thanks guys, I got the point. I should not do such thing. But if your curios what I tried to do is to somehow short number of lines in my view function in django. I have form with many fields, and instead of receive every field in form of:\nfirst_name = form.cleaned_data['first_name']\nlast_name = form.cleaned_data['last_name'] ..\n\ni was thinking to take every attribute name of my form class and loop over it. Like so:\nfor name in ProfileRegistration.base_fields.__dict__['keyOrder']:\n # and here the variables that i tried to assign\n\n"
] | [
3,
2,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0000119941_python.txt |
Q:
Does an application-wide exception handler make sense?
Long story short, I have a substantial Python application that, among other things, makes calls out to "losetup", "mount", etc. on Linux, essentially consuming system resources that must be released when it completes.
If my application crashes, I want to ensure these system resources are properly released.
Does it make sense to do something like the following?
def main():
# TODO: main application entry point
pass
def cleanup():
# TODO: release system resources here
pass
if __name__ == "__main__":
try:
main()
except:
cleanup()
raise
Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class?
A:
I like top-level exception handlers in general (regardless of language). They're a great place to cleanup resources that may not be immediately related to resources consumed inside the method that throws the exception.
It's also a fantastic place to log those exceptions if you have such a framework in place. Top-level handlers will catch those bizarre exceptions you didn't plan on and let you correct them in the future, otherwise, you may never know about them at all.
Just be careful that your top-level handler doesn't throw exceptions!
A:
A destructor (as in a __del__ method) is a bad idea, as these are not guaranteed to be called. The atexit module is a safer approach, although these will still not fire if the Python interpreter crashes (rather than the Python application), or if os._exit() is used, or the process is killed aggressively, or the machine reboots. (Of course, the last item isn't an issue in your case.) If your process is crash-prone (it uses fickle third-party extension modules, for instance) you may want to do the cleanup in a simple parent process for more isolation.
If you aren't really worried, use the atexit module.
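A minimal sketch of the atexit approach; the body of cleanup() is whatever releases your losetup/mount resources:
import atexit

def cleanup():
    # detach loop devices, unmount filesystems, etc.
    pass

atexit.register(cleanup)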
A:
Application wide handler is fine. They are great for logging. Just make sure that the application wide one is durable and is unlikely to crash itself.
A:
If you use classes, you should of course free the resources they allocate in their destructors instead. Use the try: on the entire application only if you want to free resources that aren't already released by your classes' destructors.
And instead of using a catch-all except:, you should use the following block:
try:
main()
finally:
cleanup()
That will ensure cleanup in a more pythonic way.
A:
That seems like a reasonable approach, and more straightforward and reliable than a destructor on a singleton class. You might also look at the "atexit" module. (Pronounced "at exit", not "a tex it" or something like that. I confused that for a long while.)
A:
Consider writing a context manager and using the with statement.
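For example, a sketch of a context manager guarding one such resource; the mount/umount invocations are placeholders for your actual commands:
from contextlib import contextmanager
import subprocess

@contextmanager
def mounted(device, mountpoint):
    subprocess.check_call(["mount", device, mountpoint])
    try:
        yield mountpoint
    finally:
        # runs even if the body raises, so the mount is always released
        subprocess.check_call(["umount", mountpoint])

# usage:
# with mounted("/dev/loop0", "/mnt/img") as mp:
#     do_work(mp)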
| Does an application-wide exception handler make sense? | Long story short, I have a substantial Python application that, among other things, does outcalls to "losetup", "mount", etc. on Linux. Essentially consuming system resources that must be released when complete.
If my application crashes, I want to ensure these system resources are properly released.
Does it make sense to do something like the following?
def main():
# TODO: main application entry point
pass
def cleanup():
# TODO: release system resources here
pass
if __name__ == "__main__":
try:
main()
except:
cleanup()
raise
Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class?
| [
"I like top-level exception handlers in general (regardless of language). They're a great place to cleanup resources that may not be immediately related to resources consumed inside the method that throws the exception.\nIt's also a fantastic place to log those exceptions if you have such a framework in place. Top-level handlers will catch those bizarre exceptions you didn't plan on and let you correct them in the future, otherwise, you may never know about them at all.\nJust be careful that your top-level handler doesn't throw exceptions!\n",
"A destructor (as in a __del__ method) is a bad idea, as these are not guaranteed to be called. The atexit module is a safer approach, although these will still not fire if the Python interpreter crashes (rather than the Python application), or if os._exit() is used, or the process is killed aggressively, or the machine reboots. (Of course, the last item isn't an issue in your case.) If your process is crash-prone (it uses fickle third-party extension modules, for instance) you may want to do the cleanup in a simple parent process for more isolation.\nIf you aren't really worried, use the atexit module.\n",
"Application wide handler is fine. They are great for logging. Just make sure that the application wide one is durable and is unlikely to crash itself.\n",
"if you use classes, you should free the resources they allocate in their destructors instead, of course. Use the try: on entire application just if you want to free resources that aren't already liberated by your classes' destructors.\nAnd instead of using a catch-all except:, you should use the following block:\ntry:\n main()\nfinally:\n cleanup()\n\nThat will ensure cleanup in a more pythonic way.\n",
"That seems like a reasonable approach, and more straightforward and reliable than a destructor on a singleton class. You might also look at the \"atexit\" module. (Pronounced \"at exit\", not \"a tex it\" or something like that. I confused that for a long while.)\n",
"Consider writing a context manager and using the with statement.\n"
] | [
11,
7,
2,
2,
1,
1
] | [] | [] | [
"exception_handling",
"python"
] | stackoverflow_0000095642_exception_handling_python.txt |
Q:
Using **kwargs with SimpleXMLRPCServer in python
I have a class that I wish to expose as a remote service using Python's SimpleXMLRPCServer. The server startup looks like this:
server = SimpleXMLRPCServer((serverSettings.LISTEN_IP,serverSettings.LISTEN_PORT))
service = Service()
server.register_instance(service)
server.serve_forever()
I then have a ServiceRemote class that looks like this:
def __init__(self,ip,port):
self.rpcClient = xmlrpclib.Server('http://%s:%d' %(ip,port))
def __getattr__(self, name):
# forward all calls to the rpc client
return getattr(self.rpcClient, name)
So all calls on the ServiceRemote object will be forwarded to xmlrpclib.Server, which then forwards it to the remote server. The problem is a method in the service that takes named varargs:
@useDb
def select(self, db, fields, **kwargs):
pass
The @useDb decorator wraps the function, creating the db before the call and opening it, then closing it after the call is done before returning the result.
When I call this method, I get the error "call() got an unexpected keyword argument 'name'". So, is it possible to call methods taking variable named arguments remotely? Or will I have to create an override for each method variation I need.
Thanks for the responses. I changed my code around a bit, so the question is no longer an issue. However, now I know this for future reference if I do indeed need to implement positional arguments and support remote invocation. I think a combination of Thomas's and praptak's approaches would be good: turning kwargs into positional args on the client through xmlrpclib, and having a wrapper on methods server-side to unpack positional arguments.
A:
You can't do this with plain xmlrpc since it has no notion of keyword arguments. However, you can superimpose this as a protocol on top of xmlrpc that would always pass a list as first argument, and a dictionary as a second, and then provide the proper support code so this becomes transparent for your usage, example below:
Server
from SimpleXMLRPCServer import SimpleXMLRPCServer
class Server(object):
def __init__(self, hostport):
self.server = SimpleXMLRPCServer(hostport)
def register_function(self, function, name=None):
def _function(args, kwargs):
return function(*args, **kwargs)
_function.__name__ = function.__name__
self.server.register_function(_function, name)
def serve_forever(self):
self.server.serve_forever()
#example usage
server = Server(('localhost', 8000))
def test(arg1, arg2):
print 'arg1: %s arg2: %s' % (arg1, arg2)
return 0
server.register_function(test)
server.serve_forever()
Client
import xmlrpclib
class ServerProxy(object):
def __init__(self, url):
self._xmlrpc_server_proxy = xmlrpclib.ServerProxy(url)
def __getattr__(self, name):
call_proxy = getattr(self._xmlrpc_server_proxy, name)
def _call(*args, **kwargs):
return call_proxy(args, kwargs)
return _call
#example usage
server = ServerProxy('http://localhost:8000')
server.test(1, 2)
server.test(arg2=2, arg1=1)
server.test(1, arg2=2)
server.test(*[1,2])
server.test(**{'arg1':1, 'arg2':2})
A:
XML-RPC doesn't really have a concept of 'keyword arguments', so xmlrpclib doesn't try to support them. You would need to pick a convention, then modify xmlrpclib._Method to accept keyword arguments and pass them along using that convention.
For instance, I used to work with an XML-RPC server that passed keyword arguments as two arguments, '-KEYWORD' followed by the actual argument, in a flat list. I no longer have access to the code I wrote to access that XML-RPC server from Python, but it was fairly simple, along the lines of:
import xmlrpclib
_orig_Method = xmlrpclib._Method
class KeywordArgMethod(_orig_Method):
def __call__(self, *args, **kwargs):
if args and kwargs:
raise TypeError, "Can't pass both positional and keyword args"
args = list(args)
for key in kwargs:
args.append('-%s' % key.upper())
args.append(kwargs[key])
return _orig_Method.__call__(self, *args)
xmlrpclib._Method = KeywordArgMethod
It uses monkeypatching because that's by far the easiest method to do this, because of some clunky uses of module globals and name-mangled attributes (__request, for instance) in the ServerProxy class.
A:
As far as I know, the underlying protocol doesn't support named varargs (or any named args for that matter). The workaround for this is to create a wrapper that will take the **kwargs and pass it as an ordinary dictionary to the method you want to call. Something like this
Server side:
def select_wrapper(self, db, fields, kwargs):
"""accepts an ordinary dict which can pass through xmlrpc"""
return select(self,db,fields, **kwargs)
On the client side:
def select(self, db, fields, **kwargs):
"""you can call it with keyword arguments and they will be packed into a dict"""
return self.rpcClient.select_wrapper(self,db,fields,kwargs)
Disclaimer: the code shows the general idea, you can do it a bit cleaner (for example writing a decorator to do that).
A:
Using the above advice, I created some working code.
Server method wrapper:
def unwrap_kwargs(func):
    def wrapper(*args, **kwargs):
        print args
        if args and isinstance(args[-1], list) and len(args[-1]) == 2 and "kwargs" == args[-1][0]:
            # return the wrapped call's result (the original snippet dropped it)
            return func(*args[:-1], **args[-1][1])
        else:
            return func(*args, **kwargs)
    return wrapper
Client setup (do once):
_orig_Method = xmlrpclib._Method
class KeywordArgMethod(_orig_Method):
def __call__(self, *args, **kwargs):
args = list(args)
if kwargs:
args.append(("kwargs", kwargs))
return _orig_Method.__call__(self, *args)
xmlrpclib._Method = KeywordArgMethod
I tested this, and it supports method with fixed, positional and keyword arguments.
A:
As Thomas Wouters said, XML-RPC does not have keyword arguments. Only the order of arguments matters as far as the protocol is concerned and they can be called anything in XML: arg0, arg1, arg2 is perfectly fine, as is cheese, candy and bacon for the same arguments.
Perhaps you should simply rethink your use of the protocol? Using something like document/literal SOAP would be much better than a workaround such as the ones presented in other answers here. Of course, this may not be feasible.
| Using **kwargs with SimpleXMLRPCServer in python | I have a class that I wish to expose as a remote service using Python's SimpleXMLRPCServer. The server startup looks like this:
server = SimpleXMLRPCServer((serverSettings.LISTEN_IP,serverSettings.LISTEN_PORT))
service = Service()
server.register_instance(service)
server.serve_forever()
I then have a ServiceRemote class that looks like this:
def __init__(self,ip,port):
self.rpcClient = xmlrpclib.Server('http://%s:%d' %(ip,port))
def __getattr__(self, name):
# forward all calls to the rpc client
return getattr(self.rpcClient, name)
So all calls on the ServiceRemote object will be forwarded to xmlrpclib.Server, which then forwards it to the remote server. The problem is a method in the service that takes named varargs:
@useDb
def select(self, db, fields, **kwargs):
pass
The @useDb decorator wraps the function, creating the db before the call and opening it, then closing it after the call is done before returning the result.
When I call this method, I get the error "call() got an unexpected keyword argument 'name'". So, is it possible to call methods taking variable named arguments remotely? Or will I have to create an override for each method variation I need.
Thanks for the responses. I changed my code around a bit so the question is no longer an issue. However now I know this for future reference if I indeed do need to implement positional arguments and support remote invocation. I think a combination of Thomas and praptaks approaches would be good. Turning kwargs into positional args on the client through xmlrpclient, and having a wrapper on methods serverside to unpack positional arguments.
| [
"You can't do this with plain xmlrpc since it has no notion of keyword arguments. However, you can superimpose this as a protocol on top of xmlrpc that would always pass a list as first argument, and a dictionary as a second, and then provide the proper support code so this becomes transparent for your usage, example below:\nServer\nfrom SimpleXMLRPCServer import SimpleXMLRPCServer\n\nclass Server(object):\n def __init__(self, hostport):\n self.server = SimpleXMLRPCServer(hostport)\n\n def register_function(self, function, name=None):\n def _function(args, kwargs):\n return function(*args, **kwargs)\n _function.__name__ = function.__name__\n self.server.register_function(_function, name)\n\n def serve_forever(self):\n self.server.serve_forever()\n\n#example usage\nserver = Server(('localhost', 8000))\ndef test(arg1, arg2):\n print 'arg1: %s arg2: %s' % (arg1, arg2)\n return 0\nserver.register_function(test)\nserver.serve_forever()\n\nClient\nimport xmlrpclib\n\nclass ServerProxy(object):\n def __init__(self, url):\n self._xmlrpc_server_proxy = xmlrpclib.ServerProxy(url)\n def __getattr__(self, name):\n call_proxy = getattr(self._xmlrpc_server_proxy, name)\n def _call(*args, **kwargs):\n return call_proxy(args, kwargs)\n return _call\n\n#example usage\nserver = ServerProxy('http://localhost:8000')\nserver.test(1, 2)\nserver.test(arg2=2, arg1=1)\nserver.test(1, arg2=2)\nserver.test(*[1,2])\nserver.test(**{'arg1':1, 'arg2':2})\n\n",
"XML-RPC doesn't really have a concept of 'keyword arguments', so xmlrpclib doesn't try to support them. You would need to pick a convention, then modify xmlrpclib._Method to accept keyword arguments and pass them along using that convention.\nFor instance, I used to work with an XML-RPC server that passed keyword arguments as two arguments, '-KEYWORD' followed by the actual argument, in a flat list. I no longer have access to the code I wrote to access that XML-RPC server from Python, but it was fairly simple, along the lines of:\nimport xmlrpclib\n\n_orig_Method = xmlrpclib._Method\n\nclass KeywordArgMethod(_orig_Method): \n def __call__(self, *args, **kwargs):\n if args and kwargs:\n raise TypeError, \"Can't pass both positional and keyword args\"\n args = list(args) \n for key in kwargs:\n args.append('-%s' % key.upper())\n args.append(kwargs[key])\n return _orig_Method.__call__(self, *args) \n\nxmlrpclib._Method = KeywordArgMethod\n\nIt uses monkeypatching because that's by far the easiest method to do this, because of some clunky uses of module globals and name-mangled attributes (__request, for instance) in the ServerProxy class.\n",
"As far as I know, the underlying protocol doesn't support named varargs (or any named args for that matter). The workaround for this is to create a wrapper that will take the **kwargs and pass it as an ordinary dictionary to the method you want to call. Something like this\nServer side:\ndef select_wrapper(self, db, fields, kwargs):\n \"\"\"accepts an ordinary dict which can pass through xmlrpc\"\"\"\n return select(self,db,fields, **kwargs)\n\nOn the client side:\ndef select(self, db, fields, **kwargs):\n \"\"\"you can call it with keyword arguments and they will be packed into a dict\"\"\"\n return self.rpcClient.select_wrapper(self,db,fields,kwargs)\n\nDisclaimer: the code shows the general idea, you can do it a bit cleaner (for example writing a decorator to do that).\n",
"Using the above advice, I created some working code.\nServer method wrapper:\ndef unwrap_kwargs(func):\n def wrapper(*args, **kwargs):\n print args\n if args and isinstance(args[-1], list) and len(args[-1]) == 2 and \"kwargs\" == args[-1][0]:\n func(*args[:-1], **args[-1][1])\n else:\n func(*args, **kwargs)\n return wrapper\n\nClient setup (do once):\n_orig_Method = xmlrpclib._Method\n\nclass KeywordArgMethod(_orig_Method): \n def __call__(self, *args, **kwargs):\n args = list(args) \n if kwargs:\n args.append((\"kwargs\", kwargs))\n return _orig_Method.__call__(self, *args)\n\nxmlrpclib._Method = KeywordArgMethod\n\nI tested this, and it supports method with fixed, positional and keyword arguments.\n",
"As Thomas Wouters said, XML-RPC does not have keyword arguments. Only the order of arguments matters as far as the protocol is concerned and they can be called anything in XML: arg0, arg1, arg2 is perfectly fine, as is cheese, candy and bacon for the same arguments.\nPerhaps you should simply rethink your use of the protocol? Using something like document/literal SOAP would be much better than a workaround such as the ones presented in other answers here. Of course, this may not be feasible.\n"
] | [
15,
4,
1,
1,
0
] | [] | [] | [
"python",
"simplexmlrpcserver",
"xmlrpclib"
] | stackoverflow_0000119802_python_simplexmlrpcserver_xmlrpclib.txt |
Q:
Find matching sequences in two binary files
Let me start off with a bit of background.
This morning one of our users reported that Testuff's setup file has been reported as infected with a virus by the CA antivirus. Confident that this was a false positive, I looked on the web and found that users of another program (SpyBot) have reported the same problem.
And now, for the actual question.
Assuming the antivirus is looking for a specific binary signature in the file, I'd like to find the matching sequences in both files and hopefully find a way to tweak the setup script to prevent that sequence from appearing.
I tried the following in Python, but it's been running for a long time now and I was wondering if there was a better or faster way.
from difflib import SequenceMatcher
spybot = open("spybotsd160.exe", "rb").read()
testuff = open("TestuffSetup.exe", "rb").read()
s = SequenceMatcher(None, spybot, testuff)
print s.find_longest_match(0, len(spybot), 0, len(testuff))
Is there a better library for Python or for another language that can do this?
A completely different way to tackle the problem is welcome as well.
A:
See the longest common substring problem. I guess difflib uses the DP solution, which is certainly too slow to compare executables. You can do much better with suffix trees/arrays.
Using perl Tree::Suffix might be easiest solution. Apparently it gives all common substrings in a specified length range:
@lcs = $tree->lcs;
@lcs = $tree->lcs($min_len, $max_len);
@lcs = $tree->longest_common_substrings;
A:
Note that even if you did find it this way, there's no guarantee that the longest match is actually the one being looked for. Instead, you may find common initialisation code or string tables added by the same compiler for instance.
A:
Why don't you contact CA and ask them to tell you what they're searching for, for that virus?
Or, you could copy the file and change each individual byte until the warning disappeared (may take a while depending on the size).
It's possible the virus detection may be a lot more complicated than simply looking for a fixed string.
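Rather than flipping single bytes, a bisection variant of that idea narrows the trigger region in O(log n) rescans. A rough sketch that blanks a region and writes the result for rescanning (blanking may of course corrupt the executable's structure, so treat the results with care):
def write_variant(data, start, end, path="variant.exe"):
    # zero out data[start:end]; if the scanner stops complaining about
    # the variant, the signature overlaps the blanked region
    masked = data[:start] + "\x00" * (end - start) + data[end:]
    out = open(path, "wb")
    out.write(masked)
    out.close()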
A:
Be careful about the complexity and running time these kinds of algorithms need.
If you have an interest in this, the .ps document linked here gives a good introduction to the topic.
Whether a good implementation of these algorithms exists, I cannot say.
A:
I suspect that looking for binary strings isn't going to help you. An install program is likely to be doing some 'suspicious' things.
You probably need to talk to CA and spybot about white-listing your installer, or about what is triggering the alert.
| Find matching sequences in two binary files | Let me start off with a bit of background.
This morning one of our users reported that Testuff's setup file has been reported as infected with a virus by the CA antivirus. Confident that this was a false positive, I looked on the web and found that users of another program (SpyBot) have reported the same problem.
And now, for the actual question.
Assuming the antivirus is looking for a specific binary signature in the file, I'd like to find the matching sequences in both files and hopefully find a way to tweak the setup script to prevent that sequence from appearing.
I tried the following in Python, but it's been running for a long time now and I was wondering if there was a better or faster way.
from difflib import SequenceMatcher
spybot = open("spybotsd160.exe", "rb").read()
testuff = open("TestuffSetup.exe", "rb").read()
s = SequenceMatcher(None, spybot, testuff)
print s.find_longest_match(0, len(spybot), 0, len(testuff))
Is there a better library for Python or for another language that can do this?
A completely different way to tackle the problem is welcome as well.
| [
"See the longest common substring problem. I guess difflib uses the DP solution, which is certainly too slow to compare executables. You can do much better with suffix trees/arrays.\nUsing perl Tree::Suffix might be easiest solution. Apparently it gives all common substrings in a specified length range:\n@lcs = $tree->lcs;\n@lcs = $tree->lcs($min_len, $max_len);\n@lcs = $tree->longest_common_substrings;\n\n",
"Note that even if you did find it this way, there's no guarantee that the longest match is actually the one being looked for. Instead, you may find common initialisation code or string tables added by the same compiler for instance.\n",
"Why don't you contact CA and ask them to tell them what they're searching for, for that virus?\nOr, you could copy the file and change each individual byte until the warning disappeared (may take a while depending on the size).\nIt's possible the virus detection may be a lot more complicated than simply looking for a fixed string.\n",
"Better not wonder about the complexity and time these kinds of algorithms need.\nIf you have interest in this - here .ps document linked here you can find a good introduction into this thematic.\nIf a good implementation for these algorithms exist, I can not tell.\n",
"I suspect that looking for binary strings isn't going to help you. An install program is likely to be doing some 'suspicious' things. \nYou probably need to talk to CA and spybot about white-listing your installer, or about what is triggering the alert.\n"
] | [
5,
2,
1,
1,
0
] | [] | [] | [
"antivirus",
"binary",
"diff",
"python"
] | stackoverflow_0000119651_antivirus_binary_diff_python.txt |
Q:
Python idiom to chain (flatten) an infinite iterable of finite iterables?
Suppose we have an iterator (an infinite one) that returns lists (or finite iterators), for example one returned by
infinite = itertools.cycle([[1,2,3]])
What is a good Python idiom to get an iterator (obviously infinite) that will return each of the elements from the first iterator, then each from the second one, etc. In the example above it would return 1,2,3,1,2,3,.... The iterator is infinite, so itertools.chain(*infinite) will not work.
Related
Flattening a shallow list in python
A:
Starting with Python 2.6, you can use itertools.chain.from_iterable:
itertools.chain.from_iterable(iterables)
You can also do this with a nested generator expression:
def flatten(iterables):
return (elem for iterable in iterables for elem in iterable)
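Either version can be sanity-checked against the infinite example with itertools.islice, which takes a finite slice of the endless stream:
import itertools

infinite = itertools.cycle([[1, 2, 3]])
flat = itertools.chain.from_iterable(infinite)
print list(itertools.islice(flat, 7))   # prints [1, 2, 3, 1, 2, 3, 1]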
A:
Use a generator:
(item for it in infinite for item in it)
The * construct unpacks into a tuple in order to pass the arguments, so there's no way to use it.
| Python idiom to chain (flatten) an infinite iterable of finite iterables? | Suppose we have an iterator (an infinite one) that returns lists (or finite iterators), for example one returned by
infinite = itertools.cycle([[1,2,3]])
What is a good Python idiom to get an iterator (obviously infinite) that will return each of the elements from the first iterator, then each from the second one, etc. In the example above it would return 1,2,3,1,2,3,.... The iterator is infinite, so itertools.chain(*infinite) will not work.
Related
Flattening a shallow list in python
| [
"Starting with Python 2.6, you can use itertools.chain.from_iterable:\nitertools.chain.from_iterable(iterables)\n\nYou can also do this with a nested generator comprehension:\ndef flatten(iterables):\n return (elem for iterable in iterables for elem in iterable)\n\n",
"Use a generator:\n(item for it in infinite for item in it)\n\nThe * construct unpacks into a tuple in order to pass the arguments, so there's no way to use it.\n"
] | [
52,
13
] | [] | [] | [
"iterator",
"python"
] | stackoverflow_0000120886_iterator_python.txt |
Q:
How can I join a list into a string (caveat)?
Along the lines of my previous question, how can I join a list of strings into a string such that values get quoted cleanly. Something like:
['a', 'one "two" three', 'foo, bar', """both"'"""]
into:
a, 'one "two" three', "foo, bar", "both\"'"
I suspect that the csv module will come into play here, but I'm not sure how to get the output I want.
A:
Using the csv module you can do it this way:
import csv
writer = csv.writer(open("some.csv", "wb"))
writer.writerow(the_list)
If you need a string, just use a StringIO instance as a file:
f = StringIO.StringIO()
writer = csv.writer(f)
writer.writerow(the_list)
print f.getvalue()
The output: a,"one ""two"" three","foo, bar","both""'"
csv will write the values in a way it can read back later.
You can fine-tune its output by defining a dialect; just set quotechar, escapechar, etc., as needed:
class SomeDialect(csv.excel):
delimiter = ','
quotechar = '"'
escapechar = "\\"
doublequote = False
lineterminator = '\n'
quoting = csv.QUOTE_MINIMAL
f = cStringIO.StringIO()
writer = csv.writer(f, dialect=SomeDialect)
writer.writerow(the_list)
print f.getvalue()
The output: a,one \"two\" three,"foo, bar",both\"'
The same dialect can be used with csv module to read the string back later to a list.
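As a quick round-trip sketch, writing the question's list and reading it straight back shows the quoting survives intact:
import csv
import StringIO

the_list = ['a', 'one "two" three', 'foo, bar', 'both"\'']

f = StringIO.StringIO()
csv.writer(f).writerow(the_list)

f.seek(0)
print csv.reader(f).next() == the_list   # True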
A:
On a related note, Python's builtin encoders can also do string escaping:
>>> print "that's interesting".encode('string_escape')
that\'s interesting
A:
Here's a slightly simpler alternative.
def quote(s):
if "'" in s or '"' in s or "," in str(s):
return repr(s)
return s
We only need to quote a value that might have commas or quotes.
>>> x= ['a', 'one "two" three', 'foo, bar', 'both"\'']
>>> print ", ".join( map(quote,x) )
a, 'one "two" three', 'foo, bar', 'both"\''
| How can I join a list into a string (caveat)? | Along the lines of my previous question, how can I join a list of strings into a string such that values get quoted cleanly. Something like:
['a', 'one "two" three', 'foo, bar', """both"'"""]
into:
a, 'one "two" three', "foo, bar", "both\"'"
I suspect that the csv module will come into play here, but I'm not sure how to get the output I want.
| [
"Using the csv module you can do that way:\nimport csv\nwriter = csv.writer(open(\"some.csv\", \"wb\"))\nwriter.writerow(the_list)\n\nIf you need a string just use StringIO instance as a file:\nf = StringIO.StringIO()\nwriter = csv.writer(f)\nwriter.writerow(the_list)\nprint f.getvalue()\n\nThe output: a,\"one \"\"two\"\" three\",\"foo, bar\",\"both\"\"'\"\ncsv will write in a way it can read back later.\nYou can fine-tune its output by defining a dialect, just set quotechar, escapechar, etc, as needed:\nclass SomeDialect(csv.excel):\n delimiter = ','\n quotechar = '\"'\n escapechar = \"\\\\\"\n doublequote = False\n lineterminator = '\\n'\n quoting = csv.QUOTE_MINIMAL\n\nf = cStringIO.StringIO()\nwriter = csv.writer(f, dialect=SomeDialect)\nwriter.writerow(the_list)\nprint f.getvalue()\n\nThe output: a,one \\\"two\\\" three,\"foo, bar\",both\\\"'\nThe same dialect can be used with csv module to read the string back later to a list.\n",
"On a related note, Python's builtin encoders can also do string escaping:\n>>> print \"that's interesting\".encode('string_escape')\nthat\\'s interesting\n\n",
"Here's a slightly simpler alternative.\ndef quote(s):\n if \"'\" in s or '\"' in s or \",\" in str(s):\n return repr(s)\n return s\n\nWe only need to quote a value that might have commas or quotes.\n>>> x= ['a', 'one \"two\" three', 'foo, bar', 'both\"\\'']\n>>> print \", \".join( map(quote,x) )\na, 'one \"two\" three', 'foo, bar', 'both\"\\''\n\n"
] | [
7,
2,
1
] | [] | [] | [
"csv",
"list",
"python",
"string"
] | stackoverflow_0000118458_csv_list_python_string.txt |
Q:
Where can a save confirmation page be hooked into the Django admin? (similar to delete confirmation)
I want to emulate the delete confirmation page behavior before saving
certain models in the admin. In my case if I change one object,
certain others should be deleted as they depend upon the object's now
out-of-date state.
I understand where to implement the actual cascaded updates (inside
the parent model's save method), but I don't see a quick way to ask
the user for confirmation (and then rollback if they decide not to
save). I suppose I could implement some weird confirmation logic
directly inside the save method (sort of a two phase save) but that
seems...ugly.
Any thoughts, even general pointers into the django codebase?
Thanks!
A:
You could overload the get_form method of your model admin and add an extra checkbox to the generated form that has to be ticked. Alternatively you can override change_view and intercept the request.
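A minimal sketch of the checkbox idea might look like this (MyModelAdmin and the field name are hypothetical; a required BooleanField simply refuses to validate until it is ticked, and it shows up automatically as long as you haven't declared custom fieldsets):
from django import forms
from django.contrib import admin

class MyModelAdmin(admin.ModelAdmin):
    def get_form(self, request, obj=None, **kwargs):
        base = super(MyModelAdmin, self).get_form(request, obj, **kwargs)

        class ConfirmingForm(base):
            confirm_cascade = forms.BooleanField(
                required=True,
                help_text="Saving will delete dependent objects.")

        return ConfirmingForm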
A:
I'm by no means a Django expert, so this answer might misguide you.
Start looking somewhere around django.contrib.admin.options.ModelAdmin, especially render_change_form and response_change. I guess you would need to subclass ModelAdmin for your model and provide required behavior around those methods.
A:
Have you considered overriding the administrative templates for the models in question? This link provides an excellent overview of the process. In this particular situation, having a finer-grained level of control may be the best way to achieve the desired result.
| Where can a save confirmation page be hooked into the Django admin? (similar to delete confirmation) | I want to emulate the delete confirmation page behavior before saving
certain models in the admin. In my case if I change one object,
certain others should be deleted as they depend upon the object's now
out-of-date state.
I understand where to implement the actual cascaded updates (inside
the parent model's save method), but I don't see a quick way to ask
the user for confirmation (and then rollback if they decide not to
save). I suppose I could implement some weird confirmation logic
directly inside the save method (sort of a two phase save) but that
seems...ugly.
Any thoughts, even general pointers into the django codebase?
Thanks!
| [
"You could overload the get_form method of your model admin and add an extra checkbox to the generated form that has to be ticket. Alternatively you can override change_view and intercept the request.\n",
"I'm by no means a Django expert, so this answer might misguide you. \nStart looking somewhere around django.contrib.admin.options.ModelAdmin, especially render_change_form and response_change. I guess you would need to subclass ModelAdmin for your model and provide required behavior around those methods.\n",
"Have you considered overriding the administrative templates for the models in question? This link provides an excellent overview of the process. In this particular situation, having a finer-grained level of control may be the best way to achieve the desired result.\n"
] | [
2,
1,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000114283_django_python.txt |
Q:
Python regular expression to split paragraphs
How would one write a regular expression to use in Python to split paragraphs?
A paragraph is defined by two line breaks (\n). But one can have any amount of spaces/tabs together with the line breaks, and it still should be considered as a paragraph.
I am using Python, so the solution can use Python's regular expression syntax, which is extended (it can make use of the (?P...) constructs).
Examples:
the_str = 'paragraph1\n\nparagraph2'
# Splitting should yield ['paragraph1', 'paragraph2']
the_str = 'p1\n\t\np2\t\n\tstill p2\t \n \n\tp3'
# Should yield ['p1', 'p2\t\n\tstill p2', 'p3']
the_str = 'p1\n\n\n\tp2'
# Should yield ['p1', '\n\tp2']
The best I could come up with is: r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', i.e.
import re
paragraphs = re.split(r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', the_str)
But that is ugly. Is there anything better?
Suggestions rejected:
r'\s*?\n\s*?\n\s*?' -> That would make example 2 and 3 fail, since \s includes \n, so it would allow paragraph breaks with more than 2 \ns.
A:
Unfortunately there's no nice way to write "space but not a newline".
I think the best you can do is add some space with the x modifier and try to factor out the ugliness a bit, but that's questionable: (?x) (?: [ \t\r\f\v]*? \n ){2} [ \t\r\f\v]*?
You could also try creating a subrule just for the character class and interpolating it three times.
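For instance, here is a sketch that keeps the question's character class in one named piece and interpolates it:
import re

HSPACE = r'[ \t\r\f\v]'   # whitespace minus the newline
PARA_SEP = re.compile(r'%s*\n%s*\n%s*' % (HSPACE, HSPACE, HSPACE))

print PARA_SEP.split('p1\n\t\np2\t\n\tstill p2\t \n \n\tp3')
# ['p1', 'p2\t\n\tstill p2', 'p3']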
A:
You may be trying to deduce the structure of a document in plain text and doing what docutils does.
You might be able to simply use the Docutils parser rather than roll your own.
A:
It is not a regexp, but it is really elegant:
from itertools import groupby
def paragraph(lines):
for group_separator, line_iteration in groupby(lines.splitlines(True), key = str.isspace):
if not group_separator:
yield ''.join(line_iteration)
for p in paragraph('p1\n\t\np2\t\n\tstill p2\t \n \n\tp3'):
print repr(p)
'p1\n'
'p2\t\n\tstill p2\t \n'
'\tp3'
It's up to you to strip the output as you need it of course.
It was inspired by the famous "Python Cookbook" ;-)
A:
Almost the same, but using non-greedy quantifiers and taking advantage of the whitespace sequence.
\s*?\n\s*?\n\s*?
| Python regular expression to split paragraphs | How would one write a regular expression to use in Python to split paragraphs?
A paragraph is defined by two line breaks (\n). But one can have any amount of spaces/tabs together with the line breaks, and it still should be considered as a paragraph.
I am using Python, so the solution can use Python's regular expression syntax which is extended. (can make use of (?P...) stuff)
Examples:
the_str = 'paragraph1\n\nparagraph2'
# Splitting should yield ['paragraph1', 'paragraph2']
the_str = 'p1\n\t\np2\t\n\tstill p2\t \n \n\tp3'
# Should yield ['p1', 'p2\t\n\tstill p2', 'p3']
the_str = 'p1\n\n\n\tp2'
# Should yield ['p1', '\n\tp2']
The best I could come up with is: r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', i.e.
import re
paragraphs = re.split(r'[ \t\r\f\v]*\n[ \t\r\f\v]*\n[ \t\r\f\v]*', the_str)
But that is ugly. Is there anything better?
Suggestions rejected:
r'\s*?\n\s*?\n\s*?' -> That would make example 2 and 3 fail, since \s includes \n, so it would allow paragraph breaks with more than 2 \ns.
| [
"Unfortunately there's no nice way to write \"space but not a newline\".\nI think the best you can do is add some space with the x modifier and try to factor out the ugliness a bit, but that's questionable: (?x) (?: [ \\t\\r\\f\\v]*? \\n ){2} [ \\t\\r\\f\\v]*?\nYou could also try creating a subrule just for the character class and interpolating it three times.\n",
"You may be trying to deduce the structure of a document in plain test and doing what docutils does.\nYou might be able to simply use the Docutils parser rather than roll your own.\n",
"It is not a regexp, but it is really elegant:\nfrom itertools import groupby\n\ndef paragraph(lines):\n for group_separator, line_iteration in groupby(lines.splitlines(True), key = str.isspace):\n if not group_separator:\n yield ''.join(line_iteration)\n\nfor p in paragraph('p1\\n\\t\\np2\\t\\n\\tstill p2\\t \\n \\n\\tp'):\n print repr(p)\n\n'p1\\n'\n'p2\\t\\n\\tstill p2\\t \\n'\n'\\tp3'\n\nIt's up to you to strip the output as you need it of course.\nIt was inspired by the famous \"Python Cookbook\" ;-)\n",
"Almost the same, but using non-greedy quantifiers and taking advantage of the whitespace sequence.\n\\s*?\\n\\s*?\\n\\s*?\n\n"
] | [
5,
2,
2,
0
] | [] | [] | [
"parsing",
"python",
"regex",
"split",
"text"
] | stackoverflow_0000116494_parsing_python_regex_split_text.txt |
Q:
Passing around urls between applications in the same project
I am trying to mock-up an API and am using separate apps within Django to represent different web services. I would like App A to take in a link that corresponds to App B and parse the json response.
Is there a way to dynamically construct the url to App B so that I can test the code in development and not change too much before going into production? The problem is that I can't use localhost as part of a link.
I am currently using urllib, but eventually I would like to do something less hacky and better fitting with the web services REST paradigm.
A:
You could do something like
if settings.DEBUG:
other = "localhost"
else:
other = "somehost"
and use other to build the external URL. Generally you code in DEBUG mode and deploy in non-DEBUG mode. settings.DEBUG is a 'standard' Django thing.
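Whatever then fetches App B's JSON just glues the host onto the path; a rough sketch (the path and query string are made up):
import urllib
import simplejson   # the pre-2.6 json library

url = "http://%s/app_b/items/?format=json" % other
data = simplejson.load(urllib.urlopen(url))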
A:
By "separate apps within Django" do you mean separate applications with a common settings? That is to say, two applications within the same Django site (or project)?
If so, the {% url %} tag will generate a proper absolute URL to any of the apps listed in the settings file.
If there are separate Django servers with separate settings, you have the standard internet problem of URI design. Your URI's can be consistent with only the hostname changing.
- http://localhost/some/path - development
- http://123.45.67.78/some/path - someone's laptop who's running a server for testing
- http://qa.mysite.com/some/path - QA
- http://www.mysite.com/some/path - production
You never need to provide the host information, so all of your links are <A HREF="/some/path/">.
This, generally, works out the best. You can have someone's random laptop be a test server; you can get its IP address using ifconfig.
| Passing around urls between applications in the same project | I am trying to mock-up an API and am using separate apps within Django to represent different web services. I would like App A to take in a link that corresponds to App B and parse the json response.
Is there a way to dynamically construct the url to App B so that I can test the code in development and not change too much before going into production? The problem is that I can't use localhost as part of a link.
I am currently using urllib, but eventually I would like to do something less hacky and better fitting with the web services REST paradigm.
| [
"You could do something like\nif settings.DEBUG:\n other = \"localhost\"\nelse:\n other = \"somehost\"\n\nand use other to build the external URL. Generally you code in DEBUG mode and deploy in non-DEBUG mode. settings.DEBUG is a 'standard' Django thing.\n",
"By \"separate apps within Django\" do you mean separate applications with a common settings? That is to say, two applications within the same Django site (or project)?\nIf so, the {% url %} tag will generate a proper absolute URL to any of the apps listed in the settings file.\nIf there are separate Django servers with separate settings, you have the standard internet problem of URI design. Your URI's can be consistent with only the hostname changing.\n- http://localhost/some/path - development\n\n- http://123.45.67.78/some/path - someone's laptop who's running a server for testing\n\n- http://qa.mysite.com/some/path - QA\n\n- http://www.mysite.com/some/path - production\n\nYou never need to provide the host information, so all of your links are <A HREF=\"/some/path/\">.\nThis, generally, works out the best. You have can someone's random laptop being a test server; you can get the IP address using ifconfig.\n"
] | [
1,
1
] | [] | [] | [
"development_environment",
"django",
"python",
"web_services"
] | stackoverflow_0000124108_development_environment_django_python_web_services.txt |
Q:
What is the intended use of the DEFAULT section in config files used by ConfigParser?
I've used ConfigParser for quite a while for simple configs. One thing that's bugged me for a long time is the DEFAULT section. I'm not really sure what's an appropriate use. I've read the documentation, but I would really like to see some clever examples of its use and how it affects other sections in the file (something that really illustrates the kind of things that are possible).
A:
I found an explanation here by googling for "windows ini" "default section". Summary: whatever you put in the [DEFAULT] section gets propagated to every other section. Using the example from the linked website, let's say I have a config file called test1.ini:
[host 1]
lh_server=192.168.0.1
vh_hosts = PloneSite1:8080
lh_root = PloneSite1
[host 2]
lh_server=192.168.0.1
vh_hosts = PloneSite2:8080
lh_root = PloneSite2
I can read this using ConfigParser:
>>> cp = ConfigParser.ConfigParser()
>>> cp.read('test1.ini')
['test1.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
But I notice that lh_server is the same in both sections; and, indeed, I realise that it will be the same for most hosts I might add. So I can do this, as test2.ini:
[DEFAULT]
lh_server=192.168.0.1
[host 1]
vh_hosts = PloneSite1:8080
lh_root = PloneSite1
[host 2]
vh_hosts = PloneSite2:8080
lh_root = PloneSite2
Despite the sections not having lh_server keys, I can still access them:
>>> cp.read('test2.ini')
['test2.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
Read the linked page for a further example of using variable substitution in the DEFAULT section to simplify the INI file even more.
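As a small self-contained sketch of that substitution, a value defined once in [DEFAULT] can be interpolated into every section with %(name)s:
import ConfigParser
import StringIO

ini = """\
[DEFAULT]
base = PloneSite
lh_server = 192.168.0.1

[host 1]
lh_root = %(base)s1
"""

cp = ConfigParser.ConfigParser()
cp.readfp(StringIO.StringIO(ini))
print cp.get('host 1', 'lh_root')   # prints PloneSite1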
| What is the intended use of the DEFAULT section in config files used by ConfigParser? | I've used ConfigParser for quite a while for simple configs. One thing that's bugged me for a long time is the DEFAULT section. I'm not really sure what's an appropriate use. I've read the documentation, but I would really like to see some clever examples of its use and how it affects other sections in the file (something that really illustrates the kind of things that are possible).
| [
"I found an explanation here by googling for \"windows ini\" \"default section\". Summary: whatever you put in the [DEFAULT] section gets propagated to every other section. Using the example from the linked website, let's say I have a config file called test1.ini:\n[host 1]\nlh_server=192.168.0.1\nvh_hosts = PloneSite1:8080\nlh_root = PloneSite1\n\n[host 2]\nlh_server=192.168.0.1\nvh_hosts = PloneSite2:8080\nlh_root = PloneSite2\n\nI can read this using ConfigParser:\n>>> cp = ConfigParser.ConfigParser()\n>>> cp.read('test1.ini')\n['test1.ini']\n>>> cp.get('host 1', 'lh_server')\n'192.168.0.1'\n\nBut I notice that lh_server is the same in both sections; and, indeed, I realise that it will be the same for most hosts I might add. So I can do this, as test2.ini:\n[DEFAULT]\nlh_server=192.168.0.1\n\n[host 1]\nvh_root = PloneSite1\nlh_root = PloneSite1\n\n[host 2]\nvh_root = PloneSite2\nlh_root = PloneSite2\n\nDespite the sections not having lh_server keys, I can still access them:\n>>> cp.read('test2.ini')\n['test2.ini']\n>>> cp.get('host 1', 'lh_server')\n'192.168.0.1'\n\nRead the linked page for a further example of using variable substitution in the DEFAULT section to simplify the INI file even more.\n"
] | [
56
] | [] | [] | [
"configuration_files",
"parsing",
"python"
] | stackoverflow_0000124692_configuration_files_parsing_python.txt |
Q:
cx_Oracle: How do I iterate over a result set?
There are several ways to iterate over a result set. What are the tradeoffs of each?
A:
The canonical way is to use the built-in cursor iterator.
curs.execute('select * from people')
for row in curs:
print row
You can use fetchall() to get all rows at once.
for row in curs.fetchall():
print row
It can be convenient to use this to create a Python list containing the values returned:
curs.execute('select first_name from people')
names = [row[0] for row in curs.fetchall()]
This can be useful for smaller result sets, but can have bad side effects if the result set is large.
You have to wait for the entire result set to be returned to
your client process.
You may eat up a lot of memory in your client to hold
the built-up list.
It may take a while for Python to construct and deconstruct the
list which you are going to immediately discard anyways.
If you know there's a single row being returned in the result set you can call fetchone() to get the single row.
curs.execute('select max(x) from t')
maxValue = curs.fetchone()[0]
Finally, you can loop over the result set fetching one row at a time. In general, there's no particular advantage in doing this over using the iterator.
row = curs.fetchone()
while row:
print row
row = curs.fetchone()
A:
My preferred way is the cursor iterator, but setting first the arraysize property of the cursor.
curs.execute('select * from people')
curs.arraysize = 256
for row in curs:
print row
In this example, cx_Oracle will fetch rows from Oracle 256 rows at a time, reducing the number of network round trips that need to be performed
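If you prefer explicit control over the batches, fetchmany() exposes the same trade-off directly; a quick sketch:
curs.execute('select * from people')
while True:
    rows = curs.fetchmany(256)   # one network round trip per batch
    if not rows:
        break
    for row in rows:
        print row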
A:
There's also the way psyco-pg seems to do it... From what I gather, it seems to create dictionary-like row-proxies to map key lookup into the memory block returned by the query. In that case, fetching the whole answer and working with a similar proxy-factory over the rows seems like useful idea. Come to think of it though, it feels more like Lua than Python.
Also, this should be applicable to all PEP-249 DBAPI2.0 interfaces, not just Oracle, or did you mean just fastest using Oracle?
| cx_Oracle: How do I iterate over a result set? | There are several ways to iterate over a result set. What are the tradeoffs of each?
| [
"The canonical way is to use the built-in cursor iterator.\ncurs.execute('select * from people')\nfor row in curs:\n print row\n\n\nYou can use fetchall() to get all rows at once.\nfor row in curs.fetchall():\n print row\n\nIt can be convenient to use this to create a Python list containing the values returned:\ncurs.execute('select first_name from people')\nnames = [row[0] for row in curs.fetchall()]\n\nThis can be useful for smaller result sets, but can have bad side effects if the result set is large.\n\nYou have to wait for the entire result set to be returned to\nyour client process.\nYou may eat up a lot of memory in your client to hold\nthe built-up list.\nIt may take a while for Python to construct and deconstruct the\nlist which you are going to immediately discard anyways.\n\n\nIf you know there's a single row being returned in the result set you can call fetchone() to get the single row.\ncurs.execute('select max(x) from t')\nmaxValue = curs.fetchone()[0]\n\n\nFinally, you can loop over the result set fetching one row at a time. In general, there's no particular advantage in doing this over using the iterator.\nrow = curs.fetchone()\nwhile row:\n print row\n row = curs.fetchone()\n\n",
"My preferred way is the cursor iterator, but setting first the arraysize property of the cursor. \ncurs.execute('select * from people')\ncurs.arraysize = 256\nfor row in curs:\n print row\n\nIn this example, cx_Oracle will fetch rows from Oracle 256 rows at a time, reducing the number of network round trips that need to be performed\n",
"There's also the way psyco-pg seems to do it... From what I gather, it seems to create dictionary-like row-proxies to map key lookup into the memory block returned by the query. In that case, fetching the whole answer and working with a similar proxy-factory over the rows seems like useful idea. Come to think of it though, it feels more like Lua than Python.\nAlso, this should be applicable to all PEP-249 DBAPI2.0 interfaces, not just Oracle, or did you mean just fastest using Oracle?\n"
] | [
55,
27,
6
] | [] | [] | [
"cx_oracle",
"database",
"oracle",
"python",
"sql"
] | stackoverflow_0000000594_cx_oracle_database_oracle_python_sql.txt |
Q:
Adding New Element to Text Substring
Say I have the following string:
"I am the most foo h4ck3r ever!!"
I'm trying to write a makeSpecial(foo) function where the foo substring would be wrapped in a new span element, resulting in:
"I am the most <span class="special">foo></span> h4ck3r ever!!"
BeautifulSoup seemed like the way to go, but I haven't been able to make it work.
I could also pass this to the browser and do it with javascript, but that doesn't seem like a great idea.
Some advice for this would be really useful, especially in python.
A:
How about this:
Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> def makeSpecial(mystring, special_substr):
...     return mystring.replace(special_substr, '<span class="special">%s</span>' % special_substr)
...
>>> makeSpecial("I am the most foo h4ck3r ever!!", "foo")
'I am the most <span class="special">foo</span> h4ck3r ever!!'
>>>
A:
As far as I can tell, you're doing a simple string replace. You're replacing "foo" with "bar foo bar." So from string you could just use
replace(old, new[, count])
Return a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.
So for you it would be:
myStr.replace("foo", "<span>foo</span>")
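If only whole-word occurrences should be wrapped (so that, say, "food" stays untouched), re.sub with word boundaries is a safer variant; a sketch:
import re

def makeSpecial(mystring, word):
    pattern = r'\b%s\b' % re.escape(word)
    return re.sub(pattern, '<span class="special">%s</span>' % word, mystring)

print makeSpecial("I am the most foo h4ck3r ever!!", "foo")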
A:
If you wanted to do it with javascript/jQuery, take a look at this question: Highlight a word with jQuery
| Adding New Element to Text Substring | Say I have the following string:
"I am the most foo h4ck3r ever!!"
I'm trying to write a makeSpecial(foo) function where the foo substring would be wrapped in a new span element, resulting in:
"I am the most <span class="special">foo></span> h4ck3r ever!!"
BeautifulSoup seemed like the way to go, but I haven't been able to make it work.
I could also pass this to the browser and do it with javascript, but that doesn't seem like a great idea.
Some advice for this would be really useful, especially in python.
| [
"How about this:\nPython 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on\nwin32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> def makeSpecial(mystring, special_substr):\n... return mystring.replace(special_substr, '<span class=\"special\">%s</span>\n' % special_substr)\n...\n>>> makeSpecial(\"I am the most foo h4ck3r ever!!\", \"foo\")\n'I am the most <span class=\"special\">foo</span> h4ck3r ever!!'\n>>>\n\n",
"As far as I can tell, you're doing a simple string replace. You're replacing \"foo\" with \"bar foo bar.\" So from string you could just use \nreplace(old, new[, count]) \n\nReturn a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced. \nSo for you it would be:\nmyStr.replace(\"foo\", \"<span>foo</span>\") \n\n",
"If you wanted to do it with javascript/jQuery, take a look at this question: Highlight a word with jQuery\n"
] | [
3,
1,
0
] | [] | [] | [
"beautifulsoup",
"javascript",
"jquery",
"python"
] | stackoverflow_0000125102_beautifulsoup_javascript_jquery_python.txt |
Q:
Python library for rendering HTML and javascript
Is there any python module for rendering an HTML page with javascript and getting back a DOM object?
I want to parse a page which generates almost all of its content using javascript.
A:
The big complication here is emulating the full browser environment outside of a browser. You can use stand-alone javascript interpreters like Rhino and SpiderMonkey to run javascript code, but they don't provide a complete browser-like environment to fully render a web page.
If I needed to solve a problem like this I would first look at how the javascript is rendering the page; it's quite possible it's fetching data via AJAX and using that to render the page. I could then use Python libraries like simplejson and httplib2 to fetch the data directly and use that, negating the need to access the DOM object. However, that's only one possible situation; I don't know the exact problem you are solving.
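That direct-fetch approach usually comes down to a couple of lines; a sketch with a hypothetical endpoint:
import httplib2
import simplejson

http = httplib2.Http()
response, body = http.request("http://example.com/page/data.json")
data = simplejson.loads(body)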
Other options include the selenium one mentioned by Łukasz, some kind of webkit embedded craziness, some kind of IE win32 scripting craziness or, finally, a pyxpcom based solution (with added craziness). All these have the drawback of requiring pretty much a fully running web browser for python to play with, which might not be an option depending on your environment.
A:
You can probably use python-webkit for it. Requires a running glib and GTK, but that's probably less problematic than wrapping the parts of webkit without glib.
I don't know if it does everything you need, but I guess you should give it a try.
| Python library for rendering HTML and javascript | Is there any python module for rendering an HTML page with javascript and getting back a DOM object?
I want to parse a page which generates almost all of its content using javascript.
| [
"The big complication here is emulating the full browser environment outside of a browser. You can use stand alone javascript interpreters like Rhino and SpiderMonkey to run javascript code but they don't provide a complete browser like environment to full render a web page.\nIf I needed to solve a problem like this I would first look at how the javascript is rendering the page, it's quite possible it's fetching data via AJAX and using that to render the page. I could then use python libraries like simplejson and httplib2 to directly fetch the data and use that, negating the need to access the DOM object. However, that's only one possible situation, I don't know the exact problem you are solving.\nOther options include the selenium one mentioned by Łukasz, some kind of webkit embedded craziness, some kind of IE win32 scripting craziness or, finally, a pyxpcom based solution (with added craziness). All these have the drawback of requiring pretty much a fully running web browser for python to play with, which might not be an option depending on your environment.\n",
"You can probably use python-webkit for it. Requires a running glib and GTK, but that's probably less problematic than wrapping the parts of webkit without glib.\nI don't know if it does everything you need, but I guess you should give it a try.\n"
] | [
8,
1
] | [] | [] | [
"html",
"javascript",
"python"
] | stackoverflow_0000126131_html_javascript_python.txt |
Q:
How do I develop and create a self-contained PyGTK application bundle for MacOS, with native-looking widgets?
I have read that it is possible to build GTK+ on MacOS X. I know that it's possible to create a bundle of a GTK+ application on MacOS. I also know that it's possible to create widgets that look sort of native. However, searching around I am not really clear on how to create a bundle that includes the native theme stuff, and uses Python rather than its own C main-point. There are also rumors that it's possible to build PyGTK, but it sounds like there might still be some wrinkles in that process.
However, there is no step-by-step guide that explains how one can set up an environment where an application might be run from Python source, then built and deployed in an app bundle. How can I go about doing that?
A:
Native looking widgets is quite complicated.
There's the beginning of a quartz engine (for theming), found here: http://git.gnome.org/browse/gtk+/tree/gdk/quartz
For self-contained applications check out the newly released bundle on http://live.gnome.org/GTK%2B/OSX
A:
I'm not sure if I'm grokking all the details of your question, but looking at your problem in general (how do I deploy a python app on mac), I'm inclined to say that the answer is py2app. Basically this will bundle a python interpreter and all relevant python files for you, and give you a scriptable system that you can use to add in whatever other resources/dependencies you need.
A:
While it's not a guide solely targetted at python/GTK+/OS X, this post is a good, detailed description of someone else's attempt to do most of what you describe. Obviously, the app-specific stuff is going to vary.
| How do I develop and create a self-contained PyGTK application bundle for MacOS, with native-looking widgets? | I have read that it is possible to build GTK+ on MacOS X. I know that it's possible to create a bundle of a GTK+ application on MacOS. I also know that it's possible to create widgets that look sort of native. However, searching around I am not really clear on how to create a bundle that includes the native theme stuff, and uses Python rather than its own C main-point. There are also rumors that it's possible to build PyGTK, but it sounds like there might still be some wrinkles in that process.
However, there is no step-by-step guide that explains how one can set up an environment where an application might be run from Python source, then built and deployed in an app bundle. How can I go about doing that?
| [
"Native looking widgets is quite complicated.\nThere's a beginning of quartz engine (for theming) found here http://git.gnome.org/browse/gtk+/tree/gdk/quartz\nFor self-contained applications check out the newly released bundle on http://live.gnome.org/GTK%2B/OSX\n",
"I'm not sure if I'm grokking all the details of your question, but looking at your problem in general (how do I deploy a python app on mac), I'm inclined to say that the answer is py2app. Basically this will bundle a python interpreter and all relevant python files for you, and give you a scriptable system that you can use to add in whatever other resources/dependencies you need. \n",
"While it's not a guide solely targetted at python/GTK+/OS X, this post is a good, detailed description of someone else's attempt to do most of what you describe. Obviously, the app-specific stuff is going to vary.\n"
] | [
3,
1,
1
] | [] | [] | [
"gtk",
"macos",
"pygtk",
"python"
] | stackoverflow_0000118138_gtk_macos_pygtk_python.txt |
Q:
How do I create a new signal in pygtk
I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object.
A:
You can also define signals inside the class definition:
class MyGObjectClass(gobject.GObject):
__gsignals__ = {
"some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
}
The contents of the tuple are the same as the last three arguments to gobject.signal_new.
A:
Here is how:
import gobject
class MyGObjectClass(gobject.GObject):
...
gobject.signal_new("signal-name", MyGObjectClass, gobject.SIGNAL_RUN_FIRST,
None, (str, int))
Where the second to last argument is the return type and the last argument is a tuple of argument types.
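Once registered, the signal connects and emits like any built-in GObject signal; a quick sketch matching the (str, int) signature above:
obj = MyGObjectClass()

def on_signal(source, text, number):
    print "got %r and %d from %r" % (text, number, source)

obj.connect("signal-name", on_signal)
obj.emit("signal-name", "hello", 42)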
A:
If you use kiwi available here you can just do:
from kiwi.utils import gsignal
class MyObject(gobject.GObject):
gsignal('signal-name')
| How do I create a new signal in pygtk | I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object.
| [
"You can also define signals inside the class definition:\nclass MyGObjectClass(gobject.GObject):\n __gsignals__ = {\n \"some-signal\": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),\n }\n\nThe contents of the tuple are the the same as the three last arguments to gobject.signal_new.\n",
"Here is how:\nimport gobject\n\nclass MyGObjectClass(gobject.GObject):\n ...\n\ngobject.signal_new(\"signal-name\", MyGObjectClass, gobject.SIGNAL_RUN_FIRST,\n None, (str, int))\n\nWhere the second to last argument is the return type and the last argument is a tuple of argument types.\n",
"If you use kiwi available here you can just do:\nfrom kiwi.utils import gsignal\n\nclass MyObject(gobject.GObject):\n gsignal('signal-name')\n\n"
] | [
11,
4,
2
] | [] | [] | [
"gobject",
"gtk",
"pygtk",
"python"
] | stackoverflow_0000066730_gobject_gtk_pygtk_python.txt |
Q:
"cannot find -lpq" when trying to install psycopg2
Intro: I'm trying to migrate our Trac SQLite to a PostgreSQL backend, to do that I need psycopg2. After clicking past the embarrassing rant on www.initd.org I downloaded the latest version and tried running setup.py install. This didn't work, telling me I needed mingw. So I downloaded and installed mingw.
Problem: I now get the following error when running setup.py build_ext --compiler=mingw32 install:
running build_ext
building 'psycopg2._psycopg' extension
writing build\temp.win32-2.4\Release\psycopg\_psycopg.def
C:\mingw\bin\gcc.exe -mno-cygwin -shared -s build\temp.win32-2.4\Release\psycopg
\psycopgmodule.o build\temp.win32-2.4\Release\psycopg\pqpath.o build\temp.win32-
2.4\Release\psycopg\typecast.o build\temp.win32-2.4\Release\psycopg\microprotoco
ls.o build\temp.win32-2.4\Release\psycopg\microprotocols_proto.o build\temp.win3
2-2.4\Release\psycopg\connection_type.o build\temp.win32-2.4\Release\psycopg\con
nection_int.o build\temp.win32-2.4\Release\psycopg\cursor_type.o build\temp.win3
2-2.4\Release\psycopg\cursor_int.o build\temp.win32-2.4\Release\psycopg\lobject_
type.o build\temp.win32-2.4\Release\psycopg\lobject_int.o build\temp.win32-2.4\R
elease\psycopg\adapter_qstring.o build\temp.win32-2.4\Release\psycopg\adapter_pb
oolean.o build\temp.win32-2.4\Release\psycopg\adapter_binary.o build\temp.win32-
2.4\Release\psycopg\adapter_asis.o build\temp.win32-2.4\Release\psycopg\adapter_
list.o build\temp.win32-2.4\Release\psycopg\adapter_datetime.o build\temp.win32-
2.4\Release\psycopg\_psycopg.def -LC:\Python24\libs -LC:\Python24\PCBuild -Lc:/P
ROGRA~1/POSTGR~1/8.3/lib -lpython24 -lmsvcr71 -lpq -lmsvcr71 -lws2_32 -ladvapi32
-o build\lib.win32-2.4\psycopg2\_psycopg.pyd
C:\mingw\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot fin
d -lpq
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
What I've tried - I noticed the forward slashes in the -L option, so I manually entered my PostgreSQL lib directory in the library_dirs option in the setup.cfg, to no avail (the call then had a -L option with backslashes, but the error message stayed the same).
A:
Have you tried the binary build of psycopg2 for windows? If that works with your python then it mitigates the need to build by hand.
I've seen random people ask this question on various lists and it seems one recommendation is to build postgresql by hand to work around this problem.
A:
Compiling extensions on windows can be tricky. There are precompiled libraries available however: http://www.stickpeople.com/projects/python/win-psycopg/
| "cannot find -lpq" when trying to install psycopg2 | Intro: I'm trying to migrate our Trac SQLite to a PostgreSQL backend, to do that I need psycopg2. After clicking past the embarrassing rant on www.initd.org I downloaded the latest version and tried running setup.py install. This didn't work, telling me I needed mingw. So I downloaded and installed mingw.
Problem: I now get the following error when running setup.py build_ext --compiler=mingw32 install:
running build_ext
building 'psycopg2._psycopg' extension
writing build\temp.win32-2.4\Release\psycopg\_psycopg.def
C:\mingw\bin\gcc.exe -mno-cygwin -shared -s build\temp.win32-2.4\Release\psycopg
\psycopgmodule.o build\temp.win32-2.4\Release\psycopg\pqpath.o build\temp.win32-
2.4\Release\psycopg\typecast.o build\temp.win32-2.4\Release\psycopg\microprotoco
ls.o build\temp.win32-2.4\Release\psycopg\microprotocols_proto.o build\temp.win3
2-2.4\Release\psycopg\connection_type.o build\temp.win32-2.4\Release\psycopg\con
nection_int.o build\temp.win32-2.4\Release\psycopg\cursor_type.o build\temp.win3
2-2.4\Release\psycopg\cursor_int.o build\temp.win32-2.4\Release\psycopg\lobject_
type.o build\temp.win32-2.4\Release\psycopg\lobject_int.o build\temp.win32-2.4\R
elease\psycopg\adapter_qstring.o build\temp.win32-2.4\Release\psycopg\adapter_pb
oolean.o build\temp.win32-2.4\Release\psycopg\adapter_binary.o build\temp.win32-
2.4\Release\psycopg\adapter_asis.o build\temp.win32-2.4\Release\psycopg\adapter_
list.o build\temp.win32-2.4\Release\psycopg\adapter_datetime.o build\temp.win32-
2.4\Release\psycopg\_psycopg.def -LC:\Python24\libs -LC:\Python24\PCBuild -Lc:/P
ROGRA~1/POSTGR~1/8.3/lib -lpython24 -lmsvcr71 -lpq -lmsvcr71 -lws2_32 -ladvapi32
-o build\lib.win32-2.4\psycopg2\_psycopg.pyd
C:\mingw\bin\..\lib\gcc\mingw32\3.4.5\..\..\..\..\mingw32\bin\ld.exe: cannot fin
d -lpq
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
What I've tried - I noticed the forward slashes in the -L option, so I manually entered my PostgreSQL lib directory in the library_dirs option in the setup.cfg, to no avail (the call then had a -L option with backslashes, but the error message stayed the same).
| [
"Have you tried the binary build of psycopg2 for windows? If that works with your python then it mitigates the need to build by hand.\nI've seen random people ask this question on various lists and it seems one recommendation is to build postgresql by hand to work around this problem.\n",
"Compiling extensions on windows can be tricky. There are precompiled libraries available however: http://www.stickpeople.com/projects/python/win-psycopg/\n"
] | [
2,
1
] | [] | [] | [
"postgresql",
"python",
"trac"
] | stackoverflow_0000126364_postgresql_python_trac.txt |
Q:
Checking for code changes in all imported python modules
Almost every Python web framework has a simple server that runs a wsgi application and automatically reloads the imported modules every time the source gets changed. I know I can look at the code and see how it's done, but that may take some time and I'm asking just out of curiosity. Does anyone have any idea how this is implemented?
A:
As the author of one of the reloader mechanisms (the one in werkzeug) I can tell you that it doesn't work. What all the reloaders do is forking one time and restarting the child process if a monitor thread notices that one module changed on the file system.
Inline reload()ing doesn't work because references to the reloaded module are not updated.
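The pattern itself is small: a parent process re-runs the script whenever the child exits with a special code, and the child polls module mtimes. A simplified sketch of the child side (not werkzeug's actual code):
import os
import sys
import time
import threading

def run_with_reloader(main_func, interval=1.0):
    # Run the real work in a daemon thread; watch files in the main one.
    worker = threading.Thread(target=main_func)
    worker.setDaemon(True)
    worker.start()

    mtimes = {}
    while True:
        for module in sys.modules.values():
            path = getattr(module, '__file__', None)
            if not path:
                continue
            if path[-4:] in ('.pyc', '.pyo'):
                path = path[:-1]
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue
            old = mtimes.setdefault(path, mtime)
            if mtime > old:
                sys.exit(3)   # the parent sees exit code 3 and re-spawns us
        time.sleep(interval)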
A:
reload() does not work. "Reloading" is usually implemented by forking.
Implementing "real" reload() is extremely difficult and even the most serious attempt, twisted.python.rebuild isn't perfect.
| Checking for code changes in all imported python modules | Almost every Python web framework has a simple server that runs a wsgi application and automatically reloads the imported modules every time the source gets changed. I know I can look at the code and see how it's done, but that may take some time and I'm asking just out of curiosity. Does anyone have any idea how this is implemented?
| [
"As the author of one of the reloader mechanisms (the one in werkzeug) I can tell you that it doesn't work. What all the reloaders do is forking one time and restarting the child process if a monitor thread notices that one module changed on the file system.\nInline reload()ing doesn't work because references to the reloaded module are not updated.\n",
"reload() does not work. \"Reloading\" is usually implemented by forking.\nImplementing \"real\" reload() is extremely difficult and even the most serious attempt, twisted.python.rebuild isn't perfect.\n"
] | [
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0000126787_python.txt |
Q:
How can i move an object drawn in device context python
I have drawn an image in the device context using Python, and I want to move it smoothly/animate it either vertically or horizontally.
What algorithm should I use? Where can I get info for these kinds of tasks in Python?
A:
Once an object is drawn in a device context it stays there. If you want to move it you need to redraw it.
You can keep a background that's fixed and only redraw the movable elements each time they move. Basically that's how it's done.
To move an object smoothly over a line you have to do something like this (I don't have a program ready, so can only give you an idea):
choose the start and end position: point A(x1, y1) and B(x2, y2)
choose in how much time the object should change position from A to B (say 10 seconds).
use a timer set to a certain interval (say 2 seconds)
calculate the delta X and Y that the object should change for each timer interval. In this case dx = (x2-x1)*2/10 and dy = (y2-y1)*2/10
in the timer callback increment the current object position with dx and dy and redraw the image
That would be the algorithm.
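In code, those steps come down to something like this toolkit-neutral sketch (redraw() and the timer hookup are hypothetical and belong to whichever GUI library owns the device context):
class Mover(object):
    def __init__(self, start, end, total_time, interval):
        self.x, self.y = start
        self.steps_left = int(total_time / interval)          # e.g. 10/2 = 5
        self.dx = (end[0] - start[0]) / float(self.steps_left)
        self.dy = (end[1] - start[1]) / float(self.steps_left)

    def on_timer(self):
        # Called by the toolkit's timer every `interval` seconds.
        self.x += self.dx
        self.y += self.dy
        self.steps_left -= 1
        redraw(self.x, self.y)       # erase and repaint at the new position
        return self.steps_left > 0   # returning False stops the timer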
I suggest that you also take a look at PyGame. Maybe you can use that, and it also has some tutorials.
A:
To smoothly move object between starting coordinate (x1, y1) and destination coordinate (x2,y2), you need to first ask yourself, how long the object should take to get to its destination. Lets say you want the object to get there in t time units (which maybe seconds, hours, whatever). Once you have determined this it is then trivial to workout the displacement per unit time:
dx = (x2-x1)/t
dy = (y2-y1)/t
Now you simply need to add (dx,dy) to the object's position ((x,y), initially (x1,y1)) every unit time, and stop when the object gets within some threshold distance of the destination. This is to account for the fact that errors in the divisions will accumulate, so if you did an equality check like:
(x,y)==(x2,y2)
It is unlikely it will ever be true.
Note the above method gives you constant velocity, straight line movement. You may wish to instead use some sort a slightly more complex formula to give the object the appearance of accelerating, maintaining cruise speed, then decelerating. The following formulae may then be useful:
v(t+dt) = v(t) + dt*a(t)
x(t+dt) = x(t) + dt*v(t)
This is merely Euler's method, and should suffice for animation purposes.
| How can i move an object drawn in device context python | I have drawn an image in the device context using python and I want to move it smoothly/animate either vertically or horizontally.
What algorithm should I use? Where can I get info for these kinds of tasks in Python?
| [
"Once an object is drawn in a device context it stays there. If you want to move it you need to redraw it.\nYou can keep a background that's fixed and only redraw the movable elements each time they move. Basically that's how it's done.\nTo move an object smoothly over a line you have to do something like this (I don't have a program ready, so can only give you an idea):\n\nchoose the start and end position: point A(x1, y1) and B(x2, y2)\nchoose in how much time the object should change position from A to B (say 10 seconds).\nuse a timer set to a certain interval (say 2 seconds)\ncalculate the delta X and Y that the object should change for each timer interval. In this case dx = (x2-x1)*2/10 and dy = (y2-y1)*2/10\nin the timer callback increment the current object position with dx and dy and redraw the image\n\nThat would be the algorithm.\nI suggest that you also take a look to PyGame. Maybe you can use that and it also has some tutorials.\n",
"To smoothly move object between starting coordinate (x1, y1) and destination coordinate (x2,y2), you need to first ask yourself, how long the object should take to get to its destination. Lets say you want the object to get there in t time units (which maybe seconds, hours, whatever). Once you have determined this it is then trivial to workout the displacement per unit time:\ndx = (x2-x1)/t\ndy = (y2-y1)/t\n\nNow you simply need to add (dx,dy) to the object's position ((x,y), initially (x1,y1)) every unit time, and stop when the object gets within some threshold distance of the destination. This is to account for the fact errors in divisions will accumulate, so if you did an equality check like: \n(x,y)==(x2,y2)\n\nIt is unlikely it will ever be true. \nNote the above method gives you constant velocity, straight line movement. You may wish to instead use some sort a slightly more complex formula to give the object the appearance of accelerating, maintaining cruise speed, then decelerating. The following formulae may then be useful:\nv(t) = u(t) + t*a(t)\nx(t) = v(t) + t*v(t)\n\nThis is merely Euler's method, and should suffice for animation purposes. \n"
] | [
1,
0
] | [] | [] | [
"animation",
"python"
] | stackoverflow_0000126738_animation_python.txt |
Q:
What is the best way to store set data in Python?
I have a list of data in the following form:
[(id_1, description, id_type), (id_2, description, id_type), ..., (id_n, description, id_type)]
The data are loaded from files that belong to the same group. In each group there could be multiples of the same id, each coming from different files. I don't care about the duplicates, so I thought that a nice way to store all of this would be to throw it into a Set type. But there's a problem.
Sometimes for the same id the descriptions can vary slightly, as follows:
IPI00110753
Tubulin alpha-1A chain
Tubulin alpha-1 chain
Alpha-tubulin 1
Alpha-tubulin isotype M-alpha-1
(Note that this example is taken from the uniprot protein database.)
I don't care if the descriptions vary. I cannot throw them away because there is a chance that the protein database I am using will not contain a listing for a certain identifier. If this happens I will want to be able to display the human readable description to the biologists so they know roughly what protein they are looking at.
I am currently solving this problem by using a dictionary type. However I don't really like this solution because it uses a lot of memory (I have a lot of these ID's). This is only an intermediary listing of them. There is some additional processing the ID's go through before they are placed in the database so I would like to keep my data-structure smaller.
I have two questions really. First, will I get a smaller memory footprint using the Set type (over the dictionary type) for this, or should I use a sorted list where I check every time I insert into the list to see if the ID exists, or is there a third solution that I haven't thought of? Second, if the Set type is the better answer how do I key it to look at just the first element of the tuple instead of the whole thing?
Thank you for reading my question,
Tim
Update
Based on some of the comments I received, let me clarify a little. Most of what I do with the data-structure is insert into it. I only read it twice, once to annotate it with additional information,* and once when it is inserted into the database. However down the line there may be additional annotation that is done before I insert into the database. Unfortunately I don't know if that will happen at this time.
Right now I am looking into storing this data in a structure that is not based on a hash-table (i.e. a dictionary). I would like the new structure to be fairly quick on insertion, but reading it can be linear since I only really do it twice. I am trying to move away from the hash table to save space. Is there a better structure, or is a hash-table about as good as it gets?
*The information is a list of Swiss-Prot protein identifiers that I get by querying uniprot.
A:
Sets don't have keys. The element is the key.
If you think you want keys, you have a mapping. More-or-less by definition.
Sequential list lookup can be slow, even using a binary search. Mappings use hashes and are fast.
Are you talking about a dictionary like this?
{ 'id1': [ ('description1a', 'type1'), ('description1b','type1') ],
'id2': [ ('description2', 'type2') ],
...
}
This sure seems minimal. ID's are only represented once.
Perhaps you have something like this?
{ 'id1': ( ('description1a', 'description1b' ), 'type1' ),
'id2': ( ('description2',), 'type2' ),
...
}
I'm not sure you can find anything more compact unless you resort to using the struct module.
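Building the second shape incrementally while the files are read is straightforward with setdefault; a sketch (parsed_rows stands in for your file-parsing loop, and a list replaces the inner tuple so descriptions can be appended):
records = {}   # id -> ([descriptions], id_type)

for id_, description, id_type in parsed_rows:
    descriptions, _ = records.setdefault(id_, ([], id_type))
    if description not in descriptions:
        descriptions.append(description)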
A:
I'm assuming the problem you're trying to solve by cutting down on the memory you use is the address space limit of your process. Additionally, you're searching for a data structure that allows fast insertion and reasonable sequential read-out.
Use few structures other than strings (str)
The question you ask is how to structure your data in one process to use less memory. The one canonical answer to this is (as long as you still need associative lookups) to use as few structures other than Python strings (str, not unicode) as possible. A Python hash (dictionary) stores the references to your strings fairly efficiently (it is not a b-tree implementation).
However I think that you will not get very far with that approach, since what you face are huge datasets that might eventually just exceed the process address space and the physical memory of the machine you're working with altogether.
Alternative Solution
I would propose a different solution that does not involve changing your data structure to something that is harder to insert into or interpret.
Split your information up into multiple processes, each holding whatever data structure is convenient for that.
Implement inter-process communication with sockets, such that processes might reside on other machines altogether.
Try to divide your data so as to minimize inter-process communication (I/O is glacially slow compared to CPU cycles).
The advantage of the approach I outline is that
You get to use two or more cores on a machine fully for performance
You are not limited by the address space of one process, or even the physical memory of one machine
There are numerous packages and approaches to distributed processing, some of which are:
linda
processing
A:
If you're doing an n-way merge with duplicate removal, the following may be what you're looking for.
This generator will merge any number of sources. Each source must be a sequence.
The key must be in position 0. It yields the merged sequence one item at a time.
def merge( *sources ):
keyPos= 0
for s in sources:
s.sort()
while any( [len(s)>0 for s in sources] ):
topEnum= enumerate([ s[0][keyPos] if len(s) > 0 else None for s in sources ])
top= [ t for t in topEnum if t[1] is not None ]
top.sort( key=lambda a:a[1] )
src, key = top[0]
#print src, key
yield sources[ src ].pop(0)
This generator removes duplicates from a sequence.
def unique( sequence ):
keyPos= 0
seqIter= iter(sequence)
curr= seqIter.next()
for next in seqIter:
if next[keyPos] == curr[keyPos]:
# might want to create a sub-list of matches
continue
yield curr
curr= next
yield curr
Here's a script which uses these functions to produce a resulting sequence which is the union of all the sources with duplicates removed.
for u in unique( merge( source1, source2, source3, ... ) ):
print u
The complete set of data in each sequence must exist in memory once because we're sorting in memory. However, the resulting sequence does not actually exist in memory. Indeed, it works by consuming the other sequences.
A:
How about using {id: (description, id_type)} dictionary? Or {(id, id_type): description} dictionary if (id,id_type) is the key.
A:
Sets in Python are implemented using hash tables. In earlier versions they were actually implemented on top of dictionaries (the old sets module), but that has changed AFAIK. The only thing you save by using a set would then be the size of a pointer for each entry (the pointer to the value).
To use only part of a tuple for the hash code, you'd have to subclass tuple and override the __hash__ method:
class ProteinTuple(tuple):
def __new__(cls, m1, m2, m3):
return tuple.__new__(cls, (m1, m2, m3))
def __hash__(self):
return hash(self[0])
Keep in mind that you pay for the extra function call to __hash__ in this case, because otherwise it would be a C method.
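One caveat worth adding (this is an addition, not part of the original answer): a set compares candidate elements with == as well as by hash, so for two ProteinTuples sharing an id to be treated as duplicates you would also need an __eq__ consistent with the __hash__ above:
    def __eq__(self, other):
        # compare by id only, consistent with __hash__
        return self[0] == other[0]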
I'd go for Constantin's suggestions and take out the id from the tuple and see how much that helps.
A:
It's still murky, but it sounds like you have several lists of [(id, description, type)...]
The id's are unique within a list and consistent between lists.
You want to create a UNION: a single list, where each id occurs once, with possibly multiple descriptions.
For some reason, you think a mapping might be too big. Do you have any evidence of this? Don't over-optimize without actual measurements.
This may be (if I'm guessing correctly) the standard "merge" operation from multiple sources.
source1.sort()
source2.sort()
result= []
while len(source1) > 0 or len(source2) > 0:
if len(source1) == 0:
result.append( source2.pop(0) )
elif len(source2) == 0:
result.append( source1.pop(0) )
elif source1[0][0] < source2[0][0]:
result.append( source1.pop(0) )
elif source2[0][0] < source1[0][0]:
result.append( source2.pop(0) )
    else:
        # keys are equal: keep one copy and drop the duplicate
        result.append( source1.pop(0) )
        # also pop source2's copy; inspect it here if the description may differ
        source2.pop(0)
This assembles a union of two lists by sorting and merging. No mapping, no hash.
Q:
Getting Python to use the ActiveTcl libraries
Is there any way to get Python to use my ActiveTcl installation instead of having to copy the ActiveTcl libraries into the Python/tcl directory?
A:
Not familiar with ActiveTcl, but in general here is how to get a package/module to be loaded when that name already exists in the standard library:
import sys
dir_name="/usr/lib/mydir"
sys.path.insert(0,dir_name)
Substitute the value for dir_name with the path to the directory containing your package/module, and run the above code before anything is imported. This is often done through a 'sitecustomize.py' file so that it will take effect as soon as the interpreter starts up so you won't need to worry about import ordering.
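For Tcl specifically, it may also help to point the Tcl runtime at your ActiveTcl installation via the TCL_LIBRARY and TK_LIBRARY environment variables before importing Tkinter (the paths below are hypothetical; adjust them to your install):
import os
os.environ['TCL_LIBRARY'] = r'C:\Tcl\lib\tcl8.4'  # hypothetical ActiveTcl path
os.environ['TK_LIBRARY'] = r'C:\Tcl\lib\tk8.4'

import Tkinter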
Q:
Example Facebook Application using TurboGears -- pyFacebook
I have a TurboGears application I'd like to run through Facebook, and am looking for an example TurboGears project using pyFacebook or minifb.py. pyFacebook is Django-centric, and I can probably figure it out, but this is, after all, the lazy web.
A:
Why is pyFacebook django centric? Looks like it works perfectly fine with all kinds of WSGI apps or Python applications in general. No need to use Django.
A:
pyFacebook is Django-centric because it includes a Django example. I did not intend to irk, but am merely looking for a TurboGears example using pyFacebook.
Q:
Do I have to cause a ValueError in Python
I have this code:
chars = #some list
try:
    indx = chars.index(char) # look up some element
except ValueError:
#doSomething
else:
#doSomethingElse
I want to be able to do this because I don't like knowingly causing exceptions:
chars = #some list
indx = chars.index(char)
if indx == -1:
#doSomething
else:
#doSomethingElse
Is there a way I can do this?
A:
Note that the latter approach is going against the generally accepted "pythonic" philosophy of EAFP, or "It is Easier to Ask for Forgiveness than Permission.", while the former follows it.
A:
if element in mylist:
index = mylist.index(element)
# ... do something
else:
# ... do something else
A:
For the specific case where your list is a sequence of single-character strings, you can get what you want by joining the list to be searched into a string in advance (e.g. ''.join(chars)).
You can then use the .find() method, which does work as you want. However, there's no corresponding method for lists or tuples.
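For instance (a tiny sketch with made-up values):
chars = ['a', 'b', 'c']
s = ''.join(chars)
print s.find('b')   # 1
print s.find('z')   # -1, not found; no exception raised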
Another possible option is to use a dictionary instead, e.g.
d = dict((x, loc) for (loc,x) in enumerate(chars))
...
index = d.get(chars_to_find, -1) # Second argument is default if not found.
This may also perform better if you're doing a lot of searches on the list. If it's just a single search on a throwaway list, though, it's not worth doing.
Q:
Contributing to Python
I'm a pretty inexperienced programmer (can make tk apps, text processing, sort of understand oop), but Python is so awesome that I would like to help the community. What's the best way for a beginner to contribute?
A:
Add to the docs. It is downright crappy
Help out other users on the dev and user mailing lists.
TEST PYTHON. Bugs in programming languages are really bad. And I have seen someone discover at least one bug in Python
Frequent the #python channel on irc.freenode.net
A:
Build something cool in Python and share it with others. Small values of cool are still cool. Not everyone gets to write epic, world-changing software.
Every problem solved well using Python is a way of showing how cool Python is.
A:
I guess one way would be to help with documentation (translation, updating), until you are aware enough about the language. Also following the devs and users mail groups would give you a pretty good idea of what is being done and needs to be done by the community.
A:
I see two ways of going about it: working on Python directly or working on something that utilizes Python
Since you're a beginner, you're probably hesitant to work on the core Python language or feel that you can't contribute in a meaningful way, which is understandable. However, as a beginner, you're in a good position to help improve documentation and other items that are essential to learning Python.
For example, the Python tutorial is less of a tutorial (in the standard sense) and more of a feature listing, at least in my opinion. When I tried to learn from it, I never got the feeling that I was building up my knowledge, like creating an application. It felt more like I was being shown all the parts that make up Python but not how to put them together into a cohesive structure.
Once I became more comfortable with the language (mostly through books and lots of practice), I eventually wrote my own tutorial, trying to provide not only the technical information but also lessons learned and "newbie gotchas".
Alternatively, you can contribute to the Python world by using Python in programs. You can contribute to projects already established, e.g. Django, PyGame, etc., or you can make your own program to "scratch an itch". Either way, you not only build your knowledge of Python but you are giving back to the community.
Finally, you can become an advocate of Python, encouraging others to learn the language. I kept suggesting to my supervisor at my last job to use Python rather than Java when considering what to use for a new project. I tell everyone I know about the joys of Python and encourage them to give it a try. I convinced the administrator of a computer forum I frequent to create a section for Python. And, as I already said, I wrote a tutorial for Python and I'm working on a new one for wxPython.
There are many ways you can contribute to Python that aren't necessarily programming related. As your programming skills grow, you may want to move further into code contributions. But you may gain more satisfaction by helping others find the same joy you found in Python.
A:
If you aren't up to actually working on the Python core, there are still many ways to contribute. Two that immediately come to mind are:
Work on documentation: it can ALWAYS be improved. Take your favorite modules, check out the documentation, and add where you can.
Reporting descriptive bugs is very helpful to the development process.
A:
Get involved with the community: http://www.python.org/dev/
A:
Start by contributing to a Python project that you use and enjoy. This can be as simple as answering questions on the mailing list or IRC channel, offering to help with documentation and test writing or fixing bugs.
Q:
Generic Exception Handling in Python the "Right Way"
Sometimes I find myself in the situation where I want to execute several sequential commands like this:
try:
foo(a, b)
except Exception, e:
baz(e)
try:
bar(c, d)
except Exception, e:
baz(e)
...
This same pattern occurs when exceptions simply need to be ignored.
This feels redundant and the excessive syntax causes it to be surprisingly difficult to follow when reading code.
In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight python.
Question: How can I best reduce the code footprint and increase code readability when coming across this pattern?
A:
You could use the with statement if you have python 2.5 or above:
from __future__ import with_statement
import contextlib
@contextlib.contextmanager
def handler():
try:
yield
except Exception, e:
baz(e)
Your example now becomes:
with handler():
foo(a, b)
with handler():
bar(c, d)
A:
If this is always, always the behaviour you want when a particular function raises an exception, you could use a decorator:
def handle_exception(handler):
def decorate(func):
def call_function(*args, **kwargs):
try:
func(*args, **kwargs)
except Exception, e:
handler(e)
return call_function
return decorate
def baz(e):
print(e)
@handle_exception(baz)
def foo(a, b):
return a + b
@handle_exception(baz)
def bar(c, d):
return c.index(d)
Usage:
>>> foo(1, '2')
unsupported operand type(s) for +: 'int' and 'str'
>>> bar('steve', 'cheese')
substring not found
A:
If they're simple one-line commands, you can wrap them in lambdas:
for cmd in [
(lambda: foo (a, b)),
(lambda: bar (c, d)),
]:
try:
cmd ()
except StandardError, e:
baz (e)
You could wrap that whole thing up in a function, so it looked like this:
ignore_errors (baz, [
(lambda: foo (a, b)),
(lambda: bar (c, d)),
])
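For completeness, here's a minimal sketch of the ignore_errors helper used above (it isn't defined in the original answer):
def ignore_errors (handler, commands):
    for cmd in commands:
        try:
            cmd ()
        except StandardError, e:
            handler (e)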
A:
The best approach I have found is to define a function like this:
def handle_exception(function, reaction, *args, **kwargs):
try:
result = function(*args, **kwargs)
except Exception, e:
result = reaction(e)
return result
But that just doesn't feel or look right in practice:
handle_exception(foo, baz, a, b)
handle_exception(bar, baz, c, d)
A:
You could try something like this. This is vaguely C macro-like.
class TryOrBaz( object ):
def __init__( self, that ):
self.that= that
def __call__( self, *args ):
try:
return self.that( *args )
except Exception, e:
baz( e )
TryOrBaz( foo )( a, b )
TryOrBaz( bar )( c, d )
Q:
Reading 32bit Packed Binary Data On 64bit System
I'm attempting to write a Python C extension that reads packed binary data (it is stored as structs of structs) and then parses it out into Python objects. Everything works as expected on a 32 bit machine (the binary files are always written on 32bit architecture), but not on a 64 bit box. Is there a "preferred" way of doing this?
It would be a lot of code to post but as an example:
struct
{
WORD version;
BOOL upgrade;
time_t time1;
time_t time2;
} apparms;
File *fp;
fp = fopen(filePath, "r+b");
fread(&apparms, sizeof(apparms), 1, fp);
return Py_BuildValue("{s:i,s:l,s:l}",
"sysVersion",apparms.version,
"powerFailTime", apparms.time1,
"normKitExpDate", apparms.time2
);
Now on a 32 bit system this works great, but on a 64 bit system my time_t sizes are different (32 bit vs 64 bit longs).
Damn, you people are fast.
Patrick, I originally started using the struct package but found it just way too slow for my needs. Plus I was looking for an excuse to write a Python extension.
I know this is a stupid question but what types do I need to watch out for?
Thanks.
A:
Explicitly specify that your data types (e.g. integers) are 32-bit. Otherwise if you have two integers next to each other when you read them they will be read as one 64-bit integer.
When you are dealing with cross-platform issues, the two main things to watch out for are:
Bitness. If your packed data is written with 32-bit ints, then all of your code must explicitly specify 32-bit ints when reading and writing.
Byte order. If you move your code from Intel chips to PPC or SPARC, your byte order will be wrong. You will have to import your data and then byte-flip it so that it matches up with the current architecture. Otherwise 12 (0x0000000C) will be read as 201326592 (0x0C000000).
Hopefully this helps.
A:
The 'struct' module should be able to do this, although alignment of structs in the middle of the data is always an issue. It's not very hard to get it right, however: find out (once) what boundary the structs-in-structs align to, then pad (manually, with the 'x' specifier) to that boundary. You can doublecheck your padding by comparing struct.calcsize() with your actual data. It's certainly easier than writing a C extension for it.
In order to keep using Py_BuildValue() like that, you have two options. You can determine the size of time_t at compiletime (in terms of fundamental types, so 'an int' or 'a long' or 'an ssize_t') and then use the right format character to Py_BuildValue -- 'i' for an int, 'l' for a long, 'n' for an ssize_t. Or you can use PyInt_FromSsize_t() manually, in which case the compiler does the upcasting for you, and then use the 'O' format characters to pass the result to Py_BuildValue.
A:
You need to make sure you're using architecture independent members for your struct. For instance an int may be 32 bits on one architecture and 64 bits on another. As others have suggested, use the int32_t style types instead. If your struct contains unaligned members, you may need to deal with padding added by the compiler too.
Another common problem with cross architecture data is endianness. Intel i386 architecture is little-endian, but if you're reading on a completely different machine (e.g. an Alpha or Sparc), you'll have to worry about this too.
The Python struct module deals with both these situations, using the prefix passed as part of the format string.
@ - Use native size, endianness and alignment. i= sizeof(int), l= sizeof(long)
= - Use native endianness, but standard sizes and alignment (i=32 bits, l=64 bits)
< - Little-endian standard sizes/alignment
> - Big-endian standard sizes/alignment
In general, if the data passes off your machine, you should nail down the endianness and the size / padding format to something specific — ie. use "<" or ">" as your format. If you want to handle this in your C extension, you may need to add some code to handle it.
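As a concrete sketch, the struct from the question could be read portably along these lines (the '2x' assumes the original 32-bit compiler padded the WORD out to a 4-byte boundary, and BOOL is assumed to be a 32-bit int; both are guesses you'd verify against the file):
import struct

# little-endian, standard sizes: H=uint16 WORD, 2 pad bytes, I=uint32 BOOL, i=int32 time_t
APPARMS_FMT = '<H2xIii'
size = struct.calcsize(APPARMS_FMT)   # 16 bytes
version, upgrade, time1, time2 = struct.unpack(APPARMS_FMT, open(filePath, 'rb').read(size))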
A:
What's your code for reading the binary data? Make sure you're copying the data into properly-sized types like int32_t instead of just int.
A:
Why aren't you using the struct package?
Q:
Style - When to serialize a Django model Instance: signals vs model's save method
I plan to serialize a Django model to XML when it's saved or updated. (The XML's going to be imported into a flash movie.) Is it better to listen for a post_save() or pre_save() signal and then perform the serialization, or to just handle it in the model's save() method?
A:
If it's core functionality for saving the model you'll want it as part of the save method. However, if you already have a functioning model and you want to extend it for other purposes then signals are your best bet since they allow for properly decoupled modules.
A good example might be that you want to add event logging to your site, so you simply listen for the signals that signify an event rather than modifying the original site code.
post_save() is usually best because it means the model has been successfully saved, using pre_save() doesn't guarantee that the save will be successful so shouldn't be used for anything that would depend on the save being completed.
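A minimal sketch of the signal route, using the signal API from Django 1.0 (the model and handler names here are made up):
from django.db.models.signals import post_save

def export_to_xml(sender, instance, **kwargs):
    # serialize `instance` to XML for the flash movie here
    pass

post_save.connect(export_to_xml, sender=MyModel)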
A:
Post save. That way the new data (the reason for performing the serialization) is already in the database. It'll make for a much cleaner bit of code that simply takes from the database and doesn't have to worry about adding an extra value.
The other way that comes to mind is to maintain the XML file in parallel to the database. That is to say, in your save(), add the data to the database and to the XML file. This would have much less overhead if you're dealing with huge tables.
Q:
How do you make Python / PostgreSQL faster?
Right now I have a log parser reading through 515mb of plain-text files (a file for each day over the past 4 years). My code currently stands as this: http://gist.github.com/12978. I've used psyco (as seen in the code) and I'm also compiling it and using the compiled version. It's doing about 100 lines every 0.3 seconds. The machine is a standard 15" MacBook Pro (2.4ghz C2D, 2GB RAM)
Is it possible for this to go faster or is that a limitation on the language/database?
A:
Don't waste time profiling. The time is always in the database operations. Do as few as possible. Just the minimum number of inserts.
Three Things.
One. Don't SELECT over and over again to conform the Date, Hostname and Person dimensions. Fetch all the data ONCE into a Python dictionary and use it in memory. Don't do repeated singleton selects. Use Python.
Two. Don't Update.
Specifically, Do not do this. It's bad code for two reasons.
cursor.execute("UPDATE people SET chats_count = chats_count + 1 WHERE id = '%s'" % person_id)
It can be replaced with a simple SELECT COUNT(*) FROM ... . Never update to increment a count. Just count the rows that are there with a SELECT statement. [If you can't do this with a simple SELECT COUNT or SELECT COUNT(DISTINCT), you're missing some data -- your data model should always provide correct complete counts. Never update.]
And. Never build SQL using string substitution. Completely dumb.
If, for some reason the SELECT COUNT(*) isn't fast enough (benchmark first, before doing anything lame) you can cache the result of the count in another table. AFTER all of the loads. Do a SELECT COUNT(*) FROM whatever GROUP BY whatever and insert this into a table of counts. Don't Update. Ever.
Three. Use Bind Variables. Always.
cursor.execute( "INSERT INTO ... VALUES( %(x)s, %(y)s, %(z)s )", {'x':person_id, 'y':time_to_string(time), 'z':channel,} )
The SQL never changes. The values bound in change, but the SQL never changes. This is MUCH faster. Never build SQL statements dynamically. Never.
A:
Use bind variables instead of literal values in the sql statements and create a cursor for
each unique sql statement so that the statement does not need to be reparsed the next time it is used. From the python db api doc:
Prepare and execute a database
operation (query or command).
Parameters may be provided as sequence
or mapping and will be bound to
variables in the operation. Variables
are specified in a database-specific
notation (see the module's paramstyle
attribute for details). [5]
A reference to the operation will be
retained by the cursor. If the same
operation object is passed in again,
then the cursor can optimize its
behavior. This is most effective for
algorithms where the same operation is
used, but different parameters are
bound to it (many times).
ALWAYS ALWAYS ALWAYS use bind variables.
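For instance, with psycopg2 (whose placeholders are %s regardless of the column type; the column names below are guesses based on the script):
cursor.execute(
    "INSERT INTO chats (person_id, created_at, channel) VALUES (%s, %s, %s)",
    (person_id, created_at, channel))

The driver takes care of quoting and type conversion for you.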
A:
In the for loop, you're inserting into the 'chats' table repeatedly, so you only need a single sql statement with bind variables, to be executed with different values. So you could put this before the for loop:
insert_statement="""
INSERT INTO chats(person_id, message_type, created_at, channel)
VALUES(:person_id,:message_type,:created_at,:channel)
"""
Then in place of each sql statement you execute put this in place:
cursor.execute(insert_statement, person_id='person',message_type='msg',created_at=some_date, channel=3)
This will make things run faster because:
The cursor object won't have to reparse the statement each time
The db server won't have to generate a new execution plan as it can use the one it create previously.
You won't have to call santitize() as special characters in the bind variables won't part of the sql statement that gets executed.
Note: The bind variable syntax I used is Oracle specific. You'll have to check the psycopg2 library's documentation for the exact syntax.
Other optimizations:
You're incrementing with the "UPDATE people SET chats_count" after each loop iteration. Keep a dictionary mapping user to chat_count, then execute one statement with the total you've seen. This will be faster than hitting the db after every record.
Use bind variables on ALL your queries. Not just the insert statement, I choose that as an example.
Change all the find_*() functions that do db look ups to cache their results so they don't have to hit the db every time.
Psyco optimizes Python programs that perform a large number of numeric operations. This script is I/O bound rather than CPU bound, so I wouldn't expect it to give you much, if any, speedup.
A:
As Mark suggested, use binding variables. The database only has to prepare each statement once, then "fill in the blanks" for each execution. As a nice side effect, it will automatically take care of string-quoting issues (which your program isn't handling).
Turn transactions on (if they aren't already) and do a single commit at the end of the program. The database won't have to write anything to disk until all the data needs to be committed. And if your program encounters an error, none of the rows will be committed, allowing you to simply re-run the program once the problem has been corrected.
Your log_hostname, log_person, and log_date functions are doing needless SELECTs on the tables. Make the appropriate table attributes PRIMARY KEY or UNIQUE. Then, instead of checking for the presence of the key before you INSERT, just do the INSERT. If the person/date/hostname already exists, the INSERT will fail from the constraint violation. (This won't work if you use a transaction with a single commit, as suggested above.)
Alternatively, if you know you're the only one INSERTing into the tables while your program is running, then create parallel data structures in memory and maintain them in memory while you do your INSERTs. For example, read in all the hostnames from the table into an associative array at the start of the program. When want to know whether to do an INSERT, just do an array lookup. If no entry found, do the INSERT and update the array appropriately. (This suggestion is compatible with transactions and a single commit, but requires more programming. It'll be wickedly faster, though.)
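A rough sketch of that in-memory variant (table and column names are guesses, since the original script isn't reproduced here):
hostnames = {}
cursor.execute("SELECT name, id FROM hostnames")
for name, hid in cursor.fetchall():
    hostnames[name] = hid

def hostname_id(name):
    if name not in hostnames:
        cursor.execute("INSERT INTO hostnames (name) VALUES (%s)", (name,))
        cursor.execute("SELECT id FROM hostnames WHERE name = %s", (name,))
        hostnames[name] = cursor.fetchone()[0]
    return hostnames[name]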
A:
Additionally to the many fine suggestions @Mark Roddy has given, do the following:
don't use readlines; you can iterate over file objects directly
try to use executemany rather than execute: do batch inserts rather than single inserts; this tends to be faster because there's less overhead, and it also reduces the number of commits (see the sketch below)
str.rstrip will work just fine instead of stripping of the newline with a regex
Batching the inserts will use more memory temporarily, but that should be fine when you don't read the whole file into memory.
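A sketch of the executemany suggestion (parse() is a hypothetical stand-in for whatever turns a log line into a row tuple):
rows = []
for line in logfile:
    rows.append(parse(line))   # -> (person_id, message_type, created_at, channel)
cursor.executemany(
    "INSERT INTO chats (person_id, message_type, created_at, channel)"
    " VALUES (%s, %s, %s, %s)", rows)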
Q:
Regex to remove conditional comments
I want a regex which can match conditional comments in a HTML source page so I can remove only those. I want to preserve the regular comments.
I would also like to avoid using the .*? notation if possible.
The text is
foo
<!--[if IE]>
<style type="text/css">
ul.menu ul li{
font-size: 10px;
font-weight:normal;
padding-top:0px;
}
</style>
<![endif]-->
bar
and I want to remove everything from <!--[if IE]> through <![endif]-->
EDIT: It is because of BeautifulSoup I want to remove these tags. BeautifulSoup fails to parse and gives an incomplete source
EDIT2: [if IE] isn't the only condition. There are lots more and I don't have any list of all possible combinations.
EDIT3: Vinko Vrsalovic's solution works, but the actual reason BeautifulSoup failed was a rogue comment within the conditional comment. Like
<!--[if lt IE 7.]>
<script defer type="text/javascript" src="pngfix_253168.js"></script><!--png fix for IE-->
<![endif]-->
Notice the <!--png fix for IE--> comment?
Though my problem was solved, I would love to get a regex solution for this.
A:
>>> from BeautifulSoup import BeautifulSoup, Comment
>>> html = '<html><!--[if IE]> bloo blee<![endif]--></html>'
>>> soup = BeautifulSoup(html)
>>> comments = soup.findAll(text=lambda text:isinstance(text, Comment)
and text.find('if') != -1) #This is one line, of course
>>> [comment.extract() for comment in comments]
[u'[if IE]> bloo blee<![endif]']
>>> print soup.prettify()
<html>
</html>
>>>
python 3 with bs4:
from bs4 import BeautifulSoup, Comment

html = '<html><!--[if IE]> bloo blee<![endif]--></html>'
soup = BeautifulSoup(html, "html.parser")
comments = soup.findAll(text=lambda text: isinstance(text, Comment)
                        and text.find('if') != -1)  # This is one line, of course
[comment.extract() for comment in comments]
# extracted: ['[if IE]> bloo blee<![endif]']
print(soup.prettify())
If your data gets BeautifulSoup confused, you can fix it before hand or customize the parser, among other solutions.
EDIT: Per your comment, you just modify the lambda passed to findAll as you need (I modified it)
A:
Here's what you'll need:
<!(|--)\[[^\]]+\]>.+?<!\[endif\](|--)>
It will filter out all sorts of conditional comments including:
<!--[if anything]>
...
<![endif]-->
and
<![if ! IE 6]>
...
<![endif]>
EDIT3: Vinko Vrsalovic's solution works, but the actual reason BeautifulSoup failed was a rogue comment nested within the conditional comment. Like
Notice the comment?
Though my problem was solved, I would love to get a regex solution for this.
How about this:
(<!(|--)\[[^\]]+\]>.*?)(<!--.+?-->)(.*?<!\[endif\](|--)>)
Do a replace on that regular expression leaving \1\4 (or $1$4) as the replacement.
I know it has .*? and .+? in it, see my comment on this post.
A:
I'd simply go with :
import re
html = """fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs---><!--[if lt IE 7.]>\
<script defer type="text/javascript" src="pngfix_253168.js"></script><!--png fix for IE-->\
<![endif]-->fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->"""
# here the black magic occurs (without '.')
clean_html = ''.join(re.split(r'<!--\[[^¤]+?endif]-->', html))
print clean_html
'fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->'
N.B: [^¤] will match any char that is not '¤'. This is really useful since it's lightning fast, and this char can be found on most keyboards. But the trick is that it's really hard to type (no one will type it by mistake) and nobody uses it: it's a generic currency sign character.
If you don't feel like using ¤, however, you can use chr(7) to generate the "system bell" char, which is unprintable and can't be found in a web page ;-)
A:
As I see it, you only need to worry about downlevel-hidden comments (the ones that start with <!--), and you don't need to match anything beyond the word if and the space following it. This should do what you want:
"<!--\[if\s(?:[^<]+|<(?!!\[endif\]-->))*<!\[endif\]-->"
That mess in the middle is to satisfy your desire not to use .*?, but I don't really think it's worth the effort. The .*? approach should work fine if you compile the regex with the re.S flag set or wrap it in (?s:...). For example:
"(?s:<!--\[if\s.*?<!\[endif\]-->)"
A:
@Benoit
Small Correction (with multiline turned on):
"<!--\[if IE\]>.*?<!\[endif\]-->"
A:
Don't use a regular expression for this. You will get confused about comments containing opening tags and what not, and do the wrong thing. HTML isn't regular, and trying to modify it with a single regular expression will fail.
Use an HTML parser for this. BeautifulSoup is a good, easy, flexible and sturdy one that is able to handle real-world (meaning hopelessly broken) HTML. With it you can just look up all comment nodes, examine their content (you can use a regular expression for that, if you wish) and remove them if they need to be removed.
A:
This works in Visual Studio 2005, where there is no line span option:
\<!--\[if IE\]\>{.|\n}*\<!\[endif\]--\>
| Regex to remove conditional comments | I want a regex which can match conditional comments in a HTML source page so I can remove only those. I want to preserve the regular comments.
I would also like to avoid using the .*? notation if possible.
The text is
foo
<!--[if IE]>
<style type="text/css">
ul.menu ul li{
font-size: 10px;
font-weight:normal;
padding-top:0px;
}
</style>
<![endif]-->
bar
and I want to remove everything in <!--[if IE]> and <![endif]-->
EDIT: It is because of BeautifulSoup I want to remove these tags. BeautifulSoup fails to parse and gives an incomplete source
EDIT2: [if IE] isn't the only condition. There are lots more and I don't have any list of all possible combinations.
EDIT3: Vinko Vrsalovic's solution works, but the actual problem why beautifulsoup failed was because of a rogue comment within the conditional comment. Like
<!--[if lt IE 7.]>
<script defer type="text/javascript" src="pngfix_253168.js"></script><!--png fix for IE-->
<![endif]-->
Notice the <!--png fix for IE--> comment?
Though my problem was solve, I would love to get a regex solution for this.
| [
">>> from BeautifulSoup import BeautifulSoup, Comment\n>>> html = '<html><!--[if IE]> bloo blee<![endif]--></html>'\n>>> soup = BeautifulSoup(html)\n>>> comments = soup.findAll(text=lambda text:isinstance(text, Comment) \n and text.find('if') != -1) #This is one line, of course\n>>> [comment.extract() for comment in comments]\n[u'[if IE]> bloo blee<![endif]']\n>>> print soup.prettify()\n<html>\n</html>\n>>> \n\npython 3 with bf4:\nfrom bs4 import BeautifulSoup, Comment\nhtml = '<html><!--[if IE]> bloo blee<![endif]--></html>'\nsoup = BeautifulSoup(html, \"html.parser\")\ncomments = soup.findAll(text=lambda text:isinstance(text, Comment) \n and text.find('if') != -1) #This is one line, of course\n[comment.extract() for comment in comments]\n[u'[if IE]> bloo blee<![endif]']\nprint (soup.prettify())\n\nIf your data gets BeautifulSoup confused, you can fix it before hand or customize the parser, among other solutions.\nEDIT: Per your comment, you just modify the lambda passed to findAll as you need (I modified it)\n",
"Here's what you'll need:\n<!(|--)\\[[^\\]]+\\]>.+?<!\\[endif\\](|--)>\n\nIt will filter out all sorts of conditional comments including:\n<!--[if anything]>\n ...\n<[endif]-->\n\nand\n<![if ! IE 6]>\n ...\n<![endif]>\n\n\n\nEDIT3: Vinko Vrsalovic's solution works, but the actual problem why beautifulsoup failed was because of a rogue comment within the conditional comment. Like\n\n\nNotice the comment?\nThough my problem was solve, I would love to get a regex solution for this.\n\nHow about this:\n(<!(|--)\\[[^\\]]+\\]>.*?)(<!--.+?-->)(.*?<!\\[endif\\](|--)>)\n\nDo a replace on that regular expression leaving \\1\\4 (or $1$4) as the replacement.\nI know it has .*? and .+? in it, see my comment on this post.\n",
"I'd simply go with :\nimport re\n\nhtml = \"\"\"fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs---><!--[if lt IE 7.]>\\\n<script defer type=\"text/javascript\" src=\"pngfix_253168.js\"></script><!--png fix for IE-->\\\n<![endif]-->fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->\"\"\"\n\n# here the black magic occurs (whithout '.')\nclean_html = ''.join(re.split(r'<!--\\[[^¤]+?endif]-->', html))\n\nprint clean_html\n\n'fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->fjlk<wb>dsqfjqdsmlkf fdsijfmldsqjfl fjdslmfkqsjf<---- fdjslmjkqfs--->'\n\nN.B : [^¤] will match any char that is not '¤'. This is really useful since it's lightning fast and this char can be found on any keyboard. But the trick is it's really hard to type (no one will type it by mistake) and nobody uses it : it's a generical money devise char.\nIf you don't feel like using ¤, however, you can use chr(7) to generate the \"system bell\" char, wich is unprintable and can't be found in a web page ;-)\n",
"As I see it, you only need to worry about downlevel-hidden comments (the ones that start with <!--), and you don't need to match anything beyond the word if and the space following it. This should do what you want:\n\"<!--\\[if\\s(?:[^<]+|<(?!!\\[endif\\]-->))*<!\\[endif\\]-->\"\n\nThat mess in the middle is to satisfy your desire not to use .*?, but I don't really think it's worth the effort. The .*? approach should work fine if you compile the regex with the Re.S flag set or wrap it in (?s:...). For example:\n\"(?s:<!--\\[if\\s.*?<!\\[endif\\]-->)\"\n\n",
"@Benoit \nSmall Correction (with multiline turned on): \n \"<!--\\[if IE\\]>.*?<!\\[endif\\]-->\"\n\n",
"Don't use a regular expression for this. You will get confused about comments containing opening tags and what not, and do the wrong thing. HTML isn't regular, and trying to modify it with a single regular expression will fail.\nUse a HTML parser for this. BeautifulSoup is a good, easy, flexible and sturdy one that is able to handle real-world (meaning hopelessly broken) HTML. With it you can just look up all comment nodes, examine their content (you can use a regular expression for that, if you wish) and remove them if they need to be removed.\n",
"This works in Visual Studio 2005, where there is no line span option:\n\\<!--\\[if IE\\]\\>{.|\\n}*\\<!\\[endif\\]--\\>\n"
] | [
5,
2,
2,
2,
1,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000132488_python_regex.txt |
Q:
Is there something like Python's getattr() in C#?
Is there something like Python's getattr() in C#? I would like to create a window by reading a list which contains the names of controls to put on the window.
A:
There is also Type.InvokeMember.
public static class ReflectionExt
{
    public static object GetAttr(this object obj, string name)
    {
        Type type = obj.GetType();
        BindingFlags flags = BindingFlags.Instance |
                             BindingFlags.Public |
                             BindingFlags.GetProperty;

        return type.InvokeMember(name, flags, Type.DefaultBinder, obj, null);
    }
}
Which could be used like:
object value = ReflectionExt.GetAttr(obj, "PropertyName");
or (as an extension method):
object value = obj.GetAttr("PropertyName");
A:
Use reflection for this.
Type.GetProperty() and Type.GetProperties() each return PropertyInfo instances, which can be used to read a property value on an object.
var result = typeof(DateTime).GetProperty("Year").GetValue(dt, null)
Type.GetMethod() and Type.GetMethods() each return MethodInfo instances, which can be used to execute a method on an object.
var result = typeof(DateTime).GetMethod("ToLongDateString").Invoke(dt, null);
If you don't necessarily know the type (which would be a little weird if you knew the property name), then you could do something like this as well.
var result = dt.GetType().GetProperty("Year").GetValue(dt, null);
A:
Yes, you can do this...
typeof(YourObjectType).GetProperty("PropertyName").GetValue(instanceObjectToGetPropFrom, null);
A:
There's the System.Reflection.PropertyInfo class that can be created using object.GetType().GetProperties(). That can be used to probe an object's properties using strings. (Similar methods exist for object methods, fields, etc.)
I don't think that will help you accomplish your goals though. You should probably just create and manipulate the objects directly. Controls have a Name property that you can set, for example.
| Is there something like Python's getattr() in C#? | Is there something like Python's getattr() in C#? I would like to create a window by reading a list which contains the names of controls to put on the window.
| [
"There is also Type.InvokeMember.\npublic static class ReflectionExt\n{\n public static object GetAttr(this object obj, string name)\n {\n Type type = obj.GetType();\n BindingFlags flags = BindingFlags.Instance | \n BindingFlags.Public | \n BindingFlags.GetProperty;\n\n return type.InvokeMember(name, flags, Type.DefaultBinder, obj, null);\n }\n}\n\nWhich could be used like:\nobject value = ReflectionExt.GetAttr(obj, \"PropertyName\");\n\nor (as an extension method):\nobject value = obj.GetAttr(\"PropertyName\");\n\n",
"Use reflection for this.\nType.GetProperty() and Type.GetProperties() each return PropertyInfo instances, which can be used to read a property value on an object.\nvar result = typeof(DateTime).GetProperty(\"Year\").GetValue(dt, null)\n\nType.GetMethod() and Type.GetMethods() each return MethodInfo instances, which can be used to execute a method on an object.\nvar result = typeof(DateTime).GetMethod(\"ToLongDateString\").Invoke(dt, null);\n\nIf you don't necessarily know the type (which would be a little wierd if you new the property name), than you could do something like this as well.\nvar result = dt.GetType().GetProperty(\"Year\").Invoke(dt, null);\n\n",
"Yes, you can do this...\ntypeof(YourObjectType).GetProperty(\"PropertyName\").GetValue(instanceObjectToGetPropFrom, null);\n\n",
"There's the System.Reflection.PropertyInfo class that can be created using object.GetType().GetProperties(). That can be used to probe an object's properties using strings. (Similar methods exist for object methods, fields, etc.)\nI don't think that will help you accomplish your goals though. You should probably just create and manipulate the objects directly. Controls have a Name property that you can set, for example.\n"
] | [
13,
5,
1,
0
] | [] | [] | [
"c#",
"python",
"user_interface"
] | stackoverflow_0000138045_c#_python_user_interface.txt |
Q:
Python language API
I'm starting with Python coming from java.
I was wondering if there exists something similar to the JavaDoc API where I can find the class, its methods, and an example of how to use it.
I've found it very helpful to use help( thing ) from the Python command line.
I have found this also:
http://docs.python.org/2/
https://docs.python.org/2/py-modindex.html
But it seems to help only when you already have the class name you are looking for. In the JavaDoc API I have all the classes, so if I need something I scroll down to a class that "sounds like" what I need. Or sometimes I just browse all the classes to see what they do, and when I need a feature my brain reminds me: we saw something similar in the javadoc, remember!?
But I don't seem to find the equivalent in Python (yet), and that's why I'm posting this question.
BTW I know that I would eventually will read this:
https://docs.python.org/2/library/
But, well, I think it is not today.
A:
pydoc?
I'm not sure if you're looking for something more sophisticated, but it does the trick.
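For instance, from a shell (the module and keyword here are just illustrations):
pydoc os.path      # render a module's documentation in the terminal
pydoc -k thread    # search all module synopses for a keyword, apropos-style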
A:
The standard python library is fairly well documented. Try jumping into python, importing a module, say "os", and running:
import os
help(os)
This reads the doc strings on each of the items in the module and displays it. This is exactly what pydoc will do too.
EDIT: epydoc is probably exactly what you're looking for.
A:
Here is a list of all the modules in Python, not sure if that's what you're really after.
A:
I've downloaded Python 2.5 from Python.org and it does not contain pydoc.
Directorio de C:\Python25
9/23/2008 10:45 PM <DIR> .
9/23/2008 10:45 PM <DIR> ..
9/23/2008 10:45 PM <DIR> DLLs
9/23/2008 10:45 PM <DIR> Doc
9/23/2008 10:45 PM <DIR> include
9/25/2008 06:34 PM <DIR> Lib
9/23/2008 10:45 PM <DIR> libs
2/21/2008 01:05 PM 14,013 LICENSE.txt
2/21/2008 01:05 PM 119,048 NEWS.txt
2/21/2008 01:11 PM 24,064 python.exe
2/21/2008 01:12 PM 24,576 pythonw.exe
2/21/2008 01:05 PM 56,354 README.txt
9/23/2008 10:45 PM <DIR> tcl
9/23/2008 10:45 PM <DIR> Tools
2/21/2008 01:11 PM 4,608 w9xpopen.exe
6 archivos 242,663 bytes
But it has ( the substitute I guess ) pydocgui...
C:\Python25>dir Tools\Scripts\pydocgui.pyw
10/28/2005 07:06 PM 222 pydocgui.pyw
1 archivos 222 bytes
This launches a webserver and shows what I was looking for. All the modules plus all the classes that come with the platform.
The Doc dir contains the same as in:
http://docs.python.org/
Thanks a lot for guide me to pydoc.
A:
BTW I know that I would eventually
will read this:
http://docs.python.org/lib/lib.html
But, well, I think it is not today.
I suggest that you're making a mistake. The lib doc has "the class, its methods, and an example of how to use it." It is what you are looking for.
I use both Java and Python all the time. Dig into the library doc, you'll find everything you're looking for.
A:
You can set the environment variable PYTHONDOCS to point to where the python documentation is installed.
On my system, it's in /usr/share/doc/python2.5
So you can define this variable in your shell profile or somewhere else depending on your system:
export PYTHONDOCS=/usr/share/doc/python2.5
Now, if you open an interactive python console, you can call the help system. For example:
>>> help(Exception)
Help on class Exception in module exceptions:

class Exception(BaseException)
 |  Common base class for all non-exit exceptions.
 |
 |  Method resolution order:
 |      Exception
Documentation is here:
https://docs.python.org/library/pydoc.html
A:
If you're working on Windows ActiveState Python comes with the documentation, including the library reference in a searchable help file.
A:
It doesn't directly answer your question (so I'll probably be downgraded), but you may be interested in Jython.
Jython is an implementation of the high-level, dynamic, object-oriented language Python written in 100% Pure Java, and seamlessly integrated with the Java platform. It thus allows you to run Python on any Java platform.
Since you are coming from Java, Jython may help you leverage Python while still allowing you to use your Java knowledge.
A:
Also try
pydoc -p 11111
Then type in web browser http://localhost:11111
EDIT: of course you can use any other value for the port number instead of 11111
| Python language API | I'm starting with Python coming from java.
I was wondering if there exists something similar to JavaDoc API where I can find the class, its methods and and example of how to use it.
I've found very helpul to use help( thing ) from the Python ( command line )
I have found this also:
http://docs.python.org/2/
https://docs.python.org/2/py-modindex.html
But it seems to help when you already have the class name you are looking for. In JavaDoc API I have all the classes so if I need something I scroll down to a class that "sounds like" what I need. Or some times I just browse all the classes to see what they do, and when I need a feature my brain recalls me We saw something similar in the javadoc remember!?
But I don't seem to find the similar in Python ( yet ) and that why I'm posting this questin.
BTW I know that I would eventually will read this:
https://docs.python.org/2/library/
But, well, I think it is not today.
| [
"pydoc?\nI'm not sure if you're looking for something more sophisticated, but it does the trick.\n",
"The standard python library is fairly well documented. Try jumping into python and importing a module say \"os\" and running:\nimport os \nhelp(os)\n\nThis reads the doc strings on each of the items in the module and displays it. This is exactly what pydoc will do too.\nEDIT: epydoc is probably exactly what you're looking for: \n",
"Here is a list of all the modules in Python, not sure if that's what you're really after.\n",
"I've downloaded Python 2.5 from Python.org and It does not contains pydoc.\nDirectorio de C:\\Python25\n\n9/23/2008 10:45 PM <DIR> .\n9/23/2008 10:45 PM <DIR> ..\n9/23/2008 10:45 PM <DIR> DLLs\n9/23/2008 10:45 PM <DIR> Doc\n9/23/2008 10:45 PM <DIR> include\n9/25/2008 06:34 PM <DIR> Lib\n9/23/2008 10:45 PM <DIR> libs\n2/21/2008 01:05 PM 14,013 LICENSE.txt\n2/21/2008 01:05 PM 119,048 NEWS.txt\n2/21/2008 01:11 PM 24,064 python.exe\n2/21/2008 01:12 PM 24,576 pythonw.exe\n2/21/2008 01:05 PM 56,354 README.txt\n9/23/2008 10:45 PM <DIR> tcl\n9/23/2008 10:45 PM <DIR> Tools\n2/21/2008 01:11 PM 4,608 w9xpopen.exe\n 6 archivos 242,663 bytes\n\nBut it has ( the substitute I guess ) pydocgui...\nC:\\Python25>dir Tools\\Scripts\\pydocgui.pyw\n\n10/28/2005 07:06 PM 222 pydocgui.pyw\n 1 archivos 222 bytes\n\nThis launches a webserver and shows what I was looking for. All the modules plus all the classes that come with the platform.\nThe Doc dir contains the same as in:\nhttp://docs.python.org/\nThanks a lot for guide me to pydoc.\n",
"\nBTW I know that I would eventually\n will read this:\nhttp://docs.python.org/lib/lib.html\nBut, well, I think it is not today.\n\nI suggest that you're making a mistake. The lib doc has \"the class, its methods and and example of how to use it.\" It is what you are looking for. \nI use both Java and Python all the time. Dig into the library doc, you'll find everything you're looking for.\n",
"You can set the environment variable PYTHONDOCS to point to where the python documentation is installed.\nOn my system, it's in /usr/share/doc/python2.5\nSo you can define this variable in your shell profile or somewhere else depending on your system:\n\nexport PYTHONDOCS=/usr/share/doc/python2.5\n\nNow, if you open an interractive python console, you can call the help system. For exemple:\n\n>>> help(Exception)\n>>> Help on class Exception in module exceptions:\n\n>>> class Exception(BaseException)\n>>> | Common base class for all non-exit exceptions.\n>>> | \n>>> | Method resolution order:\n>>> | Exception\n\n\nDocumentation is here:\nhttps://docs.python.org/library/pydoc.html\n",
"If you're working on Windows ActiveState Python comes with the documentation, including the library reference in a searchable help file.\n",
"It doesn't directly answer your question (so I'll probably be downgraded), but you may be interested in Jython.\n\nJython is an implementation of the high-level, dynamic, object-oriented language Python written in 100% Pure Java, and seamlessly integrated with the Java platform. It thus allows you to run Python on any Java platform.\n\nSince you are coming from Java, Jython may help you leverage Python while still allowing you to use your Java knowledge.\n",
"Also try\npydoc -p 11111\n\nThen type in web browser http://localhost:11111\nEDIT: of course you can use any other value for port number instead of 11111\n"
] | [
5,
2,
1,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"documentation",
"python",
"python_2.x",
"reference"
] | stackoverflow_0000136739_documentation_python_python_2.x_reference.txt |
Q:
wxpython: How do I examine dragged data in OnDragOver?
I'm a bit perplexed by drag and drop in wxPython (but perhaps this question pertains to drag and drop in other GUI frameworks as well). The framework provides a couple of callbacks (OnEnter and OnDragOver) that purportedly allow me to inform the system whether the current mouse position is a valid place to drop whatever is being dragged. From these methods I can return wx.DragNone, wx.DragCopy, etc. What baffles me is that from within these methods I am not allowed to call GetData, which means I am not allowed to examine the data that the user is dragging. If I cannot see the data, how am I supposed to know whether it is OK for the user to drop here?
A:
One solution, which is a hack of limited usefulness, is to store the dragged data in a global or static reference somewhere when a drag is initiated. This way, in the OnEnter and OnDragOver handlers, it is possible to get a reference to the data being dragged. This is of course only useful for drags within the same application (the same instance of the application, actually).
A:
There is no way to see dragged data in OnEnter and OnDragOver methods.
The only solution I found is to store the dragged item in some instance variable that is then readable inside these methods.
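A minimal sketch of that stash-and-peek workaround (the AppDragData holder and the "ok:" payload convention are my own invention; it assumes a plain-text drag in classic wxPython):
import wx

class AppDragData(object):
    current = None  # module-level stash for the in-flight drag (same app only)

class MyDropTarget(wx.TextDropTarget):
    def OnDragOver(self, x, y, defResult):
        # GetData() is off-limits here, so peek at the stashed payload instead
        data = AppDragData.current
        return defResult if data and data.startswith("ok:") else wx.DragNone

    def OnDropText(self, x, y, text):
        AppDragData.current = None  # drag finished, clear the stash
        return True

The drag initiator fills the stash just before starting the drag, e.g. AppDragData.current = payload, then creates the wx.DropSource and calls DoDragDrop() as usual.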
| wxpython: How do I examine dragged data in OnDragOver? | I'm a bit perplexed by drag and drop in wxPython (but perhaps this questions pertains to drag and drop in other GUI frameworks as well). The frameworks provides a couple of callbacks (OnEnter and OnDragOver) that purportedly allow me to inform the system whether the current mouse position is a valid place to drop whatever it is that is being dragged. From these methods I can return wx.DragNone, wx.DragCopy, etc. What baffles me is that from within these methods I am not allowed to call GetData, which means I am not allowed to examine the data that the user is dragging. If I cannot see the data, how am I supposed to know whether it is OK for the user to drop here?
| [
"One solution, which is a hack of limited usefulness, is when a drag is initiated, store the dragged data in a global or static reference somewhere. This way, in the OnEnter and OnDragOver handlers, it is possible to get a reference to the data being dragged. This is of course only useful for drags within the same application (the same instance of the application, actually).\n",
"There is no way to see dragged data in OnEnter and OnDragOver methods.\nThe only solution I found is to store the dragged item in some instance variable that is then readable inside these methods.\n"
] | [
1,
1
] | [] | [] | [
"drag_and_drop",
"python",
"user_interface",
"wxpython",
"wxwidgets"
] | stackoverflow_0000026706_drag_and_drop_python_user_interface_wxpython_wxwidgets.txt |
Q:
PyQt - QScrollBar
Dear Stackoverflow, can you show me an example of how to use a QScrollBar? Thanks.
A:
>>> import sys
>>> from PyQt4 import QtCore, QtGui
>>> app = QtGui.QApplication(sys.argv)
>>> sb = QtGui.QScrollBar()
>>> sb.setMinimum(0)
>>> sb.setMaximum(100)
>>> def on_slider_moved(value): print "new slider position: %i" % (value, )
>>> sb.connect(sb, QtCore.SIGNAL("sliderMoved(int)"), on_slider_moved)
>>> sb.show()
>>> app.exec_()
Now, when you move the slider (you might have to resize the window), you'll see the slider position printed to the terminal as you move the handle.
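If you prefer new-style signals (available in PyQt4 as well), the same hookup looks like this, minus the QApplication boilerplate:
def on_slider_moved(value):
    print "new slider position: %i" % (value, )

sb = QtGui.QScrollBar()
sb.setRange(0, 100)                      # shorthand for setMinimum/setMaximum
sb.sliderMoved.connect(on_slider_moved)  # new-style signal connection
sb.show()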
A:
It will come down to you using the QScrollArea, a widget that implements showing something that is larger than the available space. You will not need to use QScrollBar directly. I don't have a PyQt example, but there is a C++ example in the Qt distribution called the "Image Viewer". The object hierarchy will still be the same.
A:
In the PyQT source code distribution, look at the file:
examples/widgets/sliders.pyw
Or there is a minimal example here (I guess I shouldn't copy paste because of potential copyright issues)
| PyQt - QScrollBar | Dear Stacktoverflow, can you show me an example of how to use a QScrollBar? Thanks.
| [
">>> import sys\n>>> from PyQt4 import QtCore, QtGui\n>>> app = QtGui.QApplication(sys.argv)\n>>> sb = QtGui.QScrollBar()\n>>> sb.setMinimum(0)\n>>> sb.setMaximum(100)\n>>> def on_slider_moved(value): print \"new slider position: %i\" % (value, )\n>>> sb.connect(sb, QtCore.SIGNAL(\"sliderMoved(int)\"), on_slider_moved)\n>>> sb.show()\n>>> app.exec_()\n\nNow, when you move the slider (you might have to resize the window), you'll see the slider position printed to the terminal as you the handle.\n",
"It will come down to you using the QScrollArea, it is a widget that implements showing something that is larger than the available space. You will not need to use QScrollBar directly. I don't have a PyQt example but there is a C++ example in the QT distribution it is called the \"Image Viewer\". The object hierarchy will still be the same \n",
"In the PyQT source code distribution, look at the file:\n\nexamples/widgets/sliders.pyw\n\nOr there is a minimal example here (I guess I shouldn't copy paste because of potential copyright issues)\n"
] | [
2,
1,
0
] | [] | [] | [
"pyqt",
"python"
] | stackoverflow_0000139005_pyqt_python.txt |
Q:
How do I test a django database schema?
I want to write tests that can show whether or not the database is in sync with my models.py file. Actually I have already written them, only to find out that django creates a new database each time the tests are run based on the models.py file.
Is there any way I can make the models.py test use the existing database schema? The one that's in mysql/postgresql, and not the one that's in /myapp/models.py ?
I don't care about the data that's in the database, I only care about its schema i.e. I want my tests to notice if a table in the database has fewer fields than the schema in my models.py file.
I'm using the unittest framework (actually the django extension to it) if this has any relevance.
thanks
A:
What we did was override the default test_runner so that it wouldn't create a new database to test against. This way, it runs the tests against whatever our current local database looks like. But be very careful if you use this method, because any changes your tests make to data will be permanent. I made sure that all our tests restore any changes back to their original state, and we keep a pristine version of our database on the server, backed up.
So to do this you need to copy the run_test method from django.test.simple to a location in your project -- I put mine in myproject/test/test_runner.py
Then make the following changes to that method:
# change:
old_name = settings.DATABASE_NAME
from django.db import connection
connection.creation.create_test_db(verbosity, autoclobber=not interactive)
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
connection.creation.destroy_test_db(old_name, verbosity)

# to:
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
Make sure to do all the necessary imports at the top and then in your settings file set the setting:
TEST_RUNNER = 'myproject.test.test_runner.run_tests'
Now when you run ./manage.py test Django will run the tests against the current state of your database rather than creating a new version based on your current model definitions.
Another thing you can do is create a copy of your database locally, and then do a check in your new run_test() method like this:
if settings.DATABASE_NAME != 'my_test_db':
    sys.exit("You cannot run tests using the %s database. Please switch DATABASE_NAME to my_test_db in settings.py" % settings.DATABASE_NAME)
That way there's no danger of running tests against your main database.
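Putting the two ideas together, a minimal sketch of the whole runner (this assumes Django's old function-based TEST_RUNNER interface; the elided part is the suite-building code you copy from django.test.simple):
# myproject/test/test_runner.py
import sys
import unittest
from django.conf import settings

def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=None):
    if settings.DATABASE_NAME != 'my_test_db':
        sys.exit("You cannot run tests using the %s database." % settings.DATABASE_NAME)
    suite = unittest.TestSuite(extra_tests or [])
    # ... build the suite from test_labels, exactly as django.test.simple does ...
    result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
    return len(result.failures) + len(result.errors)  # the old runner contract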
| How do I test a django database schema? | I want to write tests that can show whether or not the database is in sync with my models.py file. Actually I have already written them, only to find out that django creates a new database each time the tests are run based on the models.py file.
Is there any way I can make the models.py test use the existing database schema? The one that's in mysql/postgresql, and not the one that's in /myapp/models.py ?
I don't care about the data that's in the database, I only care about it's schema i.e. I want my tests to notice if a table in the database has less fields than the schema in my models.py file.
I'm using the unittest framework (actually the django extension to it) if this has any relevance.
thanks
| [
"What we did was override the default test_runner so that it wouldn't create a new database to test against. This way, it runs the test against whatever our current local database looks like. But be very careful if you use this method because any changes to data you make in your tests will be permanent. I made sure that all our tests restores any changes back to their original state, and keep our pristine version of our database on the server and backed up.\nSo to do this you need to copy the run_test method from django.test.simple to a location in your project -- I put mine in myproject/test/test_runner.py\nThen make the following changes to that method:\n// change\nold_name = settings.DATABASE_NAME\nfrom django.db import connection\nconnection.creation.create_test_db(verbosity, autoclobber=not interactive)\nresult = unittest.TextTestRunner(verbosity=verbosity).run(suite)\nconnection.creation.destroy_test_db(old_name, verbosity)\n\n// to:\nresult = unittest.TextTestRunner(verbosity=verbosity).run(suite)\n\nMake sure to do all the necessary imports at the top and then in your settings file set the setting:\nTEST_RUNNER = 'myproject.test.test_runner.run_tests'\n\nNow when you run ./manage.py test Django will run the tests against the current state of your database rather than creating a new version based on your current model definitions.\nAnother thing you can do is create a copy of your database locally, and then do a check in your new run_test() method like this:\nif settings.DATABASE_NAME != 'my_test_db': \n sys.exit(\"You cannot run tests using the %s database. Please switch DATABASE_NAME to my_test_db in settings.py\" % settings.DATABASE_NAME) \n\nThat way there's no danger of running tests against your main database.\n"
] | [
9
] | [] | [] | [
"django",
"model",
"python",
"unit_testing"
] | stackoverflow_0000138851_django_model_python_unit_testing.txt |
Q:
Why results of map() and list comprehension are different?
The following test fails:
#!/usr/bin/env python
def f(*args):
    """
    >>> t = 1, -1
    >>> f(*map(lambda i: lambda: i, t))
    [1, -1]
    >>> f(*(lambda: i for i in t)) # -> [-1, -1]
    [1, -1]
    >>> f(*[lambda: i for i in t]) # -> [-1, -1]
    [1, -1]
    """
    alist = [a() for a in args]
    print(alist)

if __name__ == '__main__':
    import doctest; doctest.testmod()
In other words:
>>> t = 1, -1
>>> args = []
>>> for i in t:
...     args.append(lambda: i)
...
>>> map(lambda a: a(), args)
[-1, -1]
>>> args = []
>>> for i in t:
...     args.append((lambda i: lambda: i)(i))
...
>>> map(lambda a: a(), args)
[1, -1]
>>> args = []
>>> for i in t:
...     args.append(lambda i=i: i)
...
>>> map(lambda a: a(), args)
[1, -1]
A:
They are different, because the value of i in both the generator expression and the list comp are evaluated lazily, i.e. when the anonymous functions are invoked in f.
By that time, i is bound to the last value of t, which is -1.
So basically, this is what the list comprehension does (likewise for the genexp):
x = []
i = 1 # 1. from t
x.append(lambda: i)
i = -1 # 2. from t
x.append(lambda: i)
Now the lambdas carry around a closure that references i, but i is bound to -1 in both cases, because that is the last value it was assigned to.
If you want to make sure that the lambda receives the current value of i, do
f(*[lambda u=i: u for i in t])
This way, you force the evaluation of i at the time the closure is created.
Edit: There is one difference between generator expressions and list comprehensions: the latter leak the loop variable into the surrounding scope.
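For example, in Python 2 (where this behaviour applies) the difference is easy to demonstrate:
>>> i = 'outer'
>>> [i for i in (1, -1)]
[1, -1]
>>> i                        # the list comprehension rebound the enclosing i
-1
>>> i = 'outer'
>>> list(i for i in (1, -1))
[1, -1]
>>> i                        # the generator expression has its own scope
'outer'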
A:
The lambda captures variables, not values, hence the code
lambda : i
will always return the value i is currently bound to in the closure. By the time it gets called, this value has been set to -1.
To get what you want, you'll need to capture the actual binding at the time the lambda is created, by:
>>> f(*(lambda i=i: i for i in t)) # -> [-1, -1]
[1, -1]
>>> f(*[lambda i=i: i for i in t]) # -> [-1, -1]
[1, -1]
A:
Expression f = lambda: i is equivalent to:
def f():
    return i

Expression g = lambda i=i: i is equivalent to:
def g(i=i):
    return i
i is a free variable in the first case and it is bound to the function parameter in the second case i.e., it is a local variable in that case. Values for default parameters are evaluated at the time of function definition.
Generator expression is the nearest enclosing scope (where i is defined) for i name in the lambda expression, therefore i is resolved in that block:
f(*(lambda: i for i in (1, -1)))  # -> [-1, -1]
i is a local variable of the lambda i: ... block, therefore the object it refers to is defined in that block:
f(*map(lambda i: lambda: i, (1,-1))) # -> [1, -1]
| Why results of map() and list comprehension are different? | The following test fails:
#!/usr/bin/env python
def f(*args):
"""
>>> t = 1, -1
>>> f(*map(lambda i: lambda: i, t))
[1, -1]
>>> f(*(lambda: i for i in t)) # -> [-1, -1]
[1, -1]
>>> f(*[lambda: i for i in t]) # -> [-1, -1]
[1, -1]
"""
alist = [a() for a in args]
print(alist)
if __name__ == '__main__':
import doctest; doctest.testmod()
In other words:
>>> t = 1, -1
>>> args = []
>>> for i in t:
... args.append(lambda: i)
...
>>> map(lambda a: a(), args)
[-1, -1]
>>> args = []
>>> for i in t:
... args.append((lambda i: lambda: i)(i))
...
>>> map(lambda a: a(), args)
[1, -1]
>>> args = []
>>> for i in t:
... args.append(lambda i=i: i)
...
>>> map(lambda a: a(), args)
[1, -1]
| [
"They are different, because the value of i in both the generator expression and the list comp are evaluated lazily, i.e. when the anonymous functions are invoked in f.\nBy that time, i is bound to the last value if t, which is -1.\nSo basically, this is what the list comprehension does (likewise for the genexp):\nx = []\ni = 1 # 1. from t\nx.append(lambda: i)\ni = -1 # 2. from t\nx.append(lambda: i)\n\nNow the lambdas carry around a closure that references i, but i is bound to -1 in both cases, because that is the last value it was assigned to.\nIf you want to make sure that the lambda receives the current value of i, do\nf(*[lambda u=i: u for i in t])\n\nThis way, you force the evaluation of i at the time the closure is created.\nEdit: There is one difference between generator expressions and list comprehensions: the latter leak the loop variable into the surrounding scope.\n",
"The lambda captures variables, not values, hence the code\nlambda : i\n\nwill always return the value i is currently bound to in the closure. By the time it gets called, this value has been set to -1.\nTo get what you want, you'll need to capture the actual binding at the time the lambda is created, by:\n>>> f(*(lambda i=i: i for i in t)) # -> [-1, -1]\n[1, -1]\n>>> f(*[lambda i=i: i for i in t]) # -> [-1, -1]\n[1, -1]\n\n",
"Expression f = lambda: i is equivalent to:\ndef f():\n return i\n\nExpression g = lambda i=i: i is equivalent to:\ndef g(i=i):\n return i\n\ni is a free variable in the first case and it is bound to the function parameter in the second case i.e., it is a local variable in that case. Values for default parameters are evaluated at the time of function definition. \nGenerator expression is the nearest enclosing scope (where i is defined) for i name in the lambda expression, therefore i is resolved in that block:\nf(*(lambda: i for i in (1, -1)) # -> [-1, -1]\n\ni is a local variable of the lambda i: ... block, therefore the object it refers to is defined in that block:\nf(*map(lambda i: lambda: i, (1,-1))) # -> [1, -1]\n\n"
] | [
9,
6,
4
] | [] | [] | [
"closures",
"generator_expression",
"late_binding",
"list_comprehension",
"python"
] | stackoverflow_0000139819_closures_generator_expression_late_binding_list_comprehension_python.txt |
Q:
Regular expressions but for writing in the match
When using regular expressions we generally, if not always, use them to extract some kind of information. What I need is to replace the match value with some other value...
Right now I'm doing this...
def getExpandedText(pattern, text, replaceValue):
    """
    One liner... really ugly but it's only used in here.
    """
    return text.replace(text[text.find(re.findall(pattern, text)[0]):], replaceValue) + \
           text[text.find(re.findall(pattern, text)[0]) + len(replaceValue):]
so if I do something like
>>> getExpandedText("aaa(...)bbb", "hola aaaiiibbb como estas?", "ooo")
'hola aaaooobbb como estas?'
It changes the (...) with 'ooo'.
Do you guys know whether with python regular expressions we can do this?
thanks a lot guys!!
A:
sub (replacement, string[, count = 0])
sub returns the string obtained by replacing the leftmost non-overlapping occurrences of the RE in string by the replacement replacement. If the pattern isn't found, string is returned unchanged.
>>> p = re.compile( '(blue|white|red)')
>>> p.sub( 'colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub( 'colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'
A:
You want to use re.sub:
>>> import re
>>> re.sub(r'aaa...bbb', 'aaaooobbb', "hola aaaiiibbb como estas?")
'hola aaaooobbb como estas?'
To re-use variable parts from the pattern, use \g<n> in the replacement string to access the n-th () group:
>>> re.sub( "(svcOrdNbr +)..", "\g<1>XX", "svcOrdNbr IASZ0080")
'svcOrdNbr XXSZ0080'
A:
Of course. See the 'sub' and 'subn' methods of compiled regular expressions, or the 're.sub' and 're.subn' functions. You can either make it replace the matches with a string argument you give, or you can pass a callable (such as a function) which will be called to supply the replacement. See https://docs.python.org/library/re.html
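A quick sketch of the callable form, reusing the pattern and data from the question:
>>> import re
>>> def shout(match):
...     return match.group(0).upper()  # match object in, replacement text out
...
>>> re.sub(r'aaa...bbb', shout, 'hola aaaiiibbb como estas?')
'hola AAAIIIBBB como estas?'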
A:
If you want to continue using the syntax you mentioned (replace the match value instead of replacing the part that didn't match), and considering you will only have one group, you could use the code below.
def getExpandedText(pattern, text, replaceValue):
    m = re.search(pattern, text)
    expandedText = text[:m.start(1)] + replaceValue + text[m.end(1):]
    return expandedText
A:
def getExpandedText(pattern,text,*group):
    r""" Searches for pattern in the text and replaces
    all captures with the values in group.

    Tag renaming:
    >>> html = '<div> abc <span id="x"> def </span> ghi </div>'
    >>> getExpandedText(r'</?(span\b)[^>]*>', html, 'div')
    '<div> abc <div id="x"> def </div> ghi </div>'

    Nested groups, capture-references:
    >>> getExpandedText(r'A(.*?Z(.*?))B', "abAcdZefBgh", r'<\2>')
    'abA<ef>Bgh'
    """
    pattern = re.compile(pattern)
    ret = []
    last = 0
    for m in pattern.finditer(text):
        for i in xrange(0,len(m.groups())):
            start,end = m.span(i+1)

            # nested or skipped group
            if start < last or group[i] is None:
                continue

            # text between the previous and current match
            if last < start:
                ret.append(text[last:start])

            last = end
            ret.append(m.expand(group[i]))

    ret.append(text[last:])
    return ''.join(ret)
Edit: Allow capture-references in the replacement strings.
| Regular expressions but for writing in the match | When using regular expressions we generally, if not always use them to extract some kind of information. What I need is to replace the match value with some other value...
Right now I'm doing this...
def getExpandedText(pattern, text, replaceValue):
"""
One liner... really ugly but it's only used in here.
"""
return text.replace(text[text.find(re.findall(pattern, text)[0]):], replaceValue) + \
text[text.find(re.findall(pattern, text)[0]) + len(replaceValue):]
so if I do sth like
>>> getExpandedText("aaa(...)bbb", "hola aaaiiibbb como estas?", "ooo")
'hola aaaooobbb como estas?'
It changes the (...) with 'ooo'.
Do you guys know whether with python regular expressions we can do this?
thanks a lot guys!!
| [
"sub (replacement, string[, count = 0])\n\nsub returns the string obtained by replacing the leftmost non-overlapping occurrences of the RE in string by the replacement replacement. If the pattern isn't found, string is returned unchanged.\n p = re.compile( '(blue|white|red)')\n >>> p.sub( 'colour', 'blue socks and red shoes')\n 'colour socks and colour shoes'\n >>> p.sub( 'colour', 'blue socks and red shoes', count=1)\n 'colour socks and red shoes'\n\n",
"You want to use re.sub:\n>>> import re\n>>> re.sub(r'aaa...bbb', 'aaaooobbb', \"hola aaaiiibbb como estas?\")\n'hola aaaooobbb como estas?'\n\nTo re-use variable parts from the pattern, use \\g<n> in the replacement string to access the n-th () group:\n>>> re.sub( \"(svcOrdNbr +)..\", \"\\g<1>XX\", \"svcOrdNbr IASZ0080\")\n'svcOrdNbr XXSZ0080'\n\n",
"Of course. See the 'sub' and 'subn' methods of compiled regular expressions, or the 're.sub' and 're.subn' functions. You can either make it replace the matches with a string argument you give, or you can pass a callable (such as a function) which will be called to supply the replacement. See https://docs.python.org/library/re.html\n",
"If you want to continue using the syntax you mentioned (replace the match value instead of replacing the part that didn't match), and considering you will only have one group, you could use the code below.\ndef getExpandedText(pattern, text, replaceValue):\n m = re.search(pattern, text)\n expandedText = text[:m.start(1)] + replaceValue + text[m.end(1):]\n return expandedText\n\n",
"def getExpandedText(pattern,text,*group):\n r\"\"\" Searches for pattern in the text and replaces\n all captures with the values in group.\n\n Tag renaming:\n >>> html = '<div> abc <span id=\"x\"> def </span> ghi </div>'\n >>> getExpandedText(r'</?(span\\b)[^>]*>', html, 'div')\n '<div> abc <div id=\"x\"> def </div> ghi </div>'\n\n Nested groups, capture-references:\n >>> getExpandedText(r'A(.*?Z(.*?))B', \"abAcdZefBgh\", r'<\\2>')\n 'abA<ef>Bgh'\n \"\"\"\n pattern = re.compile(pattern)\n ret = []\n last = 0\n for m in pattern.finditer(text):\n for i in xrange(0,len(m.groups())):\n start,end = m.span(i+1)\n\n # nested or skipped group\n if start < last or group[i] is None:\n continue\n\n # text between the previous and current match\n if last < start:\n ret.append(text[last:start])\n\n last = end\n ret.append(m.expand(group[i]))\n\n ret.append(text[last:])\n return ''.join(ret)\n\nEdit: Allow capture-references in the replacement strings.\n"
] | [
7,
2,
1,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0000140182_python_regex.txt |
Q:
Python reading Oracle path
On my desktop I have written a small Pylons app that connects to Oracle. I'm now trying to deploy it to my server which is running Win2k3 x64. (My desktop is 32-bit XP) The Oracle installation on the server is also 64-bit.
I was getting errors about loading the OCI dll, so I installed the 32 bit client into C:\oracle32.
If I add this to the PATH environment variable, it works great. But I also want to run the Pylons app as a service (using this recipe) and don't want to put this 32-bit library on the path for all other applications.
I tried using sys.path.append("C:\\oracle32\\bin") but that doesn't seem to work.
A:
sys.path is python's internal representation of the PYTHONPATH; it sounds to me like you want to modify the PATH.
I'm not sure that this will work, but you can try:
import os
os.environ['PATH'] += os.pathsep + "C:\\oracle32\\bin"
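One caveat (assuming cx_Oracle is the driver, as the tags suggest): the PATH change must happen before the Oracle client DLL is first loaded, i.e. before the driver module is imported. Prepending also makes sure the 32-bit client wins over any other Oracle directory already on the PATH:
import os
os.environ['PATH'] = "C:\\oracle32\\bin" + os.pathsep + os.environ['PATH']
import cx_Oracle  # oci.dll is resolved now, using the updated PATH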
A:
You need to append the c:\Oracle32\bin directory to the PATH variable of your environment before you execute python.exe.
In Linux, I need to set up the LD_LIBRARY_PATH variable for similar reasons, to locate the Oracle libraries, before calling python. I use wrapper shell scripts that set the variable and then call Python.
In your case, maybe you can call, in the service startup, a .cmd or .vbs script that sets the PATH variable and then calls python.exe with your .py script.
I hope this helps!
A:
If your Python application runs in the 64-bit space, you will need to access a 64-bit installation of Oracle's oci.dll, rather than the 32-bit version. Normally you would update the system path to include the appropriate Oracle Home bin directory, prior to running the script. The solution may also vary depending on what component you are using to access Oracle from Python.
| Python reading Oracle path | On my desktop I have written a small Pylons app that connects to Oracle. I'm now trying to deploy it to my server which is running Win2k3 x64. (My desktop is 32-bit XP) The Oracle installation on the server is also 64-bit.
I was getting errors about loading the OCI dll, so I installed the 32 bit client into C:\oracle32.
If I add this to the PATH environment variable, it works great. But I also want to run the Pylons app as a service (using this recipe) and don't want to put this 32-bit library on the path for all other applications.
I tried using sys.path.append("C:\\oracle32\\bin") but that doesn't seem to work.
| [
"sys.path is python's internal representation of the PYTHONPATH, it sounds to me like you want to modify the PATH.\nI'm not sure that this will work, but you can try:\nimport os\nos.environ['PATH'] += os.pathsep + \"C:\\\\oracle32\\\\bin\"\n\n",
"You need to append the c:\\Oracle32\\bin directory to the PATH variable of your environment before you execute python.exe.\nIn Linux, I need to set up the LD_LIBRARY_PATH variable for similar reasons, to locate the Oracle libraries, before calling python. I use wrapper shell scripts that set the variable and then call Python.\nIn your case, maybe you can call, in the service startup, a .cmd or .vbs script that sets the PATH variable and then calls python.exe with your .py script.\nI hope this helps!\n",
"If your Python application runs in the 64-bit space, you will need to access a 64-bit installation of Oracle's oci.dll, rather than the 32-bit version. Normally you would update the system path to include the appropriate Oracle Home bin directory, prior to running the script. The solution may also vary depending on what component you are using to access Oracle from Python.\n"
] | [
3,
0,
0
] | [] | [] | [
"cx_oracle",
"oracle",
"pylons",
"python"
] | stackoverflow_0000095950_cx_oracle_oracle_pylons_python.txt |
Q:
Using locale.setlocale in embedded Python without breaking file parsing in C thread
We're using a third-party middleware product that allows us to write code in an embedded Python interpreter, and which exposes an API that we can call into. Some of these API calls allow us to load various kinds of file, and the loading code is implemented in C. File loading happens in a separate thread, and calls back into Python when the data is available. So far, all well and dandy.
We've been i14ing (heh) our product, and one thing we'd like to do is format user-facing numerical output according to the user's locale settings. So, from Python, we do this:
import locale
locale.setlocale( locale.LC_ALL, '' )
Now, this works (in that the user-facing numbers are formatted correctly for their locale). However, if the user's locale differs from the default C locale, any files that are subsequently loaded will return incorrect data, presumably because all string-to-float conversion has been affected, right down to the metal.
We can't work around this by implementing locale aware file loading, so our current workaround is to only set the locale when formatting output for the user, and then set it back again afterwards. That is, something like:
import locale
currentLocale = locale.setlocale( locale.LC_ALL )  # query form; locale.getlocale() rejects LC_ALL
locale.setlocale( locale.LC_ALL, '' )
displayNumbersToTheUser()
locale.setlocale( locale.LC_ALL, currentLocale )
This seems a bit clunky, and I was wondering whether this is a common approach to formatting locale-aware output for the user? My other concern is that this is obviously not thread safe, so we'll presumably still get problems if any file parsing occurs in a separate thread when the locale is changed.
Any information on best practice is appreciated - I don't have much experience with this sort of thing.
A:
Setting the locale after multiple threads have started operating may have unexpected results. Unless I could figure out a more subtle approach, I'd probably just split file loading and the user interface into separate processes, communicating through a pipe or a file socket.
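If the save/restore dance from the question turns out to be the only option, it can at least be packaged up. A sketch (still process-wide state, so still not thread-safe):
import locale
from contextlib import contextmanager

@contextmanager
def user_locale(category=locale.LC_ALL):
    saved = locale.setlocale(category)        # query the current setting
    try:
        yield locale.setlocale(category, '')  # switch to the user's locale
    finally:
        locale.setlocale(category, saved)     # always restore

Used as: with user_locale(): displayNumbersToTheUser().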
| Using locale.setlocale in embedded Python without breaking file parsing in C thread | We're using a third-party middleware product that allows us to write code in an embedded Python interpreter, and which exposes an API that we can call into. Some of these API calls allow us to load various kinds of file, and the loading code is implemented in C. File loading happens in a separate thread, and calls back into Python when the data is available. So far, all well and dandy.
We've been i14ing (heh) our product, and one thing we'd like to do is format user-facing numerical output according to the user's locale settings. So, from Python, we do this:
import locale
locale.setLocale( locale.LC_ALL, '' )
Now, this works (in that the user-facing numbers are formatted correctly for their locale). However, if the user's locale differs from the default C locale, any files that are subsequently loaded will return incorrect data, presumably because all string-to-float conversion has been affected, right down to the metal.
We can't work around this by implementing locale aware file loading, so our current workaround is to only set the locale when formatting output for the user, and then set it back again afterwards. That is, something like:
import locale
currentLocale = locale.getLocale( locale.LC_ALL )
locale.setLocale( locale.LC_ALL, '' )
displayNumbersToTheUser()
locale.setlocale( locale.LC_ALL, currentLocale )
This seems a bit clunky, and I was wondering whether this is a common approach to formatting locale-aware output for the user? My other concern is that this is obviously not thread safe, so we'll presumably still get problems if any file parsing occurs in a separate thread when the locale is changed.
Any information on best practice is appreciated - I don't have much experience with this sort of thing.
| [
"Setting the locale after multiple threads have started operating may have unexpected results. Unless I could figure out a more subtle approach, I'd probably just split file loading and the user interface into separate processes, communicating through a pipe or a file socket.\n"
] | [
1
] | [] | [] | [
"internationalization",
"locale",
"python"
] | stackoverflow_0000140295_internationalization_locale_python.txt |
Q:
Something like Explorer's icon grid view in a Python GUI
I am making a Python GUI project that needs to duplicate the look of a Windows GUI environment (i.e. Explorer). I have my own custom icons to draw, but they should be selectable by the same methods as usual: click, ctrl-click, drag box, etc. Are any of the GUI toolkits going to help with this, or will I have to implement it all myself? If there aren't any tools to help with this, advice would be greatly appreciated.
edit I am not trying to recreate explorer, that would be madness. I simply want to be able to take icons and lay them out in a scrollable window. Any number of them may be selected at once. It would be great if there was something that could select/deselect them in the same (appearing at least) way that Windows does. Then all I would need is a list of all the selected icons.
A:
Python has extensions for accessing the Win32 API, but good luck trying to re-write explorer in that by yourself. Your best bet is to use a toolkit like Qt, but you'll still have to write the vast majority of the application from scratch.
Is there any way you can re-use explorer itself in your project?
Updated for edited question:
GTK+ has an icon grid widget that you could use. See a reference for PyGTK+: gtk.IconView
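A minimal PyGTK sketch of that widget (the two model columns are just one reasonable layout):
import gtk

store = gtk.ListStore(str, gtk.gdk.Pixbuf)       # label, icon
view = gtk.IconView(store)
view.set_text_column(0)
view.set_pixbuf_column(1)
view.set_selection_mode(gtk.SELECTION_MULTIPLE)  # click, ctrl-click, rubber-band

scrolled = gtk.ScrolledWindow()                  # makes the grid scrollable
scrolled.add(view)

paths = view.get_selected_items()                # tree paths of the selected icons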
A:
In wxPython there's a plethora of ready-made list and tree controls (CustomTreeCtrl, TreeListCtrl, and others), a mixture of which you can use to create a simple explorer in minutes. The wxPython demo even has a few relevant examples (see the demo of MVCTree).
A:
I'll assume you're serious and suggest that you check out the many wonderful GUI libraries available for Python.
| Something like Explorer's icon grid view in a Python GUI | I am making a Python gui project that needs to duplicate the look of a Windows gui environment (ie Explorer). I have my own custom icons to draw but they should be selectable by the same methods as usual; click, ctrl-click, drag box etc. Are any of the gui toolkits going to help with this or will I have to implement it all myself. If there aren't any tools to help with this advice would be greatly appreciated.
edit I am not trying to recreate explorer, that would be madness. I simply want to be able to take icons and lay them out in a scrollable window. Any number of them may be selected at once. It would be great if there was something that could select/deselect them in the same (appearing at least) way that Windows does. Then all I would need is a list of all the selected icons.
| [
"Python has extensions for accessing the Win32 API, but good luck trying to re-write explorer in that by yourself. Your best bet is to use a toolkit like Qt, but you'll still have to write the vast majority of the application from scratch.\nIs there any way you can re-use explorer itself in your project?\n\nUpdated for edited question:\nGTK+ has an icon grid widget that you could use. See a reference for PyGTK+: gtk.IconView\n",
"In wxPython there's a plethora of ready-made list and tree controls (CustomTreeCtrl, TreeListCtrl, and others), a mixture of which you can use to create a simple explorer in minutes. The wxPython demo even has a few relevant examples (see the demo of MVCTree).\n",
"I'll assume you're serious and suggest that you check out the many wonderful GUI libraries available for Python.\n"
] | [
3,
2,
1
] | [] | [] | [
"python",
"user_interface"
] | stackoverflow_0000145155_python_user_interface.txt |
Q:
Dynamic radio button creation
In wxPython, if I create a list of radio buttons and place the list initially, is it possible to change the contents in that list later?
For example, I have a panel that uses a boxSizer to place the widgets initially. One of those widgets is a list of radio buttons (I have also tried a normal radiobox). I would like to dynamically change the list based on variables from another class.
However, once the list is placed in the sizer, it's effectively "locked"; I can't just modify the list and have the changes appear. If I try re-adding the list to the sizer, it just gets put in the top left corner of the panel.
I'm sure I could hide the original list and manually place the new list in the same position but that feels like a kludge. I'm sure I'm making this harder than it is. I'm probably using the wrong widgets for this, much less the wrong approach, but I'm building this as a learning experience.
class Job(wiz.WizardPageSimple):
    """Character's job class."""
    def __init__(self, parent, title, attribs):
        wiz.WizardPageSimple.__init__(self, parent)
        self.next = self.prev = None
        self.sizer = makePageTitle(self, title)
        self.charAttribs = attribs

        #---Create widgets
        self.Job_list = ["Aircraft Mechanic", "Vehicle Mechanic", "Electronics Specialist"]
        box1_title = wx.StaticBox( self, -1, "" )
        box1 = wx.StaticBoxSizer( box1_title, wx.VERTICAL )
        grid1 = wx.BoxSizer(wx.VERTICAL)
        for item in self.Job_list:
            radio = wx.RadioButton(self, -1, item)
            grid1.Add(radio)

        ##Debugging
        self.btn = wx.Button(self, -1, "click")
        self.Bind(wx.EVT_BUTTON, self.eligibleJob, self.btn)

        #---Place widgets
        self.sizer.Add(self.Job_intro)
        self.sizer.Add(self.btn)
        box1.Add(grid1)
        self.sizer.Add(box1)

    def eligibleJob(self, event):
        """Determine which Jobs a character is eligible for."""
        if self.charAttribs.intelligence >= 12:
            skillList = ["Analyst", "Interrogator", "Fire Specialist", "Aircraft Pilot"]
            for skill in skillList:
                self.Job_list.append(skill)
            print self.Job_list ##Debugging
            #return self.Job_list
A:
To make new list elements appear in the correct places, you have to re-lay out the grid after adding them. For example, to add a few new items, you could call:
def addNewSkills(self, newSkillList):
'''newSkillList is a list of skill names you want to add'''
for skillName in newSkillList:
newRadioButton = wx.RadioButton(self, -1, skillName)
self.grid1.Add(newRadioButton) # appears in top-left corner of the panel
self.Layout() # all newly added radio buttons appear where they should be
self.Fit() # if you need to resize the panel to fit new items, this will help
where self.grid1 is the sizer you keep all your radio buttons on.
A:
Two possible solutions
Rebuild the sizer with the radio widgets each time you have to make a change
Hold the radio button widgets in a list, and call SetLabel each time you have to change their labels.
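A rough sketch of the second option, assuming the buttons were saved in a self.radios list when they were created (relabelJobs is a made-up name):
def relabelJobs(self, newLabels):
    # newLabels must be the same length as self.radios
    for radio, label in zip(self.radios, newLabels):
        radio.SetLabel(label)
    self.Layout()  # re-layout so the wider/narrower labels fit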
A:
I was able to fix it by using the info DzinX provided, with some modification.
It appears that placing the radio button box first "locked" the box into the sizer. If I tried to add a new box, I would get an error message stating that I was trying to add the widget to the same sizer twice.
By leaving the radio buttons out initially and having the user click a button to call a method, I could simply add the list of radio buttons without a problem.
Additionally, by having the user click a button, I did not run into errors of "class Foo has no attribute 'bar'". Apparently, when the wizard initializes, the attributes aren't available to the rest of the wizard pages. I had thought the wizard pages were dynamically created with each click of "Next", but they are all created at the same time.
| Dynamic radio button creation | In wxPython, if I create a list of radio buttons and place the list initially, is it possible to change the contents in that list later?
For example, I have a panel that uses a boxSizer to place the widgets initially. One of those widgets is a list of radio buttons (I have also tried a normal radiobox). I would like to dynamically change the list based on variables from another class.
However, once the list is placed in the sizer, it's effectively "locked"; I can't just modify the list and have the changes appear. If I try re-adding the list to the sizer, it just gets put in the top left corner of the panel.
I'm sure I could hide the original list and manually place the new list in the same position but that feels like a kludge. I'm sure I'm making this harder than it is. I'm probably using the wrong widgets for this, much less the wrong approach, but I'm building this as a learning experience.
class Job(wiz.WizardPageSimple):
"""Character's job class."""
def __init__(self, parent, title, attribs):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
self.charAttribs = attribs
#---Create widgets
self.Job_list = ["Aircraft Mechanic", "Vehicle Mechanic", "Electronics Specialist"]
box1_title = wx.StaticBox( self, -1, "" )
box1 = wx.StaticBoxSizer( box1_title, wx.VERTICAL )
grid1 = wx.BoxSizer(wx.VERTICAL)
for item in self.Job_list:
radio = wx.RadioButton(self, -1, item)
grid1.Add(radio)
##Debugging
self.btn = wx.Button(self, -1, "click")
self.Bind(wx.EVT_BUTTON, self.eligibleJob, self.btn)
#---Place widgets
self.sizer.Add(self.Job_intro)
self.sizer.Add(self.btn)
box1.Add(grid1)
self.sizer.Add(box1)
def eligibleJob(self, event):
"""Determine which Jobs a character is eligible for."""
if self.charAttribs.intelligence >= 12:
skillList = ["Analyst", "Interrogator", "Fire Specialist", "Aircraft Pilot"]
for skill in skillList:
self.Job_list.append(skill)
print self.Job_list ##Debugging
#return self.Job_list
| [
"To make new list elements appear in correct places, you have to re-layout the grid after adding new elements to it. For example, to add a few new items, you could call:\ndef addNewSkills(self, newSkillList):\n '''newSkillList is a list of skill names you want to add'''\n for skillName in newSkillList:\n newRadioButton = wx.RadioButton(self, -1, skillName)\n self.grid1.Add(newRadioButton) # appears in top-left corner of the panel\n self.Layout() # all newly added radio buttons appear where they should be\n self.Fit() # if you need to resize the panel to fit new items, this will help\n\nwhere self.grid1 is the sizer you keep all your radio buttons on.\n",
"Two possible solutions\n\nRebuild the sizer with the radio widgets each time you have to make a change\nHold the radio button widgets in a list, and call SetLabel each time you have to change their labels.\n\n",
"I was able to fix it by using the info DzinX provided, with some modification.\nIt appears that posting the radio buttons box first \"locked in\" the box to the sizer. If I tried to add a new box, I would get an error message stating that I was trying to add the widget to the same sizer twice.\nBy simply removing the radio buttons initially and having the user click a button to call a method, I could simply add a the list of radio buttons without a problem.\nAdditionally, by having the user click a button, I did not run into errors of \"class Foo has no attribute 'bar'\". Apparently, when the wizard initalizes, the attributes aren't available to the rest of the wizard pages. I had thought the wizard pages were dynamically created with each click of \"Next\" but they are all created at the same time.\n"
] | [
1,
0,
0
] | [] | [] | [
"layout",
"python",
"user_interface",
"wxpython"
] | stackoverflow_0000138353_layout_python_user_interface_wxpython.txt |
Q:
How do I successfully pass a function reference to Django’s reverse() function?
I’ve got a brand new Django project. I’ve added one minimal view function to views.py, and one URL pattern to urls.py, passing the view by function reference instead of a string:
# urls.py
# -------
# coding=utf-8
from django.conf.urls.defaults import *
from myapp import views
urlpatterns = patterns('',
url(r'^myview/$', views.myview),
)
# views.py
----------
# coding=utf-8
from django.http import HttpResponse
def myview(request):
return HttpResponse('MYVIEW LOL', content_type="text/plain")
I’m trying to use reverse() to get the URL, by passing it a function reference. But I’m not getting a match, despite confirming that the view function I’m passing to reverse is the exact same view function I put in the URL pattern:
>>> from django.core.urlresolvers import reverse
>>> import urls
>>> from myapp import views
>>> urls.urlpatterns[0].callback is views.myview
True
>>> reverse(views.myview)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 254, in reverse
*args, **kwargs)))
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 243, in reverse
"arguments '%s' not found." % (lookup_view, args, kwargs))
NoReverseMatch: Reverse for '<function myview at 0x6fe6b0>' with arguments '()' and keyword arguments '{}' not found.
As far as I can tell from the documentation, function references should be fine in both the URL pattern and reverse().
URL patterns with function references
reverse with function references
I’m using the Django trunk, revision 9092.
A:
Got it!! The problem is that some of the imports are of myproject.myapp.views, and some are just of myapp.views. This is confusing the Python module system enough that it no longer detects the functions as the same object. This is because your main settings.py probably has a line like:
ROOT_URLCONF = 'myproject.urls'
To solve this, try using the full import in your shell session:
>>> from django.core.urlresolvers import reverse
>>> from myproject.myapp import views
>>> reverse(views.myview)
'/myview/'
Here's a log of the debugging session, for any interested future readers:
>>> from django.core import urlresolvers
>>> from myapp import myview
>>> urlresolvers.get_resolver (None).reverse_dict
{None: ([(u'myview/', [])], 'myview/$'), <function myview at 0x845d17c>: ([(u'myview/', [])], 'myview/$')}
>>> v1 = urlresolvers.get_resolver (None).reverse_dict.items ()[1][0]
>>> reverse(v1)
'/myview/'
>>> v1 is myview
False
>>> v1.__module__
'testproject.myapp.views'
>>> myview.__module__
'myapp.views'
What happens if you change the URL match to be r'^myview/$'?
Have you tried it with the view name? Something like reverse ('myapp.myview')?
Is urls.py the root URLconf, or in the myapp application? There needs to be a full path from the root to a view for it to be resolved. If that's myproject/myapp/urls.py, then in myproject/urls.py you'll need code like this:
from django.conf.urls.defaults import patterns, include
urlpatterns = patterns('',
    (r'^', include('myapp.urls')),
)
A:
If your two code pastes are complete, then it doesn't look like the second one, which makes the actual call to reverse(), ever imports the urls module, and therefore the url mapping is never actually set up.
| How do I successfully pass a function reference to Django’s reverse() function? | I’ve got a brand new Django project. I’ve added one minimal view function to views.py, and one URL pattern to urls.py, passing the view by function reference instead of a string:
# urls.py
# -------
# coding=utf-8
from django.conf.urls.defaults import *
from myapp import views
urlpatterns = patterns('',
url(r'^myview/$', views.myview),
)
# views.py
----------
# coding=utf-8
from django.http import HttpResponse
def myview(request):
return HttpResponse('MYVIEW LOL', content_type="text/plain")
I’m trying to use reverse() to get the URL, by passing it a function reference. But I’m not getting a match, despite confirming that the view function I’m passing to reverse is the exact same view function I put in the URL pattern:
>>> from django.core.urlresolvers import reverse
>>> import urls
>>> from myapp import views
>>> urls.urlpatterns[0].callback is views.myview
True
>>> reverse(views.myview)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 254, in reverse
*args, **kwargs)))
File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 243, in reverse
"arguments '%s' not found." % (lookup_view, args, kwargs))
NoReverseMatch: Reverse for '<function myview at 0x6fe6b0>' with arguments '()' and keyword arguments '{}' not found.
As far as I can tell from the documentation, function references should be fine in both the URL pattern and reverse().
URL patterns with function references
reverse with function references
I’m using the Django trunk, revision 9092.
| [
"Got it!! The problem is that some of the imports are of myproject.myapp.views, and some are just of myapp.views. This is confusing the Python module system enough that it no longer detects the functions as the same object. This is because your main settings.py probably has a line like:\nROOT_URLCONF = `myproject.urls`\n\nTo solve this, try using the full import in your shell session:\n>>> from django.core.urlresolvers import reverse\n>>> from myproject.myapp import views\n>>> reverse(views.myview)\n'/myview/'\n\nHere's a log of the debugging session, for any interested future readers:\n>>> from django.core import urlresolvers\n>>> from myapp import myview\n>>> urlresolvers.get_resolver (None).reverse_dict\n{None: ([(u'myview/', [])], 'myview/$'), <function myview at 0x845d17c>: ([(u'myview/', [])], 'myview/$')}\n>>> v1 = urlresolvers.get_resolver (None).reverse_dict.items ()[1][0]\n>>> reverse(v1)\n'/myview/'\n>>> v1 is myview\nFalse\n>>> v1.__module__\n'testproject.myapp.views'\n>>> myview.__module__\n'myapp.views'\n\nWhat happens if you change the URL match to be r'^myview/$'?\n\nHave you tried it with the view name? Something like reverse ('myapp.myview')?\nIs urls.py the root URLconf, or in the myapp application? There needs to be a full path from the root to a view for it to be resolved. If that's myproject/myapp/urls.py, then in myproject/urls.py you'll need code like this:\nfrom django.conf.urls.defaults import patterns\nurlpatterns = patterns ('',\n (r'^/', 'myapp.urls'),\n)\n\n",
"If your two code pastes are complete, then it doesn't look like the second, which makes the actual call to reverse(), ever imports the urls module and therefor if the url mapping is ever actually achieved.\n"
] | [
11,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000146522_django_python.txt |
Q:
How do I use django.core.urlresolvers.reverse with a function reference instead of a named URL pattern?
In my urls.py file, I have:
from myapp import views
...
(r'^categories/$', views.categories)
Where categories is a view function inside myapp/views.py. No other URLconf lines reference views.categories.
In a unit test file, I’m trying to grab this URL using django.core.urlresolvers.reverse(), instead of just copying '/categories/' (DRY and all that). So, I have:
from django.core.urlresolvers import reverse
from myapp import views
...
url = reverse(views.categories)
When I run my tests, I get a NoReverseMatch error:
NoReverseMatch: Reverse for '<function categories at 0x1082f30>' with arguments '()' and keyword arguments '{}' not found.
It matches just fine if I make the URL pattern a named pattern, like this:
url(r'^categories/$', views.categories, name='myapp-categories')
And use the pattern name to match it:
url = reverse('myapp-categories')
But as far as I can tell from the reverse documentation, I shouldn’t need to make it a named URL pattern just to use reverse.
Any ideas what I’m doing wrong?
A:
Jack M.'s example is nearly correct.
It needs to be a url function, not a tuple, if you want to use named urls.
url(r'^no_monkeys/$', 'views.noMonkeys', {}, "no-monkeys"),
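With the name in place, reversing no longer needs a function reference at all; for example, assuming the pattern above is reachable from the root URLconf:
>>> from django.core.urlresolvers import reverse
>>> reverse('no-monkeys')
'/no_monkeys/'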
A:
After further investigation, it turns out it was an issue with how I was importing the views module:
How do I successfully pass a function reference to Django’s reverse() function?
Thanks for the help though, guys: you inspired me to look at it properly.
A:
This does work, and all the code that you've pasted is correct and works fine (I just copied it into a clean test/project app and it reversed the URL without any problem). So there's something else going on here that you haven't showed us. Simplify down to the bare-bones basics until it works, then start adding complexity back in and see where it's breaking.
Also, you can do "./manage.py shell" and then interactively import the reverse function and your view function and try the reverse. That'll remove the test setup as a possible cause.
| How do I use django.core.urlresolvers.reverse with a function reference instead of a named URL pattern? | In my urls.py file, I have:
from myapp import views
...
(r'^categories/$', views.categories)
Where categories is a view function inside myapp/views.py. No other URLconf lines reference views.categories.
In a unit test file, I’m trying to grab this URL using django.core.urlresolvers.reverse(), instead of just copying '/categories/' (DRY and all that). So, I have:
from django.core.urlresolvers import reverse
from myapp import views
...
url = reverse(views.categories)
When I run my tests, I get a NoReverseMatch error:
NoReverseMatch: Reverse for '<function categories at 0x1082f30>' with arguments '()' and keyword arguments '{}' not found.
It matches just fine if I make the URL pattern a named pattern, like this:
url(r'^categories/$', views.categories, 'myapp-categories')
And use the pattern name to match it:
url = reverse('myapp-categories')
But as far as I can tell from the reverse documentation, I shouldn’t need to make it a named URL pattern just to use reverse.
Any ideas what I’m doing wrong?
| [
"Jack M.'s example is nearly correct.\nIt needs to be a url function, not a tuple, if you want to use named urls.\nurl(r'^no_monkeys/$', 'views.noMonkeys', {}, \"no-monkeys\"),\n\n",
"After futher investigation, turns out it was an issue with how I was importing the views module:\nHow do I successfully pass a function reference to Django’s reverse() function?\nThanks for the help though, guys: you inspired me to look at it properly.\n",
"This does work, and all the code that you've pasted is correct and works fine (I just copied it into a clean test/project app and it reversed the URL without any problem). So there's something else going on here that you haven't showed us. Simplify down to the bare-bones basics until it works, then start adding complexity back in and see where it's breaking.\nAlso, you can do \"./manage.py shell\" and then interactively import the reverse function and your view function and try the reverse. That'll remove the test setup as a possible cause.\n"
] | [
2,
2,
0
] | [
"The reverse function actually uses the \"name\" of the URL. This is defined like so:\nurlpatterns = patterns('',\n (r'^no_monkeys/$', 'views.noMonkeys', {}, \"no-monkeys\"),\n (r'^admin/(.*)', admin.site.root),\n)\n\nNow you would call reverse with the string \"no-monkeys\" to get the correct url.\nNinja Edit: Here is a link to the django docs on the subject.\n"
] | [
-1
] | [
"django",
"python"
] | stackoverflow_0000134629_django_python.txt |
Q:
I'm using Python regexes in a criminally inefficient manner
My goal here is to create a very simple template language. At the moment, I'm working on replacing a variable with a value, like this:
This input:
<%"TITLE"="This Is A Test Variable"%>The Web <%"TITLE"%>
Should produce this output:
The Web This Is A Test Variable
I've got it working. But looking at my code, I'm running multiple identical regexes on the same strings -- that just offends my sense of efficiency. There's got to be a better, more Pythonic way. (It's the two "while" loops that really offend.)
This does pass the unit tests, so if this is silly premature optimization, tell me -- I'm willing to let this go. There may be dozens of these variable definitions and uses in a document, but not hundreds. But I suspect there's obvious (to other people) ways of improving this, and I'm curious what the StackOverflow crowd will come up with.
def stripMatchedQuotes(item):
MatchedSingleQuotes = re.compile(r"'(.*)'", re.LOCALE)
MatchedDoubleQuotes = re.compile(r'"(.*)"', re.LOCALE)
item = MatchedSingleQuotes.sub(r'\1', item, 1)
item = MatchedDoubleQuotes.sub(r'\1', item, 1)
return item
def processVariables(item):
VariableDefinition = re.compile(r'<%(.*?)=(.*?)%>', re.LOCALE)
VariableUse = re.compile(r'<%(.*?)%>', re.LOCALE)
Variables={}
while VariableDefinition.search(item):
VarName, VarDef = VariableDefinition.search(item).groups()
VarName = stripMatchedQuotes(VarName).upper().strip()
VarDef = stripMatchedQuotes(VarDef.strip())
Variables[VarName] = VarDef
item = VariableDefinition.sub('', item, 1)
while VariableUse.search(item):
VarName = stripMatchedQuotes(VariableUse.search(item).group(1).upper()).strip()
item = VariableUse.sub(Variables[VarName], item, 1)
return item
A:
The first thing that may improve things is to move the re.compile outside the function. The compilation is cached, but there is a speed hit in checking this to see if it's compiled.
Another possibility is to use a single regex as below:
MatchedQuotes = re.compile(r"(['\"])(.*)\1", re.LOCALE)
item = MatchedQuotes.sub(r'\2', item, 1)
Finally, you can combine this into the regex in processVariables. Taking Torsten Marek's suggestion to use a function for re.sub, this improves and simplifies things dramatically.
VariableDefinition = re.compile(r'<%(["\']?)(.*?)\1=(["\']?)(.*?)\3%>', re.LOCALE)
VarRepl = re.compile(r'<%(["\']?)(.*?)\1%>', re.LOCALE)
def processVariables(item):
vars = {}
def findVars(m):
vars[m.group(2).upper()] = m.group(4)
return ""
item = VariableDefinition.sub(findVars, item)
return VarRepl.sub(lambda m: vars[m.group(2).upper()], item)
print processVariables('<%"TITLE"="This Is A Test Variable"%>The Web <%"TITLE"%>')
Here are my timings for 100000 runs:
Original : 13.637
Global regexes : 12.771
Single regex : 9.095
Final version : 1.846
[Edit] Add missing non-greedy specifier
[Edit2] Added .upper() calls so case insensitive like original version
A:
sub can take a callable as it's argument rather than a simple string. Using that, you can replace all variables with one function call:
>>> import re
>>> var_matcher = re.compile(r'<%(.*?)%>', re.LOCALE)
>>> string = '<%"TITLE"%> <%"SHMITLE"%>'
>>> values = {'"TITLE"': "I am a title.", '"SHMITLE"': "And I am a shmitle."}
>>> var_matcher.sub(lambda m: values[m.group(1)], string)
'I am a title. And I am a shmitle.'
Follow eduffy.myopenid.com's advice and keep the compiled regexes around.
The same recipe can be applied to the first loop, only there you need to store the value of the variable first, and always return "" as replacement.
A:
Never create your own programming language. Ever. (I used to have an exception to this rule, but not any more.)
There is always an existing language you can use which suits your needs better. If you elaborated on your use-case, people may help you select a suitable language.
A:
Creating a templating language is all well and good, but shouldn't one of the goals of the templating language be easy readability and efficient parsing? The example you gave seems to be neither.
As Jamie Zawinski famously said:
Some people, when confronted with a
problem, think "I know, I'll use
regular expressions!" Now they have
two problems.
If regular expressions are a solution to a problem you have created, the best bet is not to write a better regular expression, but to redesign your approach to eliminate their use entirely. Regular expressions are complicated, expensive, hugely difficult to maintain, and (ideally) should only be used for working around a problem someone else created.
A:
You can match both kinds of quotes in one go with r"(\"|')(.*?)\1" - the \1 refers to the first group, so it will only match matching quotes.
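A quick demonstration of that backreference stripping both quote styles in one pass:
>>> import re
>>> s = "say 'hi' and \"bye\""
>>> re.sub(r"(\"|')(.*?)\1", r'\2', s)
'say hi and bye'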
A:
You're calling re.compile quite a bit. A global variable for these wouldn't hurt here.
A:
If a regexp only contains one .* wildcard and literals, then you can use find and rfind to locate the opening and closing delimiters.
If it contains only a series of .*? wildcards, and literals, then you can just use a series of find calls to do the work.
If the code is time-critical, this switch away from regexp's altogether might give a little more speed.
Also, it looks to me like this is an LL-parsable language. You could look for a library that can already parse such things for you. You could also use recursive calls to do a one-pass parse -- for example, you could implement your processVariables function to only consume up the first quote, and then call a quote-matching function to consume up to the next quote, etc.
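A rough sketch of the find-based scan for a single <% ... %> token, assuming the delimiters never nest:
start = item.find('<%')
if start != -1:
    end = item.find('%>', start + 2)
    if end != -1:
        inner = item[start + 2:end]  # the text between the delimiters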
A:
Why not use Mako? Seriously. What feature do you require that Mako doesn't have? Perhaps you can adapt or extend something that already works.
A:
Don't call search twice in a row (in the loop conditional, and the first statement in the loop). Call (and cache the result) once before the loop, and then in the final statement of the loop.
| I'm using Python regexes in a criminally inefficient manner | My goal here is to create a very simple template language. At the moment, I'm working on replacing a variable with a value, like this:
This input:
The Web
Should produce this output:
The Web This Is A Test Variable
I've got it working. But looking at my code, I'm running multiple identical regexes on the same strings -- that just offends my sense of efficiency. There's got to be a better, more Pythonic way. (It's the two "while" loops that really offend.)
This does pass the unit tests, so if this is silly premature optimization, tell me -- I'm willing to let this go. There may be dozens of these variable definitions and uses in a document, but not hundreds. But I suspect there's obvious (to other people) ways of improving this, and I'm curious what the StackOverflow crowd will come up with.
def stripMatchedQuotes(item):
MatchedSingleQuotes = re.compile(r"'(.*)'", re.LOCALE)
MatchedDoubleQuotes = re.compile(r'"(.*)"', re.LOCALE)
item = MatchedSingleQuotes.sub(r'\1', item, 1)
item = MatchedDoubleQuotes.sub(r'\1', item, 1)
return item
def processVariables(item):
VariableDefinition = re.compile(r'<%(.*?)=(.*?)%>', re.LOCALE)
VariableUse = re.compile(r'<%(.*?)%>', re.LOCALE)
Variables={}
while VariableDefinition.search(item):
VarName, VarDef = VariableDefinition.search(item).groups()
VarName = stripMatchedQuotes(VarName).upper().strip()
VarDef = stripMatchedQuotes(VarDef.strip())
Variables[VarName] = VarDef
item = VariableDefinition.sub('', item, 1)
while VariableUse.search(item):
VarName = stripMatchedQuotes(VariableUse.search(item).group(1).upper()).strip()
item = VariableUse.sub(Variables[VarName], item, 1)
return item
| [
"The first thing that may improve things is to move the re.compile outside the function. The compilation is cached, but there is a speed hit in checking this to see if its compiled.\nAnother possibility is to use a single regex as below:\nMatchedQuotes = re.compile(r\"(['\\\"])(.*)\\1\", re.LOCALE)\nitem = MatchedQuotes.sub(r'\\2', item, 1)\n\nFinally, you can combine this into the regex in processVariables. Taking Torsten Marek's suggestion to use a function for re.sub, this improves and simplifies things dramatically.\nVariableDefinition = re.compile(r'<%([\"\\']?)(.*?)\\1=([\"\\']?)(.*?)\\3%>', re.LOCALE)\nVarRepl = re.compile(r'<%([\"\\']?)(.*?)\\1%>', re.LOCALE)\n\ndef processVariables(item):\n vars = {}\n def findVars(m):\n vars[m.group(2).upper()] = m.group(4)\n return \"\"\n\n item = VariableDefinition.sub(findVars, item)\n return VarRepl.sub(lambda m: vars[m.group(2).upper()], item)\n\nprint processVariables('<%\"TITLE\"=\"This Is A Test Variable\"%>The Web <%\"TITLE\"%>')\n\nHere are my timings for 100000 runs:\nOriginal : 13.637\nGlobal regexes : 12.771\nSingle regex : 9.095\nFinal version : 1.846\n\n[Edit] Add missing non-greedy specifier\n[Edit2] Added .upper() calls so case insensitive like original version\n",
"sub can take a callable as it's argument rather than a simple string. Using that, you can replace all variables with one function call:\n>>> import re\n>>> var_matcher = re.compile(r'<%(.*?)%>', re.LOCALE)\n>>> string = '<%\"TITLE\"%> <%\"SHMITLE\"%>'\n>>> values = {'\"TITLE\"': \"I am a title.\", '\"SHMITLE\"': \"And I am a shmitle.\"}\n>>> var_matcher.sub(lambda m: vars[m.group(1)], string)\n'I am a title. And I am a shmitle.\n\nFollow eduffy.myopenid.com's advice and keep the compiled regexes around. \nThe same recipe can be applied to the first loop, only there you need to store the value of the variable first, and always return \"\" as replacement.\n",
"Never create your own programming language. Ever. (I used to have an exception to this rule, but not any more.)\nThere is always an existing language you can use which suits your needs better. If you elaborated on your use-case, people may help you select a suitable language.\n",
"Creating a templating language is all well and good, but shouldn't one of the goals of the templating language be easy readability and efficient parsing? The example you gave seems to be neither.\nAs Jamie Zawinsky famously said:\n\nSome people, when confronted with a\n problem, think \"I know, I'll use\n regular expressions!\" Now they have\n two problems.\n\nIf regular expressions are a solution to a problem you have created, the best bet is not to write a better regular expression, but to redesign your approach to eliminate their use entirely. Regular expressions are complicated, expensive, hugely difficult to maintain, and (ideally) should only be used for working around a problem someone else created.\n",
"You can match both kind of quotes in one go with r\"(\\\"|')(.*?)\\1\" - the \\1 refers to the first group, so it will only match matching quotes.\n",
"You're calling re.compile quite a bit. A global variable for these wouldn't hurt here.\n",
"If a regexp only contains one .* wildcard and literals, then you can use find and rfind to locate the opening and closing delimiters.\nIf it contains only a series of .*? wildcards, and literals, then you can just use a series of find's to do the work.\nIf the code is time-critical, this switch away from regexp's altogether might give a little more speed.\nAlso, it looks to me like this is an LL-parsable language. You could look for a library that can already parse such things for you. You could also use recursive calls to do a one-pass parse -- for example, you could implement your processVariables function to only consume up the first quote, and then call a quote-matching function to consume up to the next quote, etc.\n",
"Why not use Mako? Seriously. What feature do you require that Mako doesn't have? Perhaps you can adapt or extend something that already works.\n",
"Don't call search twice in a row (in the loop conditional, and the first statement in the loop). Call (and cache the result) once before the loop, and then in the final statement of the loop.\n"
] | [
12,
4,
2,
2,
1,
1,
1,
1,
0
] | [
"Why not use XML and XSLT instead of creating your own template language? What you want to do is pretty easy in XSLT.\n"
] | [
-3
] | [
"algorithm",
"optimization",
"python",
"regex"
] | stackoverflow_0000146607_algorithm_optimization_python_regex.txt |
Q:
Difflib.SequenceMatcher isjunk optional parameter query: how to ignore whitespaces, tabs, empty lines?
I am trying to use Difflib.SequenceMatcher to compute the similarities between two files. These two files are almost identical except that one contains some extra whitespace and empty lines and the other doesn't. I am trying to use
s = difflib.SequenceMatcher(isjunk, text1, text2)
ratio = s.ratio()
for this purpose.
So, the question is how to write the lambda expression for this isjunk method so that SequenceMatcher will discount all the whitespace, empty lines, etc. I tried the parameter lambda x: x == " ", but the result isn't great: for two closely similar texts, the ratio is very low. This is highly counterintuitive.
For testing purpose, here are the two strings that you can use on testing:
What Motivates jwovu to do your Job
Well? OK, this is an entry trying to
win $100 worth of software development
books despite the fact that I don‘t
read
programming books. In order to win the
prize you have to write an entry and
what motivatesfggmum to do your job
well. Hence this post. First
motivation
money. I know, this doesn‘t sound like
a great inspiration to many, and
saying that money is one of the
motivation factors might just blow my
chances away.
As if money is a taboo in programming
world. I know there are people who
can‘t be motivated by money. Mme, on
the other hand, am living in a real
world,
with house mortgage to pay, myself to
feed and bills to cover. So I can‘t
really exclude money from my
consideration. If I can get a large
sum of money for
doing a good job, then definitely
boost my morale. I won‘t care whether
I am using an old workstation, or
forced to share rooms or cubicle with
other
people, or have to put up with an
annoying boss, or whatever. The fact
that at the end of the day I will walk
off with a large pile of money itself
is enough
for me to overcome all the obstacles,
put up with all the hard feelings and
hurt egos, tolerate a slow computer
and even endure
And here's another string
What Motivates You to do your Job
Well? OK, this is an entry trying to
win $100 worth of software development
books, despite the fact that I don't
read programming books. In order to
win the prize you have to write an
entry and describes what motivates you
to do your job well. Hence this post.
First motivation, money. I know, this
doesn't sound like a great inspiration
to many, and saying that money is one
of the motivation factors might just
blow my chances away. As if money is a
taboo in programming world. I know
there are people who can't be
motivated by money. Kudos to them. Me,
on the other hand, am living in a real
world, with house mortgage to pay,
myself to feed and bills to cover. So
I can't really exclude money from my
consideration.
If I can get a large sum of money for
doing a good job, then thatwill
definitely boost my morale. I won't
care whether I am using an old
workstation, or forced to share rooms
or cubicle with other people, or have
to put up with an annoying boss, or
whatever. The fact that at the end of
the day I will walk off with a large
pile of money itself is enough for me
to overcome all the obstacles, put up
with all the hard feelings and hurt
egos, tolerate a slow computer and
even endure
I ran the above command with isjunk set to lambda x: x == " ", and the ratio is only 0.36.
A:
If you match all whitespace, the similarity is better:
difflib.SequenceMatcher(lambda x: x in " \t\n", doc1, doc2).ratio()
However, difflib is not ideal for such a problem, because these are two nearly identical documents, and typos and such produce differences for difflib where a human wouldn't see many.
Try reading up on tf-idf, Bayesian probability, Vector space Models and w-shingling
I have written an implementation of tf-idf applying it to a vector space and using the dot product as a distance measure to classify documents.
A:
Using your sample strings:
>>> s=difflib.SequenceMatcher(lambda x: x == '\n', s1, s2)
>>> s.ratio()
0.94669848846459825
Interestingly, if ' ' is also included as junk:
>>> s=difflib.SequenceMatcher(lambda x: x in ' \n', s1, s2)
>>> s.ratio()
0.7653142402545744
Looks like the new lines are having a much greater effect than the spaces.
A:
Given the texts above, the test is indeed as suggested:
difflib.SequenceMatcher(lambda x: x in " \t\n", doc1, doc2).ratio()
However, to speed up things a little, you can take advantage of CPython's method-wrappers:
difflib.SequenceMatcher(" \t\n".__contains__, doc1, doc2).ratio()
This avoids many python function calls.
A:
I haven't used Difflib.SequenceMatcher, but have you considered pre-processing the files to remove all blank lines and whitespace (perhaps via regular expressions) and then doing the compare?
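A minimal sketch of that pre-processing idea, assuming doc1 and doc2 hold the two texts:
import difflib
import re

def normalize(text):
    # collapse every run of whitespace (including blank lines) to one space
    return re.sub(r'\s+', ' ', text).strip()

ratio = difflib.SequenceMatcher(None, normalize(doc1), normalize(doc2)).ratio()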
| Difflib.SequenceMatcher isjunk optional parameter query: how to ignore whitespaces, tabs, empty lines? | I am trying to use Difflib.SequenceMatcher to compute the similarities between two files. These two files are almost identical except that one contains some extra whitespaces, empty lines and other doesn't. I am trying to use
s=difflib.SequenceMatcher(isjunk,text1,text2)
ratio =s.ratio()
for this purpose.
So, the question is how to write the lambda expression for this isjunk method so the SequenceMatcher method will discount all the whitespaces, empty lines etc. I tried to use the parameter lambda x: x==" ", but the result isn't as great. For two closely similar text, the ratio is very low. This is highly counter intuitive.
For testing purpose, here are the two strings that you can use on testing:
What Motivates jwovu to do your Job
Well? OK, this is an entry trying to
win $100 worth of software development
books despite the fact that I don‘t
read
programming books. In order to win the
prize you have to write an entry and
what motivatesfggmum to do your job
well. Hence this post. First
motivation
money. I know, this doesn‘t sound like
a great inspiration to many, and
saying that money is one of the
motivation factors might just blow my
chances away.
As if money is a taboo in programming
world. I know there are people who
can‘t be motivated by money. Mme, on
the other hand, am living in a real
world,
with house mortgage to pay, myself to
feed and bills to cover. So I can‘t
really exclude money from my
consideration. If I can get a large
sum of money for
doing a good job, then definitely
boost my morale. I won‘t care whether
I am using an old workstation, or
forced to share rooms or cubicle with
other
people, or have to put up with an
annoying boss, or whatever. The fact
that at the end of the day I will walk
off with a large pile of money itself
is enough
for me to overcome all the obstacles,
put up with all the hard feelings and
hurt egos, tolerate a slow computer
and even endure
And here's another string
What Motivates You to do your Job
Well? OK, this is an entry trying to
win $100 worth of software development
books, despite the fact that I don't
read programming books. In order to
win the prize you have to write an
entry and describes what motivates you
to do your job well. Hence this post.
First motivation, money. I know, this
doesn't sound like a great inspiration
to many, and saying that money is one
of the motivation factors might just
blow my chances away. As if money is a
taboo in programming world. I know
there are people who can't be
motivated by money. Kudos to them. Me,
on the other hand, am living in a real
world, with house mortgage to pay,
myself to feed and bills to cover. So
I can't really exclude money from my
consideration.
If I can get a large sum of money for
doing a good job, then thatwill
definitely boost my morale. I won't
care whether I am using an old
workstation, or forced to share rooms
or cubicle with other people, or have
to put up with an annoying boss, or
whatever. The fact that at the end of
the day I will walk off with a large
pile of money itself is enough for me
to overcome all the obstacles, put up
with all the hard feelings and hurt
egos, tolerate a slow computer and
even endure
I ran the above command, and set the isjunk to lambda x:x==" ", the ratio is only 0.36.
| [
"If you match all whitespaces the similarity is better:\ndifflib.SequenceMatcher(lambda x: x in \" \\t\\n\", doc1, doc2).ratio()\n\nHowever, difflib is not ideal to such a problem because these are two nearly identical documents, but typos and such produce differences for difflib where a human wouldn't see many.\nTry reading up on tf-idf, Bayesian probability, Vector space Models and w-shingling\nI have written a an implementation of tf-idf applying it to a vector space and using the dot product as a distance measure to classify documents.\n",
"Using your sample strings:\n>>> s=difflib.SequenceMatcher(lambda x: x == '\\n', s1, s2)\n>>> s.ratio()\n0.94669848846459825\n\nInterestingly if ' ' is also included as junk:\n>>> s=difflib.SequenceMatcher(lambda x: x in ' \\n', s1, s2)\n>>> s.ratio()\n0.7653142402545744\n\nLooks like the new lines are having a much greater affect than the spaces.\n",
"Given the texts above, the test is indeed as suggested:\ndifflib.SequenceMatcher(lambda x: x in \" \\t\\n\", doc1, doc2).ratio()\n\nHowever, to speed up things a little, you can take advantage of CPython's method-wrappers:\ndifflib.SequenceMatcher(\" \\t\\n\".__contains__, doc1, doc2).ratio()\n\nThis avoids many python function calls.\n",
"I haven't used Difflib.SequenceMatcher, but have you considered pre-processing the files to remove all blank lines and whitespace (perhaps via regular expressions) and then doing the compare?\n"
] | [
7,
2,
2,
1
] | [] | [] | [
"difflib",
"lambda",
"python"
] | stackoverflow_0000147437_difflib_lambda_python.txt |
Q:
Adding Cookie to ZSI Posts
I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail.
How can I add cookies from a Python CookieJar to ZSI requests?
A:
If you read the _Binding class in client.py of ZSI you can see that it has a variable cookies, which is an instance of Cookie.SimpleCookie. Following the ZSI example and the Cookie example, this is how it should work:
b = Binding(url='/cgi-bin/simple-test', tracefile=fp)
b.cookies['foo'] = 'bar'
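Bridging from the question's CookieJar is then just a matter of copying entries across; a sketch, assuming b is the binding above and jar is a cookielib.CookieJar:
for c in jar:                    # a CookieJar iterates over its Cookie objects
    b.cookies[c.name] = c.value  # note: this ignores domain/path scoping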
A:
Additionally, the Binding class allows any header to be added, so I figured out that I can just add a "Cookie" header for each cookie I need. This worked well for the code generated by wsdl2py: the cookies are added right after the binding is formed in the SOAP client class. Adding a parameter to the generated class that takes the cookies as a dictionary is easy, and they can then be iterated through and added.
| Adding Cookie to ZSI Posts | I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail.
How can I add cookies from a Python CookieJar to ZSI requests?
| [
"If you read the _Binding class in client.py of ZSI you can see that it has a variable cookies, which is an instance of Cookie.SimpleCookie. Following the ZSI example and the Cookie example that is how it should work:\nb = Binding(url='/cgi-bin/simple-test', tracefile=fp)\nb.cookies['foo'] = 'bar'\n\n",
"Additionally, the Binding class also allows any header to be added. So I figured out that I can just add a \"Cookie\" header for each cookie I need to add. This worked well for the code generated by wsdl2py, just adding the cookies right after the binding is formed in the SOAP client class. Adding a parameter to the generated class to take in the cookies as a dictionary is easy and then they can easily be iterated through and added.\n"
] | [
1,
0
] | [] | [] | [
"cookies",
"python",
"soappy",
"web_services",
"zsi"
] | stackoverflow_0000139212_cookies_python_soappy_web_services_zsi.txt |
Q:
What is the simplest way to offer/consume web services in jython?
I have an application for Tomcat which needs to offer/consume web services. Since Java web services are a nightmare (xml, code generation, etc.) compared with what is possible in Python, I would like to learn from your experience using jython instead of java for offerring/consuming web services.
What I have done so far involves adapting http://pywebsvcs.sourceforge.net/ to Jython. I still get errors (namespaces, types and so on), although some of it is successful for the simplest services.
A:
I've put together more details on how to use webservices in jython using axis. Read about it here: How To Script Webservices with Jython and Axis.
A:
PyServlet helps you configure Tomcat to serve up Jython scripts from a URL. You could use this in a "REST-like" way to do some basic web services without much effort. (It is also described here.)
We used a similar home-grown framework to provide a variety of data services in a large multi-application web system, very successfully.
| What is the simplest way to offer/consume web services in jython? | I have an application for Tomcat which needs to offer/consume web services. Since Java web services are a nightmare (xml, code generation, etc.) compared with what is possible in Python, I would like to learn from your experience using jython instead of java for offerring/consuming web services.
What I have done so far involves adapting http://pywebsvcs.sourceforge.net/ to Jython. I still get errors (namespaces, types and so), although some of it is succesful for the simplest services.
| [
"I've put together more details on how to use webservices in jython using axis. Read about it here: How To Script Webservices with Jython and Axis.\n",
"PyServlet helps you configure Tomcat to serve up Jython scripts from a URL. You could use this is a \"REST-like\" way to do some basic web services without much effort. (It is also described here.) \nWe used a similar home grown framework to provide a variety of data services in a large multiple web application very successfully.\n"
] | [
2,
0
] | [] | [] | [
"jython",
"python",
"soap",
"web_services",
"wsdl"
] | stackoverflow_0000115744_jython_python_soap_web_services_wsdl.txt |
Q:
Using OR comparisons with IF statements
When using IF statements in Python, you have to do the following to make the "cascade" work correctly.
if job == "mechanic" or job == "tech":
print "awesome"
elif job == "tool" or job == "rock":
print "dolt"
Is there a way to make Python accept multiple values when checking for "equals to"? For example,
if job == "mechanic" or "tech":
print "awesome"
elif job == "tool" or "rock":
print "dolt"
A:
if job in ("mechanic", "tech"):
print "awesome"
elif job in ("tool", "rock"):
print "dolt"
The values in parentheses are a tuple. The in operator checks to see whether the left-hand item occurs somewhere inside the right-hand tuple.
Note that when Python searches a tuple or list using the in operator, it does a linear search. If you have a large number of items on the right hand side, this could be a performance bottleneck. A larger-scale way of doing this would be to use a frozenset:
AwesomeJobs = frozenset(["mechanic", "tech", ... lots of others ])
def func():
if job in AwesomeJobs:
print "awesome"
The use of frozenset over set is preferred if the list of awesome jobs does not need to be changed during the operation of your program.
A:
You can use in:
if job in ["mechanic", "tech"]:
print "awesome"
When checking against a very large number of items, it may also be worth storing off a set of the items to check, as this will be faster. E.g.
AwesomeJobs = set(["mechanic", "tech", ... lots of others ])
...
def func():
if job in AwesomeJobs:
print "awesome"
A:
if job in ("mechanic", "tech"):
print "awesome"
elif job in ("tool", "rock"):
print "dolt"
A:
While I don't think you can do what you want directly, one alternative is:
if job in [ "mechanic", "tech" ]:
print "awesome"
elif job in [ "tool", "rock" ]:
print "dolt"
A:
Tuples with constant items are stored themselves as constants in the compiled function. They can be loaded with a single instruction. Lists and sets on the other hand, are always constructed anew on each execution.
Both tuples and lists use linear search for the in operator. Sets use a hash-based lookup, so they will be faster for a larger number of options.
A:
In other languages I'd use a switch/select statement to get the job done. You can get the same effect in Python with a dictionary lookup.
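For example, using the question's data:
responses = {"mechanic": "awesome", "tech": "awesome",
             "tool": "dolt", "rock": "dolt"}
print responses.get(job, "unknown job")  # dict lookup instead of if/elif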
| Using OR comparisons with IF statements | When using IF statements in Python, you have to do the following to make the "cascade" work correctly.
if job == "mechanic" or job == "tech":
print "awesome"
elif job == "tool" or job == "rock":
print "dolt"
Is there a way to make Python accept multiple values when checking for "equals to"? For example,
if job == "mechanic" or "tech":
print "awesome"
elif job == "tool" or "rock":
print "dolt"
| [
"if job in (\"mechanic\", \"tech\"):\n print \"awesome\"\nelif job in (\"tool\", \"rock\"):\n print \"dolt\"\n\nThe values in parentheses are a tuple. The in operator checks to see whether the left hand side item occurs somewhere inside the right handle tuple.\nNote that when Python searches a tuple or list using the in operator, it does a linear search. If you have a large number of items on the right hand side, this could be a performance bottleneck. A larger-scale way of doing this would be to use a frozenset:\nAwesomeJobs = frozenset([\"mechanic\", \"tech\", ... lots of others ])\ndef func():\n if job in AwesomeJobs:\n print \"awesome\"\n\nThe use of frozenset over set is preferred if the list of awesome jobs does not need to be changed during the operation of your program.\n",
"You can use in:\nif job in [\"mechanic\", \"tech\"]:\n print \"awesome\"\n\nWhen checking very large numbers, it may also be worth storing off a set of the items to check, as this will be faster. Eg.\nAwesomeJobs = set([\"mechanic\", \"tech\", ... lots of others ])\n...\n\ndef func():\n if job in AwesomeJobs:\n print \"awesome\"\n\n",
"if job in (\"mechanic\", \"tech\"):\n print \"awesome\"\nelif job in (\"tool\", \"rock\"):\n print \"dolt\"\n\n",
"While I don't think you can do what you want directly, one alternative is:\nif job in [ \"mechanic\", \"tech\" ]:\n print \"awesome\"\nelif job in [ \"tool\", \"rock\" ]:\n print \"dolt\"\n\n",
"Tuples with constant items are stored themselves as constants in the compiled function. They can be loaded with a single instruction. Lists and sets on the other hand, are always constructed anew on each execution.\nBoth tuples and lists use linear search for the in-operator. Sets uses a hash-based look-up, so it will be faster for a larger number of options.\n",
"In other languages I'd use a switch/select statement to get the job done. You can do that in python too.\n"
] | [
39,
4,
1,
1,
1,
0
] | [] | [] | [
"boolean",
"comparison",
"python"
] | stackoverflow_0000148042_boolean_comparison_python.txt |
Q:
Is there a way to define which fields in the model are editable in the admin app?
Assume the following:
models.py
class Entry(models.Model):
title = models.CharField(max_length=50)
slug = models.CharField(max_length=50, unique=True)
body = models.CharField(max_length=200)
admin.py
class EntryAdmin(admin.ModelAdmin):
prepopulated_fields = {'slug':('title',)}
I want the slug to be pre-populated from the title, but I don't want the user to be able to edit it from the admin. I assumed that adding fields=[] to the admin object and not including the slug would have worked, but it didn't. I also tried setting editable=False in the model, but that also didn't work (in fact, it stops the page from rendering).
Thoughts?
A:
For this particular case you can override your save method to slugify the title (slugify is a built-in helper; look at the Django source) and store it in the slug field. Also from there you can easily check whether the slug is indeed unique and change it somehow if it's not.
Consider this example:
def save(self):
from django.template.defaultfilters import slugify
if not self.slug:
self.slug = slugify(self.title)
super(Your_Model_Name,self).save()
A:
I'm not sure what you're asking for IS possible. Your best bet is probably to hide the slug from the admin interface completely by specifying your fieldsets, and then overriding the save method to copy the slug from the title, potentially slugifying it...
A:
This Django Snippet does what you want by defining a custom Read-Only Widget. So you define a custom editor for the field which in fact doesn't allow any editing.
A:
This snippet gives you an AutoSlugField with exactly the behavior you are seeking, and adding it to your model is a one-liner.
A:
In addition to overriding save to provide the generated value you want, you can also use the exclude option in your ModelAdmin class to prevent the field from being displayed in the admin:
class EntryAdmin(admin.ModelAdmin):
exclude = ('slug',)
| Is there a way to define which fields in the model are editable in the admin app? | Assume the following:
models.py
class Entry(models.Model):
title = models.CharField(max_length=50)
slug = models.CharField(max_length=50, unique=True)
body = models.CharField(max_length=200)
admin.py
class EntryAdmin(admin.ModelAdmin):
prepopulated_fields = {'slug':('title',)}
I want the slug to be pre-populated by the title, but I dont want the user to be able to edit it from the admin. I assumed that adding the fields=[] to the admin object and not including the slug would have worked, but it didnt. I also tried setting editable=False in the model, but that also didnt work (infact, stops the page from rendering).
Thoughts?
| [
"For this particular case you can override your save method to slugify (it's built-in method, look at django source) the title and store it in slug field. Also from there you can easily check if this slug is indeed unique and change it somehow if it's not.\nConsider this example:\ndef save(self):\n from django.template.defaultfilters import slugify\n\n if not self.slug:\n self.slug = slugify(self.title)\n\n super(Your_Model_Name,self).save()\n\n",
"I'm not sure what you're asking for IS possible. Your best bet is probably to hide the slug from the admin interface completely by specifying your fieldsets, and than overriding the save method to copy the slug from the tile, and potentially slugifying it...\n",
"This Django Snippet does what you want by defining a custom Read-Only Widget. So you define a custom editor for the field which in fact doesn't allow any editing.\n",
"This snippet gives you an AutoSlugField with exactly the behavior you are seeking, and adding it to your model is a one-liner.\n",
"In addition to overriding save to provide the generated value you want, you can also use the exclude option in your ModelAdmin class to prevent the field from being displayed in the admin:\nclass EntryAdmin(admin.ModelAdmin):\n exclude = ('slug',)\n\n"
] | [
4,
0,
0,
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000149040_django_python.txt |
Q:
What is the difference between __reduce__ and __reduce_ex__?
I understand that these methods are for pickling/unpickling and have no relation to the reduce built-in function, but what's the difference between the two, and why do we need both?
A:
The docs say that
If provided, at pickling time
__reduce__() will be called with no
arguments, and it must return either a
string or a tuple.
On the other hand,
It is sometimes useful to know the
protocol version when implementing
__reduce__. This can be done by
implementing a method named
__reduce_ex__ instead of
__reduce__. __reduce_ex__, when it
exists, is called in preference over
__reduce__ (you may still provide
__reduce__ for backwards
compatibility). The __reduce_ex__
method will be called with a single
integer argument, the protocol
version.
On the gripping hand, Guido says that this is an area that could be cleaned up.
A:
__reduce_ex__ is what __reduce__ should have been but never became. __reduce_ex__ works like __reduce__ but the pickle protocol is passed.
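A tiny sketch that makes the protocol argument visible (Probe is a made-up class name):
import pickle

class Probe(object):
    def __reduce_ex__(self, protocol):
        print protocol        # whatever protocol the caller asked for
        return (Probe, ())    # same (callable, args) form __reduce__ uses

pickle.dumps(Probe(), 2)      # prints 2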
| What is the difference between __reduce__ and __reduce_ex__? | I understand that these methods are for pickling/unpickling and have no relation to the reduce built-in function, but what's the difference between the 2 and why do we need both?
| [
"The docs say that\n\nIf provided, at pickling time\n__reduce__() will be called with no\narguments, and it must return either a\nstring or a tuple.\n\nOn the other hand,\n\nIt is sometimes useful to know the\nprotocol version when implementing\n__reduce__. This can be done by\nimplementing a method named\n__reduce_ex__ instead of\n__reduce__. __reduce_ex__, when it\nexists, is called in preference over\n__reduce__ (you may still provide\n__reduce__ for backwards\ncompatibility). The __reduce_ex__\nmethod will be called with a single\ninteger argument, the protocol\nversion.\n\nOn the gripping hand, Guido says that this is an area that could be cleaned up.\n",
"__reduce_ex__ is what __reduce__ should have been but never became. __reduce_ex__ works like __reduce__ but the pickle protocol is passed.\n"
] | [
34,
9
] | [] | [] | [
"pickle",
"python"
] | stackoverflow_0000150284_pickle_python.txt |
Q:
Python scope: "UnboundLocalError: local variable 'c' referenced before assignment"
I am trying to figure out this:
c = 1
def f(n):
print c + n
def g(n):
c = c + n
f(1) # => 2
g(1) # => UnboundLocalError: local variable 'c' referenced before assignment
Thanks!
A:
Within a function, variables that are assigned to are treated as local variables by default. To assign to global variables, use the global statement:
def g(n):
global c
c = c + n
This is one of the quirky areas of Python that has never really sat well with me.
A:
Global state is something to avoid, especially needing to mutate it. Consider if g() should simply take two parameters or if f() and g() need to be methods of a common class with c an instance attribute
class A:
c = 1
def f(self, n):
print self.c + n
def g(self, n):
self.c += n
a = A()
a.f(1)
a.g(1)
a.f(1)
Outputs:
2
3
A:
Errata for Greg's post:
The qualifier "before they are referenced" should not be there: a variable assigned anywhere in a function is local for the whole function. Take a look:
x = 1
def explode():
print x # raises UnboundLocalError here
x = 2
It explodes, even if x is assigned after it's referenced.
In Python, a variable is either local or refers to an outer scope, and that cannot change within a single function.
A:
Other than what Greg said, in Python 3.0 there will be the nonlocal statement to state "here are some names that are defined in the enclosing scope". Unlike global, those names have to be already defined outside the current scope, which will make it easier to track down names and variables. Nowadays you can't be sure where a "global something" is actually defined.
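A sketch of what that will look like (3.0 syntax, so it won't run on 2.x):
def counter():
    c = 0
    def bump():
        nonlocal c  # c is the one in counter(), not a new local
        c += 1
        return c
    return bump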
| Python scope: "UnboundLocalError: local variable 'c' referenced before assignment" | I am trying to figure out this:
c = 1
def f(n):
print c + n
def g(n):
c = c + n
f(1) # => 2
g(1) # => UnboundLocalError: local variable 'c' referenced before assignment
Thanks!
| [
"Within a function, variables that are assigned to are treated as local variables by default. To assign to global variables, use the global statement:\ndef g(n):\n global c\n c = c + n\n\nThis is one of the quirky areas of Python that has never really sat well with me.\n",
"Global state is something to avoid, especially needing to mutate it. Consider if g() should simply take two parameters or if f() and g() need to be methods of a common class with c an instance attribute\nclass A:\n c = 1\n def f(self, n):\n print self.c + n\n def g(self, n):\n self.c += n\n\na = A()\na.f(1)\na.g(1)\na.f(1)\n\nOutputs:\n2\n3\n\n",
"Errata for Greg's post:\nThere should be no before they are referenced. Take a look:\n\nx = 1\ndef explode():\n print x # raises UnboundLocalError here\n x = 2\n\nIt explodes, even if x is assigned after it's referenced.\nIn Python variable can be local or refer outer scope, and it cannot change in one function.\n",
"Other than what Greg said, in Python 3.0, there will be the nonlocal statement to state \"here are some names that are defined in the enclosing scope\". Unlike global those names have to be already defined outside the current scope. It will be easy to track down names and variables. Nowadays you can't be sure where \"globals something\" is exactly defined.\n"
] | [
52,
12,
9,
2
] | [] | [] | [
"python",
"scope"
] | stackoverflow_0000146359_python_scope.txt |
Q:
How Does One Read Bytes from File in Python
Similar to this question, I am trying to read in an ID3v2 tag header and am having trouble figuring out how to get individual bytes in python.
I first read all ten bytes into a string. I then want to parse out the individual pieces of information.
I can grab the two version number chars in the string, but then I have no idea how to take those two chars and get an integer out of them.
The struct package seems to be what I want, but I can't get it to work.
Here is my code so-far (I am very new to python btw...so take it easy on me):
def __init__(self, ten_byte_string):
self.whole_string = ten_byte_string
self.file_identifier = self.whole_string[:3]
self.major_version = struct.pack('x', self.whole_string[3:4]) #this
self.minor_version = struct.pack('x', self.whole_string[4:5]) # and this
self.flags = self.whole_string[5:6]
self.len = self.whole_string[6:10]
Printing out any value except the plain string slices is obviously crap because they are not formatted correctly.
A:
If you have a string, with 2 bytes that you wish to interpret as a 16 bit integer, you can do so by:
>>> s = '\0\x02'
>>> struct.unpack('>H', s)
(2,)
Note that the > is for big-endian (the largest part of the integer comes first). This is the format id3 tags use.
For other sizes of integer, you use different format codes. eg. "i" for a signed 32 bit integer. See help(struct) for details.
You can also unpack several elements at once. eg for 2 unsigned shorts, followed by a signed 32 bit value:
>>> a,b,c = struct.unpack('>HHi', some_string)
Going by your code, you are looking for (in order):
a 3 char string
2 single byte values (major and minor version)
a 1 byte flags variable
a 32 bit length quantity
The format string for this would be:
ident, major, minor, flags, len = struct.unpack('>3sBBBI', ten_byte_string)
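For example, applied to a hand-assembled header (the byte values here are made up):
import struct

ten_byte_string = 'ID3' + '\x03\x00' + '\x00' + '\x00\x00\x02\x01'
ident, major, minor, flags, length = struct.unpack('>3sBBBI', ten_byte_string)
print ident, major, minor, flags, length   # ID3 3 0 0 513

(Note that real ID3v2 headers store the length as a "synchsafe" integer with only 7 significant bits per byte, so the raw 32-bit value needs one more decoding step.)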
A:
Why write your own? (Assuming you haven't checked out these other options.) There's a couple options out there for reading in ID3 tag info from MP3s in Python. Check out my answer over at this question.
A:
I was going to recommend the struct package but then you said you had tried it. Try this:
self.major_version, self.minor_version = struct.unpack('BB', self.whole_string[3:5])
The pack() function converts Python data types to packed bytes, and the unpack() function converts packed bytes back to Python data types.
A:
I am trying to read in an ID3v2 tag header
FWIW, there's already a module for this.
| How Does One Read Bytes from File in Python | Similar to this question, I am trying to read in an ID3v2 tag header and am having trouble figuring out how to get individual bytes in python.
I first read all ten bytes into a string. I then want to parse out the individual pieces of information.
I can grab the two version number chars in the string, but then I have no idea how to take those two chars and get an integer out of them.
The struct package seems to be what I want, but I can't get it to work.
Here is my code so-far (I am very new to python btw...so take it easy on me):
def __init__(self, ten_byte_string):
self.whole_string = ten_byte_string
self.file_identifier = self.whole_string[:3]
self.major_version = struct.pack('x', self.whole_string[3:4]) #this
self.minor_version = struct.pack('x', self.whole_string[4:5]) # and this
self.flags = self.whole_string[5:6]
self.len = self.whole_string[6:10]
Printing out any value except the plain string slices is obviously crap because they are not formatted correctly.
| [
"If you have a string, with 2 bytes that you wish to interpret as a 16 bit integer, you can do so by:\n>>> s = '\\0\\x02'\n>>> struct.unpack('>H', s)\n(2,)\n\nNote that the > is for big-endian (the largest part of the integer comes first). This is the format id3 tags use.\nFor other sizes of integer, you use different format codes. eg. \"i\" for a signed 32 bit integer. See help(struct) for details.\nYou can also unpack several elements at once. eg for 2 unsigned shorts, followed by a signed 32 bit value:\n>>> a,b,c = struct.unpack('>HHi', some_string)\n\nGoing by your code, you are looking for (in order):\n\na 3 char string\n2 single byte values (major and minor version)\na 1 byte flags variable\na 32 bit length quantity\n\nThe format string for this would be:\nident, major, minor, flags, len = struct.unpack('>3sBBBI', ten_byte_string)\n\n",
"Why write your own? (Assuming you haven't checked out these other options.) There's a couple options out there for reading in ID3 tag info from MP3s in Python. Check out my answer over at this question.\n",
"I was going to recommend the struct package but then you said you had tried it. Try this:\nself.major_version = struct.unpack('H', self.whole_string[3:5])\n\nThe pack() function convers Python data types to bits, and the unpack() function converts bits to Python data types.\n",
"\nI am trying to read in an ID3v2 tag header\n\nFWIW, there's already a module for this.\n"
] | [
16,
4,
2,
2
] | [] | [] | [
"id3",
"python"
] | stackoverflow_0000150532_id3_python.txt |
Q:
Can Regex be used for this particular string manipulation?
I need to replace character (say) x with character (say) P in a string, but only if it is contained in a quoted substring.
An example makes it clearer:
axbx'cxdxe'fxgh'ixj'k -> axbx'cPdPe'fxgh'iPj'k
Let's assume, for the sake of simplicity, that quotes always come in pairs.
The obvious way is to just process the string one character at a time (a simple state machine approach);
however, I'm wondering if regular expressions can be used to do all the processing in one go.
My target language is C#, but I guess my question pertains to any language having builtin or library support for regular expressions.
A:
I converted Greg Hewgill's python code to C# and it worked!
[Test]
public void ReplaceTextInQuotes()
{
Assert.AreEqual("axbx'cPdPe'fxgh'iPj'k",
Regex.Replace("axbx'cxdxe'fxgh'ixj'k",
@"x(?=[^']*'([^']|'[^']*')*$)", "P"));
}
That test passed.
A:
I was able to do this with Python:
>>> import re
>>> re.sub(r"x(?=[^']*'([^']|'[^']*')*$)", "P", "axbx'cxdxe'fxgh'ixj'k")
"axbx'cPdPe'fxgh'iPj'k"
What this does is use the non-capturing match (?=...) to check that the character x is within a quoted string. It looks for some nonquote characters up to the next quote, then looks for a sequence of either single characters or quoted groups of characters, until the end of the string.
This relies on your assumption that the quotes are always balanced. This is also not very efficient.
A:
A more general (and simpler) solution which allows non-paired quotes.
Find quoted string
Replace 'x' by 'P' in the string
#!/usr/bin/env python
import re
text = "axbx'cxdxe'fxgh'ixj'k"
s = re.sub("'.*?'", lambda m: re.sub("x", "P", m.group(0)), text)
print s == "axbx'cPdPe'fxgh'iPj'k", s
# -> True axbx'cPdPe'fxgh'iPj'k
A:
The trick is to use a non-capturing group to match the part of the string following the match (character x) we are searching for.
Trying to match the string up to x will only find either the first or the last occurrence, depending on whether non-greedy quantifiers are used.
Here's Greg's idea transposed to Tcl, with comments.
set strIn {axbx'cxdxe'fxgh'ixj'k}
set regex {(?x) # enable expanded syntax
# - allows comments, ignores whitespace
x # the actual match
(?= # non-matching group
[^']*' # match to end of current quoted substring
##
## assuming quotes are in pairs,
## make sure we actually were
## inside a quoted substring
## by making sure the rest of the string
## is what we expect it to be
##
(
[^']* # match any non-quoted substring
| # ...or...
'[^']*' # any quoted substring, including the quotes
)* # any number of times
$ # until we run out of string :)
) # end of non-matching group
}
#the same regular expression without the comments
set regexCondensed {(?x)x(?=[^']*'([^']|'[^']*')*$)}
set replRegex {P}
set nMatches [regsub -all -- $regex $strIn $replRegex strOut]
puts "$nMatches replacements. "
if {$nMatches > 0} {
puts "Original: |$strIn|"
puts "Result: |$strOut|"
}
exit
This prints:
3 replacements.
Original: |axbx'cxdxe'fxgh'ixj'k|
Result: |axbx'cPdPe'fxgh'iPj'k|
A:
#!/usr/bin/perl -w
use strict;
# Break up the string.
# The spliting uses quotes
# as the delimiter.
# Put every broken substring
# into the @fields array.
my @fields;
while (<>) {
@fields = split /'/, $_;
}
# For every substring indexed with an odd
# number, search for x and replace it
# with P.
my $count;
my $end = $#fields;
for ($count=0; $count < $end; $count++) {
if ($count % 2 == 1) {
        $fields[$count] =~ s/x/P/g;
}
}
Wouldn't this chunk do the job?
A:
Not with plain regexp. Regular expressions have no "memory" so they cannot distinguish between being "inside" or "outside" quotes.
You need something more powerful, for example using gema it would be straighforward:
'<repl>'=$0
repl:x=P
A:
Similar discussion about balanced text replaces: Can regular expressions be used to match nested patterns?
Although you can try this in Vim, but it works well only if the string is on one line, and there's only one pair of 's.
:%s:\('[^']*\)x\([^']*'\):\1P\2:gci
If there's one more pair or even an unbalanced ', then it could fail. That's why I included the c a.k.a. confirm flag on the ex command.
The same can be done with sed, without the interaction - or with awk so you can add some interaction.
One possible solution is to break the lines on pairs of 's; then you can use the Vim solution.
A:
Pattern: (?s)\G((?:^[^']*'|(?<=.))(?:'[^']*'|[^'x]+)*+)x
Replacement: \1P
\G — Anchor each match at the end of the previous one, or the start of the string.
(?:^[^']*'|(?<=.)) — If it is at the beginning of the string, match up to the first quote.
(?:'[^']*'|[^'x]+)*+ — Match any block of unquoted characters, or any (non-quote) characters up to an 'x'.
One sweep through the source string, except for a single character look-behind.
A:
Sorry to break your hopes, but you need a push-down automaton to do that. There is more info here:
Pushdown Automaton
In short, regular expressions, which are finite state machines, can only read and have no memory, while a pushdown automaton has a stack and the ability to manipulate it.
Edit: spelling...
| Can Regex be used for this particular string manipulation? | I need to replace character (say) x with character (say) P in a string, but only if it is contained in a quoted substring.
An example makes it clearer:
axbx'cxdxe'fxgh'ixj'k -> axbx'cPdPe'fxgh'iPj'k
Let's assume, for the sake of simplicity, that quotes always come in pairs.
The obvious way is to just process the string one character at a time (a simple state machine approach);
however, I'm wondering if regular expressions can be used to do all the processing in one go.
My target language is C#, but I guess my question pertains to any language having builtin or library support for regular expressions.
| [
"I converted Greg Hewgill's python code to C# and it worked!\n[Test]\npublic void ReplaceTextInQuotes()\n{\n Assert.AreEqual(\"axbx'cPdPe'fxgh'iPj'k\", \n Regex.Replace(\"axbx'cxdxe'fxgh'ixj'k\",\n @\"x(?=[^']*'([^']|'[^']*')*$)\", \"P\"));\n}\n\nThat test passed.\n",
"I was able to do this with Python:\n>>> import re\n>>> re.sub(r\"x(?=[^']*'([^']|'[^']*')*$)\", \"P\", \"axbx'cxdxe'fxgh'ixj'k\")\n\"axbx'cPdPe'fxgh'iPj'k\"\n\nWhat this does is use the non-capturing match (?=...) to check that the character x is within a quoted string. It looks for some nonquote characters up to the next quote, then looks for a sequence of either single characters or quoted groups of characters, until the end of the string.\nThis relies on your assumption that the quotes are always balanced. This is also not very efficient.\n",
"A more general (and simpler) solution which allows non-paired quotes.\n\nFind quoted string\nReplace 'x' by 'P' in the string\n#!/usr/bin/env python\nimport re\n\ntext = \"axbx'cxdxe'fxgh'ixj'k\"\n\ns = re.sub(\"'.*?'\", lambda m: re.sub(\"x\", \"P\", m.group(0)), text)\n\nprint s == \"axbx'cPdPe'fxgh'iPj'k\", s\n# -> True axbx'cPdPe'fxgh'iPj'k\n\n\n",
"The trick is to use non-capturing group to match the part of the string following the match (character x) we are searching for.\nTrying to match the string up to x will only find either the first or the last occurence, depending whether non-greedy quantifiers are used.\nHere's Greg's idea transposed to Tcl, with comments.\n\nset strIn {axbx'cxdxe'fxgh'ixj'k}\nset regex {(?x) # enable expanded syntax \n # - allows comments, ignores whitespace\n x # the actual match\n (?= # non-matching group\n [^']*' # match to end of current quoted substring\n ##\n ## assuming quotes are in pairs,\n ## make sure we actually were \n ## inside a quoted substring\n ## by making sure the rest of the string \n ## is what we expect it to be\n ##\n (\n [^']* # match any non-quoted substring\n | # ...or...\n '[^']*' # any quoted substring, including the quotes\n )* # any number of times\n $ # until we run out of string :)\n ) # end of non-matching group\n}\n\n#the same regular expression without the comments\nset regexCondensed {(?x)x(?=[^']*'([^']|'[^']*')*$)}\n\nset replRegex {P}\nset nMatches [regsub -all -- $regex $strIn $replRegex strOut]\nputs \"$nMatches replacements. \"\nif {$nMatches > 0} {\n puts \"Original: |$strIn|\"\n puts \"Result: |$strOut|\"\n}\nexit\n\nThis prints:\n3 replacements. \nOriginal: |axbx'cxdxe'fxgh'ixj'k|\nResult: |axbx'cPdPe'fxgh'iPj'k|\n\n",
"#!/usr/bin/perl -w\n\nuse strict;\n\n# Break up the string.\n# The spliting uses quotes\n# as the delimiter.\n# Put every broken substring\n# into the @fields array.\n\nmy @fields;\nwhile (<>) {\n @fields = split /'/, $_;\n}\n\n# For every substring indexed with an odd\n# number, search for x and replace it\n# with P.\n\nmy $count;\nmy $end = $#fields;\nfor ($count=0; $count < $end; $count++) {\n if ($count % 2 == 1) {\n $fields[$count] =~ s/a/P/g;\n } \n}\n\nWouldn't this chunk do the job?\n",
"Not with plain regexp. Regular expressions have no \"memory\" so they cannot distinguish between being \"inside\" or \"outside\" quotes. \nYou need something more powerful, for example using gema it would be straighforward:\n'<repl>'=$0\nrepl:x=P\n\n",
"Similar discussion about balanced text replaces: Can regular expressions be used to match nested patterns?\nAlthough you can try this in Vim, but it works well only if the string is on one line, and there's only one pair of 's.\n:%s:\\('[^']*\\)x\\([^']*'\\):\\1P\\2:gci\n\nIf there's one more pair or even an unbalanced ', then it could fail. That's way I included the c a.k.a. confirm flag on the ex command.\nThe same can be done with sed, without the interaction - or with awk so you can add some interaction.\nOne possible solution is to break the lines on pairs of 's then you can do with vim solution. \n",
"Pattern: (?s)\\G((?:^[^']*'|(?<=.))(?:'[^']*'|[^'x]+)*+)x\nReplacement: \\1P\n\n\n\\G — Anchor each match at the end of the previous one, or the start of the string.\n(?:^[^']*'|(?<=.)) — If it is at the beginning of the string, match up to the first quote.\n(?:'[^']*'|[^'x]+)*+ — Match any block of unquoted characters, or any (non-quote) characters up to an 'x'.\n\nOne sweep trough the source string, except for a single character look-behind.\n",
"Sorry to break your hopes, but you need a push-down automata to do that. There is more info here:\nPushdown Automaton\nIn short, Regular expressions, which are finite state machines can only read and has no memory while pushdown automaton has a stack and manipulating capabilities.\nEdit: spelling...\n"
] | [
9,
8,
3,
2,
2,
1,
1,
1,
0
] | [] | [] | [
"c#",
"language_agnostic",
"python",
"regex"
] | stackoverflow_0000138552_c#_language_agnostic_python_regex.txt |
Q:
Is there any Visual Library alternative to wxPython that supports CSS/Style Sheets?
I've developed a program that extensively uses wxPython - the wxWidgets port for Python.
Even though it is as mature library it is still very primitive and very programming oriented. Which is time consuming and not flexible at all.
I would love to see if there is something like Flex/Action Script where all the visual dimensions are configured by style sheets.
Any thoughts?
A:
PyQt with Qt style sheets might be a good fit. Naturally, you'd need to re-write quite a bit of your GUI layer for the toolkit change.
A:
You could try XUL, the language the Firefox GUI uses. It's XML styled with CSS and scripted with Javascript.
http://www.mozilla.org/projects/xul/
http://en.wikipedia.org/wiki/XUL
http://developer.mozilla.org/en/XUL
| Is there any Visual Library alternative to wxPython that supports CSS/Style Sheets? | I've developed a program that extensively uses wxPython - the wxWidgets port for Python.
Even though it is as mature library it is still very primitive and very programming oriented. Which is time consuming and not flexible at all.
I would love to see if there is something like Flex/Action Script where all the visual dimensions are configured by style sheets.
Any thoughts?
| [
"PyQt with Qt style sheets might be a good fit. Naturally, you'd need to re-write quite a bit of your GUI layer for the toolkit change.\n",
"You could try XUL, the language the Firefox GUI uses. It's XML styled with CSS and scripted with Javascript.\nhttp://www.mozilla.org/projects/xul/\nhttp://en.wikipedia.org/wiki/XUL\nhttp://developer.mozilla.org/en/XUL\n"
] | [
4,
1
] | [] | [] | [
"python",
"wxpython"
] | stackoverflow_0000150705_python_wxpython.txt |
Q:
What exceptions might a Python function raise?
Is there any way in Python to determine what exceptions a (built-in) function might raise? For example, the documentation (http://docs.python.org/lib/built-in-funcs.html) for the built-in int(s) says nothing about the fact that it might raise a ValueError if s is not a validly formatted int.
This is a duplicate of Does re.compile() or any given Python library call throw an exception?
A:
The only way to tell what exceptions something can raise is by looking at the documentation. The fact that the int() documentation doesn't say it may raise ValueError is a bug in the documentation, but easily explained by ValueError being exactly for that purpose, and that being something "everybody knows".
To belabour the point, though, documentation is the only way to tell what exceptions you should care about; in fact, any function can potentially raise any exception, even if it's just because signals may arrive and signal handlers may raise exceptions. You should not anticipate or handle those errors, however; you should just handle the errors you expect.
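For example, with int() you would typically anticipate only the one documented-by-convention failure (a sketch; the fallback value is arbitrary):
s = "not a number"
try:
    value = int(s)
except ValueError:
    value = 0   # handle the one error we actually expect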
A:
I don't know of any definitive source, apart from the source.
| What exceptions might a Python function raise? | Is there any way in Python to determine what exceptions a (built-in) function might raise? For example, the documentation (http://docs.python.org/lib/built-in-funcs.html) for the built-in int(s) says nothing about the fact that it might raise a ValueError if s is not a validly formatted int.
This is a duplicate of Does re.compile() or any given Python library call throw an exception?
| [
"The only way to tell what exceptions something can raise is by looking at the documentation. The fact that the int() documentation doesn't say it may raise ValueError is a bug in the documentation, but easily explained by ValueError being exactly for that purpose, and that being something \"everybody knows\".\nTo belabour the point, though, documentation is the only way to tell what exceptions you should care about; in fact, any function can potentially raise any exception, even if it's just because signals may arrive and signal handlers may raise exceptions. You should not anticipate or handle those errors, however; you should just handle the errors you expect.\n",
"I don't know of any definitive source, apart from the source.\n"
] | [
8,
0
] | [] | [] | [
"exception",
"python"
] | stackoverflow_0000150743_exception_python.txt |
Q:
Looking for a regular expression including alphanumeric + "&" and ";"
Here's the problem:
split=re.compile('\\W*')
This regular expression works fine when dealing with regular words, but there are occasions where I need the expression to include words like k&auml;ytt&auml;j&auml;.
What should I add to the regex to include the & and ; characters?
A:
I would treat the entities as a unit (since they also can contain numerical character codes), resulting in the following regular expression:
(\w|&(#(x[0-9a-fA-F]+|[0-9]+)|[a-z]+);)+
This matches
either a word character (including “_”), or
an HTML entity consisting of
the character “&”,
the character “#”,
the character “x” followed by at least one hexadecimal digit, or
at least one decimal digit, or
at least one letter (= named entity),
a semicolon
at least once.
/EDIT: Thanks to ΤΖΩΤΖΙΟΥ for pointing out an error.
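A quick sketch of the pattern in use (the variable name and sample text are made up):
import re

entity_word = re.compile(r"(\w|&(#(x[0-9a-fA-F]+|[0-9]+)|[a-z]+);)+")
print [m.group(0) for m in entity_word.finditer("k&auml;ytt&auml;j&auml; and &#228;")]
# -> ['k&auml;ytt&auml;j&auml;', 'and', '&#228;']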
A:
You probably want to approach the problem in reverse, i.e. finding all the characters that are not spaces:
[^ \t\n]*
Or you want to add the extra characters:
[a-zA-Z0-9&;]*
In case you want to match HTML entities, you should try something like:
(\w+|&\w+;)*
A:
you should make a character class that would include the extra characters. For example:
split=re.compile('[\w&;]+')
This should do the trick. For your information
\w (lower case 'w') matches word characters (alphanumeric)
\W (capital W) is a negated character class (meaning it matches any non-alphanumeric character)
* matches 0 or more times and + matches one or more times, so * will match anything (even if there are no characters there).
| Looking for a regular expression including alphanumeric + "&" and ";" | Here's the problem:
split=re.compile('\\W*')
This regular expression works fine when dealing with regular words, but there are occasions where I need the expression to include words like k&auml;ytt&auml;j&auml;.
What should I add to the regex to include the & and ; characters?
| [
"I would treat the entities as a unit (since they also can contain numerical character codes), resulting in the following regular expression:\n(\\w|&(#(x[0-9a-fA-F]+|[0-9]+)|[a-z]+);)+\n\nThis matches\n\neither a word character (including “_”), or\nan HTML entity consisting of\n\n\nthe character “&”,\n\n\nthe character “#”,\n\n\nthe character “x” followed by at least one hexadecimal digit, or\nat least one decimal digit, or\n\nat least one letter (= named entity),\n\na semicolon\n\nat least once.\n\n/EDIT: Thanks to ΤΖΩΤΖΙΟΥ for pointing out an error.\n",
"You probably want to take the problem reverse, i.e. finding all the character without the spaces:\n[^ \\t\\n]*\n\nOr you want to add the extra characters:\n[a-zA-Z0-9&;]*\n\nIn case you want to match HTML entities, you should try something like:\n(\\w+|&\\w+;)*\n\n",
"you should make a character class that would include the extra characters. For example:\nsplit=re.compile('[\\w&;]+')\n\nThis should do the trick. For your information\n\n\\w (lower case 'w') matches word characters (alphanumeric)\n\\W (capital W) is a negated character class (meaning it matches any non-alphanumeric character) \n* matches 0 or more times and + matches one or more times, so * will match anything (even if there are no characters there).\n\n"
] | [
6,
5,
2
] | [
"Looks like this RegEx did the trick:\nsplit=re.compile('(\\\\\\W+&\\\\\\W+;)*')\n\nThanks for the suggestions. Most of them worked fine on Reggy, but I don't quite understand why they failed with re.compile.\n"
] | [
-1
] | [
"encoding",
"python",
"regex"
] | stackoverflow_0000152218_encoding_python_regex.txt |
Q:
NSWindow launched from statusItem menuItem does not appear as active window
I have a statusItem application written in PyObjC. The statusItem has a menuItem which is supposed to launch a new window when it is clicked:
# Create statusItem
statusItem = NSStatusBar.systemStatusBar().statusItemWithLength_(NSVariableStatusItemLength)
statusItem.setHighlightMode_(TRUE)
statusItem.setEnabled_(TRUE)
statusItem.retain()
# Create menuItem
menu = NSMenu.alloc().init()
menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Preferences', 'launchPreferences:', '')
menu.addItem_(menuitem)
statusItem.setMenu_(menu)
The launchPreferences: method is:
def launchPreferences_(self, notification):
preferences = Preferences.alloc().initWithWindowNibName_('Preferences')
preferences.showWindow_(self)
Preferences is an NSWindowController class:
class Preferences(NSWindowController):
When I run the application in XCode (Build & Go), this works fine. However, when I run the built .app file externally from XCode, the statusItem and menuItem appear as expected but when I click on the Preferences menuItem the window does not appear. I have verified that the launchPreferences code is running by checking console output.
Further, if I then double click the .app file again, the window appears but if I change the active window away by clicking, for example, on a Finder window, the preferences window disappears. This seems to me to be something to do with the active window.
Update 1
I have tried these two answers but neither work. If I add in to the launchPreferences method:
preferences.makeKeyAndOrderFront_()
or
preferences.setLevel_(NSNormalWindowLevel)
then I just get an error:
'Preferences' object has no attribute
A:
You need to send the application an activateIgnoringOtherApps: message and then send the window makeKeyAndOrderFront:.
In Objective-C this would be:
[NSApp activateIgnoringOtherApps:YES];
[[self window] makeKeyAndOrderFront:self];
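In PyObjC the same calls translate mechanically (colons become underscores). A sketch of the corrected launchPreferences_, which also keeps the controller on self so it is not garbage-collected:
from AppKit import NSApp

def launchPreferences_(self, notification):
    self.preferences = Preferences.alloc().initWithWindowNibName_('Preferences')
    NSApp.activateIgnoringOtherApps_(True)
    self.preferences.showWindow_(self)
    self.preferences.window().makeKeyAndOrderFront_(self)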
A:
I have no idea of PyObjC, never used that, but if this was Objective-C code, I'd say you should call makeKeyAndOrderFront: on the window object if you want it to become the very first front window. A newly created window needs to be neither key, nor front, unless you make it either or like in this case, both.
The other issue that worries me is that you say the window goes away (gets invisible) when it's not active anymore. This sounds like your window is no real window. Have you accidentally set it to be a "Utility Window" in Interface Builder? Could you try to manually set the window level, using setLevel: to NSNormalWindowLevel before the window is displayed on screen for the first time whether it still goes away when becoming inactive?
| NSWindow launched from statusItem menuItem does not appear as active window | I have a statusItem application written in PyObjC. The statusItem has a menuItem which is supposed to launch a new window when it is clicked:
# Create statusItem
statusItem = NSStatusBar.systemStatusBar().statusItemWithLength_(NSVariableStatusItemLength)
statusItem.setHighlightMode_(TRUE)
statusItem.setEnabled_(TRUE)
statusItem.retain()
# Create menuItem
menu = NSMenu.alloc().init()
menuitem = NSMenuItem.alloc().initWithTitle_action_keyEquivalent_('Preferences', 'launchPreferences:', '')
menu.addItem_(menuitem)
statusItem.setMenu_(menu)
The launchPreferences: method is:
def launchPreferences_(self, notification):
preferences = Preferences.alloc().initWithWindowNibName_('Preferences')
preferences.showWindow_(self)
Preferences is an NSWindowController class:
class Preferences(NSWindowController):
When I run the application in XCode (Build & Go), this works fine. However, when I run the built .app file externally from XCode, the statusItem and menuItem appear as expected but when I click on the Preferences menuItem the window does not appear. I have verified that the launchPreferences code is running by checking console output.
Further, if I then double click the .app file again, the window appears but if I change the active window away by clicking, for example, on a Finder window, the preferences window disappears. This seems to me to be something to do with the active window.
Update 1
I have tried these two answers but neither work. If I add in to the launchPreferences method:
preferences.makeKeyAndOrderFront_()
or
preferences.setLevel_(NSNormalWindowLevel)
then I just get an error:
'Preferences' object has no attribute
| [
"You need to send the application an activateIgnoringOtherApps: message and then send the window makeKeyAndOrderFront:. \nIn Objective-C this would be:\n[NSApp activateIgnoringOtherApps:YES];\n[[self window] makeKeyAndOrderFront:self];\n\n",
"I have no idea of PyObjC, never used that, but if this was Objective-C code, I'd say you should call makeKeyAndOrderFront: on the window object if you want it to become the very first front window. A newly created window needs to be neither key, nor front, unless you make it either or like in this case, both.\nThe other issue that worries me is that you say the window goes away (gets invisible) when it's not active anymore. This sounds like your window is no real window. Have you accidentally set it to be a \"Utility Window\" in Interface Builder? Could you try to manually set the window level, using setLevel: to NSNormalWindowLevel before the window is displayed on screen for the first time whether it still goes away when becoming inactive?\n"
] | [
5,
1
] | [] | [] | [
"cocoa",
"pyobjc",
"python"
] | stackoverflow_0000152344_cocoa_pyobjc_python.txt |
Q:
Is it OK to inspect properties beginning with underscore?
I've been working on a very simple crud generator for pylons. I came up with something that inspects
SomeClass._sa_class_manager.mapper.c
Is it ok to inspect this (or to call methods begining with underscore)? I always kind of assumed this is legal though frowned upon as it relies heavily on the internal structure of a class/object. But hey, since python does not really have interfaces in the Java sense maybe it is OK.
A:
It is intentional (in Python) that there are no "private" scopes. It is a convention that anything that starts with an underscore should not ideally be used, and hence you cannot complain if its behavior or definition changes in a future version.
A:
In general, this usually indicates that the method is effectively internal, rather than part of the documented interface, and should not be relied on. Future versions of the library are free to rename or remove such methods, so if you care about future compatibility without having to rewrite, avoid doing it.
A:
If it works, why not? You could have problems though when _sa_class_manager gets restructured, binding yourself to this specific version of SQLAlchemy, or creating more work to track the changes. As SQLAlchemy is a fast moving target, you may be there in a year already.
The preferable way would be to integrate your desired API into SQLAlchemy itself.
A:
It's generally not a good idea, for reasons already mentioned. However, Python deliberately allows this behaviour in case there is no other way of doing something.
For example, if you have a closed-source compiled Python library where the author didn't think you'd need direct access to a certain object's internal state—but you really do—you can still get at the information you need. You have the same problems mentioned before of keeping up with different versions (if you're lucky enough that it's still maintained) but at least you can actually do what you wanted to do.
| Is it OK to inspect properties beginning with underscore? | I've been working on a very simple crud generator for pylons. I came up with something that inspects
SomeClass._sa_class_manager.mapper.c
Is it ok to inspect this (or to call methods begining with underscore)? I always kind of assumed this is legal though frowned upon as it relies heavily on the internal structure of a class/object. But hey, since python does not really have interfaces in the Java sense maybe it is OK.
| [
"It is intentional (in Python) that there are no \"private\" scopes. It is a convention that anything that starts with an underscore should not ideally be used, and hence you may not complain if its behavior or definition changes in a next version.\n",
"In general, this usually indicates that the method is effectively internal, rather than part of the documented interface, and should not be relied on. Future versions of the library are free to rename or remove such methods, so if you care about future compatability without having to rewrite, avoid doing it.\n",
"If it works, why not? You could have problems though when _sa_class_manager gets restructured, binding yourself to this specific version of SQLAlchemy, or creating more work to track the changes. As SQLAlchemy is a fast moving target, you may be there in a year already.\nThe preferable way would be to integrate your desired API into SQLAlchemy itself.\n",
"It's generally not a good idea, for reasons already mentioned. However, Python deliberately allows this behaviour in case there is no other way of doing something.\nFor example, if you have a closed-source compiled Python library where the author didn't think you'd need direct access to a certain object's internal state—but you really do—you can still get at the information you need. You have the same problems mentioned before of keeping up with different versions (if you're lucky enough that it's still maintained) but at least you can actually do what you wanted to do.\n"
] | [
9,
9,
0,
0
] | [] | [] | [
"pylons",
"python",
"sqlalchemy"
] | stackoverflow_0000152068_pylons_python_sqlalchemy.txt |
Q:
In Django, where is the best place to put short snippets of HTML-formatted data?
This question is related to (but perhaps not quite the same as):
Does Django have HTML helpers?
My problem is this: In Django, I am constantly reproducing the basic formatting for low-level database objects. Here's an example:
I have two classes, Person and Address. There are multiple Addresses for each Person, setup likeso (in their respective models.py)
class Person(models.Model):
...
class Address(models.Model):
contact = models.ForeignKey(Person)
Now, whenever I look at a Person, I want to see all their Addresses. So suppose Persons/views.py has something likeso:
def detail(request, person_id):
person = get_object_or_404( Person, pk=person_id )
return render_to_response('persons/details.html',
{ 'title' : unicode(person), 'addresses': person.address_set.all() } )
And, I have a template, persons/details.html, with code, for example, like-so:
{% extends "base.html" %}
{% for address in addresses %}
<b>{{ address.name }}</b>
{{ address.type }} <br>
{{ address.street_1 }}<br>
{{ address.street_2 }}<br>
{{ address.city }} {{ address.stateprov }} {{ address.postalcode }}<br>
{{ address.country }}
<hr>
{% endfor %}
I am repeating this code quite a bit, often with minor variations, such when it's in a table, and then < br > must be substituted by < /td >< td >. Other times, I don't want a street_2 to display (or the < br > after it). All to say, there is fundamental logic that I want to express, that I am even more loath to tote around with block-and-copy!
What I want is a persons/details.html with, for example, the following:
{% extends "base.html" %}
{% for address in addresses %}
{% address.as_html4 %}
{% endfor %}
And if I want inline table, something likeso (I guess!):
{% extends "base.html" %}
<table><tr>
{% for address in addresses %}
<tr><td> {% address.as_html4 </td><td> %} </td></tr>
{% endfor %}
</table>
The question is, then: Where is the best place to put the formatting? The logic?
Django seem to have the following (plausible) options:
Put the formatting in models.py
Put the logic/formatting in views.py
Put the logic/formatting in some other sub-class of Person or Address (i.e. addresses/html4.py)
Create custom tags
Help / insight much appreciated!
A:
Sounds like an inclusion tag is what you're looking for. You could have a template and tag for each major variation and use the tag's arguments to customise the context for each template as required.
Basic tag definition:
@register.inclusion_tag('person/address.html')
def display_address(address):
return {'address': address}
Use in templates (assuming the templatetag module containing it has already been {% load %}-ed):
{% display_address address %}
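If the variations are minor, one hypothetical refinement is a flag argument that the included template can branch on (the 'in_table' name is made up):
@register.inclusion_tag('person/address.html')
def display_address(address, in_table=False):
    # person/address.html can switch between <br> and <td> markup on this flag
    return {'address': address, 'in_table': in_table}

Then {% display_address address flag %} picks the layout, with flag supplied by the view's context.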
A:
I would use a template tag outputting data using a template HTML file, a.k.a. an inclusion tag.
A:
I think a template filter will be useful too. You can apply a filter to each value, for example:
{{ value|linebreaks }} # standard django filter
Will produce:
If value is Joel\nis a slug, the output will be <p>Joel<br>is a slug</p>.
See Django Built-in template tags and filters complete reference.
| In Django, where is the best place to put short snippets of HTML-formatted data? | This question is related to (but perhaps not quite the same as):
Does Django have HTML helpers?
My problem is this: In Django, I am constantly reproducing the basic formatting for low-level database objects. Here's an example:
I have two classes, Person and Address. There are multiple Addresses for each Person, setup likeso (in their respective models.py)
class Person(models.Model):
...
class Address(models.Model):
contact = models.ForeignKey(Person)
Now, whenever I look at a Person, I want to see all their Addresses. So suppose Persons/views.py has something likeso:
def detail(request, person_id):
person = get_object_or_404( Person, pk=person_id )
return render_to_response('persons/details.html',
{ 'title' : unicode(person), 'addresses': person.address_set.all() } )
And, I have a template, persons/details.html, with code, for example, like-so:
{% extends "base.html" %}
{% for address in addresses %}
<b>{{ address.name }}</b>
{{ address.type }} <br>
{{ address.street_1 }}<br>
{{ address.street_2 }}<br>
{{ address.city }} {{ address.stateprov }} {{ address.postalcode }}<br>
{{ address.country }}
<hr>
{% endfor %}
I am repeating this code quite a bit, often with minor variations, such when it's in a table, and then < br > must be substituted by < /td >< td >. Other times, I don't want a street_2 to display (or the < br > after it). All to say, there is fundamental logic that I want to express, that I am even more loath to tote around with block-and-copy!
What I want is a persons/details.html with, for example, the following:
{% extends "base.html" %}
{% for address in addresses %}
{% address.as_html4 %}
{% endfor %}
And if I want inline table, something likeso (I guess!):
{% extends "base.html" %}
<table><tr>
{% for address in addresses %}
<tr><td> {% address.as_html4 </td><td> %} </td></tr>
{% endfor %}
</table>
The question is, then: Where is the best place to put the formatting? The logic?
Django seem to have the following (plausible) options:
Put the formatting in models.py
Put the logic/formatting in views.py
Put the logic/formatting in some other sub-class of Person or Address (i.e. addresses/html4.py)
Create custom tags
Help / insight much appreciated!
| [
"Sounds like an inclusion tag is what you're looking for. You could have a template and tag for each major variation and use the tag's arguments to customise the context for each template as required.\nBasic tag definition:\n@register.inclusion_tag('person/address.html')\ndef display_address(address):\n return {'address': address}\n\nUse in templates (assuming the templatetag module containing it has already been {% load %}-ed):\n{% display_address address %}\n\n",
"I would use a template tag outputting data using a template html-file a k a inclusion-tag\n",
"I think template filter will be useful too. You can pass filter on each object, for example:\n{{ value|linebreaks }} # standard django filter\n\nWill produce:\nIf value is Joel\\nis a slug, the output will be <p>Joel<br>is a slug</p>.\n\nSee Django Built-in template tags and filters complete reference.\n"
] | [
13,
2,
1
] | [] | [] | [
"design_patterns",
"django",
"model_view_controller",
"python"
] | stackoverflow_0000146789_design_patterns_django_model_view_controller_python.txt |
Q:
What is the preferred way to redirect a request in Pylons without losing form data?
I'm trying to redirect/forward a Pylons request. The problem with using redirect_to is that form data gets dropped. I need to keep the POST form data intact as well as all request headers.
Is there a simple way to do this?
A:
Receiving data from a POST depends on the web browser sending data along. When the web browser receives a redirect, it does not resend that data along. One solution would be to URL encode the data you want to keep and use that with a GET. In the worst case, you could always add the data you want to keep to the session and pass it that way.
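A sketch of the session route (the controller actions and session key are illustrative):
from pylons import request, session
from pylons.controllers.util import redirect_to

def submit(self):
    # Stash the POST data, then redirect with an ordinary GET.
    session['saved_form'] = dict(request.POST)
    session.save()
    return redirect_to(controller='orders', action='confirm')

def confirm(self):
    form_data = session.get('saved_form', {})
    # ... render using form_data ...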
| What is the preferred way to redirect a request in Pylons without losing form data? | I'm trying to redirect/forward a Pylons request. The problem with using redirect_to is that form data gets dropped. I need to keep the POST form data intact as well as all request headers.
Is there a simple way to do this?
| [
"Receiving data from a POST depends on the web browser sending data along. When the web browser receives a redirect, it does not resend that data along. One solution would be to URL encode the data you want to keep and use that with a GET. In the worst case, you could always add the data you want to keep to the session and pass it that way.\n"
] | [
2
] | [] | [] | [
"header",
"post",
"pylons",
"python",
"request"
] | stackoverflow_0000153773_header_post_pylons_python_request.txt |
Q:
Extension functions and 'help'
When I call
help(Mod.Cls.f)
(Mod is a C extension module), I get the output
Help on method_descriptor:
f(...)
doc_string
What do I need to do so that the help output is of the form
Help on method f in module Mod:
f(x, y, z)
doc_string
like it is for random.Random.shuffle, for example?
My PyMethodDef entry is currently:
{ "f", f, METH_VARARGS, "doc_string" }
A:
You cannot. The inspect module, which is what 'pydoc' and 'help()' use, has no way of figuring out what the exact signature of a C function is. The best you can do is what the builtin functions do: include the signature in the first line of the docstring:
>>> help(range)
Help on built-in function range in module __builtin__:
range(...)
range([start,] stop[, step]) -> list of integers
...
The reason random.shuffle's docstring looks "correct" is that it isn't a C function. It's a function written in Python.
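Applied to the PyMethodDef entry from the question, the convention is just a matter of prefixing the docstring with the signature (a sketch):
/* help() will show the first docstring line as the signature */
{ "f", f, METH_VARARGS, "f(x, y, z)\n\ndoc_string" }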
A:
Thomas's answer is right on, of course.
I would simply add that many C extension modules have a Python "wrapper" around them so that they can support standard function signatures and other dynamic-language features (such as the descriptor protocol).
| Extension functions and 'help' | When I call
help(Mod.Cls.f)
(Mod is a C extension module), I get the output
Help on method_descriptor:
f(...)
doc_string
What do I need to do so that the help output is of the form
Help on method f in module Mod:
f(x, y, z)
doc_string
like it is for random.Random.shuffle, for example?
My PyMethodDef entry is currently:
{ "f", f, METH_VARARGS, "doc_string" }
| [
"You cannot. The inspect module, which is what 'pydoc' and 'help()' use, has no way of figuring out what the exact signature of a C function is. The best you can do is what the builtin functions do: include the signature in the first line of the docstring:\n>>> help(range)\nHelp on built-in function range in module __builtin__:\n\nrange(...)\n range([start,] stop[, step]) -> list of integers\n\n...\n\nThe reason random.shuffle's docstring looks \"correct\" is that it isn't a C function. It's a function written in Python.\n",
"Thomas's answer is right on, of course.\nI would simply add that many C extension modules have a Python \"wrapper\" around them so that they can support standard function signatures and other dynamic-language features (such as the descriptor protocol).\n"
] | [
2,
1
] | [] | [] | [
"cpython",
"python"
] | stackoverflow_0000153227_cpython_python.txt |
Q:
Running a Python web server as a service in Windows
I have a small web server application I've written in Python that goes and gets some data from a database system and returns it to the user as XML. That part works fine - I can run the Python web server application from the command line and I can have clients connect to it and get data back. At the moment, to run the web server I have to be logged in to our server as the administrator user and I have to manually start the web server. I want to have the web server automatically start on system start as a service and run in the background.
Using code from ActiveState's site and StackOverflow, I have a pretty good idea of how to go about creating a service, and I think I've got that bit sorted - I can install and start my web server as a Windows service. I can't, however, figure out how to stop the service again. My web server is created from a BaseHTTPServer:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
The serve_forever() call, naturally enough, makes the web server sit in an infinite loop and wait for HTTP connections (or a ctrl-break keypress, not useful for a service). I get the idea from the example code above that your main() function is supposed to sit in an infinite loop and only break out of it when it comes accross a "stop" condition. My main calls serve_forever(). I have a SvcStop function:
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
Which seems to get called when I do "python myservice stop" from the command line (I can put a debug line in there that produces output to a file) but doesn't actually exit the whole service - subsequent calls to "python myservice start" gives me an error:
Error starting service: An instance of
the service is already running.
and subsequent calls to stop gives me:
Error stopping service: The service
cannot accept control messages at this
time. (1061)
I think I need either some replacement for serve_forever (serve_until_stop_received, or whatever) or I need some way of modifying SvcStop so it stops the whole service.
Here's a full listing (I've trimmed includes/comments to save space):
class SIMSAPIServerHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
try:
reportTuple = self.path.partition("/")
if len(reportTuple) < 3:
return
if reportTuple[2] == "":
return
os.system("C:\\Programs\\SIMSAPI\\runCommandReporter.bat " + reportTuple[2])
f = open("C:\\Programs\\SIMSAPI\\out.xml", "rb")
self.send_response(200)
self.send_header('Content-type', "application/xml")
self.end_headers()
self.wfile.write(f.read())
f.close()
# The output from CommandReporter is simply dumped to out.xml, which we read, write to the user, then remove.
os.unlink("C:\\Programs\\SIMSAPI\\out.xml")
return
except IOError:
self.send_error(404,'File Not Found: %s' % self.path)
class SIMSAPI(win32serviceutil.ServiceFramework):
_svc_name_ = "SIMSAPI"
_svc_display_name_ = "A simple web server"
_svc_description_ = "Serves XML data produced by SIMS CommandReporter"
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
def SvcDoRun(self):
import servicemanager
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,servicemanager.PYS_SERVICE_STARTED,(self._svc_name_, ''))
self.timeout = 3000
while 1:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
def ctrlHandler(ctrlType):
return True
if __name__ == '__main__':
win32api.SetConsoleCtrlHandler(ctrlHandler, True)
win32serviceutil.HandleCommandLine(SIMSAPI)
A:
This is what I do:
Instead of instantiating the class BaseHTTPServer.HTTPServer directly, I write a new descendant of it that publishes a "stop" method:
class AppHTTPServer (SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
def serve_forever(self):
self.stop_serving = False
while not self.stop_serving:
self.handle_request()
    def stop(self):
self.stop_serving = True
And then, in the method SvcStop that you already have, I call that method to break the serve_forever() loop:
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
self.httpd.stop()
(self.httpd is the instance of AppHTTPServer() that implements the webserver)
If you use setDaemon() correctly on the background threads, and interrupt correctly all the loops in the service, then the instruction
exit(0)
in SvcStop() should not be necessary
| Running a Python web server as a service in Windows | I have a small web server application I've written in Python that goes and gets some data from a database system and returns it to the user as XML. That part works fine - I can run the Python web server application from the command line and I can have clients connect to it and get data back. At the moment, to run the web server I have to be logged in to our server as the administrator user and I have to manually start the web server. I want to have the web server automatically start on system start as a service and run in the background.
Using code from ActiveState's site and StackOverflow, I have a pretty good idea of how to go about creating a service, and I think I've got that bit sorted - I can install and start my web server as a Windows service. I can't, however, figure out how to stop the service again. My web server is created from a BaseHTTPServer:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
The serve_forever() call, naturally enough, makes the web server sit in an infinite loop and wait for HTTP connections (or a ctrl-break keypress, not useful for a service). I get the idea from the example code above that your main() function is supposed to sit in an infinite loop and only break out of it when it comes accross a "stop" condition. My main calls serve_forever(). I have a SvcStop function:
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
Which seems to get called when I do "python myservice stop" from the command line (I can put a debug line in there that produces output to a file) but doesn't actually exit the whole service - subsequent calls to "python myservice start" gives me an error:
Error starting service: An instance of
the service is already running.
and subsequent calls to stop gives me:
Error stopping service: The service
cannot accept control messages at this
time. (1061)
I think I need either some replacement for serve_forever (serve_until_stop_received, or whatever) or I need some way of modifying SvcStop so it stops the whole service.
Here's a full listing (I've trimmed includes/comments to save space):
class SIMSAPIServerHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
try:
reportTuple = self.path.partition("/")
if len(reportTuple) < 3:
return
if reportTuple[2] == "":
return
os.system("C:\\Programs\\SIMSAPI\\runCommandReporter.bat " + reportTuple[2])
f = open("C:\\Programs\\SIMSAPI\\out.xml", "rb")
self.send_response(200)
self.send_header('Content-type', "application/xml")
self.end_headers()
self.wfile.write(f.read())
f.close()
# The output from CommandReporter is simply dumped to out.xml, which we read, write to the user, then remove.
os.unlink("C:\\Programs\\SIMSAPI\\out.xml")
return
except IOError:
self.send_error(404,'File Not Found: %s' % self.path)
class SIMSAPI(win32serviceutil.ServiceFramework):
_svc_name_ = "SIMSAPI"
_svc_display_name_ = "A simple web server"
_svc_description_ = "Serves XML data produced by SIMS CommandReporter"
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
def SvcDoRun(self):
import servicemanager
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,servicemanager.PYS_SERVICE_STARTED,(self._svc_name_, ''))
self.timeout = 3000
while 1:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
def ctrlHandler(ctrlType):
return True
if __name__ == '__main__':
win32api.SetConsoleCtrlHandler(ctrlHandler, True)
win32serviceutil.HandleCommandLine(SIMSAPI)
| [
"This is what I do:\nInstead of instancing directly the class BaseHTTPServer.HTTPServer, I write a new descendant from it that publishes an \"stop\" method:\nclass AppHTTPServer (SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):\n def serve_forever(self):\n self.stop_serving = False\n while not self.stop_serving:\n self.handle_request()\n\n def stop (self):\n self.stop_serving = True\n\nAnd then, in the method SvcStop that you already have, I call that method to break the serve_forever() loop:\ndef SvcStop(self):\n self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)\n self.httpd.stop()\n\n(self.httpd is the instance of AppHTTPServer() that implements the webserver)\nIf you use setDaemon() correctly on the background threads, and interrupt correctly all the loops in the service, then the instruction \nexit(0)\n\nin SvcStop() should not be necessary \n"
] | [
4
] | [] | [] | [
"python",
"webserver",
"windows"
] | stackoverflow_0000153221_python_webserver_windows.txt |
Q:
Python GUI Application redistribution
I need to develop a small-medium sized desktop GUI application, preferably with Python as a language of choice because of time constraints.
What GUI library choices do I have which allow me to redistribute my application standalone, assuming that the users don't have a working Python installation and obviously don't have the GUI libraries I'm using either?
Also, how would I go about packaging everything up in binaries of reasonable size for each target OS? (my main targets are Windows and Mac OS X)
Addition:
I've been looking at WxPython, but I've found plenty of horror stories of packaging it with cx_freeze and getting 30mb+ binaries, and no real advice on how to actually do the packaging and how trust-worthy it is.
A:
This may help:
How can I make an EXE file from a Python program?
A:
http://wiki.wxpython.org/CreatingStandaloneExecutables
It shouldn't be that large unless you have managed to include the debug build of wx.
I seem to remember about 4 MB for the python.dll and similar for wx.
A:
Python has an embedded GUI toolkit named Tkinter, which is based on the Tk library from the Tcl programming language. It is very basic and does not have all the functionality you expect in Windows Forms or GTK, for example, but if you must have a platform-independent toolkit I see no other choice, bearing in mind that you also don't want the binary to grow that much.
Tkinter is not hard at all to use, since it doesn't have millions of widgets/controls and options, and it is the default toolkit included in most Python distributions, at least on Windows, OSX and Linux.
GTK and QT are prettier and more powerful, but they have one big disadvantage for you: they are heavy and depend upon third-party libraries, especially GTK, which has a lot of dependencies that make it a little hard to distribute embedded in your software.
As for the binary creation, I know there is py2exe which converts Python code to a win32 executable (.exe), but I'm not sure if there is something similar for OSX. Are you worried because people could see the source code, or do you just want to bundle everything in a single package? If you just want to bundle everything, you don't need to create a standalone executable; you could easily create an installer:
Creating distributable in python
That's a guide on how to distribute your software when it's done.
A:
http://Gajim.org for Windows uses Python and PyGtk. You can check how they did it. Also, there's PyQt for GUI (and wxPython, mentioned earlier).
A:
I don't have any experience building stand-alone apps for any platform other than Windows.
That said:
Tkinter: works fine with py2exe. Python Megawidgets (an "expansion library" for Tkinter) works fine also, but it does funky things with dynamic imports, so you need to combine all the components into a big file "pmw.py" and add it to your project (well, you'll also have pmwblt.py and pmwcolor.py). There are instructions for how to do this somewhere (either on py2exe wiki or in the PMW docs). Tix (an extension to Tk that you can use with Tkinter) doesn't work with py2exe, or at least that was my experience about four years ago.
wxPython also works fine with py2exe. I just checked an app I have; the whole distribution came to around 11MB. Most of that was the wx DLLs and .pyd files, but I can't see how you'd avoid that. If you are targeting Windows XP, you need to include a manifest in your setup.py or else it will look ugly. See this email for details.
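For reference, a sketch of what that setup.py recipe typically looks like — app.py is a placeholder script name, and the manifest is the usual Windows XP common-controls one (resource type 24 is RT_MANIFEST):
from distutils.core import setup
import py2exe

# XP visual-styles manifest so the frozen app gets themed widgets
manifest = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32"
          name="Microsoft.Windows.Common-Controls" version="6.0.0.0"
          processorArchitecture="X86"
          publicKeyToken="6595b64144ccf1df" language="*"/>
    </dependentAssembly>
  </dependency>
</assembly>"""

setup(
    windows=[{"script": "app.py",
              "other_resources": [(24, 1, manifest)]}],  # 24 = RT_MANIFEST
)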
A:
I've used py2exe myself - it's really easy (at least for small apps).
A:
Combination that I am familiar with: wxPython, py2exe, upx
The key to resolving your last concern about the size of the distribution is using upx to compress the DLLs. It looks like they support MacOS executables. You will pay an initial decompression penalty when the DLLs are first loaded.
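A sketch of what that build might look like; the py2exe option names are the documented ones, app.py is a placeholder, and upx is run as a separate step once the dist directory exists:
# setup.py -- let py2exe compress what it can, then shrink the
# remaining DLLs with upx after the build
from distutils.core import setup
import py2exe

setup(
    windows=["app.py"],
    options={"py2exe": {"compressed": 1,      # zip-compress library.zip
                        "optimize": 2,        # drop docstrings, like -OO
                        "bundle_files": 3}},  # leave DLLs as separate files
)
# afterwards, from a shell:  upx --best dist/*.dll dist/*.pyd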
| Python GUI Application redistribution | I need to develop a small-to-medium-sized desktop GUI application, preferably with Python as the language of choice because of time constraints.
What GUI library choices do I have which allow me to redistribute my application standalone, assuming that the users don't have a working Python installation and obviously don't have the GUI libraries I'm using either?
Also, how would I go about packaging everything up in binaries of reasonable size for each target OS? (my main targets are Windows and Mac OS X)
Addition:
I've been looking at wxPython, but I've found plenty of horror stories of packaging it with cx_freeze and getting 30 MB+ binaries, and no real advice on how to actually do the packaging or how trustworthy it is.
| [
"This may help:\nHow can I make an EXE file from a Python program?\n",
"http://wiki.wxpython.org/CreatingStandaloneExecutables\nIt shouldn't be that large unless you have managed to include the debug build of wx.\nI seem to rememebr about 4Mb for the python.dll and similair for wx.\n",
"Python has an embedded GUI toolkit named TKinter which is based on Tk library from TCL programming language. It is very basic and does not have all the functionality you expect in Windows Forms or GTK for example but if you must have platform independent toolkit I see no other choice taking in mind that you also dont want to grow that much the binary.\nTkinter is not hard at all to use since it doesnt have millions of widgets/controls and options and is the default toolkit included in most python distributions, at least on Windows, OSX and Linux.\nGTK and QT are prettier and more powerful but they have a one big disadvantage for you: they are heavy and deppend upon third libraries, especially GTK which has a lot of dependencies that makes it a little hard to distribute it embeded in your software.\nAs for the binary creation I know there is py2exe which converts python code to win32 executable code (.exe's) but im not sure if there is something similar for OSX. Are you worried because people could see the source code or just so you can bundle all in a single package? If you just want to bundle everything you dont need to create a standalone executable, you could easily create an installer:\nCreating distributable in python\nThat's a guide on how to distribute your software when it's done.\n",
"http://Gajim.org for Windows uses python and PyGtk. You can check, how they did it. Also, there's PyQt for GUI (and wxpython mentioned earlier).\n",
"I don't have any experience building stand-alone apps for any platform other than Windows.\nThat said:\nTkinter: works fine with py2exe. Python Megawidgets (an \"expansion library\" for Tkinter) works fine also, but it does funky things with dynamic imports, so you need to combine all the components into a big file \"pmw.py\" and add it to your project (well, you'll also have pmwblt.py and pmwcolor.py). There are instructions for how to do this somewhere (either on py2exe wiki or in the PMW docs). Tix (an extension to Tk that you can use with Tkinter) doesn't work with py2exe, or at least that was my experience about four years ago.\nwxPython also works fine with py2exe. I just checked an app I have; the whole distribution came to around 11MB. Most of that was the wx DLLs and .pyd files, but I can't see how you'd avoid that. If you are targetting Windows XP, you need to include a manifest in your setup.py or else it will look ugly. See this email for details.\n",
"I've used py2Exe myself - it's really easy (at least for small apps).\n",
"Combination that I am familiar with: wxPython, py2exe, upx\nThe key to resolving your last concern about the size of the distribution is using upx to compress the DLLs. It looks like they support MacOS executables. You will pay an initial decompression penalty when the DLLs are first loaded.\n"
] | [
6,
6,
2,
1,
1,
0,
0
] | [] | [] | [
"distribution",
"freeze",
"python",
"user_interface",
"wxpython"
] | stackoverflow_0000153956_distribution_freeze_python_user_interface_wxpython.txt |
Q:
Programmatically focusing a hippo.CanvasEntry?
Consider this Python program which uses PyGtk and Hippo Canvas to display a clickable text label. Clicking the text label replaces it with a Hippo CanvasEntry widget which contains the text of the label.
import pygtk
pygtk.require('2.0')
import gtk, hippo
def textClicked(text, event, row):
input = hippo.CanvasEntry()
input.set_property('text', text.get_property('text'))
parent = text.get_parent()
parent.insert_after(input, text)
parent.remove(text)
def main():
canvas = hippo.Canvas()
root = hippo.CanvasBox()
canvas.set_root(root)
text = hippo.CanvasText(text=u'Some text')
text.connect('button-press-event', textClicked, text)
root.append(text)
window = gtk.Window()
window.connect('destroy', lambda ignored: gtk.main_quit())
window.add(canvas)
canvas.show()
window.show()
gtk.main()
if __name__ == '__main__':
main()
How can the CanvasEntry created when the text label is clicked be automatically focused at creation time?
A:
Underneath the CanvasEntry, there's a regular old gtk.Entry, which needs to grab the focus as soon as it's made visible; deferring the grab to the realize signal ensures the widget actually exists before the focus request is made. Here's a modified version of your textClicked function which does just that:
def textClicked(text, event, row):
input = hippo.CanvasEntry()
input.set_property('text', text.get_property('text'))
entry = input.get_property("widget")
def grabit(widget):
entry.grab_focus()
entry.connect("realize", grabit)
parent = text.get_parent()
parent.insert_after(input, text)
parent.remove(text)
| Programmatically focusing a hippo.CanvasEntry? | Consider this Python program which uses PyGtk and Hippo Canvas to display a clickable text label. Clicking the text label replaces it with a Hippo CanvasEntry widget which contains the text of the label.
import pygtk
pygtk.require('2.0')
import gtk, hippo
def textClicked(text, event, row):
input = hippo.CanvasEntry()
input.set_property('text', text.get_property('text'))
parent = text.get_parent()
parent.insert_after(input, text)
parent.remove(text)
def main():
canvas = hippo.Canvas()
root = hippo.CanvasBox()
canvas.set_root(root)
text = hippo.CanvasText(text=u'Some text')
text.connect('button-press-event', textClicked, text)
root.append(text)
window = gtk.Window()
window.connect('destroy', lambda ignored: gtk.main_quit())
window.add(canvas)
canvas.show()
window.show()
gtk.main()
if __name__ == '__main__':
main()
How can the CanvasEntry created when the text label is clicked be automatically focused at creation time?
| [
"Underneath the CanvasEntry, there's a regular old gtk.Entry which you need to request the focus as soon as it's made visible. Here's a modified version of your textClicked function which does just that:\ndef textClicked(text, event, row):\n input = hippo.CanvasEntry()\n input.set_property('text', text.get_property('text'))\n entry = input.get_property(\"widget\")\n def grabit(widget):\n entry.grab_focus()\n entry.connect(\"realize\", grabit)\n parent = text.get_parent()\n parent.insert_after(input, text)\n parent.remove(text)\n\n"
] | [
2
] | [] | [] | [
"focus",
"pygtk",
"python"
] | stackoverflow_0000155822_focus_pygtk_python.txt |
Q:
.order_by() isn't working how it should / how I expect it to
In my Django project I am using Product.objects.all().order_by('order') in a view, but it doesn't seem to be working properly.
This is its output:
Product Name
Sort
Evolution
2
Polarity
1
Jumbulaya
3
Kalidascope
4
It should look like this:
Product Name
Sort
Polarity
1
Evolution
2
Jumbulaya
3
Kalidascope
4
But it doesn't. Any ideas?
My view (for that output):
def debug(request):
order = Product.objects.all().order_by('order')
    return render_to_response('cms/debug.html', {'order' : order})
And the view responsible for saving the order field:
def manage_all(request):
if request.method == 'POST':
PostEntries = len(request.POST)
x = 1
while x < PostEntries:
p = Product.objects.get(pk=x)
p.order = int(request.POST.get(str(x),''))
print "Itr: " + str(x)
x = x + 1
p.save()
print "Product Order saved"
return HttpResponse("Saved")
And the model (without the boring bits):
class Product(models.Model):
name = models.CharField(max_length=100)
    order = models.IntegerField(blank = True, null = True)
Here is a 'live' example of the page http://massiveatom.com:8080/debug/ Please note that that is only running on the dev server, so it may not always be up.
I have asked in #django and they didn't seem to know what was going on. One thought was that the database/Django was being confused by the SQL command it is generating (select * from table where 1 order by 'order'), but I would prefer not to change the order field in the model.
And I know there should be back-ticks surrounding order in the above SQL command, but the syntax parsing thingy kinda hated on it...
Edit: Each object has the correct value, so I don't really know why it isn't sorting it properly.
Edit 2: I don't know what was going on, but it turns out putting p.save() in the loop fixed it all...
A:
Your saving loop is wrong. You save Product outside of the loop. It should be:
if request.method == 'POST':
PostEntries = len(request.POST)
x = 1
while x < PostEntries:
p = Product.objects.get(pk=x)
p.order = int(request.POST.get(str(x),''))
print "Itr: " + str(x)
x = x + 1
p.save() # NOTE HERE <- saving in loop instead of outside
print "Product Order saved"
return HttpResponse("Saved")
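If the POST keys are themselves the product ids, here is a variant sketch that avoids the manual counter altogether (this assumes every key in request.POST is a product primary key, which may not hold if the form posts anything else):
def manage_all(request):
    if request.method == 'POST':
        for pk, new_order in request.POST.items():
            p = Product.objects.get(pk=int(pk))
            p.order = int(new_order)
            p.save()  # saved inside the loop, once per product
        return HttpResponse("Saved")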
| .order_by() isn't working how it should / how I expect it to | In my Django project I am using Product.objects.all().order_by('order') in a view, but it doesn't seem to be working properly.
This is its output:
Product Name
Sort
Evolution
2
Polarity
1
Jumbulaya
3
Kalidascope
4
It should look like this:
Product Name
Sort
Polarity
1
Evolution
2
Jumbulaya
3
Kalidascope
4
But it doesn't. Any ideas?
My view (for that output):
def debug(request):
order = Product.objects.all().order_by('order')
    return render_to_response('cms/debug.html', {'order' : order})
And the view responsible for saving the order field:
def manage_all(request):
if request.method == 'POST':
PostEntries = len(request.POST)
x = 1
while x < PostEntries:
p = Product.objects.get(pk=x)
p.order = int(request.POST.get(str(x),''))
print "Itr: " + str(x)
x = x + 1
p.save()
print "Product Order saved"
return HttpResponse("Saved")
And the model (without the boring bits):
class Product(models.Model):
name = models.CharField(max_length=100)
    order = models.IntegerField(blank = True, null = True)
Here is a 'live' example of the page http://massiveatom.com:8080/debug/ Please note that that is only running on the dev server, so it may not always be up.
I have asked in #django and they didn't seem to know what was going on. One thought was that the database/Django was being confused by the SQL command it is generating (select * from table where 1 order by 'order'), but I would prefer not to change the order field in the model.
And I know there should be back-ticks surrounding order in the above SQL command, but the syntax parsing thingy kinda hated on it...
Edit: Each object has the correct value, so I don't really know why it isn't sorting it properly.
Edit 2: I don't know what was going on, but it turns out putting p.save() in the loop fixed it all...
| [
"Your saving loop is wrong. You save Product outside of the loop. It should be:\nif request.method == 'POST':\n PostEntries = len(request.POST)\n x = 1 \n while x < PostEntries:\n p = Product.objects.get(pk=x)\n p.order = int(request.POST.get(str(x),''))\n print \"Itr: \" + str(x)\n x = x + 1\n p.save() # NOTE HERE <- saving in loop instead of outside\n print \"Product Order saved\" \n return HttpResponse(\"Saved\")\n\n"
] | [
5
] | [] | [] | [
"django",
"python"
] | stackoverflow_0000156951_django_python.txt |
Q:
How can I parse a comma-delimited string into a list (caveat)?
I need to be able to take a string like:
'''foo, bar, "one, two", three four'''
into:
['foo', 'bar', 'one, two', 'three four']
I have a feeling (with hints from #python) that the solution is going to involve the shlex module.
A:
It depends how complicated you want to get... do you want to allow more than one type of quoting? How about escaped quotes?
Your syntax looks very much like the common CSV file format, which is supported by the Python standard library:
import csv
reader = csv.reader(['''foo, bar, "one, two", three four'''], skipinitialspace=True)
for r in reader:
print r
Outputs:
['foo', 'bar', 'one, two', 'three four']
HTH!
A:
The shlex module solution allows escaped quotes, one quote escaping another, and all the fancy stuff shell supports.
>>> import shlex
>>> my_splitter = shlex.shlex('''foo, bar, "one, two", three four''', posix=True)
>>> my_splitter.whitespace += ','
>>> my_splitter.whitespace_split = True
>>> print list(my_splitter)
['foo', 'bar', 'one, two', 'three', 'four']
escaped quotes example:
>>> my_splitter = shlex.shlex('''"test, a",'foo,bar",baz',bar \xc3\xa4 baz''',
posix=True)
>>> my_splitter.whitespace = ',' ; my_splitter.whitespace_split = True
>>> print list(my_splitter)
['test, a', 'foo,bar",baz', 'bar \xc3\xa4 baz']
A:
You may also want to consider the csv module. I haven't tried it, but it looks like your input data is closer to CSV than to shell syntax (which is what shlex parses).
A:
You could do something like this:
>>> import re
>>> pattern = re.compile(r'\s*("[^"]*"|.*?)\s*,')
>>> def split(line):
... return [x[1:-1] if x[:1] == x[-1:] == '"' else x
... for x in pattern.findall(line.rstrip(',') + ',')]
...
>>> split("foo, bar, baz")
['foo', 'bar', 'baz']
>>> split('foo, bar, baz, "blub blah"')
['foo', 'bar', 'baz', 'blub blah']
A:
I'd say a regular expression would be what you're looking for here, though I'm not terribly familiar with Python's Regex engine.
Assuming you use lazy matches, you can get a set of matches on a string which you can put into your array.
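For instance, a minimal sketch along those lines (split_quoted is a made-up helper name; the lazy .*? stops at the first unquoted comma, and the empty match at the end of the string is filtered out):
import re

# Match either a double-quoted field or a lazily-grown run of characters,
# in both cases up to the next comma (or the end of the string).
pattern = re.compile(r'\s*("[^"]*"|.*?)\s*(?:,|$)')

def split_quoted(line):
    return [m.strip('"') for m in pattern.findall(line) if m]

print split_quoted('foo, bar, "one, two", three four')
# -> ['foo', 'bar', 'one, two', 'three four']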
| How can I parse a comma-delimited string into a list (caveat)? | I need to be able to take a string like:
'''foo, bar, "one, two", three four'''
into:
['foo', 'bar', 'one, two', 'three four']
I have a feeling (with hints from #python) that the solution is going to involve the shlex module.
| [
"It depends how complicated you want to get... do you want to allow more than one type of quoting. How about escaped quotes?\nYour syntax looks very much like the common CSV file format, which is supported by the Python standard library:\nimport csv\nreader = csv.reader(['''foo, bar, \"one, two\", three four'''], skipinitialspace=True)\nfor r in reader:\n print r\n\nOutputs:\n['foo', 'bar', 'one, two', 'three four']\n\nHTH!\n",
"The shlex module solution allows escaped quotes, one quote escape another, and all fancy stuff shell supports.\n>>> import shlex\n>>> my_splitter = shlex.shlex('''foo, bar, \"one, two\", three four''', posix=True)\n>>> my_splitter.whitespace += ','\n>>> my_splitter.whitespace_split = True\n>>> print list(my_splitter)\n['foo', 'bar', 'one, two', 'three', 'four']\n\nescaped quotes example:\n>>> my_splitter = shlex.shlex('''\"test, a\",'foo,bar\",baz',bar \\xc3\\xa4 baz''',\n posix=True) \n>>> my_splitter.whitespace = ',' ; my_splitter.whitespace_split = True \n>>> print list(my_splitter)\n['test, a', 'foo,bar\",baz', 'bar \\xc3\\xa4 baz']\n\n",
"You may also want to consider the csv module. I haven't tried it, but it looks like your input data is closer to CSV than to shell syntax (which is what shlex parses).\n",
"You could do something like this:\n>>> import re\n>>> pattern = re.compile(r'\\s*(\"[^\"]*\"|.*?)\\s*,')\n>>> def split(line):\n... return [x[1:-1] if x[:1] == x[-1:] == '\"' else x\n... for x in pattern.findall(line.rstrip(',') + ',')]\n... \n>>> split(\"foo, bar, baz\")\n['foo', 'bar', 'baz']\n>>> split('foo, bar, baz, \"blub blah\"')\n['foo', 'bar', 'baz', 'blub blah']\n\n",
"I'd say a regular expression would be what you're looking for here, though I'm not terribly familiar with Python's Regex engine.\nAssuming you use lazy matches, you can get a set of matches on a string which you can put into your array.\n"
] | [
42,
27,
5,
1,
0
] | [
"If it doesn't need to be pretty, this might get you on your way:\ndef f(s, splitifeven):\n if splitifeven & 1:\n return [s]\n return [x.strip() for x in s.split(\",\") if x.strip() != '']\n\nss = 'foo, bar, \"one, two\", three four'\n\nprint sum([f(s, sie) for sie, s in enumerate(ss.split('\"'))], [])\n\n"
] | [
-2
] | [
"escaping",
"python",
"quotes",
"split"
] | stackoverflow_0000118096_escaping_python_quotes_split.txt |
Q:
Python module dependency
OK, I have two modules, each containing a class; the problem is that their classes reference each other.
Let's say, for example, I had a room module and a person module containing CRoom and CPerson.
The CRoom class contains information about the room, and a CPerson list of everyone in the room.
The CPerson class, however, sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room.
The problem is that with the two modules importing each other I just get an import error on whichever is being imported second :(
In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header, e.g.:
class CPerson;//forward declare
class CRoom
{
std::set<CPerson*> People;
...
Is there anyway to do this in python, other than placing both classes in the same module or something like that?
edit: added python example showing problem using above classes
error:
Traceback (most recent call last):
File "C:\Projects\python\test\main.py", line 1, in
from room import CRoom
File "C:\Projects\python\test\room.py", line 1, in
from person import CPerson
File "C:\Projects\python\test\person.py", line 1, in
from room import CRoom
ImportError: cannot import name CRoom
room.py
from person import CPerson
class CRoom:
def __init__(Self):
Self.People = {}
Self.NextId = 0
def AddPerson(Self, FirstName, SecondName, Gender):
Id = Self.NextId
    Self.NextId += 1
Person = CPerson(FirstName,SecondName,Gender,Id)
Self.People[Id] = Person
return Person
def FindDoorAndLeave(Self, PersonId):
    del Self.People[PersonId]
person.py
from room import CRoom
class CPerson:
def __init__(Self, Room, FirstName, SecondName, Gender, Id):
Self.Room = Room
Self.FirstName = FirstName
Self.SecondName = SecondName
Self.Gender = Gender
Self.Id = Id
def Leave(Self):
Self.Room.FindDoorAndLeave(Self.Id)
A:
No need to import CRoom
You don't use CRoom in person.py, so don't import it. Due to dynamic binding, Python doesn't need to "see all class definitions at compile time".
If you actually do use CRoom in person.py, then change from room import CRoom to import room and use module-qualified form room.CRoom. See Effbot's Circular Imports for details.
Sidenote: you probably have an error in Self.NextId += 1 line. It increments NextId of instance, not NextId of class. To increment class's counter use CRoom.NextId += 1 or Self.__class__.NextId += 1.
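And if person.py ever does need the class, a minimal sketch of the module-qualified form (MoveToNewRoom is a hypothetical method, purely for illustration):
# person.py
import room  # plain module import; safe even though room.py imports person

class CPerson:
    def MoveToNewRoom(self):
        # room.CRoom is looked up when this method runs, long after
        # both modules have finished importing, so the cycle is harmless
        return room.CRoom()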
A:
Do you actually need to reference the classes at class definition time? ie.
class CRoom(object):
person = CPerson("a person")
Or (more likely), do you just need to use CPerson in the methods of your class (and vice versa). eg:
class CRoom(object):
def getPerson(self): return CPerson("someone")
If the second, there's no problem - as by the time the method gets called rather than defined, the module will be imported. Your sole problem is how to refer to it. Likely you're doing something like:
from CRoom import CPerson # or even import *
With circularly referencing modules, you can't do this, as at the point one module imports another, the original module's body won't have finished executing, so the namespace will be incomplete. Instead, use qualified references. ie:
#croom.py
import cperson
class CRoom(object):
def getPerson(self): return cperson.CPerson("someone")
Here, python doesn't need to lookup the attribute on the namespace until the method actually gets called, by which time both modules should have completed their initialisation.
A:
First, naming your arguments with uppercase letters is confusing. Since Python does not have formal, static type checking, we use the UpperCase to mean a class and lowerCase to mean an argument.
Second, we don't bother with CRoom and CPerson. Upper case is sufficient to indicate it's a class. The letter C isn't used. Room. Person.
Third, we don't usually put things in One Class Per File format. A file is a Python module, and we more often import an entire module with all the classes and functions.
[I'm aware those are habits -- you don't need to break them today, but they do make it hard to read.]
Python doesn't use statically defined types like C++. When you define a method function, you don't formally define the data type of the arguments to that function. You merely list some variable names. Hopefully, the client class will provide arguments of the correct type.
At run time, when you make a method request, then Python has to be sure the object has the method. NOTE. Python doesn't check to see if the object is the right type -- that doesn't matter. It only checks to see if it has the right method.
The loop between room.Room and person.Person is a problem. You don't need to include one when defining the other.
It's safest to import the entire module.
Here's room.py
import person
class Room( object ):
def __init__( self ):
self.nextId= 0
self.people= {}
def addPerson(self, firstName, secondName, gender):
        id= self.nextId
self.nextId += 1
thePerson = person.Person(firstName,secondName,gender,id)
self.people[id] = thePerson
return thePerson
Works fine as long as Person is eventually defined in the namespace where this is executing. Person does not have to be known when you define the class.
Person does not have to be known until runtime, when the Person(...) expression is evaluated.
Here's person.py
import room
class Person( object ):
def something( self, x, y ):
aRoom= room.Room( )
aRoom.addPerson( self.firstName, self.lastName, self.gender )
Your main.py looks like this
import room
import person
r = room.Room( ... )
r.addPerson( "some", "name", "M" )
print r
A:
You could just alias the second one.
import CRoom
CPerson = CRoom.CPerson
A:
@S.Lott
if I don't import anything into the room module I get an undefined error instead (I imported it into the main module like you showed)
Traceback (most recent call last):
File "C:\Projects\python\test\main.py", line 6, in
Ben = Room.AddPerson('Ben', 'Blacker', 'Male')
File "C:\Projects\python\test\room.py", line 12, in AddPerson
Person = CPerson(FirstName,SecondName,Gender,Id)
NameError: global name 'CPerson' is not defined
Also, the reason they're different modules is where I encountered the problem to start with: the container class (e.g. the room) is already several hundred lines, so I wanted the items in it (e.g. the people) in a separate file.
EDIT:
main.py
from room import CRoom
from person import CPerson
Room = CRoom()
Ben = Room.AddPerson('Ben', 'Blacker', 'Male')
Tom = Room.AddPerson('Tom', 'Smith', 'Male')
Ben.Leave()
| Python module dependency | OK, I have two modules, each containing a class; the problem is that their classes reference each other.
Let's say, for example, I had a room module and a person module containing CRoom and CPerson.
The CRoom class contains information about the room, and a CPerson list of everyone in the room.
The CPerson class, however, sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room.
The problem is that with the two modules importing each other I just get an import error on whichever is being imported second :(
In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header, e.g.:
class CPerson;//forward declare
class CRoom
{
std::set<CPerson*> People;
...
Is there anyway to do this in python, other than placing both classes in the same module or something like that?
edit: added python example showing problem using above classes
error:
Traceback (most recent call last):
File "C:\Projects\python\test\main.py", line 1, in
from room import CRoom
File "C:\Projects\python\test\room.py", line 1, in
from person import CPerson
File "C:\Projects\python\test\person.py", line 1, in
from room import CRoom
ImportError: cannot import name CRoom
room.py
from person import CPerson
class CRoom:
def __init__(Self):
Self.People = {}
Self.NextId = 0
def AddPerson(Self, FirstName, SecondName, Gender):
Id = Self.NextId
    Self.NextId += 1
Person = CPerson(FirstName,SecondName,Gender,Id)
Self.People[Id] = Person
return Person
def FindDoorAndLeave(Self, PersonId):
    del Self.People[PersonId]
person.py
from room import CRoom
class CPerson:
def __init__(Self, Room, FirstName, SecondName, Gender, Id):
Self.Room = Room
Self.FirstName = FirstName
Self.SecondName = SecondName
Self.Gender = Gender
Self.Id = Id
def Leave(Self):
Self.Room.FindDoorAndLeave(Self.Id)
| [
"No need to import CRoom\nYou don't use CRoom in person.py, so don't import it. Due to dynamic binding, Python doesn't need to \"see all class definitions at compile time\".\nIf you actually do use CRoom in person.py, then change from room import CRoom to import room and use module-qualified form room.CRoom. See Effbot's Circular Imports for details.\nSidenote: you probably have an error in Self.NextId += 1 line. It increments NextId of instance, not NextId of class. To increment class's counter use CRoom.NextId += 1 or Self.__class__.NextId += 1.\n",
"Do you actually need to reference the classes at class definition time? ie.\n class CRoom(object):\n person = CPerson(\"a person\")\n\nOr (more likely), do you just need to use CPerson in the methods of your class (and vice versa). eg:\nclass CRoom(object):\n def getPerson(self): return CPerson(\"someone\")\n\nIf the second, there's no problem - as by the time the method gets called rather than defined, the module will be imported. Your sole problem is how to refer to it. Likely you're doing something like:\nfrom CRoom import CPerson # or even import *\n\nWith circularly referencing modules, you can't do this, as at the point one module imports another, the original modules body won't have finished executing, so the namespace will be incomplete. Instead, use qualified references. ie:\n#croom.py\nimport cperson\nclass CRoom(object):\n def getPerson(self): return cperson.CPerson(\"someone\")\n\nHere, python doesn't need to lookup the attribute on the namespace until the method actually gets called, by which time both modules should have completed their initialisation.\n",
"First, naming your arguments with uppercase letters is confusing. Since Python does not have formal, static type checking, we use the UpperCase to mean a class and lowerCase to mean an argument.\nSecond, we don't bother with CRoom and CPerson. Upper case is sufficient to indicate it's a class. The letter C isn't used. Room. Person.\nThird, we don't usually put things in One Class Per File format. A file is a Python module, and we more often import an entire module with all the classes and functions. \n[I'm aware those are habits -- you don't need to break them today, but they do make it hard to read.]\nPython doesn't use statically defined types like C++. When you define a method function, you don't formally define the data type of the arguments to that function. You merely list some variable names. Hopefully, the client class will provide arguments of the correct type.\nAt run time, when you make a method request, then Python has to be sure the object has the method. NOTE. Python doesn't check to see if the object is the right type -- that doesn't matter. It only checks to see if it has the right method.\nThe loop between room.Room and person.Person is a problem. You don't need to include one when defining the other.\nIt's safest to import the entire module.\nHere's room.py\nimport person\nclass Room( object ):\n def __init__( self ):\n self.nextId= 0\n self.people= {}\n def addPerson(self, firstName, secondName, gender):\n id= self.NextId\n self.nextId += 1\n\n thePerson = person.Person(firstName,secondName,gender,id)\n self.people[id] = thePerson\n return thePerson \n\nWorks fine as long as Person is eventually defined in the namespace where this is executing. Person does not have to be known when you define the class. \nPerson does not have to be known until runtime when then Person(...) expression is evaluated.\nHere's person.py\nimport room\nclass Person( object ):\n def something( self, x, y ):\n aRoom= room.Room( )\n aRoom.addPerson( self.firstName, self.lastName, self.gender )\n\nYour main.py looks like this\nimport room\nimport person\nr = room.Room( ... )\nr.addPerson( \"some\", \"name\", \"M\" )\nprint r\n\n",
"You could just alias the second one.\nimport CRoom\n\nCPerson = CRoom.CPerson\n\n",
"@S.Lott\nif i don't import anything into the room module I get an undefined error instead (I imported it into the main module like you showed)\n\nTraceback (most recent call last):\n File \"C:\\Projects\\python\\test\\main.py\", line 6, in \n Ben = Room.AddPerson('Ben', 'Blacker', 'Male')\n File \"C:\\Projects\\python\\test\\room.py\", line 12, in AddPerson\n Person = CPerson(FirstName,SecondName,Gender,Id)\n NameError: global name 'CPerson' is not defined \n\nAlso, the reason there diffrent modules is where I encountered the problem to start with the container class (ieg the room) is already several hundred lines, so I wanted the items in it (eg the people) in a seperate file.\nEDIT:\nmain.py\nfrom room import CRoom\nfrom person import CPerson\n\nRoom = CRoom()\n\nBen = Room.AddPerson('Ben', 'Blacker', 'Male')\nTom = Room.AddPerson('Tom', 'Smith', 'Male')\n\nBen.Leave()\n\n"
] | [
22,
7,
4,
1,
0
] | [] | [] | [
"circular_dependency",
"module",
"python"
] | stackoverflow_0000158268_circular_dependency_module_python.txt |