Cython: Dynamically linking with a dll/so
Question: I'm working with an API which is distributed as a dll/so file that I need to
dynamically link with my python program. To accomplish this, I want to use
Cython.
I have been able to, in the past, link with the dll statically. This works
well, except that the API comes in four different flavors, with theoretically
infinitely more to come and the users should be able to compile them with
whatever name they want (Kinda like a plugin system). Because of that, I can't
just make an so/pxd file that statically links with just one library, or even
links with a selection of them.
What I need is to be able to pass the so/dll name into the Cython code and
have it "import" it. I know this can be done with ctypes via
ctypes.cdll.LoadLibrary, but is this kind of thing possible in Cython? Am I
going to have to use ctypes to do this?
Answer: I assume you are talking about writing C modules here. If so, yes, you can. I
don't know what the equivalent on Windows is, but on Linux you can use dlopen
and friends. There is a man page for it, and several web sites documenting it.
Try this link, "http://linux.die.net/man/3/dlopen"; it provides a nice
example near the bottom of the page. This is doing the same thing that
ctypes does; in fact I think this might even be what ctypes uses.
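For comparison, the ctypes route mentioned in the question looks roughly like the sketch below; the library and function names are made up, since the real API isn't shown:
import ctypes

# load whichever flavour of the API the user configured (a .dll on Windows, a .so on Linux)
lib = ctypes.cdll.LoadLibrary("libapi_flavour_a.so")

# declare the signature we expect before calling into the library
lib.some_api_call.restype = ctypes.c_int
lib.some_api_call.argtypes = [ctypes.c_int]
result = lib.some_api_call(42)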
|
On keystroke insert line of code in (mac)vim, for pdb
Question: I'm looking for the way to insert a line of code with a keystroke like
`leader``p` in Macvim
I want to insert the following line of code:
import pdb; pdb.set_trace()
Probably not an unheard of line of code in python land
Answer: I'd use a simple mapping (without functions) to leader p:
nnoremap <leader>p oimport pdb; pdb.set_trace()<Esc>
This opens a blank line after the current one and enters insert mode (with `o`),
then types `import pdb; pdb.set_trace()`, and finally goes back to normal mode
(with `Esc`).
* * *
If you want to insert the code before the current line replace o by O:
nnoremap <leader>p Oimport pdb; pdb.set_trace()<Esc>
Or alternatively you could set this for leader shift-p:
nnoremap <leader><S-p> Oimport pdb; pdb.set_trace()<Esc>
|
How to access a database initialized in a class from another class in python?
Question: How would I go about implementing the MainPage class?
import cgi
import os
import logging
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext import db
class Greeting(db.Model): #setting up database
a = db.StringProperty(multiline=True)
b = db.StringProperty(multiline=True)
c = db.StringProperty(multiline=True)
d = db.StringProperty(multiline=True)
class MainPage(webapp.RequestHandler):
def get(self):
##I want to access the content of the database here which is defined in Downloader class.
##For example let the value of greeting.a be assigned to alpha variable.
#Entering data to the database
class Downloader(webapp.RequestHandler):
def get(self):
greeting=Greeting()
greeting.a=self.request.GET.get('a')
greeting.b=self.request.GET.get('b')
greeting.c=self.request.GET.get('c')
greeting.d=self.request.GET.get('d')
greeting.put()
Answer: You could pass a Downloader object to MainPage on initialization:
class MainPage(webapp.RequestHandler):
def __init__(self, downloader):
self.downloader = downloader
def get(self):
self.downloader.get()
|
Python: Regex and String Length in byte
Question: I am writing a program in python and have some question (I am 100% new to
python):
import re
rawData = '7I+8I-7I-9I-8I-'
print len(rawData)
rawData = re.sub("[0-9]I\+","",rawData)
rawData = re.sub("[0-9]I\-","",rawData)
print rawData
1. How do I merge the two regexes into one using `|`, so that a single operation removes both the `9I+` and `9I-` patterns?
2. Does `len(rawData)` return the length of `rawData` in bytes?
Thank you.
Answer: See the difference:
$ python3
Python 3.1.3 (r313:86834, May 20 2011, 06:10:42)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> len('día') # Unicode text
3
>>>
$ python
Python 2.7.1 (r271:86832, May 20 2011, 17:19:04)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> len('día') # bytes
4
>>> len(u'día') # Unicode text
3
>>>
Python 3.1.3 (r313:86834, May 20 2011, 06:10:42)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> len(b'día')
File "<stdin>", line 1
SyntaxError: bytes can only contain ASCII literal characters.
>>> len(b'dia')
3
>>>
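As for the first question, the two substitutions can be collapsed into one; a character class (or an alternation with `|`) covers both signs. A quick sketch:
import re

rawData = '7I+8I-7I-9I-8I-'
# [+-] matches either sign, so a single sub() removes both the "I+" and "I-" patterns
rawData = re.sub(r"[0-9]I[+-]", "", rawData)
print rawData   # every token matches here, so this prints an empty line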
|
Python: Are `hash` values for built-in numeric types, strings standardised?
Question: I came to this question while pondering about the ordering of `set`,
`frozenset` and `dict`. Python doesn't guarantee any ordering, and any
ordering is coupled to the `hash` value at some level. But is the hash value
for a value of a numeric or string built-in type standardized? In other words,
would
hash((a,b,c,d,e,f,g))
have a determined value, if `a`, `b`, `c`, `d`, `e`, `f`, `g` are numeric
values or `str`?
Answer: The hash values for strings and integers are absolutely not standardized. They
could change with any new implementation of Python, including between 2.6.1
and 2.6.2, or between a Mac and a PC implementation of the same version, etc.
More importantly, though, stable hash values don't imply repeatable
iteration order. You cannot depend on the ordering of values in a set,
**ever**. Even within one process, two sets can be equal and not return their
values in the same order. This can happen if one set has had many additions
and deletions, but the other has not:
>>> a = set()
>>> for i in range(1000000): a.add(str(i))
...
>>> for i in range(6, 1000000): a.remove(str(i))
...
>>> b = set()
>>> for i in range(6): b.add(str(i))
...
>>> a == b
True
>>> list(a)
['1', '5', '2', '0', '3', '4']
>>> list(b)
['1', '0', '3', '2', '5', '4']
|
How to express a context free design grammar as an internal DSL in Python?
Question: [Note: Rereading this before submitting, I realized this Q has become a bit of
an epic. Thank you for indulging my long explanation of the reasoning behind
this pursuit. I feel that, were I in a position to help another undertaking a
similar project, I would be more likely to get on board if I knew the
motivation behind the question.]
I have been getting into [Structure
Synth](http://structuresynth.sourceforge.net/) by Mikael Hvidtfeldt
Christensen lately. It is a tool for generating 3D geometry from a (mostly)
context free grammar called Eisenscript. Structure Synth is itself inspired by
Context Free Art. Context free grammars can create some stunning results from
surprisingly simple rulesets.
My current Structure Synth workflow involves exporting an OBJ file from
Structure Synth, importing it into Blender, setting up lights, materials,
etcetera, then rendering with Luxrender. Unfortunately, importing these OBJ
files often brings Blender to a grinding halt as there can be thousands of
objects with fairly complex geometry. I say 'fairly' because Structure Synth
only generates basic shapes, but a sphere represented by triangles still has
many faces.
Thus, generating the structures directly in Blender would be preferable to the
current process (Blender's deep support for Python scripting should make this
possible). An intelligent Python library could use Blender's instancing
abilities to use one mesh to generate myriad objects, thus saving memory. Plus
Blender is a full-featured 3D suite and its ability to interpret a CFDG would
provide creative possibilities far beyond what Structure Synth can offer.
And so my question is how best to translate the Eisenscript grammar into a
Python DSL. Here's what a simple Eisenscript looks like:
set maxdepth 2000
{ a 0.9 hue 30 } R1
rule R1 {
{ x 1 rz 3 ry 5 } R1
{ s 1 1 0.1 sat 0.9 } box
}
rule R1 {
{ x 1 rz -3 ry 5 } R1
{ s 1 1 0.1 } box
}
To explain, the first call to R1 (line 2) will randomly invoke one of the two
definitions of R1. Each definition of R1 recursively calls R1 (randomly
invoking one of the two definitions) and also creates a box. The first line
kills generation after recursion has gone 2000 levels deep.
Jeremy Ashkenas (of CoffeeScript fame) successfully implemented a [context
free DSL in Ruby](http://ashkenas.com/codework/context-free.html) using
blocks. Internally, it works by creating a hash key for each rule 'name', and
stores the blocks for each definition of that rule in an array, to be randomly
chosen when the rule is invoked.
The previous Eisenscript rule definitions would translate to the Ruby DSL like
so:
rule :r1 do
r1 :x => 1, :rz => 3, :ry => 5
box :s => [1, 1, 0.1], :sat => 0.9
end
rule :r1 do
r1 :x => 1, :rz => -3, :ry => 5
box :s => [1, 1, 0.1]
end
I am a novice Python user and so have been doing some research on Python's
functional programming capabilities. It seems like lambda is too limited to
create something similar to Jeremy's Ruby DSL, and, as far as I can tell,
lambda is the only option for anonymous functions?
How might an experienced Pythonista approach the design?
Answer: Writing a parser for a context free grammar is hard. You're probably better
off using some sort of library to make things easier on yourself.
I would check out the [PyParsing](http://pyparsing.wikispaces.com/) module.
The download comes with a number of examples, one of which is a simple SQL
parser, which might be enlightening to look at, at least as a first step.
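That said, if the goal is an internal DSL like the Ruby version quoted in the question (rather than parsing Eisenscript text), the rule-registration idea carries over to Python using decorators instead of blocks, which sidesteps the lambda limitation. A rough sketch, with all names invented and the geometry transforms left out:
import random

rules = {}     # rule name -> list of alternative definitions
shapes = []    # collected output primitives

def rule(name):
    """Register the decorated function as one definition of `name`."""
    def register(func):
        rules.setdefault(name, []).append(func)
        return func
    return register

def call(name, depth=0, maxdepth=50):
    """Invoke one randomly chosen definition of `name` (small maxdepth to stay inside Python's recursion limit)."""
    if depth < maxdepth:
        random.choice(rules[name])(depth + 1)

@rule('r1')
def _(depth):
    call('r1', depth)
    shapes.append(('box', {'s': [1, 1, 0.1], 'sat': 0.9}))

@rule('r1')
def _(depth):
    call('r1', depth)
    shapes.append(('box', {'s': [1, 1, 0.1]}))

call('r1')
print len(shapes)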
|
Getting the event name rather than integer ID in wxPython
Question: I have the following code:
`self.sliderR.Bind(wx.EVT_SCROLL,self.OnSlide)`
In the function `OnSlide` I have inserted the code `pdb.set_trace()` to help me debug.
In the pdb prompt if I type `event.GetEventType()` it returns a number (10136)
but I have no idea which event that corresponds to.
Does the 10136 refer to the `wx.EVT_SCROLL` or another event that also
triggers the `wx.EVT_SCROLL` event? If the latter is true, how do I find the
specific event?
Thanks.
Answer: There isn't a built-in way. You will need to build an event dictionary. Robin Dunn has some code here that will help: <http://osdir.com/ml/wxpython-users/2009-11/msg00138.html>
Or you can check out my simple example:
import wx
class MyForm(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, title="Tutorial")
self.eventDict = {}
for name in dir(wx):
if name.startswith('EVT_'):
evt = getattr(wx, name)
if isinstance(evt, wx.PyEventBinder):
self.eventDict[evt.typeId] = name
# Add a panel so it looks the correct on all platforms
panel = wx.Panel(self, wx.ID_ANY)
btn = wx.Button(panel, wx.ID_ANY, "Get POS")
btn.Bind(wx.EVT_BUTTON, self.onEvent)
panel.Bind(wx.EVT_LEFT_DCLICK, self.onEvent)
panel.Bind(wx.EVT_RIGHT_DOWN, self.onEvent)
def onEvent(self, event):
"""
Print out what event was fired
"""
evt_id = event.GetEventType()
print self.eventDict[evt_id]
# Run the program
if __name__ == "__main__":
app = wx.App(False)
frame = MyForm().Show()
app.MainLoop()
|
Django inlines working on Dev server, yet not on Apache Test server?
Question: I'm having an issue where the Inline Admin functionality is behaving
differently in different environments.
In Dev, when editing a technology I get a link at the bottom to add more Roll
Modifiers as needed that works flawlessly.
In Test, I get a single roll modifier with no link to add more and it silently
fails to save any changes I make to the roll modifier.
The same code is deployed to both environments. Any ideas what might be going
on here?
## Dev Server Configuration (actually a Desktop)
* Gentoo Linux
* Django 1.3
* SQLLite3 Database (locally stored)
* Django built-in development server
* Python 2.6.6
## Test Server Configuration
* SuSE Linux 11.4
* Django 1.3 (also tried with Django 1.2.5)
* PostgreSQL 9.0.3
* Apache2 2.2.17
* Python 2.7
## Appendix A - Model Code
class Technology(models.Model):
categories = (
('weap' , 'Weaponry'),
('equip', 'Equipment'),
('cons' , 'Construction'),
('ammo' , 'Ammunition'),
)
name = models.CharField(max_length=40)
category = models.CharField(max_length=8, choices=categories)
urlname = models.CharField(max_length=20)
description = models.TextField()
base_difficulty = models.IntegerField()
tier = models.IntegerField()
show = models.BooleanField()
def __unicode__(self):
return self.name
class TechnologyRollModifier(models.Model):
technology = models.ForeignKey(Technology)
modifier = models.IntegerField(default=2)
condition = models.CharField(max_length=120)
## Appendix B - Admin Code
from django.contrib import admin
from solaris.warbook import models
class TechnologyRollModifierInline(admin.StackedInline):
model = models.TechnologyRollModifier
extra = 0
class TechnologyAdmin(admin.ModelAdmin):
fields = ['name', 'urlname', 'description', 'tier', 'category', 'base_difficulty', 'show']
inlines = [TechnologyRollModifierInline,]
admin.site.register(models.Technology, TechnologyAdmin)
Answer: Figured it out. Some time ago I'd copied the Django admin files to
/var/www/media/admin and aliased /media/ to /var/www/media/.
Which means it was serving up the old media files - giving me working CSS /
images but silently failing to find the JavaScript - which the StackedInline
admin interface relies upon to do its work.
The single TechnologyRollModifier I saw was meant to be the hidden template
and did not actually record any data meant to be entered into it.
Another mystery solved....
|
python fabric prompted for password everytime i execute a sudo command
Question: Below is a small piece of code I've been working on to install packages on a remote
server. The code is working fine and the packages are being installed, but
once the installation is complete I get prompted to enter the password.
Any idea what is causing this?
import re
import os
import sys
from datetime import datetime
import git
from fabric.api import run, local, cd, env
from fabric.contrib.files import exists, append
from fabric.operations import put, sudo
class WebAppConf():
APPLICATION_DIR = '/home/xxx/webapps'
APPLICATION_USER = '' # mazban
APPLICATION_ROOT = ''
APPLICATION_TEMPLATE_SETTINGS = 'com_work' # see config/settings/templates
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
###########################################################################
# CONFIGURATION FOR DEVELOPMENT, STAGING, TESTING AND PRODUCTION SETTINGS - START #
###########################################################################
def test():
env.hosts = env.hosts or ['192.168.3.139', ]
env.user = 'xxx'
env.password = "xxx"
env.warn_only = True
env.no_keys = True
# App variables
WebAppConf.APPLICATION_DIR = '/home/xxx/webapps' # application physical location
WebAppConf.APPLICATION_USER = env.user
WebAppConf.APPLICATION_ROOT = env.user
WebAppConf.APPLICATION_TEMPLATE_SETTINGS = 'testing'
def staging():
env.hosts = env.hosts or ['xxx-staging.com', ]
env.user = 'xxx'
env.password = "xx"
env.warn_only = True
env.no_keys = True
def production():
"""
Production will deploay on one or many frontend's
for peroduction deployment no password will be used and only ssh keys
"""
env.hosts = env.hosts or ['xxx.com', ]
env.user = APPLICATION_USER
env.password = "xx"
env.warn_only = True
env.no_keys = True
##########################################################################
# CONFIGURATION FOR DEVELOPMENT, STAGING, TESTING AND PRODUCTION SETTINGS - END #
##########################################################################
def firstrun():
## if no env is selected it will fail
if not env.hosts:
print 'Error: must use environment (e.g fab prod %s)' % 'firstrun'
exit()
# validate the OS distribution and version
LINUX_DISTRIBUTION = ''
if run('cat /etc/*-release|grep DISTRIB_ID').upper().__contains__('UBUNTU'):
LINUX_DISTRIBUTION = 'UBUNTU'
print 'Ubuntu Linux'
elif run('cat /etc/*-release|grep DISTRIB_ID').upper().__contains__('DEBIAN'):
LINUX_DISTRIBUTION = 'DEBIAN'
print 'Debian Linux'
else:
print 'Exiting (Cannot recognize linux distribution).'
exit()
# Linux version
LINUX_VERSION = run('cat /etc/*-release|grep DISTRIB_RELEASE')
print 'Linux version %s' % LINUX_VERSION.replace('DISTRIB_RELEASE=', '')
# GET PYTHON VERSIONS
version = run("python2.6 -V").split()[1]
if version[0]=='2' and version[2]=='6' and version[4]=='6':
# version installed matches the requirements
print 'Python version is compatible with this application (2.6.6).'
else:
print 'Python version is not compatible with this application %s' % version[0]+'.'+version[2]+'.'+version[4]
_sudo('apt-get install python2.6')
version = run("python2.6 -V").split()[1]
if version[0]=='2' and version[2]=='6' and version[4]=='6':
print 'python 2.6.6 installed successfully.'
else:
print 'cannot install python 2.6.6'
exit()
if not exists('/usr/bin/pip'):
print "installing python-pip package"
_sudo('apt-get install python-pip')
if not exists('/usr/local/bin/virtualenv'):
print 'installing virtualenv'
_sudo('pip install -E /usr/bin/python2.6 virtualenv')
# application directory check
MAKE_APPLICATION_DIR = False
if not exists(WebAppConf.APPLICATION_DIR):
MAKE_APPLICATION_DIR = True
print '%s does not exist.' % WebAppConf.APPLICATION_DIR
run('mkdir -p %s' % WebAppConf.APPLICATION_DIR)
print '%s application directory is created successfully.' % WebAppConf.APPLICATION_DIR
# change to application root directory and start preparing for applications
#with cd(WebAppConf.APPLICATION_DIR):
return
Output when executing function
mo@ubuntu:~/Projects/mazban/mazban$ fab test firstrun
[192.168.3.139] Executing task 'firstrun'
[192.168.3.139] run: cat /etc/*-release|grep DISTRIB_ID
[192.168.3.139] out: DISTRIB_ID=Ubuntu
[192.168.3.139] out:
Ubuntu Linux
[192.168.3.139] run: cat /etc/*-release|grep DISTRIB_RELEASE
[192.168.3.139] out: DISTRIB_RELEASE=11.04
[192.168.3.139] out:
Linux version 11.04
[192.168.3.139] run: python2.6 -V
[192.168.3.139] out: /bin/bash: python2.6: command not found
[192.168.3.139] out:
Warning: run() encountered an error (return code 127) while executing 'python2.6 -V'
Python version is not compatible with this application p.t.o
[192.168.3.139] sudo: apt-get install python2.6
[192.168.3.139] out: sudo password:
[192.168.3.139] out: Reading package lists... Done
[192.168.3.139] out: Building dependency tree
[192.168.3.139] out: Reading state information... Done
[192.168.3.139] out: The following extra packages will be installed:
[192.168.3.139] out: python2.6-minimal
[192.168.3.139] out: Suggested packages:
[192.168.3.139] out: python2.6-doc python2.6-profiler
[192.168.3.139] out: The following NEW packages will be installed:
[192.168.3.139] out: python2.6 python2.6-minimal
[192.168.3.139] out: 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
[192.168.3.139] out: Need to get 3,743 kB of archives.
[192.168.3.139] out: After this operation, 14.3 MB of additional disk space will be used.
[192.168.3.139] out: Do you want to continue [Y/n]? Y
[192.168.3.139] out: Get:1 http://us.archive.ubuntu.com/ubuntu/ natty/main python2.6-minimal i386 2.6.6-6ubuntu7 [1,386 kB]
[192.168.3.139] out: Get:2 http://us.archive.ubuntu.com/ubuntu/ natty/main python2.6 i386 2.6.6-6ubuntu7 [2,357 kB]
[192.168.3.139] out: 53% [2 python2.6 602 kB/2,357 kB 25%] [192.168.3.139] out: 58% [2 python2.6 797 kB/2,357 kB 33%] [192.168.3.139] out: 63% [2 python2.6 1,001 kB/2,357 kB 42%] [192.168.3.139] out: 69% [2 python2.6 1,208 kB/2,357 kB 51%] [192.168.3.139] out: 75% [2 python2.6 1,425 kB/2,357 kB 60%] [192.168.3.139] out: 81% [2 python2.6 1,651 kB/2,357 kB 70%] [192.168.3.139] out: 87% [2 python2.6 1,890 kB/2,357 kB 80%] [192.168.3.139] out: 94% [2 python2.6 2,141 kB/2,357 kB 90%] [192.168.3.139] out: 100% [Working] [192.168.3.139] out: [192.168.3.139] out: Fetched 3,743 kB in 10s (371 kB/s)
[192.168.3.139] out: Selecting previously deselected package python2.6-minimal.
[192.168.3.139] out: (Reading database ... 131472 files and directories currently installed.)
[192.168.3.139] out: Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-6ubuntu7_i386.deb) ...
[192.168.3.139] out: Selecting previously deselected package python2.6.
[192.168.3.139] out: Unpacking python2.6 (from .../python2.6_2.6.6-6ubuntu7_i386.deb) ...
[192.168.3.139] out: Processing triggers for man-db ...
[192.168.3.139] out: Processing triggers for bamfdaemon ...
[192.168.3.139] out: Rebuilding /usr/share/applications/bamf.index...
[192.168.3.139] out: Processing triggers for desktop-file-utils ...
[192.168.3.139] out: Processing triggers for python-gmenu ...
[192.168.3.139] out: Rebuilding /usr/share/applications/desktop.en_US.utf8.cache...
[192.168.3.139] out: Processing triggers for python-support ...
[192.168.3.139] out: Setting up python2.6-minimal (2.6.6-6ubuntu7) ...
[192.168.3.139] out: Linking and byte-compiling packages for runtime python2.6...
[192.168.3.139] out: Setting up python2.6 (2.6.6-6ubuntu7) ...
[192.168.3.139] out:
[192.168.3.139] run: python2.6 -V
[192.168.3.139] Login password:
Answer: I'm speaking from limited experience, but I believe that since you're "sudo"
installing py2.6 you need to be "sudo" to run it, or at least in the right group
(is it "wheel" or something like that on Ubuntu?).
I suppose an easy test would be to change this:
_sudo('apt-get install python2.6')
version = run("python2.6 -V").split()[1]
to this:
_sudo('apt-get install python2.6')
version = _sudo("python2.6 -V").split()[1]
& see if you're still prompted for the password.
|
Python: Persistent cookie, generate `expires` field
Question: I'm trying to generate the text for a persistent cookie in a simple Python web
application.
I'm having trouble finding a way to generate the `expires` field. The text
format for the field is somewhat complicated, and I'd rather not write code to
generate it myself.
Is there something in Python that will help? I've looked at the docs for
`Cookie` and `cookielib` and they seem to handle a lot of the cookie business,
except for generating the `expires` field.
Answer: Use Python's
[`time.strftime()`](http://docs.python.org/library/time.html#time.strftime) to
get the time into the correct format (see [RFC 6265, section
5.1.1](http://tools.ietf.org/html/rfc6265#section-5.1.1)):
>>> import time
>>> expires = time.time() + 14 * 24 * 3600 # 14 days from now
>>> time.strftime("%a, %d-%b-%Y %T GMT", time.gmtime(expires))
'Sat, 16-Jul-2011 12:55:48 GMT'
(RFC 6265 indicates that the timezone is ignored, but all the examples have
"GMT", so I've put it here too. That's cargo-cult programming for you!)
|
Bcc Field in SMTP [ showing/not showing ] problem
Question: I'm trying to use python's smtplib to send, via _gmail_ , an email with bcc
addresses. I use this code:
#imports etc...
fromAddr = "sender@origin.com"
to = [ "recpt1@destinationTo.com" ]
cc = [ "recpt2@destinationCc.com" ]
bcc = [ "recpt3@destinationBcc.com", "recpt4@destinationBcc.com" ]
server = SMTP( "smtp.gmail.com", 587)
#starttls, login, etc..
content = "Hello, this is a message."
msg = "From: %s\r\nTo: %s\r\nCc: %s\r\n\r\n%s" % ( fromAddr, to, cc, content )
server.sendmail( fromAddr, to + cc + bcc, msg )
#server.quit() etc...
When I go to the respective inboxes I get the same message for all the addresses in [to+cc+bcc], which is right. But
what I would like to happen is that each bcc address gets a Bcc field with its
own address in it, just like is described
[here](http://mail.google.com/support/bin/answer.py?answer=57143) for the web
interface of _gmail_.
This is what I wish to accomplish:
Cc and To Inboxes:
To: recpt1@destinationTo.com
From: sender@origin.com
Cc: recpt2@destinationCc.com
(...)
recpt3 Inbox:
To: recpt1@destinationTo.com
From: sender@origin.com
Cc: recpt2@destinationCc.com
Bcc: recpt3@destinationBcc.com
(...)
recpt4 Inbox:
To: recpt1@destinationTo.com
From: sender@origin.com
Cc: recpt2@destinationCc.com
Bcc: recpt4@destinationBcc.com
(...)
Has anyone managed to get this working? I've been looking in the SMTP RFC
documents and I've found nothing; I don't understand how gmail can do it.
Answer: My guess would be that gmail does a separate SMTP session for each BCC
recipient. If everything between your two comments were a function
`dosend(fromAddr, toAddr, content, to, cc, bcc=None)` you might do something
like this:
dosend(fromAddr, to+cc, content, to, cc)
for t in bcc:
dosend(fromAddr, t, content, to, cc, t)
That would send it once to the `to` and `cc` address(es), then send it again
to each `bcc` address individually, with the appropriate bcc header. Just to
clarify what dosend does: the `fromAddr` and `toAddr` arguments are for the
envelope (the first and second arguments to `server.sendmail`). The `to`,
`cc`, and (optional) `bcc` arguments are for the headers in `msg`. (Your code
doesn't add a Bcc, you would need to add that if the optional argument `bcc`
is provided.)
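A rough sketch of what such a `dosend` helper could look like, using the `server` object from the question's code (`dosend` itself is the hypothetical function described above, not anything provided by smtplib):
def dosend(fromAddr, toAddr, content, to, cc, bcc=None):
    # toAddr is the envelope recipient list; to/cc/bcc only affect the visible headers
    msg = "From: %s\r\nTo: %s\r\nCc: %s\r\n" % (fromAddr, ", ".join(to), ", ".join(cc))
    if bcc is not None:
        msg += "Bcc: %s\r\n" % bcc
    msg += "\r\n" + content
    server.sendmail(fromAddr, toAddr, msg)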
(Edit: I deleted my comments about the possibility it might use X-Bcc. I just
tried it, and it works as described above. I also fixed up and clarified the
description of `dosend`.)
|
Clarification on why this C code works
Question: I'm learning C today. I've been coding in managed languages (Java, C#, Python,
etc.) for some time now. I thought I was understanding the details of
pointers, but then I wrote the following code that worked as expected, but
generated an 'incompatible pointer type' warning.
void setText(char* output) {
//code to set output to whatever, no problems here.
}
int main(int argc, const char* argv[]) {
char output[10];
setText(&output);
//[EDITED] ...other test code which printf's and further manipulates output.
return 0;
}
So I googled, and ended up changing the line
setText(&output);
to
setText(output);
which got rid of the warning. But now I don't know why the first one was
working at all. I was sending the address of an address as far as I can tell
(because char* x; is essentially the same as char x[];). What am I
misunderstanding and why do both of these work?
Answer: The _type_ of `output` is `char [10]`, which decays to a `char *` in the
context of a function call (which is why the second variant works).
The type of `&output` is `char (*)[10]`, i.e. a pointer-to-array. This is not
the same thing, hence the compiler warning. However, the _value_ of `&output`
(an address) is equivalent to the value of `output` (once it has decayed to a
`char *`), so the end result is "as expected".
This may sound like pedantry, but there is a fairly important difference. Try
the following:
void foo(const char *p)
{
printf("%s\n", p);
}
int main(void)
{
char output[][6] = { "Hello", "world" };
foo(output[0] + 1);
foo(&output[0] + 1);
}
Recommended reading is [the C FAQ on arrays and
pointers](http://c-faq.com/aryptr/), in particular question 6.3 and 6.12.
|
Parse HTML using Python and Beautiful Soup
Question:
<div class="profile-row clearfix"><div class="profile-row-header">Member Since</div><div class="profile-information">January 2010</div></div>
<div class="profile-row clearfix"><div class="profile-row-header">AIGA Chapter</div><div class="profile-information">Alaska</div></div>
<div class="profile-row clearfix"><div class="profile-row-header">Title</div><div class="profile-information">Owner</div></div>
<div class="profile-row clearfix"><div class="profile-row-header">Company</div><div class="profile-information">Mad Dog Graphx</div></div>
I'm using Beautiful Soup to get to this point in the HTML. I now want to
search through it and pull out the data like January 2010, Alaska, Owner,
and Mad Dog Graphx. All of this data has the same class, but each value has a different
label like "Member Since", "AIGA Chapter", etc. beforehand. How can I
search for Member Since and then get January 2010, and do the same for the
other 3 fields?
Answer:
>>> from BeautifulSoup import BeautifulSoup
>>> soup = BeautifulSoup('''<div class="profile-row clearfix"><div class="profile-row-header">Member Since</div><div class="profile-information">January 2010</div></div>
... <div class="profile-row clearfix"><div class="profile-row-header">AIGA Chapter</div><div class="profile-information">Alaska</div></div>
... <div class="profile-row clearfix"><div class="profile-row-header">Title</div><div class="profile-information">Owner</div></div>
... <div class="profile-row clearfix"><div class="profile-row-header">Company</div><div class="profile-information">Mad Dog Graphx</div></div>
... ''')
>>> for row in soup.findAll('div', {'class':'profile-row clearfix'}):
... field, value = row.findAll(text = True)
... print field, value
...
Member Since January 2010
AIGA Chapter Alaska
Title Owner
Company Mad Dog Graphx
You can of course do anything you want with `field` and `value`, like create a
dict with them or store them in a database.
If there are other divs or other text nodes within the "profile-row clearfix"
div, you'll need to do something like `field = row.find('div',
{'class':'profile-row-header'}).findAll(text=True)`, etc.
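For instance, to build a dict keyed by the header text, as suggested above:
profile = {}
for row in soup.findAll('div', {'class': 'profile-row clearfix'}):
    field, value = row.findAll(text=True)
    profile[field] = value
print profile.get(u'Member Since')   # January 2010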
|
Python: svndumpfilter2 and new line characters on Windows
Question: When running svndumpfilter2 on Windows, I get a problem that seems to take its
origin in the fact that the dump file has sometimes CRLF endings.
Some files in the SVN database had CRLF line endings. But it seems that Python
counts CRLF as one character (not counting the CR character as separate from
the following LF in the content of the files). Thus, it fails to read the right
number of characters and misses the start of the next lump.
So my question is: how to tell Python to treat CRLF as two separate
characters?
The stream is read from `sys.stdin` so I'm looking for a way to change the
newline property of stdin. What is the "one right way" to do that in Python?
Answer: _Update:_ One way that occurs to me is to explicitly set the mode of `stdin`
to binary. So something like the following will read CRLF as two characters:
import msvcrt, os, sys
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
while True:
ch = sys.stdin.read(1)
print ord(ch) # CRLF should appear as 13 followed by 10
Another way is to start Python with the `-u` flag, which results in an
unbuffered `stdin` (as well as `stdout` and `stderr`). So just `python -u
myscript.py` where myscript.py calls `stdin.read(1)` with no other changes.
See `python --help` for more information on this.
_Old:_ If you're on windows, Python should be able to handle this without any
intervention when you call `sys.stdin.readline` (or simply iterate over
`sys.stdin` which is a file like object). Are you using `sys.stdin.read`
instead? If so, you need to handle that case yourself.
|
How to stitch several PDF pages into one big canvas-like PDF?
Question: I have a 32-page PDF of my family tree. Instead of having the family tree all
on one really big PDF page (which is what I want), it is formatted so a group
of 8 individual US letter-sized pages are supposed to be stitched across the
width; 4 rows of this completes the tree. The margins of each page are all
22px.
If you visualize it in table form (where the numbers represent PDF page
numbers):

I've tried to whip up some Python code to do this, but haven't gotten very
far. How can I stitch the PDF so it can be one big page instead of smaller
individual pages?
Thanks for the help.
EDIT: Here's the code I wrote. Sorry for not originally posting it.
from pyPdf import PdfFileWriter, PdfFileReader
STITCHWIDTH = 8
currentpage = 1
output = PdfFileWriter()
input1 = PdfFileReader(file("familytree.pdf", "rb"))
for i in range(5):
    output.addPage(input1.getPage(currentpage))
    currentpage += 1
    #do something to add other pages to width
print "finished with stitching"
outputStream = file("familytree-stitched.pdf", "wb")
output.write(outputStream)
outputStream.close()
Answer: As an alternative to Ben Jackson's [suggestion](http://stackoverflow.com/questions/6577090/how-to-stitch-several-pdf-pages-into-one-big-canvas-like-pdf/6577225#6577225) of first converting to PostScript and doing an "N-up" transform on the PostScript files, there's also a utility called `pdfnup`, available as part of the [PDFjam](http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/firth/software/pdfjam/) suite, that can operate directly on PDF files. Example:
pdfnup --nup 8x4 --outfile output.pdf input.pdf
|
Building Boost 1.46.1 *with openmpi*?
Question: I would like to build all of the [Boost](http://www.boost.org/) library on
Ubuntu 11.04 with gcc 4.5.2. So I went about downloading the tar.bz2 file. I
expanded it. I ran bootstrap.sh and noticed it complaining about unicode, so I
installed:
`sudo apt-get install libicu-dev`
And now it appears to be happily building with unicode. The trouble is that I
want to also link against OpenMPI. uh oh. So I add `using mpi ;` to
`./tools/build/v2/user-config.jam` and ran my build command:
`./bjam --layout=versioned --build-type=complete`
And boost prints out errors(I've abbreviated the large paragraphs):
error: Duplicate name of actual target: <pstage/lib>mpi.so
error: previous virtual target { common%common.copy-mpi.so.PYTHON_EXTENSION {
... then a few pages ...
error: created from ./stage-proper
error: another virtual target { common%common.copy-mpi.so.PYTHON_EXTENSION {
... then a few more pages ...
error: created from ./stage-proper
error: added properties: <debug-symbols>off <define>NDEBUG <inlining>full <library>object(file-target)@3501 <library>object(file-target)@3568 <library>object(file-target)@4171 <library>object(file-target)@4184 <library>object(searched-lib-target)@4066 <library>object(searched-lib-target)@4072 <library>object(searched-lib-target)@4078 <optimization>speed <runtime-debugging>off <variant>release <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/mpi/build/gcc-4.5.2/release/threading-multi <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/python/build/gcc-4.5.2/release/threading-multi <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/serialization/build/gcc-4.5.2/release/threading-multi
error: removed properties: <debug-symbols>on <inlining>off <library>object(file-target)@1244 <library>object(file-target)@1350 <library>object(file-target)@2378 <library>object(file-target)@2393 <library>object(searched-lib-target)@2217 <library>object(searched-lib-target)@2223 <library>object(searched-lib-target)@2229 <optimization>off <runtime-debugging>on <variant>debug <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/mpi/build/gcc-4.5.2/debug/threading-multi <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/python/build/gcc-4.5.2/debug/threading-multi <xdll-path>/home/mtibbits/src/boost_1_46_1/bin.v2/libs/serialization/build/gcc-4.5.2/debug/threading-multi
/home/mtibbits/src/boost_1_46_1/tools/build/v2/build/virtual-target.jam:490: in actualize-no-scanner from module object(file-target)@4661
/home/mtibbits/src/boost_1_46_1/tools/build/v2/build/virtual-target.jam:135: in object(file-target)@4661.actualize from module object(file-target)@4661
/home/mtibbits/src/boost_1_46_1/tools/build/v2/build-system.jam:748: in load from module build-system
/home/mtibbits/src/boost_1_46_1/tools/build/v2/kernel/modules.jam:283: in import from module modules
/home/mtibbits/src/boost_1_46_1/tools/build/v2/kernel/bootstrap.jam:142: in boost-build from module
/home/mtibbits/src/boost_1_46_1/boost-build.jam:17: in module scope from module
It appears to be a [bug](https://svn.boost.org/trac/boost/ticket/3560) dating
back to Boost 1.40?? But I know others have gotten this to work. Does anyone
know the voodoo required to get Boost 1.46.1 to play nice with openmpi?
Note: I've been googling and this doesn't appear to be an Ubuntu specific
problem -- it has appeared on
[gentoo](https://bugs.gentoo.org/show_bug.cgi?id=329207) and elsewhere. But I
haven't found any concrete solution except _build without mpi..._ which I
can't.
I would happily post the entire logs, package versions, etc.
Answer: From some other forum, I found that in /tools/build/CMake/externals there
should be an MPI.cmake file. Adding the following lines should allow you to
build Boost with OpenMPI:
set(MPI_INCLUDE_PATH /usr/include/openmpi-x86_64)
set(MPI_COMPILE_FLAGS -I/usr/include/openmpi-x86_64)
set(MPI_LINK_FLAGS -L/usr/lib64/openmpi/lib -L/usr/lib64/openmpi/lib/openmpi
-lmpi_cxx -lmpi)
set(MPI_LIBRARIES /usr/lib64/openmpi/lib/libmpi.so
/usr/lib64/openmpi/lib/libmpi_cxx.so)
set(MPI_FOUND 1)
|
Difference between pydoc and help()?
Question: Are these two things different? The results that the two give in Python are
similar.
Answer: `help()` is a Python function.
`pydoc` is a command-line interface to the same thing.
If you want to see more of what pydoc does, take a look in pydoc.py (`import
pydoc; pydoc.__file__`) and see what's in the `cli` function. It does do some
extra importing magic, but I don't think it really needs to - `help()` accepts
a string which is evaluated in the same sort of way, so if you have "foo.py",
run `python` and do `help('foo')` it'll get just about the same result as
`import foo; help(foo)` would, just with minor differences in layout, I think.
Probably historical reasons there.
In short, `pydoc foo` is about equal to `python -c "help('foo')"`
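To see the equivalence for yourself (using the standard `time` module purely as an example):
$ python -m pydoc time
$ python -c "help('time')"
Both commands page through essentially the same documentation for the module.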
|
In python, removing thousands comma from numbers in a list where the numbers are separated by commas
Question: I have a list of data similar to that below:
a = ['"105', '424"', '"102', '629"', '"104', '307"']
I want this data to be in a form similar to that of below:
a = ['105424', '102629', '104307']
I am unsure of how to proceed. I thought perhaps removing all the commas then
inserting commas only where they should be and then removing the quotations. I
am finding this to be quite challenging.
Thanks
Answer: I'm assuming this data was originally in a csv file where data that contains
commas is quoted ("105,424","102,629","104,307") and then you are splitting on
comma:
>>> '"105,424","102,629","104,307"'.split(',')
['"105', '424"', '"102', '629"', '"104', '307"']
Rather you should let the [`csv`](http://docs.python.org/library/csv.html)
module do the work as it will handle the double quotes:
import csv
with open('u:\\foobar.csv', 'rb') as f:
reader = csv.reader(f)
for row in reader:
print [x.replace(',','') for x in row]
This prints: `['105424', '102629', '104307']`
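If the data is already in a string rather than a file, `csv.reader` will accept any iterable of lines, so the same approach works via `StringIO`; a short sketch:
import csv
from StringIO import StringIO

line = '"105,424","102,629","104,307"'
for row in csv.reader(StringIO(line)):
    print [x.replace(',', '') for x in row]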
|
How to close urllib2 connection?
Question: I have made a program using urllib2 that makes a lot of connections across the
web. I noticed that eventually this can be DDoS worthy; I would like to
know how to close down each connection after I have done my business to
prevent such an attack.
The code I am using to open a connection is:
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
r = opener.open("http://www.python.org")
html = r.read()
Answer: I assume you are opening them with the `urlopen()` function. [Its
documentation](http://docs.python.org/library/urllib2.html#urllib2.urlopen)
states:
> This function returns a file-like object with two additional methods:
As a file-like object, it will have a `close` method which you can call:
connection = urllib2.urlopen(url)
# Do cool stuff in here.
connection.close()
**Update** : Using the code you added to your question:
>>> import urllib2
>>> import cookielib
>>> cj = cookielib.CookieJar()
>>> opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
>>> r = opener.open("http://www.python.org")
>>> html = r.read()
>>> r.close??
Type: instancemethod
Base Class: <type 'instancemethod'>
String Form: <bound method addinfourl.close of <addinfourl at 150857644 whose fp = <socket._fileobject object at 0x8fd48ec>>>
Namespace: Interactive
File: /usr/lib/python2.6/urllib.py
Definition: r.close(self)
Source:
def close(self):
self.read = None
self.readline = None
self.readlines = None
self.fileno = None
if self.fp: self.fp.close()
self.fp = None
So the `close()` method exists and actually does something:
>>> r.close()
>>> r.read()
------------------------------------------------------------
Traceback (most recent call last):
File "<ipython console>", line 1, in <module>
TypeError: 'NoneType' object is not callable
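If you want the connection closed even when an exception is raised partway through, `contextlib.closing` wraps the same `close()` call in a `with` block; for example:
import contextlib
import urllib2

with contextlib.closing(urllib2.urlopen("http://www.python.org")) as r:
    html = r.read()
# r.close() has been called here, whether or not read() raised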
|
Python: Unbound method
Question: When I try to bind a method, I get an error:
Traceback (most recent call last):
File "pygametest3.py", line 12, in <module>
render = winback.rend()
TypeError: unbound method rend() must be called with winback instance as first argument (got nothing instead)
This is the code, up to the offending point:
import sys, pygame
pygame.init()
class winback:
"""Render the window"""
def rend(self):
rendsurf.fill(black)
rendsurf.blit (landsurf, (landx,landy,640,480))
screen.blit (rendsurf, (0,0,640,480))
pygame.display.flip()
render = winback.rend()
Also, I'm sorry if this is blatantly obvious and not worth posting/reposting.
This is my first real plunge into python, I've worked on this all day, I'm
feeling kinda stupid, and it's 12:30 in the morning.
Answer: Classes must be instantiated before a normal method on them can be called.
class Winback(object):
def rend(self):
...
winback = Winback()
render = winback.rend()
Or you could use `@staticmethod`. [But make it a module-level function
instead](http://dirtsimple.org/2004/12/python-is-not-java.html).
|
Unit test terms
Question: Colleagues of mine are calling a class with test methods a fixture, but I
think that is wrong because a "fixture" is a fixed environment for testing.
[NUnit uses the (wrong?) term
"TestFixture"](http://www.nunit.org/index.php?p=testFixture&r=2.5) too.
[Wikipedia](http://en.wikipedia.org/wiki/Test_fixture#Test_fixture_in_xUnit)
says:
> In generic xUnit, a test fixture is all the things that must be in place in
> order to run a test and expect a particular outcome.
>
> Frequently fixtures are created by handling setUp() and tearDown() events of
> the unit testing framework. In setUp() one would create the expected state
> for the test, and in tearDown() it would clean up what had been set up.
So I would call setup and teardown fixtures but not the whole class. Is that
correct?
And how to call the class with the test cases? A "test suite" following the
[python docs](http://docs.python.org/library/unittest.html)?
Answer: A **test case** is a **single test** testing a particular thing.
A **test suite** is a **grouping of test cases** into a set of tests that for
some reason "belong together".
A **test fixture** manages (setup/teardown) the **state before and after a
test case** is being executed.
These are concepts and **how these are implemented depends on the test
framework**.
E.g. a test case can be a function or a class; a test suite might be a class
containing test cases as functions or just a container with test cases (this
again can be implemented in various ways); a test fixture might be built-in
into the test framework as e.g. dedicated functions, or it might just be a
fixture class taking care of the state through its construction and
destruction.
**Edit**
One thing I believe is important is to use the terminology of the test
framework and follow the recommended approach of the test framework (if such
exists). A lot of confusion comes through not naming things consistently and
similarly. This is true for everything.
|
Python 3.2 Unable to import urllib2 (ImportError: No module named urllib2)
Question: I am using Windows, and I get the error:
ImportError: No module named urllib2
I think [this](http://stackoverflow.com/questions/2532321/python-importerror-no-module-named-urllib) is the solution for Linux. But how do I set this up on Windows?
I am using Python 3.2 and I am not able to see `urllib2` there in the Lib folder.
Answer: In Python 3, urllib2 was merged into urllib. _See also [another Stack Overflow question](http://stackoverflow.com/questions/2792650/python3-error-import-error-no-module-name-urllib) and the [urllib PEP 3108](http://www.python.org/dev/peps/pep-3108/#urllib-package)._
To make Python 2 code work in Python 3:
try:
import urllib.request as urllib2
except ImportError:
import urllib2
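Or, if the code only needs to run on Python 3, use the merged module directly; for example:
import urllib.request

html = urllib.request.urlopen('http://www.python.org/').read()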
|
What are equivalents to R's "phyper" function in Python?
Question: In R, I use the `phyper` function to do a hypergeometric test for
bioinformatics analysis. However I use a lot of Python code and using rpy2
here is quite slow. So, I started looking for alternatives. It seemed that
`scipy.stats.hypergeom` had something similar.
Currently, I call `phyper` like this:
pvalue <- 1-phyper(45, 92, 7518, 1329)
where 45 is the number of selected items having the property of interest, 92
the number of total items having the property, 7518 the number of non selected
items not having the property, and 1329 the total number of selected items.
In R, this yields `6.92113e-13`.
Attempting to do the same with `scipy.stats.hypergeom` however yields a
completely different result (notice, the numbers are swapped because the
function accepts numbers in a different way):
import scipy.stats as stats
pvalue = 1-stats.hypergeom.cdf(45, 7518, 92, 1329)
print pvalue
However this returns -7.3450134863151106e-12, which makes little sense. Notice
that I've tested this on other data and I had little issues (same precision up
to the 4th decimal, which is enough for me).
So it boils down to these possibilities:
1. I'm using the wrong function for the job (or wrong parameters)
2. There's a bug in scipy
In case of "1", are there other alternatives to `phyper` that can be used in
Python?
EDIT: As the comments have noted, this is a bug in scipy, fixed in git master.
Answer: From the
[docs](http://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.hypergeom.html),
you could try:
> `hypergeom.sf(x,M,n,N,loc=0)` : survival function (1-cdf — sometimes more
> accurate)
Also, I think you might have the values mixed up.
> Models drawing objects from a bin. M is total number of objects, n is total
> number of Type I objects. RV counts number of Type I objects in N drawn
> without replacement from population.
Therefore, by my reading: `x=q`, `M=n+m`, `n=m`, `N=k`.
So I would try:
stats.hypergeom.sf(45,(92+7518),92,1329)
|
how to open a new bash window from python and insert commands into the new bash window
Question: I have been attempting, with little success, to open a terminal or konsole
window from python and to insert commands into the terminal or konsole window.
So far, I have the following:
import os
os.system('konsole')
I cannot seem to figure out how to pipe the commands from the python prompt
window to the terminal or konsole window.
This may help. I am trying to run commands from python into R (stats
programming language). i.e. open konsole from python -> have letter R typed
into the konsole so that R will start in the konsole -> insert R commands,
coded in a python script, into the konsole running R. If anyone has an easier
idea could you please share.
Many thanks.
Answer: If all that you need python for is to send commands to R, why not use
[RPy2](http://rpy.sourceforge.net/rpy2.html) ? Else, try writing your commands
to a temp file, invoke [R in batch
mode](http://stat.ethz.ch/R-manual/R-patched/library/utils/html/BATCH.html)
with the file and then flush it. PS: You might want to check [this question](http://stackoverflow.com/questions/6434569/a-question-about-execute-an-r-script-in-python) as well for pointers.
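A minimal sketch of the temp-file/batch-mode route (this assumes `Rscript` is on the PATH and Python 2.7 for `subprocess.check_output`; `R CMD BATCH` would work similarly, and rpy2 avoids the subprocess entirely):
import subprocess
import tempfile

# write the R commands to a temporary script, then run it non-interactively
with tempfile.NamedTemporaryFile(suffix='.R', delete=False) as f:
    f.write('x <- rnorm(10)\nprint(mean(x))\n')
    script = f.name
print subprocess.check_output(['Rscript', script])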
|
Python Right Click Menu Using PyGTK
Question: So I'm still fairly new to Python, and have been learning for a couple months,
but one thing I'm trying to figure out is say you have a basic window...
#!/usr/bin/env python
import sys, os
import pygtk, gtk, gobject
class app:
def __init__(self):
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.set_title("TestApp")
window.set_default_size(320, 240)
window.connect("destroy", gtk.main_quit)
window.show_all()
app()
gtk.main()
I wanna right click inside this window, and have a menu pop up like alert,
copy, exit, whatever I feel like putting down.
How would I accomplish that?
Answer: There is a example for doing this very thing found at
<http://www.pygtk.org/pygtk2tutorial/sec-ManualMenuExample.html>
It shows you how to create a menu, attach it to a menu bar, and also listen for
a mouse button click event and pop up the very same menu that was created.
I think this is what you are after.
EDIT: (added further explanation to show how to respond to only right mouse
button events)
To summarise.
Create a widget to listen for mouse events on. In this case it's a button.
button = gtk.Button("A Button")
Create a menu
menu = gtk.Menu()
Fill it with menu items
menu_item = gtk.MenuItem("A menu item")
menu.append(menu_item)
menu_item.show()
Make the widget listen for mouse press events, attaching the menu to it.
button.connect_object("event", self.button_press, menu)
Then define the method which handles these events. As is stated in the example
in the link, the widget passed to this method is the menu that you want
popping up not the widget that is listening for these events.
def button_press(self, widget, event):
if event.type == gtk.gdk.BUTTON_PRESS and event.button == 3:
#make widget popup
widget.popup(None, None, None, event.button, event.time)
pass
You will see that the if statement checks whether a button was pressed; if
that is true, it then checks which of the buttons was pressed. The
event.button is an integer value, representing which mouse button was pressed.
So 1 is the left button, 2 is the middle and 3 is the right mouse button. By
checking to see if the event.button is 3, you are only responding to mouse
press events for the right mouse button.
|
python re module to replace the binary data inside a text file?
Question: I know mixing text and binary is awful, but I have to do this.
I want to replace the binary content, which is surrounded by "Content-Type:
image" and "----", with the string "XXXXXXXX".
So the code for test is:
# coding=utf-8
import re
raw_data = open('r_img.txt').read()
#data = re.sub(r"Content-Type: image.*?-----","Content-Type: imageXXXXXXX-----", raw_data, re.S)
data = re.sub(r"Content-Type: image[^-]*-----","Content-Type: imageXXXXXXX-----", raw_data, re.S)
print data
And the file r_img.txt would be:
Content-Disposition: form-data; name="commodity_pic1"; filename="C:\Documents and Settings\tim\My Documents\My Pictures\Pic\222A8888.jpg"
Content-Type: image/pjpeg
EEE? JFIF H H EEE C
EEE C
EEEWhfEEE[e?EEEEEEqEEEEEEEEEEEEEEEZIOEEE(r5?-iEEEEEEEEEEEEEEE?EEE?EEEEEE
-----------------------------7db27132d0198
I have tried string.replace() and re.sub, but I still can't find the answer.
Answer: This works for me:
data = re.sub(r"Content-Type: image.*-----","Content-Type: imageXXXXXXX-----",
raw_data, 0, re.DOTALL)
Essentially it matches in a greedy way all characters between `Content-Type:
image` and `-----`. The 0 means "match all occurrences of this pattern".
Probably this is superfluous for you, but you can't skip it as you also wanted
to use the flag `re.DOTALL`, which modifies the meaning of "any character" to
also include newlines.
HTH!
|
Find offset between system-time and Internet time using Python
Question: How would you find the time offset between the local OS system-time and
Internet time from various Internet time sources using Python?
Answer: Use [ntplib](http://pypi.python.org/pypi/ntplib/). Right from the manual:
>>> import ntplib
>>> c = ntplib.NTPClient()
>>> response = c.request('europe.pool.ntp.org', version=3)
>>> response.offset
-0.143156766891
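The sign convention is such that adding the offset to the local clock should approximate the server's notion of the current time; for example:
import time
# local clock plus the measured offset should approximate the NTP server's time
print time.ctime(time.time() + response.offset)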
|
Ipython no readline available and pip install readline error
Question: I installed IPython but it doesn't have the readline option. I first
downloaded GNU readline and compiled and installed it. I didn't know whether that was
a proper solution, but it was the first thing I thought of. It still didn't work,
and I got the same error as before:
WARNING: Readline services not available on this platform.
WARNING: The auto-indent feature requires the readline library
Then I tried using pip install readline and I get the error below. Any help
would be appreciated:
running install
running build
running build_ext
building 'readline' extension
creating build
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/Modules
creating build/temp.linux-x86_64-2.6/Modules/2.x
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DHAVE_RL_CALLBACK -DHAVE_RL_CATCH_SIGNAL -DHAVE_RL_COMPLETION_APPEND_CHARACTER -DHAVE_RL_COMPLETION_DISPLAY_MATCHES_HOOK -DHAVE_RL_COMPLETION_MATCHES -DHAVE_RL_COMPLETION_SUPPRESS_APPEND -DHAVE_RL_PRE_INPUT_HOOK -I. -I/home/jspender/include/python2.6 -c Modules/2.x/readline.c -o build/temp.linux-x86_64-2.6/Modules/2.x/readline.o -Wno-strict-prototypes
creating build/lib.linux-x86_64-2.6
gcc -pthread -shared build/temp.linux-x86_64-2.6/Modules/2.x/readline.o readline/libreadline.a readline/libhistory.a -L/home/jspender/lib -lncurses -lpython2.6 -o build/lib.linux-x86_64-2.6/readline.so
/usr/bin/ld: cannot find -lncurses
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
----------------------------------------
Command /home/jspender/bin/python2.6 -c "import setuptools;__file__='/home/jspender/build/readline/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-lBWIOm-record/install-record.txt failed with error code 1
Storing complete log in /home/jspender/.pip/pip.log
Answer: [tmaric](http://stackoverflow.com/users/704028/tmaric) is right. I had the
same problem while installing iPython (Ubuntu 12.10, quantal, 32-bit). I was
missing the dev version of the ncurses5 library. Try:
sudo apt-get install libncurses5-dev
and then installing the readline module again through pip
pip install readline
|
Python Pyramid - Add multiple chameleon base templates
Question: I am using [this](http://docs.pylonsproject.org/projects/pyramid_cookbook/dev/templates.html#using-a-beforerender-event-to-expose-chameleon-base-template) procedure to use a base template which the other templates can derive from.
How can I create multiple base templates?
Answer: Just register them both:
from pyramid.renderers import get_renderer
def add_base_template(event):
base = get_renderer('templates/base.pt').implementation()
base2 = get_renderer('templates/base2.pt').implementation()
event.update({'base': base, 'base2': base2})
And then choose which to use in your template for each page:
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:tal="http://xml.zope.org/namespaces/tal"
xmlns:metal="http://xml.zope.org/namespaces/metal"
metal:use-macro="base">
<tal:block metal:fill-slot="content">
My awesome content.
</tal:block>
</html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:tal="http://xml.zope.org/namespaces/tal"
xmlns:metal="http://xml.zope.org/namespaces/metal"
metal:use-macro="base2">
<tal:block metal:fill-slot="content">
Content on a totally different page.
</tal:block>
I believe a template doesn't have to be the whole HTML element, so you could
instead expand 2 macros into the same final template
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:tal="http://xml.zope.org/namespaces/tal"
xmlns:metal="http://xml.zope.org/namespaces/metal">
<body>
<div metal:use-macro="section1">
<tal:block metal:fill-slot="content">
Content for template "section1".
</tal:block>
</div>
<div metal:use-macro="section2">
<tal:block metal:fill-slot="content">
Content for template "section2".
</tal:block>
</div>
</body>
|
Send XML using urllib
Question: Following this [link](http://stackoverflow.com/questions/3020979/send-xml-file-to-http-using-python) I tried sending an XML file to my web service using GET:
import urllib
from createfile import XML
URL = "http://http://localhost:8080/mywebservice"
parameter = urllib.urlencode({'XML': XML})
response = urllib.urlopen(URL + "?%s" % parameter)
print response.read()
But it gives me this error:
Traceback (most recent call last):
File "C:\eclipse\testing_workspace\http tester\src\Main.py", line 15, in <module>
response = urllib.urlopen(URL + "?%s" % parameter)
File "C:\Python27\lib\urllib.py", line 84, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 205, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 331, in open_http
h = httplib.HTTP(host)
File "C:\Python27\lib\httplib.py", line 1047, in __init__
self._setup(self._connection_class(host, port, strict))
File "C:\Python27\lib\httplib.py", line 681, in __init__
self._set_hostport(host, port)
File "C:\Python27\lib\httplib.py", line 706, in _set_hostport
raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
httplib.InvalidURL: nonnumeric port: ''
But if I use the POST method described in that link, it works fine. My problem is
that I need to use GET, so why am I getting those errors?
response = urllib.urlopen(URL, parameter) # this works
Answer: Sending an XML file through a GET request is plain nonsense.
Use _POST_ instead.
|
Android XML + Object Storage
Question: I am sorry if this is a simple fix (I'm sure it probably is) but after a few
hours of me and my friend googling, I came up empty.
I have an XML file that I have retrieved from a server (using httpclient for
cookie handling goodness). I now wish to search through the XML. The XML
defines a set of playing cards with attributes. As an example it would be
something like this.
<CardInfo>
<Type>
Player
</Type>
<ID>
674868
</ID>
</CardInfo>
There would obviously be a few more attributes within this, but that is to
just illustrate the example. There may be many of these 'cardinfo' per XML I
pull, I would like some way of filtering the XML to store each card, and have
the attributes of the card in an easy to access form. I obviously do not
expect all of this done for me, but thought the context may be important for
any solutions.
I'm sorry to ask this, and feel terrible for doing so, but even XML parsing
and storage is difficult for an android nooby! (why can't they let us do it in
python for the love of god).
Answer: Use an [XML pull-
parser](http://developer.android.com/reference/org/xmlpull/v1/XmlPullParser.html).
It gives you the XML document in its logical pieces, part (event) per part. So for each CardInfo element you create a new CardInfo object, and for element name 'Type' you set the CardInfo type, for element name 'ID' you set the CardInfo ID, and so on.
**Update** Add read and write methods to your objects, or create a standalone
class for read/write to and from XML:
This example is for the StAX API, but I guess you get the picture (the methods
and event types are the same, but have different names).
In MyObject:
public void read(XMLStreamReader reader) throws XMLStreamException {
int event;
// read to the first tag, so we are at level 1
do {
event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
break;
}
} while(reader.hasNext());
int level = 1;
do {
event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++; // increment
String localName = reader.getLocalName();
if(localName.equals("Domain")) {
event = reader.next();
if(event == XMLStreamConstants.CHARACTERS) {
domain = reader.getText();
}
} else if(localName.equals("URL")) {
event = reader.next();
if(event == XMLStreamConstants.CHARACTERS) {
url = reader.getText();
}
} else if(localName.equals("Headers")) {
readHeaders(reader);
level--;
} else throw new IllegalArgumentException("Unexpected element " + localName + " at " + reader.getLocation());
}
if(event == XMLStreamConstants.END_ELEMENT) {
level--; // decrement
}
} while(level > 0); // simple level check
}
Parse the subtype called headers:
<Headers>
<Header name="" value=""/>
<Header name="" value=""/>
<Header name="" value=""/>
</Headers>
using the code
private void readHeaders(XMLStreamReader reader) throws XMLStreamException {
int level = 1;
int event;
do {
event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++;
String localName = reader.getLocalName();
if(localName.equals("Header")) {
CodeRequestHeader header = new CodeRequestHeader();
header.setName(reader.getAttributeValue(null, "name"));
header.setValue(reader.getAttributeValue(null, "value"));
headers.add(header);
}
} else if(event == XMLStreamConstants.END_ELEMENT) {
level--;
}
} while(level > 0);
}
Now there are multiple ways of doing this stuff, but the important things are
these:
1. First go to level 1; read the root element
2. Start a do-while loop at level 1
3. Increment and decrement level on start and end elements so the level is correct
4. Use the level to control the while loop, continue as long as level > 0
5. For subelements (i.e. when some tag contains multiple tags which are mapped to object entities), create a new method repeating steps 2-4. Then decrement level by one when the method is used.
If you use this pattern, you can pull-parse n-dimensional XML documents in an
orderly, strict and proper way. Strictly speaking, the rules mean that there should be a 'readHeader' method called from the readHeaders method, but it is not necessary if there are no subelements in the Header element. Have fun ;)
|
Update numbers in the middle of a string
Question: I'm looking at diff-style output, but I need to update the line numbers. So I
see:
*** 1,2 *****
Actual line 1
Actual line 2
--- 1,2 -----
Expected line 1
Expected line 2
and in my results file, I'm at resline. So if resline=line 90, I'd want to
change the second "1,2" to "91,92".
In perl, I'd use the following on the line that begins "---"
s/(\d+?)/($1+$resline)/eg
How should I do this in python?
Answer: You would use `re.sub` and pass in a callable instead of a string as the
replacement:
import re
re.sub(r'\d+?', lambda m: str(int(m.group(0))+resline), YOUR_STR)
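A quick check of the idea on the second header line from the question, with `resline = 90` (note the greedy `\d+` here, so a multi-digit number such as `12` is shifted as a whole; with the non-greedy `\d+?` each digit would be shifted separately):
    import re
    resline = 90
    line = "--- 1,2 -----"
    print re.sub(r'\d+', lambda m: str(int(m.group(0)) + resline), line)
    # --- 91,92 -----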
|
Python parse csv file - replace commas with colons
Question: I suspect this is a common problem, but I couldn't seem to locate the answer.
I am trying to remove all commas from a csv file and replace them with colons.
I would normally use sed or vi for this, but I need to use a purely python
implementation. Here is what I have come up with so far:
import csv
with open("temp.csv", mode="rU") as infile:
reader = csv.reader(infile, dialect="excel")
with open("temp2.txt", mode="w") as outfile:
writer = csv.writer(outfile)
for rows in reader:
for parsed_item in rows:
parsed_item = rows.replace(',', ':') # I can't do this with a list!
writer.writerow(parsed_item)
Can anyone help me out with how to do this? Thanks in advance for your help.
Answer: The answer is easier than you think. You just need to set the delimiter for
`csv.writer`:
    import csv
    with open("temp.csv", mode="rU") as infile:
        reader = csv.reader(infile, dialect="excel")
        with open("temp2.txt", mode="w") as outfile:
            writer = csv.writer(outfile, delimiter=':')
            writer.writerows(reader)
Your line trying to replace `,` with `:` wasn't going to do anything, because each row had already been split into a list of fields by `csv.reader`.
|
Strip Non alpha numeric characters from string in python but keeping special characters
Question: I know similar questions were asked around here on StackOverflow. I tried to adapt some of the approaches but I couldn't get anything to work that fits my needs:
Given a **python string** I want to strip every non alpha numeric character, **but** leave any special character like _µ æ Å Ç ß_ ... Is this even possible? With regexes I tried variations of this
    re.sub(r'[^a-zA-Z0-9: ]', '', x)  # x is my string to sanitize
but it strips more than I want. An example of what I want would be:
Input: "A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?"
Output: "A string with characters µ æ Å Ç ß Some whitespace confusion"
Is this even possible without getting complicated?
Answer: Use \w with the UNICODE flag set. This will match the underscore also, so you
might need to take care of that separately.
Details on <http://docs.python.org/library/re.html>.
EDIT: Here is some actual code. It will keep unicode letters, unicode digits,
and spaces.
import re
x = u'$a_bßπ7: ^^@p'
pattern = re.compile(r'[^\w\s]', re.U)
re.sub(r'_', '', re.sub(pattern, '', x))
If you did not use re.U then the ß and π characters would have been stripped.
Sorry I can't figure out a way to do this with one regex. If you can, can you
post a solution?
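For the record, one way to fold it into a single compiled pattern (just a sketch, not part of the original answer) is to add the underscore as an alternative to the negated class:
    import re
    x = u'$a_bßπ7: ^^@p'
    pattern = re.compile(r'[^\w\s]|_', re.U)  # drop anything that is not a unicode word/space character, plus underscores
    result = pattern.sub('', x)               # u'abßπ7 p'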
|
Django/Jquery problem - Cannot assign "u'Agua'": "Venta.producto" must be a "Producto" instance
Question: I'm using django+jquery autocomplete widget in my application. I customized
the admin form of one of my tables to get autocomplete in the input textbox.
It's working except that when I save the form the following exception occurs:
ValueError at /admin/Stock/venta/add/
Cannot assign "u'Agua'": "Venta.producto" must be a "Producto" instance.
Request Method: POST
Request URL: http://127.0.0.1:8080/admin/Stock/venta/add/
Exception Type: ValueError
Exception Value:
Cannot assign "u'Agua'": "Venta.producto" must be a "Producto" instance.
Exception Location: /usr/lib/pymodules/python2.6/django/db/models/fields/related.py in __set__, line 273
Python Executable: /usr/bin/python
Python Version: 2.6.5
...
It seems that it's not converting my autocompleted text into a Producto object. I saw the POST and it's sending the numerical key (i.e. 2) of the selected Producto. When I disabled all the autocomplete stuff, the POST is exactly the same but it works. So something in the admin.py or models.py source code is wrong: in one case the value gets converted to an object and in the other it doesn't.
The following is the **models.py** part:
class Producto(models.Model):
detalle = models.CharField('Detalle', max_length=200)
importe = models.FloatField('Importe')
def __unicode__(self):
return self.detalle
class Empleado(models.Model):
nombre = models.CharField('Nombre', max_length=100)
def __unicode__(self):
return self.nombre
class Venta(models.Model):
importe = models.FloatField('Importe')
producto = models.ForeignKey(Producto)
responsable = models.ForeignKey(Empleado)
mesa = models.IntegerField()
The following is the **admin.py** part:
class VentaAdminForm(forms.ModelForm):
importe = forms.DecimalField()
producto = forms.CharField()
responsable = forms.CharField()
mesa = forms.IntegerField()
class Meta:
model = Venta
fields = ['producto', 'importe', 'responsable', 'mesa']
class VentaAdmin(admin.ModelAdmin):
form = VentaAdminForm
admin.site.register(Venta, VentaAdmin)
**views.py**
@login_required
def search(request):
results = []
if request.method != "GET":
return HttpResponse()
term = q = None
if request.GET.has_key(u'q'):
q = request.GET[u'q']
if request.GET.has_key(u'term'):
term = request.GET[u'term']
if not q or not term:
return HttpResponse()
if q == 'producto':
model_results = Producto.objects.filter(detalle__contains=term)
for x in model_results:
results.append({'label': x.detalle,'value': x.detalle, 'id': x.id })
elif q == 'responsable':
model_results = Empleado.objects.filter(nombre__contains=term)
for x in model_results:
results.append({'label': x.nombre,'value': x.nombre, 'id': x.id })
else:
raise Exception("Unknown query_object")
json = simplejson.dumps(results)
return HttpResponse(json, mimetype='application/json')
The javascript part:
<script>
$(function() {
$( "#id_producto" ).autocomplete({
source: "/search/?q=producto",
});
$( "#id_responsable" ).autocomplete({
source: "/search/?q=responsable",
});
});
</script>
When I write, e.g. agua, in the autocomplete textbox, it sends a GET and the response is the following.
**http://127.0.0.1:8080/search/?q=producto &term=agua**
[{"id": 3, "value": "Agua", "label": "Agua"}]
**versions**
django 1.1.1
jquery 1.5.1
jquery-ui 1.8.13
Answer: It looks like "Agua" is the value that's being passed back to your controller, whereas Django is expecting you to pass "3". Can you try changing your backend to send
    [{"value": "3", "label": "Agua"}]
and see if it works.
|
Can I have scripts outside my python exe?
Question: I'm new to Python and starting to learn it. However I have one query, which
I'd like to know for the future.
Once I've made a python program and produced an exe via py2exe, is it then
possible to execute python scripts outside of the exe or do I have to look at
plugins?
Answer: Yes, it is entirely possible. It's a standard Python interpreter. You'll need
to put the directory in `sys.path` so that the filesystem importer will pick
it up. And then for runtime imports generally using the `__import__` function
will be the easiest way.
A Google search for "py2exe runtime import" provides early in the results one
example of it: <http://www.velocityreviews.com/forums/t330613-py2exe-
excluding-a-module-and-import-it-during-runtime-how.html>
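As a rough sketch of what that runtime import can look like (the plugin directory and module names here are made up for illustration):
    import os, sys

    plugin_dir = os.path.join(os.path.dirname(sys.argv[0]), 'plugins')
    sys.path.append(plugin_dir)              # let the filesystem importer see it

    for name in os.listdir(plugin_dir):
        if name.endswith('.py'):
            module = __import__(name[:-3])   # import plugins/<name>.py at runtime
            print 'loaded plugin', module.__name__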
|
An Efficient Lookup Table in redis--implemented using redis sets?
Question: I want to use redis to store a large set of user_ids and with each of these
ids, a "group id" to which that user was previously assigned:
User_ID | Group_ID
1043 | 2
2403 | 1
The number of user_ids is fairly large (~ 10 million); the number of unique
group ids is about 3 - 5.
My purpose for this LuT is routine:
* _find the group id for a given user_ ; and
* _return a list of other users (of specified length) with the same group id as that given user_
There might be an idiomatic way to do this in redis or at least a way that's
most efficient. If so i would like to know what it is. Here's a simplified
version of my working implementation (using the python client):
# assume a redis server is already running
# create some model data:
import numpy as NP
NUM_REG_USERS = 100
user_id = NP.random.randint(1000, 9999, NUM_REG_USERS)
cluster_id = NP.random.randint(1, 4, NUM_REG_USERS)
D = zip(cluster_id, user_id)
from redis import Redis
# r = Redis()
# populate the redis LuT:
for t in D :
r.sadd( t[0], t[1] )
# the queries:
# is user_id 1034 in Group 1?
r.sismember("1", 1034)
# return 10 users in the same Group 1 as user_id 1034:
r.smembers("1")[:10] # assume user_id 1034 is in group 1
So i have implemented this LuT using ordinary redis **_sets_** ; each set is
keyed to a Group ID (1, 2, or 3), so there are three sets in total.
Is this the most efficient way store this data given the type of queries i
want to run against it?
Answer: Using sets is a good basic approach, though there are a couple of things in
there you may want to change:
Unless you store the group ID for each user somewhere, you will need up to 5 round trips (one SISMEMBER per group) to get the group for a particular user - the operation itself is O(1),
but you still need to consider latency. Usually it is fairly easy to do this
without too much effort - you have lots of other properties stored for each
user, so it is trivial to add one for group id.
You probably want SRANDMEMBER rather than SMEMBERS - I think SMEMBERS will
return the same 10 items from your million item set every time.
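A rough sketch of both suggestions on top of the code in the question (the `user:<id>:group` key name is made up for illustration, and the single-argument form of `SRANDMEMBER` returns one random member per call, so members may repeat across calls):
    # store the group id per user as well, so the group lookup is one round trip
    for group_id, user_id in D:
        r.sadd(group_id, user_id)
        r.set('user:%d:group' % user_id, group_id)
    # group of user 1034 in a single call
    group = r.get('user:1034:group')
    # ten random (possibly repeated) users from the same group
    others = [r.srandmember(group) for _ in range(10)]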
|
Validating a zip file coming from stdin
Question: After some frustration with `unzip(1L)`, I've been trying to create a script
that will unzip and print out raw data from all of the files inside a zip
archive that is coming from stdin. I currently have the following, which
works:
import sys, zipfile, StringIO
stdin = StringIO.StringIO(sys.stdin.read())
zipselect = zipfile.ZipFile(stdin)
filelist = zipselect.namelist()
for filename in filelist:
print filename, ':'
print zipselect.read(filename)
When I try to add validation to check if it truly is a zip file, however, it
doesn't like it.
...
zipcheck = zipfile.is_zipfile(zipselect)
if zipcheck is not None:
print 'Input is not a zip file.'
sys.exit(1)
...
results in
File "/home/chris/simple/zipcat/zipcat.py", line 13, in <module>
zipcheck = zipfile.is_zipfile(zipselect)
File "/usr/lib/python2.7/zipfile.py", line 149, in is_zipfile
result = _check_zipfile(fp=filename)
File "/usr/lib/python2.7/zipfile.py", line 135, in _check_zipfile
if _EndRecData(fp):
File "/usr/lib/python2.7/zipfile.py", line 203, in _EndRecData
fpin.seek(0, 2)
AttributeError: ZipFile instance has no attribute 'seek'
I assume it can't seek because it is not a file, as such?
Sorry if this is obvious, this is my first 'go' with Python.
Answer: You should pass `stdin` to `is_zipfile`, not `zipselect`. `is_zipfile` takes a
path to a file or a file object, not a `ZipFile`.
See [the zipfile.is_zipfile
documentation](http://docs.python.org/library/zipfile.html#zipfile.is_zipfile)
You are correct that a `ZipFile` can't seek because it isn't a file. It's an
archive, so it can contain many files.
|
python getoutput() equivalent in subprocess
Question: I want to get the output from some shell commands like `ls` or `df` in a
python script. I see that `commands.getoutput('ls')` is deprecated but
`subprocess.call('ls')` will only get me the return code.
I'll hope there is some simple solution.
Answer: Use **subprocess.Popen** :
import subprocess
process = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE)
out, err = process.communicate()
print(out)
Note that communicate blocks until the process terminates. You could use
**process.stdout.readline()** if you need the output before it terminates. For
more information see the
[documentation](http://docs.python.org/py3k/library/subprocess.html#replacing-
older-functions-with-the-subprocess-module).
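On Python 2.7 and later there is also `subprocess.check_output`, which is about as close to the old `commands.getoutput` as it gets (unlike `getoutput` it raises `CalledProcessError` on a non-zero exit status):
    import subprocess
    out = subprocess.check_output(['ls', '-a'])   # captured stdout as a string
    print(out)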
|
From HTML to CSS style with Python
Question: I've extracted certain elements from an HTML page with beautifulsoup and want to extract the corresponding CSS styles (which reside in most cases in external css files) via python.
How can I get a jQuery-style selector if I have an HTML element? If I had this selector, I could use cssutils to parse the CSS and get it.
TIA for help.
Answer: You may take a look at [PyQuery's
API](http://packages.python.org/pyquery/api.html). It provides similar css
selector syntax as jQuery does and it's much faster than beautifulsoup because
it relies on lxml to do the parsing work.
from pyquery import PyQuery as pq  # you can treat it as a css selector
html = '<div class="foo"><a href="somewhere"></a></div>'
parsed = pq(html) # PyQuery object, is a callable
pq_list = parsed('.foo a') # doing css selection
for node in pq_list: # node here is a lxml element object
print node.attrib['href'] # => somewhere
|
Is a python script aware of its stored location path?
Question:
/home/bar/foo/test.py:
I want `test.py` to print `/home/bar/foo` irrespective of where I run the script from:
import os
def foo():
print os.getcwd()
test run:
[/home/bar $] python /home/bar/foo/test.py # echoes /home/bar
[/tmp $] python /home/bar/foo/test.py # echoes /tmp
`os.getcwd()` is not the function for the task. How can I get this done otherwise?
Answer: Try this:
    import os.path
    p = os.path.abspath(__file__)
    print os.path.dirname(p)   # /home/bar/foo, no matter where the script is run from
|
How to make group permissions work in Django-nonrel for Google App Engine
Question: I'm trying to get role-based permissions working for django-nonrel for GAE.
Out of the box, it didn't seem to work, probably because of the implicit many-
to-many relationship between Users and Groups, so I found and installed
<http://www.fhahn.com/writing/Django-s-Permission-System-with-Django-Nonrel>.
Per the documentation, I added permission_backend_nonrel to INSTALLED_APPS
(after djangotoolbox), and defined AUTHENTICATION_BACKENDS to the appropriate
class in settings.py.
This gets me past the earlier problem ("DatabaseError: This query is not
supported by the database."), but I'm still stuck because when I run a very
simple sample, I get an empty set of permissions when I believe I should be
getting something back. The below is about as simple an example as I could
make. It's launched in the django framework by python manage.py shell - it's a
simple pony shop. I'm trying to add a user to a group, give that group
permissions, and then see those permissions reflected as part of the set of
permissions the user has:
>>> from django.contrib.auth.models import Group, Permission, User
>>> from django.contrib.contenttypes.models import ContentType
>>> from pony_shop.models import Pony
#Create the group:
>>> farmers = Group(name="Farmers")
>>> farmers.save()
>>> pony_ct = ContentType.objects.get(app_label='pony_shop', model='pony')
#Create the Permission
>>> can_twirl = Permission(name='Can Twirl', codename='can_twirl', content_type=pony_ct)
>>> can_twirl.save()
#Give the Permission to the Group
>>> farmers.permissions.add(can_twirl)
>>> farmers.save()
#Create the User
>>> francis = User(username='francis')
>>> francis.save()
#Put the user in the group
>>> francis.groups.add(farmers)
>>> francis.save()
#Get a pony object
>>> firefly = Pony(price=12, height=3, name='Firefly', color='fuscia')
>>> firefly.save()
>>> francis.get_all_permissions()
set([]) #<-- WHY?!?
#Just in case I needed to check the permissions against a pony object:
>>> francis.get_all_permissions(obj=firefly)
set([]) #<-- Still no joy
So, the question is: Why doesn't the above work, and what do I need to change
to make it work?
Thanks in advance for your help!
Answer: Thanks to a colleague, I got the answer to this. Apparently, I needed to avoid the built-in group/permission `add()` calls and instead use the utility functions that come with *permission_backend_nonrel*:
>>>from permission_backend_nonrel import utils
>>>utils.add_permission_to_group(can_twirl,farmers)
>>>utils.add_user_to_group(francis,farmers)
Then, it works.
|
How to fetch content of XML root element in Python?
Question: I have an XML file, e.g.:
<?xml version="1.0" encoding="UTF-8"?>
<root>
First line. <br/> Second line.
</root>
As an output I want to get: `'\nFirst line. <br/> Second line.\n'`. I just want to note that if the root element contains other nested elements, they should be returned as is.
Answer: The first that I came up with:
from xml.etree.ElementTree import fromstring, tostring
source = '''<?xml version="1.0" encoding="UTF-8"?>
<root>
First line.<br/>Second line.
</root>
'''
xml = fromstring(source)
result = tostring(xml).lstrip('<%s>' % xml.tag).rstrip('</%s>' % xml.tag)
print result
# output:
#
# First line.<br/>Second line.
#
But it's not a truly general-purpose approach, since it fails if the opening root element (`<root>`) contains any attributes.
**UPDATE:** This approach has another issue. Since `lstrip` and `rstrip` match
any combination of given chars, you can face such problem:
# input:
<?xml version="1.0" encoding="UTF-8"?><root><p>First line</p></root>
# result:
p>First line</p
If you really need only the literal string between the opening and closing tags (as you mentioned in the comment), you can use this:
from string import index, rindex
from xml.etree.ElementTree import fromstring, tostring
source = '''<?xml version="1.0" encoding="UTF-8"?>
<root attr1="val1">
First line.<br/>Second line.
</root>
'''
# following two lines are needed just to cut
# declaration, doctypes, etc.
xml = fromstring(source)
xml_str = tostring(xml)
start = index(xml_str, '>')
end = rindex(xml_str, '<')
result = xml_str[start + 1 : -(len(xml_str) - end)]
Not the most elegant approach, but unlike the previous one it works correctly with attributes within the opening tag as well as with any valid xml document.
|
List users in IRC channel using Twisted Python IRC framework
Question: I am trying to write a function that will print the lists of nicks in an IRC
channel to the channel using Twisted Python. How do I do this? I have read the
API documentation and I have only seen one question similar to mine on this
site, but it doesn't really answer my question. If I knew how to get the
userlist (or whatever it is Twisted recognizes it as), I could simply iterate
the list using a for loop, but I don't know how to get this list.
Answer: The linked example you refer to uses `WHO`, which is a different command with a different purpose. The correct way is to use `NAMES`.
Extended IRCClient to support a names command.
from twisted.words.protocols import irc
from twisted.internet import defer
class NamesIRCClient(irc.IRCClient):
def __init__(self, *args, **kwargs):
self._namescallback = {}
def names(self, channel):
channel = channel.lower()
d = defer.Deferred()
if channel not in self._namescallback:
self._namescallback[channel] = ([], [])
self._namescallback[channel][0].append(d)
self.sendLine("NAMES %s" % channel)
return d
def irc_RPL_NAMREPLY(self, prefix, params):
channel = params[2].lower()
nicklist = params[3].split(' ')
if channel not in self._namescallback:
return
n = self._namescallback[channel][1]
n += nicklist
def irc_RPL_ENDOFNAMES(self, prefix, params):
channel = params[1].lower()
if channel not in self._namescallback:
return
callbacks, namelist = self._namescallback[channel]
for cb in callbacks:
cb.callback(namelist)
del self._namescallback[channel]
Example:
def got_names(nicklist):
log.msg(nicklist)
self.names("#some channel").addCallback(got_names)
|
Python Find & Replace Beautiful Soup
Question: I am using Beautiful Soup to replace the occurrences of a pattern with an href link inside an HTML file.
I am facing a problem as described below
modified_contents = re.sub("([^http://*/s]APP[a-z]{2}[0-9]{2})", "<a href=\"http://stack.com=\\1\">\\1</a>", str(soup))
Sample input 1:
Input File contains APPdd34
Output File contains <a href="http://stack.com=APPdd34"> APPdd34</a>
Sample input 2:
Input File contains <a href="http://stack.com=APPdd34"> APPdd34</a>
Output File contains <a href="http://stack.com=<a href="http://stack.com=APPdd34"> APPdd34</a>"> <a href="http://stack.com=APPdd34"> APPdd34</a></a>
Desired Output File 2 is same as Sample Input File 2.
How can I rectify this problem?
Answer: This may not entirely answer your problem because I don't know what an entire input file could look like, but I hope this is a direction you can take.
from BeautifulSoup import BeautifulSoup, Tag
text = """APPdd34"""
soup = BeautifulSoup(text)
var1 = soup.text
text = """<a href="http://stack.com=APPdd34"> APPdd34</a>"""
soup = BeautifulSoup(text)
var2 = soup.find('a').text
soup = BeautifulSoup("<p>Some new html</p>")
tag1 = Tag(soup, "a",{'href':'http://stack.com='+var1,})
tag1.insert(0,var1) # Insert text
tag2 = Tag(soup, "a",{'href':'http://stack.com='+var2,})
tag2.insert(0,var2)
soup.insert(0,tag1)
soup.insert(3,tag2)
print soup.prettify()
So basically, just use BeautifulSoup to extract the text and then you can
build Tags from there.
|
Why do rfind and find return the same values in Python 2.6.5?
Question: I'm relatively new to Python, and something is acting up. Basically, when I
call `str.rfind("test")` on a string, the output is the same as
`str.find("test")`. It's best that I show you an example:
Python 2.6.5 (r265:79063, May 6 2011, 17:25:59)
[GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import string
>>> line = "hello what's up"
>>> line.rfind("what")
6
>>> line.find("what")
6
By my understanding, the value of `line.find` is okay, but the value of
`line.rfind` should be `9`. Am I misinterpreting these functions or not using
them well?
Answer: I think you're expecting `rfind` to return the index of the rightmost
character in the first/leftmost match for `"what"`. It actually returns the
index of the leftmost character in the last/_rightmost_ match for `"what`". To
quote [the
documentation](http://docs.python.org/2.7/library/stdtypes.html#str.rfind):
> ### `str.rfind(sub[, start[, end]])`
>
> Return the highest index in the string where substring _sub_ is found, such
> that _sub_ is contained within `s[start:end]`. Optional arguments _start_
> and _end_ are interpreted as in slice notation. Return `-1` on failure.
`"ab c ab".find("ab")` would be `0`, because the leftmost occurrence is on the
left end.
`"ab c ab".rfind("ab")` would be `5`, because the rightmost occurrence is
starts at that index.
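If what you actually want is the index of the last character of the first match, one way (just a sketch) is to add the pattern length to `find`, or take the end of a `re.search` match:
    import re
    line = "hello what's up"
    print line.find("what") + len("what") - 1   # 9
    print re.search("what", line).end() - 1     # 9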
|
how do i dump a single sqlite3 table in python?
Question: I would like to dump only one table but by the looks of it there is no
parameter for this.
I found this example of the dump but it is for all tables:
# Convert file existing_db.db to SQL dump file dump.sql
import sqlite3, os
con = sqlite3.connect('existing_db.db')
with open('dump.sql', 'w') as f:
for line in con.iterdump():
f.write('%s\n' % line)
Answer: You can copy only the single table in an in memory db:
import sqlite3
def getTableDump(db_file, table_to_dump):
conn = sqlite3.connect(':memory:')
cu = conn.cursor()
cu.execute("attach database '" + db_file + "' as attached_db")
cu.execute("select sql from attached_db.sqlite_master "
"where type='table' and name='" + table_to_dump + "'")
sql_create_table = cu.fetchone()[0]
cu.execute(sql_create_table);
cu.execute("insert into " + table_to_dump +
" select * from attached_db." + table_to_dump)
conn.commit()
cu.execute("detach database attached_db")
return "\n".join(conn.iterdump())
TABLE_TO_DUMP = 'table_to_dump'
DB_FILE = 'db_file'
print getTableDump(DB_FILE, TABLE_TO_DUMP)
**Pro** : Simplicity and reliability: you don't have to re-write any library
method, and you are more assured that the code is compatible with future
versions of the sqlite3 module.
**Con** : You need to load the whole table in memory, which may or may not be
a big deal depending on how big the table is, and how much memory is
available.
|
Python-LDAP simple_bind_s timeout
Question: Is there a way to set the timeout for "simple_bind_s" in python-LDAP manually? I have tried `ldapObject.timeout = 10` but it did not work for me. Any ideas?
Thanks in advance..
Answer: Set the option `ldap.OPT_NETWORK_TIMEOUT` for the ldap object.
import ldap
l = ldap.initialize('ldap://servername:389')
l.set_option(ldap.OPT_NETWORK_TIMEOUT, 10.0)
l.simple_bind_s('username', 'password')
This will raise a ldap.SERVER_DOWN exception if the specified timeout is
reached.
|
Problem running PyDev-developed apps in terminal
Question: I'm having some import problems with an application I developed in python with
Eclipse/PyDev. Running the app from within Eclipse is no problem but when I
try running it through the linux terminal the imports (which are imported from
other folders (packages in Eclipse)) are broken and I get an ImportError: No
module named xxx..
From previous experiences developing Java-apps in Eclipse I always solved this
through exporting the project to a runnable jar-file but this isn't an option
with Python.
Is there a way of circumventing this? I'd rather not put all my .py-files in a
single folder since I very much like the package-system (guess Java has
damaged me). Can I change the import statement to make it work in both Eclipse
and the terminal or do I have to abandon PyDev if I want this to work in the
terminal?
Thanks for any help!
Slim
Answer: The key here is that PyDev and Eclipse manage a custom Python Path when you're
launching within Eclipse. You can modify your environment variables to contain
a more complete PYTHONPATH value that contains the locations where you're
importing from, or you can use `sys.path.append()` to add directories to the
path at run time so that the imports can be resolved.
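A minimal sketch of the `sys.path.append()` option, assuming the packages you import live one directory above the script being run (the package and module names here are hypothetical):
    import os, sys
    # make the project root importable no matter where the script is started from
    project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    if project_root not in sys.path:
        sys.path.append(project_root)
    import mypackage.mymodule   # hypothetical package that previously failed to import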
|
INSERT INTO SELECT syntax error Python ODBC with Microsoft Access
Question: Had a long search through here and the web and can't seem to find any examples that explain why I am getting a syntax error on the following:
    import win32com.client

    PregCode = recordsetraw.Fields.Item("PregnancyCode").value
    SQL = "INSERT INTO UniqueData SELECT * FROM Rawdata WHERE PregnancyCode =%s"
    params = (PregCode)
    connection.execute(SQL, params)
    print PregCode
    recordsetraw.MoveNext()
I'm getting the following error:
    Traceback (most recent call last):
      File "testdb.py", line 22, in <module>
        connection.execute(SQL, params)
      File "", line 2, in execute
    pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft JET Database Engine', u"Syntax error in query expression 'PregnancyCode = %s'.", None, 5003000, -2147217900), None)
I have tried hardcoding in PregnancyCode to see if that would make a
difference but no, same error.
Any ideas what I'm doing wrong?
Answer: My Python is very rusty, and I don't recall what this type of substitution is
called, but I think you're trying to accomplish something similar to this with
your INSERT statement:
>>> print "Hello %s" %("World")
Hello World
From your code:
SQL = "INSERT INTO UniqueData SELECT * FROM Rawdata WHERE PregnancyCode =%s"
params = (PregCode)
connection.execute(SQL, params)
Trouble is _connection_ is an ADO Connection, and its [Execute
Method](http://msdn.microsoft.com/en-us/library/ms675023%28VS.85%29.aspx)
won't perform the Pythonic text substitution you want.
I think you should do the text substitution in Python **before** you feed the INSERT string to `connection.execute`. Maybe something like this:
SQL = "INSERT INTO UniqueData SELECT * FROM Rawdata WHERE PregnancyCode =%s" %(PregCode)
connection.execute(SQL)
If I didn't get the Python quite right, hopefully it's close enough so you can
see how to fix it.
BTW, your title mentions Python ODBC, but I don't see that your code uses ODBC
at all.
|
Making a list of evenly spaced numbers in a certain range in python
Question: What is a pythonic way of making a list of arbitrary length containing evenly spaced numbers (not just whole integers) between given bounds? For instance:
my_func(0,5,10) # ( lower_bound , upper_bound , length )
# [ 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5 ]
Note the `range()` function only deals with integers. And this:
def my_func(low,up,leng):
list = []
step = (up - low) / float(leng)
for i in range(leng):
list.append(low)
low = low + step
return list
seems too complicated. Any ideas?
Answer: Given numpy, you could use
[linspace](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html#numpy-
linspace):
Including the right endpoint (5):
In [46]: import numpy as np
In [47]: np.linspace(0,5,10)
Out[47]:
array([ 0. , 0.55555556, 1.11111111, 1.66666667, 2.22222222,
2.77777778, 3.33333333, 3.88888889, 4.44444444, 5. ])
Excluding the right endpoint:
In [48]: np.linspace(0,5,10,endpoint=False)
Out[48]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5])
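If you'd rather avoid the numpy dependency, a list comprehension keeps it to a couple of lines (excluding the right endpoint, like the loop in the question):
    def my_func(low, up, leng):
        step = (up - low) / float(leng)
        return [low + i * step for i in range(leng)]

    print my_func(0, 5, 10)
    # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]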
|
issuing dos commands from inside ruby on rails controller in the same dos session
Question: **Here is my scenario**. I have an ajax call in my web site to find the elevation at a particular point. Once this point comes into an action of a controller in Ruby on Rails, I have to use python on the command line to find the elevation.
The following sequence of commands in DOS does that for me.
1. python (starts a python session)
2. import arcpy (takes a lot of time)
3. function call (very fast).
Now if I put this into a script and run it, I do get the result, but it's very slow, because the 'import' step takes a lot of time. The actual function call takes less than a second.
As all this is supposed to happen behind an Ajax call on a RoR web site, such a large delay is unacceptable.
**Question:** Is it possible for me in RoR to open a 'command line session' when the application loads and issue the first two commands, and then use this session every time a request comes into a controller's action, issuing the third command and returning its output? If yes, can someone please post some samples?
Thanks Shaunak
Answer: What you are proposing could be possible if Rails were friendlier about forked processes. A cleaner and better solution would be to write a python daemon that you could query, so that you don't incur the startup penalty. (This could be a web service, or a daemon you communicate with over plain network sockets, or whatever.)
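A very rough sketch of such a daemon using plain sockets (the elevation function is a stand-in, since the actual arcpy call and its arguments are not shown in the question):
    # elevation_daemon.py -- pays the arcpy import cost once, then answers queries forever
    import socket
    import arcpy   # the slow import happens once, at daemon start-up

    def elevation(x, y):
        # placeholder: call whatever arcpy routine your script actually uses here
        return 0.0

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', 9000))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        x, y = conn.recv(1024).split()    # request is simply "lon lat"; no framing or error handling
        conn.send(str(elevation(float(x), float(y))))
        conn.close()
The Rails action can then open a TCP connection to 127.0.0.1:9000 for each request instead of spawning a fresh Python process.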
|
Circular reference between modules?
Question: If have this module
# mymodule.py
import __main__
And I import it in the python interpreter:
>>> import mymodule
>>> dir(mymodule)
['__builtins__', '__doc__', '__file__', '__main__', '__name__', '__package__']
>>> dir()
['__builtins__', '__doc__', '__name__', '__package__', 'mymodule']
>>> import sys
>>> sys.modules['__main__'] == mymodule.__main__
True
>>> mymodule == mymodule.__main__.mymodule
True
I've just created a circular reference by importing a module that imports
`__main__`. Is this bad?
Answer: No. Python has no problem with circular references like this. The only thing
you must be sure of is that you don't use a name from the other module until
it has been defined.
|
How to animate a time-ordered sequence of matplotlib plots
Question: I want to plot a sequence of .png images in matplotlib. The goal is to plot
them rapidly to simulate the effect of a movie, but I have additional reasons
for wanting to avoid actually creating an .avi file or saving matplotlib
figures and then viewing them in sequence outside of Python.
I'm specifically trying to view the image files in sequence inside a for-loop
in Python. Assuming I have imported matplotlib correctly, and I have my own
functions 'new_image()' and 'new_rect()', here's some example code that fails
to work because of the blocking effect of the show() function's call to the
GUI mainloop:
for index in index_list:
img = new_image(index)
rect = new_rect(index)
plt.imshow(img)
plt.gca().add_patch(rect)
plt.show()
#I also tried pausing briefly and then closing, but this doesn't
#get executed due to the GUI mainloop from show()
time.sleep(0.25)
plt.close()
The above code works to show only the first image, but then the program just
hangs and waits for me to manually close the resultant figure window. Once I
do close it, the program then just hangs and doesn't re-plot with the new
image data. What should I be doing? Also note that I have tried replacing the
plt.show() command with a plt.draw() command, and then adding the plt.show()
outside of the for-loop. This doesn't display anything and just hangs.
Answer: Based on
<http://matplotlib.sourceforge.net/examples/animation/simple_anim_tkagg.html>:
import time
import numpy as np
import matplotlib
matplotlib.use('TkAgg') # do this before importing pylab
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
def animate():
tstart = time.time() # for profiling
data=np.random.randn(10,10)
im=plt.imshow(data)
for i in np.arange(1,200):
data=np.random.randn(10,10)
im.set_data(data)
fig.canvas.draw() # redraw the canvas
print 'FPS:' , 200/(time.time()-tstart)
win = fig.canvas.manager.window
fig.canvas.manager.window.after(100, animate)
plt.show()
* * *
`plt.imshow` can accept a float array, uint8 array, or a PIL image. So if you
have a directory of PNG files, you could open them as PIL images and animate
them like this:
import matplotlib
matplotlib.use('TkAgg') # do this before importing pylab
import matplotlib.pyplot as plt
import Image
import glob
fig = plt.figure()
ax = fig.add_subplot(111)
def animate():
filenames=sorted(glob.glob('*.png'))
im=plt.imshow(Image.open(filenames[0]))
for filename in filenames[1:]:
image=Image.open(filename)
im.set_data(image)
fig.canvas.draw()
win = fig.canvas.manager.window
fig.canvas.manager.window.after(100, animate)
plt.show()
|
Keep persistent variables in memory between runs of Python script
Question: Is there any way of keeping a result variable in memory so I don't have to
recalculate it each time I run the beginning of my script? I am doing a long
(5-10 sec) series of the exact operations on a data set (which I am reading
from disk) every time I run my script. This wouldn't be too much of a problem
since I'm pretty good at using the interactive editor to debug my code in
between runs; however sometimes the interactive capabilities just don't cut
it.
I know I could write my results to a file on disk, but I'd like to avoid doing
so if at all possible. This should be a solution which generates a variable
the first time I run the script, and keeps it in memory until the shell itself
is closed or until I explicitly tell it to fizzle out. Something like this:
# Check if variable already created this session
in_mem = var_in_memory() # Returns pointer to var, or False if not in memory yet
if not in_mem:
# Read data set from disk
with open('mydata', 'r') as in_handle:
mytext = in_handle.read()
# Extract relevant results from data set
mydata = parse_data(mytext)
result = initial_operations(mydata)
in_mem = store_persistent(result)
I've an inkling that the [shelve](http://docs.python.org/library/shelve.html)
module might be what I'm looking for here, but looks like in order to open a
shelve variable I would have to specify a file name for the persistent object,
and so I'm not sure if it's quite what I'm looking for.
Any tips on getting shelve to do what I want it to do? Any alternative ideas?
Answer: You can achieve something like this using the built-in `reload` function to re-execute your main script's code. You will need to write a wrapper script that
imports your main script, asks it for the variable it wants to cache, caches a
copy of that within the wrapper script's module scope, and then when you want
(when you hit ENTER on stdin or whatever), it calls `reload(yourscriptmodule)`
but this time passes it the cached object such that yourscript can bypass the
expensive computation. Here's a quick example.
**wrapper.py**
import sys
import mainscript
part1Cache = None
if __name__ == "__main__":
while True:
if not part1Cache:
part1Cache = mainscript.part1()
mainscript.part2(part1Cache)
print "Press enter to re-run the script, CTRL-C to exit"
sys.stdin.readline()
reload(mainscript)
**mainscript.py**
def part1():
print "part1 expensive computation running"
return "This was expensive to compute"
def part2(value):
print "part2 running with %s" % value
While `wrapper.py` is running, you can edit `mainscript.py`, add new code to
the `part2` function and be able to run your new code against the pre-computed
part1Cache.
|
How to change gif file to png file using python pil
Question: this is my code :
import Image,glob
files = glob.glob("/small/*.gif")
for imageFile in files:
print "Processing: " + imageFile
try:
im = Image.open(imageFile)
im.save( "/small_/", "png" )
except Exception as exc:
print "Error: " + str(exc)
but it shows an error:
File "f.py", line 13
im.save( "/small_/", "png" )
^
SyntaxError: invalid syntax
so what can I do?
Thanks
**updated:**
import Image,glob,os
files = glob.glob("small/*.gif")
for imageFile in files:
filepath,filename = os.path.split(imageFile)
filterame,exts = os.path.splitext(filename)
print "Processing: " + imageFile,filterame
im = Image.open(imageFile)
im.save( 'small_/'+filterame+'.png','PNG')
Answer: Try copying and pasting your code from here back into your editor; it works perfectly fine for me. You seem to have some non-printable characters in there or something similar.
Also, have a look at the [PIL
documentation](http://www.pythonware.com/library/pil/handbook/image.htm),
`save` needs a filename or fileobject, not a folder.
|
Why do I get an import error for multiprocessing when my code is called from a unittest? (PyCharm Python 3)
Question: In one of my modules I do the following import:
from multiprocessing import Pool
This module works fine when called normally, but when I use this from a
unittest, I get the following error:
Error
Traceback (most recent call last):
File "/share/work/peter/software/lib/python3.2/unittest/case.py", line 387, in _executeTestPart
function()
File "/home/peter/current/parallelize/src/parallelize/backend/tests.py", line 52, in test_submit_ok_job
backend = self._get_multi_processing_backend()
File "/home/peter/current/parallelize/src/parallelize/backend/tests.py", line 46, in _get_multi_processing_backend
from parallelize.backend.multiprocessing import MultiprocessingBackend
File "/home/peter/current/parallelize/src/parallelize/backend/multiprocessing.py", line 2, in <module>
from multiprocessing import Pool,cpu_count
File "/home/peter/current/parallelize/src/parallelize/backend/multiprocessing.py", line 2, in <module>
from multiprocessing import Pool,cpu_count
ImportError: cannot import name Pool
Why does this happen? I am using Python 3.2
_EDIT_ I now notice that this is only the case when the unittest is started from my PyCharm IDE. When run from the command line (python3 -m unittest ....) it works. Is this a bug in the IDE?
Answer: It may be because you have the `...src/parallelize/backend/` directory on your
Python path (perhaps it is the working directory), and Python is trying to
import `Pool` from your own `multiprocessing.py` there, not from the standard
library. I don’t know if it’s possible with PyCharm, but try changing the
working directory to something else.
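A quick way to confirm which `multiprocessing` is actually being picked up under the IDE's run configuration is to print the module's file path:
    import multiprocessing
    print(multiprocessing.__file__)
    # standard library: .../lib/python3.2/multiprocessing/__init__.py
    # shadowed:         .../parallelize/backend/multiprocessing.py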
|
Why do I see random read errors with Python BaseHTTPServer?
Question: I have Python code that calls external HTTP services. I want to test this code
by setting up mock HTTP servers that imitate those external services. I do
this by starting a `BaseHTTPServer` in a separate thread, and then calling
that server from the main thread. It looks like this:
import BaseHTTPServer, httplib, threading, time
class MockHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_POST(self):
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.end_headers()
self.wfile.write('{"result": "success"}')
class ServerThread(threading.Thread):
def run(self):
svr = BaseHTTPServer.HTTPServer(('127.0.0.1', 8540), MockHandler)
svr.handle_request()
ServerThread().start()
time.sleep(0.1) # Give the thread some time to get up
conn = httplib.HTTPConnection('127.0.0.1', 8540)
conn.request('POST', '/', 'foo=bar&baz=qux')
resp_body = conn.getresponse().read()
However, some of the requests fail in the `read()` call, with `socket.error:
[Errno 104] Connection reset by peer`. I can reproduce it, with varying
frequency, on several machines with Python 2.6, though not with 2.7.
But the most interesting thing is, if I don’t send the POST data (i.e. if I
omit the third argument to `conn.request()`), the error does not occur.
What could this be?
Alternatively, is there another quick and easy way to set up mock HTTP servers
in Python?
Answer: "...in a separate thread, and then calling that server from the main thread."
Don't use threads for this kind of thing.
Use processes. `subprocess.Popen` (and your operating system's normal
features) will do a much, much better job of assuring that this works
properly.
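A minimal sketch of that approach: the mock handler lives in its own script (here called `mock_server.py`, a made-up name) that does nothing but run `HTTPServer(('127.0.0.1', 8540), MockHandler).serve_forever()`, and the test launches and kills it:
    import subprocess, sys, time, httplib
    server = subprocess.Popen([sys.executable, 'mock_server.py'])
    time.sleep(0.5)                   # crude: give the child time to bind the socket
    try:
        conn = httplib.HTTPConnection('127.0.0.1', 8540)
        conn.request('POST', '/', 'foo=bar&baz=qux')
        print conn.getresponse().read()
    finally:
        server.terminate()            # Popen.terminate() is available from Python 2.6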
|
Localization for scripts
Question: I have a new piece of software I'm working on for my company that potentially
will need some localization options for our plant in Mexico. As far as the
application goes, C#/.NET has some great localization features that I will
utilize. The new program will be making use of scripts that will have messages
pop-up to the user, and those are probably the most important ones for
localization. They'll be written in [Iron]Python and we are currently, for
other scripts/software, maintaining two separate scripts for localization (and
some other small changes that could be implemented via logic). What's the best
way to localize a script so we can have just one script?
Answer: The simplest way is to remove all of the user-facing strings from your script,
and instead look up what string is to be shown based on the current language.
For example, instead of:
print 'Hello, World'
use
    def _(key):
        # currentlang is assumed to be set elsewhere, e.g. read from a config file
        if currentlang == 'es':
            return localized_text_es[key]
        else:
            return key

    localized_text_es = {'Hello, World': '¡Hola, mundo!'}

    print _('Hello, World')
You could fill `localized_text_es` at startup from `en.txt`/`es.txt` files, or
you could hardcode both languages into the scripts, or whatever other method
you choose.
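The standard-library route for the same idea is the `gettext` module; a small sketch, assuming compiled `.mo` catalogs live under a `locale/` directory (the `myscripts` domain name is made up):
    import gettext
    # looks for locale/es/LC_MESSAGES/myscripts.mo; falls back to the untranslated string if it is missing
    t = gettext.translation('myscripts', localedir='locale', languages=['es'], fallback=True)
    _ = t.ugettext          # use t.gettext on Python 3
    print _('Hello, World')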
|
Can't input unicode in python IDE (Mac OS X)
Question: I'm trying to collect some unicode raw_input in the default python IDE, and as
far as I'm aware, it should be as simple as:
>>> c = raw_input()
日本語
>>> print c
日本語
However, when I try to input the unicode characters, the computer beeps some
protestations and I end up with an empty string. (To do this, I click on the
IME switcher near the clock and select the appropriate input method [which in this case is Japanese input]). Outside of the python IDE, the input works fine,
I can input the characters and the system recognizes them as having been
input. In the IDE, I'll type some hiragana, and the drop-down kanji selection
window appears as usual, but when I select the appropriate representation and
hit enter, those beeps come and I wind up with nothing. I figure there's a
setting involved somewhere that I've missed.
versions are:
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
and
Python 2.5.4 (r254:67916, Jun 24 2010, 21:47:25)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
neither of which work. There's also this:
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> sys.stdin.encoding
'UTF-8'
>>> sys.stdout.encoding
'UTF-8'
>>> sys.getfilesystemencoding()
'utf-8'
but from what I've read, the defaultencoding is a mysterious beast. Changing
it doesn't actually fix anything anyway. That is,
>>> import sys
>>> sys.setdefaultencoding('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'setdefaultencoding'
>>> reload(sys)
<module 'sys' (built-in)>
>>> sys.setdefaultencoding('utf-8')
>>> # !!!
... c = raw_input()
no dice!
doesn't work. Just more beeping. I can't cut-and-paste Japanese text from
other applications, either.
Any ideas?
Answer: I've had the same problem. In my case it turned out to be a **libedit**
problem. I fixed it by installing readline -- which I had to do from source
(from here: <http://pypi.python.org/pypi/readline>) since using **pip** or
**easy_install** , for whatever reason, didn't actually replace readline.
If you have **ipython** installed, it will tell you on startup if you're using
**libedit**. And, if you have the same experience I did, you'll see the same
problems in both the python interpreter in Terminal and in ipython. Once I got
readline truly installed, and ipython no longer informed me that it was using
libedit, the problems with entering Unicode disappeared in both python and
ipython.
(Note: I also have bpython installed -- and, since it doesn't seem to use
readline or libedit, but rather its own line-editing routines, entering
Unicode in bpython _always_ worked.)
|
How to make Unix binary self-contained?
Question: I have a Linux binary, without sources, that works on one machine, and I'd
like to make a self-contained package that would run on a different machine of
the same architecture. What is a way of achieving this?
In my case, both machines have the same architecture, same Ubuntu kernel, but
target machine doesn't have `make` and has wrong version of files under `/lib`
and `/usr`
One idea I had was to use `chroot` and recreate a subset of the filesystem
that the binary uses, possibly using `strace` to figure out what it needs. Is
there a tool that does this already?
For posterity, here's how I figure out which files a process opens
#!/usr/bin/python
# source of trace_fileopen.py
# Runs command and prints all files that have been successfully opened with mode O_RDONLY
# example: trace_fileopen.py ls -l
import re, sys, subprocess, os
if __name__=='__main__':
strace_fn = '/tmp/strace.out'
strace_re = re.compile(r'([^(]+?)\((.*)\)\s*=\s*(\S+?)\s+(.*)$')
cmd = sys.argv[1]
    nowhere = open('/dev/null','w')
p = subprocess.Popen(['strace','-o', strace_fn]+sys.argv[1:], stdout=nowhere, stderr=nowhere)
sts = os.waitpid(p.pid, 0)[1]
output = []
for line in open(strace_fn):
# ignore lines like --- SIGCHLD (Child exited) @ 0 (0) ---
if not strace_re.match(line):
continue
(function,args,returnval,msg) = strace_re.findall(line)[0]
if function=='open' and returnval!='-1':
(fname,mode)=args.split(',',1)
if mode.strip()=='O_RDONLY':
if fname.startswith('"') and fname.endswith('"') and len(fname)>=2:
fname = fname[1:-1]
output.append(fname)
prev_line = ""
for line in sorted(output):
if line==prev_line:
continue
print line
prev_line = line
**Update** The problem with `LD_LIBRARY_PATH` solutions is that `/lib` is hardcoded into the interpreter and takes precedence over `LD_LIBRARY_PATH`, so native versions will get loaded first. The interpreter is hardcoded into the binary. One approach might be to patch the interpreter and run the binary as `patched_interpreter mycommandline`. The problem is that when `mycommandline` starts with `java`, this doesn't work, because Java sets up `LD_LIBRARY_PATH` and restarts itself, which falls back to the old interpreter. A solution that worked for me was to open the binary in a text editor, find the interpreter (`/lib/ld-linux-x86-64.so.2`), and replace it with a same-length path to the patched interpreter
Answer: As others have mentioned, static linking is one option. Except static linking
with glibc gets a little more broken with every release (sorry, no reference;
just my experience).
Your `chroot` idea is probably overkill.
The solution most commercial products use, as far as I can tell, is to make
their "application" a shell script that sets `LD_LIBRARY_PATH` and then runs
the actual executable. Something along these lines:
#!/bin/sh
here=`dirname "$0"`
export LD_LIBRARY_PATH="$here"/lib
exec "$here"/bin/my_app "$@"
Then you just dump a copy of all the relevant .so files under `lib/`, put your
executable under `bin/`, put the script in `.`, and ship the whole tree.
(To be production-worthy, properly prepend `"$here"/lib` to `LD_LIBRARY_PATH`
if it is non-empty, etc.)
[edit, to go with your update]
I think you may be confused about what is hard-coded and what is not. `ld-
linux-x86-64.so.2` is the dynamic linker itself; and you are correct that its
path is hard-coded into the ELF header. But the other libraries are not hard-
coded; they are searched for by the dynamic linker, which will honor
`LD_LIBRARY_PATH`.
If you really need a different ld-linux.so, instead of patching the ELF
header, simply run the dynamic linker itself:
/path/to/my-ld-linux.so my_program <args>
This will use your linker instead of the one listed in the ELF header.
Patching the executable itself is evil. Please consider the poor person who
has to maintain your stuff after you move on... _Nobody_ is going to expect
you to have hacked the ELF header by hand. _Anybody_ can read what a shell
script is doing.
Just my $0.02.
|
how to use python to produce a group of anagram with a given list
Question: for example: given the list
['car','bed','stop','pots','arc','tops','z','z','rac','deb']
then the function should produce
[['arc', 'car', 'rac'], ['bed', 'deb'], ['pots', 'stop', 'tops'], ['z', 'z']]
Answer: Seems to be taken from [this assignment](http://www.student.cs.uwaterloo.ca/~cs234/a5/prog.html), but the OP doesn't specify the "no builtin sort" constraint, so...
>>> from collections import defaultdict
>>> d=defaultdict(list)
>>> words = ['car','bed','stop','pots','arc','tops','z','z','rac','deb']
>>> for w in words:
... d[''.join(sorted(w))].append(w)
...
>>> d.values()
[['bed', 'deb'], ['car', 'arc', 'rac'], ['z', 'z'], ['stop', 'pots', 'tops']]
|
What's the common way to layout a Django app with Buildout/djangorecipe?
Question: I have a Django app that I've set up using Buildout laid out like so:
/workspace
/bin
/src
/myproject
settings.py
/myapp
views.py
...
bootstrap.py
buildout.cfg
setup.py
The issue is that I'd like both `myproject.settings` and `myapp` on the python
path. I need the `myproject.settings` on the path so djangorecipe can import
it. And I'd like `myapp` on the path so that I don't have to write `import
myproject.myapp` all the time.
For now I've got both `/workspace/src` and `/workspace/src/myproject` in the
Python path, but this feels like a hack and practically makes me worried if
there might be situations where `import some_module` might have confusing
resolution patterns because I have two directories that are parent-child to
each other.
So questions are:
Is there an accepted way to lay this out?
Is it actually bad to have a directory and one of its sub-directories in the
path?
Answer: There is no problem: on `import some_module` the importer will search each folder listed in `sys.path` for `some_module/__init__.py` or `some_module.py`. The same goes for `import myproject.some_module`: it will search for the `myproject` package, then try to find `some_module` inside it with the same algorithm.
I'm using the same project structure.
|
PyQt and QtDesigner
Question: I am trying to implement an LED in PyQt named disLDR1 by changing the background color of a widget. I am trying to use QPalette to change the BackgroundRole. But what is the equivalent of Qt::red?
Is this the correct way to set the background color or is there any other way?
#!/usr/bin/python -d
import sys
from PyQt4 import QtCore, QtGui
from main import Ui_Form
from PyQt4.QtGui import QPalette
class Top(QtGui.QMainWindow):
def __init__(self, parent = None):
QtGui.QWidget.__init__(self, parent)
self.ui = Ui_Form()
self.ui.setupUi(self)
for i in 1, 10:
self.setOff()
self.setOn()
def setOff(self):
self.pal = QPalette(self.ui.disLDR1.palette())
self.pal.setColor(self.ui.disLDR1.backgroundRole(), <<<RED COLOR>>>)
self.ui.disLDR1.setPalette(pal)
def setOn(self):
self.pal = QPalette(self.ui.disLDR1.palette())
self.pal.setColor(self.ui.disLDR1.backgroundRole(), <<<GREEN COLOR>>>)
self.ui.disLDR1.setPalette(pal)
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
myapp = Top()
myapp.show()
sys.exit(app.exec_())
EDIT: I have been trying to take help from [In Qt, how do I set the background
color of a widget like combobox or double spin
box?](http://stackoverflow.com/questions/177778/in-qt-how-do-i-set-the-
background-color-of-a-widget-like-combobox-or-double-spin) but don't know what
to substitute for QT::red
Answer: You can find the list of predefined Qt color objects at [this
link](http://www.riverbankcomputing.com/static/Docs/PyQt4/html/qt.html#GlobalColor-
enum). In this case you would just need to use `QtCore.Qt.red` and `QtCore.Qt.green`. You could also use the
[`QColor`](http://www.riverbankcomputing.com/static/Docs/PyQt4/html/qcolor.html)
class to generate arbitrary colors.
It's a matter of preference, but I personally think the easiest and most
powerful way to go would be to use a [style
sheet](http://doc.qt.nokia.com/latest/stylesheet.html).
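For the LED in the question the style-sheet route boils down to something like this (just a sketch, replacing the two methods in your class):
    def setOn(self):
        self.ui.disLDR1.setStyleSheet("background-color: green;")

    def setOff(self):
        self.ui.disLDR1.setStyleSheet("background-color: red;")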
|
error is raised when I try to remove elements (pydot objects) from a python list
Question: I am writing an algorithm to represent regression trees, using pydot module
(an interface to Graphviz's Dot language). In the algorithm, lists of Edges
and Nodes are made, and then they are represented - that is working fine.
But in some specific situations, I need to remove some of the Edges and Nodes,
and that's where I am getting stuck. Here is part of the code:
import pydot
graph = pydot.Dot(graph_type='graph')
link4 = pydot.Edge(node10, node21, label=etiquetas[3])
link5 = pydot.Edge(node11, node22, label=etiquetas[4])
lista_links = [link4, link5]
# if some conditions are verified, then:
lista_links.remove(link5)
for link in lista_links:
graph.add_edge(link)
graph.write_png('teste.png')
I was expecting this code to work without any problem, but I get an error,
saying:
AttributeError: 'NoneType' object has no attribute 'get_top_graph_type'
My only idea is, instead of removing the Nodes and Edges in some specific
situations, to change the code and add only the Nodes and Edges after I define
all the specific situations. But that would be a lot more work... (The code is
much bigger than what I've shown you, and I have several specific situations
that need to be considered).
I am curious why python behaves like this... Can somebody explain that to me,
or give me any idea on how to change this behavior?
Thanks in advance, Carla
Answer: On the surface, it seems that the problem is Edges or Nodes without a parent
graph. So the overall solution would be: do not let nodes and edges hang around
on their own; always attach them to the graph, and then remove them from the
graph as needed.
|
Fuzzy String Searching with Whoosh in Python
Question: I've built up a large database of banks in MongoDB. I can easily take this
information and create indexes with it in whoosh. For example I'd like to be
able to match the bank names 'Eagle Bank & Trust Co of Missouri' and 'Eagle
Bank and Trust Company of Missouri'. The following code works with simple
fuzzy searching, but cannot achieve a match on the above:
from whoosh.index import create_in
from whoosh.fields import *
schema = Schema(name=TEXT(stored=True))
ix = create_in("indexdir", schema)
writer = ix.writer()
test_items = [u"Eagle Bank and Trust Company of Missouri"]
for item in test_items:
    writer.add_document(name=item)
writer.commit()
from whoosh.qparser import QueryParser
from whoosh.query import FuzzyTerm
with ix.searcher() as s:
qp = QueryParser("name", schema=ix.schema, termclass=FuzzyTerm)
q = qp.parse(u"Eagle Bank & Trust Co of Missouri")
results = s.search(q)
print results
gives me:
<Top 0 Results for And([FuzzyTerm('name', u'eagle', boost=1.000000, minsimilarity=0.500000, prefixlength=1), FuzzyTerm('name', u'bank', boost=1.000000, minsimilarity=0.500000, prefixlength=1), FuzzyTerm('name', u'trust', boost=1.000000, minsimilarity=0.500000, prefixlength=1), FuzzyTerm('name', u'co', boost=1.000000, minsimilarity=0.500000, prefixlength=1), FuzzyTerm('name', u'missouri', boost=1.000000, minsimilarity=0.500000, prefixlength=1)]) runtime=0.00166392326355>
Is it possible to achieve what I want with Whoosh? If not what other python
based solutions do I have?
Answer: **You could** match `Co` with `Company` using fuzzy search in Whoosh, but **you
shouldn't**, because the edit distance between `Co` and `Company` is large.
`Co` is as similar to `Company` as `Be` is to `Beast` or `ny` is to
`Company`; you can imagine how noisy and how large the search results would be.
However, if you want to match `Compan` or `compani` or `Companee` to `Company`
you could do it by using a Personalized Class of `FuzzyTerm` with default
`maxdist` equal to 2 or more :
> **maxdist** – The maximum edit distance from the given text.
class MyFuzzyTerm(FuzzyTerm):
def __init__(self, fieldname, text, boost=1.0, maxdist=2, prefixlength=1, constantscore=True):
super(MyFuzzyTerm, self).__init__(fieldname, text, boost, maxdist, prefixlength, constantscore)
Then:
qp = QueryParser("name", schema=ix.schema, termclass=MyFuzzyTerm)
You could match `Co` with `Company` by setting `maxdist` to `5` but this as I
said give bad search results. I suggest to keep `maxdist` from `1` to `3`.
If you are looking to match a word's linguistic variations, you are better off using
[`whoosh.query.Variations`](https://whoosh.readthedocs.org/en/latest/api/query.html#whoosh.query.Variations).
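For example (a small sketch reusing the schema from the question), the parser can be told to expand each term into its variations instead of fuzzy-matching it:
from whoosh.qparser import QueryParser
from whoosh.query import Variations

qp = QueryParser("name", schema=ix.schema, termclass=Variations)
q = qp.parse(u"Eagle Bank & Trust Co of Missouri")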
**Note:** older Whoosh versions has `minsimilarity` instead of `maxdist`.
|
Problems loading pygtk on Ubuntu 11.04
Question: I'm trying to use pygtk in Python but when I try running my code I get this
error:
Traceback (most recent call last):
File "application.py", line 3, in <module>
pygtk.require(2.0)
File "/usr/lib/python2.7/dist-packages/pygtk.py", line 85, in require
"required version '%s' not found on system" % version
AssertionError: required version '2.0' not found on system
Here is the code I'm trying to run (it's basically the Hello World example
from the pygtk website):
#!/usr/bin/env python
import pygtk
pygtk.require(2.0)
import gtk
class Application():
def hello(self, widget, data=None):
print 'Hello World'
def delete_event(self, widget, event, data=None):
print 'delete even occurred'
return False
def destroy(self, widget, data=None):
gtk.main_quit()
def __init__(self):
self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
self.window.connect('delete_event', self.delete_event)
self.quitButton = Button(self, text='Quit', command=self.quit)
self.quitButton.grid()
self.window.set_border_width(10)
self.button = gtk.Button('Hello World')
self.button.connect('clicked', self.hello, None)
self.button.connect_object('clicked', gtk.Widget.destroy, self.window)
self.window.add(self.button)
self.button.show()
def main(self):
gtk.main()
def main():
app = Application()
app.main()
if __name__ == '__main__':
main()
Also, when I try running `pygtk-demo` everything works ok, even though it is
importing the library the same way that I am. Also it outputs `PyGTK Demo
(gtk: v2.24.4, pygtk: v2.22.0)` so you can see that I have a version that is
>2.0.
Answer: The 3rd line in your file should read:
pygtk.require('2.0')
Because `2.0` is a string in this case, not a float.
|
from reportlab.platypus import ListFlowable, ListItem not working
Question: I am a newbie to python. I have to create an ordered list in my pdf document
using **Reportlab**. I found these two classes **ListFlowable(), ListItem()**
in the [user-guide](http://www.reportlab.com/docs/reportlab-userguide.pdf) of
Reportlab to do the same. But the very first import statement for these
classes is not working.
> from reportlab.platypus import ListFlowable, ListItem
This statement gives me the following error:
ImportError: cannot import name ListFlowable
How can I use these classes? I am using python 2.6, reportlab 2.5.
Answer: In my install of ReportLab 2.5 this isn't available. I see it's there in the
documentation, but searching through the code there is no such thing as a
ListFlowable or ListItem. This might be something that's only available in the
closed-source portion of ReportLab and not the open source.
If you need to make lists, though, you can fairly easily get similar results
using iterator variables and paragraph styles. That's the way I've always done
it.
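For example, here is a minimal sketch of that approach (the item texts, spacing and output filename are just illustrative assumptions):
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet

styles = getSampleStyleSheet()
items = [u"First item", u"Second item", u"Third item"]

# number each paragraph with a counter variable to fake an ordered list
story = []
for i, text in enumerate(items, 1):
    story.append(Paragraph("%d. %s" % (i, text), styles["Normal"]))
    story.append(Spacer(0, 4))

SimpleDocTemplate("ordered_list.pdf").build(story)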
|
SQLAlchemy (ORM, declarative): How to build query from key/values in a dict?
Question: Using the [SQLAlchemy](http://www.sqlalchemy.org/) ORM (declarative form), how
do you programatically create a query from a set of conditions in a
dictionary?
I wish to search for those records in a users table that match some criteria
previously collected in a dict. I can not know in advance which fields will be
used, and must be able to handle that some fields are Integers, some are
Strings, that there can be a lot of different fields, etc.
Example:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
email = Column(String)
Two queries has been requested, resulting in the following dicts:
q1_dict = {'id' : 177}
q2_dict = {'name' : 'Johnny', 'email' : 'johnny@somewhere.com'}
Are there any simple/generic way I can create my queries from those two dicts,
simply relying on the fact that the keys match the attributes of the User
class, while handling types correctly, autoescaping unsafe values, etc?
I've spent several hours googling this, and browsing the SQLAlchemy
documentation, but can't seem to find any good answers/examples.
**Solution:**
So, after the help from you guys, the solution seems to be as simple as:
User.query.filter_by(**q1_dict)
User.query.filter_by(**q2_dict)
...to get to the two queries needed in the example.
I had already looked at the links you provided, dagoof, but I guess my
"python" just wasn't strong enough to get to the solution on my own. :)
Answer: Try the following, references here:
[Query](http://www.sqlalchemy.org/docs/orm/query.html#sqlalchemy.orm.query.Query),
[filter_by](http://www.sqlalchemy.org/docs/orm/query.html#sqlalchemy.orm.query.Query.filter_by)
session.query(User).filter_by(**q1_dict)
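If you ever need more than plain equality tests, the same dict can also be expanded into explicit `filter()` calls with `getattr` (a sketch, not part of the original answer):
query = session.query(User)
for attr, value in q2_dict.items():
    query = query.filter(getattr(User, attr) == value)
results = query.all()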
|
Python XMl Parser with BeautifulSoup. How do I remove tags?
Question: For a project I decided to make an app that helps people find friends on
Twitter.
I have been able to grab usernames from xml pages. So for example with my
current code I can get `<uri>http://twitter.com/username</uri>` from an XML
page, but I want to remove the `<uri>` and `</uri>` tags using [Beautiful
Soup](http://www.crummy.com/software/BeautifulSoup/).
Here is my current code:
import urllib
import BeautifulSoup
doc = urllib.urlopen("http://search.twitter.com/search.atom?q=travel").read()
soup = BeautifulStoneSoup(''.join(doc))
data = soup.findAll("uri")
Answer: Don't use BeautifulSoup to parse twitter, use their
[API](https://dev.twitter.com/docs/api) (also don't use BeautifulSoup, use
[lxml](http://lxml.de/)). To answer your question:
import urllib
from BeautifulSoup import BeautifulSoup
resp = urllib.urlopen("http://search.twitter.com/search.atom?q=travel")
soup = BeautifulSoup(resp.read())
for uri in soup.findAll('uri'):
uri.extract()
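If the goal is just the URL text rather than removing the elements from the tree, reading each tag's string is enough (an assumption about the intended output):
urls = [uri.string for uri in soup.findAll('uri')]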
|
MiddleStorm middleware with bottle
Question: How do you use [MiddleStorm](http://pypi.python.org/pypi/middlestorm)
middleware with [bottle](http://bottlepy.org/)? I followed [this
example](http://readthedocs.org/docs/bottle/en/latest/recipes.html), replacing
SessionMiddleware with MiddleStorm, but I can't get it to work.
from bottle import *
from storm.locals import *
from middlestorm import MiddleStorm
#other bottle code like this here...
@get('/')
def index():
return 'index'
db = create_database("mysql://user:pass@localhost/mydb")
myapp = MiddleStorm(app, db)
run(app=myapp, reloader=True, host='0.0.0.0', port=4321)
I get this error in console:
exceptions.TypeError: __call__() takes exactly 1 argument (3 given)
If I change the line with myapp to:
myapp = MiddleStorm(app(), db)
I get this error on the webpage:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/bottle-0.9.5-py2.7.egg/bottle.py", line 651, in _handle
return callback(**args)
File "/usr/local/lib/python2.7/dist-packages/bottle-0.9.5-py2.7.egg/bottle.py", line 1143, in wrapper
rv = callback(*a, **ka)
TypeError: decorator() takes exactly 1 argument (0 given)
edit: bottle, storm, middlestorm are installed
edit2: if I change the myapp line to myapp = MiddleStorm(default_app, db) I get
this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/usr/local/lib/python2.7/dist-packages/middlestorm-0.8.1-py2.7.egg/middlestorm.py", line 68, in __call__
return self._app(environ, start_response)
TypeError: __call__() takes exactly 1 argument (3 given)
homer - - [17/Jul/2011 16:28:42] "GET / HTTP/1.1" 500 59
edit3: with @zeekay code I still get this error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/bottle-0.9.5-py2.7.egg/bottle.py", line 651, in _handle
return callback(**args)
File "/usr/local/lib/python2.7/dist-packages/bottle-0.9.5-py2.7.egg/bottle.py", line 1143, in wrapper
rv = callback(*a, **ka)
TypeError: decorator() takes exactly 1 argument (0 given)
Answer: Actually `default_app` and `app` are synonymous. This should work:
myapp = MiddleStorm(app(), db)
Just testing briefly and it seems to work for me. Can you try testing this:
from bottle import *
from storm.locals import *
from middlestorm import MiddleStorm
@get('/')
def index():
return 'index'
db = create_database("sqlite://test.db")
myapp = MiddleStorm(app(), db)
run(app=myapp, reloader=True, host='0.0.0.0', port=4321)
You should be able to drop it in a file and just run.
|
Error when using ctypes module to acess a DLL written in C
Question: I have a DLL with one single function that takes five doubles and one int:
__declspec(dllexport) struct res ITERATE(double z_r,double z_i,double c_r, double c_i, int iterations, double limit)
It returns a custom struct called res which consists of a three-double array:
struct res {
double arr[3];
};
To return the values I do this:
struct res result; /*earlier in the code */
result.arr[0] = z_real; /*Just three random doubles*/
result.arr[1] = z_imag;
result.arr[2] = value;
return result;
I've compiled it with MinGW and I'm trying to use it in python to do something
like this:
from ctypes import *
z = [0.0,0.0]
c = [1.0,1.0]
M = 2.0
MiDLL = WinDLL("RECERCATOOLS.dll")
MiDLL.ITERATE.argtypes = [c_double, c_double, c_double, c_double,c_int,c_double]
MiDLL.ITERATE(z[0],z[1],c[0],c[1],100,M) #testing out before assigning the result to anything.
But whenever I try to call the function with those values, it will throw this at
me:
WindowsError: exception: access violation writing 0x00000000
I also don't know how to catch the custom structure I declared and convert
each of its elements into Python floating points. I've looked into [this
PyDocs link](http://docs.python.org/py3k/library/ctypes.html) but to no avail.
Thank you in advance.
**EDIT:**
This is the original (modified according to suggestions) header used
("mydll.h"):
#ifndef MYDLL_H
#define MYDLL_H
extern "C" __declspec(dllexport)
#define EXPORT_DLL __declspec(dllexport)
EXPORT_DLL void ITERATE(struct res*, double z_r,double z_i,double c_r, double c_i, int iterations, double limit)
#endif
And, in case something might be wrong with it, the code file (it's very short,
just one function):
#include <stdio.h>
#include <complex.h>
struct res {
double arr[3];
};
void __declspec(dllexport) ITERATE(struct res* result,double z_r,double z_i,double c_r, double c_i, int iterations, double limit)
{
/* The purpose of this function is, given two complex numbers,
an iteration number and a limit, apply a formula to these
two numbers for as many iterations as specified.
If at any iteration the result of the formula is bigger than
the limit, stop and return the number and the iteration it reached.
If after iterating they are still inside the limit, return the
number after all the iterations and the number of iterations
it has gone through.
Complex numbers are composed of a real part and an imaginary part,
and they must be returned separately.
*/
double complex z = z_r + z_i*I;
double complex c = c_r + c_i*I;
int actual_iter;
for (actual_iter = 1; actual_iter <= iterations; actual_iter++)
{
z = z*z + c;
if (cabs(z) > limit)
{
double value = actual_iter;
double z_real = creal(z);
double z_imag = cimag(z);
result.arr[0] = z_real;
result.arr[1] = z_imag;
result.arr[2] = value;
}
}
double value = iterations;
double z_real = creal(z);
double z_imag = cimag(z);
result.arr[0] = z_real;
result.arr[1] = z_imag;
result.arr[2] = value;
}
int main()
{
return 0;
}
Answer: There is a problem with returning structs like that. Not all compilers return
such structures the same way. I'd rather change the function declaration to
this:
void __declspec(dllexport) ITERATE(struct res* result, double z_r,double z_i,
double c_r, double c_i, int iterations, double limit);
That way the struct is in the user's memory, and there is no ambiguity on how
the struct will be returned.
Of course, as David said, you may have to use a different calling convention.
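On the Python side the pointer-based signature could then be used roughly like this (a sketch: it assumes the DLL is built with the cdecl convention, hence `CDLL` rather than `WinDLL`):
from ctypes import CDLL, Structure, POINTER, byref, c_double, c_int

class RES(Structure):
    _fields_ = [("arr", c_double * 3)]

MiDLL = CDLL("RECERCATOOLS.dll")
MiDLL.ITERATE.restype = None
MiDLL.ITERATE.argtypes = [POINTER(RES), c_double, c_double,
                          c_double, c_double, c_int, c_double]

result = RES()
MiDLL.ITERATE(byref(result), 0.0, 0.0, 1.0, 1.0, 100, 2.0)
print [result.arr[i] for i in range(3)]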
|
permutating on list
Question: > **Possible Duplicate:**
> [How to generate all permutations of a list in
> Python](http://stackoverflow.com/questions/104420/how-to-generate-all-
> permutations-of-a-list-in-python)
I'm writing a function, called permutate(lst, acc), that permutes the list
of integers, finding all its possible permutations. For example,
permutate([1,2,3],[0]) should return [123,132,213,231,312,321], but not
necesarily in that order. acc is an accumulator, that is carried on through
the recursion. What I have so far, is
def permutate(lst, acc):
if not lst:
return acc
else:
return reduce(lambda x, y: x + y,
map(lambda z: permutate(lst[:].remove(z),
map(lambda a: a * 10 + z, acc[:])), lst[:]))
Currently permutate([1,3,2,5],[0]) returns [1,3,2,5]. I'm guessing the
function is not making its recursive steps. I also tried to avoid mutating the
list by adding [:]. Any thoughts?
Answer: Use the Python library itertools:
from itertools import permutations
permutations([1,2,3])
It's included with Python, so you won't have to install anything.
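To get the joined-digit numbers the question asks for, the tuples can then be glued back together, for example:
from itertools import permutations
print [int(''.join(map(str, p))) for p in permutations([1, 2, 3])]
# [123, 132, 213, 231, 312, 321]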
|
Syntax Error When Running paster?
Question: No idea what happened, but all of a sudden, paster stopped working on my
server (working with virtualenv and pyramid). Tried reinstalling everything
but didn't work - same error again:
Traceback (most recent call last):
File "bin/paster", line 7, in ?
sys.exit(
File "/home/user/webapps/myapp/htdocs/lib/python2.4/site-packages/PasteScript-1.7.3-py2.4.egg/paste/script/command.py", line 83, in run
command = commands[command_name].load()
File "/home/user/webapps/myapp/htdocs/lib/python2.4/site-packages/setuptools-0.6c11-py2.4.egg/pkg_resources.py", line 1954, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/home/user/webapps/myapp/htdocs/lib/python2.4/site-packages/PasteScript-1.7.3-py2.4.egg/paste/script/serve.py", line 19, in ?
from paste.deploy import loadapp, loadserver
File "/home/user/webapps/myapp/htdocs/lib/python2.4/site-packages/PasteDeploy-1.5.0-py2.4.egg/paste/deploy/__init__.py", line 3, in ?
from paste.deploy.loadwsgi import *
File "/home/user/webapps/myapp/htdocs/lib/python2.4/site-packages/PasteDeploy-1.5.0-py2.4.egg/paste/deploy/loadwsgi.py", line 393
with open(filename) as f:
^
SyntaxError: invalid syntax
What is happening to me?
Answer: You are using Python 2.4. It does not have the `with` statement. It only came
in Python 2.5.
Update your version of Python.
|
Find pattern in xml string
Question: I have the following xml tag in my xml file:
`<pd:link scheme="http://www.w3.org/1999/xhtml" target="www.altruvest.org
<pd:unicode ch="2014"/> or <pd:unicode ch="2014"/>
www.Boardmatch.org">"www.altruvest.org <pd:unicode ch="2014"/> or <pd:unicode
ch="2014"/> www.Boardmatch.org</pd:link>)`
In the above tag, the pd:unicode tag appears inside the text value of the
target attribute. I want to create a regular expression pattern in Python to
find such tags where a tag occurs within text.
Can anyone please help me create a pattern for this?
Answer: Edited answer:
>>> s = r'"<pd:link scheme="http://www.w3.org/1999/xhtml" target="www.altruvest.org <pd:unicode ch="2014"/> or <pd:unicode ch="2014"/> www.Boardmatch.org">www.altruvest.org <pd:unicode ch="2014"/> or <pd:unicode ch="2014"/> www.Boardmatch.org</pd:link>"'
>>> import re
>>> r = re.search(r'=".*?(<pd:unicode ch="\d+"/>).*?"', s, re.DOTALL)
>>> r.groups()
('<pd:unicode ch="2014"/>',)
What the above does is to match the `pd:unicode` tags when they are preceded
by a `="` and followed by `"`. The `re.DOTALL` ignores newlines (treats them
as normal characters).
Bear in mind that what you are asking to do is _parsing_ XML, something for
which you should use an XML parser (see for example
[xml.etree](http://docs.python.org/library/xml.etree.elementtree.html) or a
more general discussion [here](http://wiki.python.org/moin/PythonXml)), and
not regular expressions. Accurately parsing XML by means of regex is actually
[not possible](http://stackoverflow.com/q/6751105/146792), so the above regex
is likely to generate false positives or to miss some true ones.
If you don't want to go with a full XML parser, you could consider something
like [pyparsing](http://pyparsing.wikispaces.com/) instead.
|
Link to Python with MinGW
Question: I want to create a cross-platform program that embeds the Python
interpreter, and compile it with MinGW. But the Python binary distribution
provides no libraries for MinGW to link with (only `python32.lib` for Visual
C++), and the Python Source package provides no support for compiling with
MinGW.
I tried linking to `python32.lib` in Mingw with `-lpython32` but it still
generates errors like:
main.cpp: undefined reference to `_imp__Py_Initialize'
main.cpp: undefined reference to `_imp__Py_Finalize'
How do I link Python in MinGW? I really don't want to switch to using Visual
C++.
Answer: With nm and dlltool from binutils, you should be able to rebuild the library
for gcc:
echo EXPORTS > python32.def
nm python32.lib | grep " T _" | sed "s/.* T _//" >> python32.def
dlltool --input-def python32.def --dllname python32 --output-lib libpython32.a
python_test.c:
#include "Python.h"
int main(int argc, char *argv[]) {
Py_Initialize();
PyRun_SimpleString("from time import time,ctime\n"
"print('Today is',ctime(time())\n)");
Py_Finalize();
return 0;
}
Compile:
gcc -Wall -IC:\Python32\include -LC:\Python32\libs -o python_test.exe python_test.c -lpython32
Test:
C:\python_test.exe
Today is Mon Jul 18 08:50:53 2011
**Edit** : If you'd prefer to skip building this yourself on x64, you can
download it for several versions from Christoph Gohlke's [Unofficial Windows
Binaries for Python Extension
Packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/#libpython).
**Edit** : Here's a Python version based on the existing function that's
distributed in Tools/msi/msi.py:
import subprocess
import warnings
import re
NM = 'x86_64-w64-mingw32-nm'
DLLTOOL = 'x86_64-w64-mingw32-dlltool'
EXPORT_PATTERN = r'^[_]{1,2}imp_(?P<export>.*) in python\d+\.dll'
def build_libpython(ver, nm=NM, dlltool=DLLTOOL,
export_pattern=EXPORT_PATTERN):
pylib = 'python%s.lib' % ver
pydef = 'python%s.def' % ver
pydll = 'python%s.dll' % ver
libpy = 'libpython%s.a' % ver
warning = '%s failed - ' + '%s not built' % libpy
match_export = re.compile(export_pattern).match
cmd_nm = [nm, '-Cs', pylib]
cmd_dlltool = [dlltool,
'--dllname', pydll,
'--def', pydef,
'--output-lib', libpy]
with open(pydef, 'w') as f:
f.write('LIBRARY %s\nEXPORTS\n' % pydll)
p_nm = subprocess.Popen(cmd_nm,
stdout=subprocess.PIPE,
universal_newlines=True)
for line in sorted(p_nm.stdout):
m = match_export(line)
if m:
f.write(m.group('export') + '\n')
if p_nm.wait() != 0:
warnings.warn(warning % nm)
return False
if subprocess.call(cmd_dlltool) != 0:
warnings.warn(warning % dlltool)
return False
return True
For example:
import os
for n in (27, 33, 35):
pylib = 'python%s.lib' % n
if os.path.exists(pylib):
build_libpython(n)
pydef = 'python%s.def' % n
lc_def = sum(1 for line in open(pydef))
libpy = 'libpython%s.a' % n
lc_lib = sum(1 for line in os.popen('ar -t %s' % libpy))
assert lc_def == lc_lib
|
Python random digit letter and additional generator
Question: I would like to generate a random number string of N length with this code:
import random
import string
N=512
print ''.join(random.choice(string.ascii_uppercase + string.digits + string.ascii_lowercase) for x in range(N))
What is missing is that I would also like to add special characters such as
"@,;:.§$%&/(!". And I would like to write the output to a .txt file with a
newline after, for example, every 10 or 15 characters.
Any help would be great.
Thanks for the time.
Answer: `string` module has a class for such special characters:
>>> print string.punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
To save generated string to file use something like:
N=512
chars_per_line = 15
s = ''.join(random.choice(string.digits + string.ascii_letters + string.punctuation) for i in xrange(N))
fh = open('filename', 'w')
try:
    fh.writelines([s[i:i+chars_per_line] + '\n'  # writelines does not add newlines itself
                   for i in range(0, N, chars_per_line)])
finally:
fh.close()
|
Help me with mercurial extension exportfiles
Question: I'm trying to use the
[exportfiles](http://bitbucket.org/albert_brand/hgexportfiles/) extension for
Mercurial but I'm getting this error:
c:\xampp\htdocs\dev>hg exportfiles -r 1 /tmp/export
** unknown exception encountered, please report by visiting
** http://mercurial.selenic.com/wiki/BugTracker
** Python 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit (Intel)]
** Mercurial Distributed SCM (version 1.9+10-e9264b45237d)
** Extensions loaded: exportfiles
Traceback (most recent call last):
File "hg", line 42, in <module>
File "mercurial\dispatch.pyo", line 27, in run
File "mercurial\dispatch.pyo", line 64, in dispatch
File "mercurial\dispatch.pyo", line 87, in _runcatch
File "mercurial\dispatch.pyo", line 675, in _dispatch
File "mercurial\dispatch.pyo", line 454, in runcommand
File "mercurial\dispatch.pyo", line 729, in _runcommand
File "mercurial\dispatch.pyo", line 683, in checkargs
File "mercurial\dispatch.pyo", line 672, in <lambda>
File "mercurial\util.pyo", line 385, in check
File "C:\Users\Sasa/exportfiles.py", line 39, in exportfiles
rng = cmdutil.revrange(repo, opts['rev'])
AttributeError: 'module' object has no attribute 'revrange'
I'm using TortoiseHg 2.1.1 for Windows 32-bit with Mercurial 1.9+10
Could you help me please to solve this and use exportfiles extension?
Thanks in advance!
Answer: [Mercurial's API changed after version
1.8.](http://mercurial.selenic.com/wiki/ApiChanges#Changes_after_1.8)
> Various functions have been moved from cmdutil.py to scmutil.py, including
> revrange/revsingle/revpair and match/matchall/matchfiles
Assuming
[this](https://bitbucket.org/albert_brand/hgexportfiles/src/ac8754d1d7ef/exportfiles.py)
is the source of the extension you're using, line 11 should be
from mercurial import util, scmutil
and line 39 should be
rng = scmutil.revrange(repo, opts['rev'])
|
find and update duplicates in a list of lists
Question: I am looking for a Pythonic way to solve the following problem. I have (what I
think is) a working solution but it has complicated flow controls and just
isn't "pretty". (Basically, a C++ solution)
I have a list of lists. Each list contains multiple items of varying types
(maybe 10 items per list). The overall order of the lists is not relevant, but
the order of the items in any individual list is important. (ie I can't change
it).
I am looking to "tag" duplicates by adding an extra field to the end of an
individual list. However, in this case a "duplicate" list is one that has
equal values in several preselected fields, but not all fields (there are no
"true" duplicates).
For example: if this were the original data from a 5 item list of lists and
duplicate is defined as having equal values in the first and third fields:
['apple', 'window', 'pear', 2, 1.55, 'banana']
['apple', 'orange', 'kiwi', 3, 1.80, 'banana']
['apple', 'envelope', 'star_fruit', 2, 1.55, 'banana']
['apple', 'orange', 'pear', 2, 0.80, 'coffee_cup']
['apple', 'orange', 'pear', 2, 3.80, 'coffee_cup']
The first, fourth and fifth lists would be duplicates and therefore all lists
should be updated as follows:
['apple', 'window', 'pear', 2, 1.55, 'banana', 1]
['apple', 'orange', 'kiwi', 3, 1.55, 'banana', 0]
['apple', 'envelope', 'star_fruit', 2, 1.55,'banana', 0]
['apple', 'orange', 'pear', 2, 3.80, 'coffee_cup', 2]
['apple', 'orange', 'pear', 2, 3.80, 'coffee_cup', 3]
Thanks for any help or direction. I think this may be getting beyond the
Learning Python book.
Answer:
from collections import defaultdict
lists = [['apple', 'window', 'pear', 2, 1.55, 'banana'],
['apple', 'orange', 'kiwi', 3, 1.80, 'banana'],
['apple', 'envelope', 'star_fruit', 2, 1.55, 'banana'],
['apple', 'orange', 'pear', 2, 0.80, 'coffee_cup'],
['apple', 'orange', 'pear', 2, 3.80, 'coffee_cup']]
dic = defaultdict(int)
fts = []
for lst in lists:
first_third = lst[0], lst[2]
dic[first_third] += 1
if dic[first_third] == 2: fts.append(first_third)
lst.append(dic[first_third])
for lst in lists:
if (lst[0], lst[2]) not in fts:
lst[-1] -= 1
print(lists)
Edit: Thanks utdemir. `first_third = lst[0], lst[2]` is correct, not
`first_third = lst[0] + lst[2]`
Edit2: Changed variable names for clarity.
Edit3: Changed to reflect what the original poster really wanted, and his
updated list. Not pretty any more, desired changes just tacked on.
|
How do python submodules share a scarce resource between them?
Question: I am breaking up a large monolithic python file into six separate submodules.
Originally in onebigfile.py, I had
conn = MySqldb.connect()
c = conn.cursor()
and then a function would use it as so:
def getFromDB():
c.execute(sql)
Now, I have restructured my module to
NewModule/
__init__.py
users.py
accounts.py
sixmoreofthese.py
What I'm puzzling over is what to do with semi-scarce resources like my MySQL
connection.
Is there a way to access a parent namespace, e.g. `conn.` and `c.` could be
put in `__init__.py` ? I don't want to instantiate a whole bunch of
connections to MySQL. Just dumping them there and calling them as if they're
part of the global namespace doesn't work.. That is:
__init__.py:
conn = MySqldb.connect()
c = conn.cursor()
> import NewModule
> NewModule.users.login('a','b')
--- login function calls the Mysql c. from the global namespace and can't find it.
To anticipate one suggestion: it makes sense to split these files -- there's
roughly 50-75k worth of python, and a group of people that need to work with
the code, plus there are pretty clear conceptual groupings of functionality.
Answer: You can certainly share the connection object between python modules, and your
idea to have the connection be a module-scoped variable will accomplish this
nicely (with a few exceptions, Python modules are singletons, so even if they
are imported multiple times from different files, they are only loaded once).
You probably do not want to share a single cursor object between modules,
especially if you are using a multi-threaded environment (e.g. running a web
application in mod_wsgi or similar).
One suggestion might be to move the connection into a database-specific
module, like `db.py`, and initialize/access it with a `get_connection()`
method. This will allow your application to bootstrap and load any
configuration it needs to connect (username, password, hostname) gracefully.
This might be as simple as:
# db.py
connection = None
def get_connection():
global connection
if not connection:
connection = MySqldb.connect() # possibly with configuration vars passed in
return connection
Your other code which uses the databse could look like:
# other_module.py
import db
curs = db.get_connection().cursor()
# do stuff
curs.close()
|
Associative Array does not work with eval() JavaScript
Question: I am generating a data expressed as a Python dictionary which is dumped using
simplejson via url which is in this format.
{"2": "London", "3": "Tokyo", "4": "Sydney"}
I am using `$.get` and storing into a variable data. However `eval(data)` does
not generate an Associative Array. Actually throws up an error. What is the
problem? What is the solution?
Edit: I have shared the code <http://dpaste.com/570901/>
Answer: We need to see more code...
var x = '{"2": "London", "3": "Tokyo", "4": "Sydney"}';
eval('var y = ' + x);
// or
var y = eval('(' + x + ')');
console.log(y);
console.log(y["2"]);
The above works just fine. What exactly are you doing/not doing?
PS: You shouldn't use `eval` for this regardless, but it's important to know
how it works.
|
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
Question: So I installed the Bitnami Django stack, hoping for the proclaimed 'ready-to-run'
versions of Python and MySQL. However, I can't get Python to syncdb: "Error
loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in
Windows to make mysql and Python work together? Is this not true?
I don't want to have to deal with installing mysql-python components as that
can be frustrating to get working alone as I have tried before.
Answer: You'll need to install MySQL for Python, as Django needs this to do the
connecting. Once you have the package installed you shouldn't need to
configure it, though, as Django just needs to import from it.
Edit: from your comments there is a setuptools bundled, but it has been
replaced by the package distribute; install this Python package and you should
have access to easy_install, which makes it really easy to get new packages.
Assuming you've added PYTHONPATH/scripts to your environment variables, you
can call easy_install mysql_python
|
JSON to python dictionary: Printing values
Question: Noob here. I have a large number of json files, each is a series of blog posts
in a different language. The key-value pairs are meta data about the posts,
e.g. "{'author':'John Smith', 'translator':'Jane Doe'}. What I want to do is
convert it to a python dictionary, then extract the values so that I have a
list of all the authors and translators across all the posts.
for lang in languages:
f = 'posts-' + lang + '.json'
file = codecs.open(f, 'rt', 'utf-8')
line = string.strip(file.next())
postAuthor[lang] = []
postTranslator[lang]=[]
while (line):
data = json.loads(line)
print data['author']
print data['translator']
When I tried this method, I keep getting a key error for translator and I'm
not sure why. I've never worked with the json module before so I tried a more
complex method to see what happened:
postAuthor[lang].append(data['author'])
for translator in data.keys():
if not data.has_key('translator'):
postTranslator[lang] = ""
postTranslator[lang] = data['translator']
It keeps returning an error that strings do not have an append function. This
seems like a simple task and I'm not sure what I'm doing wrong.
Answer: See if this works for you:
import json
# you have lots of "posts", so let's assume
# you've stored them in some list. We'll use
# the example text you gave as one of the entries
# in said list
posts = ["{'author':'John Smith', 'translator':'Jane Doe'}"]
# strictly speaking, the single-quotes in your example isn't
# valid json, so you'll want to switch the single-quotes
# out to double-quotes, you can verify this with something
# like http://jsonlint.com/
# luckily, you can easily swap out all the quotes programmatically
# so let's loop through the posts, and store the authors and translators
# in two lists
authors = []
translators = []
for post in posts:
double_quotes_post = post.replace("'", '"')
json_data = json.loads(double_quotes_post)
author = json_data.get('author', None)
translator = json_data.get('translator', None)
if author: authors.append(author)
if translator: translators.append(translator)
# and there you have it, a list of authors and translators
|
Display a transparent .png in wxpython
Question: I'm using Python 2.7. I need to display a .png image file in wxpython, such
that the transparency is preserved, and you can still see the controls behind
the transparent part of the image. This needs to work in Windows, Mac, AND
Linux.
Answer: I just wanted to add how to draw a png with transparency normally, for those
who google and come across this (as I did), so they don't end up thinking it's
not possible because of the accepted answer (as I did).
import wx
dc = wx.PaintDC(self)
self.pngimage = wx.Bitmap('image.png', wx.BITMAP_TYPE_PNG)
dc.DrawBitmap(self.pngimage, x, y)
this is what I do, and all the transparencies are displayed perfectly. I'm
using wxpython 2.9.4.0
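For completeness, here is a self-contained sketch of where that drawing code usually lives (the file name `image.png` is just a placeholder); the `wx.PaintDC` has to be created inside an `EVT_PAINT` handler:
import wx

class PngPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.png = wx.Bitmap('image.png', wx.BITMAP_TYPE_PNG)
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        # the PNG's alpha channel is honoured, so whatever is behind
        # the transparent areas stays visible
        dc = wx.PaintDC(self)
        dc.DrawBitmap(self.png, 0, 0, True)

app = wx.App(False)
frame = wx.Frame(None, title='transparent png')
PngPanel(frame)
frame.Show()
app.MainLoop()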
|
Python, import a module from a parent directory
Question: I want to know how to import a file that is one level up. I read this on Google:
>>> sys.path.append("../")
But I don't like it and I hope there is a better solution.
Answer: Another option, if what you are writing is part of a package, is to use
relative imports like so:
from .. import foo
where `foo` is the name of the module you're trying to import.
Similarly, if you've got another module in the current directory, you can use:
from . import bar
Sadly (or perhaps not so sadly), though, this doesn't extend any further than
this. You can't go up to a grandparent, or higher.
**EDIT** :
As so graciously pointed out by JAB, my last comment there is not true:
> from ...sys import path
>
> Note that while that last case is legal, it is certainly discouraged
> ("insane" was the word Guido used).
I must have internalized Guido's description of it too much. ;^)
**EDIT** :
Okay, I just verified this in 2.7 - this apparently goes as deep as you like,
and is dependent on the number of `.`
from .... import greatgrandparent
works juuuust fine. I think I'm going to need [a
bucket](http://en.wikipedia.org/wiki/Mr_Creosote)
|
python utf-8 problem
Question: this is my script
# -*- coding: utf-8 -*-
from BeautifulSoup import BeautifulSoup
import urllib2
res = urllib2.urlopen('http://tazeh.net')
html = res.read()
soup = BeautifulSoup(''.join(html))
title = soup.findAll('title')
print title
When I run this script in the terminal I get garbled text like this:
$ python test.py
[<title>ŮžŘ§ŰŒÚŻŘ§Ů‡ ŘŽŘ¨ŘąŰŒ ŘŞŘŮ„ŰŒŮ„ŰŒ تازه</title>]
The title is in UTF-8 encoding and in the Persian language.
I'm new to Python; what's wrong?
Answer: If I add (as one of the comments suggested, though in a less useful place):
html = html[:10000].decode("utf-8")
(the slice is because decode failed at an offset further into the page)
before:
soup = BeautifulSoup(html)
it prints:
[<title>پایگاه خبری تحلیلی تازه</title>]
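A slightly cleaner variant (assuming BeautifulSoup 3) is to let the parser do the decoding itself instead of slicing and decoding by hand:
soup = BeautifulSoup(html, fromEncoding='utf-8')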
|
Python PIL Image.tostring()
Question: I'm new to Python and PIL. I am trying to follow code samples on how to load
an image into Python through PIL and then draw its pixels using OpenGL.
Here are some lines of the code:
from Image import *
im = open("gloves200.bmp")
pBits = im.convert('RGBA').tostring()
.....
glDrawPixels(200, 200, GL_RGBA, GL_UNSIGNED_BYTE, pBits)
This will draw a 200 x 200 patch of pixels on the canvas. However, it is not
the intended image-- it looks like it is drawing pixels from random memory.
The random memory hypothesis is supported by the fact that I get the same
pattern even when I attempt to draw entirely different images.Can someone help
me? I'm using Python 2.7 and the 2.7 version of pyopenGL and PIL on Windows
XP.

Answer: I think you were close. Try:
pBits = im.convert("RGBA").tostring("raw", "RGBA")
The image first has to be converted to RGBA mode in order for the RGBA rawmode
packer to be available (see
[Pack.c](https://bitbucket.org/effbot/pil-117/src/7493ffdf4aff/libImaging/Pack.c#cl-459)
in libimaging). You can check that `len(pBits) == im.size[0]*im.size[1]*4`,
which is 200x200x4 = 160,000 bytes for your gloves200 image.
|
piping postgres COPY in python with psycopg2
Question: I'm writing a script to do a copy of some data between two machines on the
same network using psycopg2. I'm replacing some old, ugly bash that does the
copy with
psql -c -h remote.host "COPY table TO STDOUT" | psql -c "COPY table FROM STDIN"
This seems like both the simplest and [most
efficient](http://www.depesz.com/index.php/2007/07/05/how-to-insert-data-to-
database-as-fast-as-possible/) way to do the copy. It's easy to replicate in
python with a stringIO or a temp-file, like so:
buf = StringIO()
from_curs = from_conn.cursor()
to_curs = to_conn.cursor()
from_curs.copy_expert("COPY table TO STDOUT", buf)
buf.seek(0, os.SEEK_SET)
to_curs.copy_expert("COPY table FROM STDIN", buf)
...but that involves saving all the data to disk/in memory.
Has anyone figured out a way to mimic the behavior of a Unix pipe in a copy
like this? I can't seem to find a unix-pipe object that doesn't involve POpen
- Maybe the best solution is to just use POpen and subprocess, after all.
Answer: You will have to put one of your calls in a separate thread. I just realized
you can use
[os.pipe()](http://docs.python.org/release/2.6/library/os.html#os.pipe), which
makes the rest quite straightforward:
#!/usr/bin/python
import psycopg2
import os
import threading
fromdb = psycopg2.connect("dbname=from_db")
todb = psycopg2.connect("dbname=to_db")
r_fd, w_fd = os.pipe()
def copy_from():
cur = todb.cursor()
cur.copy_from(os.fdopen(r_fd), 'table')
cur.close()
todb.commit()
to_thread = threading.Thread(target=copy_from)
to_thread.start()
cur = fromdb.cursor()
write_f = os.fdopen(w_fd, 'w')
cur.copy_to(write_f, 'table')
write_f.close() # or deadlock...
to_thread.join()
|
Start up script for node.js repl
Question: Is there a way to configure node.js's REPL? I want to require jquery and
underscore automatically whenever the repl starts. Is there a file (noderc?)
that node.js loads when it starts the repl?
The equivalent in Python is to edit `~/.ipython/ipy_user_conf.py` with:
import_mod('sys os datetime re itertools functools')
Answer: I don't know of any such configuration file, but if you want to have modules
`foo` and `bar` be available in a REPL, you can create a file `myrepl.js`
containing:
var myrepl = require("repl").start();
["foo", "bar"].forEach(function(modName){
myrepl.context[modName] = require(modName);
});
and when you execute it with `node myrepl.js` you get a REPL with those
modules available.
Armed with this knowledge you can put `#!/path/to/node` at the top and make it
executable directly, or you could modify your version of the repl.js module
(source available at <https://github.com/joyent/node/blob/master/lib/repl.js>
for inspection) or whatever :)
|
Is there a way to add multiple conditions in a for loop?
Question:
n=int(raw_input('enter the number of mcnuggets you want to buy : ')) #total number of mcnuggets you want to buy
for a in range(1,n) and b in range(1,n) and c in range(1,n) :
if (6*a+9*b+20*c==n):
print 'number of packs of 6 are ',a
print 'number of packs of 9 are ',b
print 'number of packs of 20 are',c
I am new to programming and I am learning Python. The code above gives errors.
Any suggestions?
Answer: You should use nested loops:
for a in range(1, n):
for b in range(1, n):
for c in range(1, n):
if ...
Or even better:
import itertools
for a, b, c in itertools.product(range(1, n + 1), repeat=3):
if ...
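Putting it together with the original check, a runnable sketch could look like this (the ranges start at 0 here, on the assumption that packs of a given size may be absent from a solution):
import itertools

n = int(raw_input('enter the number of mcnuggets you want to buy : '))
for a, b, c in itertools.product(range(n + 1), repeat=3):
    if 6*a + 9*b + 20*c == n:
        print 'number of packs of 6 are ', a
        print 'number of packs of 9 are ', b
        print 'number of packs of 20 are', c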
|
Reuse the same template for different content types
Question: I'm creating quite a few Dexterity content types (thanks
[zopeskel.dexterity](http://pypi.python.org/pypi/zopeskel.dexterity) devs!!)
but even though I need them to be different content types (searches,
collections...), some of them will be rendered the same way.
So, is there any way to reuse the same template for different content types?
Ok, I made it work but I'm wondering if it's the correct approach:
from my.product.parent_type import IParentType, ParentType, TwoColumnsView
... code omitted ...
# Common folder for templates
grok.templatedir('parent_type_templates')
class SameTwoColumnsView(TwoColumnsView):
grok.context(CustomClass)
grok.require('zope2.View')
grok.template("twocolumnsview")
Any thought? **How do you reuse templates across content types?**
Answer: Create an interface for this:
from zope.interface import Interface
class ITwoColumnViewable(Interface):
"""Can be viewed in a 2-column layout"""
You then assign this interface to your various content types, and register the
view for that interface instead directly for a type:
class SameTwoColumnsView(TwoColumnsView):
grok.context(ITwoColumnViewable)
|
What is a stack imbalance?
Question: Having read this article [F# Versus Mathematics: Part One - Getting Started
with BLAS and LAPACK](http://www.codeproject.com/KB/net-
languages/FSharpvsmathematicspt01.aspx) I stumbled across the term `stack
imbalance` in the paragraph `A Warning, Perhaps an Omen`.
I
[googled](http://www.google.de/search?ie=UTF-8&q=what%20is%20a%20stack%20imbalance)
and searched on
[SO](http://stackoverflow.com/search?q=%22stack%20imbalance%22), but could
only find people struggling with stack imbalances and no generall
explanations.
**Bonus Question:** Does it only affect f# or is it a general problem in C,
C++, Python, Java, etc.?
p.s. please change the tags of the question if necessary
Answer: A stack imbalance occurs when the data structure used to keep track of called
functions, arguments, and return values becomes corrupted or misaligned.
Most times, the stack is a memory pointer that stores the address where
control will resume when the current function call exits back to the caller.
There are different variants on this, sometimes the arguments to a function
are also appended to the stack, as well as the return value. What is most
important here is that the caller and callee should agree upon how to restore
it back to the prior state when the callee exits. This agreement is frequently
known as the _Calling Convention_.
In .NET, stack imbalances are a rare to nonexistent problem in pure managed
code. However, this can be a frequent problem when calling unmanaged code, as
you will need to tell the compiler how the method should be called, which then
implies how the stack should be cleaned up per the calling convention.
On windows, there are a few standard calling conventions that cover the bulk
of invocation cases.
**stdcall** \- Callee will fix the stack upon exit.
**fastcall** \- Potentially no need to fix the stack aside from return
address, instead, CPU registers are used to pass arguments.
**cdecl** \- The caller will fix the stack after the called function returns.
A formal reference is available here: [Argument Passing and Naming Conventions
@ MSDN](http://msdn.microsoft.com/en-us/library/984x0h58.aspx)
This is also of interest: [X86 calling convention list @
Wikipedia](http://en.wikipedia.org/wiki/X86_calling_conventions)
Within a given development domain, this tends not to be a problem. Each
language generally has a convention that is implicit for all method calls.
C/C++ uses the same convention for invocation of C/C++ calls, Python for other
Python calls, etc. When crossing domains, it can become a problem if one
domain doesn't use the same as another. Perhaps most common in windows, a
function exported with "C" style declarations (cdecl) may cause an unbalanced
stack (or worse) when called as though it had a stdcall convention, which is
the method recognized by WINAPI (windows system) calls.
|
Open Matlab file .mat with module PICKLE in Python
Question: I want to open a **Matlab** project with the module **Pickle** or **cPickle** in
the **Python language**, NOT with:
from scipy.io import matlab
mat=matlab.loadmat('file.mat')
Can I use `pickle.load` with a .mat file?
Answer: For some years now, Matlab has used HDF5 to store data. Python has support for
HDF5, via `PyTables`. No need to use Pickle. In fact, HDF5 may surprise you
for its speed relative to Pickle. A friend reported 2-10X speedups in
read/write for some very large datasets.
* * *
Update 1: A concise guide to loading the files, via HDF5, can be found [at
this page](http://www.scipy.org/Cookbook/Reading_mat_files).
In addition, several good references and resources may be found [at this
page](http://www.mathworks.com/matlabcentral/newsreader/view_thread/290116).
There's also a [PyMat project on Sourceforge](http://pymat.sourceforge.net/).
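For a `.mat` file saved with `-v7.3` (the HDF5-based format), a minimal PyTables sketch might look like this; the variable name `data` is purely hypothetical:
import tables

h5 = tables.openFile('file.mat')
data = h5.root.data.read()   # read the MATLAB variable named 'data'
h5.close()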
|
Installing oursql on Mac OS Lion succeeds but import in python fails. **Why?**
Question: I followed the installation instructions for installing oursql on Mac OS X.
Since
sudo pip install oursql
told me, that it couldn't find `mysql_config` I (located it with `locate
mysql_config` and) told it where to find it by
sudo MYSQL_CONFIG=/usr/local/mysql-5.5.14-osx10.6-x86_64/bin/mysql_config pip install oursql
I added the terminal output at the bottom for readability reasons. After that
I fired up python in terminal (On Mac OS Lion it is python 2.7 now,...) and
did
>>> import oursql
but python keeps telling me:
>>> import oursql
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: dlopen(/Library/Python/2.7/site-packages/oursql.so, 2): Library not loaded: libmysqlclient.18.dylib
Referenced from: /Library/Python/2.7/site-packages/oursql.so
Reason: image not found
What am I missing? Any suggestions?
* * *
## Terminal Output, of pip installation:
Downloading/unpacking oursql
Downloading oursql-0.9.2.tar.bz2 (113Kb): 113Kb downloaded
Running setup.py egg_info for package oursql
Installing collected packages: oursql
Running setup.py install for oursql
skipping 'oursqlx/oursql.c' Cython extension (up-to-date)
building 'oursql' extension
/usr/local/mysql-5.5.14-osx10.6-x86_64/bin/mysql_config --cflags
llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c oursqlx/oursql.c -o build/temp.macosx-10.7-intel-2.7/oursqlx/oursql.o -I/usr/local/mysql-5.5.14-osx10.6-x86_64/include -Os -g -fno-common -fno-strict-aliasing -arch x86_64
oursqlx/oursql.c: In function ‘__pyx_pf_6oursql_10Connection___cinit__’:
oursqlx/oursql.c:4630: warning: implicit conversion shortens 64-bit value into a 32-bit value
oursqlx/oursql.c: In function ‘__pyx_pf_6oursql_10_Statement_execute’:
oursqlx/oursql.c:10219: warning: implicit conversion shortens 64-bit value into a 32-bit value
oursqlx/oursql.c: In function ‘__pyx_pf_6oursql_16_DBAPITypeObject___richcmp__’:
oursqlx/oursql.c:17597: warning: implicit conversion shortens 64-bit value into a 32-bit value
llvm-gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c oursqlx/compat.c -o build/temp.macosx-10.7-intel-2.7/oursqlx/compat.o -I/usr/local/mysql-5.5.14-osx10.6-x86_64/include -Os -g -fno-common -fno-strict-aliasing -arch x86_64
/usr/local/mysql-5.5.14-osx10.6-x86_64/bin/mysql_config --libs
llvm-gcc-4.2 -Wl,-F. -bundle -undefined dynamic_lookup -Wl,-F. -arch i386 -arch x86_64 build/temp.macosx-10.7-intel-2.7/oursqlx/oursql.o build/temp.macosx-10.7-intel-2.7/oursqlx/compat.o -o build/lib.macosx-10.7-intel-2.7/oursql.so -L/usr/local/mysql-5.5.14-osx10.6-x86_64/lib -lmysqlclient -lpthread
ld: warning: ignoring file build/temp.macosx-10.7-intel-2.7/oursqlx/oursql.o, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: ignoring file build/temp.macosx-10.7-intel-2.7/oursqlx/compat.o, file was built for unsupported file format which is not the architecture being linked (i386)
ld: warning: ignoring file /usr/local/mysql-5.5.14-osx10.6-x86_64/lib/libmysqlclient.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
Successfully installed oursql
Cleaning up...
Answer: This did the job:
export DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH:/usr/local/mysql/lib/"
|
Opening a file in pwdir's folder in Python through Applescript
Question: I am opening a file that lives in the present working directory's temp folder in Python.
I tried:
pwdir=os.getcwd()
tempdir=pwdir+"/temp/test.txt"
f=open(tempdir,'r+')
When I print the path of tempdir, it shows up correctly and the contents of
the file are read as well.
When I try to combine this operation with an AppleScript that calls this
Python script, I get an error like this:
f=open(pwdir1,'r+')
IOError: [Errno 2] No such file or directory: '//temp/test.txt'" number 1
EDIT:
I am using Shell script from Applescript to call this pythonscript
do shell script "/Users/mymac/Documents/'Microsoft User Data'/test.py"
EDIT:
Python Code:
tempdir = os.path.join(os.getcwd(),'temp','htmlinput.html')
print tempdir
with open(tempdir) as f:
html=f.read()
Python output from terminal:(works perfectly fine)
/Users/mymac/Documents/Microsoft User Data/Outlook Script Menu Items/temp/htmlinput.html
I am also able to see the file contents.
Applescript Code:
do shell script "/Users/mymac/Documents/'Microsoft User Data'/'Outlook Script Menu Items'/new.py"
Applescript Error:
error "Microsoft Outlook got an error: Traceback (most recent call last):
File \"/Users/mymac/Documents/Microsoft User Data/Outlook Script Menu Items/new.py\", line 12, in <module>
with open(tempdir) as f:
IOError: [Errno 2] No such file or directory: '/temp/htmlinput.html'" number 1
Answer: I don't know Applescript -- or OS X in general. It looks like the script is
being run from the root folder, and os.getcwd() returns '/'. The directory of
the script itself is `sys.path[0]` or the dirname of the current module --
`dirname(__file__)` \-- if it's a single script instead of a package. Try one
of the following
import os, sys
tempdir = os.path.join(sys.path[0], 'temp', 'temp.txt')
or
import os
tempdir = os.path.join(os.path.dirname(__file__), 'temp', 'temp.txt')
|
Find out time it took for a python script to complete execution
Question: I have the following code in a python script:
def fun()
#Code here
fun()
I want to execute this script and also find out how much time it took to
execute, in minutes. How do I find out how much time it took for this script to
execute? An example would be really appreciated.
Thank you
Answer:
from datetime import datetime
startTime = datetime.now()
#do something
print datetime.now() - startTime
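If the result is wanted specifically in minutes, a small sketch with the `time` module does the same job:
import time

start = time.time()
fun()
print 'took %.2f minutes' % ((time.time() - start) / 60.0)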
|
Python: replace a string by a float in txt file
Question: Here is a problem.
I have a .py file and a .txt. To simplify, my .txt looks like:
@x@
In the .py I have
x=15
I would like to replace @x@ in the txt by the value saved in the py, ie my txt
should look like
15
I tried with this:
for i, line in enumerate(fileinput.input('mytxtfile.txt', inplace = 1)):
sys.stdout.write(line.replace('@x@', 'x'))
or with
for i, line in enumerate(fileinput.input('mytxtfile.txt', inplace = 1)):
sys.stdout.write(line.replace('@x@', 'str(x)'))
The problem is that the "replace" method seems to consider only strings and I
need to evaluate the value of the string. Any idea how to do it?
Thanks
Answer: The `string.Template` class is a good way to do this.
<http://docs.python.org/library/string.html#template-strings>
from string import Template
class MyTemplate( Template ):
delimiter= '@'
pattern= r"@(?P<escaped>@)|@(?P<named>[_a-z][_a-z0-9]*)@|@(?P<braced>[_a-z][_a-z0-9]*)@|@(?P<invalid>)"
Given this class definition, you can now do this
with open( 'a.txt', 'r' ) as source:
t = MyTemplate(source.read())
result= t.substitute( x=15 )
print result
This allows you to use any number of substitutions anywhere in the template of
almost any complexity. This handles a very, very large number of cases
gracefully.
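For the record, the immediate bug in the original attempt is that `'str(x)'` is passed as a literal string; passing the converted value itself also works for this simple one-variable case:
sys.stdout.write(line.replace('@x@', str(x)))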
|
Look for img and id tag, store URL in variable if both are true
Question: I have been playing around with some Python for a while now and am starting to get the hang of it.
I have already come up with a project, but I can't work out some things.
The purpose is to look inside a given tag, such as the img tag or the a tag.
If one is found, it also needs to look at an id attribute, always the same one.
If the img tag looks like `<img src="/overflow.png" id="true">` I want it to
be stored.
If the img tag looks like `<img src="/overflow.png" id="false">` I don't want
it stored.
Hope this is fairly easy to achieve, I just haven't found a solution yet. I
have looked up the function for HTMLParser, but it's more gibberish than sense
to me. Hope someone knows how to do this and can help me out. It will be much
appreciated!
Cheers,
ninjaboi21.
Answer: People generally use BeautifulSoup,
<http://www.crummy.com/software/BeautifulSoup/>, to do this sort of thing.
After installing:
from BeautifulSoup import BeautifulSoup
# if the file is on your computer use this
#file = open('/path/to/the/file')
# and if the file is on the internet use this
#import urllib
#file = urllib.urlopen('http://www.the.com/path/to/the/file')
html = file.read()
file.close()
soup = BeautifulSoup(html)
trueimages = [image for image in soup.findAll('img') if image['id'].lower() == 'true']
Edit: added how to get the file into the string.
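To keep just the URLs of the matching images, which seems to be the goal, the `src` attribute can then be pulled out of each match:
urls = [image['src'] for image in trueimages]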
|
using *args in struct.pack
Question: I just read about the *args and **kwargs notation in python and decided to use
it with my functions that use struct.pack as such:
def pack_floats(*args):
return struct.pack('%df' %len(args), args)
But of course, it doesn't work because args is a tuple. If I wanted to pack
three numbers, I would call pack as such
struct.pack('3f', 1, 2, 3)
Alternatively I could just run it through a loop and pack one number at a
time, but I don't know whether there is any performance difference between one
value at a time vs all values at a time.
Is there a way to write the pack_floats function without calling the pack
function inside a loop?
Answer:
>>> import struct
>>> def pack_floats(*args):
... return struct.pack('%df' %len(args), *args)
...
>>> pack_floats(0.1,1.2,2.3)
'\xcd\xcc\xcc=\x9a\x99\x99?33\x13@'
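As a quick usage note, the same length trick works in reverse when unpacking the bytes again (each float is 4 bytes):
import struct
packed = pack_floats(0.1, 1.2, 2.3)
print struct.unpack('%df' % (len(packed) // 4), packed)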
|
Python, check type of object after circular import
Question: Here are two files, foo.py and bar.py. bar.py has...
from foo import *
...at the top. bar.py uses types defined in foo.
When importing bar.py from foo I am having trouble determining the types of
objects. Looking at the example below why do the calls to isinstance return
False? How can I check if these types are the same?
Thanks,
~Eric
===== foo.py =====
#!/usr/bin/env python
class Spam(object):
def __init__(self, x):
self.x = x
def funcA(self):
print 'function a'
def __str__(self):
return 'Spam object %s' % repr(self.x)
class Eggs(object):
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def funcB(self):
print 'function b'
def __str__(self):
return "Eggs object (%s, %s, %s)" % (repr(self.x), repr(self.y), repr(self.z))
def main(fname):
if not fname.endswith('.py'):
raise Exception("Must be a .py file")
module = __import__(fname[:-3])
for item in module.DATA:
if isinstance(item, Spam):
item.funcA()
elif isinstance(item, Eggs):
item.funcB()
print item
if __name__ == '__main__':
import sys
for fname in sys.argv[1:]:
main(fname)
sys.exit(0)
===== bar.py =====
from foo import *
DATA=[
Spam("hi"),
Spam("there"),
Eggs(1, 2, 3),
]
Answer: With:
if __name__ == '__main__':
import sys
main('bar.py')
sys.exit(0)
I got :
Spam object 'hi'
Spam object 'there'
Eggs object (1, 2, 3)
**Edit:** move the `__main__` code and the main function to a different file,
import foo, and it will work:
#-- main.py --
import foo
def main(fname):
if not fname.endswith('.py'):
raise Exception("Must be a .py file")
module = __import__(fname[:-3])
for item in module.DATA:
if isinstance(item, foo.Spam):
item.funcA()
elif isinstance(item, foo.Eggs):
item.funcB()
print item
if __name__ == '__main__':
import sys
main('bar.py')
sys.exit(0)
|