Dataset schema (column, dtype, observed min/max):

Question (stringlengths): 25 to 7.47k
Q_Score (int64): 0 to 1.24k
Users Score (int64): -10 to 494
Score (float64): -1 to 1.2
Data Science and Machine Learning (int64): 0 to 1
is_accepted (bool): 2 classes
A_Id (int64): 39.3k to 72.5M
Web Development (int64): 0 to 1
ViewCount (int64): 15 to 1.37M
Available Count (int64): 1 to 9
System Administration and DevOps (int64): 0 to 1
Networking and APIs (int64): 0 to 1
Q_Id (int64): 39.1k to 48M
Answer (stringlengths): 16 to 5.07k
Database and SQL (int64): 1 to 1
GUI and Desktop Applications (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
Title (stringlengths): 15 to 148
AnswerCount (int64): 1 to 32
Tags (stringlengths): 6 to 90
Other (int64): 0 to 1
CreationDate (stringlengths): 23 to 23
The type of a field in a collection in my MongoDB database is a unicode string. This field currently does not have any data associated with it in any of the documents in the collection. I don't want the type to be string because I want to add subfields to it from my Python code using pymongo. The collection already has many records in it. So, is it possible to change the type of the field to something like a dictionary in Python for all the documents in the collection? Please help. Thank you.
2
1
0.099668
0
false
6,789,704
0
1,765
1
0
0
6,789,562
Sure, simply create a script that iterates over your current collection, reads the existing value and overwrites it with the new value (an embedded document in your case). You change the type of the field by simply setting a new value for that field, e.g. setting a string field to an integer field: db.test.update({field:"string"}, {$set:{field:23}})
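As a concrete illustration of the approach above, here is a minimal pymongo sketch; the database name "mydb", the collection "test" and the field name "meta" are placeholders, and it assumes a mongod running locally. It simply replaces every still-string value of the field with an empty sub-document so that dotted sub-fields can be set later.

```python
# Hedged sketch: overwrite an (empty) string field with an empty sub-document
# so that subsequent updates can set dotted sub-fields such as "meta.lang".
# "mydb", "test" and "meta" are placeholder names.
from pymongo import MongoClient

client = MongoClient()                      # assumes a local mongod
coll = client["mydb"]["test"]

coll.update_many(
    {"meta": {"$type": 2}},                 # BSON type 2 = string
    {"$set": {"meta": {}}},
)

# Later, sub-fields can be added directly:
# coll.update_one({"_id": some_id}, {"$set": {"meta.lang": "en"}})
```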
1
0
1
Changing the type of a field in a collection in a mongodb database
2
python,mongodb,pymongo
0
2011-07-22T11:50:00.000
I have a rather complex Excel 2010 file that I automate using Python and win32com. For this I run Windows in VirtualBox on an Ubuntu machine. However, that same Excel file solves/runs fine on Ubuntu Maverick directly using Wine 1.3. Any hope of automating Excel on Wine so I can drop the VM? Or is that just crazy talk (which I suspect).
2
3
1.2
0
true
6,847,960
0
2,617
1
1
0
6,847,684
You'd need a Windows version of Python, not a Linux version -- I'm saying you'd have to run Python under Wine as well. Have you tried with just a normal Windows install of Python on Wine? I don't see any reason why this wouldn't work. There are numerous pages in a Google search that show Windows Python (32-bit) working fine.
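For reference, the kind of automation involved would look like the sketch below; it assumes a Windows build of Python with the pywin32 package installed (run under Wine in this scenario), and the workbook path is a placeholder.

```python
# Hedged sketch of win32com Excel automation; requires a Windows Python with
# pywin32 installed (here presumed to be running under Wine). The file path
# is a placeholder.
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
book = excel.Workbooks.Open(r"C:\data\complex_model.xlsx")
book.RefreshAll()        # recalculate/refresh whatever the workbook defines
book.Save()
book.Close()
excel.Quit()
```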
1
0
0
automating excel with win32com on linux with wine
1
python,linux,excel,win32com,wine
0
2011-07-27T16:10:00.000
I have lots of data to operate on (write, sort, read). This data can potentially be larger than the main memory and doesn't need to be stored permanently. Is there any kind of library/database that can store this data for me in memory and that automagically falls back to disk if the system runs into an OOM situation? The API and storage type are unimportant as long as it can store basic Python types (str, int, list, date and ideally dict).
0
0
1.2
0
true
7,056,331
0
105
1
0
0
6,942,105
I will go for the in-memory solution and let the OS swap. I can still replace the storage component if this really becomes a problem. Thanks agf.
1
0
0
In memory database with fallback to disk on OOM
2
python
0
2011-08-04T13:16:00.000
I have a database and a CSV file that gets updated once a day. I managed to update my table1 from this file by creating a separate log file with a record of the last insert. Now, I have to create a new table, table2, where I keep calculations from table1. My issue is that those calculations are based on the 10, 20 and 90 previous rows from table1. The question is: how can I efficiently update table2 from the data of table1 on a daily basis? I don't want to redo the calculations every day from the beginning of the table since it will be very time-consuming for me. Thanks for your help!
0
0
0
0
false
15,492,291
0
778
1
0
0
6,945,953
The answer is "as well as one could possibly expect." Without seeing your tables, data, and queries, and the stats of your machine, it is hard to be too specific. However, in general, an update basically does three steps. This is a bit of an oversimplification, but it allows you to estimate performance. First, it selects the necessary data. Then it marks the rows that were updated as deleted, and then it inserts new rows with the new data into the table. In general, your limit is usually the data selection. As long as you can efficiently run the SELECT query to get the data you want, the update should perform relatively well.
1
0
0
Table updates using daily data from other tables Postgres/Python
1
python,postgresql
0
2011-08-04T17:33:00.000
I noticed that sqlite3 isn't really capable nor reliable when I use it inside a multiprocessing environment. Each process tries to write some data into the same database, so that a connection is used by multiple threads. I tried it with the check_same_thread=False option, but the number of insertions is pretty random: sometimes it includes everything, sometimes not. Should I parallel-process only parts of the function (fetching data from the web), stack their outputs into a list and put them into the table all together, or is there a reliable way to handle multiple connections with sqlite?
8
8
1
0
false
12,809,817
0
20,258
1
0
0
6,969,820
I've actually just been working on something very similar:
- multiple processes (for me, a processing pool of 4 to 32 workers);
- each process worker does some stuff that includes getting information from the web (a call to the Alchemy API in my case);
- each process opens its own sqlite3 connection, all to a single file, and each process adds one entry before getting the next task off the stack.
At first I thought I was seeing the same issue as you, then I traced it to overlapping and conflicting issues with retrieving the information from the web. Since I was right there, I did some torture testing on sqlite and multiprocessing and found I could run MANY process workers, all connecting and adding to the same sqlite file without coordination, and it was rock solid when I was just putting in test data. So now I'm looking at your phrase "(fetching data from the web)" - perhaps you could try replacing that data fetching with some dummy data to ensure that it is really the sqlite3 connection causing you problems; a sketch of such a test follows below. At least in my tested case (running right now in another window) I found that multiple processes were all able to add through their own connection without issues, but your description exactly matches the problem I'm having when two processes step on each other while going for the web API (a very odd error actually) and sometimes don't get the expected data, which of course leaves an empty slot in the database. My eventual solution was to detect this failure within each worker and retry the web API call when it happened (it could have been more elegant, but this was for a personal hack). My apologies if this doesn't apply to your case; without code it's hard to know what you're facing, but the description makes me wonder if you might widen your considerations.
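A minimal version of the dummy-data test suggested above might look like this; the database path, table layout and worker count are placeholders.

```python
# Hedged sketch: many worker processes, each with its own sqlite3 connection
# to the same file, inserting dummy rows instead of web-fetched data. If this
# is rock solid, the web-fetching step is the more likely culprit.
import sqlite3
from multiprocessing import Pool

DB = "shared.db"  # placeholder path

def worker(n):
    conn = sqlite3.connect(DB, timeout=30)      # wait patiently for locks
    with conn:                                  # commits (or rolls back) for us
        conn.execute("INSERT INTO results (worker, value) VALUES (?, ?)",
                     (n, n * n))
    conn.close()

if __name__ == "__main__":
    setup = sqlite3.connect(DB)
    setup.execute("CREATE TABLE IF NOT EXISTS results (worker INTEGER, value INTEGER)")
    setup.commit()
    setup.close()
    Pool(8).map(worker, range(1000))
    check = sqlite3.connect(DB)
    print(check.execute("SELECT COUNT(*) FROM results").fetchone()[0])
```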
1
0
1
SQLite3 and Multiprocessing
4
python,sqlite,multiprocessing
0
2011-08-07T00:05:00.000
I'm in the process of setting up a webserver from scratch, mainly for writing webapps with Python. On looking at alternatives to Apache+mod_wsgi, it appears that pypy plays very nicely indeed with pretty much everything I intend to use for my own apps. Not really having had a chance to play with PyPy properly, I feel this is a great opportunity to get to use it, since I don't need the server to be bulletproof. However, there are some PHP apps that I would like to run on the webserver for administrative purposes (PHPPgAdmin, for example). Is there an elegant solution that allows me to use PyPy within a PHP-compatible webserver like Apache? Or am I going to have to run CherryPy/Paste or one of the other WSGI servers, with Apache and mod_wsgi on a separate port to provide administrative services?
5
-1
-0.099668
0
false
8,920,308
1
815
1
0
0
6,976,578
I know that mod_wsgi doesn't work with mod_php. I strongly advise running the PHP and Python applications at the CGI level. PHP 5.x runs on CGI, and for Python there is flup, which makes it possible to run WSGI applications on CGI. Tamer
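As an illustration of the WSGI-over-CGI idea, here is a minimal sketch; the answer mentions flup, but the example below uses the standard library's wsgiref.handlers.CGIHandler, which serves the same purpose for low-traffic admin pages. The application shown is just a placeholder.

```python
#!/usr/bin/env python
# Hedged sketch: run a WSGI application as a plain CGI script (stdlib
# wsgiref shown here; flup offers equivalent CGI/FastCGI servers).
# "application" is a placeholder WSGI app.
from wsgiref.handlers import CGIHandler

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI running under CGI\n"]

if __name__ == "__main__":
    CGIHandler().run(application)
```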
1
0
0
PyPy + PHP on a single webserver
2
php,python,apache,wsgi,pypy
1
2011-08-07T23:41:00.000
I'm writing a Python script which uses a MySQL database, which is locally hosted. The program will be delivered as source code. As a result, the MySQL password will be visible to the naked eye. Is there a good way to protect this? The idea is to prevent some naughty people from looking at the source code, gaining direct access to MySQL, and doing something ... well, naughty.
17
16
1.2
0
true
6,981,725
0
14,190
2
0
0
6,981,064
Short answer: you can't. If the password is stored in the artifact that's shipped to the end user, you must consider it compromised! Even if the artifact is a compiled binary, there are always (more or less complicated) ways to get at the password. The only way to protect your resources is to expose only a limited API to the end user. Either build a programmatic API (REST, WS+SOAP, RMI, JavaEE+Servlets, ...) or only expose certain functionality in your DB via stored procedures (see below).

Some things first: the question here should not be how to hide the password, but how to secure the database. Remember that a password alone is often very weak protection and should not be considered the sole mechanism protecting the DB. Are you using SSL? No? Well, then even if you manage to hide the password in the application code, it's still easy to sniff it on the network! You have multiple options, all with varying degrees of security.

"Application role": create one database user for the application and apply authorization for this role. A very common setup is to only allow CRUD operations.
Pros: very easy to set up; prevents DROP queries (for example in SQL injections).
Cons: everybody seeing the password has access to all the data in the database, even data that is normally hidden in the application; if the password is compromised, the user can run UPDATE and DELETE queries without criteria (i.e. delete/update a whole table at once).

Atomic auth and authorization: create one database user per application/end user. This allows you to define atomic access rights, even on a per-column basis. For example: user X can only select columns bar and baz from table foo, and nothing else, while user Y can SELECT everything but make no updates, and user Z has full CRUD (select, insert, update, delete) access. Some databases allow you to reuse OS-level credentials, which makes authentication transparent to the user (they only need to log in to the workstation; that identity is then forwarded to the DB). This works most easily in a full MS stack (OS=Windows, Auth=ActiveDirectory, DB=MSSQL) but is, as far as I am aware, also possible to achieve in other DBs.
Pros: fairly easy to set up; a very atomic authorization scheme.
Cons: can be tedious to set up all the access rights in the DB; users with UPDATE and DELETE rights can still accidentally (or intentionally?) delete/update without criteria, so you risk losing all the data in a table.

Stored procedures with atomic auth and authorization: write no SQL queries in your application and run everything through stored procedures. Then create DB accounts for each user and assign privileges to the procedures only.
Pros: the most effective protection mechanism; stored procedures can force users to pass criteria to every query (including DELETE and UPDATE).
Cons: not sure if this works with MySQL (my knowledge in that area is flaky); a complex development cycle, since everything you want to do must first be defined in a stored procedure.

Final thoughts: you should never allow database administrative tasks to the application. Most of the time, the only operations an application needs are SELECT, INSERT, DELETE and UPDATE. If you follow this guideline, there is hardly any risk involved in users discovering the password, apart from the points mentioned above. In any case, keep backups. I assume you want to protect your database against accidental deletes or updates, but accidents happen... keep that in mind ;)
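To make the stored-procedure option above concrete, a hedged sketch with MySQLdb is shown below; the procedure name add_order, its arguments and the connection details are all hypothetical, and it assumes the application's DB account has been granted EXECUTE on that procedure and nothing else.

```python
# Hedged sketch of the "stored procedures only" option: the shipped account
# can only EXECUTE a few procedures, so the embedded password exposes little.
# "add_order" and all connection parameters are hypothetical placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app_user",
                       passwd="limited-password", db="shop")
cur = conn.cursor()
cur.callproc("add_order", (42, "widget", 3))   # DB-API callproc
conn.commit()
cur.close()
conn.close()
```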
1
0
0
Safeguarding MySQL password when developing in Python?
4
python,mysql
0
2011-08-08T10:52:00.000
I'm writing a Python script which uses a MySQL database, which is locally hosted. The program will be delivered as source code. As a result, the MySQL password will be visible to the naked eye. Is there a good way to protect this? The idea is to prevent some naughty people from looking at the source code, gaining direct access to MySQL, and doing something ... well, naughty.
17
-5
-1
0
false
6,981,128
0
14,190
2
0
0
6,981,064
Either use a simple password like root, or else don't use a password at all.
1
0
0
Safeguarding MySQL password when developing in Python?
4
python,mysql
0
2011-08-08T10:52:00.000
I am trying to find a solution for a problem I am working on. I have a Python program which is using a custom-built sqlite3 install (which allows > 10 simultaneous connections) and in addition requires the use of Tix (which does not come as a standard install with the Python package for the group I am distributing to). I want to know if there is a way to tell distutils to use this particular sqlite3 build and include this third-party install of Tix, such that I can distribute the file as an RPM and not require the end user to install Tix or modify their sqlite3 install... Any help is greatly appreciated!
2
3
1.2
0
true
6,988,101
0
115
1
0
0
6,986,925
One possible solution: Create a custom package for that program containing the custom sqlite3/etc. stuff and use relative imports to refer to those custom subpackages from a main module in your package, which you'd hook into with a simple importing script that would execute a your_package.run() function or something. You'd then use distutils to install your package in site-packages or whatever.
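A bare-bones sketch of the packaging side of that suggestion follows; the package and script names are placeholders, and the custom sqlite3/Tix code is assumed to live in subpackages of the project. Running python setup.py bdist_rpm on such a layout is the usual route to an RPM with distutils.

```python
# Hedged sketch of a setup.py for the layout described above; all names are
# placeholders. "bin/myapp" would be a tiny launcher that imports the package
# and calls myapp.run().
from distutils.core import setup

setup(
    name="myapp",
    version="0.1",
    packages=["myapp", "myapp.custom_sqlite3", "myapp.custom_tix"],
    scripts=["bin/myapp"],
)
```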
1
0
1
Packaging a Python Program with custom built libraries
1
python,distutils
0
2011-08-08T18:40:00.000
I'm looking for a way to debug queries as they are executed and I was wondering if there is a way to have MySQLdb print out the actual query that it runs, after it has finished inserting the parameters and all that? From the documentation, it seems as if there is supposed to be a Cursor.info() call that will give information about the last query run, but this does not exist on my version (1.2.2). This seems like an obvious question, but for all my searching I haven't been able to find the answer. Thanks in advance.
82
126
1.2
0
true
7,190,914
0
67,798
1
0
0
7,071,166
We found an attribute on the cursor object called cursor._last_executed that holds the last query string to run, even when an exception occurs. This was easier and better for us in production than using profiling all the time or MySQL query logging, as both of those have a performance impact and involve more code or correlating separate log files, etc. I hate to answer my own question, but this is working better for us.
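For illustration, a small sketch of the pattern is below; note that _last_executed is a private MySQLdb implementation detail (not part of DB-API), so treat it purely as a debugging aid. The connection parameters and query are placeholders.

```python
# Hedged sketch: print the fully interpolated statement when a query fails.
# cursor._last_executed is a private MySQLdb attribute; connection details
# and the query are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
cur = conn.cursor()
try:
    cur.execute("SELECT * FROM users WHERE id = %s", (42,))
except MySQLdb.Error:
    print(cur._last_executed)      # the exact SQL string that was sent
    raise
```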
1
0
0
Print the actual query MySQLdb runs?
10
python,mysql,mysql-python
0
2011-08-15T21:43:00.000
Is this possible? Generating an Excel combobox in a cell using xlwt or a similar module? When I load the xls using xlrd, then copy and save it using xlwt, the combobox from the original xls is lost.
2
1
0.197375
0
false
7,266,184
0
597
1
0
0
7,094,771
No, it's not possible. xlrd doesn't pick up the combo box and suchlike.
1
0
0
Excel Combobox in Python xlwt module
1
python,excel,combobox,xlwt,xlrd
0
2011-08-17T14:45:00.000
I'm implementing a voting system for a relatively large website and I'm wondering where I should store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handling lots and lots of simple queries. My best option so far is to use memcached as it seems perfect for this task (very fast and key/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values. Is there something that is specifically designed for this task, preferably with a Python back end?
3
2
1.2
0
true
7,112,410
0
145
4
0
0
7,112,347
Can you accept some degree of vote loss? If so, you can use a hybrid solution: every 100 votes (or 10, or whatever), update the SQL database with the current memcached value. You can also have a periodic script scan and update if required.
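A rough sketch of that hybrid scheme is below, using the python-memcached client and MySQLdb; the key format, table layout and the flush threshold of 100 are placeholders.

```python
# Hedged sketch of the hybrid counter: increment in memcached on every vote
# and flush the running total to MySQL every 100th vote. Key names, schema
# and connection details are placeholders.
import memcache
import MySQLdb

mc = memcache.Client(["127.0.0.1:11211"])
db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="site")

def record_vote(item_id):
    key = "votes:%d" % item_id
    mc.add(key, 0)                 # no-op if the counter already exists
    count = mc.incr(key)
    if count % 100 == 0:           # accept losing at most ~100 votes on crash
        cur = db.cursor()
        cur.execute("UPDATE items SET votes = %s WHERE id = %s",
                    (count, item_id))
        db.commit()
        cur.close()
```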
1
0
0
Best way of storing incremental numbers?
6
python,memcached,voting
0
2011-08-18T18:33:00.000
I'm implementing a voting system for a relatively large website and I'm wondering where I should store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handling lots and lots of simple queries. My best option so far is to use memcached as it seems perfect for this task (very fast and key/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values. Is there something that is specifically designed for this task, preferably with a Python back end?
3
0
0
0
false
7,112,511
0
145
4
0
0
7,112,347
MongoDB can work well, since it can be faster. Google App Engine was also designed to scale.
1
0
0
Best way of storing incremental numbers?
6
python,memcached,voting
0
2011-08-18T18:33:00.000
I'm implementing a voting system for a relatively large website and I'm wondering where I should store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handling lots and lots of simple queries. My best option so far is to use memcached as it seems perfect for this task (very fast and key/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values. Is there something that is specifically designed for this task, preferably with a Python back end?
3
2
0.066568
0
false
7,112,669
0
145
4
0
0
7,112,347
"MySQL isn't very good at handling lots and lots of simple queries" - you may have something drastically misconfigured in your MySQL server. MySQL should easily be able to handle 4,000 queries per minute. There are benchmarks of MySQL handling over 25k INSERTs per second.
1
0
0
Best way of storing incremental numbers?
6
python,memcached,voting
0
2011-08-18T18:33:00.000
I'm implementing a voting system for a relatively large website and I'm wondering where I should store the vote count. The main problem is that storing them in the main database would put a lot of strain on it, as MySQL isn't very good at handling lots and lots of simple queries. My best option so far is to use memcached as it seems perfect for this task (very fast and key/value oriented). The only problem with this solution is that memcached is non-persistent and there is no easy way of saving these values. Is there something that is specifically designed for this task, preferably with a Python back end?
3
0
0
0
false
7,116,659
0
145
4
0
0
7,112,347
If you like memcached but don't like the fact that it doesn't persist data then you should consider using Membase. Membase is basically memcached with sqlite as the persistence layer. It is very easy to set up and supports the memcached protocol so if you already have memcached set up you can use Membase as a drop in replacement.
1
0
0
Best way of storing incremental numbers?
6
python,memcached,voting
0
2011-08-18T18:33:00.000
I looked through several SO questions on how to pickle a Python object and store it in a database. The information I collected is: import pickle or import cPickle (import the latter if performance is an issue). Assume dict is a Python dictionary (or whatever Python object): pickled = pickle.dumps(dict). Store pickled in a MySQL BLOB column using whatever module you use to communicate with the database. Get it out again, and use pickle.loads(pickled) to restore the Python dictionary. I just want to make sure I understood this right. Did I miss something critical? Are there side effects? Is it really that easy? Background info: the only thing I want to do is store Google geocoder responses, which are nested Python dictionaries in my case. I am only using a small part of the response object and I don't know if I will ever need more of it later on. That's why I thought of storing the response, to save myself repeating a few million queries.
7
2
1.2
0
true
7,117,674
0
5,063
1
0
0
7,117,525
It's really that easy... so long as you don't need your DB to know anything about the dictionary. If you need any sort of structured data access to the contents of the dictionary, then you're going to have to get more involved. Another gotcha might be what you intend to put in the dict. Python's pickle serialization is quite intelligent and can handle most cases without any need for adding custom support. However, when it doesn't work, it can be very difficult to understand what's gone wrong. So if you can, restrict the contents of the dict to Python's built-in types. If you start adding instances of custom classes, keep them to simple custom classes that don't do any funny stuff with attribute storage or access. And beware of adding instances of classes or types from add-ons. In general, if you start running into hard-to-understand problems with the pickling or unpickling, look at the non-built-in types in the dict.
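For completeness, the round trip described in the question and qualified in the answer above might look like this with MySQLdb; the table and column names are placeholders, and the payload column is assumed to be a BLOB.

```python
# Hedged sketch of pickling a dict into a MySQL BLOB column and reading it
# back; table/column names and connection details are placeholders.
import pickle
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="geo")
cur = conn.cursor()

response = {"status": "OK", "results": [{"lat": 52.52, "lng": 13.405}]}
blob = pickle.dumps(response, pickle.HIGHEST_PROTOCOL)

cur.execute("INSERT INTO geocache (query, payload) VALUES (%s, %s)",
            ("Berlin", MySQLdb.Binary(blob)))
conn.commit()

cur.execute("SELECT payload FROM geocache WHERE query = %s", ("Berlin",))
restored = pickle.loads(cur.fetchone()[0])   # the original dict again
```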
1
0
1
How to Pickle a python dictionary into MySQL?
3
python,mysql,pickle
0
2011-08-19T05:59:00.000
I'm trying to get a django site deployed from a repository. I was almost there, and then changed something (I'm not sure what!!) and was back to square one. Now I'm trying to run ./manage.py syncdb and get the following error: django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: this is MySQLdb version (1, 2, 3, 'final', 0), but _mysql is version (1, 2, 2, 'final', 0) I've searched forums for hours and none of the solutions presented helped. I tried uninstalling and re-installing MySQL-python and upgrading it. I get the same error when trying to import it from the python command line interpreter. Does anyone have any suggestions?
6
1
1.2
0
true
7,352,188
1
4,064
1
0
0
7,137,214
For those who come upon this question: it turned out that Ubuntu's _mysql version was different from the one in my venv. Uninstalling that and re-installing it in my venv did the trick.
1
0
0
Django MySQLdb version doesn't match _mysql version Ubuntu
2
mysql,django,deployment,ubuntu,mysql-python
0
2011-08-21T08:27:00.000
I'm trying to build a web server using Apache as the HTTP server with mod_wsgi + Python as the logic handler. The server is supposed to handle long requests without returning, meaning I want to keep writing stuff into the request. The problem is that when the link is broken, the socket is left in a CLOSE_WAIT status and Apache will NOT notify my Python program, which means I have to write something to get an exception saying the link is broken, but those messages are lost and can't be restored. I tried to get the socket status before writing via /proc/net/tcp, but that could not prevent a quick connect/disconnect. Anybody have any ideas? Please help, many thanks in advance!
0
1
1.2
0
true
7,145,199
1
393
1
0
0
7,144,011
You can't. It is a limitation of the API defined by the WSGI specification. So it has nothing to do with Apache or mod_wsgi really, as you will have the same issue with any WSGI server if you follow the WSGI specification. If you search through the mod_wsgi mailing list on Google Groups you will find a number of discussions about this sort of problem in the past.
1
0
0
Apache server with mod_wsgi + Python as backend: how can I be notified of my connection status?
1
python,apache,webserver,mod-wsgi
0
2011-08-22T06:52:00.000
I am currently working on two projects in Python. One needs Python 2.5 and the other 2.7. The problem is that when I installed MySQL-python for 2.5, it required the 32-bit version of MySQL and did not work with the 64-bit version, so I installed the 32-bit version. This project was done using virtualenv. Now I need to run on 2.7, and it wants the 64-bit version of MySQL. I cannot reinstall MySQL as the old project is still live. Is it possible to install both versions of MySQL on my Snow Leopard 10.6 machine? If so, how?
0
0
1.2
0
true
7,159,017
0
235
1
0
0
7,158,929
It is possible, but you'll need to compile them by hand. Start by creating separate folders for them to live in, then get the source and the dependencies they'll need and keep them separate. You'll need to alter the ./configure commands to point them to the correct places, and they should build fine.
1
0
1
Install both 32 bit and 64 bit versions of mysql on a same mac machine
1
mysql,osx-snow-leopard,32bit-64bit,mysql-python,python-2.5
0
2011-08-23T09:34:00.000
Okay, so I'm connected to an oracle database in python 2.7 and cx_Oracle 5.1 compiled against the instant client 11.2. I've got a cursor to the database and running SQL is not an issue, except this: cursor.execute('ALTER TRIGGER :schema_trigger_name DISABLE', schema_trigger_name='test.test_trigger') or cursor.prepare('ALTER TRIGGER :schema_trigger_name DISABLE') cursor.execute(None,{'schema_trigger_name': 'test.test_trigger'}) both result in an error from oracle: Traceback (most recent call last): File "connect.py", line 257, in cursor.execute('ALTER TRIGGER :schema_trigger_name DISABLE', schema_trigger_name='test.test_trigger') cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number While running: cursor.execute('ALTER TRIGGER test.test_trigger DISABLE') works perfectly. What's the issue with binding that variable?
1
0
0
0
false
7,174,814
0
1,300
1
0
0
7,174,741
You normally can't bind an object name in Oracle. Bind variables work for values, but not for trigger names, table names, etc.
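A hedged sketch of the practical consequence is below: the trigger name has to be interpolated into the statement text rather than bound, so it should be validated first because bind-variable protection does not apply. The connection string is a placeholder.

```python
# Hedged sketch: identifiers cannot be bound, so interpolate the trigger name
# after a basic sanity check. The DSN is a placeholder.
import cx_Oracle

conn = cx_Oracle.connect("user/password@localhost/XE")
cur = conn.cursor()

trigger = "test.test_trigger"
allowed = set("abcdefghijklmnopqrstuvwxyz0123456789_.$")
if not set(trigger.lower()) <= allowed:
    raise ValueError("suspicious trigger name: %r" % trigger)

cur.execute("ALTER TRIGGER %s DISABLE" % trigger)   # values would use binds
```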
1
0
0
Exception binding variables with cx_Oracle in python
2
python,oracle,cx-oracle
0
2011-08-24T11:32:00.000
As a personal project, I have been developing my own database software in C#. Many current database systems can use SQL commands for queries. Is there anyone here who could point me in the right direction for implementing such a system in database software written completely from scratch? For example, a user familiar with SQL could enter a statement as a string into an application; that statement will be analyzed by my application and the proper query will be run. Does anyone have any experience with something like that here? This is probably a very unusual question, haha. Basically what I am asking is: are there any tools available out there that can dissect SQL statements, or will I have to write my own from scratch for that? Thanks in advance for any help! (I may transfer some of my stuff to Python and Java, so any potential answers need not be limited to C#.) ALSO: I am not using any current SQL database or anything like that; my system is completely from scratch. I hope my question makes sense. Basically I want my application to be able to interface with programs that send SQL commands.
1
3
1.2
0
true
7,211,297
0
2,117
1
0
0
7,211,204
A full-on database engine is a pretty serious undertaking. You're not going to sit down and have a complete engine next week, so I'd have thought you would want to write the SQL parser piecemeal: adding features to the parser as the features are supported in the engine. I'm guessing this is just something fun to do, rather than something you want working ASAP. Given that, I'd have thought writing an SQL parser is one of the best bits of the project! I've done lots of work with flat file database engines, because the response times required for queries don't allow a RDBMS. One of the most enjoyable bits has been adding support for SQL fragments in e.g. the UI, where response time isn't quite as vital. The implementation I work on is plain old C, but in fact from what I've seen, most relational databases are still written primarily in C. And there is something satisfying about writing these things in a really low level language :)
1
0
0
C# custom database engine, how to implement SQL
1
c#,java,python,sql,database
0
2011-08-26T22:36:00.000
I'm trying to pull only one column from a datastore table. I have a Books model with id, key, title, author, isbn and price. everything = db.GqlQuery('SELECT * FROM Books') gives me everything, but say I only want the title: books = db.GqlQuery('SELECT title FROM Books'). I've tried everything people have suggested but nothing seems to work. Any help is much appreciated. Thanks
2
3
0.291313
0
false
7,214,401
1
1,553
1
0
0
7,213,991
You can't. GQL is not SQL, and the datastore is not a relational database. An entity is stored as a single serialized protocol buffer, and it's impossible to fetch part of an entity; the whole thing needs to be deserialized.
1
0
0
Google App Engine python, GQL, select only one column from datastore
2
python,google-app-engine,gql,gqlquery
0
2011-08-27T10:36:00.000
I am relatively new to SQLAlchemy and have done basic database creation, insert, update and delete. I have found it quite simple to use so far. My question is: I want to move records from one database to another backup database. What is the simplest way to do this in SQLAlchemy?
2
0
0
0
false
7,216,293
0
1,472
1
0
0
7,216,100
You would just go directly to the database utilities and back it up there. Nothing to do with SQLAlchemy.
1
0
0
What is the easiest way to move data from one database to another backup database using SQLAlchemy?
2
python,sqlalchemy
0
2011-08-27T17:17:00.000
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering whether it would be good practice to pull more data from the database and build the final result in my code, let's say with Python. Am I nuts? Would it be that bad for performance? Note that the results are huge too; I am talking about an ERP in production developed by other people.
9
7
1
0
false
7,279,821
0
3,111
3
0
0
7,279,761
Let the DB figure out how best to retrieve the information that you want, else you'll have to duplicate the functionality of the RDBMS in your code, and that will be way more complex than your SQL queries. Plus, you'll waste time transferring all that unneeded information from the DB to your app, so that you can filter and process it in code. All this is true because you say you're dealing with large data.
1
0
0
Should I use complex SQL queries or process results in the application?
5
python,sql,performance
0
2011-09-02T06:05:00.000
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering whether it would be good practice to pull more data from the database and build the final result in my code, let's say with Python. Am I nuts? Would it be that bad for performance? Note that the results are huge too; I am talking about an ERP in production developed by other people.
9
3
1.2
0
true
7,280,826
0
3,111
3
0
0
7,279,761
I would have the business logic in the application, as much as possible. Complex business logic in queries is difficult to maintain ("when I finish understanding one I have already forgotten how it all started"). Complex logic in stored procedures is OK, but with a typical Python application you would want your business logic to be in Python. Now, the database is way better at handling data than your application code. So if your logic involves huge amounts of data, you may get better performance with the logic in the database. But this will be for complex reports, bookkeeping operations and such, that operate on a large volume of data. You may want to use stored procedures, or systems that specialize in such operations (a data warehouse for reports), for these types of operations. Normal OLTP operations do not involve much data. The database may be huge, but the data required for a typical transaction will be (typically) a very small part of it. Querying this in a large database may cause performance issues, but you can optimize it in several ways (indexes, full-text searches, redundancy, summary tables... it depends on your actual problem). Every rule has exceptions, but as a general guideline, try to have your business logic in your application code: stored procedures for complex logic, and a separate data warehouse or a set of procedures for reporting.
1
0
0
Should I use complex SQL queries or process results in the application?
5
python,sql,performance
0
2011-09-02T06:05:00.000
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering whether it would be good practice to pull more data from the database and build the final result in my code, let's say with Python. Am I nuts? Would it be that bad for performance? Note that the results are huge too; I am talking about an ERP in production developed by other people.
9
1
0.039979
0
false
7,282,367
0
3,111
3
0
0
7,279,761
@Nivas is generally correct. These are pretty common patterns:
Division of labour - the DBAs have to return all the data the business needs, but they only have a database to work with. The developers could work with the DBAs to do it better, but departmental responsibilities make it nearly impossible. So SQL is used to do more than retrieve data.
Lack of smaller functions - could the massive query be broken down into smaller stages, using working tables? Yes, but I have known environments where a new table needs reams of approvals, so a heavy query is just written.
So, in general, getting data out of the database - that's down to the database. But if a SQL query is too long it's going to be hard for the RDBMS to optimise, and it probably means the query is spanning data, business logic and even presentation in one go. I would suggest a saner approach is usually to separate out the "get me the data" portions into stored procedures or other controllable queries that populate staging tables. Then the business logic can be written in a scripting language sitting above and controlling the stored procedures, and presentation is left elsewhere. In essence, solutions like Cognos try to do this anyway. But if you are looking at an ERP in production, the constraints and the solutions above probably already exist - are you talking to the right people?
1
0
0
Should I use complex SQL queries or process results in the application?
5
python,sql,performance
0
2011-09-02T06:05:00.000
My python application is dying, this oracle trace file is being generated. I am using cx_Oracle, how do I go about using this trace file to resolve this crash? ora_18225_139690296567552.trc kpedbg_dmp_stack()+360<-kpeDbgCrash()+192<-kpureq2()+3194<-OCIStmtPrepare2()+157<-Cursor_InternalPrepare()+298<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010
1
0
1.2
0
true
7,530,424
0
653
1
0
0
7,285,135
Do you have an Oracle support contract? If so, I would file an SR and upload the trace to Oracle and have them tell you what it is complaining about. Those code calls are deep in their codebase, from the looks of it.
1
0
0
I have an Oracle Stack trace file Python cx_Oracle
1
python,cx-oracle
0
2011-09-02T14:43:00.000
In a regular application (like on Windows), when objects/variables are created at a global level they are available to the entire program for the whole time the program is running. In a web application written in PHP, for instance, all variables/objects are destroyed at the end of the script, so everything has to be written to the database. a) So what about Python running under Apache/mod_wsgi? How does that work with regard to memory? b) How do you create objects that persist between web page requests, and how do you ensure there aren't threading issues in Apache/mod_wsgi?
4
0
0
0
false
7,293,404
1
193
1
0
0
7,293,290
All Python globals are created when the module is imported. When the module is re-imported, the same globals are used. Python web servers typically do not use threading, but pre-forked processes, so there are no threading issues with Apache. The lifecycle of Python processes under Apache varies: Apache has settings for how many child processes are spawned, kept in reserve and killed. This means that you can use globals in Python processes for caching (an in-process cache), but the process may terminate after any request, so you cannot put any persistent data in the globals. But the process does not necessarily need to terminate, and in this regard Python is much more efficient than PHP (the source code is not parsed for every request - but you need to have the server in reload mode to pick up source code changes during development). Since globals are per-process and there can be N processes, the processes share "web server global" state using mechanisms like memcached. Usually Python globals only contain settings set during process initialization and cached data (session/user neutral).
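A tiny sketch of the in-process caching pattern described above; the function names are placeholders. The module-level dict survives between requests served by the same process but is never shared across processes and disappears whenever Apache recycles the worker, so it should hold only data that can be rebuilt.

```python
# Hedged sketch of a per-process cache in a mod_wsgi application; names are
# placeholders. Safe only for data that can be re-fetched at any time.
_cache = {}

def get_user_settings(user_id, load_from_db):
    """Return cached settings, falling back to the DB loader once per process."""
    if user_id not in _cache:
        _cache[user_id] = load_from_db(user_id)
    return _cache[user_id]
```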
1
0
0
Memory model for apache/modwsgi application in python?
2
python,apache,memory-management,mod-wsgi
0
2011-09-03T13:09:00.000
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
2
0
0
0
false
34,628,302
0
3,251
3
0
0
7,346,079
To follow up on Thilo's answer, as a data point, I have a sqlite table with 2.3 million rows. Using select count(*) from table, it took over 3 seconds to count the rows. I also tried using SELECT rowid FROM table (thinking that rowid is a default primary indexed key), but that was no faster. Then I made an index on one of the fields in the database (just an arbitrary field, but I chose an integer field because I knew from past experience that indexes on short fields can be very fast, I think because the index stores a copy of the value in the index itself). SELECT my_short_field FROM table brought the time down to less than a second.
1
0
0
Fast number of rows in Sqlite
3
python,sqlite
0
2011-09-08T09:44:00.000
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
2
1
0.066568
0
false
7,346,136
0
3,251
3
0
0
7,346,079
Do you have any kind of index on a not-null column (for example a primary key)? If yes, the index can be scanned (which hopefully does not take that long). If not, a full table scan is the only way to count all rows.
1
0
0
Fast number of rows in Sqlite
3
python,sqlite
0
2011-09-08T09:44:00.000
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
2
1
0.066568
0
false
7,346,821
0
3,251
3
0
0
7,346,079
Another way to get the number of rows in a table is to use a trigger that stores the running count in another table (each insert operation increments a counter). This way, inserting a new record will be a little slower, but you can immediately get the number of rows.
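A small sqlite3 sketch of that trigger-maintained counter is below; the items table, its seeding and the counter table name are placeholders, and only inserts and deletes are handled.

```python
# Hedged sketch: keep a row counter up to date with triggers so the count is
# an O(1) lookup. Table names are placeholders; assumes "items" already exists.
import sqlite3

conn = sqlite3.connect("data.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS row_counts (name TEXT PRIMARY KEY, n INTEGER);
INSERT OR IGNORE INTO row_counts VALUES ('items', (SELECT COUNT(*) FROM items));
CREATE TRIGGER IF NOT EXISTS items_count_ins AFTER INSERT ON items
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE name = 'items';
END;
CREATE TRIGGER IF NOT EXISTS items_count_del AFTER DELETE ON items
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE name = 'items';
END;
""")

print(conn.execute("SELECT n FROM row_counts WHERE name = 'items'").fetchone()[0])
```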
1
0
0
Fast number of rows in Sqlite
3
python,sqlite
0
2011-09-08T09:44:00.000
I'm migrating a GAE/Java app to Python (non-GAE) due to the new pricing, so I'm getting a little server and I would like to find a database that fits the following requirements: low memory usage (or at least tuneable or predictable); fast querying capability for simple document/tree-like data identified by key (I don't care about write performance and I assume it will have indexes); bindings with PyPy 1.6 compatibility (or Python 2.7 at least). My data goes something like this: Id: short key string. Title. Creators: an array of another data structure which has an id (used as key), a name, a site address, etc. Tags: an array of tags; each of them can have multiple parent tags, a name, an id too, etc. License: a data structure which describes its license (CC, GPL, ... you name it) with a name, associated URL, etc. Addition time: when it was added to our site. Translations: pointers to other entries that are translations of one creation. My queries are very simple. Usual cases are: filter by tag ordered by addition time; select a few (pagination) ordered by addition time; (maybe, not done already) filter by creator; (not done but planned) some autocomplete features in forms, so I'm going to need to search whether some fields contain a substring ('LIKE' queries). The data volume is not big. Right now I have about 50MB of data, but I'm planning to have a huge dataset of around 10GB. Also, I want to rebuild this from scratch, so I'm open to any option. What database do you think can meet my requirements? Edit: I want to do some benchmarks around different options and share the results. I have selected, so far, MongoDB, PostgreSQL, MySQL, Drizzle, Riak and Kyoto Cabinet.
3
1
1.2
0
true
7,377,444
1
830
1
0
0
7,375,415
I would recommend PostgreSQL, only because it does what you want, can scale, is fast, is rather easy to work with, and is stable. It is exceptionally fast at the example queries given, and could be even faster with document querying.
1
0
0
Low memory and fastest querying database for a Python project
2
python,database,nosql,rdbms
0
2011-09-10T23:49:00.000
I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows. somerows = cursorToFetchData.fetchmany(30000) psycopg2.DataError: invalid value "LÃ" for "DD" DETAIL: Value must be an integer. My problem is that I have no column named "DD", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. I do not understand how Psycopg2 can have any requirements about the datatype while fetching rows.
0
2
1.2
0
true
7,378,101
0
176
2
0
0
7,375,572
Can you paste in the data from the row that's causing the problem? At a guess I'd say it's a badly formatted date entry, but it's hard to say. (I can't comment, so this has to be in an answer...)
1
0
0
Cryptic Psycopg2 error message
2
python,postgresql,psycopg2
0
2011-09-11T00:33:00.000
I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows. somerows = cursorToFetchData.fetchmany(30000) psycopg2.DataError: invalid value "LÃ" for "DD" DETAIL: Value must be an integer. My problem is that I have no column named "DD", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. I do not understand how Psycopg2 can have any requirements about the datatype while fetching rows.
0
1
0.099668
0
false
40,247,155
0
176
2
0
0
7,375,572
This is not a psycopg error, it is a postgres error. After the error is raised, take a look at cur.query to see the query generated. Copy and paste it into psql and you'll see the same error. Then debug it from there.
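The debugging step that answer describes could look roughly like this; the connection string, table and fetch size are placeholders, and cursor.query is the psycopg2 attribute that keeps the last statement sent to the server.

```python
# Hedged sketch: reproduce the failure, then print cursor.query so the exact
# statement can be pasted into psql. DSN and table name are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
try:
    cur.execute("SELECT * FROM big_table")
    while True:
        somerows = cur.fetchmany(30000)
        if not somerows:
            break
except psycopg2.DataError:
    print(cur.query)          # the query text to debug server-side
    raise
```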
1
0
0
Cryptic Psycopg2 error message
2
python,postgresql,psycopg2
0
2011-09-11T00:33:00.000
I have a large amount of data that I am pulling from an xml file that all needs to be validated against each other (in excess of 500,000 records). It is location data, so it has information such as: county, street prefix, street suffix, street name, starting house number, ending number. There are duplicates, house number overlaps, etc. and I need to report on all this data (such as where there are issues). Also, there is no ordering of the data within the xml file, so each record needs to be matched up against all others. Right now I'm creating a dictionary of the location based on the street name info, and then storing a list of the house number starting and ending locations. After all this is done, I'm iterating through the massive data structure that was created to find duplicates and overlaps within each list. I am running into problems with the size of the data structure and how many errors are coming up. One solution that was suggested to me was to create a temporary SQLite DB to hold all data as it is read from the file, then run through the DB to find all issues with the data, report them out, and then destroy the DB. Is there a better/more efficient way to do this? And any suggestions on a better way to approach this problem? As an fyi, the xml file I'm reading in is over 500MB (stores other data than just this street information, although that is the bulk of it), but the processing of the file is not where I'm running into problems, it's only when processing the data obtained from the file. EDIT: I could go into more detail, but the poster who mentioned that there was plenty of room in memory for the data was actually correct, although in one case I did have to run this against 3.5 million records, in that instance I did need to create a temporary database.
0
0
0
0
false
7,392,402
0
1,110
1
0
0
7,391,148
Unless this data has already been sanitised against the PAF (the UK Postcode Address File - basically every address in the UK), you will have addresses in there that are the same actual house but spelt differently, with the wrong postcode, the postcode in the wrong field, etc. This will completely change your approach. Check whether it is sanitised before you start. The person giving it to you will either say "yes, of course it has, and I did it" or they will look blankly - in which case no. If it is sanitised, great; probably an external agency is supplying your data and they can probably do this for you, but I expect you are being asked because it's cheaper. Get on. If not, you have a range of problems and need to talk with your boss about what they want, how confident they want to be of matches, etc. In general the idea is to come up with a number of match algorithms per field that output a confidence value that the two addresses under comparison are the same. Then a certain number of these values are weighted, and a total confidence value has to be passed to consider the two addresses a match. I am not sure this is your problem, but I do suggest you check what your boss exactly wants - this is not a clearly understood area between marketing and technical departments.
1
0
0
Large temporary database to sanitize data in Python
2
python,xml,sanitization
0
2011-09-12T16:40:00.000
Well, I might be doing some work in Python that would end up with hundreds of thousands, maybe millions of rows of data, each with entries in maybe 50 or more columns. I want a way to keep track of this data and work with it. Since I also want to learn Microsoft Access, I suggest putting the data in there. Is there any easy way to do this? I also want to learn SAS, so that would be fine too. Or, is there some other program/method I should know for such a situation? Thanks for any help!
1
1
1.2
0
true
7,410,499
0
323
1
0
0
7,410,458
Yes, you can talk to any ODBC database from Python, and that should include Access. You'll want the "windows" version of Python (which includes stuff like ODBC) from ActiveState. I'd be more worried about the "millions of rows" in Access; it can get a bit slow on retrieval if you're actually using it for relational tasks (that is, JOINing different tables together). I'd also take a look at your 50-column tables; sometimes you need 50 columns, but more often it means you haven't decomposed your data sufficiently to get it into normal form. Finally, if you use Python to read and write an Access database I don't know if I'd count that as "learning Access". Really learning Access would be using the front end to create and maintain the database, creating forms and reports in Access (which would not be available from Python) and programming in Visual Basic for Applications (VBA). I really like SQLite as an embedded database solution, especially from Python, and its SQL dialect is probably "purer" than Access's.
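For the ODBC route mentioned above, a hedged pyodbc sketch follows; the Jet driver name and the .mdb path are placeholders and assume a Windows machine with the Access ODBC driver installed.

```python
# Hedged sketch: read an Access .mdb through ODBC with pyodbc. The driver
# string and file path are placeholders; requires the Access ODBC driver.
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\data\research.mdb"
)
cur = conn.cursor()
cur.execute("SELECT TOP 5 * FROM results")   # Access SQL uses TOP, not LIMIT
for row in cur.fetchall():
    print(row)
```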
1
0
1
Is it possible to store data from Python in Access file?
2
python,database,ms-access
0
2011-09-14T01:56:00.000
I'm working on an application where a user can search for items near their location. When a user registers for my service, their long/lat coordinates are taken (these are actually grabbed from a zip/postcode, which then gets looked up via Google for the long/lat). This also happens when a user adds an item: they are asked for the zip/postcode of the item, and that is converted to the long/lat. My question is how would I run a query using MySQL that would search within, say, 20 miles of the user's location and get all the items within that 20-mile radius?
3
0
0
0
false
7,420,726
0
4,012
1
0
0
7,413,619
To be performant, you don't want to do a complete scan through the database and compute distances for each row, you want conditions that can be indexed. The simplest way to do this is to compute a box with a minimum/maximum latitude and minimum/maximum longitude, and use BETWEEN to exclude everything outside of those ranges. Since you're only dealing with US locations (zip code based), you won't have to worry about the transition between +180 and -180 degrees. The only remaining problem is to compute the bounds of the box in lat/long when your conditions are in miles. You need to convert miles to degrees. For latitude this is easy, just divide 360 degrees by the circumference of the earth and multiply by 20; 0.289625 degrees. Longitude is tougher because it varies by latitude, the circumference is roughly cosine(latitude)*24901.461; 20 miles is 20*360/(cos(latitude)*24901.461).
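The arithmetic described above, turned into a small worked sketch; the table layout (lat/lng columns) and the MySQL-style placeholders are assumptions.

```python
# Hedged sketch of the bounding-box computation and the indexed BETWEEN query
# it feeds; column and table names are placeholders.
import math

EARTH_CIRCUMFERENCE_MILES = 24901.461    # roughly, at the equator

def bounding_box(lat, lng, miles):
    dlat = miles * 360.0 / EARTH_CIRCUMFERENCE_MILES
    dlng = miles * 360.0 / (math.cos(math.radians(lat)) * EARTH_CIRCUMFERENCE_MILES)
    return lat - dlat, lat + dlat, lng - dlng, lng + dlng

min_lat, max_lat, min_lng, max_lng = bounding_box(40.7128, -74.0060, 20)
sql = ("SELECT * FROM items "
       "WHERE lat BETWEEN %s AND %s AND lng BETWEEN %s AND %s")
params = (min_lat, max_lat, min_lng, max_lng)
# cursor.execute(sql, params)   # with indexes on lat and lng
```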
1
0
0
Find long/lat's within 20 miles of user's long/lat
6
python,mysql,geolocation,latitude-longitude
0
2011-09-14T08:49:00.000
I have an application that needs to interface with another app's database. I have read access but not write. Currently I'm using SQL statements via pyodbc to grab the rows and using Python to manipulate the data. Since I don't cache anything, this can be quite costly. I'm thinking of using an ORM to solve my problem. The question is: if I use an ORM like SQLAlchemy, would it be smart enough to pick up changes in the other database? E.g. SQLAlchemy accesses a table and retrieves a row. If that row got modified outside of SQLAlchemy, would it be smart enough to pick it up? Edit: To be more clear, I have one application that is simply a reporting tool, let's call it App A. I have another application that handles various financial transactions, called App B. A has access to B's database to retrieve the transactions and generates various reports. There are hundreds of thousands of transactions. We're currently caching this info manually in Python; if we need an updated report we refresh the cache. If we get rid of the cache, the SQL queries combined with the calculations become unscalable.
1
2
0.379949
0
false
7,429,664
0
122
1
0
0
7,426,564
I don't think an ORM is the solution to your performance problem. By default, ORMs tend to be less efficient than raw SQL because they might fetch data that you're not going to use (e.g. doing a SELECT * when you need only one field), although SQLAlchemy allows fine-grained control over the SQL generated. Now, to implement a caching mechanism, depending on your application, you could use a simple dictionary in memory or a specialized system such as memcached or Redis. To keep your cached data relatively fresh, you can poll the source at regular intervals, which might be OK if your application can tolerate a little delay. Otherwise you'll need the application that has write access to the db to notify your application or your cache system when an update occurs. Edit: since you seem to have control over app B, and you've already got a cache system in app A, the simplest way to solve your problem is probably to create a callback in app A that app B can call to expire cached items. Both apps need to agree on a convention to identify cached items.
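A very small sketch of that expiry-callback idea follows; the cache is just an in-memory dict, and expire() stands in for whatever endpoint or RPC app B would actually call. All names are placeholders.

```python
# Hedged sketch of the cache-plus-callback arrangement described above; in a
# real setup expire() would sit behind an HTTP endpoint or message queue.
_report_cache = {}

def get_transactions(account_id, fetch_from_db):
    """Return cached transactions for an account, loading them once on a miss."""
    if account_id not in _report_cache:
        _report_cache[account_id] = fetch_from_db(account_id)
    return _report_cache[account_id]

def expire(account_id):
    """Called (indirectly) by app B right after it writes new transactions."""
    _report_cache.pop(account_id, None)
```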
1
0
0
How to interface with another database effectively using python
1
python,sql,orm,sqlalchemy
0
2011-09-15T06:12:00.000
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise. My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data. The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions. Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example. One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution. Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however. Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction. Does anyone know of a suitable implementation for large global lookup tables? Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
0
1
1.2
0
true
7,466,485
1
143
2
1
0
7,451,163
If you can, try and fit the data into instance memory. If it won't fit in instance memory, you have a few options available to you. You can store the data in a resource file that you upload with the app, if it only changes infrequently, and access it off disk. This assumes you can build a data structure that permits easy disk lookups - effectively, you're implementing your own read-only disk based table. Likewise, if it's too big to fit as a static resource, you could take the same approach as above, but store the data in blobstore. If your data absolutely must be in the datastore, you may need to emulate your own read-modify-write transactions. Add a 'revision' property to your records. To modify it, fetch the record (outside a transaction), perform the required changes, then inside a transaction, fetch it again to check the revision value. If it hasn't changed, increment the revision on your own record and store it to the datastore. Note that the underlying RPC layer does theoretically support multiple independent transactions (and non-transactional operations), but the APIs don't currently expose any way to access this from within a transaction, short of horrible (and I mean really horrible) hacks, unfortunately. One final option: You could run a backend provisioned with more memory, exposing a 'SpellCheckService', and make URLFetch calls to it from your frontends. Remember, in-memory is always going to be much, much faster than any disk-based option.
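The read-modify-write emulation sketched in that last option could look like this with the old google.appengine.ext.db API; the model, its fields and the retry policy are placeholders.

```python
# Hedged sketch of optimistic concurrency with a revision property on a
# parentless entity; model and field names are placeholders.
from google.appengine.ext import db

class LookupEntry(db.Model):
    value = db.TextProperty()
    revision = db.IntegerProperty(default=0)

def update_entry(key_name, new_value):
    entry = LookupEntry.get_by_key_name(key_name)   # read outside a transaction
    seen_revision = entry.revision

    def txn():
        fresh = LookupEntry.get_by_key_name(key_name)
        if fresh.revision != seen_revision:
            raise db.Rollback()        # someone else changed it; caller retries
        fresh.value = new_value
        fresh.revision += 1
        fresh.put()

    db.run_in_transaction(txn)
```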
1
0
0
GAE Lookup Table Incompatible with Transactions?
2
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
0
2011-09-16T22:55:00.000
My Python High Replication Datastore application requires a large lookup table of between 100,000 and 1,000,000 entries. I need to be able to supply a code to some method that will return the value associated with that code (or None if there is no association). For example, if my table held acceptable English words then I would want the function to return True if the word was found and False (or None) otherwise. My current implementation is to create one parentless entity for each table entry, and for that entity to contain any associated data. I set the datastore key for that entity to be the same as my lookup code. (I put all the entities into their own namespace to prevent any key conflicts, but that's not essential for this question.) Then I simply call get_by_key_name() on the code and I get the associated data. The problem is that I can't access these entities during a transaction because I'd be trying to span entity groups. So going back to my example, let's say I wanted to spell-check all the words used in a chat session. I could access all the messages in the chat because I'd give them a common ancestor, but I couldn't access my word table because the entries there are parentless. It is imperative that I be able to reference the table during transactions. Note that my lookup table is fixed, or changes very rarely. Again this matches the spell-check example. One solution might be to load all the words in a chat session during one transaction, then spell-check them (saving the results), then start a second transaction that would spell-check against the saved results. But not only would this be inefficient, the chat session might have been added to between the transactions. This seems like a clumsy solution. Ideally I'd like to tell GAE that the lookup table is immutable, and that because of this I should be able to query against it without its complaining about spanning entity groups in a transaction. I don't see any way to do this, however. Storing the table entries in the memcache is tempting, but that too has problems. It's a large amount of data, but more troublesome is that if GAE boots out a memcache entry I wouldn't be able to reload it during the transaction. Does anyone know of a suitable implementation for large global lookup tables? Please understand that I'm not looking for a spell-check web service or anything like that. I'm using word lookup as an example only to make this question clear, and I'm hoping for a general solution for any sort of large lookup tables.
0
1
0.099668
0
false
7,452,303
1
143
2
1
0
7,451,163
First, if you're under the belief that a namespace is going to help avoid key collisions, it's time to take a step back. A key consists of an entity kind, a namespace, a name or id, and any parents that the entity might have. It's perfectly valid for two different entity kinds to have the same name or id. So if you have, say, a LookupThingy that you're matching against, and have created each member by specifying a unique name, the key isn't going to collide with anything else. As for the challenge of doing the equivalent of a spell-check against an unparented lookup table within a transaction, is it possible to keep the lookup table in code? Or can you think of an analogy that's closer to what you need? One that motivates the need to do the lookup within a transaction?
1
0
0
GAE Lookup Table Incompatible with Transactions?
2
python,google-app-engine,transactions,google-cloud-datastore,entity-groups
0
2011-09-16T22:55:00.000
I have a function where I save a large number of models, (thousands at a time), this takes several minutes so I have written a progress bar to display progress to the user. The progress bar works by polling a URL (from Javascript) and looking a request.session value to see the state of the first call (the one that is saving). The problem is that the first call is within a @transaction.commit_on_success decorator and because I am using Database Backed sessions when I try to force request.session.save() instead of it immediately committing it is appended to the ongoing transaction. This results in the progress bar only being updated once all the saves are complete, thus rendering it useless. My question is, (and I'm 99.99% sure I already know the answer), can you commit statements within a transaction without doing the whole lot. i.e. I need to just commit the request.session.save() whilst leaving all of the others.. Many thanks, Alex
3
1
0.197375
0
false
7,473,401
1
839
1
0
0
7,472,348
No, both your main saves and the status bar updates will be conducted using the same database connection, so they will be part of the same transaction. I can see two options to avoid this. You can either create your own separate database connection and save the status bar updates using that, or not save the status bar updates to the database at all and instead use a cache to store them. As long as you don't use the database cache backend (ideally you'd use memcached) this will work fine. My preferred option would be the second one. You'd need to delve into the Django internals to get your own database connection, so that code is likely to end up fragile and messy.
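A rough sketch of the cache-based option, assuming memcached is configured as the Django cache backend; the key format, task_id and save_record() are made up for illustration:

    from django.core.cache import cache
    from django.http import HttpResponse

    def import_records(records, task_id):
        total = len(records)
        for i, record in enumerate(records, 1):
            save_record(record)                      # placeholder for your existing save logic
            if i % 50 == 0 or i == total:
                # cache writes bypass the database transaction entirely
                cache.set('import-progress-%s' % task_id, int(100.0 * i / total), 3600)

    def progress(request, task_id):
        # the view polled by the JavaScript progress bar
        return HttpResponse(str(cache.get('import-progress-%s' % task_id, 0)))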
1
0
0
Force commit of nested save() within a transaction
1
python,sql,django
0
2011-09-19T14:15:00.000
I use pymongo to test the performance of MongoDB. I use 100 threads, and every thread executes 5000 inserts; everything works OK. But when I execute 10000 inserts in every thread, I get an error: "AutoReconnect: Connection reset by peer"
3
1
0.099668
0
false
18,267,147
0
1,676
1
0
0
7,479,907
The driver can't remove a dropped socket from the connection pool until your code tries to use it.
1
0
1
Mongodb : AutoReconnect, Connection reset by peer
2
python,mongodb,pymongo
0
2011-09-20T03:48:00.000
I am currently writing a Python script to interact with an SQLite database but it kept returning that the database was "Encrypted or Corrupted". The database is definitely not encrypted and so I tried to open it using the sqlite3 library at the command line (returned the same error) and with SQLite Manager add-on for Firefox... I had a copy of the same database structure but populated by a different instance of this program on a windows box, I tried to open it using SQLite Manager and it was fine, so as a quick test I loaded the "Encrypted or Corrupted" database onto a USB stick and plugged it into the windows machine, using the manager it opened first time without issues. Does anyone have any idea what may be causing this? EDIT: On the Linux machine I tried accessing it as root with no luck, I also tried chmoding it to 777 just as a test (on a copied version of the DB), again with no luck
0
0
0
0
false
7,512,015
0
1,207
1
0
0
7,511,965
You should check the user privileges; the user on Linux may not have enough privileges.
1
0
0
SQLite3 Database file - Corrupted/Encrypted only on Linux
2
python,sql,database,sqlite
0
2011-09-22T08:40:00.000
I'm developing a web-based application written in PHP5, which basically is a UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes. I'm seeking advice on how to approach this development task. Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides, it's not Python).
11
3
0.099668
0
false
7,660,613
0
570
2
0
0
7,528,360
How about doing the scripting on the client. That will ensure maximum security and also save server resources. In other words Javascript would be your scripting platform. What you do is expose the functionality of your backend as javascript functions. Depending on how your app is currently written that might require backend work or not. Oh and by the way you are not limited to javascript for the actual language. Google "compile to javascript" and first hit should be a list of languages you can use.
1
0
0
Embed python/dsl for scripting in an PHP web application
6
php,python,dsl,plpgsql
1
2011-09-23T11:36:00.000
I'm developing a web-based application written in PHP5, which basically is a UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes. I'm seeking advice on how to approach this development task. Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides, it's not Python).
11
0
0
0
false
7,605,372
0
570
2
0
0
7,528,360
You could do it without Python, e.g. by parsing the user input for pre-defined "tags" and returning the result.
1
0
0
Embed python/dsl for scripting in an PHP web application
6
php,python,dsl,plpgsql
1
2011-09-23T11:36:00.000
I would like to store a Windows path in MySQL without escaping the backslashes. How can I do this in Python? I am using MySQLdb to insert records into the database. When I use MySQLdb.escape_string(), I notice that the backslashes are removed.
0
0
0
0
false
7,553,317
0
1,040
1
0
0
7,553,200
Have a look at os.path.normpath(thePath). I can't remember if it's that one, but there IS a standard os.path formatting function that gives double backslashes, which can be stored in a db "as is" and reused later "as is". I no longer have a Windows machine and cannot test it anymore.
1
0
0
Storing windows path in MySQL without escaping backslashes
2
python,mysql
0
2011-09-26T09:36:00.000
I'm trying to write a pop3 and imap clients in python using available libs, which will download email headers (and subsequently entire email bodies) from various servers and save them in a mongodb database. The problem I'm facing is that this client downloads emails in addition to a user's regular email client. So with the assumption that a user might or might not leave emails on the server when downloading using his mail client, I'd like to fetch the headers but only collect them from a certain date, to avoid grabbing entire mailboxes every time I fetch the headers. As far as I can see the POP3 list call will get me all messages on the server, even those I probably already downloaded. IMAP doesn't have this problem. How do email clients handle this situation when dealing with POP3 servers?
3
3
1.2
0
true
7,556,750
0
1,501
1
0
0
7,553,606
Outlook logs in to a POP3 server and issues the STAT, LIST and UIDL commands; then if it decides the user has no new messages it logs out. I have observed Outlook doing this when tracing network traffic between a client and my DBMail POP3 server. I have seen Outlook fail to detect new messages on a POP3 server using this method. Thunderbird behaves similarly but I have never seen it fail to detect new messages. Issue the LIST and UIDL commands to the server after logging in. LIST gives you an index number (the message's linear position in the mailbox) and the size of each message. UIDL gives you the same index number and a computed hash value for each message. For each user you can store the size and hash value given by LIST and UIDL. If you see the same size and hash value, assume it is the same message. When a given message no longer appears in this list, assume it has been deleted and clear it from your local memory. For complete purity, remember the relative positions of the size/hash pairs in the message list, so that you can support the possibility that they may repeat. (My guess on Outlook's new message detection failure is that sometimes these values do repeat, at least for DBMail, but Outlook remembers them even after they are deleted, and forever considers them not new. If it were me, I would try to avoid this behavior.) Footnote: Remember that the headers are part of the message. Do not trust anything in the header for this reason: dates, senders, even server hand-off information can be easily faked and cannot be assumed unique.
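A small poplib sketch (Python 2) of the LIST/UIDL bookkeeping described above; the host, credentials and the 'known' persistence are placeholders you would wire into your own storage:

    import poplib

    def find_new_messages(host, user, password, known):
        """known: set of (uid, size) pairs remembered from previous polls."""
        conn = poplib.POP3_SSL(host)
        conn.user(user)
        conn.pass_(password)
        try:
            # LIST gives 'msgnum octets', UIDL gives 'msgnum uid'
            sizes = dict(item.split() for item in conn.list()[1])
            new = []
            for item in conn.uidl()[1]:
                num, uid = item.split()
                if (uid, sizes[num]) not in known:
                    new.append(int(num))          # message numbers to RETR this session
            return new
        finally:
            conn.quit()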
1
0
0
Download POP3 headers from a certain date (Python)
1
python,email,pop3
1
2011-09-26T10:14:00.000
I have been doing lots of searching and reading to solve this. The main goal is to let a Django-based web management system connect to a device which runs an HTTP server as well. Django will handle the user request and ask the device for the real data, then feed it back to the user. Now I have a "kinda-works-in-concept" solution: Browser -> Apache Server: the browser has jQuery and HTML/CSS to collect the user request. Apache Server -> Device HTTP Server: Apache + mod_python (or, some say, Apache + mod_wsgi?), so I might control Apache to do stuff like build up a session and cookies to record login. But this is the issue that actually bugs me. How do I make it work? What should I use to build up the socket connection between these two servers?
0
0
0
0
false
7,567,682
1
320
1
0
0
7,565,812
If you have control over what runs on the device side, consider using XML-RPC to talk from client to server.
1
0
0
How to control Apache via Django to connect to mongoose(another HTTP server)?
2
python,django,apache,mod-wsgi,mod-python
0
2011-09-27T07:52:00.000
I know about the XLWT library, which I've used before on a Django project. XLWT is very neat but as far as I know, it doesn't support .xlsx which is the biggest obstacle in my case. I'm probably going to be dealing with more than 2**16 rows of information. Is there any other mature similar library? Or even better, is there a fork for the XLWT with this added functionality? I know there are libraries in C#, but if a python implementation already exists, it would be a lot better. Thanks a bunch!
3
0
0
0
false
7,576,355
0
1,504
1
0
0
7,576,309
Export a CSV instead; don't use .xlsx.
1
0
0
Exporting to Excel .xlsx from a Python Pyramid project
3
python,xls,xlsx,xlwt,openpyxl
0
2011-09-27T22:14:00.000
I am researching a project that would require hundreds of database writes per a minute. I have never dealt with this level of data writes before and I am looking for good scalable techniques and technologies. I am a comfortable python developer with experience in django and sql alchemy. I am thinking I will build the data interface on django, but I don't think that it is a good idea to go through the orm to do the amount of data writes I will require. I am definitely open to learning new technologies. The solution will live on Amazon web services, so I have access to all their tools. Ultimately I am looking for advice on database selection, data writing techniques, and any other needs I may have that I do not realize. Any advice on where to start? Thanks, CG
0
0
0
0
false
7,587,624
1
324
2
0
0
7,586,999
You should actually be okay with low hundreds of writes per minute through SQLAlchemy (that's only a couple per second); if you're talking more like a thousand a minute, yeah, that might be problematic. What kind of data do you have? If it's fairly flat (few tables, few relations), you might want to investigate a non-relational database such as CouchDB or Mongo. If you want to use SQL, I strongly recommend PostgreSQL; it seems to deal with large databases and frequent writes a lot better than MySQL. It also depends how complex the data is that you're inserting. I think, unfortunately, you're just going to have to try a couple of things and run benchmarks, as each situation is different and query optimizers are basically magic.
1
0
0
Setup for high volume of database writing
3
python,django,database-design,amazon-web-services
0
2011-09-28T17:14:00.000
I am researching a project that would require hundreds of database writes per a minute. I have never dealt with this level of data writes before and I am looking for good scalable techniques and technologies. I am a comfortable python developer with experience in django and sql alchemy. I am thinking I will build the data interface on django, but I don't think that it is a good idea to go through the orm to do the amount of data writes I will require. I am definitely open to learning new technologies. The solution will live on Amazon web services, so I have access to all their tools. Ultimately I am looking for advice on database selection, data writing techniques, and any other needs I may have that I do not realize. Any advice on where to start? Thanks, CG
0
0
0
0
false
7,587,774
1
324
2
0
0
7,586,999
If it's just a few hundred writes you can still do fine with a relational DB. I'd pick PostgreSQL (8.0+), which has a separate background writer process. It also has tuneable serialization levels, so you can enable some tradeoffs between speed and strict ACID compliance, some even at the transaction level. Postgres is well documented, but it assumes some deeper understanding of SQL and relational DB theory to fully understand and make the most of it. The alternative would be a newfangled "NoSQL" system, which can probably scale even better, but at the cost of buying into a very different technology stack. Anyway, if you are using Python, it is not 100% critical to never lose writes on shutdown or power loss, and you need low latency, use a threadsafe Queue.Queue and worker threads to decouple the writes from your main application thread(s).
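A bare-bones sketch of that Queue.Queue decoupling (Python 2 module names); write_batch() stands in for whatever SQLAlchemy or DB-API code actually performs the inserts:

    import Queue
    import threading

    write_queue = Queue.Queue(maxsize=10000)

    def writer_loop():
        while True:
            rows = [write_queue.get()]
            # drain whatever else is already queued so inserts can be batched
            while not write_queue.empty() and len(rows) < 500:
                rows.append(write_queue.get_nowait())
            write_batch(rows)            # placeholder: your actual DB insert code
            for _ in rows:
                write_queue.task_done()

    worker = threading.Thread(target=writer_loop)
    worker.daemon = True
    worker.start()

    # request handlers just enqueue and return immediately:
    # write_queue.put(row_dict)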
1
0
0
Setup for high volume of database writing
3
python,django,database-design,amazon-web-services
0
2011-09-28T17:14:00.000
Problem I am writing a program that reads a set of documents from a corpus (each line is a document). Each document is processed using a function processdocument, assigned a unique ID, and then written to a database. Ideally, we want to do this using several processes. The logic is as follows: The main routine creates a new database and sets up some tables. The main routine sets up a group of processes/threads that will run a worker function. The main routine starts all the processes. The main routine reads the corpus, adding documents to a queue. Each process's worker function loops, reading a document from a queue, extracting the information from it using processdocument, and writes the information to a new entry in a table in the database. The worker loops breaks once the queue is empty and an appropriate flag has been set by the main routine (once there are no more documents to add to the queue). Question I'm relatively new to sqlalchemy (and databases in general). I think the code used for setting up the database in the main routine works fine, from what I can tell. Where I'm stuck is I'm not sure exactly what to put into the worker functions for each process to write to the database without clashing with the others. There's nothing particularly complicated going on: each process gets a unique value to assign to an entry from a multiprocessing.Value object, protected by a Lock. I'm just not sure whether what I should be passing to the worker function (aside from the queue), if anything. Do I pass the sqlalchemy.Engine instance I created in the main routine? The Metadata instance? Do I create a new engine for each process? Is there some other canonical way of doing this? Is there something special I need to keep in mind? Additional Comments I'm well aware I could just not bother with the multiprocessing but and do this in a single process, but I will have to write code that has several processes reading for the database later on, so I might as well figure out how to do this now. Thanks in advance for your help!
1
5
0.761594
0
false
7,603,832
0
1,473
1
0
0
7,603,790
The MetaData and its collection of Table objects should be considered a fixed, immutable structure of your application, not unlike your function and class definitions. As you know with forking a child process, all of the module-level structures of your application remain present across process boundaries, and table defs are usually in this category. The Engine however refers to a pool of DBAPI connections which are usually TCP/IP connections and sometimes filehandles. The DBAPI connections themselves are generally not portable over a subprocess boundary, so you would want to either create a new Engine for each subprocess, or use a non-pooled Engine, which means you're using NullPool. You also should not be doing any kind of association of MetaData with Engine, that is "bound" metadata. This practice, while prominent on various outdated tutorials and blog posts, is really not a general purpose thing and I try to de-emphasize this way of working as much as possible. If you're using the ORM, a similar dichotomy of "program structures/active work" exists, where your mapped classes of course are shared between all subprocesses, but you definitely want Session objects to be local to a particular subprocess - these correspond to an actual DBAPI connection as well as plenty of other mutable state which is best kept local to an operation.
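A sketch of that per-subprocess setup; the URL, model and make_record() helper are invented, and the Engine/Session are built inside the child so nothing DBAPI-related crosses the process boundary:

    from multiprocessing import Process, Queue
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.pool import NullPool

    def worker(queue, db_url):
        # build the Engine and Session factory inside the child process
        engine = create_engine(db_url, poolclass=NullPool)
        Session = sessionmaker(bind=engine)
        session = Session()
        while True:
            doc = queue.get()
            if doc is None:                      # sentinel pushed by the main routine when the corpus is done
                break
            session.add(make_record(doc))        # make_record() stands in for your processdocument-based mapping
            session.commit()
        session.close()
        engine.dispose()

    if __name__ == '__main__':
        q = Queue()
        procs = [Process(target=worker, args=(q, 'postgresql://user:pw@localhost/corpus')) for _ in range(4)]
        for p in procs:
            p.start()
        # ... main routine reads the corpus, q.put(doc) for each document, then q.put(None) per worker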
1
0
1
How to use simple sqlalchemy calls while using thread/multiprocessing
1
python,database,multithreading,sqlalchemy,multiprocessing
0
2011-09-29T21:50:00.000
So I am pretty sure that I have managed to dork up my MySQLdb installation. I have all of the following installed correctly on a fresh install of OS X Lion: phpMyAdmin MySQL 5.5.16 Django 1.3.1 And yet when I try to run "from django.db import connection" in a django console, I get the following: from django.db import connection Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/__init__.py", line 78, in connection = connections[DEFAULT_DB_ALIAS] File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/utils.py", line 93, in __getitem__ backend = load_backend(db['ENGINE']) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/utils.py", line 33, in load_backend return import_module('.base', backend_name) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/[my username]/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib Referenced from: /Users/[my username]/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp/_mysql.so Reason: image not found I have no idea why this is happening, could somebody help walk me through this?
2
1
0.066568
0
false
7,605,229
1
3,834
2
0
0
7,605,212
Install pip if you haven't already, and run pip install MySQL-Python
1
0
0
Having an issue with setting up MySQLdb on Mac OS X Lion in order to support Django
3
python,mysql,django,macos,mysql-python
0
2011-09-30T01:53:00.000
So I am pretty sure that I have managed to dork up my MySQLdb installation. I have all of the following installed correctly on a fresh install of OS X Lion: phpMyAdmin MySQL 5.5.16 Django 1.3.1 And yet when I try to run "from django.db import connection" in a django console, I get the following: from django.db import connection Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/__init__.py", line 78, in connection = connections[DEFAULT_DB_ALIAS] File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/utils.py", line 93, in __getitem__ backend = load_backend(db['ENGINE']) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/utils.py", line 33, in load_backend return import_module('.base', backend_name) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Python/2.7/site-packages/Django-1.3.1-py2.7.egg/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/[my username]/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib Referenced from: /Users/[my username]/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.7-intel.egg-tmp/_mysql.so Reason: image not found I have no idea why this is happening, could somebody help walk me through this?
2
5
0.321513
0
false
12,027,574
1
3,834
2
0
0
7,605,212
I found the following solution for this issue. It worked for me. I have encountered this problem when I was running python console from PyCharm. sudo ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib
1
0
0
Having an issue with setting up MySQLdb on Mac OS X Lion in order to support Django
3
python,mysql,django,macos,mysql-python
0
2011-09-30T01:53:00.000
I've been spending the better part of the weekend trying to figure out the best way to transfer data from an MS Access table into an Excel sheet using Python. I've found a few modules that may help (execsql, python-excel), but with my limited knowledge and the modules I have to use to create certain data (I'm a GIS professional, so I'm creating spatial data using the ArcGIS arcpy module into an access table) I'm not sure what the best approach should be. All I need to do is copy 4 columns of data from access to excel and then format the excel. I have the formatting part solved. Should I: Iterate through the rows using a cursor and somehow load the rows into excel? Copy the columns from access to excel? Export the whole access table into a sheet in excel? Thanks for any suggestions.
2
1
0.039979
0
false
7,636,416
0
4,767
2
0
0
7,630,142
Another idea - how important is the formatting part? If you can ditch the formatting, you can output your data as CSV. Excel can open CSV files, and the CSV format is much simpler than the Excel format - it's so simple you can write it directly from Python like a text file, and that way you won't need to mess with Office COM objects.
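For what it's worth, writing that CSV from Python is only a few lines (Python 2 shown, hence the 'wb' mode); the column names are invented:

    import csv

    def dump_rows(rows, path='export.csv'):
        with open(path, 'wb') as f:               # use open(path, 'w', newline='') on Python 3
            writer = csv.writer(f)
            writer.writerow(['name', 'address', 'city', 'zip'])   # header row
            for row in rows:
                writer.writerow(row)              # each row is a sequence of field values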
1
0
0
Copy data from MS Access to MS Excel using Python
5
python,excel,ms-access
0
2011-10-03T00:23:00.000
I've been spending the better part of the weekend trying to figure out the best way to transfer data from an MS Access table into an Excel sheet using Python. I've found a few modules that may help (execsql, python-excel), but with my limited knowledge and the modules I have to use to create certain data (I'm a GIS professional, so I'm creating spatial data using the ArcGIS arcpy module into an access table) I'm not sure what the best approach should be. All I need to do is copy 4 columns of data from access to excel and then format the excel. I have the formatting part solved. Should I: Iterate through the rows using a cursor and somehow load the rows into excel? Copy the columns from access to excel? Export the whole access table into a sheet in excel? Thanks for any suggestions.
2
1
0.039979
0
false
7,630,189
0
4,767
2
0
0
7,630,142
The best approach might be to not use Python for this task. You could use the macro recorder in Excel to record the import of the External data into Excel. After starting the macro recorder click Data -> Get External Data -> New Database Query and enter your criteria. Once the data import is complete you can look at the code that was generated and replace the hard coded search criteria with variables.
1
0
0
Copy data from MS Access to MS Excel using Python
5
python,excel,ms-access
0
2011-10-03T00:23:00.000
I am importing text files into excel using xlwt module. But it allows only 256 columns to be stored. Are there any ways to solve this problem?
14
0
0
0
false
70,290,332
0
18,626
2
0
0
7,658,513
If you are trying to write to the columns in a for loop and getting this error, re-initialize the column index to 0 while iterating.
1
0
0
Python - Xlwt more than 256 columns
6
python
0
2011-10-05T08:21:00.000
I am importing text files into excel using xlwt module. But it allows only 256 columns to be stored. Are there any ways to solve this problem?
14
1
0.033321
0
false
7,658,627
0
18,626
2
0
0
7,658,513
Is that a statement of fact, or should xlwt support more than 256 columns? What error do you get? What does your code look like? If it truly does have a 256-column limit, just write your data to a CSV file using the appropriate Python module and import the file into Excel.
1
0
0
Python - Xlwt more than 256 columns
6
python
0
2011-10-05T08:21:00.000
I'm using Python with Celery and RabbitMQ to make a web spider to count the number of links on a page. Can a database, such as MySQL, be written into asynchronously? Is it OK to commit the changes after every row added, or is it required to batch them (multi-add) and then commit after a certain number of rows/duration? I'd prefer to use SQLAlchemy and MySQL, unless there is a more recommended combination for Celery/RabbitMQ. I also see NoSQL (CouchDB?) recommended.
0
1
1.2
0
true
7,780,116
0
728
1
0
0
7,659,246
For write-intensive operations like counters and logs, NoSQL solutions are always the best choice. Personally, I use MongoDB for this kind of task.
1
0
0
Python Celery Save Results in Database Asynchronously
1
python,database,asynchronous,rabbitmq,celery
0
2011-10-05T09:31:00.000
I am trying to edit several excel files (.xls) without changing the rest of the sheet. The only thing close so far that I've found is the xlrd, xlwt, and xlutils modules. The problem with these is it seems that xlrd evaluates formulae when reading, then puts the answer as the value of the cell. Does anybody know of a way to preserve the formulae so I can then use xlwt to write to the file without losing them? I have most of my experience in Python and CLISP, but could pick up another language pretty quick if they have better support. Thanks for any help you can give!
0
1
0.049958
0
false
7,667,880
0
8,902
1
0
0
7,665,486
As of now, xlrd doesn't read formulas. It's not that it evaluates them, it simply doesn't read them. For now, your best bet is to programmatically control a running instance of Excel, either via pywin32 or Visual Basic or VBScript (or some other Microsoft-friendly language which has a COM interface). If you can't run Excel, then you may be able to do something analogous with OpenOffice.org instead.
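If driving a real Excel instance is an option, a hedged win32com sketch looks roughly like this; the path and cell addresses are arbitrary, and formulas in cells you don't touch are preserved because Excel itself performs the save:

    import win32com.client

    excel = win32com.client.Dispatch('Excel.Application')
    excel.Visible = False
    wb = excel.Workbooks.Open(r'C:\data\report.xls')      # hypothetical file
    try:
        ws = wb.Worksheets(1)
        formula = ws.Range('B2').Formula                  # e.g. '=SUM(A1:A10)', read without evaluation loss
        ws.Range('A1').Value = 'updated by script'        # edit a plain cell; other formulas stay intact
        wb.Save()
    finally:
        wb.Close(SaveChanges=False)
        excel.Quit()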
1
0
0
Is there any way to edit an existing Excel file using Python preserving formulae?
4
python,excel,formula,xlwt,xlrd
0
2011-10-05T17:52:00.000
I'm using PostgreSQL and Python and I need to store data grouped by week of the year. So, there are plenty of alternatives: week and year in two separate fields; a date pointing to the start of the week (or a random day of the week); and the one I like: an interval type. I've never used it, but reading the docs, it seems to fit. But then, reading the psycopg docs, I found interval mapped to a Python timedelta object... which seems weird to me; a timedelta is just a difference. So, there are two questions here, really: Can I handle this choice using psycopg2? Is it the better alternative? Thanks
0
4
1.2
0
true
7,668,912
0
661
1
0
0
7,668,822
The PostgreSQL interval type isn't really what you're looking for -- it's specifically intended for storing an arbitrary length of time, ranging anywhere from a microsecond to a few million years. An interval has no starting or ending point; it's just a measure of "how long". If you're specifically after storing which week an event is associated with, you're probably better off with either of your first two options.
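For the record, deriving the two plain columns from a date is trivial in Python, which is one reason the first option is attractive; the table and column names in the commented insert are assumptions:

    import datetime

    def iso_week_fields(d):
        """Return (iso_year, iso_week) for a date, e.g. (2011, 40)."""
        iso_year, iso_week, _ = d.isocalendar()
        return iso_year, iso_week

    # storing with psycopg2 (cur is an open cursor, 'sales_by_week' is hypothetical):
    # cur.execute("INSERT INTO sales_by_week (iso_year, iso_week, total) VALUES (%s, %s, %s)",
    #             iso_week_fields(datetime.date.today()) + (total,))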
1
0
0
psycopg2: interval type for storing weeks
1
python,postgresql,psycopg2
0
2011-10-05T23:11:00.000
I'm writing a script to access data in an established database and unfortunately, I'm breaking the DB. I'm able to recreate the issue from the command line: [user@box tmp]# python Python 2.7.2 (default, Sep 19 2011, 15:02:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pgdb >>> db = pgdb.connect('localhost:my_db:postgres') >>> cur = db.cursor() >>> cur.execute("SELECT * FROM mytable LIMIT 10") >>> cur.close() >>> At this point any activity to mytable is greatly degraded and "select * from pg_stat_activity" shows my connection as "IDLE in transaction". If I call db.close() everything is fine, but my script loops infinitely and I didn't think I'd need to open and close the db connection with each loop. I don't think it has anything to do with the fact that I'm not using the data above as in my real script I am calling fetchone() (in a loop) to process the data. I'm not much of a DB guy so I'm not sure what other info would be useful. My postgres version is 9.1.0 and python is 2.7.2 as shown above.
2
2
0.197375
0
false
7,670,330
0
1,471
2
0
0
7,669,434
I suggest using psycopg2 instead of pgdb. pgdb uses the following semantics: connect() -> open database connection, begin transaction; commit() -> commit, begin transaction; rollback() -> rollback, begin transaction; execute() -> execute statement. psycopg2, on the other hand, uses the following semantics: connect() -> open database connection; commit() -> commit; rollback() -> rollback; execute() -> begin transaction unless already in transaction, execute statement. So, as Amber mentioned, you can do a rollback or commit after your select statement and terminate the transaction. Unfortunately, with pgdb, you will immediately start a new transaction after you rollback or commit (even if you haven't performed any work). For many database systems, pgdb's behavior is fine, but because of the way PostgreSQL handles transactions, it can cause trouble for you if you've got lots of connections accessing the same tables (trouble specifically with vacuum). Why does pgdb start a transaction right away? The Python DB-API (2.0) spec calls for it to do so. Seems kind of silly to me, but that's the way the spec is written.
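Whichever driver you end up with, the practical fix is to end the transaction as soon as the read is finished; a psycopg2-flavoured sketch (connection parameters are placeholders):

    import psycopg2

    conn = psycopg2.connect('dbname=my_db user=postgres host=localhost')
    cur = conn.cursor()
    cur.execute('SELECT * FROM mytable LIMIT 10')
    rows = cur.fetchall()
    cur.close()
    conn.rollback()     # nothing was modified, so rollback (or commit) just ends the transaction
    # the connection no longer shows up as "IDLE in transaction" and can be reused later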
1
0
0
python pgdb hanging database
2
python,postgresql,pgdb
0
2011-10-06T01:15:00.000
I'm writing a script to access data in an established database and unfortunately, I'm breaking the DB. I'm able to recreate the issue from the command line: [user@box tmp]# python Python 2.7.2 (default, Sep 19 2011, 15:02:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pgdb >>> db = pgdb.connect('localhost:my_db:postgres') >>> cur = db.cursor() >>> cur.execute("SELECT * FROM mytable LIMIT 10") >>> cur.close() >>> At this point any activity to mytable is greatly degraded and "select * from pg_stat_activity" shows my connection as "IDLE in transaction". If I call db.close() everything is fine, but my script loops infinitely and I didn't think I'd need to open and close the db connection with each loop. I don't think it has anything to do with the fact that I'm not using the data above as in my real script I am calling fetchone() (in a loop) to process the data. I'm not much of a DB guy so I'm not sure what other info would be useful. My postgres version is 9.1.0 and python is 2.7.2 as shown above.
2
2
1.2
0
true
7,669,476
0
1,471
2
0
0
7,669,434
Try calling db.rollback() before you close the cursor (or if you're doing a write operation, db.commit()).
1
0
0
python pgdb hanging database
2
python,postgresql,pgdb
0
2011-10-06T01:15:00.000
Pretty recent (but not newborn) to both Python, SQLAlchemy and Postgresql, and trying to understand inheritance very hard. As I am taking over another programmer's code, I need to understand what is necessary, and where, for the inheritance concept to work. My questions are: Is it possible to rely only on SQLAlchemy for inheritance? In other words, can SQLAlchemy apply inheritance on Postgresql database tables that were created without specifying INHERITS=? Is the declarative_base technology (SQLAlchemy) necessary to use inheritance the proper way. If so, we'll have to rewrite everything, so please don't discourage me. Assuming we can use Table instance, empty Entity classes and mapper(), could you give me a (very simple) example of how to go through the process properly (or a link to an easily understandable tutorial - I did not find any easy enough yet). The real world we are working on is real estate objects. So we basically have - one table immobject(id, createtime) - one table objectattribute(id, immoobject_id, oatype) - several attribute tables: oa_attributename(oa_id, attributevalue) Thanks for your help in advance. Vincent
1
4
0.379949
0
false
7,675,115
0
1,451
1
0
0
7,672,569
Welcome to Stack Overflow: in the future, if you have more than one question; you should provide a separate post for each. Feel free to link them together if it might help provide context. Table inheritance in postgres is a very different thing and solves a different set of problems from class inheritance in python, and sqlalchemy makes no attempt to combine them. When you use table inheritance in postgres, you're doing some trickery at the schema level so that more elaborate constraints can be enforced than might be easy to express in other ways; Once you have designed your schema; applications aren't normally aware of the inheritance; If they insert a row; it just magically appears in the parent table (much like a view). This is useful, for instance, for making some kinds of bulk operations more efficient (you can just drop the table for the month of january). This is a fundamentally different idea from inheritance as seen in OOP (in python or otherwise, with relational persistence or otherwise). In that case, the application is aware that two types are related, and that the subtype is a permissible substitute for the supertype. "A holding is an address, a contact has an address therefore a contact can have a holding." Which of these, (mostly orthogonal) tools you need depends on the application. You might need neither, you might need both. Sqlalchemy's mechanisms for working with object inheritance is flexible and robust, you should use it in favor of a home built solution if it is compatible with your particular needs (this should be true for almost all applications). The declarative extension is a convenience; It allows you to describe the mapped table, the python class and the mapping between the two in one 'thing' instead of three. It makes your code more "DRY"; It is however only a convenience layered on top of "classic sqlalchemy" and it isn't necessary by any measure. If you find that you need table inheritance that's visible from sqlalchemy; your mapped classes won't be any different from not using those features; tables with inheritance are still normal relations (like tables or views) and can be mapped without knowledge of the inheritance in the python code.
1
0
0
Python, SQLAlchemy and Postgresql: understanding inheritance
2
python,postgresql,inheritance,sqlalchemy
0
2011-10-06T09:40:00.000
At work we want our next generation product to be based on a graph database. I'm looking for suggestions as to what database engine might be appropriate for our new project: Our product is intended to keep track of a large number of prices for goods. Here's a simplistic example of what it does - supposing you wanted to estimate the price of gasoline in the UK - you know that Gasoline is refined from crude-oil. If you knew the price of crude oil in the UK you could estimate the price of anything simply by adding the cost of refining, transporting (etc). Actually things are more complex because there are a number of sources of crude-oil and hundreds of refined oil products. The prices of oil products can be affected by the availability of other energy sources (e.g. nuclear, wind, natural gas) and the demand. It's kind of complex! The idea is that we want to model the various inter-related goods and their costs of refining, transportation (etc) as an acyclic directed graph. The idea being, when an event causes a price to change then we want to be quickly able to determine what kinds of things are affected and re-calculate those prices ASAP. Essentially we need a database which can represent the individual commodities as nodes in the graph. Each node will store a number of curves and surfaces of information pertaining to the product. We want to represent the various costs & transformations (e.g. refining, transportation) as labels on the edges. As with the nodes, the information we want to store could be quite complex - not just single values but curves and surfaces. The calculations we do are all linear with respect to the size of the objects, however since the graph could be very big we need to be able to traverse the graph very quickly. We are Java and Python centric - ideally we are after a product that runs on the JVM but has really good APIs for both Python and Java. We don't care so much about other languages... but .Net would be nice to have (even though it might be years before we get round to doing something with it). We'd definitely like something which was high-performance - but more importantly the system needs to have a degree of hardware fault tolerance. For example, we'd like to distribute the database across a number of physical servers. In the event that any of the servers go down we'd like to be able to continue without an interruption. Oh, and we are really lazy. We don't want to spend much time writing infrastructure - so if the database came with tools that allow us to do as much as possible of this kind of thing with very little work that's fine by us. It would also be a real bonus if there was a grid technology associated with the graph DB, that way we could push a sequence of re-calculate jobs onto a compute grid and have much of our calculation done in parallel. So, that's a description of the kind of thing we want to build. What I want to know is whether there are any mature technologies which will help us achieve this. As I mentioned before, we have a preference for Python & JVM, however if the technology is really good and comes with great bindings for Python + Java we'd consider almost anything.
2
3
1.2
0
true
7,675,078
0
303
1
0
0
7,674,895
Neo4j is the most mature graph DB I know of - it is Java-based, with bindings for Python too, or REST.
1
0
0
I'm looking for a graph-database for a Java/Python centric organization
1
java,python,database,graph
0
2011-10-06T13:27:00.000
We have a bunch of utility scripts in Visual FoxPro, which we use to interactively cleanse/format data. We'd like to start migrating this code to make use of other database platforms, like MySQL or SQLite. For instance we have a script that we run which converts the name and/or address lines to proper upper/lower case. This code goes through an entire table and analyzes/fixes each row. There are others, that do things like parse and standardize the address and even duplicate detection... We're thinking of migrating the code to Python and possibly using something like SQLAlchemy as a "middleman". In Visual FoxPro the database/tables are integrated so we can just open the table and run commands. MySQL is different in that we need to extract data from it, then work on that extracted data, then update the table. What would be the best approach? I see several possibilities: 1) Extract the the entire data set to be worked on, say all the address fields, if that's what we're going to be working with, then updating it all and writing it all back... 2) Extract the data set in chunks, so as to not potentially consume vast amounts of system memory... then update and write back 3) Generate SQL code, perhaps with the help of a tool like SQLAlchemy, that gets sent to and executed by the server... 4) ??? Anything else I didn't think of?
1
0
0
0
false
7,681,237
0
789
1
0
0
7,681,017
It seems like you're trying to do several things all at once. Could you take a step-by-step approach? Perhaps cleansing the data as they are right now using your normal, usual scripts. Then migrate the database to MySQL. It is easy to migrate the database if VisualFoxPro offers a way to export the database to, say, CSV. You can then import that CSV into MySQL directly, with very little trouble. That gives you two databases that should be functionally identical. Of course, you have to prove that they are indeed identical, which isn't too hard but is time-consuming. You might be able to use SQLAlchemy to help. When the MySQL database is right, that's the time to port your cleansing scripts to Python or something and get those working. That's how I would approach this problem: break it into pieces and not try to do too much in any single step. HTH
1
0
0
What's the best language/technique to perform advanced data cleansing and formatting on a SQL/MySQL/PostgreSQL table?
1
python,mysql,sqlalchemy,foxpro,data-cleaning
0
2011-10-06T22:03:00.000
I have a Pylons application using SQLAlchemy with SQLite as backend. I would like to know if every read operation going to SQLite will always lead to a hard disk read (which is very slow compared to RAM) or some caching mechanisms are already involved. does SQLite maintain a subset of the database in RAM for faster access ? Can the OS (Linux) do that automatically ? How much speedup could I expect by using a production database (MySQL or PostgreSQL) instead of SQLite?
5
3
1.2
0
true
7,712,124
0
339
1
0
0
7,710,895
Yes, SQLite has its own memory cache. Check PRAGMA cache_size for instance. Also, if you're looking for speedups, check PRAGMA temp_store. There is also an API for implementing your own cache. The SQLite database is just a file to the OS. Nothing is 'automatically' done for it. To ensure caching does happen, there are sqlite.h defines and runtime pragma settings. It depends; there are a lot of cases where you'll get a slowdown instead.
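A quick sketch of poking at those PRAGMAs from Python's sqlite3 module; the file name and values are examples, not recommendations:

    import sqlite3

    conn = sqlite3.connect('app.db')                    # hypothetical database file
    conn.execute('PRAGMA cache_size = 20000')           # number of pages kept in the in-process page cache
    conn.execute('PRAGMA temp_store = MEMORY')          # keep temporary tables/indices in RAM
    cache_pages = conn.execute('PRAGMA cache_size').fetchone()[0]   # read the setting back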
1
0
0
Are SQLite reads always hitting disk?
2
python,sqlite,sqlalchemy
0
2011-10-10T09:35:00.000
Best practice question about setting Mongo indexes. Mongoengine, the Python ORM wrapper, allows you to set indexes in the Document meta class. When is this meta class introspected and the index added? Can I build a collection via a mongoengine Document class and then add an index after the fact? If I remove the index from the meta class, is the index automatically removed from the corresponding collection? Thanks,
7
6
1.2
0
true
9,082,609
0
2,747
1
0
0
7,758,898
You can add an index at any time and ensureIndex will be called behind the scenes so it will be added if it doesn't exist. If you remove an index from the meta - you will have to use pymongo or the shell to remove the index.
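A hedged sketch of both halves; the Page document and slug field are invented, and the index name passed to drop_index follows MongoDB's usual field_direction naming:

    from mongoengine import Document, StringField

    class Page(Document):
        slug = StringField()
        meta = {'indexes': ['slug']}   # ensureIndex is issued behind the scenes when the collection is used

    # removing an index you have deleted from 'meta' has to be done by hand,
    # e.g. via pymongo or the mongo shell:
    # from pymongo import Connection
    # Connection()['mydb']['page'].drop_index('slug_1')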
1
0
1
How does MongoEngine handle Indexes (creation, update, removal)?
2
python,mongodb,indexing,mongoengine
0
2011-10-13T18:43:00.000
Which is better for production with web2py? Please give more insights. I'm very new to web2py and I am working on a small pharmacy management system. Which is better for production, Postgres or MySQL? If Postgres, a step-by-step installation guide would be appreciated so it works smoothly with web2py. Thanks.
2
0
0
0
false
9,021,132
1
1,454
1
0
0
7,761,339
I say: whatever you can work with from the console. Some events may require fixing the db by hand, and you may also want to have some other ongoing actions in the db that might need to be done outside web2py. PostgreSQL is my choice, as there are far fewer irregular behaviours and thus it's easier to grasp...
1
0
0
which is better for production with web2py?
4
python,web2py
0
2011-10-13T22:50:00.000
I am using memcached on a web site, and I am currently needing to open connections to a database and socket each time a function is called. In the case of the db connection, I am having to decide at runtime which database to connect to. Because of the (default) stateless nature of web apps, I am having to tear down (i.e. close) the connection after each function call. I am wondering if it is possible to store (i.e. cache) the socket connection and the database connections in memcache - so that I have a pool of db connections and a socket connection already open that I can use whenever the function is called. Is this safe? [[Additional Info]] I will be interfacing to memcached primarily with PHP and Python. BTW - memcached is running on the same machine (so physical address issues should not arise).
0
0
1.2
0
true
7,785,856
0
150
2
0
0
7,783,860
Both languages support database connections which live beyond the lifetime of a single request. Don't use memcache for that!
1
0
0
Is it safe to store a connection (effectively a pointer) in memcache?
2
php,python,memcached
0
2011-10-16T11:03:00.000
I am using memcached on a web site, and I am currently needing to open connections to a database and socket each time a function is called. In the case of the db connection, I am having to decide at runtime which database to connect to. Because of the (default) stateless nature of web apps, I am having to tear down (i.e. close) the connection after each function call. I am wondering if it is possible to store (i.e. cache) the socket connection and the database connections in memcache - so that I have a pool of db connections and a socket connection already open that I can use whenever the function is called. Is this safe? [[Additional Info]] I will be interfacing to memcached primarily with PHP and Python. BTW - memcached is running on the same machine (so physical address issues should not arise).
0
0
0
0
false
7,785,929
0
150
2
0
0
7,783,860
"I am wondering if it is possible to store (i.e. cache) the socket connection and the database connections in memcache" - No.
1
0
0
Is it safe to store a connection (effectively a pointer) in memcache?
2
php,python,memcached
0
2011-10-16T11:03:00.000
I am building a web application that allows a user to upload an image. When the image is uploaded, it needs to be resized to one or more sizes, each of which needs to be sent to Amazon s3 for storage. Metadata and urls for each size of the image are stored in a single database record on the web server. I'm using a message queue to perform the resizing and uploading asynchronously (as there is potential for large images and multiple resizes per request). When the resize/upload task completes, the database record needs to be updated with the url. My problem is that the worker executing the task will not have access to the database. I was thinking of firing off a http callback from the worker back to the web application after the task is complete with the appropriate information for updating the database record. Are there any other alternatives or reasons I should do this another way? I'm using python/pylons for the web backend, mysql for the database and celery/amqp for messaging. Thanks!
0
2
1.2
0
true
7,802,664
1
639
1
0
0
7,802,504
It seems that your goal is not to decouple the database from the MQ, but rather from the workers. As such, you can create another queue that receives completion notifications, and have another single worker that picks up the notifications and updates the database appropriately.
1
0
0
Best practice for decoupling a database from a message queue
1
python,database,pylons,message-queue,celery
0
2011-10-18T04:41:00.000
I use Berkeley DB (BDB) in nginx. When a request arrives, nginx passes the URI as a key to BDB and checks if that key has a value in the BDB file. I actually did this in an example: I add some data to BDB and run nginx, and it's OK; I can access it. But when I add some data to the BDB while it is running under nginx (using Python), I can't get the new data. Yet if I use another Python interpreter to access the BDB file, it actually does have the new data. Steps of the request in nginx: start up nginx, and it will init my plugin (BDB env and init); a request comes in; control passes to the plugin, which checks if the key (URI) has a value. If true, return it, or pass to the ...rest of the process
1
1
1.2
0
true
8,835,081
0
237
1
1
0
7,817,567
It supports: A Single Process With One Thread; A Single Process With Multiple Threads; Groups of Cooperating Processes; Groups of Unrelated Processes.
1
0
0
Does Berkeley DB only support one processor operation
1
python,nginx,berkeley-db
0
2011-10-19T06:52:00.000
Is there a way to execute a DDL script from Python with the kinterbasdb library for a Firebird database? Basically I'd like to replicate the 'isql -i myscript.sql' command.
1
2
0.379949
0
false
7,832,347
0
412
1
0
0
7,825,066
It has been a while since I used kinterbasdb, but as far as I know you should be able to do this with any query command which can also be used for INSERT, UPDATE and DELETE (ie nothing that produces a resultset). So Connection.execute_immediate and Cursor.execute should work. Did you actually try this. BTW: With Firebird it is advisable not to mix DDL and DML in one transaction. EDIT: I just realised that you might have meant a full DDL script with multiple statements, if that is what you mean, then: no you cannot, you need to execute each statement individually. You might be able to use an EXECUTE BLOCK statement, but you may need to modify your script so much that it would be easier to simply try to split the actual script into individual statements.
1
0
0
How to run DDL script with kinterbasdb
1
python,firebird,ddl,kinterbasdb
0
2011-10-19T16:58:00.000
I'm very new to Python and I'm trying to write a sort of recipe organizer to get acquainted with the language. Basically, I am unsure how I should be storing the recipes. For now, the information I want to store is: Recipe name Ingredient names Ingredient quantities Preparation I've been thinking about how to do this with the built-in sqlite3, but I know nothing about database architecture, and haven't been able to find a good reference. I suppose one table would contain recipe names and primary keys. Preparation could be in a different table with the primary key as well. Would each ingredient/quantity pair need its own table? In other words, there would be a table for ingredientNumberOne, and each recipe's first ingredient, with the quantity, would go in there. Then each time a recipe comes along with more ingredients than there are tables, a new table would be created. Am I even correct in assuming that sqlite3 is sufficient for this task?
0
2
0.099668
0
false
7,827,955
0
2,639
1
0
0
7,827,859
Just a general data modeling concept: you never want to name anything "...NumberOne", "...NumberTwo". Data models designed in this way are very difficult to query. You'll ultimately need to visit each of N tables for 1 to N ingredients. Also, each table in the model would ultimately have the same fields making maintenance a nightmare. Rather, just have one ingredient table that references the "recipe" table. Ultimately, I just realized this doesn't exactly answer the question, but you could implement this solution in Sqlite. I just get worried when good developers start introducing bad patterns into the data model. This comes from a guy who's been on both sides of the coin.
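To make the single-ingredient-table idea concrete, a small sqlite3 schema sketch (the column choices are just one possibility):

    import sqlite3

    conn = sqlite3.connect('recipes.db')
    conn.executescript('''
        CREATE TABLE IF NOT EXISTS recipe (
            id          INTEGER PRIMARY KEY,
            name        TEXT NOT NULL,
            preparation TEXT
        );
        CREATE TABLE IF NOT EXISTS ingredient (
            id        INTEGER PRIMARY KEY,
            recipe_id INTEGER NOT NULL REFERENCES recipe(id),
            name      TEXT NOT NULL,
            quantity  TEXT
        );
    ''')
    conn.commit()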
1
0
0
How to store recipe information with Python
4
python,database,database-design
0
2011-10-19T20:42:00.000
I created a simple bookmarking app using django which uses sqlite3 as the database backend. Can I upload it to appengine and use it? What is "Django-nonrel"?
3
5
0.761594
0
false
7,838,935
1
582
1
1
0
7,838,667
Unfortunately, no you can't. Google App Engine does not allow you to write files, and that is needed by SQLite. Until recently, it had no support of SQL at all, preferring a home-grown solution (see the "CAP theorem" as for why). This motivated the creation of projects like "Django-nonrel" which is a version of Django that does not require a relational database. Recently, they opened a beta service that proposes a MySQL database. But beware that it is fundamentally less reliable, and that it is probably going to be expensive. EDIT: As Nick Johnson observed, this new service (Google Cloud SQL) is fundamentally less scalable, but not fundamentally less reliable.
1
0
0
Can I deploy a django app which uses sqlite3 as backend on google app engine?
1
python,django,google-app-engine,web-applications,sqlite
0
2011-10-20T15:52:00.000
I am using GeoDjango with PostGIS, and I have run into trouble with how to get the nearest record to the given coordinates from my Postgres db table.
13
2
0.066568
0
false
7,904,142
0
5,401
1
0
0
7,846,355
I have no experience with GeoDjango, but on PostgreSQL/PostGIS you have the st_distance(..) function. So, you can order your results by st_distance(geom_column, your_coordinates) asc and see what are the nearest rows. If you have plain coordinates (no postgis geometry), you can convert your coordinates to a point with the geometryFromText function. Is that what you were looking for? If not, try to be more explicit.
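A raw-SQL sketch of that ordering run through psycopg2, using ST_MakePoint to build the point from plain coordinates; the places table, geom column and SRID 4326 are assumptions:

    import psycopg2

    def nearest_places(cur, lon, lat, limit=1):
        cur.execute("""
            SELECT id, ST_Distance(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326)) AS dist
            FROM places
            ORDER BY dist
            LIMIT %s
        """, (lon, lat, limit))
        return cur.fetchall()     # rows ordered from nearest to farthest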
1
0
0
How can I query the nearest record in a given coordinates(latitude and longitude of string type)?
6
python,postgresql,postgis,geodjango
0
2011-10-21T07:36:00.000
What's the best combination of tools to import a daily data feed (in .CSV format) into an MS SQL Server table? Environment and acceptable tools: Windows 2000/XP, Ruby or Python. MS SQL Server is on a remote server; the importing process has to be done on a Windows client machine.
0
0
0
0
false
7,847,885
0
3,142
1
0
0
7,847,818
And what about DTS services? DTS has been an integral part of MS SQL Server since early versions, and it allows you to import text-based data into server tables.
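The question also allows a programmatic route in Python. As an alternative to DTS, here is a rough sketch using pyodbc and the csv module; the connection string, table, and column names are placeholders.

```python
import csv
import pyodbc

# Connection details and table/column names are placeholders.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=remotehost;DATABASE=feeds;UID=user;PWD=secret"
)
cur = conn.cursor()

with open("daily_feed.csv", "rb") as f:   # Python 2: open CSV files in binary mode
    reader = csv.reader(f)
    next(reader)                          # skip the header row
    rows = [tuple(r) for r in reader]

# Insert all rows with parameter placeholders.
cur.executemany(
    "INSERT INTO daily_feed (col_a, col_b, col_c) VALUES (?, ?, ?)", rows
)
conn.commit()
```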
1
0
0
Import CSV to MS SQL Server programmatically
4
python,sql-server,ruby,windows,csv
0
2011-10-21T10:00:00.000
I'd like to open the chromium site data (in ~/.config/chromium/Default) with python-sqlite3 but it gets locked whenever chromium is running, which is understandable since transactions may be made. Is there a way to open it in read-only mode, ensuring that I can't corrupt the integrity of the db while chromium is using it?
15
6
1
0
false
7,857,866
0
7,130
1
0
0
7,857,755
Chromium is holding a database lock for long periods of time? Yuck! That's really not a very good idea at all. Still, not your fault… You could try just copying the database file (e.g., with the system utility cp) and using that snapshot for reading purposes; SQLite keeps all its committed state in a single file per database. Yes, there's a chance of seeing a partial transaction, but you will definitely not have lock problems on Unix as SQLite definitely doesn't use mandatory locks. (This might well not work on Windows due to the different locking scheme there.)
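A small sketch of the copy-then-read approach; the profile path and the query against Chromium's urls table are assumptions about the profile layout.

```python
import os
import shutil
import sqlite3
import tempfile

src = os.path.expanduser("~/.config/chromium/Default/History")
snapshot = os.path.join(tempfile.mkdtemp(), "History-snapshot")

# Copy the committed database state and read only from the snapshot,
# so the live file Chromium has open is never touched.
shutil.copy(src, snapshot)

conn = sqlite3.connect(snapshot)
rows = conn.execute("SELECT url, title FROM urls LIMIT 10").fetchall()
conn.close()
```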
1
0
0
Is it possible to open a locked sqlite database in read only mode?
3
python,database,sqlite
0
2011-10-22T06:05:00.000
This is what I have: Ubuntu 11.10, Django 1.3, Python 2.7. What I want to do is build an app that is similar to TopCoder, and I have the skeletal version of the app sketched out. The basic requirements would be: 1. Saving the code. 2. Saving the user name and ranks (user profile). 3. It should allow a teacher to create multiple-choice questions too (similar to Google Docs). I have basic knowledge of Django and have built a couple of (basic) apps before. Rather than building an online tool, is it possible to build something very similar to conf2py, which sits on top of web2py, in Django? Let's call this small project examPy (I know, very original); is it possible to build an app that acts more like a plug-in to Django, or is my concept of Django absolutely wrong? The primary question being: as I want to learn a new DB and have worked with Postgres in Django, should I choose CouchDB or MongoDB for Django? Answers can be explanations or links to documentation or blogs that cover the pros and cons.
3
3
0.197375
0
false
10,204,764
1
2,560
1
0
0
7,859,775
I've used MongoEngine with Django, but you need to create a file specifically for Mongo documents, e.g. mongo_models.py. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates what's stored in Mongo. Django forms are designed to plug into any data back end (with a bit of craft). If you go this route you can dodge django-nonrel, which is still not part of Django 1.4. In addition, I believe django-nonrel is on hiatus right now. I've used both CouchDB and Mongo extensively. CouchDB has a lovely interface. My colleague is working on something similar for Mongo. Mongo's map and reduce are far faster than CouchDB's. Mongo is more responsive at loading and retrieving data. The Python libraries for Mongo are easier to get working with (both pymongo and MongoEngine are excellent). Be sure you read the Mongo production recommendations! Do not run one instance on the same node as Django, or prepare to be savagely burned when traffic peaks. Mongo works great with Memcache/Redis, where one can store reduced data for rapid lookups. BEWARE: if you have very well defined and structured data that can be described in documents or models, then don't use Mongo. It's not designed for that, and something like PostgreSQL will work much better. I use PostgreSQL for relational or well structured data because it's good for that: small memory footprint and good response. I use Redis to cache or operate on in-memory queues/lists because it's very good for that: great performance, providing you have the memory to cope with it. I use Mongo to store large JSON documents and to perform map and reduce on them (if needed) because it's very good for that. Be sure to use indexing on certain columns if you can, to speed up lookups. Don't use a circle to fill a square hole. It won't fill it.
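A minimal sketch of the pattern described (documents kept in their own module, plus a plain Django form whose save() writes to Mongo), assuming MongoEngine; class and field names are invented.

```python
# mongo_models.py -- Mongo documents live apart from the relational models
from mongoengine import Document, StringField, IntField, connect

connect("scoreboard")          # assumes a local mongod instance

class CodeSubmission(Document):
    username = StringField(required=True)
    problem = StringField(required=True)
    score = IntField(default=0)

# forms.py -- a plain Django form whose save() targets Mongo
from django import forms

class CodeSubmissionForm(forms.Form):
    username = forms.CharField()
    problem = forms.CharField()
    score = forms.IntegerField()

    def save(self):
        # Insert the validated form data as a Mongo document.
        return CodeSubmission(**self.cleaned_data).save()
```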
1
0
0
Mongo DB or Couch DB with django for building an app that is similar to top coder?
3
python,django,mongodb,couchdb
0
2011-10-22T13:09:00.000
There seems to be many choices for Python to interface with SQLite (sqlite3, atpy) and HDF5 (h5py, pyTables) -- I wonder if anyone has experience using these together with numpy arrays or data tables (structured/record arrays), and which of these most seamlessly integrate with "scientific" modules (numpy, scipy) for each data format (SQLite and HDF5).
12
23
1.2
1
true
7,891,137
0
3,647
1
0
0
7,883,646
Most of it depends on your use case. I have a lot more experience dealing with the various HDF5-based methods than traditional relational databases, so I can't comment too much on SQLite libraries for python... At least as far as h5py vs pyTables, they both offer very seamless access via numpy arrays, but they're oriented towards very different use cases. If you have n-dimensional data that you want to quickly access an arbitrary index-based slice of, then it's much simpler to use h5py. If you have data that's more table-like, and you want to query it, then pyTables is a much better option. h5py is a relatively "vanilla" wrapper around the HDF5 libraries compared to pyTables. This is a very good thing if you're going to be regularly accessing your HDF file from another language (pyTables adds some extra metadata). h5py can do a lot, but for some use cases (e.g. what pyTables does) you're going to need to spend more time tweaking things. pyTables has some really nice features. However, if your data doesn't look much like a table, then it's probably not the best option. To give a more concrete example, I work a lot with fairly large (tens of GB) 3- and 4-dimensional arrays of data. They're homogenous arrays of floats, ints, uint8s, etc. I usually want to access a small subset of the entire dataset. h5py makes this very simple, and does a fairly good job of auto-guessing a reasonable chunk size. Grabbing an arbitrary chunk or slice from disk is much, much faster than for a simple memmapped file. (Emphasis on arbitrary... Obviously, if you want to grab an entire "X" slice, then a C-ordered memmapped array is impossible to beat, as all the data in an "X" slice are adjacent on disk.) As a counter example, my wife collects data from a wide array of sensors that sample at minute to second intervals over several years. She needs to store and run arbitrary queries (and relatively simple calculations) on her data. pyTables makes this use case very easy and fast, and still has some advantages over traditional relational databases. (Particularly in terms of disk usage and speed at which a large (index-based) chunk of data can be read into memory)
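As a tiny illustration of the h5py slicing case described above (dataset name and shape invented):

```python
import numpy as np
import h5py

# Write a chunked 3-D array once...
with h5py.File("volume.h5", "w") as f:
    data = np.random.random((50, 256, 256)).astype("float32")
    f.create_dataset("volume", data=data, chunks=True)

# ...then pull an arbitrary slice back without loading the whole array.
with h5py.File("volume.h5", "r") as f:
    subcube = f["volume"][10:20, 100:200, 100:200]   # comes back as a numpy array
```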
1
0
0
exporting from/importing to numpy, scipy in SQLite and HDF5 formats
1
python,sqlite,numpy,scipy,hdf5
0
2011-10-25T01:06:00.000
I have two lists. The first list's elements are Name, Age, Sex and the second list's elements are test, 10, female. I want to insert this data into the database: the first list holds the MySQL column names and the second holds the MySQL column values. I'm trying to build this query: INSERT INTO (LIST1) VALUES (List2) => INSERT INTO table (name,age,sex) VALUES (test,10,female). Is it possible? Thanks
0
0
0
0
false
7,886,073
0
77
1
0
0
7,886,024
Try getting this to work using the MySQL GUI. Once that works properly, you can then try to get it to work with Python, using the SQL statements that worked in MySQL.
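A sketch of the dynamic INSERT the question describes, assuming MySQLdb; note that only the values can be bound as parameters, so the column names from the first list are interpolated and must be trusted. Table name and credentials are placeholders.

```python
import MySQLdb

columns = ["name", "age", "sex"]
values = ["test", 10, "female"]

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="people")
cur = conn.cursor()

# Column names are interpolated (trusted input only!); values go through placeholders.
sql = "INSERT INTO person (%s) VALUES (%s)" % (
    ", ".join(columns),
    ", ".join(["%s"] * len(values)),
)
cur.execute(sql, values)
conn.commit()
```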
1
0
0
related to List (want to insert into database)
2
python
0
2011-10-25T07:32:00.000
The most common SQLite interface I've seen in Python is sqlite3, but is there anything that works well with NumPy arrays or recarrays? By that I mean one that recognizes data types and does not require inserting row by row, and extracts into a NumPy (rec)array...? Kind of like R's SQL functions in the RDB or sqldf libraries, if anyone is familiar with those (they import/export/append whole tables or subsets of tables to or from R data tables).
6
1
0.049958
1
false
12,100,118
0
7,905
1
0
0
7,901,853
This looks a bit old, but is there any reason you cannot just do a fetchall() instead of iterating row by row, and then initialize the NumPy array directly from the full result?
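A sketch of that suggestion, with an invented table and dtype:

```python
import sqlite3
import numpy as np

conn = sqlite3.connect("data.db")
rows = conn.execute("SELECT x, y FROM samples").fetchall()

# Build a structured/record array in one go instead of appending row by row.
arr = np.array(rows, dtype=[("x", "f8"), ("y", "f8")])
```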
1
0
0
NumPy arrays with SQLite
4
python,arrays,sqlite,numpy,scipy
0
2011-10-26T11:15:00.000
I am running Ubuntu, Flask 0.8, mod_wsgi 3 and apache2. When an error occurs, I am unable to get Flask's custom 500 error page to trigger (and not the debug mode output either). It works fine when I run it without WSGI via app.run(debug=True). I've tried setting WSGIErrorOverride to both On and Off in apache settings but same result. Anyone has gotten this issue? Thanks!
2
1
0.197375
0
false
7,942,317
1
873
1
0
0
7,940,745
Are you sure the error is actually coming from Flask if you are getting a generic Apache 500 error page? You should look in the Apache error log to see what error messages are in there first. The problem could be configuration or your WSGI script file being wrong or failing due to wrong sys.path etc.
1
0
0
Using Python Flask, mod_wsgi, apache2 - unable to get custom 500 error page
1
python,apache,wsgi,flask
0
2011-10-29T18:13:00.000
I'm creating a blog using Django. I'm getting an 'operational error: FATAL: role "[database user]" does not exist'. But I have not created any database yet; all I have done is fill in the database details in settings.py. Do I have to create a database using psycopg2? If so, how do I do it? Is it: import psycopg2; psycopg2.connect("dbname=[name] user=[user]")? Thanks in advance.
0
0
0
0
false
7,942,855
1
1,391
2
0
0
7,941,623
Generally, you would create the database externally before trying to hook it up with Django. Is this your private server? If so, there are command-line tools you can use to set up a PostgreSQL user and create a database. If it is a shared hosting situation, you would use CPanel or whatever utility your host provides to do this. For example, when I had shared hosting, I was issued a database user and password by the hosting administrator. Perhaps you were too. Once you have this set up, there are places in your settings.py file to put your username and password credentials, and the name of the database.
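For illustration, the settings.py entry referred to above might look like this on Django 1.3 with PostgreSQL; names and credentials are placeholders, and the database itself still has to be created first.

```python
# settings.py -- Django connects with these credentials; the database itself
# must already exist (created via createdb, cPanel, or your hosting panel).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "blogdb",
        "USER": "bloguser",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "",
    }
}
```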
1
0
0
how do i create a database in psycopg2 and do i need to?
2
python,database,django,psycopg2
0
2011-10-29T20:52:00.000
I'm creating a blog using Django. I'm getting an 'operational error: FATAL: role "[database user]" does not exist'. But I have not created any database yet; all I have done is fill in the database details in settings.py. Do I have to create a database using psycopg2? If so, how do I do it? Is it: import psycopg2; psycopg2.connect("dbname=[name] user=[user]")? Thanks in advance.
0
0
0
0
false
7,941,712
1
1,391
2
0
0
7,941,623
Before connecting to the database, you need to create the database, add a user, and set up access for the user you selected. Refer to the installation/configuration guides for Postgres.
1
0
0
how do i create a database in psycopg2 and do i need to?
2
python,database,django,psycopg2
0
2011-10-29T20:52:00.000
Does a canonical user id exist for a federated user created using STS? When using boto I need a canonical user id to grant permissions to a bucket. Here's a quick tour through my code: I've successfully created temporary credentials using boto's STS module (using a "master" account), and this gives me back: federated_user_arn federated_user_id packed_policy_size access_key secret_key session_token expiration Then I create the bucket using boto: bucket = self.s3_connection.create_bucket('%s_store' % (app_id)) Now I want to grant permissions. I'm left with two choices in boto: add_email_grant(permission, email_address, recursive=False, headers=None) add_user_grant(permission, user_id, recursive=False, headers=None, display_name=None) The first method isn't an option since there isn't an email attached to the federated user, so I look at the second. Here the second parameter ("user_id") is to be "The canonical user id associated with the AWS account you are granting the permission to." But I can't seem to find a way to come up with this for the federated user. Do canonical user ids even exist for federated users? Am I overlooking an easier way to grant permissions to federated users?
2
1
1.2
0
true
8,074,814
1
718
1
0
0
8,032,576
I contacted the author of boto and learned of get_canonical_user_id() on the S3Connection class. This will give you the canonical user ID for the credentials associated with the connection. The connection has to have been used for some operation first (e.g. listing buckets). Very awkward, but possible.
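A rough sketch of that workaround in boto; the security_token keyword and the placeholder credentials are assumptions, and the grant call is the add_user_grant from the question.

```python
import boto

# Values below stand in for the credentials returned by the STS federation call.
access_key = "AKIA..."
secret_key = "..."
session_token = "..."

# Connect as the federated user; the keyword name follows boto 2.x, treat it as an assumption.
fed_conn = boto.connect_s3(access_key, secret_key, security_token=session_token)
fed_conn.get_all_buckets()                       # any S3 call, so a response is available
canonical_id = fed_conn.get_canonical_user_id()  # canonical user id for these credentials

# Grant on the bucket using the master-account connection from the question.
master_conn = boto.connect_s3()                  # uses the master credentials
bucket = master_conn.get_bucket("example_store")
bucket.add_user_grant("READ", canonical_id)
```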
1
0
0
Do AWS Canonical UserIDs exist for AWS Federated Users (temporary security credentials)?
1
python,amazon-s3,amazon-web-services,boto,amazon-iam
0
2011-11-07T03:57:00.000
I store several properties of objects in hashsets. Among other things, something like "creation date". There are several hashsets in the db. So, my question is: how can I find all objects older than a week, for example? Can you suggest an algorithm that is faster than O(n) (the naive implementation)? Thanks, Oles
1
2
1.2
0
true
8,039,797
0
462
1
0
0
8,039,566
My initial thought would be to store the data elsewhere, like a relational database, or possibly in a zset. If you have continuous data (meaning it is recorded at regular intervals), then you could store the hash key as the member and the date (as an int timestamp) as the score. Then you could do a zrank for a particular date, and use zrevrange to query from the first rank to the value you get from zrank.
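A rough sketch of the sorted-set idea in redis-py; it uses ZRANGEBYSCORE rather than the zrank/zrevrange pair mentioned above, and it assumes the redis-py 3.x zadd signature and invented key names.

```python
import time
import redis

r = redis.Redis()

# When an object is created, record its key with the creation time as the score.
r.zadd("objects_by_ctime", {"object:42": time.time()})

# Everything older than a week: scores from -inf up to (now - 7 days).
week_ago = time.time() - 7 * 24 * 3600
old_keys = r.zrangebyscore("objects_by_ctime", "-inf", week_ago)
```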
1
0
0
Redis: find all objects older than
1
python,redis
0
2011-11-07T16:37:00.000
I am running several thousand Python processes on multiple servers which go off, look up a website, do some analysis and then write the results to a central MySQL database. It all works fine for about 8 hours, and then my scripts start to wait for a MySQL connection. On checking top, it's clear that the MySQL daemon is overloaded, as it is using up to 90% of most of the CPUs. When I stop all my scripts, MySQL continues to use resources for some time afterwards. I assume it is still updating the indexes? If so, is there any way of determining which indexes it is working on, or, if not, what it is actually doing? Many thanks in advance.
1
0
0
0
false
9,198,763
0
197
1
0
0
8,048,742
There are a lot of tweaks that can be done to improve the performance of MySQL. Given your workload, you would probably benefit a lot from MySQL 5.5 or higher, which improved performance on multiprocessor machines. Is the machine in question hitting virtual memory? If it is paging out, then the performance of MySQL will be horrible. My suggestions: check the version of MySQL and, if possible, get the latest 5.5 version. Look at the MySQL config file, my.cnf, and make sure that it makes sense on your machine; there are example config files for small, medium, large, etc. machines, and I think the default setup is for a machine with less than 1 GB of RAM. As the other answer suggests, turn on slow query logging.
1
0
0
Python processes and MySQL
2
python,mysql,linux,indexing
0
2011-11-08T10:06:00.000
Sorry, but does this make sense? ORM means Object Relational Mapper, so "Relational" is right there in the name, and NoSQL is not an RDBMS! So why use an ORM with a NoSQL solution? I ask because I keep seeing updates of ORMs for Python!
8
3
0.197375
0
false
8,051,721
0
4,264
3
0
0
8,051,614
Interesting question. Although NoSQL databases do not have a mechanism to identify relationships, that does not mean that there are no logical relationships between the data that you are storing. Most of the time, you are handling and enforcing those relationships manually in code if you're using a NoSQL database. Hence, I feel that ORMs can still help you here. If you do have data that is related, but need to use a NoSQL database, an ORM can still help you maintain clean data. For example, I use Amazon SimpleDB for the lower cost, but my data still has relationships, which need to be maintained. Currently, I'm doing that manually. Maybe an ORM would help me as well.
1
0
0
why the use of an ORM with NoSql (like MongoDB)
3
python,orm,mongodb
0
2011-11-08T14:03:00.000
Sorry, but does this make sense? ORM means Object Relational Mapper, so "Relational" is right there in the name, and NoSQL is not an RDBMS! So why use an ORM with a NoSQL solution? I ask because I keep seeing updates of ORMs for Python!
8
2
0.132549
0
false
8,051,652
0
4,264
3
0
0
8,051,614
ORM is an abstraction layer. Switching to a different engine is much easier when the queries are abstracted away, and hidden behind a common interface (it doesn't always work that well in practice, but it's still easier than without).
1
0
0
why the use of an ORM with NoSql (like MongoDB)
3
python,orm,mongodb
0
2011-11-08T14:03:00.000
Sorry, but does this make sense? ORM means Object Relational Mapper, so "Relational" is right there in the name, and NoSQL is not an RDBMS! So why use an ORM with a NoSQL solution? I ask because I keep seeing updates of ORMs for Python!
8
13
1.2
0
true
8,051,825
0
4,264
3
0
0
8,051,614
First, they are not ORMs (since there are no relations involved); they are ODMs (Object Document Mappers). The main use of these ODM frameworks is the same as one common benefit of an ORM: providing an abstraction over your data model, so you can model your data in your application independently of the target software. Most ODMs are built to leverage existing language features and familiar patterns for manipulating data, instead of making you learn the query syntax of the new software. When I use Mongoid (a Ruby ODM for Mongo), I can query Mongo much the way I do with ActiveModel (mostly). Since document stores don't have relations, these ODMs provide a way to define relations in your models and simulate the relationships. This is all abstracted from developers, so they can code the same way they do with relational data.
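To make the "simulated relationship" point concrete, here is an illustrative sketch with MongoEngine (a Python ODM); the models and the ReferenceField usage are invented for illustration, not taken from the answer.

```python
from mongoengine import Document, StringField, ReferenceField, connect

connect("library")   # assumes a local mongod instance

class Author(Document):
    name = StringField(required=True)

class Book(Document):
    title = StringField(required=True)
    author = ReferenceField(Author)   # the ODM stores a reference, MongoDB has no JOIN

author = Author(name="Ada").save()
Book(title="Notes", author=author).save()

# Query in familiar, ORM-like syntax even though the store itself has no relations.
books = Book.objects(author=author)
```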
1
0
0
why the use of an ORM with NoSql (like MongoDB)
3
python,orm,mongodb
0
2011-11-08T14:03:00.000
I'm thinking of creating a web application with CakePHP but consuming a Python App Engine web service. But to install CakePHP etc., I need to configure the database. App Engine uses another kind of data storage, which is different from MySQL etc. I was thinking of storing the data in App Engine, using the Python web services, with the CakePHP application communicating with the web service to insert and retrieve data. Is there any good resource for this, or is it impossible? Obs: I am also open to the possibility of developing the web application completely in Python running on App Engine, if anyone has a good resource. Thanks.
0
0
0
0
false
8,070,747
1
378
1
1
0
8,069,649
You cannot run PHP on GAE. And if you run PHP somewhere else, it is a bad architecture to go over the internet for your data. It will be slow and a nightmare to develop in. You should store your data where you run your PHP, unless you must have a distributed, globally scaling architecture, which as far as I understand is not the case here.
1
0
0
Connect appengine with cakephp
5
php,python,google-app-engine,cakephp
0
2011-11-09T18:21:00.000
I am currently working on a pyramid system that uses sqlalchemy. This system will include a model (let's call it Base) that is stored in a database table. This model should be extensible by the user on runtime. Basically, the user should be able to subclass the Base and create a new model (let's call this one 'Child'). Childs should be stored in another database table. All examples available seem to handle database reflection on a predefined model. What would be the best way to generate complete model classes via database reflection?
3
4
0.197375
0
false
8,125,931
1
1,319
1
0
0
8,122,078
This doesn't seem to have much to do with "database reflection", but rather dynamic table creation. This is a pretty dangerous operation and generally frowned upon. You should try to think about how to model the possible structure your users would want to add to the Base and design your schema around that. Sometimes these flexible structures can benefit a lot from vertical tables when you don't know what the columns may be. Don't forget that there's an entire class of data storage systems out there that provide more flexible support for "schemaless" models. Something like Mongo or ZODB might make more sense here.
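Not from the answer itself, but to make the vertical-table suggestion concrete, here is a sketch in SQLAlchemy declarative form; the Item/ItemAttribute names and columns are invented.

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    attributes = relationship("ItemAttribute", backref="item")

class ItemAttribute(Base):
    """One row per user-defined field: no new tables are needed at runtime."""
    __tablename__ = "item_attribute"
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey("item.id"), nullable=False)
    key = Column(String(100), nullable=False)
    value = Column(String(255))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Item(name="widget",
                 attributes=[ItemAttribute(key="color", value="blue")]))
session.commit()
```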
1
0
0
Model Creation by SQLAlchemy database reflection
4
python,reflection,sqlalchemy,pyramid
0
2011-11-14T13:09:00.000
How can we remove the database name and username that appear in the top left-hand corner of the OpenERP window, after the OpenERP logo? In which file do we need to make changes to remove that? Thanks, Sameer
0
0
0
0
false
12,295,904
1
133
1
0
0
8,151,033
It's in the openerp-web module. The location depends on your particular configuration. The relevant code can be found in the file addons/web/static/src/xml/base.xml. Search for header_title and edit the contents of the h1 tag of that class.
1
0
0
Removing database name and username from the top left-hand corner
2
python,openerp
0
2011-11-16T11:37:00.000
I want to convert an xlsx file to xls format using Python. The reason is that I'm using the xlrd library to parse xls files, but xlrd is not able to parse xlsx files. Switching to a different library is not feasible for me at this stage, as the entire project uses xlrd, so a lot of changes would be required. So, is there any way I can programmatically convert an xlsx file to xls using Python? Please help. Thank you
0
0
0
0
false
21,996,139
0
1,806
1
0
0
8,151,243
xlrd 0.9.2 can extract data from Excel spreadsheets (.xls and .xlsx, versions 2.0 onwards) on any platform.
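If upgrading xlrd is acceptable, reading the .xlsx directly keeps the existing xlrd-based code path; a small sketch (file name and sheet layout invented):

```python
import xlrd

# A newer xlrd (0.9.x, as noted above) opens .xlsx as well as .xls
# with the same API the project already uses.
book = xlrd.open_workbook("report.xlsx")
sheet = book.sheet_by_index(0)

for row_idx in range(sheet.nrows):
    values = [sheet.cell_value(row_idx, col) for col in range(sheet.ncols)]
    print(values)
```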
1
0
0
xlrd library not working with xlsx files. Any way to convert xlsx to xls using Python?
2
python,excel,xls,xlsx,xlrd
0
2011-11-16T11:54:00.000
if I use cx_Oracle 5.0.4, I can connect from python console, and works under apache+django+mod_wsgi but when I update cx_Oracle 5.1.1, I can connect from python console, BUT same code doesn't work under apache+django+mod_wsgi File "C:\Python27\lib\site-packages\django\db\backends\oracle\base.py", line 24, in raise ImproperlyConfigured("Error loading cx_Oracle module: %s" % e) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: Error loading cx_Oracle module: DLL load failed: The specified module could not be found. PS: python 2.7 PSS: I have instaled MSVC 2008 Redistributable x86
1
1
0.197375
0
false
8,158,089
1
1,227
1
0
0
8,151,815
Need a solution as well. I have the same setup on WinXP (Apache 2.2.21/ mod_wsgi 3.3/ python 2.7.2/ cx_Oracle 5.x.x). I found that cx_Oracle 5.1 also fails with the same error. Only 5.0.4 works. Here is the list of changes that were made from 5.0.4 to 5.1: Remove support for UNICODE mode and permit Unicode to be passed through in everywhere a string may be passed in. This means that strings will be passed through to Oracle using the value of the NLS_LANG environment variable in Python 3.x as well. Doing this eliminated a bunch of problems that were discovered by using UNICODE mode and also removed an unnecessary restriction in Python 2.x that Unicode could not be used in connect strings or SQL statements, for example. Added support for creating an empty object variable via a named type, the first step to adding full object support. Added support for Python 3.2. Account for lib64 used on x86_64 systems. Thanks to Alex Wood for supplying the patch. Clear up potential problems when calling cursor.close() ahead of the cursor being freed by going out of scope. Avoid compilation difficulties on AIX5 as OCIPing does not appear to be available on that platform under Oracle 10g Release 2. Thanks to Pierre-Yves Fontaniere for the patch. Free temporary LOBs prior to each fetch in order to avoid leaking them. Thanks to Uwe Hoffmann for the initial patch.
1
0
0
cx_Oracle 5.1.1 under apache+mod_wsgi
1
python,apache,cx-oracle
0
2011-11-16T12:36:00.000
I'm working with SQLAlchemy for the first time and was wondering... generally speaking, is it enough to rely on Python's default equality semantics when working with SQLAlchemy, versus id (primary key) equality? In other projects I've worked on in the past using ORM technologies like Java's Hibernate, we'd always override .equals() to check for equality of an object's primary key/id, but when I look back I'm not sure this was always necessary. In most if not all cases I can think of, you only ever had one reference to a given object with a given id. And that object was always the attached object, so technically you'd be able to get away with reference equality. Short question: should I be overriding __eq__() and __hash__() for my business entities when using SQLAlchemy?
11
1
0.099668
0
false
8,179,370
0
4,581
1
0
0
8,179,068
I had a few situations where my SQLAlchemy application would load multiple instances of the same object (multithreading / different SQLAlchemy sessions...). It was absolutely necessary to override __eq__() for those objects or I would get various problems. This could be a problem in my application design, but it probably doesn't hurt to override __eq__() just to be sure.
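For reference, a minimal sketch of primary-key-based __eq__()/__hash__() on a declarative model; whether you want this at all depends on the identity-map considerations raised in the question, and the model here is invented.

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

    def __eq__(self, other):
        # Two User rows count as equal only if they carry the same primary key.
        return (isinstance(other, User)
                and self.id is not None
                and self.id == other.id)

    def __ne__(self, other):          # Python 2 does not derive != from ==
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.id)
```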
1
0
0
sqlalchemy id equality vs reference equality
2
python,sqlalchemy
0
2011-11-18T07:24:00.000
Essentially I have a large database of transactions and I am writing a script that will take some personal information and match a person to all of their past transactions. So I feed the script a name and it returns all of the transactions that it has decided belong to that customer. The issue is that I have to do this for almost 30k people and the database has over 6 million transaction records. Running this on one computer would obviously take a long time, I am willing to admit that the code could be optimized but I do not have time for that and I instead want to split the work over several computers. Enter Celery: My understanding of celery is that I will have a boss computer sending names to a worker computer which runs the script and puts the customer id in a column for each transaction it matches. Would there be a problem with multiple worker computers searching and writing to the same database? Also, have I missed anything and/or is this totally the wrong approach? Thanks for the help.
2
2
1.2
0
true
8,230,713
0
107
1
1
0
8,230,617
No, there wouldn't be any problem with multiple worker computers searching and writing to the same database, since MySQL is designed to handle this. Your approach is good.
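A minimal sketch of how the work could be farmed out with Celery, assuming a broker reachable from every machine and a hypothetical match_transactions() helper that performs the MySQL lookups and updates described in the question.

```python
# tasks.py -- runs on each worker machine
from celery import Celery

app = Celery("matcher", broker="amqp://guest@broker-host//")

@app.task
def match_person(name):
    # match_transactions is a stand-in for the existing matching script:
    # it scans the transactions table and writes the customer id back
    # for every row it decides belongs to `name`.
    from matching import match_transactions
    return match_transactions(name)

# Dispatcher, run once on the "boss" machine:
# for name in customer_names:
#     match_person.delay(name)
```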
1
0
0
relatively new programmer interested in using Celery, is this the right approach
1
python,mysql,celery
0
2011-11-22T16:56:00.000
I'm currently trying to build and install the mySQLdb module for Python, but the command python setup.py build gives me the following error running build running build_py copying MySQLdb/release.py -> build/lib.macosx-10.3-intel-2.7/MySQLdb error: could not delete 'build/lib.macosx-10.3-intel-2.7/MySQLdb/release.py': Permission denied I verified that I'm a root user and when trying to execute the script using sudo, I then get a gcc-4.0 error: running build running build_py copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.7/MySQLdb running build_ext building '_mysql' extension gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.3-fat-2.7/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64 unable to execute gcc-4.0: No such file or directory error: command 'gcc-4.0' failed with exit status 1 Which is odd, because I'm using XCode 4 with Python 2.7. I've tried the easy_install and pip methods, both of which dont work and give me a permission denied error on release.py. I've chmodded that file to see if that was the problem but no luck. Thoughts?
4
1
1.2
0
true
8,260,644
0
912
1
1
0
8,236,963
Make sure that gcc-4.0 is in your PATH. Alternatively, you can create an alias from gcc to gcc-4.0. Take care about 32-bit and 64-bit versions: Mac OS X is a 64-bit operating system, and you should set the right flags to make sure you're compiling for the 64-bit architecture.
1
0
0
Errors When Installing MySQL-python module for Python 2.7
2
python,mysql,django,mysql-python
0
2011-11-23T03:28:00.000