Dataset schema (each record below lists its field values, pipe-separated, in this order):

| Column | Type | Values / lengths |
|---|---|---|
| Question | string | lengths 25 to 7.47k |
| Q_Score | int64 | 0 to 1.24k |
| Users Score | int64 | -10 to 494 |
| Score | float64 | -1 to 1.2 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| is_accepted | bool | 2 classes |
| A_Id | int64 | 39.3k to 72.5M |
| Web Development | int64 | 0 to 1 |
| ViewCount | int64 | 15 to 1.37M |
| Available Count | int64 | 1 to 9 |
| System Administration and DevOps | int64 | 0 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| Q_Id | int64 | 39.1k to 48M |
| Answer | string | lengths 16 to 5.07k |
| Database and SQL | int64 | 1 to 1 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Title | string | lengths 15 to 148 |
| AnswerCount | int64 | 1 to 32 |
| Tags | string | lengths 6 to 90 |
| Other | int64 | 0 to 1 |
| CreationDate | string | lengths 23 to 23 |
I've produced a few Django sites but up until now I have been mapping individual views and URLs in urls.py.
Now I've tried to create a small custom CMS but I'm having trouble with the URLs. I have a database table (SQLite3) which contains the content for the pages: a column for the header, one for the right menu, one for the content, and so on. I also have a column for the URL. How do I get Django to pull the information in the database table for the URL stored in that column, rather than having to code a view and a URL pattern for every page (which obviously defeats the purpose of a CMS)?
If someone can just point me at the right part of the docs or a site which explains this it would help a lot.
Thanks all. | 1 | 1 | 0.099668 | 0 | false | 1,563,359 | 1 | 3,215 | 1 | 0 | 0 | 1,563,088 | Your question is a little bit twisted, but I think what you're asking for is something similar to how django.contrib.flatpages handles this. Basically it uses middleware to catch the 404 error and then looks to see if any of the flatpages have a URL field that matches.
We did this on one site where all of the URLs were made "search engine friendly". We overrode the save() method, munged the title into this_is_the_title.html (or whatever) and then stored that in a separate table that had a URL => object class/id mapping, with our own fallback middleware doing the lookup (this means it is listed before flatpages in the middleware list). | 1 | 0 | 0 | URLs stored in database for Django site | 2 | python,database,django,url,content-management-system | 0 | 2009-10-13T21:43:00.000
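Below is a minimal sketch of the flatpages-style fallback this answer describes, assuming a hypothetical Page model with the URL stored in a column; the model fields, template name and middleware class are placeholders, not code from the answerer.

```python
# models.py (hypothetical): one row per CMS page, with the URL kept in a column.
from django.db import models

class Page(models.Model):
    url = models.CharField(max_length=200, db_index=True)
    header = models.TextField(blank=True)
    right_menu = models.TextField(blank=True)
    content = models.TextField(blank=True)

# middleware.py (hypothetical): mirror of the flatpages idea -- let normal URL
# resolution run first, and only when it produces a 404 try the database.
from django.shortcuts import render

class PageFallbackMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        if response.status_code != 404:
            return response                      # some other view handled the URL
        try:
            page = Page.objects.get(url=request.path_info)
        except Page.DoesNotExist:
            return response                      # a genuine 404
        return render(request, "cms/page.html", {"page": page})
```

The middleware class would then be added to the MIDDLEWARE setting, after the usual entries, so it only runs when nothing else matched.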
I need to generate a list of insert statements (for PostgreSQL) from HTML files. Is there a library available for Python to help me properly escape and quote the names/values? In PHP I use PDO to do the escaping and quoting; is there any equivalent library for Python?
Edit: I need to generate a file with sql statements for execution later | 17 | 1 | 0.039979 | 0 | false | 1,564,226 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | Quoting parameters manually is in general a bad idea. What if there is a mistake in the escaping rules? What if the escaping doesn't match the version of the DB being used? What if you just forget to escape some parameter, or erroneously assume it can't contain data requiring escaping? All of that may cause an SQL injection vulnerability. Also, the DB can have restrictions on SQL statement length while you need to pass a large data chunk for a LOB column. That's why the Python DB API and most databases allow passing parameters separately from the statement (the DB API module will transparently escape parameters itself if the database doesn't support this, as early MySQLdb did):
.execute(operation[,parameters]) | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
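To make the .execute(operation[, parameters]) point concrete, here is a small sketch using psycopg2 (my own example; the connection string, table and columns are placeholders): the driver does the escaping and quoting, including the apostrophe in the second row.

```python
import psycopg2

conn = psycopg2.connect("dbname=testdb")   # placeholder DSN
cur = conn.cursor()

rows = [("Example title", "http://example.com"), ("O'Reilly page", "http://example.org")]
for title, url in rows:
    # %s placeholders: the values travel separately and are escaped by the driver
    cur.execute("INSERT INTO pages (title, url) VALUES (%s, %s)", (title, url))

conn.commit()
cur.close()
conn.close()
```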
I need to generate a list of insert statements (for PostgreSQL) from HTML files. Is there a library available for Python to help me properly escape and quote the names/values? In PHP I use PDO to do the escaping and quoting; is there any equivalent library for Python?
Edit: I need to generate a file with sql statements for execution later | 17 | 2 | 0.07983 | 0 | false | 1,563,981 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | For robustness, I recommend using prepared statements to send user-entered values, no matter what language you use. :-) | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
I need to generate a list of insert statements (for PostgreSQL) from HTML files. Is there a library available for Python to help me properly escape and quote the names/values? In PHP I use PDO to do the escaping and quoting; is there any equivalent library for Python?
Edit: I need to generate a file with sql statements for execution later | 17 | 13 | 1 | 0 | false | 1,564,224 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | SQLAlchemy provides a robust expression language for generating SQL from Python.
Like every other well-designed abstraction layer, however, the queries it generates insert data through bind variables rather than through attempting to mix the query language and the data being inserted into a single string. This approach avoids massive security vulnerabilities and is otherwise The Right Thing. | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
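A small sketch of the SQLAlchemy expression language mentioned in this answer (my illustration, with a placeholder connection URL and table): the INSERT is generated from the table definition and the data travels as bind parameters rather than being spliced into the SQL string.

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("postgresql+psycopg2://user:pass@localhost/testdb")  # placeholder URL
metadata = MetaData()
pages = Table(
    "pages", metadata,
    Column("id", Integer, primary_key=True),
    Column("title", String(200)),
    Column("url", String(500)),
)
metadata.create_all(engine)

stmt = pages.insert().values(title="O'Reilly", url="http://example.org")
print(stmt)   # shows the generated INSERT with named bind parameters; values are not interpolated

with engine.begin() as conn:
    conn.execute(stmt)
```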
When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetchall() to get a description of the table, store it in memory, and then match the column names from cursor.description to that overall table description? | 5 | 5 | 0.462117 | 0 | false | 1,583,379 | 0 | 3,955 | 1 | 0 | 0 | 1,583,350 | No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication what the database column type was.
sqlite3 only uses sqlite3_column_decltype and sqlite3_column_type in one place each, and neither is accessible to the Python application - so there is no "direct" way of the kind you may have been looking for. | 1 | 0 | 0 | sqlite3 and cursor.description | 2 | python,sqlite,python-db-api | 0 | 2009-10-17T22:11:00.000
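A small sketch of the two approaches contrasted above (my example, with a throwaway in-memory table): inferring types from the Python objects in a fetched row, and reading the declared types from PRAGMA table_info.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, score REAL, blob_col BLOB)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 9.5, x'00ff')")

cur = conn.execute("SELECT * FROM users")
row = cur.fetchone()
names = [d[0] for d in cur.description]          # only the names are populated; the rest is None
inferred = {name: (type(value).__name__ if value is not None else "NULL")
            for name, value in zip(names, row)}
print(inferred)   # {'id': 'int', 'name': 'str', 'score': 'float', 'blob_col': 'bytes'}

# The declared column types are still available from the schema if you need them:
declared = {r[1]: r[2] for r in conn.execute("PRAGMA table_info(users)")}
print(declared)   # {'id': 'INTEGER', 'name': 'TEXT', 'score': 'REAL', 'blob_col': 'BLOB'}
```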
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to more advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS.
By using only a limited set of features commonly available in many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are written specifically to run on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | 2 | 0 | 0 | 0 | false | 1,586,035 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can do. | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to more advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS.
By using only a limited set of features commonly available in many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are written specifically to run on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | 2 | 2 | 0.099668 | 0 | false | 1,586,105 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO principles. Figure out what kind of API your persistence layer will need to provide.
You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (which will be calling adapter methods to load and store data), these classes are identical. Writing good test coverage should be easy, and good tests will make life a lot easier. Deciding how much abstraction is provided by the persistence adapters is the trickiest part, and is largely application-specific.
As for whether this is worth the trouble: it depends. It's a good exercise if you've never done it before. It may be premature if you don't actually know for sure what your target databases are.
A good strategy might be to implement two persistence adapters to start. Let's say you expect the most common back end will be MySQL. Implement one adapter tuned for MySQL. Implement a second that uses your database abstraction library of choice, and uses only standard and widely available SQL features. Now you've got support for a ton of back ends (everything supported by your abstraction library of choice), plus tuned support for mySQL. If you decide you then want to provide an optimized adapter from Oracle, you can implement it at your leisure, and you'll know that your application can support swappable database back-ends. | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000 |
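A rough sketch of what such isomorphic persistence adapters could look like (my illustration, not the answerer's code; the interface, table names and the MySQL-only upsert are placeholders): the model code depends only on the abstract interface, while each subclass is free to use its engine's features.

```python
from abc import ABC, abstractmethod

class PersistenceAdapter(ABC):
    @abstractmethod
    def save_user(self, user): ...
    @abstractmethod
    def find_user(self, user_id): ...

class GenericSQLAdapter(PersistenceAdapter):
    """Portable fallback: only standard, widely available SQL (qmark paramstyle shown for brevity)."""
    def __init__(self, connection):
        self.conn = connection
    def save_user(self, user):
        self.conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", (user["id"], user["name"]))
    def find_user(self, user_id):
        return self.conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()

class MySQLAdapter(PersistenceAdapter):
    """Tuned adapter: free to use MySQL-only SQL (e.g. ON DUPLICATE KEY UPDATE) internally."""
    def __init__(self, connection):
        self.conn = connection
    def save_user(self, user):
        cur = self.conn.cursor()
        cur.execute(
            "INSERT INTO users (id, name) VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE name = VALUES(name)",
            (user["id"], user["name"]),
        )
    def find_user(self, user_id):
        cur = self.conn.cursor()
        cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
        return cur.fetchone()
```

Because both classes expose the same two methods, the test suite and the model code do not care which one is plugged in.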
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL generation is not the right way of writing database applications that should run on multiple database engines, especially when you throw in really expensive databases like Oracle. And this is more or less global, it doesn't apply to only a few languages.
Just a simple example, using query pagination and insertion: when using Oracle one could use the FIRST_ROWS and APPEND hints (where appropriate). Going to more advanced examples, I could mention putting lots of stored procedures/packages in the database where it makes sense. And those are different for every RDBMS.
By using only a limited set of features commonly available in many RDBMSes, one doesn't exploit the possibilities that those expensive and advanced database engines have to offer.
So getting back to the heart of the question: how do you develop PHP, Python, Ruby etc. applications that should run on multiple database engines?
I am especially interested in hearing how you separate/use the queries that are written specifically to run on a single RDBMS. Say you've got a statement that should run on 3 RDBMSes: Oracle, DB2 and SQL Server, and for each of these you write a separate SQL statement in order to make use of all the features this RDBMS has to offer. How do you do it?
Leaving this aside, what is your opinion on walking this path? Is it worth it in your experience? Why? Why not? | 2 | 2 | 1.2 | 0 | true | 1,587,887 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | You cannot eat your cake and have it too; choose one of the following options.
Use your database abstraction layer whenever you can and in the rare cases when you have a need for a hand-made query (eg. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions that you database has to offer. In this case deploying the application on a different RDBMS should be trivial.
Use the full power of your expensive RDBMS, but take into account that your application won't be easily portable. When the need arises you will have to spend considerable effort on porting and maintenance. Of course a decent layered design encapsulating all the differences in a single module or class will help in this endeavor.
In other words you should consider how probable is it that your application will be deployed to multiple RDBMSes and make an informed choice. | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000 |
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | 3 | 2 | 0.132549 | 0 | false | 1,588,748 | 1 | 587 | 2 | 1 | 0 | 1,588,708 | Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable. | 1 | 0 | 0 | What are the use cases for non relational datastores? | 3 | python,google-app-engine,couchdb | 0 | 2009-10-19T13:36:00.000 |
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | 3 | 0 | 0 | 0 | false | 1,589,186 | 1 | 587 | 2 | 1 | 0 | 1,588,708 | In some cases that are simply nice. ZODB is a Python-only object database, that is so well-integrated with Python that you can simply forget that it's there. You don't have to bother about it, most of the time. | 1 | 0 | 0 | What are the use cases for non relational datastores? | 3 | python,google-app-engine,couchdb | 0 | 2009-10-19T13:36:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | 3 | 3 | 0.085505 | 1 | false | 1,594,704 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | If you are I/O bound, the best way I have found to optimize is to read or write the entire file into/out of memory at once, then operate out of RAM from there on.
With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used to do it. That is what you need to optimize.
I don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I/O for each byte, that's what you need to do.
Of course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | 3 | 1 | 0.028564 | 1 | false | 1,595,358 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Use buffered writes for step 4.
Write a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough which should be some multiple of 4k bytes. I would say start with 32k buffers and time it.
You would have one buffer per file, so that most "writes" won't actually hit the disk. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
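A rough sketch of the buffered-write helper described above (my code, using the suggested 32k starting point; file names and table names are placeholders).

```python
class BufferedWriter:
    def __init__(self, path, threshold=32 * 1024):
        self.f = open(path, "w")
        self.threshold = threshold
        self.parts = []
        self.size = 0

    def write(self, text):
        self.parts.append(text)
        self.size += len(text)
        if self.size >= self.threshold:
            self.flush()

    def flush(self):
        if self.parts:
            self.f.write("".join(self.parts))   # one real write per ~32 KB of output
            self.parts = []
            self.size = 0

    def close(self):
        self.flush()
        self.f.close()

# One writer per output file, so most .write() calls never hit the disk.
writers = {name: BufferedWriter(name + ".csv") for name in ("table_a", "table_b")}
```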
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | 3 | 3 | 1.2 | 1 | true | 1,595,626 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.
If the OS isn't doing the right thing, you can try raising the buffer size (third parameter to open()). For some guidance on appropriate values: given a 100MB/s, 10ms latency IO system, a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If it's still IO bound, you probably just need more bandwidth. Use your OS-specific tools to check what kind of bandwidth you are getting to/from the disks.
Also useful is to check if step 4 is taking a lot of time executing or waiting on IO. If it's executing you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
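For illustration, a small sketch of the buffering argument to open() mentioned above (my example; the 1 MB value and the transform() placeholder are just things to experiment with, not recommendations).

```python
def transform(line):
    return line.upper()        # placeholder for steps 2-3 of the pipeline

BUF = 1024 * 1024              # try e.g. a 1 MB buffer instead of the default

with open("input.csv", "r", buffering=BUF) as src, \
     open("rows_out.csv", "w", buffering=BUF) as dst:
    for line in src:
        dst.write(transform(line))
```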
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | 3 | 2 | 0.057081 | 1 | false | 1,597,062 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those files into the database.
Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.
There are a few ideas that I have to solve this:
Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?
Parallelize steps 1, 2&3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.
Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.
Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice? | 3 | 1 | 0.028564 | 1 | false | 1,597,281 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Isn't it possible to collect a few thousand rows in ram, then go directly to the database server and execute them?
This would remove the save to and load from the disk that step 4 entails.
If the database server is transactional, this is also a safe way to do it - just have the database begin a transaction before your first row and commit after the last. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000
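A sketch of the "collect rows in RAM, insert directly, one transaction" idea (my own, using cx_Oracle since the load targets Oracle; the credentials, table, bind names and batch size are placeholders).

```python
import cx_Oracle

def generate_rows():
    # placeholder for steps 1-3: read the CSV, transform, yield (col_a, col_b) tuples
    yield ("example", 1)

conn = cx_Oracle.connect(user="loader", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

batch = []
for row in generate_rows():
    batch.append(row)
    if len(batch) >= 5000:                      # a few thousand rows per round trip
        cur.executemany("INSERT INTO target_table (col_a, col_b) VALUES (:1, :2)", batch)
        batch = []
if batch:
    cur.executemany("INSERT INTO target_table (col_a, col_b) VALUES (:1, :2)", batch)

conn.commit()                                   # one commit after the last row
conn.close()
```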
Two libraries for Mysql.
I've always used _mysql because it's simpler.
Can anyone tell me the difference, and which one I should use on which occasions? | 11 | 5 | 0.321513 | 0 | false | 1,620,642 | 0 | 4,941 | 1 | 0 | 0 | 1,620,575 | _mysql is the one-to-one mapping of the rough mysql API. On top of it, the DB-API is built, handling things using cursors and so on.
If you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You can work with the DB-API and behave just fine, with the added benefit that the DB-API is backend-independent. | 1 | 0 | 0 | Python: advantages and disvantages of _mysql vs MySQLdb? | 3 | python,mysql | 0 | 2009-10-25T10:37:00.000 |
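For illustration (my own snippets, with placeholder credentials and table): the same query through the MySQLdb DB-API interface and through the lower-level _mysql interface it is built on.

```python
# DB-API style via MySQLdb: cursors, and parameters escaped for you.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
cur = conn.cursor()
cur.execute("SELECT id, name FROM users WHERE id = %s", (42,))
print(cur.fetchone())
conn.close()

# _mysql style: closer to the C client library -- you build the query string
# yourself and walk the result set with store_result()/fetch_row().
import _mysql

db = _mysql.connect(host="localhost", user="app", passwd="secret", db="appdb")
db.query("SELECT id, name FROM users WHERE id = 42")
result = db.store_result()
print(result.fetch_row())
db.close()
```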
On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.
I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated.
or I can use PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as logged-in users with access to that image.
I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this?
Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.
Thanks a lot,
Haluk | 2 | 6 | 1 | 0 | false | 1,623,338 | 1 | 4,621 | 2 | 0 | 0 | 1,623,311 | Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content. | 1 | 0 | 0 | Restrict access to images on my website except through my own htmls | 5 | php,python,linux,perl | 0 | 2009-10-26T06:06:00.000 |
On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess to password protect the "images" folder. That helped me up to some point but some of the images started popping up a username and password request on my htmls (while amazingly some images did not) so this seems to be an unpredictable method.
I can start converting my user_id's to an md5 hash with some salt. The images would be named as: /image/user_e4d909c290d0fb1ca068ffaddf22cbd0.jpg. I don't like this solution. It makes the file system way complicated.
or I can use PHP's readfile() function or maybe something similar in Perl or Python. For instance I could pass a password using an md5 string to validate visitors as logged-in users with access to that image.
I'm leaning towards option 3 but with a Perl or Python angle (assuming they would be faster than PHP). However I would like to see other ideas on the matter. Maybe there is a simple .htaccess trick to this?
Basically all I want to make sure is that no one can view images from my website unless the images are directly called from within htmls hosted on my site.
Thanks a lot,
Haluk | 2 | 2 | 0.07983 | 0 | false | 1,623,325 | 1 | 4,621 | 2 | 0 | 0 | 1,623,311 | You are right to consider option #3. Use a service script that validates the user and readfile()s the image. Be sure to set the correct Content-Type HTTP header via the header() function prior to serving an image. For better isolation, images should be put above the web root directory, or protected by well-written .htaccess rules - there is definitely a way of protecting files and/or directories this way. | 1 | 0 | 0 | Restrict access to images on my website except through my own htmls | 5 | php,python,linux,perl | 0 | 2009-10-26T06:06:00.000
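Since the question also mentions Python, here is a hedged sketch of the same idea in Python using Flask rather than PHP readfile()/header() (my example, not from the answer; the session check, secret key and image directory are placeholders): validate the visitor, then serve the file from a directory above the web root with the right Content-Type.

```python
import os
from flask import Flask, abort, send_file, session

app = Flask(__name__)
app.secret_key = "change-me"                      # placeholder
IMAGE_DIR = "/srv/private_images"                 # outside the web root

@app.route("/image/<int:user_id>")
def user_image(user_id):
    if not session.get("logged_in"):              # whatever your auth check is
        abort(403)
    path = os.path.join(IMAGE_DIR, "user_%d.jpg" % user_id)
    if not os.path.isfile(path):
        abort(404)
    return send_file(path, mimetype="image/jpeg") # Flask sets the Content-Type header
```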
Is there an easy way to reset a django database (i.e. drop all data/tables, create new tables and create indexes) without loading fixture data afterwards? What I want to have is just an empty database because all data is loaded from another source (a kind of a post-processed backup).
I know that this could be achieved by piping the output of the manage sql... commands to manage dbshell, but this relies on manage dbshell and is kind of hacky...
Are there any other ways to do this?
Edit:
manage reset will do it, but is there a command like reset that doesn't need the application names as parameters? | 1 | 2 | 1.2 | 0 | true | 1,645,519 | 1 | 1,667 | 1 | 0 | 0 | 1,645,310 | As far as I know, the fixtures (in the initial_data file) are automatically loaded after manage.py syncdb and not after reset. So, if you do a manage.py reset yourapp it should not load the fixtures. Hmm? | 1 | 0 | 0 | Django db reset without loading fixtures | 2 | python,database,django,fixtures | 0 | 2009-10-29T17:26:00.000
I have a database which I regularly need to import large amounts of data into via some Python scripts. Compacted, the data for a single month's imports takes about 280MB, but during the import the file size swells to over a GB.
Given the 2GB size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size?
Note that no temporary tables are being created/deleted during the process: just inserts into existing tables.
And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.
If it could help, I could preprocess in sqlite.
Edit:
Just to add some further information (some already listed in my comments):
The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc
All processing is happening in Python: all the mdb file is doing is storing the data
All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)
Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions.
Any further suggestions still welcome. | 1 | 1 | 0.033321 | 0 | false | 1,652,783 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | Is your script executing a single INSERT statement per row of data? If so, pre-processing the data into a text file of many rows that could then be inserted with a single INSERT statement might improve the efficiency and cut down on the accumulating temporary crud that's causing it to bloat.
You might also make sure the INSERT is being executed without transactions. Whether or not that happens implicitly depends on the Jet version and the data interface library you're using to accomplish the task. By explicitly making sure it's off, you could improve the situation.
Another possibility is to drop the indexes before the insert, compact, run the insert, compact, re-instate the indexes, and run a final compact. | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
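A rough sketch of batching those inserts from Python over ODBC (my own, not the answerer's; the driver string, table and data are placeholders, and Jet behaviour may differ from other backends): many rows per executemany() call and a single commit per batch, instead of one statement and one implicit transaction per row.

```python
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\data\imports.mdb",
    autocommit=False,            # accumulate rows, commit once per batch
)
cur = conn.cursor()

rows = [("2009-10-01", "widget", 3), ("2009-10-01", "gadget", 7)]  # placeholder data
cur.executemany("INSERT INTO monthly_data (obs_date, item, qty) VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```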
I have a database which I regularly need to import large amounts of data into via some Python scripts. Compacted, the data for a single month's imports takes about 280MB, but during the import the file size swells to over a GB.
Given the 2GB size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size?
Note that no temporary tables are being created/deleted during the process: just inserts into existing tables.
And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.
If it could help, I could preprocess in sqlite.
Edit:
Just to add some further information (some already listed in my comments):
The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc
All processing is happening in Python: all the mdb file is doing is storing the data
All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)
Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions.
Any further suggestions still welcome. | 1 | -1 | -0.033321 | 0 | false | 31,059,064 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | File --> Options --> Current Database -> Check below options
* Use the Cache format that is compatible with Microsoft Access 2010 and later
* Clear Cache on Close
Then, your file will be saved compacted, back to its original size. | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000
I have a database which I regularly need to import large amounts of data into via some Python scripts. Compacted, the data for a single month's imports takes about 280MB, but during the import the file size swells to over a GB.
Given the 2GB size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size?
Note that no temporary tables are being created/deleted during the process: just inserts into existing tables.
And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.
If it could help, I could preprocess in sqlite.
Edit:
Just to add some further information (some already listed in my comments):
The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc
All processing is happening in Python: all the mdb file is doing is storing the data
All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)
Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions.
Any further suggestions still welcome. | 1 | 3 | 0.099668 | 0 | false | 1,650,897 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | A common trick, if feasible with regard to the schema and semantics of the application, is to have several MDB files with Linked tables.
Also, the way the insertions take place matters with regards to the way the file size balloons... For example: batched, vs. one/few records at a time, sorted (relative to particular index(es)), number of indexes (as you mentioned readily dropping some during the insert phase)...
Tentatively, a pre-processing approach could involve, say, storing new rows in a separate linked table, heap fashion (no indexes), then sorting/indexing this data in a minimal fashion, and "bulk loading" it to its real destination. Similar pre-processing in SQLite (as hinted in the question) would serve the same purpose. Keeping it "ALL MDB" is maybe easier (fewer languages/processes to learn, fewer inter-op issues [hopefully ;-)]...)
EDIT: on why inserting records in a sorted/bulk fashion may slow down the MDB file's growth (question from Tony Toews)
One of the reasons for MDB files' propensity to grow more quickly than the rate at which text/data is added to them (and their counterpart ability to be easily compacted back down) is that as information is added, some of the nodes that constitute the indexes have to be re-arranged (for overflowing / rebalancing etc.). Such management of the nodes seems to be implemented in a fashion which favors speed over disk space and harmony, and this approach typically serves simple applications / small data rather well. I do not know the specific logic in use for such management but I suspect that in several cases, node operations cause a particular node (or much of it) to be copied anew, with the old location simply being marked as free/unused but not deleted/compacted/reused. I do have "clinical" (if only a bit outdated) evidence that by performing inserts in bulk we essentially limit the number of opportunities for such duplication to occur and hence we slow the growth.
EDIT again: After reading and discussing things from Tony Toews and Albert Kallal it appears that a possibly more significant source of bloat, in particular in Jet Engine 4.0, is the way locking is implemented. It is therefore important to set the database in single user mode to avoid this. (Read Tony's and Albert's responses for more details.) | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000
I have a database which I regularly need to import large amounts of data into via some Python scripts. Compacted, the data for a single month's imports takes about 280MB, but during the import the file size swells to over a GB.
Given the 2GB size limit on mdb files, this is a bit of a concern. Apart from breaking the inserts into chunks and compacting in between each, are there any techniques for avoiding the increase in file size?
Note that no temporary tables are being created/deleted during the process: just inserts into existing tables.
And to forestall the inevitable comments: yes, I am required to store this data in Access 2003. No, I can't upgrade to Access 2007.
If it could help, I could preprocess in sqlite.
Edit:
Just to add some further information (some already listed in my comments):
The data is being generated in Python on a table by table basis, and then all of the records for that table batch inserted via odbc
All processing is happening in Python: all the mdb file is doing is storing the data
All of the fields being inserted are valid fields (none are being excluded due to unique key violations, etc.)
Given the above, I'll be looking into how to disable row level locking via odbc and considering presorting the data and/or removing then reinstating indexes. Thanks for the suggestions.
Any further suggestions still welcome. | 1 | 3 | 0.099668 | 0 | false | 1,651,412 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | One thing to watch out for is records which are present in the append queries but aren't inserted into the data due to duplicate key values, null required fields, etc. Access will allocate the space taken by the records which aren't inserted.
About the only significant thing I'm aware of is to ensure you have exclusive access to the database file. Which might be impossible if doing this during the day. I noticed a change in behavior from Jet 3.51 (used in Access 97) to Jet 4.0 (used in Access 2000) when the Access MDBs started getting a lot larger when doing record appends. I think that if the MDB is being used by multiple folks then records are inserted once per 4k page rather than as many as can be stuffed into a page. Likely because this made index insert/update operations faster.
Now compacting does indeed put as many records in the same 4k page as possible but that isn't of help to you. | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
I am after a Python module for Google App Engine that abstracts away limitations of the GQL.
Specifically I want to store big files (> 1MB) and retrieve all records for a model (> 1000). I have my own code that handles this at present but would prefer to build on existing work, if available.
Thanks | 0 | 1 | 1.2 | 0 | true | 1,660,404 | 0 | 78 | 1 | 1 | 0 | 1,658,829 | I'm not aware of any libraries that do that. You may want to reconsider what you're doing, at least in terms of retrieving more than 1000 results - those operations are not available because they're expensive, and needing to evade them is usually (though not always) a sign that you need to rearchitect your app to do less work at read time. | 1 | 0 | 0 | module to abstract limitations of GQL | 1 | python,google-app-engine,gql | 0 | 2009-11-01T23:54:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)
So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.
In hindsight it's not altogether surprising, but it seems that this can't be good.
...even if only a dozen or so of the queries take 1ms+
So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead. | 3 | 1 | 0.049958 | 0 | false | 1,689,143 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | There is always overhead in database calls; in your case the overhead is not that bad because the application and database are on the same machine, so there is no network latency, but there is still a significant cost.
When you make a request to the database it has to prepare to service that request by doing a number of things including:
Allocating resources (memory buffers, temp tables etc) to the database server connection/thread that will handle the request,
De-serializing the SQL and parameters (this is necessary even on one machine, as this is an inter-process request, unless you are using an embedded database)
Checking whether the query exists in the query cache; if not, optimising it and putting it in the cache.
Note also that if your queries are not parametrised (that is, the values are not separated from the SQL) this may result in cache misses for statements that should be the same, meaning that each request results in the query being analysed and optimized each time.
Process the query.
Prepare and return the results to the client.
This is just an overview of the kinds of things that most database management systems do to process an SQL request. You incur this overhead 500 times even if the query itself runs relatively quickly. Bottom line: database interactions, even with a local database, are not as cheap as you might expect. | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)
So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.
In hindsight it's not altogether surprising, but it seems that this can't be good.
...even if only a dozen or so of the queries take 1ms+
So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead. | 3 | 3 | 0.148885 | 0 | false | 1,689,146 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | The overhead of each query is only part of the picture. The actual round trip time between your Django and MySQL servers is probably very small, since most of your queries are coming back in less than a millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it. 500 queries for a page is way too much; even 50 seems like a lot to me. If ten users view complicated pages you're now up to 5000 queries.
The round trip time to the database server is more of a factor when the caller is accessing the database from a Wide Area Network, where roundtrips can easily be between 20ms and 100ms.
I would definitely look into using some kind of caching. | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)
So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.
In hindsight it's not altogether surprising, but it seems that this can't be good.
...even if only a dozen or so of the queries take 1ms+
So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead. | 3 | 4 | 0.197375 | 0 | false | 1,689,452 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | Just because you are using an ORM doesn't mean that you shouldn't do performance tuning.
I had - like you - a home page of one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_related() my queries would bring more of the data I needed - I went from hundreds of queries to tens.
You can also run a SQL profiler and see if there aren't indices that would help your most common queries - you know, standard database stuff.
Caching is also your friend, I would think. If a lot of a page is not changing, do you need to query the database every single time?
If all else fails, remember: the ORM is great, and yes - you should try to use it because it is the Django philosophy; but you are not married to it.
If you really have a use case where studying and tuning the ORM navigation didn't help, and if you are sure that you could do it much better with a standard query: use raw SQL for that case. | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000
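A small sketch of the select_related() point made in this answer (my example; the Photo model and its owner/album foreign keys are hypothetical): the related rows are fetched with JOINs in one query instead of one extra query per accessed foreign key.

```python
from myapp.models import Photo   # hypothetical app and model

# N+1 pattern: one query for the photos, then one more each time .owner is touched
photos = Photo.objects.all()[:50]

# The same data in a single query, with JOINs on the named foreign keys
photos = Photo.objects.select_related("owner", "album")[:50]
for photo in photos:
    print(photo.owner.username, photo.album.title)   # no extra queries here
```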
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc)
So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages.
In hindsight it's not altogether surprising, but it seems that this can't be good.
...even if only a dozen or so of the queries take 1ms+
So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead. | 3 | 2 | 1.2 | 0 | true | 1,689,330 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | There are some ways to reduce the query volume.
Use .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL.
"But I could send too much to the template". True, but you'll execute fewer SQL requests. Measure to see which is better.
This is what you used to do when you wrote SQL. It's not wrong -- it doesn't break the ORM -- but it optimizes the underlying DB work and puts the processing into the view function and the template.
Avoid query navigation in the template. When you do {{foo.bar.baz.quux}}, SQL is used to get the bar associated with foo, then the baz associated with the bar, then the quux associated with baz. You may be able to reduce this query business with some careful .filter() and Python processing to assemble a useful tuple in the view function.
Again, this was something you used to do when you hand-crafted SQL. In this case, you gather larger batches of ORM-managed objects in the view function and do your filtering in Python instead of via a lot of individual ORM requests.
This doesn't break the ORM. It changes the usage profile from lots of little queries to a few bigger queries. | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
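A sketch of the "fetch in batches, assemble in Python" idea from this answer (my example; Friend and Photo are hypothetical models): two queries total, with the grouping done in the view function rather than by navigating relations in the template.

```python
from django.shortcuts import render
from myapp.models import Friend, Photo   # hypothetical models

def profile(request, user_id):
    friends = list(Friend.objects.filter(owner_id=user_id))                       # 1 query
    photos = Photo.objects.filter(owner_id__in=[f.friend_id for f in friends])    # 1 query

    # Group photos by friend here instead of doing {{ friend.photo_set.all }} in
    # the template, which would issue one query per friend.
    photos_by_friend = {}
    for photo in photos:
        photos_by_friend.setdefault(photo.owner_id, []).append(photo)

    rows = [(f, photos_by_friend.get(f.friend_id, [])) for f in friends]
    return render(request, "profile.html", {"rows": rows})
```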
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.
I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs.
Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution? | 5 | 2 | 0.066568 | 0 | false | 1,697,185 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | BerkeleyDB is good, also look at the *DBM incarnations (e.g. GDBM). The big question though is: for what do you need to search? Do you need to search by that URL, by a range of URLs or the dates you list?
It is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates or search terms, &c.
Answering the "search" question is the biggest start.
As for the key/value thingy, what you need to ensure is that the KEY itself is well defined for your lookups. If, for example, you need to look up by date sometimes and by title at other times, you will need to maintain a "record" row, and then possibly 2 or more "index" rows making reference to the original record. You can model nearly anything in a key/value store. | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000
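A sketch of that "record row plus index rows" idea in a plain key/value store (my illustration, using the stdlib dbm module; the key layout and fields are made up): one entry holds the record, and extra entries act as indexes for date and title lookups.

```python
import dbm
import json

with dbm.open("records.db", "c") as db:
    rec_id = "rec:0001"
    record = {"url": "http://example.com", "date": "2009-11-08", "title": "Example"}
    db[rec_id] = json.dumps(record)                    # the "record" row
    db["idx:date:2009-11-08:" + rec_id] = rec_id       # index row for date lookups
    db["idx:title:Example:" + rec_id] = rec_id         # index row for title lookups

    # Lookup by date: scan the index keys (dbm has no ordered cursor, so this is a
    # linear pass; BerkeleyDB BTree cursors would make range scans cheaper).
    hits = [json.loads(db[db[k]]) for k in db.keys()
            if k.startswith(b"idx:date:2009-11-08")]
    print(hits)
```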
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.
I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs.
Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution? | 5 | 0 | 0 | 0 | false | 1,698,109 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | Ok, so you say just storing the data..? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. Compress the data if you need to, use delims between fields - just about any language will be able to read such files. If you do want to retrieve, then focus on your retrieval needs, by date, by key, which keys, etc. If you want a simple client side, then you need a simple client db. SQLite is far easier than BDB, but look at things like Sybase Advantage (very fast and free for local clients but not open-source) or VistaDB or Firebird... but all will require local config/setup/maintenance. If you go with local XML, a 'sizable' number of records will give you some unnecessarily bloated file sizes..! | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various languages (at least python and C++), so something language specific like python's pickle is out of the game.
I am seeing two possibilities: sqlite and BerkeleyDB. As my use case is clearly not relational, I am tempted to go with BerkeleyDB, however I don't really know how I should use it to store my records, as it only stores key/value pairs.
Is my reasoning correct? If so, how should I use BDB to store my records? Can you link me to relevant info? Or am I missing a better solution? | 5 | 2 | 0.066568 | 0 | false | 1,697,239 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite.
On the other hand, I've seen various comments on the Python dev list about Berkely DB that suggest it's less than wonderful; you only get dict-style access (what if you want to select certain date ranges or titles instead of URLs); and it's not even in Python 3's standard set of libraries. | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000 |
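For comparison, a minimal sketch of the SQLite route recommended above; the table layout mirrors the (URL, date, title, source) records from the question, and the resulting file is readable from C++ through the ordinary SQLite C API:
import sqlite3

conn = sqlite3.connect('records.db')
conn.execute("""CREATE TABLE IF NOT EXISTS records
                (url TEXT, date TEXT, title TEXT, source TEXT, extra TEXT)""")
conn.execute("INSERT INTO records VALUES (?, ?, ?, ?, ?)",
             ('http://example.com', '2009-11-08', 'Example', 'feed', None))
conn.commit()

# date-range and title queries come for free later on
for row in conn.execute("SELECT title, url FROM records WHERE date >= ?", ('2009-11-01',)):
    print(row)

conn.close()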
Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy | 4 | 4 | 1.2 | 0 | true | 1,719,347 | 0 | 601 | 1 | 0 | 0 | 1,719,279 | Follow the design pattern that Django uses.
Create a disposable copy of the database. Use SQLite3 in-memory, for example.
Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.
Load the test data fixture into the database.
Run your unit test case in a database with a known, defined state.
Dispose of the database.
If you use SQLite3 in-memory, this procedure can be reasonably fast. | 1 | 0 | 0 | Are there database testing tools for python (like sqlunit)? | 1 | python,database,testing,sqlalchemy | 1 | 2009-11-12T01:27:00.000 |
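A rough sketch of that pattern with unittest and SQLAlchemy; the table definition is a placeholder for whatever your DAL declares, and the calls follow the old engine.execute() style of SQLAlchemy releases of that era:
import unittest
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50)))            # stand-in for your real schema

class DALTest(unittest.TestCase):
    def setUp(self):
        self.engine = create_engine('sqlite:///:memory:')   # disposable database
        metadata.create_all(self.engine)                     # built from the SQLAlchemy definitions
        self.engine.execute(users.insert(), [{'name': 'alice'}, {'name': 'bob'}])  # test fixture

    def test_count(self):
        count = self.engine.execute("SELECT count(*) FROM users").scalar()
        self.assertEqual(count, 2)

    def tearDown(self):
        self.engine.dispose()                                # throw the database away

if __name__ == '__main__':
    unittest.main()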
But, they were unable to be found!?
How do I install both of them? | 1 | 2 | 1.2 | 0 | true | 1,720,904 | 0 | 730 | 1 | 0 | 0 | 1,720,867 | Have you installed python-mysqldb? If not install it using apt-get install python-mysqldb. And how are you importing mysql.Is it import MySQLdb? Python is case sensitive. | 1 | 0 | 0 | I just installed a Ubuntu Hardy server. In Python, I tried to import _mysql and MySQLdb | 3 | python,linux,unix,installation | 0 | 2009-11-12T09:01:00.000 |
I'm writing an application in Python with Postgresql 8.3 which runs on several machines on a local network.
All machines
1) fetch a huge amount of data from the database server (let's say the database gets 100 different queries from a machine within 2 seconds) and there are about 10 or 11 machines doing that.
2) After processing the data, the machines have to update certain tables (about 3 or 4 update/insert queries per machine every 1.5 seconds).
What I have noticed is that the database sometimes goes down with "server aborted process abnormally", or the server machine freezes (requiring a hard reset).
By the way all machines maintain a constant connection to the database at all times i.e. once a connection is made using Psycopg2 (in Python) it remains active until processing finishes (which could last hours).
What's the best/optimal way of handling a large number of connections in the application? Should they be destroyed after each query?
Secondly, should I increase max_connections?
Would greatly appreciate any advice on this matter. | 2 | 1 | 0.099668 | 0 | false | 1,729,623 | 0 | 1,607 | 1 | 0 | 0 | 1,728,350 | This sounds a bit like your DB server might have some problems, especially if your database server literally crashes. I'd start by trying to figure out from logs what is the root cause of the problems. It could be something like running out of memory, but it could also happen because of faulty hardware.
If you're opening all the connections at start and keep them open, max_connections isn't the culprit. The way you're handling the DB connections should be just fine and your server shouldn't do that no matter how it's configured. | 1 | 0 | 0 | Optimal / best pratice to maintain continuos connection between Python and Postgresql using Psycopg2 | 2 | python,linux,performance,postgresql,out-of-memory | 0 | 2009-11-13T10:18:00.000 |
I recently switched to Mac. First and foremost I installed XAMPP.
Then, for Django-Python-MySQL connectivity, I "somehow" ended up installing a separate MySQL.
Now the separate MySQL installation is active all the time, and the XAMPP one doesn't start unless I kill the other one.
What I wanted to know: is it possible to make XAMPP work with the separate MySQL installation? That way I wouldn't have to tinker around with the MySQLdb adapter for Python.
any help would be appreciated. | 0 | 1 | 1.2 | 0 | true | 1,734,939 | 1 | 150 | 1 | 0 | 0 | 1,734,918 | You could change the listening port of one of the installations and they shouldn't conflict anymore with each other.
Update: You need to find the mysql configuration file my.cnf of the server which should get a new port (the one from xampp should be somewhere in the xampp folder). Find the line port=3306 in the [mysqld] section. You could change it to something like 3307.
You will also need to specify the new port when connecting to the server from your applications. | 1 | 0 | 0 | 2 mysql instances in MAC | 1 | python,mysql,django,macos,xampp | 0 | 2009-11-14T17:22:00.000 |
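For example (the port number 3307 and the credentials are made up), after setting port=3307 under [mysqld] in the second server's my.cnf, the Python side just has to name that port explicitly:
import MySQLdb

# the XAMPP MySQL keeps 3306; the separate installation now answers on 3307
# (a Django settings file of that era would carry the same value in DATABASE_PORT)
conn = MySQLdb.connect(host='127.0.0.1', port=3307,
                       user='root', passwd='secret', db='mydb')
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
conn.close()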
How do I load data from an Excel sheet into my Django application? I'm using PostgreSQL as the database.
I want to do this programmatically. A client wants to load two different lists onto the website weekly and they don't want to do it in the admin section, they just want the lists loaded from an Excel sheet. Please help because I'm kind of new here. | 3 | -1 | -0.022219 | 0 | false | 11,293,612 | 1 | 7,027 | 1 | 0 | 0 | 1,747,501 | Just started using XLRD and it looks very easy and simple to use.
Beware that it does not support Excel 2007 yet, so keep in mind to save your excel at 2003 format. | 1 | 0 | 0 | Getting data from an Excel sheet | 9 | python,django,excel,postgresql | 0 | 2009-11-17T09:07:00.000 |
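A minimal xlrd sketch of that weekly import; the file name, the sheet layout and the Item model are assumptions for the example:
import xlrd
from myapp.models import Item            # hypothetical Django model

book = xlrd.open_workbook('weekly_list.xls')    # keep the file in .xls (97-2003) format
sheet = book.sheet_by_index(0)

for rownum in range(1, sheet.nrows):            # skip the header row
    name, price = sheet.row_values(rownum)[:2]
    Item.objects.create(name=name, price=price)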
Greetings, everybody.
I'm trying to import the following libraries in python: cx_Oracle and kinterbasdb.
But, when I try, I get a very similar message error.
*for cx_Oracle:
Traceback (most recent call last):
File "", line 1, in
ImportError: DLL load failed: Não foi possível encontrar o procedimento especificado.
(translation: It was not possible to find the specified procedure)
*for kinterbasdb:
Traceback (most recent call last):
File "C:\", line 1, in
File "c:\Python26\Lib\site-packages\kinterbasdb__init__.py", line 119, in
import _kinterbasdb as _k
ImportError: DLL load failed: Não foi possível encontrar o módulo especificado.
(translation: It was not possible to find the specified module)
I'm using python 2.6.4 in windows XP. cx_Oracle's version is 5.0.2. kinterbasdb's version is 3.3.0.
Edit: I've solved it for cx_Oracle, it was a wrong version problem. But I believe I'm using the correct version, and I downloaded it from the Firebird site ( kinterbasdb-3.3.0.win32-setup-py2.6.exe ). Still need assistance with this, please.
Can anyone lend me a hand here?
Many Thanks
Dante | 2 | -1 | -0.197375 | 0 | false | 1,803,407 | 0 | 767 | 1 | 0 | 0 | 1,799,475 | oracle is a complete pain. i don't know the details for windows, but for unix you need ORACLE_HOME and LD_LIBRARY_PATH to both be defined before cx_oracle will work. in windows this would be your environment variables, i guess. so check those.
also, check that they are defined in the environment in which the program runs (again, i don't know windows specific details, but in unix it's possible for everything to work when you run it from your account by hand, but still not work when run as a batch job because the environment is different). | 1 | 1 | 0 | importing cx_Oracle and kinterbasdb returns error | 1 | python,cx-oracle,kinterbasdb | 0 | 2009-11-25T19:43:00.000 |
I'm creating a financial app and it seems my floats in sqlite are floating around. Sometimes a 4.0 will be a 4.000009, and a 6.0 will be a 6.00006, things like that. How can I make these more exact and not affect my financial calculations?
Values are coming from Python, if that matters. I'm not sure which area the messed-up numbers are coming from. | 4 | 1 | 0.033321 | 0 | false | 1,801,521 | 0 | 3,672 | 1 | 0 | 0 | 1,801,307 | Most people would probably use Decimal for this; however, if this doesn't map onto a database type you may take a performance hit.
If performance is important you might want to consider using integers to represent an appropriate currency unit - often cents or tenths of cents is OK.
There should be business rules about how amounts are to be rounded in various situations, and you should have tests covering each scenario. | 6 | python,sqlite,floating-point | 0 | 2009-11-26T02:55:00.000
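A small sketch of both suggestions: exact parsing with Decimal at the edges, and whole cents in an INTEGER column so SQLite never stores a binary float (the helper names are invented):
from decimal import Decimal

def to_cents(amount_str):
    # parse user input exactly, then keep it as an integer number of cents
    return int(Decimal(amount_str) * 100)

def from_cents(cents):
    return Decimal(cents) / 100

# store to_cents('4.00') == 400 in an INTEGER column instead of a REAL one;
# 4.0 can never come back as 4.000009 because only whole cents are ever kept
print(to_cents('19.99'), from_cents(1999))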
I have downloaded mysqlDb, and while installing it I am getting errors like:
C:\Documents and Settings\naresh\Desktop\MySQL-python-1.2.3c1>setup.py build
Traceback (most recent call last):
File "C:\Documents and Settings\naresh\Desktop\MySQL-python-1.2.3c1
\setup.py",line15, in
metadata, options = get_config()
File "C:\Documents and Settings\naresh\Desktop\MySQL-python-1.2.3c1
\setup_windows.py", line 7, in get_config
serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key'])
WindowsError: [Error 2] The system cannot find the file specified
What can I do to address this? | 7 | 0 | 0 | 0 | false | 6,616,901 | 0 | 6,706 | 1 | 0 | 0 | 1,803,233 | You need to fire up regedit and make
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath
and HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath\InstallGroup
to look like HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\InstallPath\InstallGroup. | 1 | 0 | 0 | How to install mysql connector | 2 | python,mysql | 0 | 2009-11-26T11:46:00.000 |
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.
Alternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something. | 0 | 0 | 0 | 0 | false | 1,809,219 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | Check out Django. They have a nice ORM and I believe it has tools to attempt a reverse-engineer from the DB schema. | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000 |
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.
Alternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something. | 0 | 2 | 0.099668 | 0 | false | 1,809,238 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | What about running your project under Mono on Linux? Mono seems to support NHibernate, which means you may be able to get away with out rewriting large chunks of your application.
Also, if you really wanted to get Python in on the action, you could use IronPython along with Mono. | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000 |
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python ? I am under the impression that NHibernate is glue code between C# and the DB, so can not be exported to other languages.
Alternative question: can somebody recommend a good python compatible replacement of NHibernate ? The backend DB is Oracle something. | 0 | 5 | 0.244919 | 0 | false | 1,809,266 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | NHibernate is not specific to C#, but it is specific to .NET.
IronPython is a .NET language from which you could use NHibernate.
.NET and NHibernate can run on Linux through Mono. I'm not sure how good Mono's support is for WPF.
I'm not sure if IronPython runs on Linux, but that would seem to be the closest thing to what you are looking for.
There is a Java version of NHibernate (said tongue in cheek) called Hibernate and there are integration points between Java and Python where Linux is very much supported.
I know the Python community has its own ORMs, but as far as I'm aware, those options are not as mature and feature rich as Hibernate/NHibernate.
I would imagine that almost all of the options available to you would support Oracle. | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000 |
I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag. the file size. I have been pressed to use this format (a sqlite3 db) and python handles it natively like a champ, so my question is this... what is the best form of string compression that I can use for db entries when file size is the sole concern. zlib? base-n? Klingon?
Any advice would help me loads, again just string compression for characters that are compliant for URLs. | 0 | 0 | 0 | 0 | false | 1,829,601 | 0 | 2,957 | 2 | 0 | 0 | 1,829,256 | what sort of parsing do you do before you put it in the database? I get the impression that it is fairly simple with a single table holding each entry - if not then my apologies.
Compression is all about removing duplication, and in a log file most of the duplication is between entries rather than within each entry so compressing each entry individually is not going to be a huge win.
This is off the top of my head so feel free to shoot it down in flames, but I would consider breaking the table into a set of smaller tables holding the individual parts of the entry. A log entry would then mostly consist of a timestamp (as DATE type rather than a string) plus a set of indexes into other tables (e.g. requesting IP, request type, requested URL, browser type etc.)
This would have a trade-off of course, since it would make the database a lot more complex to maintain, but on the other hand it would enable meaningful queries such as "show me all the unique IPs that requested page X in the last week". | 1 | 0 | 0 | Python 3: Best string compression method to minimize the size of a sqlite3 db | 3 | python,sqlite,compression | 0 | 2009-12-01T22:02:00.000 |
I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag. the file size. I have been pressed to use this format (a sqlite3 db) and python handles it natively like a champ, so my question is this... what is the best form of string compression that I can use for db entries when file size is the sole concern. zlib? base-n? Klingon?
Any advice would help me loads, again just string compression for characters that are compliant for URLs. | 0 | 0 | 0 | 0 | false | 1,832,688 | 0 | 2,957 | 2 | 0 | 0 | 1,829,256 | Instead of inserting compression/decompression code into your program, you could store the table itself on a compressed drive. | 1 | 0 | 0 | Python 3: Best string compression method to minimize the size of a sqlite3 db | 3 | python,sqlite,compression | 0 | 2009-12-01T22:02:00.000 |
I'm using the sqlite3 module in Python 2.6.4 to store a datetime in a SQLite database. Inserting it is very easy, because sqlite automatically converts the date to a string. The problem is, when reading it it comes back as a string, but I need to reconstruct the original datetime object. How do I do this? | 84 | 1 | 0.066568 | 0 | false | 48,429,766 | 0 | 58,748 | 1 | 0 | 0 | 1,829,872 | Note: In Python3, I had to change the SQL to something like:
SELECT jobid, startedTime as "st [timestamp]" FROM job
(I had to explicitly name the column.) | 1 | 0 | 1 | How to read datetime back from sqlite as a datetime instead of string in Python? | 3 | python,datetime,sqlite | 0 | 2009-12-02T00:15:00.000 |
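For completeness, a small sketch of the two conversion hooks the sqlite3 module offers: PARSE_DECLTYPES keys on the column's declared type, and PARSE_COLNAMES on the "st [timestamp]" style alias used above (the table is invented):
import sqlite3, datetime

conn = sqlite3.connect(':memory:',
                       detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES)
conn.execute("CREATE TABLE job (jobid INTEGER, startedTime timestamp)")
conn.execute("INSERT INTO job VALUES (?, ?)", (1, datetime.datetime.now()))

# declared type "timestamp" -> converted back to a datetime object automatically
row = conn.execute("SELECT jobid, startedTime FROM job").fetchone()
print(type(row[1]))

# the explicit column alias works even when the declared type is not visible
row = conn.execute('SELECT startedTime AS "st [timestamp]" FROM job').fetchone()
print(type(row[0]))

conn.close()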
I want to use Python-MySQLDB library on Mac so I have compiled the source code to get the _mysql.so under Mac10.5 with my Intel iMac (i386)
This _mysql.co works in 2 of my iMacs and another Macbook. But that's it, it doesn't work in any other Macs.
Does this mean some machine specific info got compiled into the file? | 0 | 2 | 0.197375 | 0 | false | 1,832,065 | 0 | 106 | 1 | 0 | 0 | 1,831,979 | If you've only built one architecture (i386 / PPC) then it won't work on Macs with the opposite architecture. Are the machines that don't work PPC machines, by any chance?
Sometimes build configurations are set up to build only the current architecture by default - I haven't built Python-MySQLdb so I'm not sure if this is the case here, but it's worth checking.
You can find out which architectures have been built with the 'file' command in Terminal.
(Incidentally do you mean ".so"? I'm not familiar with ".co" files.) | 1 | 0 | 0 | Why _mysql.co that compiled on one Mac doesn't work on another? | 2 | python,compilation,mysql | 0 | 2009-12-02T10:21:00.000 |
Does there exist, or is there an intention to create, a universal database frontend for Python like Perl's DBI? I am aware of Python's DB-API, but all the separate packages are leaving me somewhat aggravated. | 1 | 2 | 0.197375 | 0 | false | 1,836,125 | 0 | 1,158 | 1 | 0 | 0 | 1,836,061 | Well...DBAPI is that frontend:
This API has been defined to encourage similarity between the
Python modules that are used to access databases. By doing this,
we hope to achieve a consistency leading to more easily understood
modules, code that is generally more portable across databases,
and a broader reach of database connectivity from Python.
It has always worked great for me at least; care to elaborate on the problems you are facing? | 2 | python,database | 0 | 2009-12-02T21:49:00.000
I have an Excel spreadsheet with calculations I would like to use in a Django web application. I do not need to present the spreadsheet as it appears in Excel. I only want to use the formulae embedded in it. What is the best way to do this? | 2 | 0 | 0 | 0 | false | 1,937,261 | 1 | 1,592 | 1 | 0 | 0 | 1,883,098 | You need to use Excel to calculate the results? I mean, maybe you could run the Excel sheet from OpenOffice and use a pyUNO macro, which is somehow "native" python.
A different approach would be to create a macro that generates more Python-friendly code; if you make Excel itself perform the calculation, you can easily end up with a very slow process. | 4 | python,django,excel | 0 | 2009-12-10T18:39:00.000
PHP provides mysql_connect() and mysql_pconnect() which allow creating both temporary and persistent database connections.
Is there a similar functionality in Python? The environment on which this will be used is lighttpd server with FastCGI.
Thank you! | 2 | 0 | 0 | 0 | false | 1,895,731 | 0 | 3,649 | 1 | 0 | 0 | 1,895,089 | Note: Persistent connections can have a very negative effect on your system performance. If you have a large number of web server processes all holding persistent connections to your DB server you may exhaust the DB server's limit on connections. This is one of those areas where you need to test it under heavy simulated loads to make sure you won't hit the wall at 100MPH. | 1 | 0 | 0 | Persistent MySQL connections in Python | 2 | python,mysql,web-services | 0 | 2009-12-12T23:49:00.000 |
I am using the MySQLdb module of Python on an FC11 machine. Here, I have an issue. I have the following implementation for one of our requirements:
connect to MySQL and get a DB handle, open a cursor, execute a delete statement, commit, and then close the cursor.
Again using the DB handle above, I am performing a "select" statement on some different table, using a cursor in the same way as described above.
I was able to delete a few records using step 1, but the step 2 select is not working. It simply returns no records for step 2, though there are records available in the DB.
But when I comment out step 1 and execute step 2, I can see that step 2 works fine. Why is this so?
Though there are records, why is the above sequence failing?
Any ideas would be appreciated.
Thanks! | 0 | 0 | 0 | 0 | false | 1,922,710 | 0 | 626 | 2 | 0 | 0 | 1,922,623 | With no code, I can only make a guess: try not closing the cursor until you are done with that connection. I think that calling cursor() again after calling cursor.close() will just give you a reference to the same cursor, which can no longer be used for queries.
I am not 100% sure if that is the intended behavior, but I haven't seen any MySQLDB examples of cursors being opened and closed within the same connection. | 1 | 0 | 0 | MYSQLDB python module | 3 | python,mysql | 0 | 2009-12-17T15:42:00.000 |
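A hedged sketch of that suggestion (connection details and table names are invented): keep the single connection, but give each statement its own cursor and commit the delete before running the select:
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')

# step 1: the delete gets its own cursor, then the work is committed
cur = conn.cursor()
cur.execute("DELETE FROM table_a WHERE processed = 1")
conn.commit()
cur.close()

# step 2: the select on the other table uses a fresh cursor from the same connection
cur = conn.cursor()
cur.execute("SELECT id, name FROM table_b")
for row in cur.fetchall():
    print(row)
cur.close()

conn.close()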
I am using the MySQLdb module of Python on an FC11 machine. Here, I have an issue. I have the following implementation for one of our requirements:
connect to MySQL and get a DB handle, open a cursor, execute a delete statement, commit, and then close the cursor.
Again using the DB handle above, I am performing a "select" statement on some different table, using a cursor in the same way as described above.
I was able to delete a few records using step 1, but the step 2 select is not working. It simply returns no records for step 2, though there are records available in the DB.
But when I comment out step 1 and execute step 2, I can see that step 2 works fine. Why is this so?
Though there are records, why is the above sequence failing?
Any ideas would be appreciated.
Thanks! | 0 | 0 | 0 | 0 | false | 1,924,766 | 0 | 626 | 2 | 0 | 0 | 1,922,623 | It sounds as though the first cursor is being returned back to the second step. | 1 | 0 | 0 | MYSQLDB python module | 3 | python,mysql | 0 | 2009-12-17T15:42:00.000 |
I have both, django and mysql set to work with UTF-8.
My base.html set utf-8 in head.
row on my db :
+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+
| id | psn_id | name | publisher | developer | release_date |
+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+
| 1 | 10945- | まいにちいっしょ | Sony Computer Entertainment | Sony Computer Entertainment | 2006-11-11 00:00:00 |
+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+
the source code generated looks like :
&#12414;&#12356;&#12395;&#12385;&#12356;&#12387;&#12375;&#12423;
and this is what is displayed :/
why they are not showing the chars the way in this database? | 0 | 0 | 0 | 0 | false | 1,931,067 | 1 | 312 | 1 | 0 | 0 | 1,928,087 | As Dominic has said, the generated HTML source code is correct (these are your Japanese characters translated into HTML entities), but we're not sure, if you see the same code rendered in the page (in this case, you have probably set content-type to "text/plain" instead of "text/html" - do you use render_to_response() or HttpResponse() in the corresponding view.py method?), or your Japanese is rendered correctly but you just don't like the entities in the source code.
Since we don't know your Django settings and how do you render and return the page, it's difficult to provide you the solution. | 1 | 0 | 0 | django + mysql + UTF-8 - Chars are not displayed | 3 | python,django,unicode | 0 | 2009-12-18T13:04:00.000 |
I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience with MySQL, a flat database design would be preferable but not essential. Okay, now the questions:
Is it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?
Are there any great advantages in using Python 3 for my particular project? Would I be better off looking at using other languages instead, such as Erlang?
Is there any great advantage in using a relational database within a game server?
Are there any open source game servers' source code out there that are worthy of study before starting? | 2 | 2 | 0.07983 | 0 | false | 1,937,342 | 0 | 1,430 | 2 | 0 | 0 | 1,937,286 | Is it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?
It depends on which modules you want to use. Twisted is a "Swiss Army knife" for network programming and could be a choice for your project, but unfortunately it does not support Python 3 yet.
Are there any great advantages in using Python 3 for my particular project? Would I be better off looking at using other languages instead, such as Erlang?
Only you can answer that, because only you know your own knowledge. Using Python 3 instead of Python 2 you get all the advantages of the new features Python 3 brings with it, and the disadvantage that not all libraries support Python 3 at the moment.
Note that Python 2.6 should implement most (if not all) of the features of Python 3 while remaining compatible with Python 2.5, but I have not investigated this in depth.
Both Python and Erlang are candidates for your needs; use what you know best and what you like most.
Is there any great advantage in using a relational database within a game server?
you get all the advantages and disadvantages of having an ACID storage system. | 5 | python | 0 | 2009-12-20T22:18:00.000
I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience with MySQL, a flat database design would be preferable but not essential. Okay, now the questions:
Is it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?
Are there any great advantages in using Python 3 for my particular project? Would I be better off looking at using other languages instead, such as Erlang?
Is there any great advantage in using a relational database within a game server?
Are there any open source game servers' source code out there that are worthy of study before starting? | 2 | 1 | 0.039979 | 0 | false | 1,937,370 | 0 | 1,430 | 2 | 0 | 0 | 1,937,286 | Related to your database choice, I'd seriously look at using Postgres instead of MySQL. In my experiance with the two Postgres has shown to be faster on most write operations while MySQL is slightly faster on the reads.
However, MySQL also has many issues some of which are:
Live backups are difficult at best, and impossible at worst; mostly you have to take the db offline or let it lock during the backups.
In the event of having to kill the server forcefully, either by kill -9, or due to power outage, postgres generally has better resilience to table corruption.
Full ACID compliance and other relational db features are, again imho and in my experience, weak or lacking in MySQL.
You can use a library such as SQLAlchemy to abstract away the db access though. This would let you test against both to see which you prefer dealing with.
As far as the language choice.
If you go with Python:
More libraries support Python 2.x than Python 3.x at this time, so I'd likely stick to 2.x.
Beware multi-threading gotchas with Python's GIL. Utilizing Twisted can get around this.
If you go with Erlang:
Erlang's syntax and idioms can be very foreign to someone who's never used it.
If well written it not only scales, it SCALES.
Erlang has its own highly concurrent web server named Yaws.
Erlang also has its own highly scalable DBMS named Mnesia (note it's not relational).
So I guess your choices could be really boiled down to how much you're willing to learn to do this project. | 1 | 0 | 0 | Word game server in Python, design pros and cons? | 5 | python | 0 | 2009-12-20T22:18:00.000 |
I worked on a PHP project earlier where prepared statements made the SELECT queries 20% faster.
I'm wondering if it works on Python? I can't seem to find anything that specifically says it does or does NOT. | 51 | 5 | 0.141893 | 0 | false | 2,539,467 | 0 | 52,874 | 1 | 0 | 0 | 1,947,750 | Using the SQL Interface as suggested by Amit can work if you're only concerned about performance. However, you then lose the protection against SQL injection that a native Python support for prepared statements could bring. Python 3 has modules that provide prepared statement support for PostgreSQL. For MySQL, "oursql" seems to provide true prepared statement support (not faked as in the other modules). | 1 | 0 | 0 | Does Python support MySQL prepared statements? | 7 | python,mysql,prepared-statement | 0 | 2009-12-22T17:06:00.000 |
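To make the distinction concrete, a small sketch (credentials and table are made up): MySQLdb escapes parameters client-side, which gives you injection safety but not a server-side prepare, while oursql, mentioned above, issues a real prepare/execute:
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='test')
cur = conn.cursor()

# MySQLdb: the %s placeholders are escaped client-side and spliced into the SQL text
cur.execute("SELECT id FROM users WHERE name = %s", ('alice',))
print(cur.fetchall())
conn.close()

# oursql (third-party) sends a genuine server-side prepared statement instead,
# roughly along these lines (connection arguments are an assumption):
# import oursql
# conn = oursql.connect(host='localhost', user='user', passwd='pass', db='test')
# cur = conn.cursor()
# cur.execute("SELECT id FROM users WHERE name = ?", ('alice',))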
I am using Python 2.6 + xlwt module to generate excel files.
Is it possible to include an autofilter in the first row with xlwt or pyExcelerator or anything else besides COM?
Thanks | 6 | 2 | 0.132549 | 0 | false | 20,838,509 | 0 | 6,820 | 1 | 0 | 0 | 1,948,224 | I have the same issue, running a linux server.
I'm going to try creating an ODS or XLSX file with an auto-filter by other means, and then converting it to "xls" from the LibreOffice command line. | 3 | python,excel,xlwt,pyexcelerator | 0 | 2009-12-22T18:21:00.000
I need a documentation system for a PHP project and I wanted it to be able to integrate external documentation (use cases, project scope etc.) with the documentation generated from code comments. It seems that phpDocumentor has exactly the right feature set, but external documentation must be written in DocBook which is too complex for our team.
If it were in python, sphinx would be just about perfect for this job (ReST is definitely simpler than docbook). Is there any way I can integrate external ReST documentation with the docs extracted from phpdoc? Should I just separate the external documentation (eg. use ReST for external and phpdoc for internal)? Or do you have a better suggestion for managing the external documentation? | 0 | 2 | 1.2 | 0 | true | 2,035,342 | 0 | 853 | 1 | 0 | 0 | 1,957,787 | You can convert ReST to DocBook using pandoc. | 1 | 0 | 1 | External documentation for PHP, no DocBook | 2 | php,phpdoc,docbook,restructuredtext,python-sphinx | 0 | 2009-12-24T10:37:00.000 |
If I want to be able to test my application against an empty MySQL database each time my application's test suite is run, how can I start up a server as a non-root user which refers to an empty (not saved anywhere, or saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | 2 | 0 | 0 | 0 | false | 1,960,164 | 0 | 287 | 2 | 0 | 0 | 1,960,155 | You can try the Blackhole and Memory table types in MySQL. | 1 | 0 | 0 | Start a "throwaway" MySQL session for testing code? | 2 | python,mysql,unit-testing,ubuntu | 1 | 2009-12-25T00:25:00.000 |
If I want to be able to test my application against a empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to a empty (not saved anywhere, or in saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | 2 | 1 | 1.2 | 0 | true | 1,960,160 | 0 | 287 | 2 | 0 | 0 | 1,960,155 | --datadir for just the data or --basedir | 1 | 0 | 0 | Start a "throwaway" MySQL session for testing code? | 2 | python,mysql,unit-testing,ubuntu | 1 | 2009-12-25T00:25:00.000 |
I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instument should be written in python, but any other language is ok. | 27 | 19 | 1.2 | 0 | true | 3,007,620 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | Since a nosql database can contain huge amounts of data you can not migrate it in the regular rdbms sence. Actually you can't do it for rdbms as well as soon as your data passes some size threshold. It is impractical to bring your site down for a day to add a field to an existing table, and so with rdbms you end up doing ugly patches like adding new tables just for the field and doing joins to get to the data.
In the NoSQL world you can do several things.
As others suggested, you can write your code so that it will handle different 'versions' of the possible schema. This is usually simpler than it looks. Many kinds of schema changes are trivial to code around. For example, if you want to add a new field to the schema, you just add it to all new records and it will simply be absent on all the old records (you will not get "field doesn't exist" errors or anything ;). If you need a 'default' value for the field in the old records, that too is trivially done in code.
Another option and actually the only sane option going forward with non-trivial schema changes like field renames and structural changes is to store schema_version in EACH record, and to have code to migrate data from any version to the next on READ. i.e. if your current schema version is 10 and you read a record from the database with the version of 7, then your db layer should call migrate_8, migrate_9, and migrate_10. This way the data that is accessed will be gradually migrated to the new version. and if it is not accessed, then who cares which version is it;) | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000 |
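A bare-bones sketch of that migrate-on-read idea; the field names and version numbers are invented for illustration:
CURRENT_VERSION = 10

def migrate_8(doc):           # each step upgrades a record by exactly one version
    doc['tags'] = []
    return doc

def migrate_9(doc):
    doc['title'] = doc.pop('name', '')
    return doc

def migrate_10(doc):
    doc['source'] = 'unknown'
    return doc

MIGRATIONS = {8: migrate_8, 9: migrate_9, 10: migrate_10}

def load(doc):
    # upgrade lazily, on read, from whatever version the record was saved at
    version = doc.get('schema_version', 7)
    for target in range(version + 1, CURRENT_VERSION + 1):
        doc = MIGRATIONS[target](doc)
    doc['schema_version'] = CURRENT_VERSION
    return doc

print(load({'schema_version': 7, 'name': 'old record'}))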
I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instument should be written in python, but any other language is ok. | 27 | 2 | 0.099668 | 0 | false | 1,961,090 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | One of the supposed benefits of these databases is that they are schemaless, and therefore don't need schema migration tools. Instead, you write your data handling code to deal with the variety of data stored in the db. | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000 |
I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instument should be written in python, but any other language is ok. | 27 | 2 | 0.099668 | 0 | false | 1,966,375 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | If your data are sufficiently big, you will probably find that you cannot EVER migrate the data, or that it is not beneficial to do so. This means that when you do a schema change, the code needs to continue to be backwards compatible with the old formats forever.
Of course if your data "age" and eventually expire anyway, this can do schema migration for you - simply change the format for newly added data, then wait for all data in the old format to expire - you can then retire the backward-compatibility code. | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000 |
I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instument should be written in python, but any other language is ok. | 27 | 1 | 0.049958 | 0 | false | 3,007,685 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | When a project has a need for a schema migration in regards to a NoSQL database makes me think that you are still thinking in a Relational database manner, but using a NoSQL database.
If anybody is going to start working with NoSQL databases, you need to realize that most of the 'rules' for a RDBMS (i.e. MySQL) need to go out the window too. Things like strict schemas, normalization, using many relationships between objects. NoSQL exists to solve problems that don't need all the extra 'features' provided by a RDBMS.
I would urge you to write your code in a manner that doesn't expect or need a hard schema for your NoSQL database - you should support an old schema and convert a document record on the fly when you access it, if you really want more schema fields on that record.
Please keep in mind that NoSQL storage works best when you think and design differently compared to when using a RDBMS | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000 |
Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in memory database via TCP/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays "live" game data.
I'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?
Thanks!!! | 0 | 1 | 1.2 | 0 | true | 1,977,499 | 0 | 309 | 2 | 0 | 0 | 1,962,130 | This sounds like a premature optimization (apologizes if you've already done the profiling). What I would suggest is go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily by swapped out. Then profile it and find your bottleneck.
If it turns out it is the database, optimize the database in the usual way (indexes, query optimizations, etc...). If its still too slow, most databases support an in-memory table format. Or you can mount a RAM disk and mount individual tables or the whole database on it. | 1 | 0 | 0 | In memory database with socket capability | 5 | asp.net,python,sqlite,networking,udp | 0 | 2009-12-25T22:47:00.000 |
Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in memory database via TCP/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays "live" game data.
I'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?
Thanks!!! | 0 | 0 | 0 | 0 | false | 1,962,162 | 0 | 309 | 2 | 0 | 0 | 1,962,130 | The application of SQlite depends on your data complexity.
If you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. not relational) and processed as a whole, then some python-internal data structures might be applicable. | 1 | 0 | 0 | In memory database with socket capability | 5 | asp.net,python,sqlite,networking,udp | 0 | 2009-12-25T22:47:00.000 |
I'm trying to install the MySQLdb module on a Windows Vista 64 (AMD) machine.
I've installed python on a different folder other than suggested by Python installer.
When I try to install the .exe mySQLdb installer, it can't find python 2.5 and it halts the installation.
Is there anyway to supply the installer with the correct python location (even thou the registry and path are right)? | 0 | 0 | 0 | 0 | false | 2,179,175 | 0 | 431 | 1 | 1 | 0 | 1,980,454 | did you use an egg?
if so, python might not be able to find it.
import os,sys
os.environ['PYTHON_EGG_CACHE'] = 'C:/temp'
sys.path.append('C:/path/to/MySQLdb.egg') | 1 | 0 | 1 | Problem installing MySQLdb on windows - Can't find python | 1 | python,windows-installer,mysql | 0 | 2009-12-30T14:24:00.000 |
I'm using cherrypy's standalone server (cherrypy.quickstart()) and sqlite3 for a database.
I was wondering how one would do ajax/jquery asynchronous calls to the database while using cherrypy? | 1 | 2 | 1.2 | 0 | true | 2,015,344 | 1 | 3,741 | 1 | 0 | 0 | 2,015,065 | The same way you would do them using any other webserver - by getting your javascript to call a URL which is handled by the server-side application. | 1 | 0 | 0 | How does one do async ajax calls using cherrypy? | 2 | jquery,python,ajax,asynchronous,cherrypy | 0 | 2010-01-06T17:57:00.000 |
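A rough sketch of the CherryPy side (file names, table and URL are assumptions); the jQuery page would then poll the /scores URL, for example with $.getJSON, and update the DOM from the result:
import cherrypy, sqlite3, json

class Root(object):
    @cherrypy.expose
    def index(self):
        return open('index.html').read()      # the page holding the jQuery code

    @cherrypy.expose
    def scores(self):
        # the URL the asynchronous call hits; it returns JSON built from sqlite3
        conn = sqlite3.connect('game.db')
        rows = conn.execute("SELECT player, points FROM scores").fetchall()
        conn.close()
        cherrypy.response.headers['Content-Type'] = 'application/json'
        return json.dumps(rows)

cherrypy.quickstart(Root())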
I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.
Should a quiz and its questions be stored in separate tables/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with lists stored for each of a question's characteristics? Or perhaps someone has another idea...
Thank you in advance. It would probably help to say that I am using Google App Engine, which typically frowns upon relational db models, but I'm willing to go my own route if it makes sense. | 1 | 1 | 0.039979 | 0 | false | 2,017,958 | 1 | 642 | 2 | 0 | 0 | 2,017,930 | My first cut (I assumed the questions were multiple choice):
I'd have a table of Questions, with ID_Question as the PK, the question text, and a category (if you want).
I'd have a table of Answers, with ID_Answer as the PK, QuestionID as a FK back to the Questions table, the answer text, and a flag as to whether it's the correct answer or not.
I'd have a table of Quizzes, with ID_Quiz as the PK, and a description of the quiz, and a category (if you want).
I'd have a table of QuizQuestions, with ID_QuizQuestion as the PK, QuizID as a FK back to the Quizzes table, and QuestionID as a FK back to the Questions table.
This model lets you:
Use questions standalone or in quizzes
Lets you have as many or few questions in a quiz as you want
Lets you have as many or few choices for questions as you want (or even multiple correct answers)
Use questions in several different quizzes | 1 | 0 | 0 | Database Design Inquiry | 5 | python,database-design,google-app-engine,schema | 0 | 2010-01-07T02:56:00.000 |
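Since the asker is on Google App Engine, the same four-table shape could be translated roughly like this with the old google.appengine.ext.db API; property names are illustrative, not prescriptive:
from google.appengine.ext import db

class Question(db.Model):
    text = db.StringProperty()
    category = db.StringProperty()

class Answer(db.Model):
    question = db.ReferenceProperty(Question)
    text = db.StringProperty()
    is_correct = db.BooleanProperty(default=False)

class Quiz(db.Model):
    description = db.StringProperty()

class QuizQuestion(db.Model):
    # the mapping entity that ties quizzes to reusable questions
    quiz = db.ReferenceProperty(Quiz)
    question = db.ReferenceProperty(Question)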
I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.
Should a quiz and its questions be stored in separate tables/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with lists stored for each of a question's characteristics? Or perhaps someone has another idea...
Thank you in advance. It would probably help to say that I am using Google App Engine, which typically frowns upon relational db models, but I'm willing to go my own route if it makes sense. | 1 | 0 | 0 | 0 | false | 2,017,943 | 1 | 642 | 2 | 0 | 0 | 2,017,930 | Have a table of questions, a table of quizzes and a mapping table between them. That will give you the most flexibility. This is simple enough that you wouldn't even necessarily need a whole relational database management system. I think people tend to forget that relations are pretty simple mathematical/logical concepts. An RDBMS just handles a lot of the messy book keeping for you. | 1 | 0 | 0 | Database Design Inquiry | 5 | python,database-design,google-app-engine,schema | 0 | 2010-01-07T02:56:00.000 |
Two questions:
I want to generate a view in my PostGIS DB. How do I add this view to my geometry_columns table?
What do I have to do to use a view with SQLAlchemy? Is there a difference between a table and a view to SQLAlchemy, or could I use a view the same way I use a table?
sorry for my poor english.
If there are questions about my question, please feel free to ask so I can try to explain it in another way :)
Nico | 2 | 4 | 1.2 | 0 | true | 2,027,143 | 0 | 1,758 | 1 | 0 | 0 | 2,026,475 | Table objects in SQLAlchemy have two roles. They can be used to issue DDL commands to create the table in the database. But their main purpose is to describe the columns and types of tabular data that can be selected from and inserted to.
If you only want to select, then a view looks to SQLAlchemy exactly like a regular table. It's enough to describe the view as a Table with the columns that interest you (you don't even need to describe all of the columns). If you want to use the ORM you'll need to declare for SQLAlchemy that some combination of the columns can be used as the primary key (anything that's unique will do). Declaring some columns as foreign keys will also make it easier to set up any relations. If you don't issue create for that Table object, then it is just metadata for SQLAlchemy to know how to query the database.
If you also want to insert to the view, then you'll need to create PostgreSQL rules or triggers on the view that redirect the writes to the correct location. I'm not aware of a good usage recipe to redirect writes on the Python side. | 1 | 0 | 0 | Work with Postgres/PostGIS View in SQLAlchemy | 1 | python,postgresql,sqlalchemy,postgis | 0 | 2010-01-08T09:05:00.000 |
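A minimal sketch of describing a view to SQLAlchemy for read-only use (view name, columns and the connection URL are invented; the calls follow SQLAlchemy releases of roughly that era):
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine('postgresql://user:pass@localhost/gisdb')
metadata = MetaData()

# describe the view exactly like a table; list only the columns you need,
# and mark something unique as the primary key so the ORM could map it too
parcels_view = Table('parcels_view', metadata,
                     Column('id', Integer, primary_key=True),
                     Column('name', String))

# never call metadata.create_all() for this object; the view already exists
conn = engine.connect()
for row in conn.execute(parcels_view.select()):
    print(row)
conn.close()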
Hmm, is there any reason why SQLAlchemy tries to add Nones for varchar columns that have defaults set in the database schema? It doesn't do that for floats or ints (I'm using reflection).
So when I try to add a new row, like:
u = User()
u.foo = 'a'
u.bar = 'b'
sa issues a query that has a lot more cols with None values assigned to those, and db obviously bards and doesnt perform default substitution. | 0 | 0 | 1.2 | 0 | true | 2,037,291 | 0 | 900 | 1 | 0 | 0 | 2,036,996 | I've found its a bug in sa, this happens only for string fields, they dont get server_default property for some unknow reason, filed a ticket for this already | 1 | 0 | 0 | Problem with sqlalchemy, reflected table and defaults for string fields | 2 | python,sqlalchemy | 0 | 2010-01-10T12:28:00.000 |
I have been in the RDBMS world for many years now but wish to explore the whole NoSQL movement. So here's my first question:
is it bad practice to have the possibility of duplicate keys? for example, an address book keyed off of last name (most probably search item?) could have multiple entities. is it bad practice to use the last name then? is the key supposed to be the most "searchable" definition of the entity? are there any resources for "best practices" in this whole new world (for me)?
i'm intrigued by tokyo cabinet (and specifically the tc interface) but don't know how to iterate through different entities that have the same key (e.g. see above). i can only get the first entity. anyway, thanks in advance for the help | 1 | 1 | 1.2 | 0 | true | 2,384,015 | 0 | 597 | 1 | 0 | 0 | 2,068,473 | This depend on no-sql implementation. Cassandra, for example, allows range queries, so you could model data to do queries on last name, or with full name (starting with last name, then first name).
Beyond this, many simpler key-value stores would indeed require you to store a list structure (or such) for multi-valued entries. Whether this is feasible or not depends on expected number of "duplicates" -- with last name, number could be rather high I presume, so it does not sound like an ideal model for many cases. | 1 | 0 | 0 | key/value (general) and tokyo cabinet (python tc-specific) question | 2 | python,tokyo-cabinet | 0 | 2010-01-15T00:04:00.000 |
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.
ProcessExplorer shows that mysqld is also not taking almost any CPU time or reading/writing a lot to disk.
What steps would you take to figure out where the bottleneck is? | 2 | 1 | 0.066568 | 0 | false | 2,077,129 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | It is "well known", so to speak, that svn update waits up to a whole second after it has finished running, so that file modification timestamps get "in the past" (since many filesystems don't have a timestamp granularity finer than one second). You can find more information about it by Googling for "svn sleep_for_timestamps".
I don't have any obvious solution to suggest. If this is really performance critical you could either: 1) not update as often as you are doing 2) try to use a lower-level Subversion API (good luck). | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.
ProcessExplorer shows that mysqld is also not taking almost any CPU time or reading/writing a lot to disk.
What steps would you take to figure out where the bottleneck is? | 2 | 4 | 1.2 | 0 | true | 2,076,639 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | Doing SQL queries in a for loop 15k times is a bottleneck in every language..
Is there any reason you query every time again ? If you do a single query before the for loop and then loop over the resultset and the SVN part, you will see a dramatic increase in speed.
But I doubt that you will get a higher CPU usage. The reason is that you are not doing calculations, but mostly IO.
Btw, you can't measure that in mysqld cpu usage, as it's in the actual code not complexity of the queries, but their count and the latency of the server engine to answer. So you will see only very short, not expensive queries, that do sum up in time, though. | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the same speed but CPU utilization for that script never goes over 10%.
ProcessExplorer shows that mysqld is also not taking almost any CPU time or reading/writing a lot to disk.
What steps would you take to figure out where the bottleneck is? | 2 | 1 | 0.066568 | 0 | false | 2,076,590 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | Profile your Python code. That will show you how long each function/method call takes. If that's the method call querying the MySQL database, you'll have a clue where to look. But it also may be something else. In any case, profiling is the usual approach to solve such problems. | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
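A minimal way to act on that advice with the standard library profiler (alternatively, run python -m cProfile -s cumulative yourscript.py from the command line):
import cProfile, pstats

def main():
    # the 15k-iteration loop with the MySQL queries and the svn update calls goes here
    pass

cProfile.run('main()', 'profile.out')

stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(20)   # top 20 entries by cumulative time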
I recently joined a new company and the development team was in the middle of a project to rebuild the database category structure, as follows:
if we have categories and subcategories for items, like a food category and an Italian food subcategory within the food category.
They were building a table for each category, instead of having one table and a link to the category id.
Now we have a table called food
and another table called food_italian
and both tables contain the same fields.
I have asked around and it seems that some DBA prefers this design. I would like to know why? and how this design can improve the performance? | 2 | 2 | 0.197375 | 0 | false | 2,077,536 | 0 | 155 | 1 | 0 | 0 | 2,077,522 | First, the most obvious answer is that you should ask them, not us, since I can tell you this, that design seems bogus deluxe.
The only reason I can come up with is that you have inexperienced DBAs who do not know how to performance-tune a database, and seem to think that a table with fewer rows will always vastly outperform a table with more rows.
With good indices, that need not be the case. | 1 | 0 | 0 | DB a table for the category and another table for the subcategory with similar fields, why? | 2 | python,mysql,database,django,performance | 0 | 2010-01-16T13:58:00.000 |
I have a question about how to approach a problem. I have an XML file that I have to load into a database system (whatever it might be: SQLite, MySQL) using a scripting language: Python.
Does anyone have any idea on how to proceed?
Which technologies do I need to read up on?
Which environments do I have to install?
Any tutorials on the same topic?
I have already tried to parse XML using both tree-based and SAX methods in another language, but with Python I don't know where to start. I already know how to design the database I need.
Another question, is Python alone possible of executing database ddl queries? | 7 | 1 | 0.049958 | 0 | false | 2,085,657 | 1 | 15,042 | 1 | 0 | 0 | 2,085,430 | If you are accustomed to DOM (tree) access to xml from other language, you may find useful these standard library modules (and their respective docs):
xml.dom
xml.dom.minidom
To save the data to the DB, you can use the standard module sqlite3 or look for a binding to MySQL. Or you may wish to use something more abstract, like SQLAlchemy or Django's ORM. | 4 | python,xml,database,sqlite,parsing | 0 | 2010-01-18T10:55:00.000
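A minimal end-to-end sketch with ElementTree and sqlite3 (the XML layout and table are invented); note that DDL such as CREATE TABLE goes through the very same execute() call, which also answers the last question above:
import sqlite3
import xml.etree.ElementTree as ET

# assumed layout: <items><item><name>...</name><price>...</price></item>...</items>
root = ET.parse('data.xml').getroot()

conn = sqlite3.connect('data.db')
conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, price REAL)")   # DDL from Python

for item in root.findall('item'):
    conn.execute("INSERT INTO items (name, price) VALUES (?, ?)",
                 (item.findtext('name'), float(item.findtext('price'))))

conn.commit()
conn.close()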
I have written a Python module which, due to its specifics, needs a MySQL database connection. Right now, details of this connection (host, database, username and password to connect with) are stored in /etc/mymodule.conf in plaintext, which is obviously not a good idea.
Supposedly, the /etc/mymodule.conf file is edited by the root user after the module is installed, since the module and its database may be used by all users of a Unix system.
How should I securely store the password instead? | 0 | 4 | 1.2 | 0 | true | 2,088,188 | 0 | 1,241 | 1 | 0 | 0 | 2,087,920 | Your constraints set a very difficult problem: every user on the system must be able to access that password (since that's the only way for users to access that database)... yet they must not (except when running that script, and presumably only when running it without e.g. a python -i session that would let them set a breakpoint just before the connect call and look all through memory, so definitely able to look at the password).
You could write a daemon process that runs as root (so can read mymodule.conf, which you'd make readable only by root) and accepts requests, somehow validates that the request comes from a "good" process (one that's running the exact module in question and not interactive), and only then supplies the password. That's fragile, mostly because of the need to determine whether a process may or may not have a breakpoint set at the crucial point of execution.
Alternatively, you could further raise the technological stakes by having the daemon return, not the password, but rather the open socket ready to be wrapped in a DB-API compliant wrapper; some Unix systems allow open file descriptors to be sent between unrelated processes (a prereq for this approach) -- and of course you'd have to substantially rework the MySQL-based DB API to allow opening a connection around an already-open socket rather than a freshly made one. Note that a validated requesting process that happens to be interactive would still be able to get the connection object, once built, and send totally arbitrary requests -- they wouldn't be able to see the password, technically, but that's not much consolation. So it's unlikely that the large effort required here is warranted.
So the next possible architecture is to mediate all db interaction via the validating daemon: a process would "log into" the daemon, get validated, and, if all's OK, gain a proxy connection to (e.g.) an XMLRPC server exposing the DB connection and functionality (the daemon would probably fork each such proxy process, right after reading the password from the root-only-readable file, and drop privileges immediately, just on general security ground).
The plus wrt the previous alternative, in addition to probably easier implementation, is that the proxy would also get a look at every SQL request that's about to be sent to the MySQL db, and be able to validate and censor those requests as well (presumably on a default-deny basis, again for general security principles), thus seriously limiting the amount of damage a "rogue" client process (running interactively with a debugger) can do... one hopes;-).
Yes, no easy solutions here -- but then, the problem your constraints pose is so far from easy that it borders on a self-contradictory impossibility;-). BTW, the problem's not particularly Python-related, it's essentially about choosing a secure architecture that comes close to "squaring the circle"-hard contradictory constraints on access permissions!-) | 1 | 0 | 0 | Storing system-wide DB connection password for a Python module | 2 | python,security | 0 | 2010-01-18T17:30:00.000 |
I want to add a field to an existing mapped class; how would I update the SQL table automatically? Does SQLAlchemy provide a method to add a new column to the database when a field is added to the class? | 15 | 0 | 0 | 0 | false | 65,265,231 | 0 | 11,257 | 1 | 0 | 0 | 2,103,274 | You can install 'DB Browser (SQLite)', open your current database file, simply add/edit the table in your database, save it, and run your app
(then add the corresponding attribute to your model class after saving the changes above) | 1 | 0 | 0 | SqlAlchemy add new Field to class and create corresponding column in table | 6 | python,sqlalchemy | 0 | 2010-01-20T17:01:00.000
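If you would rather stay in code than use a GUI, a common alternative (not what the answer above describes) is to add the attribute to the mapped class and issue the ALTER TABLE yourself, since SQLAlchemy does not migrate existing tables on its own; the users table and age column here are hypothetical:

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///app.db')   # assumed connection URL
with engine.begin() as conn:
    # bring the table in line with the new Column on the mapped class
    conn.execute(text("ALTER TABLE users ADD COLUMN age INTEGER"))

For anything beyond a one-off change, a migration tool such as Alembic is usually the better choice.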
Background
I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stuff, using grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (then indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | 4 | 1 | 0.039979 | 1 | false | 2,111,067 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.
Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.
That is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme. | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000 |
Background
I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stuff, using grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (then indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | 4 | 1 | 0.039979 | 1 | false | 2,110,912 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.
To reduce the time spent waiting for I/O, your best bet is compression.
Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I/O time, however, will dominate your processing, so reduce I/O time by zipping everything. | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000 |
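A rough sketch of that approach, assuming the tab-delimited files have been packed into a hypothetical all_logs.zip; the archive is opened once and each member is streamed line by line:

import io
import zipfile

with zipfile.ZipFile('all_logs.zip') as zf:      # assumed archive name
    for name in zf.namelist():
        with zf.open(name) as raw:               # binary file-like member
            for line in io.TextIOWrapper(raw):
                fields = line.rstrip('\n').split('\t')
                # ... hand the fields off to whatever builds the index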
Background
I have many (thousands!) of data files with a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stuff, using grep/awk and friends, etc.).
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded me any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
Additional Details
files long, not wide
millions of lines per hour, spread over 100 files per hour
tab separated, not many columns (~10)
fields are short (say < 50 chars per field)
queries are on fields, combinations of fields, and can be historical
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
single writer (if it's possible to get around this, I want to see code!)
hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
since it only stores strings, there is a serialize / deserialize step
RDBMSes
Wins:
flat table model is excellent for querying, indexing
Losses:
In my experience, the problem comes with indexing. From what I've seen (and please correct me if I am wrong), the issue is that the RDBMSes I know (SQLite, Postgres) support either batch loading (then indexing is slow at the end) or row-by-row loading (which is slow). Maybe I need more performance tuning. | 4 | 1 | 0.039979 | 1 | false | 12,805,622 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | sqlite3 is fast, small, part of Python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system. | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000
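To make the sqlite3 suggestion concrete, here is a minimal bulk-load-and-index sketch; the file name, table name and three-column layout are assumptions rather than anything from the question:

import csv
import sqlite3

conn = sqlite3.connect('logs.db')
conn.execute("CREATE TABLE IF NOT EXISTS rows (f0 TEXT, f1 TEXT, f2 TEXT)")
with open('data.tsv') as fh:
    reader = csv.reader(fh, delimiter='\t')
    # batch the inserts, then build the index once at the end
    conn.executemany("INSERT INTO rows VALUES (?, ?, ?)",
                     (row[:3] for row in reader))
conn.execute("CREATE INDEX IF NOT EXISTS idx_f0 ON rows (f0)")
conn.commit()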
Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -
select data_point
from some_table
where point_date >= start_date
AND point_date < end_date
order by point_date
Now when I don't have an entry for a particular time slot (e.g. time slot 00:20 is missing), I want the "data_point" to be returned as 0
The REPLACE, IF, IFNULL, ISNULL don't work when there are no rows returned.
I thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.
Is there a way to get this done via sql only ?
Note : Python 2.6 & mysql version 5.1 | 1 | 0 | 0 | 0 | false | 2,119,402 | 0 | 1,466 | 2 | 0 | 0 | 2,119,153 | You cannot query data you do not have.
You (as a thinking person) can claim that the 00:20 data is missing; but there's no easy way to define "missing" in some more formal SQL sense.
The best you can do is create a table with all of the expected times.
Then you can do an outer join between expected times (including a 0 for 00:20) and actual times (missing the 00:20 sample) and you'll get the kind of result you're expecting. | 1 | 0 | 0 | python : mysql : Return 0 when no rows found | 3 | python,mysql,null | 0 | 2010-01-22T17:28:00.000
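If maintaining a table of expected times isn't practical, the gaps can also be filled on the Python side after the query; a sketch, assuming cursor, start_date and end_date already exist and that point_date comes back as values comparable to the slots generated below:

from datetime import timedelta

cursor.execute("SELECT point_date, data_point FROM some_table "
               "WHERE point_date >= %s AND point_date < %s "
               "ORDER BY point_date", (start_date, end_date))
found = dict(cursor.fetchall())    # slot -> data_point for the rows that do exist

results = []
slot = start_date
while slot < end_date:
    results.append((slot, found.get(slot, 0)))   # 0 for missing 5-minute slots
    slot += timedelta(minutes=5)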
Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp). Simple Query -
select data_point
from some_table
where point_date >= start_date
AND point_date < end_date
order by point_date
Now when I don't have an entry for a particular time slot (e.g. time slot 00:20 is missing), I want the "data_point" to be returned as 0
The REPLACE, IF, IFNULL, ISNULL don't work when there are no rows returned.
I thought Union with a default value would work, but it failed too or maybe I didn't use it correctly.
Is there a way to get this done via sql only ?
Note : Python 2.6 & mysql version 5.1 | 1 | 0 | 0 | 0 | false | 2,119,384 | 0 | 1,466 | 2 | 0 | 0 | 2,119,153 | I see no easy way to create non-existing records out of thin air, but you could create yourself a point_dates table containing all the timestamps you're interested in, and left join it on your data:
select pd.slot, IFNULL(data_point, 0)
from point_dates pd
left join some_table st on st.point_date=pd.slot
where pd.slot >= start_date
AND pd.slot < end_date
order by pd.slot | 1 | 0 | 0 | python : mysql : Return 0 when no rows found | 3 | python,mysql,null | 0 | 2010-01-22T17:28:00.000
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | 2 | 2 | 1.2 | 0 | true | 2,124,718 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | True, Google App Engine is a very cool product, but the datastore is a different beast than a regular mySQL database. That's not to say that what you need can't be done with the GAE datastore; however it may take some reworking on your end.
The most prominent difference that you notice right off the start is that GAE uses an object-relational mapping for its data storage scheme. Essentially, object graphs are persisted in the database, maintaining their attributes and relationships to other objects. In many cases ORMs (object-relational mappings) map fairly well on top of a relational database (this is how Hibernate works). The mapping is not perfect, though, and you will find that you need to make alterations to persist your data. Also, GAE has some unique constraints that complicate things a bit. One constraint that bothers me a lot is not being able to query for attribute paths: e.g. "select ... where dog.owner.name = 'bob' ". It is these rules that force you to read and understand how the GAE datastore works before you jump in.
I think GAE could work well in your situation. It just may take some time to understand ORM persistence in general, and the GAE datastore in particular. | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | 2 | 1 | 0.049958 | 0 | false | 2,124,705 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | That's a pretty generic question :)
Short answer: yes. It's going to involve some rethinking of your data model, but yes, chances are you can support it with the GAE Datastore API.
When you create your Python models (think of these as tables), you can certainly define references to other models (so now we have a foreign key). When you select this model, you'll get back the referencing models (pretty much like a join).
It'll most likely work, but it's not a drop in replacement for a mySQL server. | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000 |
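A small sketch of what those model references look like with the old Python 2-era google.appengine.ext.db API; the Category/Article names just mirror a typical setup and are not from the question:

from google.appengine.ext import db

class Category(db.Model):
    name = db.StringProperty()

class Article(db.Model):
    content = db.TextProperty()
    # ReferenceProperty behaves roughly like a foreign key; the reverse side
    # becomes a query reachable as category.articles
    category = db.ReferenceProperty(Category, collection_name='articles')

article = Article.all().get()
if article is not None:
    category_name = article.category.name   # the reference is resolved on access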
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, and know very little about it. I think it'd be nice to use the Python interface to write to the data store - but I know very little about GQL's capability. I've basically written all the working database code using MySQL, testing internally on the iPhone with SQLite. Will GQL offer the same functionality that SQL can? I read on the site that it doesn't support join queries. Also, is it truly relational?
Basically I guess my question is can an app that typically uses SQL backend work just as well with Google's App Engine, with GQL?
I hope that's clear... any guidance is great. | 2 | 2 | 0.099668 | 0 | false | 2,125,297 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | GQL offers almost no functionality at all; it's only used for SELECT queries, and it only exists to make writing SELECT queries easier for SQL programmers. Behind the scenes, it converts your queries to db.Query objects.
The App Engine datastore isn't a relational database at all. You can do some stuff that looks relational, but my advice for anyone coming from an SQL background is to avoid GQL at all costs to avoid the trap of thinking the datastore is anything at all like an RDBMS, and to forget everything you know about database design. Specifically, if you're normalizing anything, you'll soon wish you hadn't. | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000 |
Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py) | 0 | 1 | 0.099668 | 0 | false | 2,127,512 | 1 | 191 | 2 | 0 | 0 | 2,126,433 | The Django ORM is usable on its own - you can use "settings.configure()" to set up the database settings. That said, you'll have to do the stripping down and repackaging yourself, and you'll have to experiment with how much you can actually strip away. I'm sure you can ditch contrib/, forms/, template/, and probably several other unrelated pieces. The ORM definitely relies on conf/, and quite likely on core/ and util/ as well. A few quick greps through db/* should make it clear what's imported. | 1 | 0 | 0 | Using Django's Model API without having to *include* the full Django stack | 2 | python,django,deployment,size,sqlalchemy | 0 | 2010-01-24T08:27:00.000 |
Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I would like to use the Django models (so as to not have other developers learn yet-another-ORM), but do not want to include 3.6 megabytes of stuff most of which are not needed. (FYI - the application, final executable that is, actually bundles the install_requires from setup.py) | 0 | 1 | 0.099668 | 0 | false | 2,130,014 | 1 | 191 | 2 | 0 | 0 | 2,126,433 | You may be able to get a good idea of what is safe to strip out by checking which files don't have their access time updated when you run your application. | 1 | 0 | 0 | Using Django's Model API without having to *include* the full Django stack | 2 | python,django,deployment,size,sqlalchemy | 0 | 2010-01-24T08:27:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 133 | 1 | 0 | false | 2,157,930 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | We actually had these merged together originally, i.e. there was a "filter"-like method that accepted *args and **kwargs, where you could pass a SQL expression or keyword arguments (or both). I actually find that a lot more convenient, but people were always confused by it, since they're usually still getting over the difference between column == expression and keyword = expression. So we split them up. | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 40 | 1 | 0 | false | 2,128,567 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | filter_by uses keyword arguments, whereas filter allows pythonic filtering arguments like filter(User.name=="john") | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 494 | 1.2 | 0 | true | 2,128,558 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | filter_by is used for simple queries on the column names using regular kwargs, like
db.users.filter_by(name='Joe')
The same can be accomplished with filter, not using kwargs, but instead using the '==' equality operator, which has been overloaded on the db.users.name object:
db.users.filter(db.users.name=='Joe')
You can also write more powerful queries using filter, such as expressions like:
db.users.filter(or_(db.users.name=='Ryan', db.users.country=='England')) | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 4 | 0.158649 | 0 | false | 68,331,326 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | Apart from all the technical information posted before, there is a significant difference between filter() and filter_by() in their usability.
The second one, filter_by(), may be used only for filtering by something specifically stated - a string or some number value. So it's usable only for category filtering, not for expression filtering.
On the other hand, filter() allows using comparison expressions (==, <, >, etc.), so it's helpful e.g. when 'less/more than' filtering is needed. But it can be used like filter_by() as well (when == is used).
Just remember that the two functions use different syntax for their arguments. | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000
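For example, the range conditions mentioned above are only expressible with filter(); the User model and session here are assumed to exist:

session.query(User).filter(User.age >= 18, User.age < 65)   # multiple criteria are ANDed
session.query(User).filter_by(name='Joe')                   # keyword-style equality only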
we are still pretty new to Postgres and came from Microsoft Sql Server.
We want to write some stored procedures now. Well, after struggling to get anything more complicated than a hello world to work in pl/pgsql, we decided that if we are going to learn a new language we might as well learn Python, because we got the same query working in it in about 15 minutes (note: none of us actually knows Python).
So I have some questions about it in comparison to pl/pgsql.
Is pl/Pythonu slower than pl/pgsql?
Is there any kind of "good" reference for how to write good stored procedures using it? Five short pages in the Postgres documentation doesn't really tell us enough.
What about query preparation? Should it always be used?
If we use the SD and GD arrays for a lot of query plans, will it ever get too full or have a negative impact on the server? Will it automatically delete old values if it gets too full?
Is there any hope of it becoming a trusted language?
Also, our stored procedure usage is extremely light. Right now we only have 4, but we are still trying to convert little bits of code over from Sql Server specific syntax(such as variables, which can't be used in Postgres outside of stored procedures) | 11 | 9 | 1.2 | 0 | true | 2,142,128 | 0 | 5,869 | 1 | 0 | 0 | 2,141,589 | Depends on what operations you're doing.
Well, combine that with a general Python documentation, and that's about what you have.
No. Again, depends on what you're doing. If you're only going to run a query once, no point in preparing it separately.
If you are using persistent connections, it might. But they get cleared out whenever a connection is closed.
Not likely. Sandboxing is broken in Python and AFAIK nobody is really interested in fixing it. I heard someone say that python-on-parrot may be the most viable way, once we have pl/parrot (which we don't yet).
Bottom line though - if your stored procedures are going to do database work, use pl/pgsql. Only use pl/python if you are going to do non-database stuff, such as talking to external libraries. | 1 | 0 | 0 | Stored Procedures in Python for PostgreSQL | 1 | python,postgresql,stored-procedures,plpgsql | 0 | 2010-01-26T18:19:00.000 |
I have a queryset with a few million records. I need to update a Boolean Value, fundamentally toggle it, so that in the database table the values are reset. What's the fastest way to do that?
I tried traversing the queryset and updating and saving each record, which obviously takes ages. We need to do this very fast; any suggestions? | 7 | 0 | 0 | 0 | false | 4,230,081 | 1 | 1,711 | 1 | 0 | 0 | 2,141,769 | Actually, that didn't work out for me.
The following did:
Entry.objects.all().update(value=(F('value')==False)) | 1 | 0 | 0 | Fastest Way to Update a bunch of records in queryset in Django | 3 | python,database,django,django-queryset | 0 | 2010-01-26T18:50:00.000 |
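Be aware that F('value')==False may not build a SQL expression at all on most Django versions (it tends to evaluate to a plain Python boolean). A more explicit single-UPDATE toggle, available since conditional expressions arrived in Django 1.8, looks roughly like this, reusing the Entry/value names from the answer above:

from django.db.models import BooleanField, Case, Value, When

Entry.objects.update(
    value=Case(
        When(value=True, then=Value(False)),
        default=Value(True),
        output_field=BooleanField(),
    )
)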
Why is _mysql in the MySQLdb module a C file? When the module tries to import it, I get an import error. What should I do? | 1 | 0 | 0 | 0 | false | 2,169,464 | 0 | 1,242 | 1 | 0 | 0 | 2,169,449 | It's the adaptor that sits between the Python MySQLdb module and the C libmysqlclient library. One of the most common reasons for it not loading is that the appropriate libmysqlclient library is not in place. | 1 | 0 | 0 | Importing _mysql in MySQLdb | 2 | python,mysql,c | 0 | 2010-01-30T21:12:00.000 |
I am attempting to execute the following query via the mysqldb module in python:
for i in self.p.parameter_type:
cursor.execute("""UPDATE parameters SET %s = %s WHERE parameter_set_name = %s""" % (i,
float(getattr(self.p, i)), self.list_box_parameter.GetStringSelection()))
I keep getting the error: "Unknown column 'M1' in 'where clause'". I want to update columns i with the value getattr(self.p, i), but only in rows that have the column parameter_set_name equal to self.list_box_parameter.GetStringSelection(). The error suggests that my query is looking for columns by the name 'M1' in the WHERE clause. Why is the above query incorrect and how can I correct it? | 0 | 0 | 0 | 0 | false | 2,171,104 | 0 | 375 | 1 | 0 | 0 | 2,171,072 | It looks like the query is formed with the wrong syntax.
Could you display the string parameter passed to cursor.execute? | 1 | 0 | 0 | Trouble with MySQL UPDATE syntax with the module mysqldb in Python | 2 | python,mysql | 0 | 2010-01-31T08:45:00.000
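The root cause is that the interpolated string value ends up unquoted, so MySQL reads M1 as a column name. Column names can't be bound as parameters, but the values can; a sketch of the usual fix, reusing the attribute names from the question and assuming the column names are trusted:

for i in self.p.parameter_type:
    # interpolate the (validated) column name; let MySQLdb quote the values
    sql = "UPDATE parameters SET `%s` = %%s WHERE parameter_set_name = %%s" % i
    cursor.execute(sql, (float(getattr(self.p, i)),
                         self.list_box_parameter.GetStringSelection()))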
I'm trying to implement the proper architecture for multiple databases under Python + Pylons. I can't put everything in the config files since one of the database connections requires the connection info from a previous database connection (sharding).
What's the best way to implement such an infrastructure? | 2 | 1 | 1.2 | 0 | true | 2,224,250 | 0 | 988 | 1 | 0 | 0 | 2,205,047 | Pylons's template configures the database in config/environment.py, probably with the engine_from_config method. It finds all the config settings with a particular prefix and passes them as keyword arguments to create_engine.
You can just replace that with a few calls to sqlalchemy.create_engine(), passing each engine its own URL along with the common username and password from your config file. | 1 | 0 | 0 | Multiple database connections with Python + Pylons + SQLAlchemy | 1 | python,pylons | 0 | 2010-02-05T04:29:00.000
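A rough sketch of how config/environment.py could be adapted; lookup_shard_url is a hypothetical helper standing in for whatever query on the first connection yields the second connection's info:

from sqlalchemy import create_engine, engine_from_config

def load_environment(global_conf, app_conf):
    # ... existing Pylons setup ...
    main_engine = engine_from_config(app_conf, prefix='sqlalchemy.')
    shard_url = lookup_shard_url(main_engine)   # hypothetical: derive shard URL via main engine
    shard_engine = create_engine(shard_url)
    # hand both engines to your model layer, e.g. init_model(main_engine, shard_engine)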
TypeError: unsupported operand type(s) for /: 'tuple' and 'tuple'
I'm getting the above error. I fetched a record using the query "select max(rowid) from table"
and assigned it to a variable, and performing the / operation throws the above message.
How do I resolve this? | 1 | 4 | 1.2 | 0 | true | 2,220,107 | 0 | 1,106 | 1 | 0 | 0 | 2,220,099 | The SQL query select max(rowid) would return tuple data like records=(1000,)
You may need to do something like numerator / records[0] | 1 | 0 | 1 | python tuple division | 1 | python,tuples | 0 | 2010-02-08T07:18:00.000
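A short sketch of the same idea; the cursor and the other operand are assumed to exist, and unpacking makes the one-element tuple explicit:

cursor.execute("SELECT MAX(rowid) FROM some_table")
(max_rowid,) = cursor.fetchone()       # fetchone() returns a one-element tuple
result = float(max_rowid) / divisor    # divisor assumed to be a plain number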
Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.
I also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.
Now suppose that I have a table ArticleProperties, that adds properties to Articles. This table has fields article_id, property_name, property_value.
Suppose that I want to create a mapping from Categories to Articles via ArticleProperties table.
I do this by inserting the following values in the ArticleProperties table: (article_id=1, property_name="category", property_value=10).
Is there any way in SQLAlchemy to express that rows in table ArticleProperties with property_name "category" are actually FOREIGN KEYS of table Articles to table Categories?
This is a complicated problem and I haven't found an answer myself.
Any help appreciated!
Thanks, Boda Cydo. | 0 | 1 | 0.197375 | 0 | false | 2,248,806 | 1 | 928 | 1 | 0 | 0 | 2,234,030 | Assuming I understand your question correctly, then no, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution.)
What I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, ArticleID and CategoryID (with respective FKs) | 1 | 0 | 0 | SQLAlchemy ForeignKey relation via an intermediate table | 1 | python,sqlalchemy | 0 | 2010-02-10T02:30:00.000 |
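A minimal declarative sketch of that suggested mapping table (imports as in recent SQLAlchemy 1.4+; the Base class and table names are assumed), replacing the property_name='category' rows with real foreign keys:

from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

article_categories = Table(
    'article_categories', Base.metadata,
    Column('article_id', Integer, ForeignKey('articles.article_id'), primary_key=True),
    Column('category_id', Integer, ForeignKey('categories.category_id'), primary_key=True),
)

class Category(Base):
    __tablename__ = 'categories'
    category_id = Column(Integer, primary_key=True)
    category_name = Column(String)

class Article(Base):
    __tablename__ = 'articles'
    article_id = Column(Integer, primary_key=True)
    content = Column(String)
    # many-to-many via the mapping table above
    categories = relationship('Category', secondary=article_categories)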