Question: string, lengths 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, lengths 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, lengths 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, lengths 6 to 90
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
Looking for any advice I can get. I have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs. I'm writing from Python via pymongo, and the insert statement is "[collection].insert([document], safe=True)" I decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine. Am I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. Here I'm not doing any commits...? Thanks, Dave
0
0
0
0
false
10,157,192
0
97
1
0
0
10,114,431
Try it with journaling turned off and see if the problem remains.
1
0
0
Distributed write job crashes remote machine with MongoDB server
1
mongodb,python-2.7,windows-server-2008-r2,pymongo,distributed-transactions
0
2012-04-11T21:43:00.000
We moved our SQL Server 2005 database to a new physical server, and since then it has been terminating any connection that persists for 30 seconds. We are experiencing this in Oracle SQL Developer and when connecting from Python using pyodbc. Everything worked perfectly before, and now Python returns this error after 30 seconds: ('08S01', '[08S01] [FreeTDS][SQL Server]Read from the server failed (20004) (SQLExecDirectW)')
0
1
1.2
0
true
10,145,890
0
549
1
0
0
10,145,201
First of all, what you need to do is profile the SQL Server to see what activity is happening: look for slow-running queries and CPU and memory bottlenecks. You can also include the timeout in the connection string like this: "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=SSPI;Connection Timeout=30"; and extend that number if you want. But remember that "timeout" doesn't mean the lifetime of the connection; it is just the time to wait while trying to establish a connection before terminating the attempt. I think this problem is more about database performance, or maybe a network issue.
1
0
0
SQL Server 2005 terminating connections after 30 sec
2
python,sql,sql-server,sql-server-2005,oracle-sqldeveloper
0
2012-04-13T17:06:00.000
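As a hedged illustration of the timeout remark in the answer above, this is roughly how the two different timeouts look from the Python side with pyodbc; the driver name, server, database, and credentials are placeholders.

    # Sketch of tuning timeouts from the Python side with pyodbc.
    import pyodbc

    conn_str = (
        "DRIVER={FreeTDS};"
        "SERVER=your-sql-server;PORT=1433;"
        "DATABASE=your_db;UID=your_user;PWD=your_password;"
    )

    # 'timeout' here is the login timeout in seconds (time allowed to
    # establish the connection), mirroring the 'Connection Timeout' setting
    # mentioned in the answer.
    cnxn = pyodbc.connect(conn_str, timeout=60)

    # This is a separate setting: the query timeout applied to each statement.
    cnxn.timeout = 120

    cursor = cnxn.cursor()
    cursor.execute("SELECT COUNT(*) FROM sys.objects")
    print(cursor.fetchone()[0])
    cnxn.close()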
I've extended sorl-thumbnail's KVStoreBase class, and made a key-value backend that uses a single MongoDB collection. This was done in order to avoid installing a discrete key-value store (e.g. Redis). Should I clear the collection every once in a while? What are the downsides?
2
0
1.2
0
true
11,557,675
0
298
1
0
0
10,146,087
Only clear the collection if low disk usage is more important to you than fast access times. The downside is that your users will all hit un-cached thumbnails simultaneously (and the thumbnails will all be recomputed simultaneously). Just run python manage.py thumbnail cleanup. This cleans stale entries out of the key-value store: it removes references to images that do not exist, removes thumbnail references and their actual files for images that do not exist, and removes thumbnails for unknown images.
1
0
0
Using sorl-thumbnail with MongoDB storage
1
django,mongodb,python-imaging-library,sorl-thumbnail
0
2012-04-13T18:11:00.000
I am currently writing a script in Python which uploads data to a localhost MySQL DB. I am now looking to relocate this MySQL DB to a remote server with a static IP address. I have a web hosting facility, but it only allows clients to connect to the MySQL DB if I specify the domain / IP address from which clients will connect. My Python script will be run on a number of computers that connect via a mobile broadband dongle, and therefore the IP addresses will vary on a day-to-day basis as the IP address is allocated dynamically. Any suggestions on how to overcome this issue with my web hosting facility (cPanel), or alternatively, any suggestions on MySQL hosting services that allow remote access from any IP address (assuming clients successfully authenticate with passwords etc.)? Would SSH possibly address this and allow me to transmit data?
0
2
0.132549
0
false
10,157,409
0
1,360
1
0
0
10,157,380
Go to cPanel and add the wildcard % in the remote MySQL connection options (cPanel > Remote MySQL).
1
0
0
Remote Access to MySQL DB (Hosting Options)
3
python,mysql,mysql-python
0
2012-04-14T21:07:00.000
I am trying to read a phone number field from an .xls file using xlrd (Python), but I always get a float, e.g. I get the phone number as 8889997777.0. How can I get rid of the floating-point format and convert it to a string, so I can store it in my local MongoDB from Python as a regular phone number, e.g. 8889997777?
2
0
0
0
false
10,169,963
0
2,371
2
0
0
10,169,949
Did you try using int(phoneNumberVar) or in your case int(8889997777.0)?
1
0
0
python xlrd reading phone number from xls becomes float
2
python,string,floating-point,xls,xlrd
0
2012-04-16T07:06:00.000
I am trying to read a phone number field from an .xls file using xlrd (Python), but I always get a float, e.g. I get the phone number as 8889997777.0. How can I get rid of the floating-point format and convert it to a string, so I can store it in my local MongoDB from Python as a regular phone number, e.g. 8889997777?
2
4
0.379949
0
false
10,170,261
0
2,371
2
0
0
10,169,949
You say: "python xlrd reading phone number from xls becomes float". This is incorrect: the value is already a float inside your xls file, and xlrd reports exactly what it finds. You can use str(int(some_float_value)) to do what you want.
1
0
0
python xlrd reading phone number from xls becomes float
2
python,string,floating-point,xls,xlrd
0
2012-04-16T07:06:00.000
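A small sketch of the str(int(...)) conversion suggested above, wired into an xlrd read loop; the file name, sheet index, and column index are placeholders.

    import xlrd

    book = xlrd.open_workbook("contacts.xls")
    sheet = book.sheet_by_index(0)

    phone_numbers = []
    for row in range(1, sheet.nrows):          # skip the header row
        value = sheet.cell_value(row, 2)       # phone number column
        if isinstance(value, float):
            value = str(int(value))            # 8889997777.0 -> "8889997777"
        else:
            value = str(value).strip()
        phone_numbers.append(value)

    print(phone_numbers)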
I would like to know if somebody knows a way to customize the CSV output in HTSQL, especially the delimiter and the encoding? I would like to avoid iterating over each result and find a way through configuration and/or extensions. Thanks in advance. Anthony
1
3
0.53705
1
false
10,210,348
0
170
1
0
0
10,205,990
If you want TAB as a delimiter, use tsv format (e.g. /query/:tsv instead of /query/:csv). There is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client.
1
0
0
Customizing csv output in htsql
1
python,sql,htsql
0
2012-04-18T08:52:00.000
I am new to OpenERP and I have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do the job (the files for the SQL code)?
2
0
0
0
false
10,208,766
1
794
2
0
0
10,208,147
OpenERP uses PostgreSQL as its back-end. PostgreSQL can be managed with pgAdmin III (a Postgres GUI); you can write SQL queries there and add/delete records from there. However, it is not advisable to insert/remove data directly in the database!
1
0
0
OpenERP: insert Data code
3
python,postgresql,openerp
0
2012-04-18T11:10:00.000
I am new to OpenERP and I have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do the job (the files for the SQL code)?
2
0
0
0
false
10,225,346
1
794
2
0
0
10,208,147
Adding columns in the .py files of the modules you want to change will add the corresponding columns to the database tables (visible in pgAdmin III), and defining classes will create tables. When the fields are displayed in the XML views and values are entered into those fields through the interface, the values get stored in the corresponding table in the database.
1
0
0
OpenERP: insert Data code
3
python,postgresql,openerp
0
2012-04-18T11:10:00.000
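To make the answer above concrete, here is a rough, hedged sketch of the OpenERP v6 pattern it describes: columns declared in a module's .py file become columns of the corresponding PostgreSQL table, and the class definition creates the table itself. The model and field names below are invented for illustration.

    from osv import osv, fields

    class library_book(osv.osv):
        _name = 'library.book'            # creates the table "library_book"
        _columns = {
            'name': fields.char('Title', size=128, required=True),
            'author': fields.char('Author', size=128),
            'published': fields.date('Published on'),
        }

    library_book()                        # register the model with OpenERP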
I have a web application that was built using CakePHP with MySQL as the DB. The web app also exposes a set of web services that get and update data in the MySQL DB. I would like to extend the app to provide a fresh set of web services, but would like to use a Python-based framework like web2py/Django etc. Since both will be working off the same DB, will it cause any problems? The reason I want to do it is that the initial app/web services were done by somebody else, and now I want to extend it and am more comfortable using Python/web2py than PHP/CakePHP.
0
0
0
0
false
10,233,231
1
73
1
0
0
10,233,187
This is one of the reasons to use an RDBMS: to provide access to the same data for different users and applications. There should be absolutely no problem with this.
1
0
0
Same MySQL DB working with a PHP and a Python framework
2
php,python,mysql,django,cakephp
0
2012-04-19T17:08:00.000
I'm trying to drop a few tables with the "DROP TABLE" command, but for an unknown reason the program just "sits" and doesn't delete the table that I want it to in the database. I have 3 tables in the database: Product, Bill and Bill_Products, which is used for referencing products in bills. I managed to delete/drop Product, but I can't do the same for Bill and Bill_Products. I'm issuing the same "DROP TABLE Bill CASCADE;" command, but the command line just stalls. I've also used the simple version without the CASCADE option. Do you have any idea why this is happening? Update: I've been thinking that it is possible for the database to keep some references from products to bills, and maybe that's why it won't delete the Bill table. So, for that matter, I issued a simple SELECT * from Bill_Products, and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
42
5
0.124353
0
false
19,072,541
0
55,426
4
0
0
10,317,114
I had the same problem. There were no locks on the table; a reboot helped.
1
0
0
Postgresql DROP TABLE doesn't work
8
python,database,django,postgresql
0
2012-04-25T13:50:00.000
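Before resorting to a reboot, it can be worth checking whether another session holds a lock on the table, since a hanging DROP TABLE is usually waiting on a lock. Below is a small, hedged sketch of such a check from Python with psycopg2; the connection string is a placeholder, and the pg_stat_activity column names shown (procpid, current_query) are the pre-9.2 ones (9.2 and later renamed them to pid and query).

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")   # placeholder DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT l.relation::regclass, l.mode, l.granted, a.procpid, a.current_query
        FROM pg_locks l
        JOIN pg_stat_activity a ON a.procpid = l.pid
        WHERE l.relation IS NOT NULL
        ORDER BY l.relation
    """)
    # Rows with granted = False are waiting (for example the stuck DROP TABLE);
    # granted rows on the same relation belong to the sessions blocking it.
    for relation, mode, granted, pid, query in cur.fetchall():
        print(relation, mode, granted, pid, query)
    cur.close()
    conn.close()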
I'm trying to drop a few tables with the "DROP TABLE" command, but for an unknown reason the program just "sits" and doesn't delete the table that I want it to in the database. I have 3 tables in the database: Product, Bill and Bill_Products, which is used for referencing products in bills. I managed to delete/drop Product, but I can't do the same for Bill and Bill_Products. I'm issuing the same "DROP TABLE Bill CASCADE;" command, but the command line just stalls. I've also used the simple version without the CASCADE option. Do you have any idea why this is happening? Update: I've been thinking that it is possible for the database to keep some references from products to bills, and maybe that's why it won't delete the Bill table. So, for that matter, I issued a simple SELECT * from Bill_Products, and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
42
2
0.049958
0
false
40,749,694
0
55,426
4
0
0
10,317,114
Old question, but I ran into a similar issue. I could not reboot the database, so I tested a few things until this sequence worked: truncate table foo; then drop index concurrently foo_something; (repeated 4-5 times); then alter table foo drop column whatever_foreign_key; (repeated 3 times); then alter table foo drop column id; and finally drop table foo;
1
0
0
Postgresql DROP TABLE doesn't work
8
python,database,django,postgresql
0
2012-04-25T13:50:00.000
I'm trying to drop a few tables with the "DROP TABLE" command, but for an unknown reason the program just "sits" and doesn't delete the table that I want it to in the database. I have 3 tables in the database: Product, Bill and Bill_Products, which is used for referencing products in bills. I managed to delete/drop Product, but I can't do the same for Bill and Bill_Products. I'm issuing the same "DROP TABLE Bill CASCADE;" command, but the command line just stalls. I've also used the simple version without the CASCADE option. Do you have any idea why this is happening? Update: I've been thinking that it is possible for the database to keep some references from products to bills, and maybe that's why it won't delete the Bill table. So, for that matter, I issued a simple SELECT * from Bill_Products, and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
42
0
0
0
false
69,412,889
0
55,426
4
0
0
10,317,114
The same thing happened to me, except that it was because I forgot the semicolon. Face palm.
1
0
0
Postgresql DROP TABLE doesn't work
8
python,database,django,postgresql
0
2012-04-25T13:50:00.000
I'm trying to drop a few tables with the "DROP TABLE" command, but for an unknown reason the program just "sits" and doesn't delete the table that I want it to in the database. I have 3 tables in the database: Product, Bill and Bill_Products, which is used for referencing products in bills. I managed to delete/drop Product, but I can't do the same for Bill and Bill_Products. I'm issuing the same "DROP TABLE Bill CASCADE;" command, but the command line just stalls. I've also used the simple version without the CASCADE option. Do you have any idea why this is happening? Update: I've been thinking that it is possible for the database to keep some references from products to bills, and maybe that's why it won't delete the Bill table. So, for that matter, I issued a simple SELECT * from Bill_Products, and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
42
4
0.099668
0
false
60,367,779
0
55,426
4
0
0
10,317,114
I ran into this today: I was issuing DROP TABLE TableNameHere and getting ERROR: table "tablenamehere" does not exist. I realized that for tables created with case-sensitive (quoted) names, as mine was, you need to quote the table name: DROP TABLE "TableNameHere"
1
0
0
Postgresql DROP TABLE doesn't work
8
python,database,django,postgresql
0
2012-04-25T13:50:00.000
I've been reading the Django Book and it's great so far, except when something doesn't work properly. I have been trying for two days to install the psycopg2 plugin with no luck. I navigate to the unzipped directory and run setup.py install, and it returns "You must have postgresql dev for building a serverside extension or libpq-dev for client side." I don't know what any of this means, and Google returns results tossing around a lot of terms I don't really understand. I've been trying to learn Django for about a week now, plus Linux, so any help would be great. Thanks. Btw, I have installed PostgreSQL and pgAdmin III from the installer pack. I also tried sudo apt-get post.... and some stuff happens... but I'm lost.
6
-1
-0.049958
0
false
20,124,244
1
4,157
2
0
0
10,321,568
sudo apt-get install python-psycopg2 should work fine, since it was the solution for me as well.
1
0
0
Django with psycopg2 plugin
4
python,django
0
2012-04-25T18:25:00.000
I've been reading the Django Book and it's great so far, except when something doesn't work properly. I have been trying for two days to install the psycopg2 plugin with no luck. I navigate to the unzipped directory and run setup.py install, and it returns "You must have postgresql dev for building a serverside extension or libpq-dev for client side." I don't know what any of this means, and Google returns results tossing around a lot of terms I don't really understand. I've been trying to learn Django for about a week now, plus Linux, so any help would be great. Thanks. Btw, I have installed PostgreSQL and pgAdmin III from the installer pack. I also tried sudo apt-get post.... and some stuff happens... but I'm lost.
6
3
0.148885
0
false
22,528,687
1
4,157
2
0
0
10,321,568
I'm working on Xubuntu (12.04) and I encountered the same error when I wanted to install django-toolbelt. I solved it with the following commands: sudo apt-get install python-dev, sudo apt-get install libpq-dev, and sudo apt-get install python-psycopg2. I hope this information is helpful for someone else.
1
0
0
Django with psycopg2 plugin
4
python,django
0
2012-04-25T18:25:00.000
I have a daemon process witch spawns child processes using multiprocessing to do some work, each child process opens its own connection handle do DB (postgres in my case). Jobs to processes are passed via Queue and if queue is empty processes invoke sleep for some time, and recheck queue How can I implement "graceful shutdown" on SIGTERM? Each subprocess should terminate as fast as possible, with respect of closing/terminating current cursor/transaction and db connection, and opened files.
3
5
1.2
0
true
10,322,481
0
403
1
1
0
10,322,422
Store all the open files/connections/etc. in a global structure, and close them all and exit in your SIGTERM handler.
1
0
0
Graceful shutdown: close db connections and opened files, stop work on SIGTERM, in multiprocessing
1
python,database,multiprocessing,signals
0
2012-04-25T19:27:00.000
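Below is a minimal, hedged sketch of the approach in the answer above: keep every open resource in one structure and close them all in a SIGTERM handler. In the real setup each multiprocessing child would install its own handler; the connection string, file path, and the job loop body are placeholders.

    import signal
    import sys
    import time
    import psycopg2

    resources = {"files": [], "db": None}

    def shutdown(signum, frame):
        # Close everything we know about, then exit as fast as possible.
        for f in resources["files"]:
            f.close()
        db = resources["db"]
        if db is not None:
            db.rollback()          # or commit(), depending on your semantics
            db.close()
        sys.exit(0)

    def worker():
        signal.signal(signal.SIGTERM, shutdown)
        resources["db"] = psycopg2.connect("dbname=jobs user=worker")
        resources["files"].append(open("/tmp/worker.log", "a"))
        while True:
            # Placeholder for: pull a job from the queue, do the work,
            # sleep briefly if the queue is empty.
            time.sleep(1)

    if __name__ == "__main__":
        worker()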
I have done my homework in reading about protection against SQL injection attacks. I know that I need to use parameter binding, but: I already do this, thank you. I know that some of the DB drivers my users use implement parameter binding in the most stupid possible way, i.e., they are prone to SQL injection attacks. I could try to restrict which DB driver they can use, but this strategy is doomed to fail. Even if I use a decent DB driver, I do not trust myself never to forget to use parameter binding at least once. So, I would like to add an extra layer of protection by adding extra sanitization of HTTP-facing user input. The trick is that I know that this is hard to do in general, so I would rather use a well-audited, well-designed third-party library that was written by security professionals to escape input strings into less dangerous content, but I could not find any obvious candidate. I use Python, so I would be interested in Python-based solutions, but other suggestions are fine if I can bind them to Python.
0
1
0.049958
0
false
10,329,694
0
621
4
0
0
10,329,486
I don't know if this is in any way applicable, but I am just putting it up here for completeness, and experts can downvote me at will... not to mention I have concerns about its performance in some cases. I was once tasked with protecting an aging web app written in classic ASP against SQL injection (they were getting hit pretty badly at the time). I didn't have time to go through all the code (not my choice), so I added a method to one of our standard include files that looked at everything being submitted by the user (it iterated through the request params) and checked it for blacklisted HTML tags (e.g. script tags) and SQL injection signs (e.g. ";--" and "';shutdown"). If it found one, it redirected the user, told them their submission was suspicious, and said that if they had an issue they should call or email... blah blah. It also recorded the injection attempt in a table (once it had been escaped), with details about the IP address, time, etc. of the attack. Overall it worked a treat... at least the attacks stopped. Every web technology I have used has some way of fudging something like this in, and it only took me about a day to develop and test. Hope it helps; I would not call it an industry standard or anything. tl;dr: check all request params against a blacklist of strings.
1
0
0
protecting against sql injection attacks beyond parameter binding
4
python,sql,sql-injection
0
2012-04-26T08:09:00.000
I have done my homework in reading about protection against SQL injection attacks. I know that I need to use parameter binding, but: I already do this, thank you. I know that some of the DB drivers my users use implement parameter binding in the most stupid possible way, i.e., they are prone to SQL injection attacks. I could try to restrict which DB driver they can use, but this strategy is doomed to fail. Even if I use a decent DB driver, I do not trust myself never to forget to use parameter binding at least once. So, I would like to add an extra layer of protection by adding extra sanitization of HTTP-facing user input. The trick is that I know that this is hard to do in general, so I would rather use a well-audited, well-designed third-party library that was written by security professionals to escape input strings into less dangerous content, but I could not find any obvious candidate. I use Python, so I would be interested in Python-based solutions, but other suggestions are fine if I can bind them to Python.
0
2
1.2
0
true
10,336,420
0
621
4
0
0
10,329,486
"I already do this, thank you." Good; with just this, you can be totally sure (yes, totally sure) that user inputs are being interpreted only as values. You should direct your energies toward securing your site against other kinds of vulnerabilities (XSS and CSRF come to mind; make sure you're using SSL properly, etc.). "I know that some of the DB drivers my users use implement parameter binding in the most stupid possible way, i.e., they are prone to SQL injection attacks. I could try to restrict which DB driver they can use, but this strategy is doomed to fail." Well, there's no such thing as foolproof, because fools are so ingenious. If your audience is determined to undermine all of your hard work securing their data, you can't really do anything about it. What you can do is determine which drivers you believe are secure and generate a big scary warning when you detect that your users are using something else. "Even if I use a decent DB driver, I do not trust myself never to forget to use parameter binding at least once." So don't do that! During development, log every SQL statement sent to your driver. Check, on a regular basis, that user data is never in this log (or is logged as a separate event, for the parameters). SQL injection is basically string formatting. You can usually follow each database transaction backwards to the original SQL; if user data is formatted into it somewhere along the way, you have a problem. When scanning over projects, I find that I'm able to locate these at a rate of about one per minute, with effective use of grep and my editor of choice. Unless you have tens of thousands of different SQL statements, going over each one shouldn't really be prohibitively difficult. Try to keep your database interactions well isolated from the rest of your application: mixing SQL in with the rest of your code makes it hard to maintain, and hard to do the checks I've described above. Ideally, you should go through some sort of database abstraction (a full ORM or maybe something thinner), so that you can work on just your database-related code when that's the task at hand.
1
0
0
protecting against sql injection attacks beyond parameter binding
4
python,sql,sql-injection
0
2012-04-26T08:09:00.000
I have done my homework in reading about protection against SQL injection attacks. I know that I need to use parameter binding, but: I already do this, thank you. I know that some of the DB drivers my users use implement parameter binding in the most stupid possible way, i.e., they are prone to SQL injection attacks. I could try to restrict which DB driver they can use, but this strategy is doomed to fail. Even if I use a decent DB driver, I do not trust myself never to forget to use parameter binding at least once. So, I would like to add an extra layer of protection by adding extra sanitization of HTTP-facing user input. The trick is that I know that this is hard to do in general, so I would rather use a well-audited, well-designed third-party library that was written by security professionals to escape input strings into less dangerous content, but I could not find any obvious candidate. I use Python, so I would be interested in Python-based solutions, but other suggestions are fine if I can bind them to Python.
0
0
0
0
false
10,336,013
0
621
4
0
0
10,329,486
"So, I would like to add an extra layer of protection by adding extra sanitization of HTTP-facing user input." This strategy is doomed to fail.
1
0
0
protecting against sql injection attacks beyond parameter binding
4
python,sql,sql-injection
0
2012-04-26T08:09:00.000
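To illustrate the point above that SQL injection is basically string formatting, here is a small self-contained contrast between a formatted query and a bound-parameter query. The stdlib sqlite3 module stands in for whatever DB-API driver is actually in use, and the table and data are invented.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"

    # BAD: user data is formatted directly into the SQL string.
    unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
    print(conn.execute(unsafe).fetchall())        # returns every row

    # GOOD: user data travels separately as a bound parameter.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing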
I have done my homework in reading about protection against SQL injection attacks. I know that I need to use parameter binding, but: I already do this, thank you. I know that some of the DB drivers my users use implement parameter binding in the most stupid possible way, i.e., they are prone to SQL injection attacks. I could try to restrict which DB driver they can use, but this strategy is doomed to fail. Even if I use a decent DB driver, I do not trust myself never to forget to use parameter binding at least once. So, I would like to add an extra layer of protection by adding extra sanitization of HTTP-facing user input. The trick is that I know that this is hard to do in general, so I would rather use a well-audited, well-designed third-party library that was written by security professionals to escape input strings into less dangerous content, but I could not find any obvious candidate. I use Python, so I would be interested in Python-based solutions, but other suggestions are fine if I can bind them to Python.
0
-1
-0.049958
0
false
10,329,550
0
621
4
0
0
10,329,486
Well, in PHP I use preg_replace to protect my website from SQL injection attacks; preg_match can also be used. Try searching for an equivalent function in Python.
1
0
0
protecting against sql injection attacks beyond parameter binding
4
python,sql,sql-injection
0
2012-04-26T08:09:00.000
We have developed an application using Django 1.3.1 and Python 2.7.2, with SQL Server 2008 as the database. All of these are hosted on Windows Server 2008 R2 on a VM. The clients have Windows 7 as the OS. We developed the application without a VM in mind; all of a sudden the client has come back saying they can only host the application on a VM. The challenge now is to access the application on the server, which is on the VM, from the client machines. If anyone has done this kind of application, please share the steps to access the application on the VM. I am good with standalone systems but have no knowledge of VM accessibility. We have finished the project and are waiting for someone to respond ASAP. Thanks in advance for your guidance. Regards, Shiva.
0
0
1.2
0
true
10,331,810
1
422
1
1
0
10,331,518
Maybe this could help you a bit, although my setup is slightly different. I am running an ASP.NET web app developed on Windows 7 via VMware Fusion on OS X. I access the web app from outside the VM (a browser on the Mac or other computers/phones within the network). Here are the needed settings: set the network adapter to Bridged, so that the VM has its own IP address, and configure the VM to have a static IP. At this point, the VM is acting as its own machine, so you can access it as if it were another server sitting on the network.
1
0
0
Steps to access Django application hosted in VM from Windows 7 client
2
django,wxpython,sql-server-2008-r2,vmware,python-2.7
0
2012-04-26T10:21:00.000
What's the best way to create an intentionally empty query in SQLAlchemy? For example, I've got a few functions which build up the query (adding WHERE clauses, for example), and at some points I know that the result will be empty. What's the best way to create a query that won't return any rows? Something like Django's QuerySet.none().
36
34
1
0
false
12,837,029
0
9,094
1
0
0
10,345,327
If you need the proper return type, just return session.query(MyObject).filter(sqlalchemy.sql.false()). When evaluated, this will still hit the DB, but it should be fast. If you don't have an ORM class to "query", you can use false() for that as well: session.query(sqlalchemy.false()).filter(sqlalchemy.false())
1
0
0
SQLAlchemy: create an intentionally empty query?
4
python,sqlalchemy
0
2012-04-27T05:41:00.000
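Here is a short, self-contained sketch of the suggestion above, using an in-memory SQLite database and a throwaway model to show that the filter(false()) query returns nothing and still chains with further filters.

    import sqlalchemy
    from sqlalchemy import Column, Integer, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class MyObject(Base):
        __tablename__ = "my_object"
        id = Column(Integer, primary_key=True)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([MyObject(), MyObject()])
    session.commit()

    query = session.query(MyObject).filter(sqlalchemy.sql.false())
    print(query.count())                         # 0
    print(query.filter(MyObject.id == 1).all())  # still [], filters chain fine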
I'm using Django 1.4, SQLite3 and django-facebookconnect, following the instructions in the wiki to set it up. "python manage.py syncdb" throws an error: Creating tables ... Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_content_type Creating table django_session Creating table django_site Creating table blog_post Creating table blog_comment Creating table django_admin_log Traceback (most recent call last): File "manage.py", line 10, in execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/syncdb.py", line 91, in handle_noargs sql, references = connection.creation.sql_create_model(model, self.style, seen_models) File "/usr/local/lib/python2.7/dist-packages/django/db/backends/creation.py", line 44, in sql_create_model col_type = f.db_type(connection=self.connection) TypeError: db_type() got an unexpected keyword argument 'connection' Is there any solution?
0
1
1.2
0
true
10,486,708
1
282
1
0
0
10,356,581
You should use django-facebook instead; it does that and more, and it is actively supported :)
1
0
0
Getting db_type() error while using django-facebook connect for DjangoApp
1
python,django,facebook,sqlite
0
2012-04-27T19:17:00.000
I have a large dataset of events in a Postgres database that is too large to analyze in memory. Therefore I would like to quantize the datetimes to a regular interval and perform group by operations within the database prior to returning results. I thought I would use SqlSoup to iterate through the records in the appropriate table and make the necessary transformations. Unfortunately I can't figure out how to perform the iteration in such a way that I'm not loading references to every record into memory at once. Is there some way of getting one record reference at a time in order to access the data and update each record as needed? Any suggestions would be most appreciated! Chris
0
1
0.197375
0
false
10,360,094
0
333
1
0
0
10,359,617
After talking with some folks, it's pretty clear the better answer is to use Pig to process and aggregate my data locally. At the scale I'm operating at, it wasn't clear Hadoop was the appropriate tool to be reaching for. One person I talked to about this suggested Pig will be orders of magnitude faster than in-DB operations at my scale, which is about 10^7 records.
1
0
0
Data Transformation in Postgres Using SqlSoup
1
python,postgresql,sqlsoup
0
2012-04-28T00:57:00.000
I have a Python script that gets data from a USB weather station; it puts the data into MySQL whenever data is received from the station. I have a MySQL class with an insert function, and what I want is for the function to check whether it has been run in the last 5 minutes and, if it has, quit. I could not find any code on the internet that does this. Maybe I need to have a sub-process, but I am not familiar with that at all. Does anyone have an example that I can use?
1
0
0
0
false
10,366,467
0
489
2
0
0
10,366,424
Just derive a new class and override the insert function. In the overriding function, check the last insert time and call the parent's insert method only if it has been more than five minutes, and of course update the most recent insert time.
1
0
0
Python, function quit if it has been run the last 5 minutes
5
python,python-2.7
0
2012-04-28T18:36:00.000
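A small sketch of the subclass-and-override idea from the answer above; the parent class, its insert signature, and the reading format are invented for illustration.

    import time

    class WeatherDB(object):
        def insert(self, reading):
            print("inserting", reading)     # stand-in for the real MySQL insert

    class ThrottledWeatherDB(WeatherDB):
        MIN_INTERVAL = 5 * 60               # five minutes, in seconds

        def __init__(self):
            super(ThrottledWeatherDB, self).__init__()
            self._last_insert = 0.0

        def insert(self, reading):
            now = time.time()
            if now - self._last_insert < self.MIN_INTERVAL:
                return                      # ran too recently: quietly quit
            self._last_insert = now
            super(ThrottledWeatherDB, self).insert(reading)

    db = ThrottledWeatherDB()
    db.insert({"temp_c": 21.5})    # runs
    db.insert({"temp_c": 21.6})    # skipped, less than 5 minutes later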
I have a Python script that gets data from a USB weather station; it puts the data into MySQL whenever data is received from the station. I have a MySQL class with an insert function, and what I want is for the function to check whether it has been run in the last 5 minutes and, if it has, quit. I could not find any code on the internet that does this. Maybe I need to have a sub-process, but I am not familiar with that at all. Does anyone have an example that I can use?
1
0
0
0
false
10,366,452
0
489
2
0
0
10,366,424
Each time the function is run, save a file with the current time. When the function is run again, check the time stored in the file and make sure it is old enough.
1
0
0
Python, function quit if it has been run the last 5 minutes
5
python,python-2.7
0
2012-04-28T18:36:00.000
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
0
2
0.099668
0
false
10,396,700
1
1,158
3
0
0
10,396,315
There's a huge distinction to be made between Redis and MongoDB for your particular needs, in that Redis, unlike MongoDB, doesn't facilitate value queries. You can use MongoDB to embed the comments within the post document, which means you get the post and the comments in a single query, yet you could also query for post documents based on tags, the author, etc. You'll definitely want to go with MongoDB. Redis is great, but it's not a proper fit for what I'd believe you'll need from it.
1
0
0
mongo db or redis for a facebook like site?
4
python,django,database-design,mongodb,redis
0
2012-05-01T10:16:00.000
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
0
0
0
0
false
10,403,789
1
1,158
3
0
0
10,396,315
First, loosely couple your app and your persistence so that you can swap them out at a very granular level. For example, you want to be able to move one service from mongo to redis as your needs evolve. Be able to measure your services and appropriately respond to them individually. Second, you are unlikely to find one persistence solution that fits every workflow in your application at scale. Don't be afraid to use more than one. Mongo is a good tool for a set of problems, as is Redis, just not necessarily the same problems.
1
0
0
mongo db or redis for a facebook like site?
4
python,django,database-design,mongodb,redis
0
2012-05-01T10:16:00.000
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
0
1
0.049958
0
false
10,396,466
1
1,158
3
0
0
10,396,315
These things are subjective and can be looked at from different directions. But if you have already decided to go with a NoSQL solution and are trying to choose between MongoDB and Redis, I think it is better to go with MongoDB, as you should be able to store a large number of posts and MongoDB documents are better suited to representing posts. Redis can only store up to its max memory limit, but it is super fast. So if you need faster access to some things, you can save the posts in MongoDB and keep the IDs of those posts in Redis to access them faster.
1
0
0
mongo db or redis for a facebook like site?
4
python,django,database-design,mongodb,redis
0
2012-05-01T10:16:00.000
I am facing an issue with setting the value of an Excel cell. I get data from a table cell in an MS Word document (docx) and print it to the output console. The problem is that the data in the cell is just the word "Hour", with no apparent leading or trailing printable characters like whitespace. But when I print it using Python's print() function, it shows an unexpected character, like a small "?" in a rectangle. I don't know where it comes from. And when I write the same variable that holds the word "Hour" to an Excel cell, it shows a bold dot (.) in the cell. What can be the problem? Any help is much appreciated. I am using Python 3.2 and PyWin32 3.2 on Win7. Thanks.
3
3
1.2
0
true
10,423,918
0
3,883
1
0
0
10,423,593
Try using value.rstrip('\r\n') to remove any carriage returns (\r) or newlines (\n) at the end of your string value.
1
0
1
Unwanted character in Excel Cell In Python
2
python,excel,ms-word,character
0
2012-05-03T00:30:00.000
Sometimes an application requires quite a few SQL queries before it can do anything useful. I was wondering if there is a way to send those as a batch to the database, to avoid the overhead of going back and forth between the client and the server? If there is no standard way to do it, I'm using the python bindings of MySQL. PS: I know MySQL has an executemany() function, but that's only for the same query executed many times with different parameters, right?
0
0
0
0
false
10,434,644
0
92
1
0
0
10,434,523
This process works best on inserts. Make all your SQL queries into stored procedures; these will eventually become child stored procedures. Create a master stored procedure to run all the other stored procedures. Modify the master stored procedure to accept the values required by the child stored procedures. Modify the master stored procedure to accept commands, using "if" statements to decide which child stored procedures to run. If you need to return data from the database, use one stored procedure at a time.
1
0
0
Grouping SQL queries
1
mysql,sql,mysql-python
0
2012-05-03T15:27:00.000
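As a hedged illustration of the answer above, this is roughly what calling a "master" stored procedure from Python with MySQLdb could look like; the procedure name, its arguments, and the connection details are all placeholders.

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = conn.cursor()

    # One round trip runs the whole batch; the first argument tells the
    # hypothetical master procedure which child procedures to execute.
    cur.callproc("master_proc", ("initial_setup", 42))

    # If the procedure returns result sets, read them one by one.
    rows = cur.fetchall()
    while cur.nextset():
        rows = cur.fetchall()

    conn.commit()
    cur.close()
    conn.close()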
Let's say I have 100 servers, each running a daemon - let's call it server - and that server is responsible for spawning a thread for each user of this particular service (let's say 1000 threads per server). Every N seconds each thread does something and gets information for that particular user (this request/response model cannot be changed). The problem I have is that sometimes a thread hangs and stops doing anything. I need some way to know that that user's data is stale and needs to be refreshed. The only idea I have is that every 5N seconds each thread updates a MySQL record associated with its user (a last_scanned column in the users table), and another process checks that table every 15N seconds; if the last_scanned column is not current, it restarts the thread.
1
1
1.2
0
true
10,440,880
1
221
1
1
0
10,440,277
The general way to handle this is to have the threads report their status back to the server daemon. If you haven't seen a status update within the last 5N seconds, then you kill the thread and start another. You can keep track of the current active threads that you've spun up in a list, then just loop through them occasionally to determine state. You of course should also fix the errors in your program that are causing threads to exit prematurely. Premature exits and killing a thread could also leave your program in an unexpected, non-atomic state. You should probably also have the server daemon run a cleanup process that makes sure any items in your queue, or whatever you're using to determine the workload, get reset after a certain period of inactivity.
1
0
0
Distributed server model
1
python,distributed-computing
0
2012-05-03T22:39:00.000
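A small sketch of the heartbeat idea in the accepted answer: each worker thread records the time of its last completed cycle, and a supervisor loop restarts anything that has gone quiet. The per-user work itself is a placeholder, and N is set arbitrarily to 30 seconds.

    import threading
    import time

    N = 30                        # seconds between work cycles
    STALE_AFTER = 5 * N           # "5N" seconds without a heartbeat = stale
    heartbeats = {}               # user_id -> timestamp of last completed cycle
    lock = threading.Lock()

    def worker(user_id):
        while True:
            # Placeholder for the real request/response work for this user.
            with lock:
                heartbeats[user_id] = time.time()
            time.sleep(N)

    def supervise(threads):
        while True:
            now = time.time()
            for user_id in list(threads):
                if now - heartbeats.get(user_id, 0) > STALE_AFTER:
                    # Stale: the user's data needs a refresh, so abandon the
                    # hung thread and spawn a replacement.
                    t = threading.Thread(target=worker, args=(user_id,))
                    t.daemon = True
                    t.start()
                    threads[user_id] = t
            time.sleep(N)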
I have come across a requirement to access a set of databases on a MongoDB server, using the TurboGears framework. I need to list the databases and allow the user to select one and move on. As far as I can tell, TurboGears does support multiple databases, but they need to be specified beforehand in development.ini. Is there a way to just connect to the DB server (or to a particular database first) and then get the list of databases and select one on the fly?
2
2
1.2
0
true
10,650,606
0
305
1
0
0
10,495,324
For SQLAlchemy you can achieve something like that using a smarter Session: just subclass the sqlalchemy.orm.Session class and override the get_bind(self, mapper=None, clause=None) method. That method is called each time the session has to decide which engine to use, and it is expected to return the engine itself. You can then store a list of engines wherever you prefer and return the correct one. When using Ming/MongoDB the same can probably be achieved by subclassing ming.Session in model/session.py and overriding the ming.Session.db property to return the right database.
1
0
0
How to change the database on the fly in python using TurboGear framework?
1
mongodb,python-3.x,turbogears2
0
2012-05-08T08:42:00.000
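For the SQLAlchemy half of the answer above, here is a hedged sketch of a Session subclass whose get_bind() picks an engine at request time; the engine URLs and the choose_database() helper are placeholders for however the user's selection is actually stored.

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session, sessionmaker

    engines = {
        "tenant_a": create_engine("postgresql://localhost/tenant_a"),
        "tenant_b": create_engine("postgresql://localhost/tenant_b"),
    }

    def choose_database():
        # In a real app this would read the user's selection, e.g. from the
        # request or the TurboGears session.
        return "tenant_a"

    class RoutingSession(Session):
        def get_bind(self, mapper=None, clause=None):
            return engines[choose_database()]

    DBSession = sessionmaker(class_=RoutingSession)
    session = DBSession()
    # session.query(SomeModel)... now runs against the currently chosen engine.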
I have one table with a Time32 column and a large number of rows. My problem is as follows: when my table reaches a thousand million rows, I want to start archiving every row older than a specified value. For creating the query I will use the Time32 column, which holds the timestamp of the collected data in each row. So, using this query I want to delete the old rows from the working table and store them in another table reserved for archive records. Is it possible? If yes, what is the most efficient way? I know about the whereAppend() method, but that method only copies records; it does not delete them from the actual table. Thanks for any advice. Cheers!
1
1
0.197375
0
false
10,497,547
0
1,114
1
0
0
10,496,821
The general way to archive records from one table of a given database to another is to copy the records into the target table and then delete the same records from the origin table. That said, depending on your database engine and the capabilities of the language built on top of it, you may be able to write atomic query commands that do an atomic 'copy then delete' for you, but that depends on your database engine's capabilities. In your case of archiving old records, a robust approach is to copy the records you want to archive in chunks, copying blocks of n records at a time (with n sized to the amount of data you can temporarily clone; it is a trade-off between temporary additional size and the overhead of each copy-then-delete step), then deleting those n records, and so on until all the records fulfilling your condition (Time32 field older than a given timestamp threshold) are archived.
1
0
0
Pytables - Delete rows from table by some criteria
1
python,database,python-2.7,hdf5,pytables
0
2012-05-08T10:26:00.000
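A hedged sketch of the copy-then-delete idea using the PyTables 2.x API the question mentions (whereAppend, getWhereList, removeRows). It assumes the column is literally named Time32 and that rows were appended in time order, so records older than the cutoff form one contiguous block at the start of the table; the file and node names are placeholders.

    import tables

    CHUNK = 100000
    cutoff = 1234567890            # archive everything with Time32 < cutoff

    h5 = tables.openFile("data.h5", mode="a")      # placeholder file name
    work = h5.root.working_table                   # placeholder node names
    archive = h5.root.archive_table

    while True:
        # Row numbers (within the first CHUNK rows) older than the cutoff.
        old = work.getWhereList("Time32 < cutoff", {"cutoff": cutoff}, stop=CHUNK)
        if len(old) == 0:
            break
        n_old = len(old)
        # Copy that leading block into the archive table, then delete it from
        # the working table.
        work.whereAppend(archive, "Time32 < cutoff", {"cutoff": cutoff}, stop=n_old)
        work.removeRows(0, n_old)
        work.flush()
        archive.flush()

    h5.close()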
I looked at the sqlite.org docs, but I am new to this, so bear with me. (I have a tiny bit of experience with MySQL, and I think using it would be overkill for what I am trying to do with my application.) From what I understand, I can initially create an SQLite db file locally on my Mac and add entries to it using a Firefox extension. I could then store any number of tables and images (as binary). Once my site that uses this db is live, I could upload the db file to any web hosting service, to any directory. In my site I could have a form that collects data and sends a request to write that data to the db file. Then I could have an iOS app that connects to the db and reads the data. Did I get this right? Would I be able to run a Python script that writes to SQLite? What questions should I ask a potential hosting service? (I want to leave MediaTemple, so I am looking around...) I don't want to be limited to a Windows server; I am assuming SQLite would run on Unix? Or does it depend on the hosting service? Thanks!
0
1
1.2
0
true
10,518,010
0
143
1
0
0
10,517,900
"I could upload the db file to any web hosting service, to any directory" - supposing that the service has the libraries installed to handle SQLite, and that SQLite is installed. "Would I be able to run a Python script that writes to SQLite?" Yes, well, maybe. As of Python 2.5, Python includes SQLite support as part of its standard library. "What questions should I ask a potential hosting service?" Usually their technical specs will list which databases/libraries/languages are supported. I have successfully run Python sites with SQLite databases on Dreamhost. "SQLite would run on Unix?" Most *nix flavors have pre-packaged SQLite installation binaries; the hosting provider should be able to tell you this as well.
1
0
0
Understanding SQLite conceptually
1
python,sqlite,web-hosting
0
2012-05-09T14:13:00.000
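For the "Python script that writes to SQLite" part, here is a tiny sketch using the sqlite3 module that ships with Python 2.5 and later; the file path and table are placeholders.

    import sqlite3

    conn = sqlite3.connect("/path/to/site.db")
    conn.execute("CREATE TABLE IF NOT EXISTS submissions (name TEXT, email TEXT)")
    conn.execute("INSERT INTO submissions VALUES (?, ?)",
                 ("Dave", "dave@example.com"))
    conn.commit()

    for row in conn.execute("SELECT name, email FROM submissions"):
        print(row)
    conn.close()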
I'm trying to install MySQLdb on a Windows client. The goal is, from the Windows client, to run a Python script that connects to a MySQL server on a Linux machine. Looking at the setup code (and based on the errors I get when I try to run setup.py for MySQLdb), it appears that I have to have my own version of MySQL on the Windows box. Is there a way (perhaps another module) that will let me accomplish this? I need to have people on multiple boxes run a script that interacts with a MySQL database on a central server.
0
1
0.066568
0
false
10,541,253
0
1,771
1
0
0
10,541,085
You don't need the entire MySQL database server, only the MySQL client libraries.
1
0
0
Install MYSQLdb python module without MYSQL local install
3
python,mysql,windows
0
2012-05-10T19:44:00.000
I know that Pyramid comes with a scaffold for SQLAlchemy. But what if I'm using the pyramid_jqm scaffold? How would you integrate or use SQLAlchemy then? When I create a model.py and import from sqlalchemy, I get an error that it couldn't find the module.
0
2
1.2
0
true
10,555,714
1
101
1
0
0
10,551,042
You have to set up your project in the same way that the alchemy scaffold is constructed. Put "sqlalchemy" in your setup.py requires field and run "python setup.py develop" to install the dependency. This is all just Python and unrelated to Pyramid.
1
0
0
Using sqlalchemy in pyramid_jqm
1
python,sqlalchemy,pyramid
0
2012-05-11T12:08:00.000
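A hedged sketch of what the answer describes: add SQLAlchemy to the scaffold-generated setup.py and reinstall the project in development mode. The project name and the rest of the requirements are placeholders; keep whatever your pyramid_jqm scaffold already generated.

    from setuptools import setup, find_packages

    requires = [
        'pyramid',
        'sqlalchemy',        # the line being added
    ]

    setup(
        name='myjqmproject',
        version='0.1',
        packages=find_packages(),
        install_requires=requires,
    )

    # Afterwards, from the project directory:
    #   python setup.py develop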
I am an absolute beginner using Google App Engine with Python 2.7. I was successful in creating the Hello World app, but any changes I make to the original app don't show up at localhost:8080. Is there a way to reset/refresh the localhost server? I tried to create new projects/directories with different content, but my localhost constantly shows the old "Hello world!". I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
0
2
0.132549
0
false
10,575,238
1
437
3
1
0
10,575,184
Those warnings shouldn't prevent you from seeing new 'content'; they simply mean that you are missing some libraries necessary to run local versions of CloudSQL (MySQL) and the Images API. The first thing to do is to try clearing your browser cache. What changes did you make to your Hello World app?
1
0
0
Localhost is not refreshing/resetting
3
python,google-app-engine
0
2012-05-13T20:57:00.000
I am an absolute beginner using Google App Engine with Python 2.7. I was successful in creating the Hello World app, but any changes I make to the original app don't show up at localhost:8080. Is there a way to reset/refresh the localhost server? I tried to create new projects/directories with different content, but my localhost constantly shows the old "Hello world!". I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
0
0
0
0
false
10,593,822
1
437
3
1
0
10,575,184
Press CTRL-F5 in your browser while on the page; this forces a cache refresh.
1
0
0
Localhost is not refreshing/resetting
3
python,google-app-engine
0
2012-05-13T20:57:00.000
I am an absolute beginner using Google App Engine with Python 2.7. I was successful in creating the Hello World app, but any changes I make to the original app don't show up at localhost:8080. Is there a way to reset/refresh the localhost server? I tried to create new projects/directories with different content, but my localhost constantly shows the old "Hello world!". I get the following in the log window: WARNING 2012-05-13 20:54:25,536 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. WARNING 2012-05-13 20:54:26,496 datastore_file_stub.py:518] Could not read datastore data from c:\users\tomek\appdata\local\temp\dev_appserver.datastore WARNING 2012-05-13 20:54:26,555 dev_appserver.py:3401] Could not initialize images API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging Please help...
0
0
0
0
false
41,388,817
1
437
3
1
0
10,575,184
You can try opening up the DOM reader (Mac: alt+command+i, Windows: shift+control+i), then reload the page. It's weird, but it works for me.
1
0
0
Localhost is not refreshing/resetting
3
python,google-app-engine
0
2012-05-13T20:57:00.000
I have found that ultramysql meets my requirements, but it has no documentation and no Windows binary package. I have a program that is heavy on internet downloads and MySQL inserts, so I use gevent to solve the multi-download-tasks problem. After I have downloaded and parsed the web pages, I get to insert the data into MySQL. Does monkey.patch_all() make MySQL operations async? Can anyone show me the correct way to go?
4
1
1.2
0
true
12,335,813
1
1,197
2
1
0
10,580,835
Postgres may be better suited due to its asynchronous capabilities
1
0
0
How to use mysql in gevent based programs in python?
2
python,mysql,gevent
0
2012-05-14T09:41:00.000
I have found that ultramysql meets my requirements, but it has no documentation and no Windows binary package. I have a program that is heavy on internet downloads and MySQL inserts, so I use gevent to solve the multi-download-tasks problem. After I have downloaded and parsed the web pages, I get to insert the data into MySQL. Does monkey.patch_all() make MySQL operations async? Can anyone show me the correct way to go?
4
1
0.099668
0
false
13,006,283
1
1,197
2
1
0
10,580,835
I think one solution is to use pymysql. Since pymysql uses Python sockets, it should work with gevent after monkey patching.
1
0
0
How to use mysql in gevent based programs in python?
2
python,mysql,gevent
0
2012-05-14T09:41:00.000
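A hedged sketch of the pymysql suggestion above: patch the standard library first, then use the pure-Python driver so its socket calls cooperate with gevent. The connection details, table, and URLs are placeholders.

    from gevent import monkey
    monkey.patch_all()          # must happen before sockets are created

    import gevent
    import pymysql

    def insert_page(url, body):
        conn = pymysql.connect(host="localhost", user="crawler",
                               passwd="secret", db="pages")
        cur = conn.cursor()
        cur.execute("INSERT INTO pages (url, body) VALUES (%s, %s)", (url, body))
        conn.commit()
        cur.close()
        conn.close()

    jobs = [gevent.spawn(insert_page, "http://example.com/%d" % i, "...")
            for i in range(10)]
    gevent.joinall(jobs)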
I'm currently developing an application which connects to a database using SQLAlchemy. The idea is to have several instances of the application running on different computers using the same database. I want to be able to see changes to the database in all instances of the application once they are committed. I'm currently using SQLAlchemy's event interface; however, it's not working when I have several concurrent instances of the application. I change something in one of the instances, but no signals are emitted in the other instances.
2
0
1.2
0
true
10,602,194
0
1,036
1
0
0
10,601,947
You said it: you are using SQLAlchemy's event interface, which is not the RDBMS's, and SQLAlchemy does not communicate with the other instances connected to that DB. SQLAlchemy's event system calls a function in your own process; it's up to you to make that function send a signal to the rest of them via the network (or however they are connected). As far as SQLAlchemy is concerned, it doesn't know about the other instances connected to your database. So you might want to start another server on the machine where the database is running, make all the other instances listen to it, and act accordingly. Hope it helps.
1
0
0
Concurrency in sqlalchemy
1
python,concurrency,sqlalchemy
0
2012-05-15T13:37:00.000
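One possible shape for the answer's suggestion, as a hedged sketch: hook SQLAlchemy's after_commit session event and broadcast a tiny UDP notification that the other instances listen for. The address, port, and message format are invented; any messaging mechanism (a small TCP server, Redis pub/sub, etc.) would do.

    import socket
    from sqlalchemy import event
    from sqlalchemy.orm import sessionmaker

    NOTIFY_ADDR = ("127.0.0.1", 9999)     # placeholder: a small notification hub

    Session = sessionmaker()              # bind=your_engine in the real app

    @event.listens_for(Session, "after_commit")
    def broadcast_change(session):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"db-changed", NOTIFY_ADDR)
        sock.close()

    # Each instance also runs a small listener (for example in a background
    # thread) that refreshes its state whenever a "db-changed" datagram
    # arrives from the hub.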
I have a couchdb instance with database a and database b. They should contain identical sets of documents, except that the _rev property will be different, which, AIUI, means I can't use replication. How do I verify that the two databases really do contain the same documents which are all otherwise 'equal'? I've tried using the python-based couchdb-dump tool with a lot of sed magic to get rid of the _rev and MD5 and ETag headers, but then it still seems that property order in the JSON structure is slightly random, which means I still can't compare the output easily with something like diff. Is there a better approach here? Have other people wanted to solve a similar problem?
1
1
1.2
0
true
10,616,421
1
685
1
0
0
10,615,980
If you want to make sure they're exactly the same, write a map job that emits the document path as the key and the document's hash (generated any way you like) as the value. Do not include the _rev field in the hash generation. You cannot reduce to a single hash because order is not guaranteed, but you can feed the resulting JSON document to a good diff program.
1
0
0
Compare two couchdb databases
1
couchdb,replication,couchdb-python
0
2012-05-16T09:45:00.000
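A hedged client-side variant of the same idea in Python with couchdb-python: hash every document with its _rev stripped (serialised with sorted keys so property order does not matter) and compare the two maps. The server URL and database names are placeholders.

    import hashlib
    import json
    import couchdb

    server = couchdb.Server("http://localhost:5984/")

    def fingerprint(db):
        result = {}
        for doc_id in db:
            doc = dict(db[doc_id])
            doc.pop("_rev", None)
            result[doc_id] = hashlib.md5(
                json.dumps(doc, sort_keys=True).encode("utf-8")).hexdigest()
        return result

    a, b = fingerprint(server["a"]), fingerprint(server["b"])
    print("only in a:", sorted(set(a) - set(b)))
    print("only in b:", sorted(set(b) - set(a)))
    print("different:", sorted(k for k in set(a) & set(b) if a[k] != b[k]))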
I have made a Python Ladon web service and I run it on Ubuntu with Apache2 and mod_wsgi (I use Python 2.6). The web service connects to a PostgreSQL database with the psycopg2 Python module. My problem is that the psycopg2 connection is closed (or destroyed) automatically after a little while (about 1 or 2 minutes). On the other hand, if I run the server with the ladon2.6ctl testserve command (http://ladonize.org/index.php/Python_Configuration), the server works and the connection is not closed automatically. I can't understand why the connection is closed with Apache + mod_wsgi, and in this case the web server is very slow. Can anyone help me?
0
1
0.197375
0
false
10,645,670
1
450
1
0
0
10,636,409
If you are using mod_wsgi in embedded mode, especially with the prefork MPM for Apache, then it is likely that Apache is killing off the idle processes. Try using mod_wsgi daemon mode, which keeps the processes persistent, and see if it makes a difference.
1
0
0
Python psycopg2 + mod_wsgi: connection is very slow and automatically close
1
python,web-services,apache2,mod-wsgi,psycopg2
1
2012-05-17T13:12:00.000
I wish to consume a .NET web service containing the results of a SQL Server query using a Python client. I have used the Python Suds library to interface to the same web service, but not with a set of results. How should I structure the data so it is efficiently transmitted and consumed by the Python client? There should be a maximum of 40 rows of data, containing 60 bytes of data per row in 5 columns.
0
1
0.197375
0
false
10,653,866
0
175
1
0
0
10,638,071
Suds is a library to connect via SOAP, so you may already have blown "efficiently transmitted" out of the window, as this is a particularly verbose format over the wire. Your maximum data size is relatively small, and so should almost certainly be transmitted back in a single message so the SOAP overhead is incurred only once. So you should create a web service that returns a list or array of results, and call it once. This should be straightforwardly serialised to a single XML body that Suds then gives you access to.
1
0
0
SQL Query result via .net webservice to a non .net- Python client
1
.net,python,sql-server,web-services,suds
0
2012-05-17T14:49:00.000
I'm seeing some unexpected behaviour with Flask-SQLAlchemy, and I don't understand what's going on: If I make a change to a record using e.g. MySQL Workbench or Sequel Pro, the running app (whether running under WSGI on Apache, or from the command line) isn't picking up the change. If I reload the app by touching the WSGI file, or by reloading it (command line), I can see the changed record. I've verified this by running an all() query in the interactive shell, and it's the same – no change until I quit the shell, and start again. I get the feeling I'm missing something incredibly obvious here – it's a single table, no joins etc. – Running MySQL 5.5.19, and SQLA 0.7.7 on 2.7.3
1
1
0.197375
0
false
15,194,364
1
326
1
0
0
10,645,793
your app's SELECT is probably within its own transaction / session so changes submitted by another session (e.g. a MySQL Workbench connection) are not yet visible to your SELECT. You can easily verify it by enabling the mysql general log or by setting 'echo: false' in your create_engine(...) definition. Chances are you're starting your SQLAlchemy session in SET AUTOCOMMIT = 0 mode, which requires an explicit commit or rollback (when you restart / reload, Flask-SQLAlchemy does it for you automatically). Try either starting your session in autocommit=true mode or adding an explicit commit/rollback before calling your SELECT.
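A minimal illustration, assuming a stock Flask-SQLAlchemy setup; the model, database URI and package layout are assumptions, not the asker's real code:

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:pw@localhost/mydb'  # placeholder
    db = SQLAlchemy(app)

    class Record(db.Model):          # example model
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(80))

    # Inside a long-running shell or request:
    print(Record.query.all())        # snapshot taken by the current transaction
    db.session.commit()              # or db.session.rollback(): ends the transaction
    print(Record.query.all())        # new transaction, sees rows committed elsewhere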
1
0
0
Flask SQLAlchemy not picking up changed records
1
python,flask-sqlalchemy
0
2012-05-18T01:57:00.000
Background: I'm trying to use a Google Map as an interface to mark out multiple polygons, that can be stored in a Postgres Database. The Database will then be queried with a geocoded Longitude Latitude Point to determine which of the Drawn Polygons encompass the point. Using Python and Django. Question How do I configure the Google Map to allow a user to click around and specify multiple polygon areas?
0
1
0.099668
0
false
10,648,479
1
667
1
0
0
10,647,482
"Using Python and Django" only, you're not going to do this. Obviously you're going to need Javascript. So you may as well dump Google Maps and use an open-source web mapping framework. OpenLayers has a well-defined Javascript API which will let you do exactly what you want. Examples in the OpenLayers docs show how. You'll thank me later - specifically when Google come asking for a fee for their map tiles and you can't switch your Google Maps widget to OpenStreetMap or some other tile provider. This Actually Happens.
1
0
0
Mark Out Multiple Delivery Zones on Google Map and Store in Database
2
python,django,postgresql,google-maps,postgis
0
2012-05-18T06:01:00.000
Is there a way to avoid duplicate files in mongo gridfs? Or do I have to do that via application code (I am using pymongo)?
5
1
0.099668
0
false
10,648,760
0
2,727
2
0
0
10,648,729
You could use the md5 hash and compare the new hash with the existing ones before saving the file.
1
0
1
Mongo: avoid duplicate files in gridfs
2
python,mongodb,gridfs
0
2012-05-18T07:48:00.000
Is there a way to avoid duplicate files in mongo gridfs? Or do I have to do that via application code (I am using pymongo)?
5
5
1.2
0
true
10,650,262
0
2,727
2
0
0
10,648,729
The MD5 sum is already part of Mongo's gridfs meta-data, so you could simply set a unique index on that column and the server will refuse to store the file. No need to compare on the client side.
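For example, with pymongo (the database name and the default "fs" bucket prefix are assumptions):

    from pymongo import ASCENDING, MongoClient

    db = MongoClient()['mydb']   # database name is a placeholder
    # GridFS stores an md5 per file in fs.files; a unique index makes the
    # server reject a second file with the same checksum.
    db.fs.files.create_index([('md5', ASCENDING)], unique=True)

Inserting a duplicate then raises DuplicateKeyError, which the application can catch.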
1
0
1
Mongo: avoid duplicate files in gridfs
2
python,mongodb,gridfs
0
2012-05-18T07:48:00.000
Are there any good example projects which uses SQLAlchemy (with Python Classes) that I can look into? (which has at least some basic database operations - CRUD) I believe that, it is a good way to learn any programming language by looking into someone's code. Thanks!
18
0
0
0
false
10,778,146
0
12,180
1
0
0
10,656,426
What kind of environment are you looking to work with on top of SQLAlchemy? Most likely, if you are using a popular web framework like django, Flask or Pylons, you can find many examples and tutorials specific to that framework that include SQLAlchemy. This will boost your knowledge both with SQLAlchemy and whatever else it is you are working with. Chances are, you won't find any good project examples in 'just' SQLAlchemy as it is essentially a tool.
1
0
0
SQLAlchemy Example Projects
2
python,sqlalchemy
0
2012-05-18T16:32:00.000
I am confused about why python needs a cursor object. I know jdbc, and there the database connection is quite intuitive, but in python I am confused by the cursor object. I am also unsure about the difference between the cursor.close() and connection.close() functions in terms of resource release.
41
5
0.321513
0
false
10,660,537
0
17,423
1
0
0
10,660,411
Connection object is your connection to the database, close that when you're done talking to the database all together. Cursor object is an iterator over a result set from a query. Close those when you're done with that result set.
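A short DB-API sketch of those lifetimes (sqlite3 is used purely for illustration):

    import sqlite3

    conn = sqlite3.connect(':memory:')            # the connection: one per conversation
    cur = conn.cursor()                           # a cursor: one per query/result set
    cur.execute('CREATE TABLE users (name TEXT)')
    cur.execute("INSERT INTO users VALUES ('alice')")
    cur.execute('SELECT name FROM users')
    print(cur.fetchall())                         # iterate/fetch this result set
    cur.close()                                   # done with the result set
    conn.commit()
    conn.close()                                  # done with the database altogether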
1
0
0
difference between cursor and connection objects
3
python,python-db-api
0
2012-05-18T22:16:00.000
I am thinking about creating an open source data management web application for various types of data. A privileged user must be able to add new entity types (for example a 'user' or a 'family') add new properties to entity types (for example 'gender' to 'user') remove/modify entities and properties These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me: a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime? I am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database. Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management? b) How to implement this in Python using an ORM or NoSQL? If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy? If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis? Thanks for your suggestions! Edit in response to some comments: The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes. Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property. The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application. I see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction? Edit 2 in response to some answers/comments: From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design. As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. 
From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine. Expressed in an abstract way, the application needs to manage the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type the data itself I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
20
6
1
0
false
10,792,940
1
4,158
2
0
0
10,672,939
What you're asking about is a common requirement in many systems -- how to extend a core data model to handle user-defined data. That's a popular requirement for packaged software (where it is typically handled one way) and open-source software (where it is handled another way). The earlier advice to learn more about RDBMS design generally can't hurt. What I will add to that is, don't fall into the trap of re-implementing a relational database in your own application-specific data model! I have seen this done many times, usually in packaged software. Not wanting to expose the core data model (or permission to alter it) to end users, the developer creates a generic data structure and an app interface that allows the end user to define entities, fields etc. but not using the RDBMS facilities. That's usually a mistake because it's hard to be nearly as thorough or bug-free as what a seasoned RDBMS can just do for you, and it can take a lot of time. It's tempting but IMHO not a good idea. Assuming the data model changes are global (shared by all users once admin has made them), the way I would approach this problem would be to create an app interface to sit between the admin user and the RDBMS, and apply whatever rules you need to apply to the data model changes, but then pass the final changes to the RDBMS. So for example, you may have rules that say entity names need to follow a certain format, new entities are allowed to have foreign keys to existing tables but must always use the DELETE CASCADE rule, fields can only be of certain data types, all fields must have default values etc. You could have a very simple screen asking the user to provide entity name, field names & defaults etc. and then generate the SQL code (inclusive of all your rules) to make these changes to your database. Some common rules & how you would address them would be things like: -- if a field is not null and has a default value, and there are already existing records in the table before that field was added by the admin, update existing records to have the default value while creating the field (multiple steps -- add field allowing null; update all existing records; alter the table to enforce not null w/ default) -- otherwise you wouldn't be able to use a field-level integrity rule) -- new tables must have a distinct naming pattern so you can continue to distinguish your core data model from the user-extended data model, i.e. core and user-defined have different RDBMS owners (dbo. vs. user.) or prefixes (none for core, __ for user-defined) or somesuch. -- it is OK to add fields to tables that are in the core data model (as long as they tolerate nulls or have a default), and it is OK for admin to delete fields that admin added to core data model tables, but admin cannot delete fields that were defined as part of the core data model. In other words -- use the power of the RDBMS to define the tables and manage the data, but in order to ensure whatever conventions or rules you need will always be applied, do this by building an app-to-DB admin function, instead of giving the admin user direct DB access. If you really wanted to do this via the DB layer only, you could probably achieve the same by creating a bunch of stored procedures and triggers that would implement the same logic (and who knows, maybe you would do that anyway for your app). That's probably more of a question of how comfortable are your admin users working in the DB tier vs. via an intermediary app. 
So to answer your questions directly: (1) Yes, add tables and columns at run time, but think about the rules you will need to have to ensure your app can work even once user-defined data is added, and choose a way to enforce those rules (via app or via DB / stored procs or whatever) when you process the table & field changes. (2) This issue isn't strongly affected by your choice of SQL vs. NoSQL engine. In every case, you have a core data model and an extended data model. If you can design your app to respond to a dynamic data model (e.g. add new fields to screens when fields are added to a DB table or whatever) then your app will respond nicely to changes in both the core and user-defined data model. That's an interesting challenge but not much affected by choice of DB implementation style. Good luck!
1
0
0
Which database model should I use for dynamic modification of entities/properties during runtime?
4
python,database,dynamic,sqlalchemy,redis
0
2012-05-20T11:16:00.000
I am thinking about creating an open source data management web application for various types of data. A privileged user must be able to add new entity types (for example a 'user' or a 'family') add new properties to entity types (for example 'gender' to 'user') remove/modify entities and properties These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me: a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime? I am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database. Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management? b) How to implement this in Python using an ORM or NoSQL? If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy? If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis? Thanks for your suggestions! Edit in response to some comments: The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes. Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property. The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application. I see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction? Edit 2 in response to some answers/comments: From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design. As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. 
From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine. Expressed in an abstract way, the application needs to manage the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type the data itself I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
20
3
1.2
0
true
10,707,420
1
4,158
2
0
0
10,672,939
So, if you conceptualize your entities as "documents," then this whole problem maps onto a no-sql solution pretty well. As commented, you'll need to have some kind of model layer that sits on top of your document store and performs tasks like validation, and perhaps enforces (or encourages) some kind of schema, because there's no implicit backend requirement that entities in the same collection (parallel to table) share schema. Allowing privileged users to change your schema concept (as opposed to just adding fields to individual documents - that's easy to support) will pose a little bit of a challenge - you'll have to handle migrating the existing data to match the new schema automatically. Reading your edits, Mongo supports the kind of searching/ordering you're looking for, and will give you the support for "empty cells" (documents lacking a particular key) that you need. If I were you (and I happen to be working on a similar, but simpler, product at the moment), I'd stick with Mongo and look into a lightweight web framework like Flask to provide the front-end. You'll be on your own to provide the model, but you won't be fighting against a framework's implicit modeling choices.
1
0
0
Which database model should I use for dynamic modification of entities/properties during runtime?
4
python,database,dynamic,sqlalchemy,redis
0
2012-05-20T11:16:00.000
I have a large SQLServer database on my current hosting site... and I would like to import it into Google BigData. Is there a method for this?
0
1
0.197375
0
false
10,713,425
0
108
1
0
0
10,705,572
I think that the answer is that there is no general recipe for doing this. In fact, I don't even think it makes sense to have a general recipe ... What you need to do is to analyse the SQL schemas and work out an appropriate mapping to BigData schemas. Then you figure out how to migrate the data.
1
0
0
Porting data from SQLServer to BigData
1
python,sql-server,bigdata
0
2012-05-22T15:52:00.000
I need to copy an existing neo4j database in Python. I even do not need it for backup, just to play around with while keeping the original database untouched. However, there is nothing about copy/backup operations in neo4j.py documentation (I am using python embedded binding). Can I just copy the whole folder with the original neo4j database to a folder with a new name? Or is there any special method available in neo4j.py?
1
2
1.2
0
true
10,736,999
0
310
1
0
0
10,724,345
Yes, you can copy the whole DB directory when you have cleanly shut down the DB for backup.
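e.g. a plain directory copy is enough once the embedded database has been shut down (paths are placeholders):

    import shutil

    # Copy only after the embedded database has been cleanly shut down
    shutil.copytree('/data/neo4j-db', '/data/neo4j-db-copy')   # placeholder paths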
1
0
0
Copy neo4j database from python
1
python,copy,backup,neo4j
0
2012-05-23T16:44:00.000
Let's say I get sales data every 15 minutes. The sales transactions are stored in a mysql database. I need to be able to graph this data, and allow the user to re-size the scale of time. The info would be graphed on a django website. How would I go about doing this, and are there any open source tools that I could look into?
0
1
0.066568
0
false
10,779,681
0
3,292
1
0
0
10,779,244
Highcharts has awesome features and you can also build pivot charts using it, but they will charge you. You can look at PyChart also.
1
0
0
How to graph mysql data in python?
3
python,mysql,sql
0
2012-05-28T04:10:00.000
In google app engine, can I call "get_or_insert" from inside a transaction? The reason I ask is because I'm not sure if there is some conflict with having this run its own transaction inside an already running transaction. Thanks!
2
2
0.197375
0
false
10,791,742
1
308
1
1
0
10,790,381
No. get_or_insert is syntactic sugar for a transactional function that fetches or inserts a record. You can implement it yourself trivially, but that will only work if the record you're operating on is in the same entity group as the rest of the entities in the current transaction, or if you have cross-group transactions enabled.
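For illustration, here is the sugar written by hand with ndb; the model and key name are made-up examples, not App Engine's actual implementation:

    from google.appengine.ext import ndb

    class Player(ndb.Model):                 # example model
        score = ndb.IntegerProperty(default=0)

    @ndb.transactional                       # use xg=True if entity groups differ
    def get_or_insert_player(key_name):
        player = Player.get_by_id(key_name)
        if player is None:
            player = Player(id=key_name)
            player.put()
        return player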
1
0
0
In app engine, can I call "get_or_insert" from inside a transaction?
2
python,google-app-engine
0
2012-05-28T21:01:00.000
In db.py, I can use a function (func insert) to insert data into sqlite correctly. Now I want to insert data into sqlite through python-fastcgi; in fastcgi (just named post.py) I can get the request data correctly, but when I call db.insert, it gives me an internal server error. I already did chmod 777 sqlite.db. Anyone know what the problem is?
2
4
1.2
0
true
10,796,243
0
1,289
1
0
0
10,793,042
Finally I found the answer: the sqlite3 library needs write permission also on the directory that contains the database, probably because it needs to create a lockfile. Therefore when I use sql directly to insert data there is no problem, but when I do it through the web (cgi, fastcgi, etc.) there is an error. Just add write permission to the directory.
1
0
0
sqlite3 insert using python and python cgi
1
python,sqlite,fastcgi
0
2012-05-29T04:23:00.000
I have run a few trials and there seems to be some improvement in speed if I set autocommit to False. However, I am worried that if I do one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed; does querying the database then give me the old data? Or does it know it should commit first? Or am I completely mistaken as to what commit actually does? Note: I'm using pyodbc and MySQL. Also, the tables I'm using are InnoDB; does that make a difference?
2
0
0
0
false
10,803,049
0
1,170
2
0
0
10,803,012
As long as you use the same connection, the database should show you a consistent view on the data, e.g. with all changes made so far in this transaction. Once you commit, the changes will be written to disk and be visible to other (new) transactions and connections.
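A sketch of batching the commit with pyodbc (connection string and table are placeholders):

    import pyodbc

    conn = pyodbc.connect('DSN=mydb;UID=user;PWD=secret', autocommit=False)  # placeholder DSN
    cur = conn.cursor()
    for item_id in (1, 2, 3):
        # hypothetical table; these updates are already visible to this connection
        cur.execute("UPDATE items SET qty = qty - 1 WHERE id = ?", item_id)
    conn.commit()   # written to disk and visible to other connections from here on
    conn.close()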
1
0
0
Does setting autocommit to true take longer than batch committing?
3
python,mysql,odbc,pyodbc
0
2012-05-29T16:22:00.000
I have run a few trials and there seems to be some improvement in speed if I set autocommit to False. However, I am worried that if I do one commit at the end of my code, the database rows will not be updated. So, for example, I do several updates to the database, none are committed; does querying the database then give me the old data? Or does it know it should commit first? Or am I completely mistaken as to what commit actually does? Note: I'm using pyodbc and MySQL. Also, the tables I'm using are InnoDB; does that make a difference?
2
1
0.066568
0
false
10,803,230
0
1,170
2
0
0
10,803,012
The default transaction mode for InnoDB is REPEATABLE READ; all reads will be consistent within a transaction. If you insert rows and query them in the same transaction, you will not see the newly inserted rows, but they will be stored when you commit the transaction. If you want to see the newly inserted rows before you commit the transaction, you can set the isolation level to READ COMMITTED.
1
0
0
Does setting autocommit to true take longer than batch committing?
3
python,mysql,odbc,pyodbc
0
2012-05-29T16:22:00.000
From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved. The other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up. How do you usually handle your database migrations and schema changes?
22
1
0.033321
0
false
10,872,504
1
11,397
2
0
0
10,826,266
South isn't used everywhere. In my organization, for example, we have 3 levels of code testing. One is the local dev environment, one is the staging dev environment, and the third is production. Local dev is in the developer's hands, where he can play according to his needs. Then comes staging dev, which is kept identical to production, of course, until a db change has to be done on the live site; we do the db changes on staging first, check that everything is working fine, and then manually change the production db, making it identical to staging again.
1
0
0
Database migrations on django production
6
python,mysql,django,migration,django-south
0
2012-05-31T01:12:00.000
From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved. The other two options (that I can think of or have used) is doing the changes on a test database and then (going offline with the app) and importing that sql export. Or, perhaps a riskier option, doing the necessary changes on the production database in real-time, and if anything goes wrong reverting to the back-up. How do you usually handle your database migrations and schema changes?
22
0
0
0
false
70,559,647
1
11,397
2
0
0
10,826,266
If it's not trivial, you should have a pre-prod database/app that mimics the production one, to avoid downtime on production.
1
0
0
Database migrations on django production
6
python,mysql,django,migration,django-south
0
2012-05-31T01:12:00.000
Using python (fastcgi), lighttpd and sqlite3 for the server. The sqlite3 data is updated every weekend. That means every user gets the same data from the server until the next weekend, and the server queries the database for every user's request. My question is: is there any way to cache the data so the server answers all users from the cache until the data is updated, instead of querying the database every time? Like using a global variable for a week, until it is updated.
0
1
1.2
0
true
10,843,435
0
167
1
0
0
10,843,191
You can use a cache such as memcached to store it once retrieved.
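A minimal sketch with the python-memcached client, assuming memcached runs locally; load_from_sqlite stands in for your existing sqlite3 query:

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    def weekly_data():
        data = mc.get('weekly_data')
        if data is None:
            data = load_from_sqlite()                       # your own query function (assumed)
            mc.set('weekly_data', data, time=7 * 24 * 3600)  # keep for one week
        return data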
1
0
0
cache data in python and sqlite3
1
python,sqlite,fastcgi,lighttpd
0
2012-06-01T01:04:00.000
I am trying to write a python program for appending live stock quotes from a csv file to an excel file (which is already open) using xlrd and xlwt. The task is summarised below. From my stock-broker's application, a csv file is continually being updated on my hard disk. I wish to write a program which, when run, would append the new data from csv file to an excel file, which is kept open (I wonder whether it is possible to read & write an open file). I wish to keep the file open because I will be having stock-charts in it. Is it possible? If yes, how?
1
1
0.197375
1
false
10,857,757
0
1,299
1
0
0
10,851,726
Not directly. xlutils can use xlrd and xlwt to copy a spreadsheet, and appending to a "to be written" worksheet is straightforward. I don't think reading the open spreadsheet is a problem -- but xlwt will not write to the open book/sheet. You might write an Excel VBA macro to draw the graphs. In principle, I think a macro from a command workbook could close your stock workbook, invoke your python code to copy and update, open the new spreadsheet, and maybe run the macro to re-draw the graphs. Another approach is to use matplotlib for the graphs. I'd think a sleep loop could wake up every n seconds, grab the new csv data, append it to your "big" csv data, and re-draw the graph. Taking this approach keeps you in python and should make things a lot easier, imho. Disclosure: my Python is better than my VBA.
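The copy-and-append step might look roughly like this with xlrd/xlwt/xlutils; the file names and sheet index are placeholders, and the output must be saved under a name that is not currently open in Excel:

    import csv
    from xlrd import open_workbook
    from xlutils.copy import copy

    book = open_workbook('stocks.xls', formatting_info=True)   # placeholder name
    start = book.sheet_by_index(0).nrows                       # append below existing rows
    out = copy(book)                                           # writable xlwt copy
    sheet = out.get_sheet(0)

    with open('quotes.csv', 'rb') as f:                        # 'rb' for Python 2 csv
        for rowx, row in enumerate(csv.reader(f), start=start):
            for colx, value in enumerate(row):
                sheet.write(rowx, colx, value)

    out.save('stocks_updated.xls')   # xlwt cannot save into the workbook Excel has open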
1
0
0
xlrd - append data to already opened workbook
1
python,xlrd,xlwt
0
2012-06-01T14:01:00.000
I am trying to save an array of dates. I am providing a list of date objects, yet psycopg2 is throwing the above error. Any thoughts on how I can work around this?
1
1
1.2
0
true
10,914,900
0
1,679
1
0
0
10,854,532
This is a PostgreSQL error: you need an explicit cast. Add ::date[] after the value or the placeholder.
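For example (table, column and connection details are placeholders):

    import datetime
    import psycopg2

    conn = psycopg2.connect('dbname=test')     # placeholder DSN
    cur = conn.cursor()
    dates = [datetime.date(2012, 6, 1), datetime.date(2012, 6, 2)]
    cur.execute("INSERT INTO events (dates) VALUES (%s::date[])", (dates,))
    conn.commit()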
1
0
1
psycopg2 column is of type date[] but expression is of type text[]
1
python,django,psycopg2
0
2012-06-01T17:03:00.000
I have an open source PHP website and I intend to modify/translate (mostly constant strings) it so it can be used by Japanese users. The original code is PHP+MySQL+Apache and written in English with charset=utf-8 I want to change, for example, the word "login" into Japanese counterpart "ログイン" etc I am not sure whether I have to save the PHP code in utf-8 format (just like Python)? I only have experience with Python, so what other issues I should take care of?
2
2
0.132549
0
false
10,868,488
0
148
2
0
0
10,868,473
If it's in the file, then yes, you will need to save the file as UTF-8. If it's is in the database, you do not need to save the PHP file as UTF-8. In PHP, strings are basically just binary blobs. You will need to save the file as UTF-8 so the correct bytes are read in. In theory, if you saved the raw bytes in an ANSI file, it would still be output to the browser correctly, just your editor would not display it correctly, and you would run the risk of your editor manipulating it incorrectly. Also, when handling non-ANSI strings, you'll need to be careful to use the multi-byte versions of string manipulation functions (str_replace will likely botch a utf-8 string for example).
1
0
1
PHP for Python Programmers: UTF-8 Issues
3
php,python,mysql,apache,utf-8
1
2012-06-03T06:52:00.000
I have an open source PHP website and I intend to modify/translate (mostly constant strings) it so it can be used by Japanese users. The original code is PHP+MySQL+Apache and written in English with charset=utf-8 I want to change, for example, the word "login" into Japanese counterpart "ログイン" etc I am not sure whether I have to save the PHP code in utf-8 format (just like Python)? I only have experience with Python, so what other issues I should take care of?
2
0
0
0
false
10,868,497
0
148
2
0
0
10,868,473
If the file contains UTF-8 characters then save it with UTF-8. Otherwise you can save it in any format. One thing you should be aware of is that the PHP interpreter does not support the UTF-8 byte order mark so make sure you save it without that.
1
0
1
PHP for Python Programmers: UTF-8 Issues
3
php,python,mysql,apache,utf-8
1
2012-06-03T06:52:00.000
Using CGI scripts, I can run single Python files on my server and then use their output on my website. However, I have a more complicated program on my computer that I would like to run on the server. It involves several modules I have written myself, and the SQLITE3 module built in Python. The program involves reading from a .db file and then using that data. Once I run my main Python executable from a browser, I get a "500: Internal server error" error. I just wanted to know whether I need to change something in the permission settings or something for Python files to be allowed to import other Python files, or to read from a .db file. I appreciate any guidance, and sorry if I'm unclear about anything I'm new to this site and coding in general. FOLLOW UP: So, as I understand, there isn't anything inherently wrong with importing Python files on a server?
0
0
0
0
false
10,900,387
0
88
1
0
0
10,900,319
I suggest you look in the log of your server to find out what caused the 500 error.
1
0
0
Importing Python files into each other on a web server
2
python,sqlite,web
0
2012-06-05T15:34:00.000
It looks like this is what e.g. MongoEngine does. The goal is to have model files be able to access the db without having to explicitly pass around the context.
2
2
0.379949
0
false
10,907,158
1
877
1
0
0
10,906,477
Pyramid has nothing to do with it. The global needs to handle whatever mechanism the WSGI server is using to serve your application. For instance, most servers use a separate thread per request, so your global variable needs to be threadsafe. gunicorn and gevent are served using greenlets, which is a different mechanic. A lot of engines/orms support a threadlocal connection. This will allow you to access your connection as if it were a global variable, but it is a different variable in each thread. You just have to make sure to close the connection when the request is complete to avoid that connection spilling over into the next request in the same thread. This can be done easily using a Pyramid tween or several other patterns illustrated in the cookbook.
1
0
0
In Pyramid, is it safe to have a python global variable that stores the db connection?
1
python,pyramid
0
2012-06-05T23:41:00.000
Is there a difference if I use """..""" in the sql of cursor.execute? Even if there is any slight difference, please tell.
1
0
0
0
false
10,910,268
0
125
1
0
0
10,910,246
No, other than the string can contain newlines.
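For example (the SQL is illustrative):

    single = "SELECT id FROM users WHERE id = %s"
    triple = """SELECT id FROM users WHERE id = %s"""
    print(single == triple)     # True: cursor.execute sees exactly the same string

    multiline = """SELECT id
                   FROM users
                   WHERE id = %s"""   # only the triple-quoted form may contain raw newlines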
1
0
0
What is the use of """...""" in python instead of "..." or '...', especially in MySQLdb cursor.execute
2
python,sql,string,mysql-python
0
2012-06-06T07:55:00.000
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track. I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
7
5
0.244919
0
false
10,920,888
1
8,119
2
0
0
10,920,423
Usually you are safe, because *.pyc files are regenerated if the corresponding *.py changes its content. It is problematic if you delete a *.py file and you are still importing from it in another file. In this case you are importing from the *.pyc file if it exists. But this would be a bug in your code and is not really related to your mercurial workflow. Conclusion: every famous Python library ignores its *.pyc files, just do it ;)
1
0
0
What to do with pyc files when Django or python is used with Mercurial?
4
python,django,mercurial,pyc
0
2012-06-06T18:58:00.000
Just started to use Mercurial. Wow, nice application. I moved my database file out of the code directory, but I was wondering about the .pyc files. I didn't include them on the initial commit. The documentation about the .hgignore file includes an example to exclude *.pyc, so I think I'm on the right track. I am wondering about what happens when I decide to roll back to an older fileset. Will I need to delete all the .pyc files then? I saw some questions on Stack Overflow about the issue, including one gentleman that found old .pyc files were being used. What is the standard way around this?
7
0
0
0
false
10,920,511
1
8,119
2
0
0
10,920,423
Sure, if you have a .pyc file from an older version of the same module, python will use that. Many times I have wondered why my program wasn't reflecting the changes I made, and realized it was because I had old pyc files. If this means that the .pyc files are not reflecting your current version then yes, you will have to delete all .pyc files. If you are on Linux you can run: find . -name '*.pyc' -delete
1
0
0
What to do with pyc files when Django or python is used with Mercurial?
4
python,django,mercurial,pyc
0
2012-06-06T18:58:00.000
I have a sqlite3 database that I created from Python (2.7) on a local machine, and am trying to copy it to a remote location. I ran "sqlite3 posts.db .backup posts.db.bak" to create a copy (I can use the original and this new copy just fine). But when I move the copied file to the remote location, suddenly every command gives me: sqlite3.OperationalError: database is locked. How do I safely move a sqlite3 database so that I can use it after the move?
0
0
0
0
false
10,922,927
0
624
1
0
0
10,922,394
You did a .backup on the source system, but you don't mention doing a .restore on the target system. Please clarify. You don't mention what versions of the sqlite3 executable you have on the source and target systems. You don't mention how you transferred the .bak file from the source to the target. Was the source db being accessed by another process when you did the .backup? How big is the file? Have you considered zip/copy/unzip instead of backup/copy/restore?
1
0
0
How to safely move an SQLite3 database?
1
python,sqlite,copy
0
2012-06-06T21:13:00.000
I have a desktop application that sends POST requests to a server where a django app stores the results. The DB server and web server are not on the same machine, and it happens that sometimes the connectivity is lost for a very short time but results in a connection error on some requests: OperationalError: (2003, "Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (110)") On a "normal" website I guess you'd not worry too much: the browser displays a 500 error page and the visitor tries again later. In my case losing info posted by a request is not an option and I am wondering how to handle this. I'd try to catch this exception, wait for the connectivity to come back (lag is not a problem) and then continue the process. But as the exception can occur just about anywhere in the code, I'm a bit stuck on how to proceed. Thanks for your advice.
1
1
0.099668
0
false
10,935,789
1
2,536
1
0
0
10,930,459
You could use a middleware with a process_view method and a try / except wrapping your call. Or you could decorate your views and wrap the call there. Or you could use class based views with a base class that has a method decorator on its dispatch method, or an overridden dispatch. Really, you have plenty of solutions. Now, as said above, you might want to modify your desktop application too!
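A rough sketch of the middleware route (old-style middleware of that Django era; the retry policy, exception class and class name are assumptions you would adapt):

    import time

    import MySQLdb
    from django.db import connection

    class DBRetryMiddleware(object):
        """Retry a view a few times while the MySQL server is unreachable (sketch)."""

        def process_view(self, request, view_func, view_args, view_kwargs):
            for attempt in range(5):
                try:
                    return view_func(request, *view_args, **view_kwargs)
                except MySQLdb.OperationalError:
                    connection.close()         # throw away the broken connection
                    time.sleep(2 ** attempt)   # wait for connectivity to come back
            return None   # give up: Django calls the view once more and raises normally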
1
0
0
Django: how to properly handle a database connection error
2
python,mysql,django
0
2012-06-07T11:00:00.000
I can't find the "best" solution for a very simple problem (or maybe not so simple). I have a classical set of data: posts that are attached to users, comments that are attached to a post and to a user. Now I can't decide how to build the scheme/classes. One way is to store user_id inside comments and inside posts. But what happens when I have 200 comments on a page? Or when I have N posts on a page? I mean it would take 200 additional requests to the database to display user info (such as name, avatar). Another solution is to embed user data into each comment and each post. But first -> it is huge overhead, second -> the model system gets corrupted (using mongoalchemy), third -> a user can change his info (like avatar). And what then? As I understand it, an update operation on huge collections of comments or posts is not a simple operation... What would you suggest? Are 200 requests per page to mongodb OK (I must aim for performance)? Or maybe I am just missing something...
3
1
0.049958
0
false
10,932,004
1
919
1
0
0
10,931,889
What I would do with mongodb would be to embed the user id into the comments (which are part of the structure of the "post" document). Three simple hints for better performances: 1) Make sure to ensure an index on the user_id 2) Use comment pagination method to avoid querying 200 times the database 3) Caching is your friend
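A pymongo sketch of hints 1 and 2; the database, collection and field names are assumptions:

    from pymongo import ASCENDING, MongoClient

    db = MongoClient()['blog']                         # placeholder database name

    # 1) index the user_id embedded in each comment
    db.posts.create_index([('comments.user_id', ASCENDING)])

    # 2) paginate comments with $slice instead of loading all 200 at once
    page, per_page = 0, 20
    post = db.posts.find_one(
        {'_id': 'example-post-id'},                    # placeholder id
        {'comments': {'$slice': [page * per_page, per_page]}})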
1
0
0
MongoDB: Embedded users into comments
4
python,mongodb,mongoalchemy,nosql
0
2012-06-07T12:34:00.000
I am making a little add-on for a game, and it needs to store information on a player: username, ip-address, location in game, and a list of alternate user names that have come from that ip or alternate ip addresses that come from that user name. I read an article a while ago that said that unless I am storing a large amount of information that can not be held in ram, I should not use a database. So I tried using the shelve module in python, but I'm not sure if that is a good idea. When do you guys think it is a good idea to use a database, and when is it better to store information in another way? Also, what are some other ways to store information besides databases and flat file databases?
14
7
1
0
false
10,957,953
0
5,782
1
0
0
10,957,877
Assuming by 'database' you mean 'relational database' - even the embedded databases like SQLite come with some overhead compared to a plain text file. But, sometimes that overhead is worth it compared to rolling your own. The biggest question you need to ask is whether you are storing relational data - whether things like normalisation and SQL queries make any sense at all. If you need to lookup data across multiple tables using joins, you should certainly use a relational database - that's what they're for. On the other hand, if all you need to do is lookup into one table based on its primary key, you probably want a CSV file. Pickle and shelve are useful if what you're persisting is the objects you use in your program - if you can just add the relevant magic methods to your existing classes and expect it all to make sense. Certainly "you shouldn't use databases unless you have a lot of data" isn't the best advice - the amount of data goes more to what database you might use if you are using one. SQLite, for example, wouldn't be suitable for something the size of Stackoverflow - but, MySQL or Postgres would almost certainly be overkill for something with five users.
1
0
0
When is it appropriate to use a database , in Python
2
python,database,flat-file
0
2012-06-09T02:16:00.000
I am looking around in order to get an answer what is the max limit of results I can have from a GQL query on Ndb on Google AppEngine. I am using an implementation with cursors but it will be much faster if I retrieve them all at once.
5
9
1
0
false
10,974,037
1
1,106
2
1
0
10,968,439
This depends on lots of things like the size of the entities and the number of values that need to look up in the index, so it's best to benchmark it for your specific application. Also beware that if you find that on a sunny day it takes e.g. 10 seconds to load all your items, that probably means that some small fraction of your queries will run into a timeout due to natural variations in datastore performance, and occasionally your app will hit the timeout all the time when the datastore is having a bad day (it happens).
1
0
0
What is the Google Appengine Ndb GQL query max limit?
2
python,google-app-engine,gql,app-engine-ndb
0
2012-06-10T11:51:00.000
I am looking around in order to get an answer what is the max limit of results I can have from a GQL query on Ndb on Google AppEngine. I am using an implementation with cursors but it will be much faster if I retrieve them all at once.
5
7
1.2
0
true
10,969,575
1
1,106
2
1
0
10,968,439
Basically you don't have the old limit of 1000 entities per query anymore, but consider using a reasonable limit, because you can hit the time out error and it's better to get them in batches so users won't wait during load time.
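e.g. a cursor loop with a reasonable page size (the model is a made-up example):

    from google.appengine.ext import ndb

    class Item(ndb.Model):              # example model
        name = ndb.StringProperty()

    def fetch_all(batch_size=500):
        results, cursor, more = [], None, True
        while more:
            page, cursor, more = Item.query().fetch_page(batch_size,
                                                         start_cursor=cursor)
            results.extend(page)
        return results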
1
0
0
What is the Google Appengine Ndb GQL query max limit?
2
python,google-app-engine,gql,app-engine-ndb
0
2012-06-10T11:51:00.000
When I say 'equivalent', I mean an ORM that allows for the same work-style. That is: setting up a database; dispensing and editing 'beans' (table rows) as if the table were already ready, while the table is being created behind the scenes; reviewing, indexing and polishing the table structure before production. Thanks for any leads
1
0
1.2
0
true
13,714,374
0
700
1
0
0
10,987,162
Short answer, there is a proof-of-concept called PyBean as answered by Gabor de Mooij, but it barely offers any features and cannot be used. There are no other Python libraries that work like PyBean.
1
0
0
Is there a RedBeanPHP equivalent for Python?
2
php,python,mysql,orm,redbean
0
2012-06-11T20:35:00.000
Here's the scenario: I have a url in a MySQL database that contains Unicode. The database uses the Latin-1 encoding. Now, when I read the record from MySQL using Python, it gets converted to Unicode because all strings follow the Unicode format in Python. I want to write the URL into a text file -- to do so, it needs to be converted to bytes (UTF-8). This was done successfully. Now, given the URLS that are in the text file, I want to query the db for these SAME urls in the database. I do so by calling the source command to execute a few select queries. Result: I get no matches. I suspect that the problem stems from my conversion to UTF-8, which somehow is messing up the symbols.
0
0
1.2
0
true
10,992,555
0
1,752
1
0
0
10,990,496
You most probably need to set your mysql shell client to use utf8. You can set it either in mysql shell directly by running set character set utf8. Or by adding default-character-set=utf8 to your ~/.my.cnf.
1
0
0
Unicode to UTF-8 encoding issue when importing SQL text file into MySQL
1
python,mysql,unicode,encoding,utf-8
0
2012-06-12T04:22:00.000
I'm looking for a way of editing and save a specified cell in Excel 2010 .xlsx file from Node.JS. I realize, that maybe there are no production-ready solutions for NodeJS at this time. However, NodeJS supports C++ libraries, so could you suggest me any suitable lib compatible with Node? Also, I had an idea to process this task via Python (xlrd, xlwt) and call it with NodeJS. What do you think of this? Are there any more efficient methods to edit XLSX from NodeJS? Thanks.
0
0
1.2
0
true
11,008,175
0
3,505
1
0
0
11,007,460
Basically you have 2 possibilities: node.js does not support C++ libraries directly, but it is possible to write bindings for node.js that interact with a C/C++ library. So you need to get your feet wet on writing a C++ addon for V8 (the JavaScript engine behind node.js). Or, find a command line program which does what you want to do. (It does not need to be Python.) You could call this from your JavaScript code by using a child process. The first option is more work, but would result in faster execution time (when done right). The second possibility is easier to realise. P.S.: Too many questions for one question. I've no idea about the xls-whatever stuff, besides that it's "actually" only XML.
1
0
0
Node.JS/C++/Python - edit Excel .xlsx file
1
c++,python,excel,node.js,read-write
0
2012-06-13T02:27:00.000
Okay. We have a Rails webapp which stores data in a mysql database. The table design was not read efficient, so we resorted to creating a separate set of read-only tables in mysql and made all our internal API calls use those tables for reads. We used callbacks to keep the data in sync between both sets of tables. Now we have another Python app which is going to mess with the same database - how do we proceed in maintaining the data integrity? Active record callbacks can't be used anymore. We know we can do it with triggers. But is there any other elegant way to do this? How do people manage to maintain the integrity of such derived data?
2
1
1.2
0
true
11,014,025
1
296
1
0
0
11,013,976
Yes, refactor the code to put a data web service in front of the database and let the Ruby and Python apps talk to the service. Let it maintain all integrity and business rules. "Don't Repeat Yourself" - it's a good rule.
1
0
0
Maintaining data integrity in mysql when different applications are accessing it
1
python,mysql,ruby-on-rails,database,triggers
0
2012-06-13T11:31:00.000
I have an excel spreadsheet (version 1997-2003) and another nonspecific database file (a .csy file; I am assuming it can be parsed as a text file as that is what it appears to be). I need to take information from both sheets, match them up, put them on one line, and print it to a text file. I was going to use python for this, as using the python plugins for Visual Studio 2010 alongside the xlrd package seems to be the best way I could find for excel files, and I'd just use default packages in python for the other file. Would python be a good choice of language to both learn and program this script in? I am not familiar with scripting languages other than a little bit of VBS, so any language will be a learning experience for me. Converting the xls to csv is not an option; there are too many excel files, and the wonky formatting of them would make fishing through the csv more difficult than using xlrd.
1
-1
-0.066568
0
false
11,020,968
0
2,962
1
0
0
11,020,919
Python is beginner-friendly and is good with string manipulation so it's a good choice. I have no idea how easy awk is to learn without programming experience but I would consider that as it's more or less optimized for processing csv's.
1
0
1
First time writing a script, not sure what language to use (parsing excel and other files)
3
python,excel,file-io,scripting
0
2012-06-13T18:16:00.000
We are writing an inventory system and I have some questions about sqlalchemy (postgresql) and transactions/sessions. This is a web app using TG2; not sure this matters, but too much info is never bad. How can I make sure that when changing inventory quantities I don't run into race conditions? If I understand it correctly, if user one is going to decrement inventory on an item to, say, 0 and user two is also trying to decrement the inventory to 0, then if user one's session hasn't been committed yet, user two's starting inventory number is going to be the same as user one's, resulting in a race condition when both commit, one overwriting the other instead of having a compound effect. If I wanted to use a postgresql sequence for things like order/invoice numbers, how can I get/set next values from sqlalchemy without running into race conditions? EDIT: I think I found the solution I need: with_lockmode, using for update or for share. I am going to leave this open for more answers or for others to correct me if I am mistaken. TIA
2
3
1.2
0
true
11,034,199
0
2,935
1
0
0
11,033,892
If two transactions try to set the same value at the same time, one of them will fail. The one that loses will need error handling. For your particular example you will want to query for the number of parts and update the number of parts in the same transaction. There is no race condition on sequence numbers. Save a record that uses a sequence number and the DB will automatically assign it. Edit: Note, as Limscoder points out, you need to set the isolation level to Repeatable Read.
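A sketch of the row-locking pattern the asker found, using the SQLAlchemy API of that era; the model, DSN and sku value are made up:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Inventory(Base):                         # example model
        __tablename__ = 'inventory'
        id = Column(Integer, primary_key=True)
        sku = Column(String(32), unique=True)
        qty = Column(Integer, default=0)

    engine = create_engine('postgresql://user:pw@localhost/shop')   # placeholder DSN
    session = sessionmaker(bind=engine)()

    item = (session.query(Inventory)
                   .filter_by(sku='ABC-123')
                   .with_lockmode('update')        # emits SELECT ... FOR UPDATE
                   .one())                         # a second decrementer blocks here
    item.qty -= 1
    session.commit()                               # lock released on commit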
1
0
0
SQLAlchemy(Postgresql) - Race Conditions
2
python,postgresql,web-applications,sqlalchemy,turbogears2
0
2012-06-14T13:15:00.000
I am using mongoexport to export mongodb data which also has image data in binary format. The export is done in csv format. I tried to read the image data from the csv file into python and tried to store it as an image file in .jpg format on disk. But it seems that the data is corrupt and the image is not getting stored. Has anybody come across such a situation or resolved a similar thing? Thanks,
1
-1
-0.099668
0
false
11,058,611
0
930
2
0
0
11,055,921
Depending how you stored the data, it may be prefixed with 4 bytes of size. Are the corrupt exports 4 bytes/GridFS chunk longer than you'd expect?
1
0
1
Can mongoexport be used to export images stored in binary format in mongodb
2
python,image,mongodb,csv
0
2012-06-15T18:01:00.000
I am using mongoexport to export mongodb data which also has image data in binary format. The export is done in csv format. I tried to read the image data from the csv file into python and tried to store it as an image file in .jpg format on disk. But it seems that the data is corrupt and the image is not getting stored. Has anybody come across such a situation or resolved a similar thing? Thanks,
1
0
0
0
false
11,056,533
0
930
2
0
0
11,055,921
One thing to watch out for is an arbitrary 2MB BSON Object size limit in several of 10gen's implementations. You might have to denormalize your image data and store it across multiple objects.
1
0
1
Can mongoexport be used to export images stored in binary format in mongodb
2
python,image,mongodb,csv
0
2012-06-15T18:01:00.000
I need to manipulate a large amount of numerical/textual data, say a total of 10 billion entries which can theoretically be organized as 1000 tables of 10000*1000. Most calculations need to be performed on a small subset of data each time (specific rows or columns), such that I don't need all the data at once. Therefore, I am interested in storing the data in some kind of database so I can easily search the database, retrieve multiple rows/columns matching defined criteria, make some calculations and update the database. The database should be accessible with both Python and Matlab, where I use Python mainly for creating raw data and putting it into the database and Matlab for the data processing. The whole project runs on Windows 7. What is the best and mainly the simplest database I can use for this purpose? I have no prior experience with databases at all.
4
3
0.197375
0
false
11,058,566
0
3,289
1
0
0
11,058,409
IMO simply use the file system with a file format that can you read/write in both MATLAB and Python. Databases usually imply a relational model (excluding the No-SQL ones), which would only add complexity here. Being more MATLAB-inclined, you can directly manipulate MAT-files in SciPy with scipy.io.loadmat/scipy.io.savemat functions. This is the native MATLAB format for storing data, with save/load functions. Unless of course you really need databases, then ignore my answer :)
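e.g. (file and variable names are arbitrary):

    import numpy as np
    import scipy.io

    # write one 10000x1000 chunk from Python ...
    scipy.io.savemat('chunk_001.mat', {'table': np.random.rand(10000, 1000)})

    # ... read it back in Python (the MATLAB side would use: load('chunk_001.mat'))
    table = scipy.io.loadmat('chunk_001.mat')['table']
    print(table.shape)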
1
0
0
What the simplest database to use with both Python and Matlab?
3
python,database,matlab
0
2012-06-15T21:27:00.000
I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and "close to" results. Right now I'm just using SELECT <column> from <table> where <colname> LIKE %<searchterm>% and I'm missing some results particularly when users misspell items. I've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.
3
3
0.197375
0
false
11,088,110
0
153
2
0
0
11,082,229
Apache Solr is a great Search Engine that provides (1) N-Gram Indexing (search for not just complete strings but also for partial substrings, this helps greatly in getting similar results) (2) Provides an out of box Spell Corrector based on distance metric/edit distance (which will help you in getting a "did you mean chicago" when the user types in chicaog) (3) It provides you with a Fuzzy Search option out of box (Fuzzy Searches helps you in getting close matches for your query, for an example if a user types in GA-123 he would obtain VMDEO-123 as a result) (4) Solr also provides you with "More Like This" component which would help you out like the above options. Solr (based on Lucene Search Library) is open source and is slowly rising to become the de-facto in the Search (Vertical) Industry and is excellent for database searches (As you spoke about indexing a database column, which is a cakewalk for Solr). Lucene and Solr are used by many Fortune 500 companies as well as Internet Giants. Sphinx Search Engine is also great (I love it too as it has very low foot print for everything & is C++ based) but to put it simply Solr is much more popular. Now Python support and API's are available for both. However Sphinx is an exe and Solr is an HTTP. So for Solr you simply have to call the Solr URL from your python program which would return results that you can send to your front end for rendering, as simple as that) So far so good. Coming to your question: First you should ask yourself that whether do you really require a Search Engine? Search Engines are good for all use cases mentioned above but are really made for searching across huge amounts of full text data or million's of rows of tabular data. The Algorithms like Did you Mean, Similar Records, Spell Correctors etc. can be written on top. Before zero-ing on Solr please also search Google for (1) Peter Norvig Spell Corrector & (2) N-Gram Indexing. Possibility is that just by writing few lines of code you may get really the stuff that you were looking out for. I leave it up to you to decide :)
1
0
0
Search Engine for a single DB column
3
python,mysql,database,search
0
2012-06-18T11:52:00.000
I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and "close to" results. Right now I'm just using SELECT <column> from <table> where <colname> LIKE %<searchterm>% and I'm missing some results particularly when users misspell items. I've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.
3
1
0.066568
0
false
11,087,295
0
153
2
0
0
11,082,229
I would suggest looking into open-source technologies like Sphinx Search.
1
0
0
Search Engine for a single DB column
3
python,mysql,database,search
0
2012-06-18T11:52:00.000
I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto-increments itself to the next number. My issue is that when a record is deleted I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. For instance, there are 5 records in the table whose order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would be 1, 2, 3, and 4. Is this possible with Python, Postgres and Django? Thanks in advance!
1
0
0
0
false
11,101,114
1
2,701
4
0
0
11,100,997
Instead of deleting orders, you should create a boolean field (call it whatever you like - for example, deleted) and set it to true for "deleted" orders. Messing with a serial field (which is what your auto-increment field is called in Postgres) will lead to problems later, especially if you have foreign keys and relationships with other tables. Not only will it impact your database server's performance; it will also impact your business, because eventually you will have two orders floating around with the same order number: even though you have "deleted" one from the database, its order number may already be referenced somewhere else, like on a receipt you printed for your customer. (A Django sketch of this flag-instead-of-delete approach follows this record.)
1
0
0
Auto Increment Field in Django/Python
7
python,django,postgresql
0
2012-06-19T12:32:00.000
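A rough Django sketch of the soft-delete ("deleted" flag) approach from the answer above. The model, field, and manager names are invented for illustration; note that very old Django versions spell the manager hook get_query_set rather than get_queryset.

# Hypothetical sketch: hide "deleted" orders instead of removing rows.
from django.db import models

class ActiveOrderManager(models.Manager):
    def get_queryset(self):
        # Exclude flagged rows from normal queries
        return super(ActiveOrderManager, self).get_queryset().filter(deleted=False)

class Order(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    deleted = models.BooleanField(default=False)

    objects = ActiveOrderManager()   # Order.objects.all() skips "deleted" rows
    all_objects = models.Manager()   # includes deleted rows when you need them

    def soft_delete(self):
        # Flag the row rather than deleting it, so references stay valid
        self.deleted = True
        self.save()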
I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto-increments itself to the next number. My issue is that when a record is deleted I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. For instance, there are 5 records in the table whose order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would be 1, 2, 3, and 4. Is this possible with Python, Postgres and Django? Thanks in advance!
1
4
0.113791
0
false
11,101,064
1
2,701
4
0
0
11,100,997
You are going to have to implement that feature yourself; I doubt very much that a relational db will do it for you, and for good reason: it means updating a potentially large number of rows when one row is deleted. Are you sure you need this? It could become expensive (see the sketch after this record for what that update looks like).
1
0
0
Auto Increment Field in Django/Python
7
python,django,postgresql
0
2012-06-19T12:32:00.000
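To illustrate why the answer above calls this expensive, here is a hypothetical Django sketch that closes the gap after a delete; the Item model and its order field are placeholders, not anything from the question. It is a single UPDATE statement, but it still rewrites every row that came after the deleted one.

# Hypothetical sketch: renumber remaining rows after a delete via a signal.
from django.db import models
from django.db.models import F
from django.db.models.signals import post_delete
from django.dispatch import receiver

class Item(models.Model):          # placeholder model with the manual "order" column
    order = models.IntegerField()

@receiver(post_delete, sender=Item)
def close_gap(sender, instance, **kwargs):
    # Shift every later row up by one -- one UPDATE, but it touches every
    # row whose order was greater than the deleted row's order.
    Item.objects.filter(order__gt=instance.order).update(order=F("order") - 1)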
I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto-increments itself to the next number. My issue is that when a record is deleted I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. For instance, there are 5 records in the table whose order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would be 1, 2, 3, and 4. Is this possible with Python, Postgres and Django? Thanks in advance!
1
-1
-0.028564
0
false
11,101,032
1
2,701
4
0
0
11,100,997
Try setting the column to a sequence (serial) type in Postgres using pgAdmin.
1
0
0
Auto Increment Field in Django/Python
7
python,django,postgresql
0
2012-06-19T12:32:00.000
I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto-increments itself to the next number. My issue is that when a record is deleted I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted. For instance, there are 5 records in the table whose order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would be 1, 2, 3, and 4. Is this possible with Python, Postgres and Django? Thanks in advance!
1
0
0
0
false
15,074,698
1
2,701
4
0
0
11,100,997
I came across this looking for something else and wanted to point something out: by storing the order in a field in the same table as your data, you lose data integrity, and if you index it, things get very complicated when you hit a conflict. In other words, it's very easy for a bug (or something else) to give you two 3's, a missing 4, and other weird results. I inherited a project with a manual sort order that was critical to the application (there were other issues as well), and this was constantly a problem with just 200-300 items. The right way to handle a manual sort order is to have a separate table to manage it and sort with a join. That way your Order table will have exactly 10 entries with just its PK (the order number) and a foreign key relationship to the ID of the items you want to sort; deleted items simply won't have a reference anymore. You can continue to re-sort on delete much as you're doing now, but you'll just be updating the Order table's foreign keys rather than iterating through and rewriting all your items, which is much more efficient and will scale easily to millions of manually sorted items. Rather than using auto-incremented ints, though, you would want to give each item an order id somewhere between the two items you want to place it between, and keep plenty of space (a few hundred thousand should do it) so you can arbitrarily re-sort them. I see you mentioned that you've only got 10 rows here, but designing your architecture to scale well the first time, as a practice, will save you headaches down the road, and once you're in the habit of it, it won't really take you any more time. (A rough sketch of such an ordering table follows this record.)
1
0
0
Auto Increment Field in Django/Python
7
python,django,postgresql
0
2012-06-19T12:32:00.000
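A hypothetical Django sketch of the separate ordering table the answer above recommends, with sparse sort keys so items can be re-ordered without touching every row. All model and field names are invented for illustration.

# Hypothetical sketch: keep ordering out of the data table, sort via a join.
from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=100)

class ItemOrder(models.Model):
    item = models.OneToOneField(Item, on_delete=models.CASCADE)
    sort_key = models.BigIntegerField(db_index=True)   # leave big gaps, e.g. 100000 apart

# Sorted listing via the join (deleted items simply have no ItemOrder row):
#   Item.objects.filter(itemorder__isnull=False).order_by("itemorder__sort_key")
# To place an item between two neighbours, give it a sort_key roughly halfway
# between theirs -- no other rows need to be rewritten.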
I am new to SQL/Python. I was wondering if there is a way for me to sort or categorize expense items into three primary categories. That is, I have a 56,000-row list with about 100+ different expense categories. They vary from things like Payroll and Credit Card Pmt to Telephone, etc. I would like to put them into three categories for the sake of analysis. I know I could do a GIANT IF statement in Excel, but that would be really time-consuming given that there are 100+ subcategories. Is there any way to expedite the process with Python or even in Excel? Also, I don't know if this is material or not, but I am preparing this file to be uploaded to a SQL database.
0
0
0
0
false
11,121,498
0
163
1
0
0
11,121,395
You should create a table called something like ExpenseCategories, with the columns ExpenseCategory and PrimaryCategory. This table would have one row for each expense category (which you can enforce with a constraint if you like). You would then join this table with your existing data in SQL (a runnable sketch follows this record). By the way, in Excel you could do this with a vlookup() rather than an if(); the vlookup() is analogous to using a lookup table in SQL, whereas the equivalent of an if() would be a giant case statement, which is another possibility.
1
0
0
Method for Sorting a list of expense categories into specific categories
1
python,sql
0
2012-06-20T14:05:00.000
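A minimal sketch of the lookup-table idea from the answer above, using sqlite3 purely so it runs standalone; the table names, column names, and the three example buckets are assumptions, not anything from the question.

# Hypothetical sketch: map 100+ expense categories onto 3 primary buckets via a join.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE expenses (id INTEGER PRIMARY KEY, expense_category TEXT, amount REAL)")
cur.execute("CREATE TABLE expense_categories (expense_category TEXT PRIMARY KEY, primary_category TEXT)")

# One row per subcategory in the lookup table
cur.executemany("INSERT INTO expense_categories VALUES (?, ?)", [
    ("Payroll", "Operating"),
    ("Credit Card Pmt", "Financing"),
    ("Telephone", "Overhead"),
])
cur.executemany("INSERT INTO expenses (expense_category, amount) VALUES (?, ?)", [
    ("Payroll", 1000.0),
    ("Telephone", 59.99),
])

# The join attaches the primary bucket to every expense row
cur.execute("""
    SELECT e.expense_category, e.amount, c.primary_category
    FROM expenses e
    JOIN expense_categories c ON c.expense_category = e.expense_category
""")
print(cur.fetchall())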
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add an unnecessary impediment to readability. As an example, I've been working on a text-based RPG, which has a large number of rooms and items to keep track of. Even a few items and rooms lead to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application or another Python script. On the lighter end of things, the ones I hear mentioned most often are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML both seem like they might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case; while faster implementations are of course desirable, speed is not a limiting factor in what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
5
0
0
0
false
11,130,087
0
4,619
3
0
0
11,129,844
I would be tempted to research a little into some GUI that could output Graphviz (DOT format) with annotations, so you could create the rooms and the links between them (a sort of graph). Later you might want another format to support heftier info, but this should make it easy to create maps and links between rooms (containing items or traps etc.), and you could use common libraries to produce graphics of the maps as PNGs or similar (a tiny sketch of emitting DOT follows this record). Just a random idea off the top of my head - feel free to ignore it!
1
0
1
Optimal format for simple data storage in python
8
python
0
2012-06-20T23:59:00.000
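A small sketch of the idea in the answer above: writing the room graph out as DOT text, which the Graphviz tools can render to an image. The room names, the file name, and the simple dict layout are all made up for illustration.

# Hypothetical sketch: emit a room graph as Graphviz DOT text.
rooms = {
    "cellar": ["hallway"],
    "hallway": ["cellar", "armory", "garden"],
    "armory": ["hallway"],
    "garden": ["hallway"],
}

def to_dot(graph):
    # Build a directed graph description, one edge per room exit
    lines = ["digraph rooms {"]
    for room, exits in graph.items():
        for target in exits:
            lines.append('    "%s" -> "%s";' % (room, target))
    lines.append("}")
    return "\n".join(lines)

with open("rooms.dot", "w") as f:
    f.write(to_dot(rooms))
# Render with a Graphviz command line tool, for example:
#   dot -Tpng rooms.dot -o rooms.png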
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add an unnecessary impediment to readability. As an example, I've been working on a text-based RPG, which has a large number of rooms and items to keep track of. Even a few items and rooms lead to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application or another Python script. On the lighter end of things, the ones I hear mentioned most often are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML both seem like they might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case; while faster implementations are of course desirable, speed is not a limiting factor in what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
5
5
0.124353
0
false
11,129,974
0
4,619
3
0
0
11,129,844
Though there are good answers here already, I would simply recommend JSON for your purposes, for the simple reason that, as a new programmer, you will find it the most straightforward to read and translate: it has the most direct mapping to native Python data types (lists [] and dictionaries {}). Readability goes a long way and is one of the tenets of Python programming. A small loading sketch follows this record.
1
0
1
Optimal format for simple data storage in python
8
python
0
2012-06-20T23:59:00.000
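A possible sketch of the JSON approach from the answer above: room definitions kept in a file and turned into plain Python objects. The file name, the keys, and the Room class are assumptions, not a prescribed format.

# Hypothetical sketch: load room definitions from a JSON file.
import json

class Room(object):
    def __init__(self, name, description, exits, items):
        self.name = name
        self.description = description
        self.exits = exits        # dict: direction -> room name
        self.items = items        # list of item names

def load_rooms(path):
    with open(path) as f:
        data = json.load(f)       # expects a JSON object keyed by room name
    return {name: Room(name,
                       attrs.get("description", ""),
                       attrs.get("exits", {}),
                       attrs.get("items", []))
            for name, attrs in data.items()}

# Usage, assuming a rooms.json file exists alongside the script:
# rooms = load_rooms("rooms.json")
# print(rooms["cellar"].description)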
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add an unnecessary impediment to readability. As an example, I've been working on a text-based RPG, which has a large number of rooms and items to keep track of. Even a few items and rooms lead to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application or another Python script. On the lighter end of things, the ones I hear mentioned most often are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML both seem like they might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case; while faster implementations are of course desirable, speed is not a limiting factor in what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
5
1
0.024995
0
false
11,129,853
0
4,619
3
0
0
11,129,844
If you want editability, YAML is the best option of the ones you've named, because it doesn't require delimiters like <> or {} (a small example follows this record).
1
0
1
Optimal format for simple data storage in python
8
python
0
2012-06-20T23:59:00.000
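A minimal sketch of the YAML option from the answer above, assuming the PyYAML package is installed; the file layout shown in the comment is just one possible convention, not a standard.

# Hypothetical sketch: load room definitions from a YAML file with PyYAML.
#
# rooms.yaml might look like:
#   cellar:
#     description: A damp stone cellar.
#     exits:
#       up: hallway
#     items:
#       - rusty key
import yaml

def load_rooms(path):
    with open(path) as f:
        return yaml.safe_load(f)   # returns plain dicts and lists

# Usage, assuming a rooms.yaml file exists alongside the script:
# rooms = load_rooms("rooms.yaml")
# print(rooms["cellar"]["items"])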
I would like to get a better understanding of a question I was pretty sure was already clear to me. Is there any way to create a table using psycopg2 (or any other Python Postgres database adapter) with a name corresponding to a .csv file and, probably most importantly, with the columns that are specified in the .csv file?
3
1
1.2
0
true
11,130,568
0
2,067
1
0
0
11,130,261
I'll leave you to look at the psycopg2 library properly - this is off the top of my head (I haven't had to use it for a while, but IIRC the documentation is ample). The steps are: read the column names from the CSV file, build a "CREATE TABLE whatever (...)" statement, and maybe INSERT the data. For example: import csv, os.path; my_csv_file = '/home/somewhere/file.csv'; table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]; cols = next(csv.reader(open(my_csv_file))). You can go from there: create the SQL query (possibly using a templating engine for the fields) and then issue the inserts if need be. A fuller sketch follows this record.
1
0
0
Dynamically creating table from csv file using psycopg2
1
python,postgresql,psycopg2
0
2012-06-21T00:59:00.000
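An expanded, hedged sketch of the steps listed in the answer above. The connection string is a placeholder, every column is created as TEXT purely for simplicity, and the psycopg2.sql helpers used for safe identifier quoting require psycopg2 2.7 or newer.

# Hypothetical sketch: create a table named after a CSV file, with its header columns.
import csv
import os.path
import psycopg2
from psycopg2 import sql

csv_path = "/home/somewhere/file.csv"
table_name = os.path.splitext(os.path.basename(csv_path))[0]

with open(csv_path) as f:
    reader = csv.reader(f)
    columns = next(reader)        # first row holds the column names
    rows = list(reader)

conn = psycopg2.connect("dbname=mydb user=me")   # placeholder connection details
cur = conn.cursor()

# Build CREATE TABLE with safely quoted identifiers; every column is TEXT here.
create = sql.SQL("CREATE TABLE {} ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join([sql.SQL("{} TEXT").format(sql.Identifier(c)) for c in columns]),
)
cur.execute(create)

# Optionally load the remaining rows with a parameterised INSERT.
insert = sql.SQL("INSERT INTO {} VALUES ({})").format(
    sql.Identifier(table_name),
    sql.SQL(", ").join([sql.Placeholder()] * len(columns)),
)
cur.executemany(insert, rows)
conn.commit()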