Dataset columns (name: type, value range):
Question: stringlengths, 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: stringlengths, 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: stringlengths, 15 to 148
AnswerCount: int64, 1 to 32
Tags: stringlengths, 6 to 90
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
I am working on a personal project where I need to manipulate values in a database-like format. Up until now I have been using dictionaries, tuples, and lists to store and consult those values. I am thinking about starting to use SQL to manipulate those values, but I don't know if it's worth the effort, because I don't know anything about SQL and I don't want to use something that won't bring me any benefits (if I can do it in a simpler way, I don't want to complicate things). If I am only storing and consulting values, what would be the benefit of using SQL? PS: the number of rows ranges between 3 and 100 and the number of columns is around 10 (some may have 5, some may have 10, etc.).
1
2
0.132549
0
false
2,871,090
0
291
3
0
0
2,870,815
SQL is useful in many applications, but it is overkill in this case. You can easily store your data in CSV, pickle, or JSON format. Get the job done in 5 minutes and then learn SQL when you have time.
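A minimal sketch of the file-based approach this answer suggests, using the standard json module; the record fields and file name are hypothetical, not taken from the question.

```python
import json

# A small "table" kept as a list of dicts (hypothetical fields).
records = [
    {"name": "alice", "score": 10},
    {"name": "bob", "score": 7},
]

# Persist to disk and read it back; no SQL needed for data this small.
with open("records.json", "w") as fh:
    json.dump(records, fh, indent=2)

with open("records.json") as fh:
    loaded = json.load(fh)

# "Queries" are plain Python expressions.
high_scores = [r for r in loaded if r["score"] > 8]
print(high_scores)
```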
1
0
0
Python and database
3
python,sql,database
0
2010-05-20T03:24:00.000
I am trying to install PostgreSQL under Cygwin on a Windows 7 machine and want it to work with Django. After building and installing PostgreSQL in Cygwin, I built and installed psycopg2 in Cygwin as well and got no error, but when I use it in Python under Cygwin, I get the "no such process" error:
import psycopg2
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.5/site-packages/psycopg2/__init__.py", line 60, in
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: No such process
Any clues? Thanks! Jerry
0
1
0.049958
0
false
14,780,956
0
1,204
2
0
0
2,879,246
In my case, I had to reinstall libpq5.
1
0
0
psycopg2 on cygwin: no such process
4
python,django,postgresql,psycopg2
0
2010-05-21T02:34:00.000
I am trying to install PostgreSQL under Cygwin on a Windows 7 machine and want it to work with Django. After building and installing PostgreSQL in Cygwin, I built and installed psycopg2 in Cygwin as well and got no error, but when I use it in Python under Cygwin, I get the "no such process" error:
import psycopg2
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.5/site-packages/psycopg2/__init__.py", line 60, in
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: No such process
Any clues? Thanks! Jerry
0
0
0
0
false
2,885,759
0
1,204
2
0
0
2,879,246
Why? There is a native psycopg2 build for Windows.
1
0
0
psycopg2 on cygwin: no such process
4
python,django,postgresql,psycopg2
0
2010-05-21T02:34:00.000
After running a bunch of simulations I'm going to be outputting the results into a table created using SQLAlchemy. I plan to use this data to generate statistics - mean and variance being key. These, in turn, will be used to generate some graphs - histograms/line graphs, pie charts and box-and-whisker plots specifically. I'm aware of the Python graphing libraries like matplotlib. The thing is, I'm not sure how to integrate this with the information contained within the database tables. Any suggestions on how to make these two play with each other? The main problem is that I'm not sure how to supply the information as "data sets" to the graphing library.
1
1
1.2
0
true
2,891,001
0
1,415
1
0
0
2,890,564
It looks like matplotlib takes simple Python data types -- lists of numbers, etc. -- so you'll need to write custom code to massage what you pull out of MySQL/SQLAlchemy into input for the graphing functions...
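A hedged sketch of that massaging step: pull one column out with SQLAlchemy, flatten the rows into a plain list, and hand it to matplotlib. The database URL, table and column names ("simulations.db", "results", "value") are assumptions, not from the question.

```python
# Feed SQLAlchemy query results to matplotlib as plain Python lists.
from sqlalchemy import create_engine, text
import matplotlib.pyplot as plt
import statistics

engine = create_engine("sqlite:///simulations.db")  # hypothetical database

with engine.connect() as conn:
    rows = conn.execute(text("SELECT value FROM results")).fetchall()

# Massage row tuples into a simple list of numbers.
values = [row[0] for row in rows]

print("mean:", statistics.mean(values))
print("variance:", statistics.variance(values))

plt.hist(values, bins=20)
plt.title("Distribution of simulation results")
plt.savefig("histogram.png")
```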
1
0
0
How to generate graphs and statistics from SQLAlchemy tables?
1
python,matplotlib,sqlalchemy
0
2010-05-23T03:10:00.000
If you wanted to manipulate the data in a table in a postgresql database using some python (maybe running a little analysis on the result set using scipy) and then wanted to export that data back into another table in the same database, how would you go about the implementation? Is the only/best way to do this to simply run the query, have python store it in an array, manipulate the array in python and then run another sql statement to output to the database? I'm really just asking, is there a more efficient way to deal with the data? Thanks, Ian
4
0
0
0
false
2,906,866
0
1,582
1
0
0
2,905,097
I agree with the SQLAlchemy suggestions, or using Django's ORM. Your needs seem too simple for PL/Python to be used.
1
0
0
Python and Postgresql
7
python,postgresql
0
2010-05-25T13:35:00.000
I am in the planning stages of rewriting an Access db I wrote several years ago as a full-fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help with my next decision. Here is my situation:
1) It'll be run locally on each machine, all Windows computers.
2) I would really like a nice-looking GUI with colors, nice screens, menus, lists, etc.
3) I'm thinking about using a browser interface because (a) from what I've read, browser apps can look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI/GUI code with drag-and-drop tools, so that helps my "keep it simple" goal.
4) I want the program to be totally portable so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run. (If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app? How likely is this?)
For my situation, should/could I make it a browser app? What would be the pros and cons for my situation?
5
0
0
0
false
2,979,467
0
5,638
1
0
0
2,924,231
Your question is a little broad. I'll try to cover as much as I can. First, what I understood and my assumptions: in your situation, the SQLite database is just a data store. Only one process (unless your application is multiprocess) will be accessing it, so you won't need to worry about locking issues. The application doesn't need to communicate with other instances etc. over the network. It's a single desktop app. The platform is Windows.
Here are some thoughts that come to mind. If you develop an application in Python (either web based or desktop), you will have to package it as a single executable and distribute it to your users. They might have to install the Python runtime as well as any extra modules that you might be using.
GUIs are, in my experience, easier to develop using a standalone widget system than in a browser with JavaScript. There are things like Pyjamas that make this better, but it's still hard.
While it's not impossible to have local web applications running on each computer, your real benefits come if you centralise it: one place to update software, no need to "distribute", etc. This of course entails that you use a more powerful database system and that you actually manage multiple users. It will also require that you worry about browser-specific quirks.
I'd go with a simple desktop app that uses a prepackaged toolkit (perhaps Tkinter, which ships with Python). It's not the best of approaches but it will avoid problems for you. I'd also consider using a language that's more "first class" on Windows, like C#, so that the runtimes and other things are already there. Your requirement for a fancy GUI is secondary, and I'd recommend that you get the functionality working fine before you focus on the bells and whistles. Good luck.
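A minimal sketch of the suggested route (Tkinter front end over a local SQLite file), using Python 3 module names; the "items" table and the tiny add/list UI are hypothetical stand-ins for the real application.

```python
import sqlite3
import tkinter as tk

# Local, single-file data store that travels with the app folder.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT)")

def add_item():
    conn.execute("INSERT INTO items (name) VALUES (?)", (entry.get(),))
    conn.commit()
    refresh()

def refresh():
    listbox.delete(0, tk.END)
    for (name,) in conn.execute("SELECT name FROM items"):
        listbox.insert(tk.END, name)

root = tk.Tk()
root.title("Portable desktop app")
entry = tk.Entry(root)
entry.pack()
tk.Button(root, text="Add", command=add_item).pack()
listbox = tk.Listbox(root)
listbox.pack()
refresh()
root.mainloop()
```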
1
0
0
Python/Sqlite program, write as browser app or desktop app?
8
python,sqlite,browser
0
2010-05-27T19:28:00.000
When using SQLAlchemy for abstracting your data access layer, and using controllers as the way to access objects from that abstraction layer, how should joins be handled?
So for example, say you have an Orders controller class that manages Order objects such that it provides getOrder, saveOrder, etc. methods, and likewise a similar controller for User objects. First of all, do you even need these controllers? Should you instead just treat SQLAlchemy as "the" thing for handling data access? Why bother with object-oriented controller stuff there when you instead have a clean declarative way to obtain and persist objects without having to write SQL directly either. Well, one reason could be that perhaps you may want to replace SQLAlchemy with direct SQL or Storm or whatever else. So having controller classes there to act as an intermediate layer helps limit what would need to change then.
Anyway - back to the main question - so assuming you have these two controllers, now let's say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controller knows only about Orders and the User controller just about Users - they don't mess with each other. You also don't want to go fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders. What's needed is a join. Here's where I'm stuck - that seems to mean either the controllers must cross domains, or perhaps they should be done away with altogether and you simply do the join via SQLAlchemy directly and get the resulting User and/or Order objects as needed. Thoughts?
1
2
1.2
0
true
2,934,084
1
430
1
0
0
2,933,796
Controllers are meant to encapsulate features for your convenience, not to bind your hands. If you want to join, simply join. Use the controller that you think is logically fittest to make the query.
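A sketch of "simply join" expressed directly through the SQLAlchemy ORM; the User/Order models, their columns, and the filter criteria are hypothetical, invented only to illustrate a cross-entity join.

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    country = Column(String)
    orders = relationship("Order", back_populates="user")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    total = Column(Integer)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", back_populates="orders")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# "Orders for users matching some criteria" as a single joined query,
# issued from whichever controller seems the logical owner.
orders = (
    session.query(Order)
    .join(Order.user)
    .filter(User.country == "CA", Order.total > 100)
    .all()
)
print(orders)
```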
1
0
0
SQL Alchemy MVC and cross controller joins
1
python,model-view-controller,sqlalchemy,dns,controllers
0
2010-05-29T04:25:00.000
I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are:
(1) write the whole thing in C (or Fortran)
(2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions)
(3) write the whole thing in Python
Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks.
Edit: Thanks for your responses. There seem to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...
32
2
0.049958
0
false
2,942,419
0
5,360
4
0
0
2,937,619
When you say "accumulating" then solution (2) looks most suitable to problem. After initial load up to database you only update database with new files (daily, weekly? depends how often you need this). In cases (1) and (3) you need to process files each time (what was stated earlier as most time/resources-consuming), unless you find a way to stored results and update them with new files. You could use R to process files from csv to, for example, SQLite database.
1
0
0
large amount of data in many text files - how to process?
8
python,sql,r,large-files,large-data-volumes
0
2010-05-30T05:06:00.000
I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are:
(1) write the whole thing in C (or Fortran)
(2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions)
(3) write the whole thing in Python
Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks.
Edit: Thanks for your responses. There seem to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...
32
4
0.099668
0
false
2,937,664
0
5,360
4
0
0
2,937,619
With terabytes, you will want to parallelize your reads over many disks anyway; so might as well go straight into Hadoop. Use Pig or Hive to query the data; both have extensive support for user-defined transformations, so you should be able to implement what you need to do using custom code.
1
0
0
large amount of data in many text files - how to process?
8
python,sql,r,large-files,large-data-volumes
0
2010-05-30T05:06:00.000
I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are:
(1) write the whole thing in C (or Fortran)
(2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions)
(3) write the whole thing in Python
Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks.
Edit: Thanks for your responses. There seem to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...
32
1
0.024995
0
false
2,937,660
0
5,360
4
0
0
2,937,619
Yes, you are right! I/O would cost most of your processing time. I don't suggest you use distributed systems, like Hadoop, for this task. Your task could be done on a modest workstation. I am not a Python expert, but I think it has support for asynchronous programming. In F#/.NET, the platform has good support for that. I was once doing an image processing job: loading 20K images from disk and transforming them into feature vectors only cost several minutes in parallel. All in all, load and process your data in parallel and save the result in memory (if small) or in a database (if big).
1
0
0
large amount of data in many text files - how to process?
8
python,sql,r,large-files,large-data-volumes
0
2010-05-30T05:06:00.000
I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are:
(1) write the whole thing in C (or Fortran)
(2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions)
(3) write the whole thing in Python
Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks.
Edit: Thanks for your responses. There seem to be conflicting opinions about Hadoop, but in any case I don't have access to a cluster (though I can use several unnetworked machines)...
32
14
1.2
0
true
2,937,630
0
5,360
4
0
0
2,937,619
(3) is not necessarily a bad idea -- Python makes it easy to process "CSV" file (and despite the C standing for Comma, tab as a separator is just as easy to handle) and of course gets just about as much bandwidth in I/O ops as any other language. As for other recommendations, numpy, besides fast computation (which you may not need as per your statements) provides very handy, flexible multi-dimensional arrays, which may be quite handy for your tasks; and the standard library module multiprocessing lets you exploit multiple cores for any task that's easy to parallelize (important since just about every machine these days has multi-cores;-).
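A small sketch of option (3) as this answer frames it: stream the tab-delimited files with the csv module and fan per-file aggregation out over cores with multiprocessing. The glob pattern, the "value" column name, and the predicate are assumptions for illustration.

```python
import csv
import glob
from multiprocessing import Pool

def summarize(path):
    """Aggregate one tab-delimited file; returns (sum, count) for 'value'."""
    total, count = 0.0, 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            value = float(row["value"])
            if value >= 0:            # stand-in for the predicate filters
                total += value
                count += 1
    return total, count

if __name__ == "__main__":
    files = glob.glob("data/*.tsv")   # hypothetical file layout
    with Pool() as pool:              # one worker per core by default
        results = pool.map(summarize, files)
    grand_total = sum(t for t, _ in results)
    grand_count = sum(c for _, c in results)
    print("mean:", grand_total / grand_count)
```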
1
0
0
large amount of data in many text files - how to process?
8
python,sql,r,large-files,large-data-volumes
0
2010-05-30T05:06:00.000
Should I invest a lot of time trying to figure out an ORM style implementation, or is it still common to just stick with standard SQL queries in python/pylons/sqlalchemy?
1
8
1.2
0
true
2,947,182
0
1,032
2
0
0
2,947,172
ORMs are very popular, for several reasons -- e.g.: some people would rather not learn SQL, ORMs can ease porting among different SQL dialects, they may fit in more smoothly with the mostly-OOP style of applications, and indeed might even ease some porting to non-SQL implementations (e.g., moving a Django app to Google App Engine would be much more work if the storage access layer relied on SQL statements -- as it relies on the ORM, that reduces, a bit, the needed porting work).
SQLAlchemy is the most powerful ORM I know of for Python -- it lets you work at several possible levels, from a pretty abstract declarative one all the way down to injecting actual SQL in some queries where your profiling work has determined it makes a big difference (I think most people use it mostly at the intermediate level where it essentially mediates between OOP and relational styles, just like other ORMs).
You haven't asked for my personal opinion in the matter, which is somewhat athwart of the popular one I summarized above -- I've never really liked "code generators" of any kind (they increase your productivity a bit when everything goes smoothly... but you can pay that back with interest when you find yourself debugging problems [[including performance bottlenecks]] due to issues occurring below the abstraction levels that generators strive to provide). When I get a chance to use a good relational engine, such as PostgreSQL, I believe I'm overall more productive than I would be with any ORM in between (including SQLAlchemy, despite its many admirable qualities). However, I have to admit that the case is different when the relational engine is not all that good (e.g., I've never liked MySQL), or when porting to non-relational deployments is an important consideration.
So, back to your actual question, I do think that, overall, investing time in mastering SQLAlchemy is a good idea, and time well spent.
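A short illustration of the "several possible levels" point: the same lookup expressed through the ORM and as hand-written SQL within SQLAlchemy. The User model and table name are hypothetical.

```python
from sqlalchemy import create_engine, Column, Integer, String, text
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name="alice"))
session.commit()

# High-level, ORM style:
users = session.query(User).filter(User.name == "alice").all()

# Low-level, raw SQL where profiling says it matters:
rows = session.execute(
    text("SELECT id, name FROM users WHERE name = :n"), {"n": "alice"}
).fetchall()

print(users, rows)
```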
1
0
0
Transitioning from php to python/pylons/SQLAlchemy -- Are ORMs the standard now?
2
python,sql,orm,sqlalchemy
0
2010-06-01T03:49:00.000
Should I invest a lot of time trying to figure out an ORM style implementation, or is it still common to just stick with standard SQL queries in python/pylons/sqlalchemy?
1
1
0.099668
0
false
2,947,191
0
1,032
2
0
0
2,947,172
If you have never used an ORM like SQLAlchemy before, I would suggest that you learn it - as long as you are learning the Python way. If nothing else, you will be better able to decide where/when to use it vs. plain SQL. I don't think you should have to invest a lot of time in it. The documentation for SQLAlchemy is decent, and you can always ask for help if you get stuck.
1
0
0
Transitioning from php to python/pylons/SQLAlchemy -- Are ORMs the standard now?
2
python,sql,orm,sqlalchemy
0
2010-06-01T03:49:00.000
I have a table formatted similar to this: Date | ID | Value | Difference. I need to get the difference between a record's value column and the previous record's value column, based on the date. E.g.:
2 days ago | cow | 1 | Null
Yesterday | cow | 2 | Null
Today | cow | 3 | Null
Yesterday's difference would be 1, and today's difference would be 1. Basically, I need to get the previous record based on the date; I don't know the intervals between each record. I've been stumped on this for a while. I am using MySQL, and Python to do the majority of the calculations.
0
0
1.2
0
true
2,960,589
0
530
1
0
0
2,960,481
Use a SELECT... WHERE date <= NOW() && date >= ( NOW() - 90000 ) (90,000 is 25 hours, giving you a little leeway with the insert time), and then take the difference between the rows in python.
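A sketch of the "take the difference between the rows in Python" half of this suggestion, fetching the rows ordered by date and differencing consecutive values; the table and column names ("readings", "date", "id", "value") and the connection details are hypothetical.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute(
    "SELECT date, id, value FROM readings WHERE id = %s ORDER BY date",
    ("cow",),
)

previous_value = None
for date, id_, value in cur.fetchall():
    # Difference against the previous row, whatever the interval between them.
    difference = None if previous_value is None else value - previous_value
    print(date, id_, value, difference)
    previous_value = value
```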
1
0
0
Get the previous date in Mysql
2
python,mysql
0
2010-06-02T18:35:00.000
In one of my Django projects that uses MySQL as the database, I need to have date fields that also accept "partial" dates, like only a year (YYYY) or a year and month (YYYY-MM), plus normal dates (YYYY-MM-DD). The date field in MySQL can deal with that by accepting 00 for the month and the day. So 2010-00-00 is valid in MySQL and it represents 2010. Same thing for 2010-05-00, which represents May 2010. So I started to create a PartialDateField to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a datetime.date object for a date field, AND datetime.date() supports only real dates. So it's possible to modify the converter for the date field used by MySQLdb and return only a string in the format 'YYYY-MM-DD'. Unfortunately the converter used by MySQLdb is set at the connection level, so it's used for all MySQL date fields. But Django's DateField relies on the fact that the database returns a datetime.date object, so if I change the converter to return a string, Django is not happy at all. Does someone have an idea or advice to solve this problem? How do I create a PartialDateField in Django?
EDIT: Also, I should add that I have already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by Alison R.), or use a varchar field to keep the date as a string in the format YYYY-MM-DD. But in both solutions, if I'm not wrong, I will lose the special properties of a date field, like doing queries of this kind on them: get all entries after this date. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.).
8
7
1.2
0
true
3,027,410
1
4,080
1
0
0
2,971,198
First, thanks for all your answers. None of them, as is, was a good solution for my problem, but, in your defense, I should add that I didn't give all the requirements. Still, each one helped me think about my problem, and some of your ideas are part of my final solution.
So my final solution, on the DB side, is to use a varchar field (limited to 10 chars) and store the date in it, as a string, in the ISO format (YYYY-MM-DD), with 00 for month and day when there's no month and/or day (like a date field in MySQL). This way, the field can work with any database, and the data can be read, understood and edited directly and easily by a human using a simple client (like the mysql client, phpMyAdmin, etc.). That was a requirement. It can also be exported to Excel/CSV without any conversion, etc. The disadvantage is that the format is not enforced (except in Django). Someone could write 'not a date' or make a mistake in the format and the DB would accept it (if you have an idea about this problem...).
This way it's also possible to do all of the special queries of a date field relatively easily. For queries with WHERE: <, >, <=, >= and = work directly. The IN and BETWEEN queries work directly also. For querying by day or month you just have to do it with EXTRACT (DAY|MONTH ...). Ordering works directly as well. So I think it covers all the query needs with mostly no complication.
On the Django side, I did 2 things. First, I created a PartialDate object that looks mostly like datetime.date but supports dates without a month and/or day. Inside this object I use a datetime.datetime object to keep the date. I'm using the hours and minutes as flags that tell whether the month and day are valid when they are set to 1. It's the same idea that steveha proposed, but with a different implementation (and only on the client side). Using a datetime.datetime object gives me a lot of nice features for working with dates (validation, comparison, etc.). Secondly, I created a PartialDateField that mostly deals with the conversion between the PartialDate object and the database. So far, it works pretty well (I have mostly finished my extensive unit tests).
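A rough sketch of the PartialDate idea described above (wrapping a datetime.datetime and using the hour/minute fields as "month is known" / "day is known" flags); the class name matches the answer, but the method names and details are my own illustration, not the author's actual code.

```python
import datetime

class PartialDate(object):
    def __init__(self, year, month=None, day=None):
        self._dt = datetime.datetime(
            year,
            month or 1,
            day or 1,
            hour=1 if month else 0,   # hour flag: month present
            minute=1 if day else 0,   # minute flag: day present
        )

    def as_db_string(self):
        """ISO-like string with 00 for missing parts, e.g. '2010-05-00'."""
        month = self._dt.month if self._dt.hour else 0
        day = self._dt.day if self._dt.minute else 0
        return "%04d-%02d-%02d" % (self._dt.year, month, day)

    def __lt__(self, other):
        # Comparison, validation, etc. come for free from the inner datetime.
        return self._dt < other._dt

print(PartialDate(2010).as_db_string())         # 2010-00-00
print(PartialDate(2010, 5).as_db_string())      # 2010-05-00
print(PartialDate(2010, 5, 20).as_db_string())  # 2010-05-20
```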
1
0
0
How to deal with "partial" dates (2010-00-00) from MySQL in Django?
5
python,mysql,database,django,date
0
2010-06-04T02:49:00.000
I am writing an app to do a file conversion and part of that is replacing old account numbers with a new account numbers. Right now I have a CSV file mapping the old and new account numbers with around 30K records. I read this in and store it as dict and when writing the new file grab the new account from the dict by key. My question is what is the best way to do this if the CSV file increases to 100K+ records? Would it be more efficient to convert the account mappings from a CSV to a sqlite database rather than storing them as a dict in memory?
3
1
1.2
1
true
2,980,269
0
109
1
0
0
2,980,257
As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer. You are right that switching to an SQLite database is a good choice when the number of records gets very large.
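A sketch of the in-memory mapping approach: load the old-to-new account numbers from a CSV into a dict once, then do O(1) lookups while writing the converted file. The file name and two-column layout are assumptions.

```python
import csv

def load_account_map(path):
    """Read 'old_account,new_account' rows into a dict."""
    mapping = {}
    with open(path, newline="") as fh:
        for old_account, new_account in csv.reader(fh):
            mapping[old_account] = new_account
    return mapping

accounts = load_account_map("account_map.csv")

# Constant-time lookup per record during the file conversion.
print(accounts.get("12345", "UNKNOWN"))
```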
1
0
0
Efficient way to access a mapping of identifiers in Python
1
python,database,sqlite,dictionary,csv
0
2010-06-05T12:08:00.000
I'm doing a project with a reasonably big database. It's not a proper DB file, but a class with a format as follows: DataBase.Nodes.Data=[[] for i in range(1,1000)]. E.g., this database is altogether something like a few thousand rows. First question: is the way I'm doing it efficient, or is it better to use SQL, or any other "proper" DB, which I've never actually used? And the main question: I'd like to save my DataBase class with all records, and then re-open it with Python in another session. Is that possible, and what tool should I use? cPickle - it seems to be only for strings; any other? In Matlab there's a very useful functionality named "save workspace" - it saves all your variables to a file that you can open in another session - this would be very useful in Python!
3
3
0.197375
1
false
2,991,030
0
306
1
0
0
2,990,995
Pickle (cPickle) can handle any (picklable) Python object. So as long as you're not trying to pickle a thread or a file handle or something like that, you're OK.
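A minimal "save workspace"-style sketch with pickle; the DataBase class here is a simplified stand-in for the one described in the question.

```python
import pickle

class DataBase(object):
    def __init__(self):
        self.nodes = [[] for _ in range(1000)]

db = DataBase()
db.nodes[0].append("some record")

# Save the whole object to disk...
with open("database.pkl", "wb") as fh:
    pickle.dump(db, fh)

# ...and restore it in a later session.
with open("database.pkl", "rb") as fh:
    restored = pickle.load(fh)

print(restored.nodes[0])
```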
1
0
0
How to save big "database-like" class in python
3
python,serialization,pickle,object-persistence
0
2010-06-07T15:52:00.000
I'm building a web application, and I need to use an architecture that allows me to run it over two servers. The application scrapes information from other sites periodically, and on input from the end user. To do this I'm using PHP + curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if it is specific to the user, skip storing the data and just serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB and Python on another server. What framework(s) should I use for this kind of job? Is MVC with CakePHP a good solution? If so, will I be able to control and monitor the Python code using it? Thanks
3
2
1.2
0
true
3,022,395
1
1,161
1
0
0
3,021,921
How do I go about implementing this? Too big a question for an answer here. Certainly you don't want two sets of code for the scraping (one for scheduled, one for on demand). In addition to the added complication, you really don't want to be running a job which will take an indefinite time to complete within the thread generated by a request to your webserver - user requests for a scrape should be run via the scheduling mechanism and reported back to users (although if necessary you could use Ajax polling to give the illusion that it's happening in the same thread).
What framework(s) should I use? Frameworks are not magic bullets, and you shouldn't be choosing a framework based primarily on the nature of the application you are writing. Certainly if specific, critical functionality is precluded by a specific framework, then you are using the wrong framework - but in my experience that has never been the case - you just need to write some code yourself.
Using something more complex than a cron job? Yes, a cron job is probably not the right way to go, for lots of reasons. If it were me I'd look at writing a daemon which would schedule scrapes (and accept connections from web page scripts to enqueue additional scrapes). But I'd run the scrapes as separate processes.
Is MVC a good architecture for this? (I'm new to MVC, architectures etc.) No. Don't start by thinking about whether a pattern fits the application - patterns are a useful tool for teaching, but they describe what code is, not what it will be (your application might include some MVC patterns, but it should also include lots of other ones). C.
1
0
0
Web application architecture, and application servers?
2
php,python,model-view-controller,cakephp,application-server
0
2010-06-11T10:12:00.000
Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop/read/extend, flexible, connectible. The task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.
4
4
0.379949
0
false
3,022,304
0
451
2
0
0
3,022,232
The best option is probably the language you're most familiar with. My second consideration would be if you need to use any special maths libraries and whether they're supported in each of the languages.
1
0
0
What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?
2
php,python,ruby,performance,math
0
2010-06-11T11:13:00.000
Criteria for 'better': fast in math and simple (few fields, many records) db transactions, convenient to develop/read/extend, flexible, connectible. The task is to use a common web development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, or JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature); all the communication with outer world is to be done by means of web services.
4
10
1.2
0
true
3,022,242
0
451
2
0
0
3,022,232
I would suggest Python with its great scientific/mathematical libraries (SciPy, NumPy). Otherwise the languages do not differ that much, although I doubt that Ruby, PHP or JS can keep up with the speed of Python or Perl. And what the comments below here say: at this moment, go for the latest Python 2 (which is Python 2.7). It has mature versions of all the needed libraries, and if you follow the coding guidelines, transferring some day to Python 3 will be only a small pain.
1
0
0
What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?
2
php,python,ruby,performance,math
0
2010-06-11T11:13:00.000
I'm doing some queries in Python on a large database to get some stats out of the database. I want these stats to be in-memory so other programs can use them without going to a database. I was thinking of how to structure them, and after trying to set up some complicated nested dictionaries, I realized that a good representation would be an SQL table. I don't want to store the data back into the persistent database, though. Are there any in-memory implementations of an SQL database that supports querying the data with SQL syntax?
32
1
0.033321
0
false
65,153,849
0
44,308
1
0
0
3,047,412
In-memory databases usually do not support a memory paging option (for the whole database or certain tables), i.e. the total size of the database should be smaller than the available physical memory or maximum shared memory size. Depending on your application, data-access pattern, size of database and available system memory for the database, you have a few choices:
a. Pickled Python data in the file system. This stores structured Python data structures (such as lists of dictionaries/lists/tuples/sets, dictionaries of lists/pandas dataframes/numpy series, etc.) in pickled format so that they can be used immediately and conveniently once unpickled. AFAIK, Python does not implicitly use the file system as a backing store for Python objects in memory, but the host operating system may swap out Python processes for higher-priority processes. This is suitable for static data with a smaller memory size compared to available system memory. The pickled data can be copied to other computers and read by multiple dependent or independent processes on the same computer. The actual file or memory size has higher overhead than the size of the data. It is the fastest way to access the data, as the data is in the same memory as the Python process and there is no query-parsing step.
b. In-memory database. This stores dynamic or static data in memory. Possible in-memory libraries with Python API bindings are Redis, sqlite3, Berkeley DB, rqlite, etc. Different in-memory databases offer different features:
- The database may be locked in physical memory so that it is not swapped to a memory backing store by the host operating system. However, the actual implementation for the same library may vary across operating systems.
- The database may be served by a database server process.
- The in-memory data may be accessed by multiple dependent or independent processes.
- Support for a full, partial or no ACID model.
- The in-memory database can be persisted to physical files so that it is available when the host operating system is restarted.
- Support for snapshots or/and different database copies for backup or database management.
- Support for distributed databases using master-slave or cluster models.
- Support ranging from simple key-value lookup to advanced query, filter and group functions (such as SQL, NoSQL).
c. Memory-mapped database/data structure. This stores static or dynamic data which could be larger than the physical memory of the host operating system. Python developers can use APIs such as mmap.mmap() or numpy.memmap() to map certain files into process memory space. The files can be arranged into index and data so that data can be looked up/accessed via an index lookup. This is actually the mechanism used by various database libraries. Python developers can implement custom techniques to access/update data efficiently.
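A minimal sketch of option (b) using the standard library's sqlite3 module with an in-memory database; the table and column names are hypothetical.

```python
import sqlite3

# The ":memory:" database lives only as long as this process.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (name TEXT, value REAL)")
conn.executemany(
    "INSERT INTO stats (name, value) VALUES (?, ?)",
    [("mean_latency", 12.5), ("error_rate", 0.02)],
)

# Query the in-memory data with normal SQL syntax.
for row in conn.execute("SELECT name, value FROM stats WHERE value > ?", (1,)):
    print(row)
```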
1
0
1
in-memory database in Python
6
python,sql,database,in-memory-database
0
2010-06-15T17:13:00.000
I'm running a Django project on Postgresql 8.1.21 (using Django 1.1.1, Python2.5, psycopg2, Apache2 with mod_wsgi 3.2). We've recently encountered this lovely error: OperationalError: FATAL: connection limit exceeded for non-superusers I'm not the first person to run up against this. There's a lot of discussion about this error, specifically with psycopg, but much of it centers on older versions of Django and/or offer solutions involving edits to code in Django itself. I've yet to find a succinct explanation of how to solve the problem of the Django ORM (or psycopg, whichever is really responsible, in this case) leaving open Postgre connections. Will simply adding connection.close() at the end of every view solve this problem? Better yet, has anyone conclusively solved this problem and kicked this error's ass? Edit: we later upped Postgresql's limit to 500 connections; this prevented the error from cropping up, but replaced it with excessive memory usage.
3
1
1.2
0
true
3,049,796
1
2,568
1
0
0
3,049,625
This could be caused by other things. For example, configuring Apache/mod_wsgi in a way that, theoretically, it could accept more concurrent requests than the database itself is able to accept at the same time. Have you reviewed your Apache/mod_wsgi configuration and compared its limit on maximum clients to that of PostgreSQL, to make sure something like that hasn't been done? Obviously this presumes that you have somehow managed to reach that limit in Apache, and it also depends on how any database connection pooling is set up.
1
0
0
Django ORM and PostgreSQL connection limits
1
python,database,django,postgresql,django-orm
0
2010-06-15T22:45:00.000
THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.
THE PROBLEM: Access is unable to perform a query that is so complex, it seems, as it crashes any time I run the query.
ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new.
1. Export each table into CSVs and import them into SQLite, and then make a query with it to do the same as what Access fails to do (merge 64 tables).
2. Export each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.
3. Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.
QUESTION: What do you recommend?
CLARIFICATIONS: I am merging tables, not concatenating. Each table has a different structure and different data. It is a normalized CRM database: Companies -> contacts -> details = ~60 tables of details. As the Access db will be scuttled after the db is migrated, I want to spend as little time in Access as possible.
2
0
0
0
false
3,073,339
0
4,410
2
0
0
3,064,830
I'm not even clear on what you're trying to do. I assume your problem is that Jet/ACE can't handle a UNION with that many SELECT statements. If you have 64 identically-structured tables and you want them in a single CSV, I'd create a temp table in Access, append each table in turn, then export from the temp table to CSV. This is a simple solution and shouldn't be slow, either. The only possible issue might be if there are dupes, but if there are, you can export from a SELECT DISTINCT saved QueryDef. Tangentially, I'm surprised Maximizer still exists. I had a client who used to use it, and the db structure was terribly unnormalized, just like all the other sales software like ACT.
1
0
0
Query crashes MS Access
4
python,sql,ms-access,crm
0
2010-06-17T19:13:00.000
THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.
THE PROBLEM: Access is unable to perform a query that is so complex, it seems, as it crashes any time I run the query.
ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new.
1. Export each table into CSVs and import them into SQLite, and then make a query with it to do the same as what Access fails to do (merge 64 tables).
2. Export each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.
3. Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.
QUESTION: What do you recommend?
CLARIFICATIONS: I am merging tables, not concatenating. Each table has a different structure and different data. It is a normalized CRM database: Companies -> contacts -> details = ~60 tables of details. As the Access db will be scuttled after the db is migrated, I want to spend as little time in Access as possible.
2
1
0.049958
0
false
3,064,852
0
4,410
2
0
0
3,064,830
I would recommend #2 if the merge is fairly simple and straightforward, and doesn't need the power of an RDBMS. I'd go with #1 if the merge is more complex and you will need to write some actual queries to get the data merged properly.
1
0
0
Query crashes MS Access
4
python,sql,ms-access,crm
0
2010-06-17T19:13:00.000
I've just started learning Python/Django and have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file, i.e. create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
2
0
0
0
false
3,320,441
1
640
3
0
0
3,066,255
I concur with the 'no foreign keys' advice (with the disclaimer: I also work for Percona). The reason it is recommended is concurrency / reducing locking internally. It can be a difficult "optimization" to sell, but if you consider that the database has transactions (and is more or less ACID-compliant), then it should only be application-logic errors that cause foreign-key violations. Not to say they don't exist, but if you enable foreign keys in development, hopefully you should find at least a few bugs.
In terms of whether or not you need to write custom SQL: the explanation I usually give is that "optimization rarely decreases complexity". I think it is okay to stick with an ORM by default, but if in a profiler it looks like one particular piece of functionality is taking a lot more time than you suspect it would when written by hand, then you need to be prepared to fix it (assuming the code is called often enough). The real secret here is that you need good instrumentation/profiling in order to be frugal with your complexity-adding optimization(s).
1
0
0
Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?
5
python,mysql,django
0
2010-06-17T23:07:00.000
I've just started learning Python/Django and have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file, i.e. create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
2
0
0
0
false
3,066,274
1
640
3
0
0
3,066,255
django-admin inspectdb allows you to reverse-engineer a models file from existing tables. That is only a very partial response to your question ;)
1
0
0
Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?
5
python,mysql,django
0
2010-06-17T23:07:00.000
I've just started learning Python/Django and have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Python's overly optimistic approach that you will never need to write custom SQL and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL that frequently needs to be told what indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file, i.e. create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
2
0
0
0
false
3,066,360
1
640
3
0
0
3,066,255
You can just create the model.py and avoid having SQL Alchemy automatically create the tables leaving it up to you to define the actual tables as you please. So although there are foreign key relationships in the model.py this does not mean that they must exist in the actual tables. This is a very good thing considering how ludicrously foreign key constraints are implemented in MySQL - MyISAM just ignores them and InnoDB creates a non-optional index on every single one regardless of whether it makes sense.
1
0
0
Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?
5
python,mysql,django
0
2010-06-17T23:07:00.000
How would I go about creating a MySQL table schema by inspecting an Excel (or CSV) file? Are there any ready Python libraries for the task? Column headers would be sanitized to column names. Datatypes would be estimated based on the contents of the spreadsheet column. When done, the data would be loaded into the table. I have an Excel file of ~200 columns that I want to start normalizing.
6
1
0.039979
0
false
3,072,109
0
7,011
3
0
0
3,070,094
As far as I know, there is no tool that can automate this process (I would love for someone to prove me wrong, as I've had this exact problem before). When I did this, I came up with two options: (1) manually create the columns in the db with the appropriate types and then import, or (2) write some kind of filter that could "figure out" what data types the columns should be. I went with the first option mainly because I didn't think I could actually write a program to do the type inference.
If you do decide to write a type-inference tool/conversion, here are a couple of issues you may have to deal with:
(1) Excel dates are actually stored as the number of days since December 31st, 1899; how does one then infer that a column contains dates as opposed to some other piece of numerical data (population, for example)?
(2) For text fields, do you just make the columns of type varchar(n), where n is the longest entry in that column, or do you make it an unbounded char field if one of the entries is longer than some upper limit? If so, what's a good upper limit?
(3) How do you automatically convert a float to a decimal with the correct precision and without losing any places?
Obviously, this doesn't mean that you won't be able to (I'm a pretty bad programmer). I hope you do, because it'd be a really useful tool to have.
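A rough sketch of the type-inference idea from option (2): guess a MySQL column type from the values seen in each CSV column. The thresholds, the date format, and the chosen MySQL type names are assumptions, and the Excel-serial-date issue noted above is deliberately ignored here.

```python
import csv
from datetime import datetime

def guess_type(values):
    """Return a plausible MySQL type for a list of string cell values."""
    def all_match(parse):
        for v in values:
            if v == "":
                continue
            try:
                parse(v)
            except ValueError:
                return False
        return True

    if all_match(int):
        return "INT"
    if all_match(float):
        return "DECIMAL(20,6)"
    if all_match(lambda v: datetime.strptime(v, "%Y-%m-%d")):
        return "DATE"
    longest = max((len(v) for v in values), default=1)
    return "VARCHAR(%d)" % max(longest, 1)

with open("data.csv", newline="") as fh:
    rows = list(csv.reader(fh))

header, data = rows[0], rows[1:]
for i, name in enumerate(header):
    column = [r[i] for r in data]
    print(name, guess_type(column))
```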
1
0
0
Generate table schema inspecting Excel(CSV) and import data
5
python,mysql,excel,csv,import-from-excel
0
2010-06-18T13:40:00.000
How would I go about creating a MySQL table schema by inspecting an Excel (or CSV) file? Are there any ready Python libraries for the task? Column headers would be sanitized to column names. Datatypes would be estimated based on the contents of the spreadsheet column. When done, the data would be loaded into the table. I have an Excel file of ~200 columns that I want to start normalizing.
6
1
0.039979
0
false
3,071,074
0
7,011
3
0
0
3,070,094
Quick and dirty workaround with phpMyAdmin:
Create a table with the right number of columns. Make sure the data fits the columns.
Import the CSV into the table.
Use the proposed table structure.
1
0
0
Generate table schema inspecting Excel(CSV) and import data
5
python,mysql,excel,csv,import-from-excel
0
2010-06-18T13:40:00.000
How would I go about creating a MySQL table schema by inspecting an Excel (or CSV) file? Are there any ready Python libraries for the task? Column headers would be sanitized to column names. Datatypes would be estimated based on the contents of the spreadsheet column. When done, the data would be loaded into the table. I have an Excel file of ~200 columns that I want to start normalizing.
6
1
1.2
0
true
3,169,710
0
7,011
3
0
0
3,070,094
Just for (my) reference, I documented below what I did:
xlrd is practical; however, I just saved the Excel data as CSV so I could use LOAD DATA INFILE.
I copied the header row and started writing the import and normalization script. The script does: CREATE TABLE with all columns as TEXT, except for the primary key; then LOAD DATA LOCAL INFILE, loading all CSV data into the TEXT fields.
Based on the output of PROCEDURE ANALYSE, I was able to ALTER TABLE to give the columns the right types and lengths. PROCEDURE ANALYSE returns ENUM for any column with few distinct values, which is not what I needed, but I found that useful later for normalization. Eyeballing 200 columns was a breeze with PROCEDURE ANALYSE. The output from phpMyAdmin's "propose table structure" was junk.
I wrote some normalization, mostly using SELECT DISTINCT on columns and INSERTing the results into separate tables. I first added a column for the FK to the old table. Just after the INSERT, I got its ID and UPDATEd the FK column. When the loop finished I dropped the old column, leaving only the FK column. Similarly with multiple dependent columns. It was much faster than I expected.
I ran (Django) python manage.py inspectdb, copied the output to models.py and added all those ForeignKey fields, as FKs do not exist on MyISAM. Wrote a little Python views.py, urls.py, a few templates... TADA.
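A sketch of the first two steps of that workflow (all-TEXT staging table plus LOAD DATA LOCAL INFILE) driven from Python with MySQLdb; the connection details, file name, and the sanitized header list are hypothetical.

```python
import MySQLdb

headers = ["col_a", "col_b", "col_c"]  # hypothetical sanitized header row

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                       db="staging", local_infile=1)
cur = conn.cursor()

# Staging table: everything as TEXT, plus a surrogate primary key.
columns_sql = ", ".join("`%s` TEXT" % h for h in headers)
cur.execute("CREATE TABLE raw_import (id INT AUTO_INCREMENT PRIMARY KEY, %s)"
            % columns_sql)

# Bulk-load the CSV export into the TEXT columns.
cur.execute("""
    LOAD DATA LOCAL INFILE 'export.csv'
    INTO TABLE raw_import
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    IGNORE 1 LINES
    (%s)
""" % ", ".join("`%s`" % h for h in headers))
conn.commit()
```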
1
0
0
Generate table schema inspecting Excel(CSV) and import data
5
python,mysql,excel,csv,import-from-excel
0
2010-06-18T13:40:00.000
I currently work with Google's App Engine and I could not find out whether a Google Datastore object entry has an ID by default, and if not, how I can add such a field and let it increase automatically. Regards
1
4
0.26052
0
false
3,078,018
1
282
2
1
0
3,077,156
An object has a Key, part of which is either an automatically-generated numeric ID, or an assigned key name. IDs are not guaranteed to be increasing, and they're almost never going to be consecutive because they're allocated to an instance in big chunks, and IDs unused by the instance to which they're allocated will never be used by another instance (at least, not currently). They're also only unique within the same entity group for a kind; they're not unique to the entire kind if you have parent relationships.
1
0
0
Does GQL automatically add an "ID" Property
3
python,google-app-engine,gql
0
2010-06-19T20:38:00.000
I currently work with Google's App Engine and I could not find out whether a Google Datastore object entry has an ID by default, and if not, how I can add such a field and let it increase automatically. Regards
1
3
1.2
0
true
3,077,170
1
282
2
1
0
3,077,156
Yes, they have IDs by default, and it is named ID as you mentioned.
1
0
0
Does GQL automatically add an "ID" Property
3
python,google-app-engine,gql
0
2010-06-19T20:38:00.000
I am trying to use SimpleDB in the following way. I want to keep 48 hours' worth of data in SimpleDB at any time and query it for different purposes. Each domain has 1 hour's worth of data, so at any time there are 48 domains present in SimpleDB. As new data is constantly uploaded, I delete the oldest domain and create a new domain for each new hour. Each domain is about 50MB in size; the total size of all the domains is around 2.2 GB. An item in a domain has the following types of attributes:
identifier - around 50 characters long -- 1 per item
timestamp - timestamp value -- 1 per item
serial_n_data - 500-1000 bytes of data -- 200 per item
I'm using the Python boto library to upload and query the data. I send 1 item/sec with around 200 attributes into the domain. For one application of this data, I need to get all the data from all 48 domains. The query looks like "SELECT * FROM domain", for all the domains. I use 8 threads to query the data, with each thread taking responsibility for a few domains, e.g. thread 1: domains 1-6, thread 2: domains 7-12, and so on. It takes close to 13 minutes to get the entire data. I am using boto's select method for this. I need much faster performance than this. Any suggestions on speeding up the querying process? Is there any other language that I can use which can speed things up?
2
0
0
0
false
9,012,699
1
1,743
1
0
0
3,103,145
I have had the same issue as you, Charlie. After profiling the code, I have narrowed the performance problem down to SSL. It seems like that is where it is spending most of its time, and hence CPU cycles. I have read of a problem in the httplib library (which boto uses for SSL) where the performance doesn't increase unless the packets are over a certain size, though that was for Python 2.5 and may have already been fixed.
1
0
0
SimpleDB query performance improvement using boto
3
python,amazon-simpledb,boto
0
2010-06-23T15:38:00.000
Can I somehow work with remote databases (if they allow it) using the Django ORM? It is understood that the settings spell out the local database. I would like to periodically connect to various external databases and perform commands such as loading a dump.
0
1
0.197375
0
false
3,125,012
1
121
1
0
0
3,123,801
If you can connect to the database remotely, then you can simply specify its host/port in settings.py exactly as you would a local one.
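A sketch of what that looks like in settings.py, using the DATABASES setting from newer Django versions with multi-database support; every value below is a placeholder.

```python
# settings.py sketch: a remote database is just different host/credentials.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "localdb",
        "USER": "app",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
    },
    "remote": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "externaldb",
        "USER": "report",
        "PASSWORD": "secret",
        "HOST": "db.example.com",   # remote host instead of localhost
        "PORT": "3306",
    },
}

# Elsewhere in the project: MyModel.objects.using("remote").all()
```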
1
0
0
Remote execution of commands using the Django ORM
1
python,django,orm
0
2010-06-26T12:05:00.000
My two main requirements for the site are related to degrees of separation and graph matching (given two graphs, return some kind of similarity score). My first thought was to use MySql to do it, which would probably work out okay for storing how I want to manage 'friends' (similar to Twitter), but I'm thinking if I want to show users results which will make use of graphing algorithms (like shortest path between two people) maybe it isn't the way to go for that. My language of choice for the front end, would be Python using something like Pylons but I haven't committed to anything specific yet and would be willing to budge if it fitted well with a good backend solution. I'm thinking of using MySQL for storing user profile data, neo4j for the graph information of relations between users and then have a Python application talk to both of them. Maybe there is a simpler/more efficient way to do this kind of thing. At the moment for me it's more getting a suitable prototype done than worrying about scalability but I'm willing to invest some time learning something new if it'll save me time rewriting/porting in the future. PS: I'm more of a programmer than a database designer, so I'd prefer having rewrite the frontend later rather than say porting over the database, which is the main reason I'm looking for advice.
4
2
0.099668
0
false
3,126,208
1
749
1
0
0
3,126,155
MySQL is really your best choice for the database unless you want to go proprietary. As for the actual language, pick whatever you are familiar with. While Youtube and Reddit are written in python, many of the other large sites use Ruby (Hulu, Twitter, Techcrunch) or C++ (Google) or PHP (Facebook, Yahoo, etc).
1
0
0
What should I use for the backend of a 'social' website?
4
python,sql,mysql,database
0
2010-06-27T02:19:00.000
With PostgreSQL, one of my tables has an 'interval' column, values of which I would like to extract as something I can manipulate (datetime.timedelta?); however I am using PyGreSQL which seems to be returning intervals as strings, which is less than helpful. Where should I be looking to either parse the interval or make PyGreSQL return it as a <something useful>?
0
3
1.2
0
true
3,137,124
0
1,424
1
0
0
3,134,699
Use Psycopg 2. It correctly converts between Postgres's interval data type and Python's timedelta.
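A small illustration of that behaviour (connection string is a placeholder): psycopg2 hands PostgreSQL interval values back as datetime.timedelta objects, ready for arithmetic.

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
cur.execute("SELECT interval '2 days 3 hours'")
delta = cur.fetchone()[0]
# delta is a datetime.timedelta: delta.days == 2, delta.seconds == 10800
cur.close()
conn.close()
```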
1
0
0
python sql interval
1
python,sql,postgresql,pygresql
0
2010-06-28T17:33:00.000
I've decided to give Python a try on NetBeans. The problem so far is that when I try to run a program I know works (i.e. it runs fine through the terminal), it fails. For the project I selected the correct Python version (2.6.5), and received the following error: Traceback (most recent call last): File "/Users/XXX/NetBeansProjects/NewPythonProject3/src/newpythonproject3.py", line 4, in import sqlite3 ImportError: No module named sqlite3
0
0
0
0
false
3,151,119
0
184
1
0
0
3,149,370
Search for PYTHONPATH. You probably have different settings in your OS and Netbeans.
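A quick diagnostic along those lines: run this both from the terminal and inside NetBeans; if the two report different interpreters or module paths, that difference explains the ImportError.

```python
import sys

sys.stdout.write("executable: %s\n" % sys.executable)
sys.stdout.write("path: %r\n" % (sys.path,))
import sqlite3   # fails only where the wrong interpreter/path is in effect
sys.stdout.write("sqlite3 loaded from: %s\n" % sqlite3.__file__)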
1
0
0
Netbeans + sqlite3 = Fail?
1
python,sqlite
0
2010-06-30T12:50:00.000
I'm writing a script for exporting some data. Some details about the environment: the project is Django based; I'm using raw/custom SQL for the export; the database engine is MySQL; the database and code are on the same box. Details about the SQL: a bunch of inner joins; a bunch of columns selected, some with a basic multiplication calculation; the SQL result has about 55K rows. When I run the SQL statement in the mysql command line, it takes 3-4 seconds. When I run the SQL in my python script, the line cursor.execute(sql, [id]) takes over 60 seconds. Any ideas on what might be causing this?
0
0
0
0
false
3,188,555
0
705
1
0
0
3,188,289
Two ideas: MySQL may have query caching enabled, which makes it difficult to get accurate timing when you run the same query repeatedly. Try changing the ID in your query to make sure that it really does run in 3-4 seconds consistently. Try using strace on the python process to see what it is doing during this time.
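To rule out the query cache from Python, one option is to time the same statement with SQL_NO_CACHE; a hedged sketch (connection details, columns and table are placeholders):

```python
import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb")
cur = conn.cursor()

start = time.time()
# SQL_NO_CACHE forces MySQL to execute the query instead of serving a cached result
cur.execute("SELECT SQL_NO_CACHE col_a, col_b FROM big_table WHERE id = %s", [42])
rows = cur.fetchall()
print("fetched %d rows in %.2fs" % (len(rows), time.time() - start))
```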
1
0
0
Python MySQL Performance: Runs fast in mysql command line, but slow with cursor.execute
1
python,mysql,performance
0
2010-07-06T16:39:00.000
As mentioned in the Google App Engine documentation, the datastore does not support GROUP BY and other aggregation functions. Are there any alternatives for implementing the same functionality? I am working on a project where I need this on an urgent basis; with a large dataset it's not efficient to iterate over the result set and then perform the logic. Please suggest. Thanks in advance.
0
1
1.2
0
true
3,211,471
1
322
1
1
0
3,210,577
The best way is to populate the summaries (aggregates) at the time of write. This way your reads will be faster, since they just read, at the cost of writes, which will have to update any summaries likely to be affected by the write. Hopefully you will be reading more often than writing/updating summaries.
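A hedged sketch of that idea with the App Engine Python db API (model and key names are invented): instead of a GROUP BY, bump a per-category counter entity inside a transaction whenever an item is written, so the aggregate is a single get at read time.

```python
from google.appengine.ext import db

class CategoryTotal(db.Model):
    count = db.IntegerProperty(default=0)

def increment(category_name):
    def txn():
        total = CategoryTotal.get_by_key_name(category_name)
        if total is None:
            total = CategoryTotal(key_name=category_name)
        total.count += 1
        total.put()
    db.run_in_transaction(txn)

# Reading the aggregate is then a single fetch instead of scanning all items:
#   CategoryTotal.get_by_key_name('books').count
```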
1
0
0
Google application engine Datastore - any alternatives to aggregate functions and group by?
1
python,google-app-engine
0
2010-07-09T07:21:00.000
I'm using Google Appengine to store a list of favorites, linking a Facebook UserID to one or more IDs from Bing. I need function calls returning the number of users who have favorited an item, and the number of times an item has been favorited (and by whom). My question is, should I resolve this relationship into two tables for efficiency? If I have a table with columns for Facebook ID and Bing ID, I can easily use select queries for both of the functions above, however this will require that each row is searched in each query. The alternative is having two tables, one for each Facebook user's favorites and the other for each Bing item's favorited users, and using transactions to keep them in sync. The two tables option has the advantage of being able to use JSON or CSV in the database so that only one row needs to be fetched, and little manipulation needs to be done for an API. Which option is better, in terms of efficiency and minimising cost? Thanks, Matt
1
0
1.2
0
true
3,213,988
1
280
1
1
0
3,210,994
I don't think there's a hard and fast answer to questions like this. "Is this optimization worth it" always depends on many variables such as, is the lack of optimization actually a problem to start with? How much of a problem is it? What's the cost in terms of extra time and effort and risk of bugs of a more complex optimized implementation, relative to the benefits? What might be the extra costs of implementing the optimization later, such as data migration to a new schema?
1
0
0
Many-to-many relationships in Google AppEngine - efficient?
1
python,google-app-engine,performance,many-to-many
0
2010-07-09T08:21:00.000
I'm using MySQLdb and when I perform an UPDATE to a table row I sometimes get an infinite process hang. At first I thought, maybe its COMMIT since the table is Innodb, but even with autocommit(True) and db.commit() after each update I still get the hang. Is it possible there is a row lock and the query just fails to carry out? Is there a way to handle potential row locks or maybe even handle slow queries?
0
1
1.2
0
true
3,216,500
0
544
1
0
0
3,216,027
Depending on your user privileges, you can execute SHOW PROCESSLIST or SELECT from information_schema.processlist while the UPDATE hangs to see if there is a contention issue with another query. Also do an EXPLAIN on a SELECT of the WHERE clause used in the UPDATE to see if you need to change the statement. If it's a lock contention, then you should eventually encounter a Lock Wait Timeout (default = 50 sec, I believe). Otherwise, if you have timing constraints, you can make use of KILL QUERY and KILL CONNECTION to unblock the cursor execution.
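A small sketch of doing that check from Python (connection details are placeholders): open a second connection while the UPDATE hangs, inspect the processlist, and optionally kill the stuck statement if your privileges allow it.

```python
import MySQLdb

admin = MySQLdb.connect(host="localhost", user="admin", passwd="secret", db="mydb")
cur = admin.cursor()

cur.execute("SHOW PROCESSLIST")
for row in cur.fetchall():
    # columns: (Id, User, Host, db, Command, Time, State, Info)
    print(repr(row))

# If one connection id is clearly stuck on the UPDATE (stuck_id is hypothetical):
#   cur.execute("KILL QUERY %d" % stuck_id)
```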
1
0
0
MySQLdb Handle Row Lock
1
python,mysql
0
2010-07-09T19:47:00.000
Is there a good step-by-step online guide for installing XAMPP (Apache server, MySQL server) together with Zope/Plone on the same Linux machine and making them play nicely, or do I have to go through their confusing documentation? How can I install this configuration in the best way? I can install and use both separately, but running them in tandem is an issue for me. Any help is appreciated.
0
0
1.2
0
true
3,247,954
1
486
1
0
0
3,233,246
Sorry for the wrong site, but I just figured out that it was not a problem at all. I installed XAMPP (a snap) and downloaded and ran the Plone install script. Both sites (XAMPP on port 80 and Zope/Plone on 8080) are working without problems. Just to let everyone know. I don't know why I got nervous about this :)
1
0
0
Guide to install xampp with zope-plone on the same linux machine?
2
python,apache,xampp,plone,zope
0
2010-07-13T00:09:00.000
I never thought I'd ever say this, but I'd like to have something like the report generator in Microsoft Access. Very simple, just list data from a SQL query. I don't really care what language is used as long as I can get it done fast. C#, C++, Python, Javascript... I want to know the quickest (in a development sense) way to display data from a database. Edit: I'm using MySQL with a web interface for data input. It would be much better if the user had some kind of GUI.
0
0
0
0
false
3,249,163
0
325
1
0
0
3,242,448
Some suggestions: 1) ASP.NET GridView: use the free Visual Studio to create an ASP.NET page (you can use VB, C#, etc.), then drag/drop a GridView control onto your page and connect it to your data and display fields, all via the wizard (you did say quick and dirty, correct?). No coding required if you can live within the wizard's limitations (which aren't too bad, actually). The type of database (MySQL or otherwise) isn't relevant. Another quick and dirty approach might be Access itself -- it can create 'pages', I think, that are web publishable. If you want to put a little more work into it, ASP.NET has some other great controls / layout capability (non-wizard derived). Also, you could look at SSRS if you have access to it. More initial setup work, but it has the option to let your users create their own reports in a semi-Access-like fashion. Web accessible. Good luck.
1
0
0
Quick and dirty reports based on a SQL query
2
c#,javascript,python,sql,database
0
2010-07-13T23:59:00.000
Installed Django from source (python setup.py install and such), installed MySQLdb from source (python setup.py build, python setup.py install). Using Python 2.4 which came installed on the OS (CentOS 5.5). Getting the following error message after launching the server: Error loading MySQLdb module: No module named MySQLdb The pythonpath the debug info provides includes '/usr/lib/python2.4/site-packages' and yet, if I ls that directory, I can plainly see MySQL_python-1.2.3-py2.4-linux-i686.egg Using the python interactive shell, I can type import MySQLdb and it produces no errors. This leads me to believe it's a Django pathing issue, but I haven't the slightest clue where to start looking as I'm new to both Django and python. EDIT: And to be a bit more specific, everything is currently running as root. I haven't setup any users yet on the machine, so none exist other than root. EDITx2: And to be even more specific, web server is Cherokee, and deploying using uWSGI. All installed from source.
19
1
0.019997
0
false
23,076,238
1
38,547
1
0
0
3,243,073
Try this. On Debian/Ubuntu Linux: sudo apt-get install python-mysqldb (on CentOS/RHEL: yum install MySQL-python). On Windows: pip install MySQL-python or easy_install MySQL-python. Hope this works.
1
0
0
Django unable to find MySQLdb python module
10
python,django,mysql
0
2010-07-14T02:49:00.000
I am a PHP guy. In PHP I mainly use the Doctrine ORM to deal with database issues. I am considering moving to Python + Django. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and the ORM in Django give me a comparison of the features of these two ORM implementations?
3
1
0.049958
0
false
8,543,708
1
5,103
3
0
0
3,249,977
I've used Doctrine on a 2-year project that ended 1.5 years ago; since then I've been doing mostly Django. I prefer Django's ORM over Doctrine any day: more features, more consistency, faster and shinier.
1
0
0
ORM in Django vs. PHP Doctrine
4
php,python,django,orm,doctrine
0
2010-07-14T20:01:00.000
I am a PHP guy. In PHP I mainly use the Doctrine ORM to deal with database issues. I am considering moving to Python + Django. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and the ORM in Django give me a comparison of the features of these two ORM implementations?
3
5
0.244919
0
false
12,267,439
1
5,103
3
0
0
3,249,977
I am one of the rare people who had to switch from Django 1.4 to Symfony 2.1, so I had to use Doctrine 2 instead of the Django ORM. Maybe Doctrine can do many things, but let me tell you that it is a nightmare for me to use it coming from Django. I'm tired of the verbosity of PHP/Symfony/Doctrine... Also, I never needed anything that Django's ORM didn't already handle (maybe my projects weren't big enough to reach its limits). Simply compare the description of the data model between both ORMs (including setters & getters)...
1
0
0
ORM in Django vs. PHP Doctrine
4
php,python,django,orm,doctrine
0
2010-07-14T20:01:00.000
I am a PHP guy. In PHP I mainly use the Doctrine ORM to deal with database issues. I am considering moving to Python + Django. I know Python but don't have experience with Django. Can anyone who has good knowledge of both Doctrine and the ORM in Django give me a comparison of the features of these two ORM implementations?
3
-1
-0.049958
0
false
3,250,203
1
5,103
3
0
0
3,249,977
Django isn't just an ORM. It is a web framework, like Symfony. The form framework in Symfony is modeled on Django's, for example. Its ORM part is more like Doctrine 2, I think, but I haven't played with either much.
1
0
0
ORM in Django vs. PHP Doctrine
4
php,python,django,orm,doctrine
0
2010-07-14T20:01:00.000
We need to be able to inform a Delphi application when there are changes to some of our tables in MySQL. The Delphi clients are on the Internet behind a firewall, and they have to be authenticated before connecting to the notification server we need to implement. The server can be programmed using, for example, Java, PHP or Python, and it has to support thousands of clients. Typically a change in the database needs to be reported to only a single client, and I don't believe performance will be a bottleneck. It just has to be possible to inform any of those thousands of clients when a change affecting that specific client occurs. I have been thinking of a solution where: a MySQL trigger informs the notification server of the change; the Delphi client connects to a message queue and gets the notification from it. My questions: What would be the best way for the trigger to inform the external server of the change? Which message queue solution should I pick?
4
0
0
0
false
3,255,344
0
1,596
2
0
0
3,255,330
Why not use the XMPP protocol (aka Jabber)?
1
0
0
How to create a notification server which informs Delphi application when database changes?
3
java,php,python,mysql,delphi
0
2010-07-15T12:02:00.000
We need to be able to inform a Delphi application when there are changes to some of our tables in MySQL. The Delphi clients are on the Internet behind a firewall, and they have to be authenticated before connecting to the notification server we need to implement. The server can be programmed using, for example, Java, PHP or Python, and it has to support thousands of clients. Typically a change in the database needs to be reported to only a single client, and I don't believe performance will be a bottleneck. It just has to be possible to inform any of those thousands of clients when a change affecting that specific client occurs. I have been thinking of a solution where: a MySQL trigger informs the notification server of the change; the Delphi client connects to a message queue and gets the notification from it. My questions: What would be the best way for the trigger to inform the external server of the change? Which message queue solution should I pick?
4
1
0.066568
0
false
3,255,395
0
1,596
2
0
0
3,255,330
There are Apache Camel and Spring Integration; both provide ways to send messages across.
1
0
0
How to create a notification server which informs Delphi application when database changes?
3
java,php,python,mysql,delphi
0
2010-07-15T12:02:00.000
Due to the nature of my application, I need to support fast inserts of large volumes of data into the database. Using executemany() increases performance, but there's a caveat. For example, MySQL has a configuration parameter called max_allowed_packet, and if the total size of my insert queries exceeds its value, MySQL throws an error. Question #1: Is there a way to tell SQLAlchemy to split the packet into several smaller ones? Question #2: If other RDBS have similar constraints, how should I work around them as well? P.S. I had posted this question earlier but deleted it when I wrongly assumed that likely I will not encounter this problem after all. Sadly, that's not the case.
1
2
1.2
0
true
3,271,153
0
2,103
1
0
0
3,267,580
I had a similar problem recently and used the following (not very elegant) workaround. First I parse my.cnf for the value of max_allowed_packet; if I can't find it, the maximum is set to a default value. All data items are stored in a list. Next, for each data item I count the approximate byte length (for strings, it's the length of the string in bytes; for other data types I take the maximum bytes used, to be safe). I add them up, committing after I have reached approx. 75% of max_allowed_packet (as the SQL queries will take up space as well, just to be on the safe side). This approach is not really beautiful, but it worked flawlessly for me.
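A rough sketch of that batching idea (function and variable names are invented; rows are assumed to be dicts keyed by column name): accumulate rows, flush an executemany-style insert whenever the estimated size approaches ~75% of max_allowed_packet, then flush the remainder.

```python
MAX_ALLOWED_PACKET = 16 * 1024 * 1024        # read from my.cnf in real code
THRESHOLD = int(MAX_ALLOWED_PACKET * 0.75)

def estimated_size(row):
    # crude per-row byte estimate: string lengths plus a fixed guess for the rest
    return sum(len(v) if isinstance(v, str) else 16 for v in row.values())

def insert_in_chunks(connection, table, rows):
    batch, batch_bytes = [], 0
    for row in rows:
        batch.append(row)
        batch_bytes += estimated_size(row)
        if batch_bytes >= THRESHOLD:
            connection.execute(table.insert(), batch)   # SQLAlchemy executemany
            batch, batch_bytes = [], 0
    if batch:
        connection.execute(table.insert(), batch)
```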
1
0
0
SQLAlchemy and max_allowed_packet problem
1
python,mysql,sqlalchemy,large-query
0
2010-07-16T17:54:00.000
I am trying to set up a website in Django which allows the user to send queries to a database containing information about their representatives in the European Parliament. I have the data in a comma-separated .txt file with the following format: Parliament, Name, Country, Party_Group, National_Party, Position 7, Marta Andreasen, United Kingdom, Europe of freedom and democracy Group, United Kingdom Independence Party, Member etc.... I want to populate a SQLite3 database with this data, but so far all the tutorials I have found only show how to do this by hand. Since I have 736 observations in the file I don't really want to do this. I suspect this is a simple matter, but I would be very grateful if someone could show me how to do it. Thomas
13
2
0.066568
0
false
3,275,298
1
7,034
1
0
0
3,270,952
You asked what the create(**dict(zip(fields, row))) line does. I don't know how to reply directly to your comment, so I'll try to answer it here. zip takes multiple lists as arguments and returns a list of their corresponding elements as tuples: zip(list1, list2) => [(list1[0], list2[0]), (list1[1], list2[1]), ...]. dict takes a list of 2-element tuples and returns a dictionary mapping each tuple's first element (key) to its second element (value). create is a function that takes keyword arguments. You can use **some_dictionary to pass that dictionary into a function as keyword arguments: create(**{'name':'john', 'age':5}) => create(name='john', age=5).
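Putting the pieces together as a hedged example (the file name, field list and Django model name are assumptions based on the question, not the asker's actual code):

```python
import csv
from myapp.models import Representative   # hypothetical app and model

fields = ['parliament', 'name', 'country', 'party_group', 'national_party',
          'position']

def load_meps(path='meps.txt'):
    for row in csv.reader(open(path)):
        row = [value.strip() for value in row]
        # same pattern as above: zip -> dict -> keyword arguments
        Representative.objects.create(**dict(zip(fields, row)))
```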
1
0
0
Populating a SQLite3 database from a .txt file with Python
6
python,django,sqlite
0
2010-07-17T09:38:00.000
I'm having all sorts of trouble trying to instal MySQLdb (1.2.2) on snow leopard. I am running python 2.5.1 and MySQL 5.1 32bit. Python and MySQL are running just fine. I've also installed django 1.2.1, although I don't think thats all that important, but wanted to give an idea of the stack i'm trying to install. I am using python 2.5.x as my webhost only has that version as an option and I want to be as close to my production env as possible. anyway... After following many of the existing articles and tutorials which mention modifying _mysql.c and setup_posix.py etc, I am still running into trouble. Here is my stack trace: xxxxxxx-mbp:MySQL-python-1.2.2 xxxxxxx$ sudo ARCHFLAGS="-arch x86_64" python setup.py build running build running build_py creating build creating build/lib.macosx-10.3-i386-2.5 copying _mysql_exceptions.py -> build/lib.macosx-10.3-i386-2.5 creating build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/init.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/converters.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/connections.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/cursors.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/release.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb copying MySQLdb/times.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb creating build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/init.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/CR.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/ER.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/REFRESH.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.3-i386-2.5/MySQLdb/constants running build_ext building '_mysql' extension creating build/temp.macosx-10.3-i386-2.5 gcc -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,2,'final',0) -D__version__=1.2.2 -I/usr/local/mysql-5.1.48-osx10.6-x86/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c _mysql.c -o build/temp.macosx-10.3-i386-2.5/_mysql.o -g -Os -arch i386 -fno-common -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL In file included from /Developer/SDKs/MacOSX10.4u.sdk/usr/include/wchar.h:112, from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/unicodeobject.h:118, from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:83, from pymemcompat.h:10, from _mysql.c:29: /Developer/SDKs/MacOSX10.4u.sdk/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory In file included from _mysql.c:35: /usr/local/mysql-5.1.48-osx10.6-x86/include/my_config.h:1062:1: warning: "HAVE_WCSCOLL" redefined In file included from /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h:8, from pymemcompat.h:10, from _mysql.c:29: /Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/pyconfig.h:724:1: warning: this is the location of the previous definition error: command 'gcc' failed with exit status 1 Does anyone have any ideas?
2
0
0
0
false
3,285,926
0
461
1
0
0
3,285,631
I tried to solve this one for days myself and finally gave up. I switched to postgres. It works pretty well with django on snow leopard, with one minor problem. For some reason auto_inc pk ids don't get assigned to some models. I solved the problem by randomly assigning an id from a large random range, and relying on the unique column designation to prevent collisions. My production server is linux. Mysql and postgres install fine on it. In fact, many on the #django irc channel recommended running a virtual linux instance on the mac to get around my mysql install problems on it.
1
0
0
Installing MySQLdb on Snow Leopard
3
python,mysql,django
0
2010-07-19T22:34:00.000
I'm playing around with a little web app in web.py, and am setting up a url to return a JSON object. What's the best way to convert a SQL table to JSON using python?
54
1
0.014285
0
false
55,329,857
0
157,377
1
0
0
3,286,525
If you are using SQL Server 2016 or above, you can make your SELECT query return JSON by using the FOR JSON AUTO clause, e.g. SELECT name, surname FROM users FOR JSON AUTO will return JSON such as [{"name": "Jane","surname": "Doe" }, {"name": "Foo","surname": "Samantha" }, ..., {"name": "John", "surname": "boo" }]
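Since the question asks about doing this from Python, here is a small database-agnostic sketch using the DB-API cursor's description to build the JSON (table name is a placeholder; don't interpolate user input into the SQL like this):

```python
import json

def table_to_json(cursor, table_name):
    cursor.execute("SELECT * FROM %s" % table_name)
    columns = [col[0] for col in cursor.description]   # column names from the driver
    rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
    return json.dumps(rows)
```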
1
0
1
return SQL table as JSON in python
14
python,sql,json
0
2010-07-20T02:16:00.000
I have 5 python cgi pages. I can navigate from one page to another. All pages get their data from the same database table just that they use different queries. The problem is that the application as a whole is slow. Though they connect to the same database, each page creates a new handle every time I visit it and handles are not shared by the pages. I want to improve performance. Can I do that by setting up sessions for the user? Suggestions/Advices are welcome. Thanks
1
0
0
0
false
3,289,546
1
179
1
0
0
3,289,330
Django and Pylons are both frameworks that solve this problem quite nicely, namely by abstracting the DB-frontend integration. They are worth considering.
1
0
0
Improving performance of cgi
2
python,cgi
1
2010-07-20T11:15:00.000
I need to save an image file into an SQLite database in Python. I could not find a solution. How can I do it? Thanks in advance.
8
0
0
0
false
3,310,034
0
15,288
2
0
0
3,309,957
It's never a good idea to store raw binary data in a database. Couldn't you just save the file on the filesystem and record the path to it in the database?
1
0
0
pysqlite - how to save images
4
python,image,sqlite,blob,pysqlite
0
2010-07-22T14:29:00.000
I need to save an image file into an SQLite database in Python. I could not find a solution. How can I do it? Thanks in advance.
8
11
1.2
0
true
3,310,995
0
15,288
2
0
0
3,309,957
Write: cursor.execute('insert into File (id, name, bin) values (?,?,?)', (id, name, sqlite3.Binary(file.read()))). Read: file = cursor.execute('select bin from File where id=?', (id,)).fetchone(). If you need to return the binary data in a web app: return cStringIO.StringIO(file['bin']).
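A self-contained version of the same approach for reference (file names and the File table layout are illustrative):

```python
import sqlite3

conn = sqlite3.connect('images.db')
conn.execute('CREATE TABLE IF NOT EXISTS File '
             '(id INTEGER PRIMARY KEY, name TEXT, bin BLOB)')

# write: wrap the raw bytes in sqlite3.Binary so they are stored as a BLOB
with open('photo.png', 'rb') as f:
    conn.execute('INSERT INTO File (name, bin) VALUES (?, ?)',
                 ('photo.png', sqlite3.Binary(f.read())))
conn.commit()

# read: the BLOB comes back as a buffer/bytes object you can write straight out
name, data = conn.execute('SELECT name, bin FROM File WHERE id = ?', (1,)).fetchone()
with open('copy_of_' + name, 'wb') as out:
    out.write(data)
```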
1
0
0
pysqlite - how to save images
4
python,image,sqlite,blob,pysqlite
0
2010-07-22T14:29:00.000
This is a tricky question; we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation: We have users and groups. A user can belong to many groups (a many-to-many relation). There are certain parts of the site that need access control, but: there are certain ROWS of certain tables that need access control, i.e. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group). Is there an easy way to accomplish this? Are we missing something? We need to implement this in Python (if that's any help).
2
2
0.099668
0
false
3,327,313
1
214
3
0
0
3,327,279
This problem is not really new; it's basically the general problem of authorization and access rights/control. In order to avoid having to model and maintain a complete graph of exactly what objects each user can access in each possible way, you have to make decisions (based on what your application does) about how to start reining in the multiplicative scale factors. So first: where do users get their rights? If each user is individually assigned rights, you're going to pose a significant ongoing management challenge to whoever needs to add users, modify users, etc. Perhaps users can get their rights from the groups they're members of. Now you have a scale factor that simplifies management and makes the system easier to understand. Changing a group changes the effective rights for all users who are members. Now, what do these rights look like? It's still probably not wise to assign rights on a target object-by-object basis. Thus maybe rights should be thought of as a set of abstract "access cards". Objects in the system can be marked as requiring "blue" access for read, "red" access for update, and "black" access for delete. Those abstract rights might be arranged in some sort of topology, such that having "black" access means you implicitly also have "red" and "blue", or maybe they're all disjoint; it's up to you and how your application has to work. (Note also that object types (tables, if you like) may need their own access rules, at least for "create".) By introducing collection points in the graph pictures you draw relating actors in the system to objects they act upon, you can handle scale issues and keep the complexity of authorization under control. It's never easy, however, and often it's the case that voiced customer desires result in something that will never work out and never in fact achieve what the customer (thinks she) wants. The implementation language doesn't have a lot to do with the architectural decisions you need to make.
1
0
0
Control access to parts of a system, but also to certain pieces of information
4
python,access-control
0
2010-07-24T23:11:00.000
This is a tricky question; we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation: We have users and groups. A user can belong to many groups (a many-to-many relation). There are certain parts of the site that need access control, but: there are certain ROWS of certain tables that need access control, i.e. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group). Is there an easy way to accomplish this? Are we missing something? We need to implement this in Python (if that's any help).
2
0
0
0
false
3,327,325
1
214
3
0
0
3,327,279
It's hard to be specific without knowing more about your setup and about why exactly you need different users to have different permissions on different rows. But generally, I would say that whenever you access any data in the database in your code, you should precede it by an authorization check, which examines the current user and group and the row being inserted/updated/deleted/etc. and decides whether the operation should be allowed or not. Consider designing your system in an encapsulated manner - for example you could put all the functions that directly access the database in one module, and make sure that each of them contains the proper authorization check. (Having them all in one file makes it less likely that you'll miss one) It might be helpful to add a permission_class column to the table, and have another table specifying which users or groups have which permission classes. Then your authorization check simply has to take the value of the permission class for the current row, and see if the permissions table contains an association between that permission class and either the current user or any of his/her groups.
1
0
0
Control access to parts of a system, but also to certain pieces of information
4
python,access-control
0
2010-07-24T23:11:00.000
This is a tricky question; we've been talking about this for a while (days) and haven't found a convincingly good solution. This is the situation: We have users and groups. A user can belong to many groups (a many-to-many relation). There are certain parts of the site that need access control, but: there are certain ROWS of certain tables that need access control, i.e. a certain user (or certain group) should not be able to delete a certain row, but other rows of the same table could have a different permission setting for that user (or group). Is there an easy way to accomplish this? Are we missing something? We need to implement this in Python (if that's any help).
2
0
0
0
false
3,327,726
1
214
3
0
0
3,327,279
Add an additional column, "category" or "type", to the table(s) that will categorize the rows (or, if you will, group/cluster them), and then create a pivot table that defines the access control between (rowCategory, userGroup). So for each row, by its category, you can pull which user groups have access (and what kind of access).
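A hedged sqlite3 sketch of that layout (table and column names are invented): rows carry a category, and a pivot table maps (category, group, permission); the check before an operation is a single query.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE document (id INTEGER PRIMARY KEY, body TEXT, category TEXT);
CREATE TABLE category_permission (category TEXT, group_name TEXT, perm TEXT);
""")

def can(conn, group_names, category, perm):
    # is there any row granting this permission for this category to any of the user's groups?
    placeholders = ','.join('?' * len(group_names))
    row = conn.execute(
        "SELECT 1 FROM category_permission "
        "WHERE category = ? AND perm = ? AND group_name IN (%s)" % placeholders,
        [category, perm] + list(group_names)).fetchone()
    return row is not None

# e.g. before deleting a row:
#   if can(conn, user_groups, row_category, 'delete'): ...
```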
1
0
0
Control access to parts of a system, but also to certain pieces of information
4
python,access-control
0
2010-07-24T23:11:00.000
I'm using MongoDB, a NoSQL database. Basically, as a result of a query I have a list of dicts which themselves contain lists of dictionaries... which I need to work with. Unfortunately dealing with all this data within Python can be brought to a crawl when there is too much data. I have never had to deal with this problem, and it would be great if someone with experience could give a few suggestions. =)
0
1
0.066568
0
false
3,333,193
0
406
2
0
0
3,330,668
Are you loading all the data into memory at once? If so you could be causing the OS to swap memory to disk, which can bring any system to a crawl. Dictionaries are hashtables, so even an empty dict will use up a lot of memory, and from what you say you are creating a lot of them at once. I don't know the MongoDB API, but I presume there is a way of iterating through the results one at a time instead of reading in the entire result set at once - try using that. Or rewrite your query to return a subset of the data. If disk swapping is not the problem then profile the code to see what the bottleneck is, or put some sample code in your question. Without more specific information it is hard to give a more specific answer.
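For what it's worth, a hedged pymongo sketch of exactly that (2010-era API; newer pymongo uses MongoClient, and collection/field names here are placeholders): iterate the cursor lazily and project only the fields you need instead of materialising every nested document at once.

```python
from pymongo import Connection

db = Connection('localhost', 27017)['mydb']
# second argument restricts the returned fields to just what we need
cursor = db.results.find({'run_id': 42}, ['score', 'label'])

total = 0
for doc in cursor:          # documents are fetched in batches as you iterate
    total += doc['score']
print("sum of scores: %s" % total)
```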
1
0
1
Speeding up parsing of HUGE lists of dictionaries - Python
3
python,parsing,list,sorting,dictionary
0
2010-07-25T19:23:00.000
I'm using MongoDB, a NoSQL database. Basically, as a result of a query I have a list of dicts which themselves contain lists of dictionaries... which I need to work with. Unfortunately dealing with all this data within Python can be brought to a crawl when there is too much data. I have never had to deal with this problem, and it would be great if someone with experience could give a few suggestions. =)
0
3
1.2
0
true
3,333,236
0
406
2
0
0
3,330,668
Do you really want all of that data back in your Python program? If so fetch it back a little at a time, but if all you want to do is summarise the data then use mapreduce in MongoDB to distribute the processing and just return the summarised data. After all, the point about using a NoSQL database that cleanly shards all the data across multiple machines is precisely to avoid having to pull it all back onto a single machine for processing.
1
0
1
Speeding up parsing of HUGE lists of dictionaries - Python
3
python,parsing,list,sorting,dictionary
0
2010-07-25T19:23:00.000
I was using Python 2.6.5 to build my application, and it came with sqlite3 3.5.9. Apparently though, as I found out in another question of mine, foreign key support wasn't introduced in sqlite3 until version 3.6.19. However, Python 2.7 comes with sqlite3 3.6.21, so that would work -- I decided I wanted to use foreign keys in my application, so I tried upgrading to Python 2.7. I'm using Twisted, and I couldn't for the life of me get it to build. Twisted relies on zope.interface and I can't find zope.interface for Python 2.7 -- I thought it might just "work" anyway, but I'd have to copy all the files over myself and get everything working myself, rather than just using the self-installing packages. So I thought it might be wiser to just re-build Python 2.6 and link it against a new version of sqlite3. But I don't know how -- how would I do this? I have Visual Studio 2008 installed as a compiler, I read that it is the only one that is really supported for Windows, and I am running a 64-bit operating system.
9
6
1
0
false
3,341,117
0
4,281
2
0
0
3,333,095
Download the latest version of sqlite3.dll from the SQLite website and replace the sqlite3.dll in the Python directory.
1
0
1
How can I upgrade the sqlite3 package in Python 2.6?
3
python,build,linker,sqlite
0
2010-07-26T07:54:00.000
I was using Python 2.6.5 to build my application, and it came with sqlite3 3.5.9. Apparently though, as I found out in another question of mine, foreign key support wasn't introduced in sqlite3 until version 3.6.19. However, Python 2.7 comes with sqlite3 3.6.21, so that would work -- I decided I wanted to use foreign keys in my application, so I tried upgrading to Python 2.7. I'm using Twisted, and I couldn't for the life of me get it to build. Twisted relies on zope.interface and I can't find zope.interface for Python 2.7 -- I thought it might just "work" anyway, but I'd have to copy all the files over myself and get everything working myself, rather than just using the self-installing packages. So I thought it might be wiser to just re-build Python 2.6 and link it against a new version of sqlite3. But I don't know how -- how would I do this? I have Visual Studio 2008 installed as a compiler, I read that it is the only one that is really supported for Windows, and I am running a 64-bit operating system.
9
1
0.066568
0
false
3,333,348
0
4,281
2
0
0
3,333,095
I decided I'd just give this a shot when I realized that every library I've ever installed in python 2.6 resided in my site-packages folder. I just... copied site-packages to my 2.7 installation, and it works so far. This is by far the easiest route for me if this works -- I'll look further into it but at least I can continue to develop now. I won't accept this answer, because it doesn't even answer my question, but it does solve my problem, as far as I can tell so far.
1
0
1
How can I upgrade the sqlite3 package in Python 2.6?
3
python,build,linker,sqlite
0
2010-07-26T07:54:00.000
I have some MySQL database server information that needs to be shared between a Python backend and a PHP frontend. What is the best way to go about storing the information in a manner wherein it can be read easily by Python and PHP? I can always brute force it with a bunch of str.replace() calls in Python and hope it works if nobody has a solution, or I can just maintain two separate files, but it would be a bunch easier if I could do this automatically. I assume it would be easiest to store the variables in PHP format directly and do conversions in Python, and I know there exist Python modules for serializing and unserializing PHP, but I haven't been able to get it all figured out. Any help is appreciated!
0
4
1.2
0
true
3,349,485
0
200
1
0
0
3,349,445
Store the shared configuration in a plain text file, preferably in a standard format. You might consider yaml, ini, or json. I'm pretty sure both PHP and python can very trivially read and parse all three of those formats.
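A minimal illustration of the JSON option (file name and keys are invented): both sides read the same file, so the connection details live in exactly one place.

```python
# db.json:  {"host": "localhost", "user": "app", "password": "secret", "name": "appdb"}

# Python side
import json

with open('db.json') as f:
    cfg = json.load(f)
# connect using cfg['host'], cfg['user'], cfg['password'], cfg['name']

# PHP side is the one-liner:
#   $cfg = json_decode(file_get_contents('db.json'), true);
```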
1
0
0
Python - PHP Shared MySQL server connection info?
1
php,python,mysql,variables,share
0
2010-07-28T02:23:00.000
I store groups of entities in the google app engine Data Store with the same ancestor/parent/entityGroup. This is so that the entities can be updated in one atomic datastore transaction. The problem is as follows: I start a db transaction I update entityX by setting entityX.flag = True I save entityX I query for entity where flag == True. BUT, here is the problem. This query does NOT return any results. It should have returned entityX, but it did not. When I remove the transaction, my code works perfectly, so it must be the transaction that is causing this strange behavior. Should updates to entities in the entity group not be visible elsewhere in the same transaction? PS: I am using Python. And GAE tells me I can't use nested transactions :(
2
0
0
0
false
3,350,082
1
215
1
1
0
3,350,068
Looks like you are not doing a commit on the transaction before querying. The sequence should be: start a db transaction; update entityX by setting entityX.flag = True; save entityX; COMMIT TRANSACTION; then query for entities where flag == True. That is why your query does not return entityX: in a transaction, entities will not be persisted until the transaction is committed.
1
0
0
On the google app engine, why do updates not reflect in a transaction?
2
python,google-app-engine
0
2010-07-28T05:06:00.000
We need to bulk load many long strings (>4000 Bytes, but <10,000 Bytes) using cx_Oracle. The data type in the table is CLOB. We will need to load >100 million of these strings. Doing this one by one would suck. Doing it in a bulk fashion, ie using cursor.arrayvar() would be ideal. However, CLOB does not support arrays. BLOB, LOB, LONG_STRING LONG_RAW don't either. Any help would be greatly appreciated.
0
0
0
0
false
3,373,296
0
1,252
1
0
0
3,358,666
In the interest of getting shit done that is good enough, we went with the CLOB abuse I mentioned in my comment. It took less than 30 minutes to code up, runs fast and works.
1
0
0
Passing an array of long strings ( >4000 bytes) to an Oracle (11gR2) stored procedure using cx_Oracle
2
python,oracle,cx-oracle
0
2010-07-29T00:41:00.000
I am connecting to an MS SQL Server db from Python in Linux. I am connecting via pyodbc using the FreeTDS driver. When I return a money field from MSSQL it comes through as a float, rather than a Python Decimal. The problem is with FreeTDS. If I run the exact same Python code from Windows (where I do not need to use FreeTDS), pyodbc returns a Python Decimal. How can I get back a Python Decimal when I'm running the code in Linux?
0
1
0.099668
0
false
3,372,035
0
1,284
1
0
0
3,371,795
You could always just convert it to Decimal when it comes back...
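Along those lines, a tiny helper (name invented) that goes through str() so the Decimal is built from the value as the driver formatted it rather than from binary-float noise:

```python
from decimal import Decimal

def money(value):
    # str() first, then Decimal: Decimal(0.1) would drag in float artifacts
    return Decimal(str(value))

# usage after fetching a row:  amount = money(row[0])
```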
1
0
0
FreeTDS translating MS SQL money type to python float, not Decimal
2
python,sql-server,pyodbc,freetds
0
2010-07-30T13:19:00.000
I set up mysql5, mysql5-server and py26-mysql using MacPorts. I then started the MySQL server and was able to start the prompt with mysql5. In my settings.py I changed database_engine to "mysql" and put "dev.db" in database_name. I left the username and password blank as the database doesn't exist yet. When I ran python manage.py syncdb, Django raised an error: 'django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dynamic module does not define init function (init_mysql)'. How do I fix this? Do I have to create the database first? Is it something else?
0
1
0.099668
0
false
3,377,350
1
5,039
1
0
0
3,376,673
syncdb will not create a database for you -- it only creates tables that don't already exist in your schema. You need to: Create a user to 'own' the database (root is a bad choice). Create the database with that user. Update the Django database settings with the correct database name, user, and password.
1
0
0
Django MySql setup
2
python,mysql,django
0
2010-07-31T03:34:00.000
I had my SQLAlchemy-related code in the main() method of my script. But then when I created a function, I wasn't able to reference my 'products' mapper because it was in the main() method. Should I be putting the SQLAlchemy-related code (session, mapper, and classes) in global scope so all functions in my single-file script can refer to it? I was told a script is usually laid out as: globals, functions, classes, main. But if I put the SQLAlchemy code at the top to make it global, I have to move my classes to the top also.
1
2
1.2
0
true
3,382,810
0
87
1
0
0
3,382,739
The typical approach is to define all mappings in a separate model module, with one file per class/table. Then you just import the classes you need wherever you need them.
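A sketch of that layout (module, class and column names are placeholders), using SQLAlchemy's declarative style so the table, mapper and class live together in a models module:

```python
# models.py
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = 'products'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

# main script (or any other function/module):
#   from models import Product, Base
#   ... create the engine and session here, import Product wherever it's needed
```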
1
0
0
Where to put my sqlalchemy code in my script?
1
python,sqlalchemy
0
2010-08-01T16:20:00.000
Let's say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT. Which of the following approaches is more efficient? Do a SELECT field1 FROM table (field1 is unique) and store that in a list; then, for each new row, use list.count() to determine whether to INSERT or UPDATE. Or, for each row, run two queries: first SELECT count(*) FROM table WHERE field1="foo", then either the INSERT or the UPDATE. In other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and let sqlite do the searching?
2
0
0
0
false
3,536,835
0
2,505
2
0
0
3,404,556
You appear to be comparing apples with oranges. A python list is only useful if your data fit into the address-space of the process. Once the data get big, this won't work any more. Moreover, a python list is not indexed - for that you should use a dictionary. Finally, a python list is non-persistent - it is forgotten when the process quits. How can you possibly compare these?
1
0
0
Python performance: search large list vs sqlite
4
python,performance,sqlite
0
2010-08-04T10:18:00.000
Let's say I have a database table which consists of three columns: id, field1 and field2. This table may have anywhere between 100 and 100,000 rows in it. I have a python script that should insert 10-1,000 new rows into this table. However, if the new field1 already exists in the table, it should do an UPDATE, not an INSERT. Which of the following approaches is more efficient? Do a SELECT field1 FROM table (field1 is unique) and store that in a list; then, for each new row, use list.count() to determine whether to INSERT or UPDATE. Or, for each row, run two queries: first SELECT count(*) FROM table WHERE field1="foo", then either the INSERT or the UPDATE. In other words, is it more efficient to perform n+1 queries and search a list, or 2n queries and let sqlite do the searching?
2
0
0
0
false
3,404,589
0
2,505
2
0
0
3,404,556
I imagine using a python dictionary would allow for much faster searching than using a python list. (Just set the values to 0, you won't need them, and hopefully a '0' stores compactly.) As for the larger question, I'm curious too. :)
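A hedged sketch of that lookup-side approach (the table is named items here, since "table" itself is a reserved word; the incoming rows are placeholders): pull field1 once into a set for constant-time membership tests, then branch per row. SQLite's INSERT OR REPLACE is another option, though it rewrites the whole conflicting row rather than updating it in place.

```python
import sqlite3

conn = sqlite3.connect('data.db')
cur = conn.cursor()
new_rows = [('foo', 1), ('bar', 2)]          # placeholder incoming data

existing = set(r[0] for r in cur.execute("SELECT field1 FROM items"))
for field1, field2 in new_rows:
    if field1 in existing:
        cur.execute("UPDATE items SET field2 = ? WHERE field1 = ?", (field2, field1))
    else:
        cur.execute("INSERT INTO items (field1, field2) VALUES (?, ?)", (field1, field2))
        existing.add(field1)
conn.commit()
```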
1
0
0
Python performance: search large list vs sqlite
4
python,performance,sqlite
0
2010-08-04T10:18:00.000
I am reading a CSV file into a list of lists in Python. It is around 100 MB right now; in a couple of years that file will grow to 2-5 GB. I am doing lots of log calculations on the data. The 100 MB file takes the script around 1 minute to process. After the script does a lot of fiddling with the data, it creates URLs that point to Google Charts and then downloads the charts locally. Can I continue to use Python on a 2 GB file, or should I move the data into a database?
4
4
1.2
0
true
3,419,835
0
1,357
5
0
0
3,419,624
I don't know exactly what you are doing. But a database will just change how the data is stored. and in fact it might take longer since most reasonable databases may have constraints put on columns and additional processing for the checks. In many cases having the whole file local, going through and doing calculations is going to be more efficient than querying and writing it back to the database (subject to disk speeds, network and database contention, etc...). But in some cases the database may speed things up, especially because if you do indexing it is easy to get subsets of the data. Anyway you mentioned logs, so before you go database crazy I have the following ideas for you to check out. Anyway I'm not sure if you have to keep going through every log since the beginning of time to download charts and you expect it to grow to 2 GB or if eventually you are expecting 2 GB of traffic per day/week. ARCHIVING -- you can archive old logs, say every few months. Copy the production logs to an archive location and clear the live logs out. This will keep the file size reasonable. If you are wasting time accessing the file to find the small piece you need then this will solve your issue. You might want to consider converting to Java or C. Especially on loops and calculations you might see a factor of 30 or more speedup. This will probably reduce the time immediately. But over time as data creeps up, some day this will slow down as well. if you have no bound on the amount of data, eventually even hand optimized Assembly by the world's greatest programmer will be too slow. But it might give you 10x the time... You also may want to think about figuring out the bottleneck (is it disk access, is it cpu time) and based on that figuring out a scheme to do this task in parallel. If it is processing, look into multi-threading (and eventually multiple computers), if it is disk access consider splitting the file among multiple machines...It really depends on your situation. But I suspect archiving might eliminate the need here. As was suggested, if you are doing the same calculations over and over again, then just store them. Whether you use a database or a file this will give you a huge speedup. If you are downloading stuff and that is a bottleneck, look into conditional gets using the if modified request. Then only download changed items. If you are just processing new charts then ignore this suggestion. Oh and if you are sequentially reading a giant log file, looking for a specific place in the log line by line, just make another file storing the last file location you worked with and then do a seek each run. Before an entire database, you may want to think of SQLite. Finally a "couple of years" seems like a long time in programmer time. Even if it is just 2, a lot can change. Maybe your department/division will be laid off. Maybe you will have moved on and your boss. Maybe the system will be replaced by something else. Maybe there will no longer be a need for what you are doing. If it was 6 months I'd say fix it. but for a couple of years, in most cases, I'd say just use the solution you have now and once it gets too slow then look to do something else. You could make a comment in the code with your thoughts on the issue and even an e-mail to your boss so he knows it as well. But as long as it works and will continue doing so for a reasonable amount of time, I would consider it "done" for now. No matter what solution you pick, if data grows unbounded you will need to reconsider it. 
Adding more machines, more disk space, new algorithms/systems/developments. Solving it for a "couple of years" is probably pretty good.
1
0
1
python or database?
5
python,sql
0
2010-08-05T22:13:00.000
I am reading a CSV file into a list of lists in Python. It is around 100 MB right now; in a couple of years that file will grow to 2-5 GB. I am doing lots of log calculations on the data. The 100 MB file takes the script around 1 minute to process. After the script does a lot of fiddling with the data, it creates URLs that point to Google Charts and then downloads the charts locally. Can I continue to use Python on a 2 GB file, or should I move the data into a database?
4
2
0.07983
0
false
3,419,871
0
1,357
5
0
0
3,419,624
I always reach for a database for larger datasets. A database gives me some stuff for "free"; that is, I don't have to code it: searching, sorting, indexing, language-independent connections. Something like SQLite might be the answer for you. Also, you should investigate the "nosql" databases; it sounds like your problem might fit well into one of them.
1
0
1
python or database?
5
python,sql
0
2010-08-05T22:13:00.000
I am reading a CSV file into a list of lists in Python. It is around 100 MB right now; in a couple of years that file will grow to 2-5 GB. I am doing lots of log calculations on the data. The 100 MB file takes the script around 1 minute to process. After the script does a lot of fiddling with the data, it creates URLs that point to Google Charts and then downloads the charts locally. Can I continue to use Python on a 2 GB file, or should I move the data into a database?
4
4
0.158649
0
false
3,419,726
0
1,357
5
0
0
3,419,624
If you need to go through all lines each time you perform the "fiddling", it wouldn't really make much difference, assuming the actual "fiddling" is what's eating your cycles. Perhaps you could store the results of your calculations somehow; then a database would probably be nice. Also, databases have methods for ensuring data integrity and things like that, so a database is often a great place for storing large sets of data (duh! ;)).
1
0
1
python or database?
5
python,sql
0
2010-08-05T22:13:00.000
I am reading a CSV file into a list of lists in Python. It is around 100 MB right now; in a couple of years that file will grow to 2-5 GB. I am doing lots of log calculations on the data. The 100 MB file takes the script around 1 minute to process. After the script does a lot of fiddling with the data, it creates URLs that point to Google Charts and then downloads the charts locally. Can I continue to use Python on a 2 GB file, or should I move the data into a database?
4
4
0.158649
0
false
3,419,718
0
1,357
5
0
0
3,419,624
I'd only put it into a relational database if: The data is actually relational and expressing it that way helps shrink the size of the data set by normalizing it. You can take advantage of triggers and stored procedures to offload some of the calculations that your Python code is performing now. You can take advantage of queries to only perform calculations on data that's changed, cutting down on the amount of work done by Python. If neither of those things is true, I don't see much difference between a database and a file. Both ultimately have to be stored on the file system. If Python has to process all of it, and getting it into memory means loading an entire data set, then there's no difference between a database and a flat file. 2GB of data in memory could mean page swapping and thrashing by your application. I would be careful and get some data before I blamed the problem on the file. Just because you access the data from a database won't solve a paging problem. If your data's flat, I see less advantage in a database, unless "flat" == "highly denormalized". I'd recommend some profiling to see what's consuming CPU and memory before I made a change. You're guessing about the root cause right now. Better to get some data so you know where the time is being spent.
1
0
1
python or database?
5
python,sql
0
2010-08-05T22:13:00.000
I am reading a CSV file into a list of lists in Python. It is around 100 MB right now; in a couple of years that file will grow to 2-5 GB. I am doing lots of log calculations on the data. The 100 MB file takes the script around 1 minute to process. After the script does a lot of fiddling with the data, it creates URLs that point to Google Charts and then downloads the charts locally. Can I continue to use Python on a 2 GB file, or should I move the data into a database?
4
1
0.039979
0
false
3,419,687
0
1,357
5
0
0
3,419,624
At 2 gigs, you may start running up against speed issues. I work with model simulations that read hundreds of CSV files, and it takes about an hour to go through 3 iterations, or about 20 minutes per loop. This is a matter of personal preference, but I would go with something like PostgreSQL because it integrates the speed of Python with the capacity of a SQL-driven relational database. I encountered the same issue a couple of years ago when my Access db was corrupting itself and crashing on a daily basis. It was either MySQL or Postgres, and I chose Postgres because of its Python friendliness. Not to say MySQL would not work with Python, because it does, which is why I say it's personal preference. Hope that helps with your decision-making!
1
0
1
python or database?
5
python,sql
0
2010-08-05T22:13:00.000
I have an sqlite database whose data I need to transfer over the network, the server needs to modify the data, and then I need to get the db back and either update my local version or overwrite it with the new db. How should I do this? My coworker at first wanted to scrap the db and just use an .ini file, but this is going to be data that we have to parse pretty frequently (it's a user defined schedule that can change at the user's will, as well as the server's). I said we should just transfer the entire .db as a binary file and let them do with it what they will and then take it back. Or is there a way in sqlite to dump the db to a .sql file like you can do in MySQL so we can transfer it as text? Any other solutions? This is in python if it makes a difference update: This is on an embedded platform running linux (I'm not sure what version/kernel or what OS commands we have except the basics that are obvious)
0
3
1.2
0
true
3,451,733
0
471
1
0
0
3,451,708
Use the copy command in your OS. No reason to overthink this.
1
0
0
Sending sqlite db over network
1
python,sqlite,embedded,binary-data
0
2010-08-10T17:29:00.000
I'm trying to copy an Excel sheet with Python, but I keep getting an "access denied" error message. The file is closed and is not shared. It has macros, though. Is there any way I can copy the file forcefully with Python? Thanks.
0
0
1.2
0
true
3,466,751
0
94
1
0
0
3,465,231
If you do not have sufficient file permissions you will not be able to access the file. In that case you will have to execute your Python program as an user with sufficient permissions. If on the other hand the file is locked using other means specific to Excel then I am not sure what exactly is the solution. You might have to work around the protection using other means which will require a fair amount of understanding of how Excel sheets are "locked". I don't know of any Python libraries that will do this for you.
1
0
0
Copying a file with access locks, forcefully with python
1
python,excel-2003
0
2010-08-12T06:32:00.000
Is there a Python ORM (object relational mapper) that has a tool for automatically creating Python classes (as code, so I can expand them) from a given database schema? I'm frequently faced with small tasks involving different databases (like importing/exporting from various sources etc.) and I thought Python together with the above-mentioned tool would be perfect for that. It should work like Visual Studio's ADO.NET/LINQ to SQL designer, where I can just drop DB tables and VS creates classes for me... Thanks in advance.
2
3
0.197375
0
false
3,481,115
0
1,411
1
0
0
3,478,780
You do not need to produce a source code representation of your classes to be able to expand them. The only trick is that you need the ORM to generate the classes BEFORE importing the module that defines the derived classes. Even better, don't use derivation, but use __getattr__ and __setattr__ to implement transparent delegation to the ORM classes.
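A tiny sketch of the delegation idea (class and attribute names are invented): wrap whatever object the ORM generated and forward unknown attribute access to it, keeping your own extensions on the wrapper.

```python
class Wrapped(object):
    def __init__(self, orm_obj):
        # bypass __setattr__ so we don't recurse while storing the wrapped object
        self.__dict__['_orm_obj'] = orm_obj

    def __getattr__(self, name):
        # called only when normal lookup fails -> delegate to the ORM object
        return getattr(self._orm_obj, name)

    def __setattr__(self, name, value):
        setattr(self._orm_obj, name, value)

    # your own extensions live here
    def display_name(self):
        return self.name.title()
```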
1
0
0
Python ORM that automatically creates classes from DB schema
3
python,orm,code-generation
0
2010-08-13T16:18:00.000
I have a couple of SQLite dbs (I'd say about 15 GB), with about 1m rows in total - so not super big. I was looking at MongoDB, and it looks pretty easy to work with, especially if I want to try to do some basic natural language processing on the documents which make up the databases. I've never worked with Mongo in the past, so I would have to learn from scratch (I will be working in Python). After googling around a bit, I came across a number of somewhat horrific stories about MongoDB regarding reliability. Is this still a major problem? In a crunch, I will of course retain the SQLite backups, but I'd rather not have to reconstruct my Mongo databases constantly. I'm just wondering what sort of data corruption issues people have actually faced recently with Mongo. Is this a big concern? Thanks!
12
10
1
0
false
3,491,117
0
3,302
3
0
0
3,487,456
As others have said, MongoDB does not have single-server durability right now. Fortunately, it's dead easy to set up multi-node replication. You can even set up a second machine in another data center and have data automatically replicated to it live! If a write must succeed, you can cause Mongo to not return from an insert/update until that data has been replicated to n slaves. This ensures that you have at least n copies of the data. Replica sets allow you to add and remove nodes from your cluster on the fly without any significant work; just add a new node and it'll automatically sync a copy of the data. Remove a node and the cluster rebalances itself. It is very much designed to be used across multiple machines, with multiple nodes acting in parallel; this is its preferred default setup, compared to something like MySQL, which expects one giant machine to do its work on, which you can then pair slaves against when you need to scale out. It's a different approach to data storage and scaling, but a very comfortable one if you take the time to understand its difference in assumptions, and how to build an architecture that capitalizes on its strengths.
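As a hedged illustration with pymongo — host names, database and collection are made up — requiring acknowledgement from two replica-set members before an insert returns:

```python
# Require a write to be acknowledged by 2 replica-set members (within 5s)
# before insert_one() returns.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient('mongodb://node1,node2,node3/?replicaSet=rs0')
coll = client.mydb.get_collection(
    'events', write_concern=WriteConcern(w=2, wtimeout=5000))

# blocks until at least 2 members of the replica set have the document
coll.insert_one({'type': 'signup', 'user': 'alice'})
```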
1
0
0
Mongodb - are reliability issues significant still?
5
python,sqlite,mongodb
0
2010-08-15T13:00:00.000
I have a couple of sqlite dbs (I'd say about 15GBs), with about 1m rows in total - so not super big. I was looking at mongodb, and it looks pretty easy to work with, especially if I want to try and do some basic natural language processing on the documents which make up the databases. I've never worked with Mongo in the past, so would have to learn from scratch (will be working in python). After googling around a bit, I came across a number of somewhat horrific stories about Mongodb re. reliability. Is this still a major problem? In a crunch, I will of course retain the sqlite backups, but I'd rather not have to reconstruct my mongo databases constantly. Just wondering what sort of data corruption issues people have actually faced recently with Mongo? Is this a big concern? Thanks!
12
3
0.119427
0
false
3,488,244
0
3,302
3
0
0
3,487,456
Mongo does not have ACID properties, specifically durability. So you can face issues if the process does not shut down cleanly or the machine loses power. You are supposed to implement backups and redundancy to handle that.
1
0
0
Mongodb - are reliability issues significant still?
5
python,sqlite,mongodb
0
2010-08-15T13:00:00.000
I have a couple of sqlite dbs (I'd say about 15GBs), with about 1m rows in total - so not super big. I was looking at mongodb, and it looks pretty easy to work with, especially if I want to try and do some basic natural language processing on the documents which make up the databases. I've never worked with Mongo in the past, so would have to learn from scratch (will be working in python). After googling around a bit, I came across a number of somewhat horrific stories about Mongodb re. reliability. Is this still a major problem? In a crunch, I will of course retain the sqlite backups, but I'd rather not have to reconstruct my mongo databases constantly. Just wondering what sort of data corruption issues people have actually faced recently with Mongo? Is this a big concern? Thanks!
12
2
0.07983
0
false
3,490,547
0
3,302
3
0
0
3,487,456
I don't see the problem if you have the same data also in the sqlite backups. You can always refill your MongoDb databases. Refilling will only take a few minutes.
1
0
0
Mongodb - are reliability issues significant still?
5
python,sqlite,mongodb
0
2010-08-15T13:00:00.000
I want to encrypt a string using the RSA algorithm and then store that string in a Postgres database using SQLAlchemy in Python, then retrieve the encrypted string and decrypt it using the same key. My problem is that the value that gets stored in the database is not the same as the actual encrypted string. The datatype of the column storing the encrypted value is bytea. I am using the pycrypto library. Do I need to convert the data to a particular format before inserting it into the database table? Any suggestions please. Thanks, Tara Singh
1
1
0.099668
0
false
3,507,558
0
3,897
1
0
0
3,507,543
By "same key" you mean "the other key", right? RSA gives you a keypair; if you encrypt with one you decrypt with the other ... Other than that, it sounds like an encoding problem. Try storing the data as binary, or encode the string with your database's collation. Basically, encryption gives you bytes but you store them as a string (encoded bytes).
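A minimal SQLAlchemy sketch of the storage side — the table, column and connection string are invented for illustration; the point is to declare the column as LargeBinary (bytea on PostgreSQL) and hand it raw bytes from the encryption step:

```python
# Store ciphertext as raw bytes in a bytea column instead of round-tripping
# it through a text/str column, which mangles the binary data.
from sqlalchemy import Column, Integer, LargeBinary, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Secret(Base):
    __tablename__ = 'secrets'
    id = Column(Integer, primary_key=True)
    payload = Column(LargeBinary)   # rendered as bytea on PostgreSQL

engine = create_engine('postgresql://user:pass@localhost/mydb')  # assumed DSN
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# whatever your RSA step produces -- keep it as bytes, do not decode it
ciphertext = b'\x00\x1f...raw bytes from the encryption step...'

session = Session()
session.add(Secret(payload=ciphertext))
session.commit()
```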
1
0
0
Inserting Encrypted Data in Postgres via SQLALchemy
2
python,postgresql,sqlalchemy,rsa,pycrypto
0
2010-08-17T22:39:00.000
I'd like to query the database and get read-only objects with session object. I need to save the objects in my server and use them through the user session. If I use a object outside of the function that calls the database, I get this error: "DetachedInstanceError: Parent instance is not bound to a Session; lazy load operation of attribute 'items' cannot proceed" I don't need to make any change in those objects, so I don't need to load them again. Is there any way that I can get that? Thanks in advance!
0
0
0
0
false
3,513,490
1
244
1
0
0
3,513,433
You must load the parent object again.
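One common way to avoid the lazy-load error without re-querying later is to load the relationship eagerly while the session is still open. A rough sketch, assuming a Parent model with an items relationship and an existing engine (both stand-ins for your own code):

```python
# Eagerly load 'items' before the session goes away, so the detached objects
# can be read afterwards without triggering a lazy load.
from sqlalchemy.orm import joinedload, sessionmaker

Session = sessionmaker(bind=engine)          # engine assumed to exist already
session = Session()
parents = (session.query(Parent)             # Parent assumed to be your mapped class
                  .options(joinedload(Parent.items))
                  .all())
session.close()

# safe after close: 'items' was populated while the session was still open
for p in parents:
    print(p.name, len(p.items))
```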
1
0
0
How to get read-only objects from database?
1
python,sqlalchemy
0
2010-08-18T14:57:00.000
I'm in the settings.py module, and I'm supposed to add the directory to the sqlite database. How do I know where the database is and what the full directory is? I'm using Windows 7.
5
1
0.099668
0
false
3,524,305
1
3,575
1
0
0
3,524,236
If you don't provide a full path, it will use the current directory of settings.py. If you wish to specify a static path you can write it like: c:/projects/project1/my_proj.db. In case you want to make it dynamic, you can use the os.path module: os.path.dirname(__file__) will give you the directory of settings.py, and accordingly you can build the path for your database like os.path.join(os.path.dirname(__file__), 'my_proj.db')
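For illustration, a settings.py fragment along those lines (Django 1.2-style DATABASES dict; the project layout and file name are assumptions):

```python
# settings.py -- anchor the SQLite file next to settings.py itself, so the
# path works no matter where the server process is started from.
import os

PROJECT_DIR = os.path.dirname(__file__)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(PROJECT_DIR, 'my_proj.db'),
    }
}
```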
1
0
0
Trouble setting up sqlite3 with django! :/
2
python,database,django,sqlite
0
2010-08-19T17:02:00.000
I'm working with two databases, a local version and the version on the server. The server is the most up to date version and instead of recopying all values on all tables from the server to my local version, I would like to enter each table and only insert/update the values that have changed, from server, and copy those values to my local version. Is there some simple method to handling such a case? Some sort of batch insert/update? Googl'ing up the answer isn't working and I've tried my hand at coding one but am starting to get tied up in error handling.. I'm using Python and MySQLDB... Thanks for any insight Steve
1
0
0
0
false
3,527,732
0
1,035
1
0
0
3,526,629
If all of your tables' records had timestamps, you could identify "the values that have changed in the server" -- otherwise, it's not clear how you plan to do that part (which has nothing to do with insert or update, it's a question of "selecting things right"). Once you have all the important values, somecursor.executemany will let you apply them all as a batch. Depending on your indexing it may be faster to put them into a non-indexed auxiliary temporary table, then insert/update from all of that table into the real one (before dropping the aux/temp one), the latter of course being a single somecursor.execute. You can reduce wall-clock time for the whole job by using one (or a few) threads to do the selects and put the results onto a Queue.Queue, and a few worker threads to apply results plucked from the queue into the internal/local server. (Best balance of reading vs writing threads is best obtained by trying a few and measuring -- writing per se is slower than reading, but your bandwidth to your local server may be higher than to the other one, so it's difficult to predict). However, all of this is moot unless you do have a strategy to identify "the values that have changed in the server", so it's not necessarily very useful to enter into more discussion about details "downstream" from that identification.
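A rough sketch of the batch-apply step with MySQLdb — the table, columns and credentials are invented — using executemany with INSERT ... ON DUPLICATE KEY UPDATE so each changed row is inserted or updated in one pass:

```python
# Apply a batch of changed rows to the local copy: new rows are inserted,
# existing rows (matched on the primary key) are updated.
import MySQLdb

changed_rows = [(1, 'alice', 'alice@example.com'),
                (2, 'bob',   'bob@example.com')]

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='local_copy')
cur = conn.cursor()
cur.executemany(
    """INSERT INTO users (id, name, email)
       VALUES (%s, %s, %s)
       ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)""",
    changed_rows)
conn.commit()
conn.close()
```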
1
0
0
Python + MySQLDB Batch Insert/Update command for two of the same databases
1
python,mysql,batch-file
0
2010-08-19T21:59:00.000
Sometimes, when fetching data from the database either through the python shell or through a python script, the python process dies, and one single word is printed to the terminal: Killed That's literally all it says. It only happens with certain scripts, but it always happens for those scripts. It consistently happens with this one single query that takes a while to run, and also with a south migration that adds a bunch of rows one-by-one to the database. My initial hunch was that a single transaction was taking too long, so I turned on autocommit for Postgres. Didn't solve the problem. I checked the Postgres logs, and this is the only thing in there: 2010-08-19 22:06:34 UTC LOG: could not receive data from client: Connection reset by peer 2010-08-19 22:06:34 UTC LOG: unexpected EOF on client connection I've tried googling, but as you might expect, a one-word error message is tough to google for. I'm using Django 1.2 with Postgres 8.4 on a single Ubuntu 10.4 rackspace cloud VPS, stock config for everything.
7
6
1.2
0
true
3,529,637
1
1,944
1
1
0
3,526,748
There's only one thing I can think of that will automatically kill a process on Linux - the OOM killer. What's in the system logs?
1
0
0
Why do some Django ORM queries end abruptly with the message "Killed"?
2
python,django,postgresql
0
2010-08-19T22:19:00.000
I've been diving into MongoDB with kind help of MongoKit and MongoEngine, but then I started thinking whether the data mappers are necessary here. Both mappers I mentioned enable one to do simple things without any effort. But is any effort required to do simple CRUD? It appears to me that in case of NoSQL the mappers just substitute one api with another (but of course there is data validation, more strict schema, automatic referencing/dereferencing) Do you use Data Mappers in your applications? How big are they (apps)? Why yes, why no? Thanks
2
1
1.2
0
true
3,553,262
0
366
1
1
0
3,533,064
We are running a production site using MongoDB for the backend (no direct queries to Mongo, we have a search layer in between). We wrote our own business / object layer; I suppose it just seemed natural enough for the programmers to write in the custom logic. We did separate the database and business layers, but they just didn't see a need to go for a separate library. As the software keeps evolving I think it makes sense. We have 15 million records.
1
0
0
Do you use data mappers with MongoDB?
1
python,orm,mongodb,mongoengine,mongokit
0
2010-08-20T16:54:00.000
I have an SQL database and am wondering what command you use to just get a list of the table names within that database.
35
10
1.2
0
true
3,556,313
0
62,222
1
0
0
3,556,305
SHOW TABLES
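Put together with MySQLdb (the connection parameters are placeholders):

```python
# List the table names in a MySQL database via the SHOW TABLES statement.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
cur = conn.cursor()
cur.execute("SHOW TABLES")
tables = [row[0] for row in cur.fetchall()]   # each row is a 1-tuple
print(tables)
conn.close()
```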
1
0
0
How to retrieve table names in a mysql database with Python and MySQLdb?
4
python,mysql,mysql-python
0
2010-08-24T12:18:00.000
While I see a bunch of links/binaries for the MySQL connector for Python 2.6, I don't see one for 2.7. To use Django, should I just revert to 2.6 or is there a way out? I'm using Windows 7 64-bit, Django 1.1, MySQL 5.1.50. Any pointers would be great.
1
1
0.066568
0
false
58,359,370
1
2,224
1
0
0
3,562,406
For Python 2.7 on specific programs (macOS, using Homebrew):
sudo chown -R $USER /Library/Python/2.7
brew install mysql@5.7
brew install mysql-connector-c
brew link --overwrite mysql@5.7
echo 'export PATH="/usr/local/opt/mysql@5.7/bin:$PATH"' >> ~/.bash_profile
sed -i -e 's/libs="$libs -l "/libs="$libs -lmysqlclient -lssl -lcrypto"/g' /usr/local/bin/mysql_config
pip install MySQL-python
This solved all the issues I was having running a program on Python 2.7 against an older version of MySQL.
1
0
0
Is there no mysql connector for python 2.7 on windows
3
mysql,python-2.7
0
2010-08-25T02:15:00.000
I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers. Many of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary. It would need to be: Dynamic (so don't suggest Java or C!) Easily available on each platform (Windows, Solaris, Linux, perhaps AIX) Require very little in the way of setup (root access not always available!) Be easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers. Be easy to understand other people's code Friendly with SQL Server and Oracle, without messing around. A few nice XML features wouldn't go amiss. It would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here. I have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. No one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction? What are people's opinions on this? thanks! Chris
6
1
0.033321
0
false
3,564,251
0
3,213
5
1
0
3,564,177
Although I prefer working on the JVM, one thing that turns me off is having to spin up a JVM to run a script. If you can work in a REPL this is not such a big deal, but it really slows you down when doing edit-run-debug scripting. Now of course Oracle has a lot of Java stuff where interaction might be needed, but only you can estimate how important that is. For plain Oracle DB work I have seen very little Java and lots of PL/SQL and SQL. If your DBAs now do their work in Bash, then they will very likely pick up Perl in a short time, as there is a nice, logical progression path. Since Ruby was designed to be an improved version of Perl, it might fit in that category too. Actually Python as well. Scala is statically typed like Java, albeit with much better type inference. My recommendation would be to go the Perl route. The CPAN is its ace in the hole, and you do not have to deal with the OO stuff, which might turn off some DBAs (although it is there for the power users).
1
0
0
Which cross platform scripting language should we adopt for a group of DBAs?
6
python,scala,groovy,shell,jython
0
2010-08-25T08:47:00.000
I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers. Many of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary. It would need to be: Dynamic (so don't suggest Java or C!) Easily available on each platform (Windows, Solaris, Linux, perhaps AIX) Require very little in the way of setup (root access not always available!) Be easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers. Be easy to understand other people's code Friendly with SQL Server and Oracle, without messing around. A few nice XML features wouldn't go amiss. It would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here. I have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. No one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction? What are people's opinions on this? thanks! Chris
6
0
0
0
false
3,564,285
0
3,213
5
1
0
3,564,177
I've been in a similar situation, though on a small scale. The previous situation was that any automation on the SQL Server DBs was done with VBScript, which I did start out using. As I wanted something cross-platform (and less annoying than VBScript) I went with Python. What I learnt is: Obviously you want a language that comes with libraries to access your databases comfortably. I wasn't too concerned with abstracting the differences away (ie, I still wrote SQL queries in the relevant dialect, with parameters). However, I'd be a bit less happy with PHP, for example, which has only very vendor-specific libraries and functions for certain databases. I see it's not on your list. THE major obstacle was authentication. If your SQL Server uses Windows domain authentication, you'll have to work to get in. Another system also had specific needs as it required RSA tokens to be supported. For the second point, Python is quite versatile enough to work around the difficulties, but it was getting into "badly supported" territory, especially on Windows. It was easy to work around the first problem from a Windows host, and for a Unix host it is possible though not easy. If you're using SQL Server authentication, it becomes a lot easier. From your other choices, I'd expect various ways of authenticating and DB drivers to exist for Perl, which philosophically would be easier for DBAs used to shell scripting. Ruby - no experience, but it tends to have spotty support for some of the odder authentication methods and connectors. Scala I'd expect to be a bit too much of a "programmer's programming language" -- OOO and FP? It's a very interesting language, but maybe not the one I'd chose at first. As for the rest of the Java-based options, I don't have an opinion, but do check that all the connection types you want to make are solidly supported.
1
0
0
Which cross platform scripting language should we adopt for a group of DBAs?
6
python,scala,groovy,shell,jython
0
2010-08-25T08:47:00.000
I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers. Many of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary. It would need to be: Dynamic (so don't suggest Java or C!) Easily available on each platform (Windows, Solaris, Linux, perhaps AIX) Require very little in the way of setup (root access not always available!) Be easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers. Be easy to understand other people's code Friendly with SQL Server and Oracle, without messing around. A few nice XML features wouldn't go amiss. It would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here. I have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. No one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction? What are people's opinions on this? thanks! Chris
6
4
0.132549
0
false
3,565,446
0
3,213
5
1
0
3,564,177
The XML thing almost calls for Scala. Now, I love Scala, but I suggest Python here.
1
0
0
Which cross platform scripting language should we adopt for a group of DBAs?
6
python,scala,groovy,shell,jython
0
2010-08-25T08:47:00.000
I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers. Many of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary. It would need to be: Dynamic (so don't suggest Java or C!) Easily available on each platform (Windows, Solaris, Linux, perhaps AIX) Require very little in the way of setup (root access not always available!) Be easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers. Be easy to understand other people's code Friendly with SQL Server and Oracle, without messing around. A few nice XML features wouldn't go amiss. It would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here. I have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. No one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction? What are people's opinions on this? thanks! Chris
6
5
0.16514
0
false
3,568,609
0
3,213
5
1
0
3,564,177
I think your best three options are Groovy, Python, and Scala. All three let you write code at a high level (compared to C/Java). Python has its own perfectly adequate DB bindings, and Groovy and Scala can use ones made for Java. The advantages of Python are that it is widely used already, so there are tons of tools, libraries, expertise, etc. available around it. It has a particularly clean syntax, which makes working with it aesthetically pleasing. The disadvantages are that it is slow (which may not be an issue for you), untyped (so you have runtime errors instead of compile-time errors), and you can't really switch back and forth between Jython and Python, so you have to pick whether you want the large amount of Python stuff, or the huge amount of Java stuff, minus a lot of the nice Python stuff. The advantages of Groovy are that you know it already and it interoperates well with Java libraries. Its disadvantages are also slowness and lack of static typing. (So in contrast to Python, the choice is: do you value Python's clean syntax and wide adoption more, or do you value the vast set of Java libraries more in a language made to work well in that environment?) The advantages of Scala are that it is statically typed (i.e. if the code gets past the compiler, it has a greater chance of working), is fast (as fast as Java if you care to work hard enough), and interoperates well with Java libraries. The disadvantages are that it imposes a bit more work on you to make the static typing work (though far, far less than Java while simultaneously being more safe), and that the canonical style for Scala is a hybrid object/functional blend that feels more different than the other two (and thus requires more training to use at full effectiveness IMO). In contrast to Groovy, the question would be whether familiarity and ease of getting started is more important than speed and correctness. Personally, I now do almost all of my work in Scala because my work requires speed and because the compiler catches those sort of errors in coding that I commonly make (so it is the only language I've used where I am not surprised when large blocks of code run correctly once I get them to compile). But I've had good experiences with Python in other contexts--interfacing with large databases seems like a good use-case. (I'd rule out Perl as being harder to maintain with no significant benefits over e.g. Python, and I'd rule out Ruby as being not enough more powerful than Python to warrant the less-intuitive syntax and lower rate of adoption/tool availability.)
1
0
0
Which cross platform scripting language should we adopt for a group of DBAs?
6
python,scala,groovy,shell,jython
0
2010-08-25T08:47:00.000
I wanted to get the community's feedback on a language choice our team is looking to make in the near future. We are a software developer, and I work in a team of Oracle and SQL Server DBAs supporting a cross platform Java application which runs on Oracle Application Server. We have SQL Server and Oracle code bases, and support customers on Windows, Solaris and Linux servers. Many of the tasks we do on a frequent basis are insufficiently automated, and where they are, tend to be much more automated via shell scripts, with little equivalent functionality on Windows. Unfortunately, we now have this problem of redeveloping scripts and so on, on two platforms. So, I wish for us to choose a cross platform language to script in, instead of using Bash and awkwardly translating to Cygwin or Batch files where necessary. It would need to be: Dynamic (so don't suggest Java or C!) Easily available on each platform (Windows, Solaris, Linux, perhaps AIX) Require very little in the way of setup (root access not always available!) Be easy for shell scripters, i.e. DBAs, to adopt, who are not hardcore developers. Be easy to understand other people's code Friendly with SQL Server and Oracle, without messing around. A few nice XML features wouldn't go amiss. It would be preferable if it would run on the JVM, since this will almost always be installed on every server (certainly on all application servers) and we have many Java developers in our company, so sticking to the JVM makes sense. This isn't exclusive though, since I know Python is a very viable language here. I have created a list of options, but there may be more: Groovy, Scala, Jython, Python, Ruby, Perl. No one has much experience of any, except I have quite a lot of Java and Groovy experience myself. We are looking for something dynamic, easy to pick up, will work with both SQL server and Oracle effortlessly, has some XML simplifying features, and that won't be a turnoff for DBAs. Many of us are very Bash orientated - what could move us away from this addiction? What are people's opinions on this? thanks! Chris
6
6
1
0
false
3,564,413
0
3,213
5
1
0
3,564,177
You can opt for Python. It's dynamic (interpreted), is available on Windows/Linux/Solaris, and has easy-to-read syntax, so code maintenance is easy. There are modules/libraries for Oracle interaction and various other database servers as well. There is also library support for XML. All 7 points are covered.
1
0
0
Which cross platform scripting language should we adopt for a group of DBAs?
6
python,scala,groovy,shell,jython
0
2010-08-25T08:47:00.000
I have been developing under Python/Snowleopard happily for the past 6 months. I just upgraded Python to 2.6.5 and a whole bunch of libraries, including psycopg2 and Turbogears. I can start up tg-admin and run some queries with no problems. Similarly, I can run my web site from the command line with no problems. However, if I try to start my application under Aptana Studio, I get the following exception while trying to import psycopg2: ('dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID\n Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2/_psycopg.so\n Expected in: flat namespace\n in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2/_psycopg.so',) This occurs after running the following code: try: import psycopg2 as psycopg except ImportError as ex: print "import failed :-( xxxxxxxx = " print ex.args I have confirmed that the same version of python is being run as follows: import sys print "python version: ", sys.version_info Does anyone have any ideas? I've seen some references alluding to this being a 64-bit issue. - dave
0
0
1.2
0
true
3,571,749
0
296
1
0
0
3,571,495
Problem solved (to a point). I was running 64 bit python from Aptana Studio and 32 bit python on the command line. By forcing Aptana to use 32 bit python, the libraries work again and all is happy.
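A quick way to check which flavour of interpreter each environment is actually launching is to run a few lines in both the Aptana console and the terminal and compare the output:

```python
# Print the interpreter path and its bitness; differing results between the
# IDE console and the terminal point at the 32-bit vs 64-bit mismatch.
import platform
import struct
import sys

print(sys.executable)
print(platform.architecture())           # e.g. ('64bit', '')
print(struct.calcsize("P") * 8, "bit")   # pointer size in bits
```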
1
0
0
Psycopg2 under osx works on commandline but fails in Aptana studio
1
python,turbogears,psycopg
1
2010-08-26T01:41:00.000
I'm using a Linux machine to make a little Python program that needs to insert its results into a SQL Server 2000 DB. I'm new to Python so I'm struggling quite a bit to find the best solution for connecting to the DB using Python 3, since most of the libs I looked at only work in Python 2. As an added bonus question, the finished version of this will be compiled to a Windows program using py2exe. Is there anything I should be aware of, any changes to make? Thanks
1
0
0
0
false
4,062,244
0
1,786
2
0
0
3,571,819
If you want a portable MSSQL server library, you can try the module from www.pytds.com. It works with 2.5+ AND 3.1, and has good stored procedure support. Its API is more "functional", and has some good features you won't find anywhere else.
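A hedged sketch of what usage might look like — the connect() keyword arguments are an assumption to verify against the pytds documentation, but any DB-API 2.0 driver for SQL Server follows the same cursor/execute pattern:

```python
# Connect to SQL Server and list the user tables (sysobjects query is valid
# on SQL Server 2000). Server name and credentials are placeholders.
import pytds

conn = pytds.connect(dsn='myserver', database='mydb',
                     user='sa', password='secret')
cur = conn.cursor()
cur.execute("SELECT name FROM sysobjects WHERE xtype = 'U'")
for row in cur.fetchall():
    print(row[0])
conn.close()
```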
1
0
0
How to access a MS SQL Server using Python 3?
3
python,sql-server,python-3.x,py2exe
0
2010-08-26T03:18:00.000
I'm using a Linux machine to make a little Python program that needs to insert its results into a SQL Server 2000 DB. I'm new to Python so I'm struggling quite a bit to find the best solution for connecting to the DB using Python 3, since most of the libs I looked at only work in Python 2. As an added bonus question, the finished version of this will be compiled to a Windows program using py2exe. Is there anything I should be aware of, any changes to make? Thanks
1
0
0
0
false
3,573,005
0
1,786
2
0
0
3,571,819
I can't answer your question directly, but given that many popular Python packages and frameworks are not yet fully supported on Python 3, you might consider just using Python 2.x. Unless there are features you absolutely cannot live without in Python 3, of course. And it isn't clear from your post if you plan to deploy to Windows only, or Windows and Linux. If it's only Windows, then you should probably just develop on Windows to start with: the native MSSQL drivers are included in most recent versions so you don't have anything extra to install, and it gives you more options, such as adodbapi.
1
0
0
How to access a MS SQL Server using Python 3?
3
python,sql-server,python-3.x,py2exe
0
2010-08-26T03:18:00.000
Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models? What does the migration process look like with a non-relational db?
18
1
0.066568
0
false
3,605,615
1
5,660
2
0
0
3,604,565
What does the migration process look like with a non-relational db? Depends on if you need to update all the existing data or not. In many cases, you may not need to touch the old data, such as when adding a new optional field. If that field also has a default value, you may also not need to update the old documents, if your application can handle a missing field correctly. However, if you want to build an index on the new field to be able to search/filter/sort, you need to add the default value back into the old documents. Something like field renaming (trivial in a relational db, because you only need to update the catalog and not touch any data) is a major undertaking in MongoDB (you need to rewrite all documents). If you need to update the existing data, you usually have to write a migration function that iterates over all the documents and updates them one by one (although this process can be shared and run in parallel). For large data sets, this can take a lot of time (and space), and you may miss transactions (if you end up with a crashed migration that went half-way through).
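As an illustration of such a migration function with pymongo — the database, collection and field names are made up — backfilling a default into documents that predate the new field:

```python
# A hand-rolled "migration": give every old document a default value for a
# newly introduced field, so it can be indexed/filtered/sorted on.
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')
coll = client.myapp.articles

result = coll.update_many(
    {'status': {'$exists': False}},    # only documents missing the new field
    {'$set': {'status': 'draft'}})     # backfill the default
print('migrated %d documents' % result.modified_count)
```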
1
0
0
Does django with mongodb make migrations a thing of the past?
3
python,django,mongodb
0
2010-08-30T22:01:00.000
Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models? What does the migration process look like with a non-relational db?
18
2
0.132549
0
false
3,604,687
1
5,660
2
0
0
3,604,565
There is no silver bullet. Adding or removing fields is easier with non-relational db (just don't use unneeded fields or use new fields), renaming a field is easier with traditional db (you'll usually have to change a lot of data in case of field rename in schemaless db), data migration is on par - depending on task.
1
0
0
Does django with mongodb make migrations a thing of the past?
3
python,django,mongodb
0
2010-08-30T22:01:00.000