[Dataset header: each record below is one question/answer row; its pipe-separated fields are Question, Q_Score, Users Score, Score, Data Science and Machine Learning, is_accepted, A_Id, Web Development, ViewCount, Available Count, System Administration and DevOps, Networking and APIs, Q_Id, Answer, Database and SQL, GUI and Desktop Applications, Python Basics and Environment, Title, AnswerCount, Tags, Other, CreationDate.]
I'm using SqlAlchemy in my Pylons application to access data and SqlAlchemy-migrate to maintain the database schema.
It works fine for managing the schema itself. However, I also want to manage seed data in a migrate-like way. E.g. when ProductCategory table is created it would make sense to seed it with categories data.
Looks like SqlAlchemy-migrate does not support this directly. What would be a good approach to do this with Pylons+SqlAlchemy+SqlAlchemy-migrate? | 2 | 2 | 1.2 | 0 | true | 4,300,116 | 0 | 2,454 | 1 | 0 | 0 | 4,298,886 | Well, what format is your seed data starting out in? The migrate calls are just Python methods, so you're free to open a CSV, create SA object instances, loop, etc. I usually have my seed data as a series of SQL INSERT statements and just loop over them, executing migrate.execute(query) for each one.
So I'll first create the table, loop over and run the seed data, and then empty/drop the table in the downgrade method. | 1 | 0 | 0 | Managing seed data with SqlAlchemy and SqlAlchemy-migrate | 1 | python,sqlalchemy,pylons,sqlalchemy-migrate | 0 | 2010-11-28T20:24:00.000 |
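A minimal sketch of such a change script, assuming a hypothetical product_category table and a sqlalchemy-migrate version whose upgrade()/downgrade() hooks receive the engine as an argument:

```python
# 002_seed_product_categories.py -- seed-data change script (a sketch, not an official recipe)
from sqlalchemy import text

CATEGORIES = ['Books', 'Music', 'Games']   # hypothetical seed values

def upgrade(migrate_engine):
    # insert the rows right after the migration that created the table
    for name in CATEGORIES:
        migrate_engine.execute(
            text("INSERT INTO product_category (name) VALUES (:name)"), name=name)

def downgrade(migrate_engine):
    # undo the seed so a downgrade leaves the table empty again
    migrate_engine.execute(text("DELETE FROM product_category"))
```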
I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the benefit of the session and thread handling that SQLAlchemy has?
Is there a Redis SA dialect? I couldn't find it, which probably means that I'm missing some basic point. Is there a better architecture I should look at to use two different types of database? | 15 | 17 | 1 | 0 | false | 4,331,070 | 0 | 13,868 | 2 | 0 | 0 | 4,324,407 | While it is possible to set up an ORM that puts data in redis, it isn't a particularly good idea. ORMs are designed to expose standard SQL features. Many things that are standard in SQL such as querying on arbitrary columns are not available in redis unless you do a lot of extra work. At the same time redis has features such as set manipulation that do not exist in standard SQL so will not be used by the ORM.
Your best option is probably to write your code to interact directly with redis rather than trying to use an inappropriate abstraction - Generally you will find that the code to get data out of redis is quite a bit simpler than the SQL code that justifies using an ORM. | 1 | 0 | 0 | How to integrate Redis with SQLAlchemy | 2 | python,sqlalchemy,nosql,redis | 0 | 2010-12-01T12:36:00.000 |
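For example, talking to Redis directly with the redis-py client for the set-heavy tables takes only a few lines; the key names here are made up:

```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# hypothetical sets: which item ids carry which tag
r.sadd('tag:python', 42)
r.sadd('tag:redis', 42)
r.sadd('tag:redis', 7)

both = r.sinter('tag:python', 'tag:redis')   # server-side set intersection
```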
I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the benefit of the session and thread handling that SQLAlchemy has?
Is there a Redis SA dialect? I couldn't find it, which probably means that I'm missing some basic point. Is there a better architecture I should look at to use two different types of database? | 15 | 14 | 1 | 0 | false | 4,332,791 | 0 | 13,868 | 2 | 0 | 0 | 4,324,407 | Redis is very good at what it does, storing key values and making simple atomic operations, but if you want to use it as a relational database you're really gonna SUFFER!, as I had... and here is my story...
I've done something like that, making several objects to abstracting all the redis internals exposing primitives queries (I called filters in my code), get, set, updates, and a lot more methods that you can expect from a ORM and in fact if you are dealing only with localhost, you're not going to perceive any slowness in your application, you can use redis as a relational database but if in any time you try to move your database into another host, that will represent a lot of problems in terms of network transmission, I end up with a bunch of re-hacked classes using redis and his pipes, which it make my program like 900% faster, making it usable in the local network, anyway I'm starting to move my database library to postgres.
The lesson of this history is to never try to make a relational database with the key value model, works great at basic operations, but the price of not having the possibility to make relations in your server comes with a high cost.
Returning to your question, I don't know any project to make an adapter to sqlalchemy for redis, and I think that nobody are going to be really interested in something like that, because of the nature of each project. | 1 | 0 | 0 | How to integrate Redis with SQLAlchemy | 2 | python,sqlalchemy,nosql,redis | 0 | 2010-12-01T12:36:00.000 |
I have a database full of data, including a date and time string, e.g. Tue, 21 Sep 2010 14:16:17 +0000
What I would like to be able to do is extract various documents (records) from the database based on the time contained within the date string, Tue, 21 Sep 2010 14:16:17 +0000.
From the above date string, how would I use python and regex to extract documents that have the time 15:00:00? I'm using MongoDB by the way, in conjunction with Python. | 3 | 1 | 0.049958 | 0 | false | 4,325,260 | 0 | 563 | 1 | 0 | 0 | 4,325,194 | I agree with the other poster. Though this doesn't solve your immediate problem, if you have any control over the database, you should seriously consider creating a date/time column, with either a DATE or TIMESTAMP datatype. That would make your system much more robust, and completely avoid the problem of trying to parse dates from strings (an inherently fragile technique). | 1 | 0 | 1 | Extracting Date and Time info from a string. | 4 | python,regex,mongodb,datetime,database | 0 | 2010-12-01T14:10:00.000 |
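If you do end up converting the existing strings into real datetimes (so MongoDB can be queried on time ranges instead of regexes), a sketch using the standard library's RFC 2822 parser could look like this; the collection and field names are assumptions:

```python
from datetime import datetime
from email.utils import parsedate_tz, mktime_tz

raw = 'Tue, 21 Sep 2010 14:16:17 +0000'
stamp = datetime.utcfromtimestamp(mktime_tz(parsedate_tz(raw)))

# once the field holds datetimes, a "15:00 hour" lookup becomes a plain range query, e.g.
# db.events.find({'created': {'$gte': start_of_hour, '$lt': end_of_hour}})
```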
I am implementing a database model to store the 20+ fields of the iCal calendar format and am faced with tediously typing in all these into an SQLAlchemy model.py file. Is there a smarter approach? I am looking for a GUI or model designer that can create the model.py file for me. I would specify the column names and some attributes, e.g, type, length, etc.
At the minimum, I need this designer to output a model for one table. Additional requirements, in decreasing order of priority:
Create multiple tables
Support basic relationships between the multiple tables (1:1, 1:n)
Support constraints on the columns.
I am also open to other ways of achieving the goal, perhaps using a GUI to create the tables in the database and then reflecting them back into a model.
I appreciate your feedback in advance. | 5 | 0 | 0 | 0 | false | 4,330,995 | 0 | 2,698 | 1 | 0 | 0 | 4,330,339 | "I would specify the column names and some attributes, e.g, type, length, etc."
Isn't that the exact same thing as
"tediously typing in all these into an SQLAlchemy model.py file"?
If those two things aren't identical, please explain how they're different. | 1 | 0 | 0 | Is there any database model designer that can output SQLAlchemy models? | 3 | python,model,sqlalchemy,data-modeling | 0 | 2010-12-01T23:52:00.000 |
I just downloaded sqlite3.exe. It opens up as a command prompt. I created a table test & inserted a few entries in it. I used .backup test just in case. After I exit the program using .exit and reopened it I don't find the table listed under .tables nor can I run any query on it.
I need to quickly run an open source python program that makes use of this table & although I have worked with MySQL, I have no clue about sqlite. I need the minimal basics of sqlite. Can someone guide me through this or at least tell me how to permanently store my tables.
I have put this sqlite3.exe in Python folder assuming that python would then be able to read the sqlite files. Any ideas on this? | 0 | 0 | 0 | 0 | false | 4,348,768 | 0 | 2,598 | 1 | 0 | 0 | 4,348,658 | Just execute sqlite3 foo.db? This will permanently store everything you do afterwards in this file. (No need for .backup.) | 1 | 0 | 0 | How to create tables in sqlite 3? | 3 | python,sqlite | 0 | 2010-12-03T18:30:00.000 |
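For the Python side, a minimal sketch of creating and reusing a persistent SQLite file with the built-in sqlite3 module; the file and table names are just examples:

```python
import sqlite3

conn = sqlite3.connect('words.db')   # a real file name, so the data survives between runs
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS test (id INTEGER PRIMARY KEY, word TEXT)")
cur.execute("INSERT INTO test (word) VALUES (?)", ('example',))
conn.commit()                        # write the changes to disk

cur.execute("SELECT word FROM test")
words = [row[0] for row in cur.fetchall()]
conn.close()
```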
I have some (Excel 2000) workbooks. I want to extract the data in each worksheet to a separate file.
I am running on Linux.
Is there a library I can use to access (read) XLS files on Linux from Python? | 3 | 0 | 0 | 0 | false | 4,355,455 | 0 | 2,104 | 1 | 0 | 0 | 4,355,435 | The easiest way would be to run excel up under Wine or as a VM and do it from Windows. You can use Mark Hammond's COM bindings, which come bundled with ActiveState Python. Alternatively, you could export the data in CSV format and read it from that. | 1 | 0 | 0 | Cross platform way to read Excel files in Python? | 4 | python,excel | 0 | 2010-12-04T19:41:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBMS/SQL is not relevant because that will only start flamewar. | 0 | 0 | 0 | 0 | false | 4,357,332 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | The NoSQL effort has to do with creating a persistence layer that scales with modern applications using non-normalized data structures for fast reads & writes and data formats like JSON, the standard format used by ajax based systems. It is sometimes the case that transaction based relational databases do not scale well, but more often than not poor performance is directly related to poor data modeling, poor query creation and poor planning.
No persistence layer should have anything to do with your domain model. Using a data abstraction layer, you transform the data contained in your objects to the schema implemented in your data store. You would then use the same DAL to read data from your data store, transform and load it into your objects.
Your data store could be XML files, an RDBMS like SQL Server or a NoSQL implementation like CouchDB. It doesn't matter.
FWIW, I've built and inherited plenty of applications that used no model at all. For some, there's no need, but if you're using an object model it has to fit the needs of the application, not the data store and not the presentation layer. | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBMS/SQL is not relevant because that will only start flamewar. | 0 | 0 | 0 | 0 | false | 4,355,924 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | SQL databases are still the order of the day. But it's becoming more common to use unstructured stores. NoSQL databases are well suited for some web apps, but not necessarily all of them. | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBMS/SQL is not relevant because that will only start flamewar. | 0 | 4 | 1.2 | 0 | true | 4,355,976 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | I don't think "NoSQL" has anything to do with "no model".
For one, MVC originated in the Smalltalk world for desktop applications, long before the current web server architecture (or even the web itself) existed. Most apps I've written have used MVC (including the M), even those that didn't use a DBMS (R or otherwise).
For another, some kinds of "NoSQL" explicitly have a model. An object database might look, to the application code, almost just like the interface that your "SQL RDBMS + ORM" are trying to expose, but without all the weird quirks and explicit mapping and so on.
Finally, you can obviously go the other way, and write SQL-based apps with no model. It may not be pretty, but I've seen it done. | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd/mm/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column (SQLite will accept 'dd/mm/yyyy', but that can cause problems when data is retrieved).
Question - at what point should the data be converted?
a) As part of a generic web_page_save() method (immediately after data validation, but before a row.table_update() method is called).
b) As part of row.table_update() (a data-object method called from web- or non-web-based applications, and includes construction of a field-value parameter list prior to executing the UPDATE command).
In other words, from a framework point-of-view, does the data-conversion belong to page-object processing or data-object processing?
Any opinions would be appreciated.
Alan | 0 | 1 | 1.2 | 0 | true | 4,360,475 | 1 | 75 | 2 | 0 | 0 | 4,360,407 | I could be wrong, but I think there is no definite answer to this question. It depends on "language" level your framework provides. For example, if another parts of the framework accept data in non-canonical form and then convert it to an internal canonical form, it this case it would worth to support some input date formats that are expected.
I always prefer to build strict frameworks and convert data in front-ends. | 1 | 0 | 0 | Framework design question | 2 | python,sqlite | 0 | 2010-12-05T18:24:00.000 |
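A small sketch of doing the conversion at validation time, so everything downstream (including row.table_update()) only ever sees date objects; the function name is arbitrary:

```python
from datetime import datetime

def validate_date(raw):
    """Turn a 'dd/mm/yyyy' string from the web page into a date object, or complain."""
    try:
        return datetime.strptime(raw, '%d/%m/%Y').date()
    except ValueError:
        raise ValueError('expected dd/mm/yyyy, got %r' % raw)

# SQLite then stores the bound date object as ISO 'yyyy-mm-dd', which sorts correctly.
```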
When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd/mm/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column (SQLite will accept 'dd/mm/yyyy', but that can cause problems when data is retrieved).
Question - at what point should the data be converted?
a) As part of a generic web_page_save() method (immediately after data validation, but before a row.table_update() method is called).
b) As part of row.table_update() (a data-object method called from web- or non-web-based applications, and includes construction of a field-value parameter list prior to executing the UPDATE command).
In other words, from a framework point-of-view, does the data-conversion belong to page-object processing or data-object processing?
Any opinions would be appreciated.
Alan | 0 | 2 | 0.197375 | 0 | false | 4,360,452 | 1 | 75 | 2 | 0 | 0 | 4,360,407 | I think it belongs in the validation. You want a date, but the web page inputs strings only, so the validator needs to check if the value van be converted to a date, and from that point on your application should process it like a date. | 1 | 0 | 0 | Framework design question | 2 | python,sqlite | 0 | 2010-12-05T18:24:00.000 |
Which one of Ruby-PHP-Python is best suited for Cassandra/Hadoop on 500M+ users? I know the language itself is not a big concern, but I'd like to know based on proven success, infrastructure and available utilities around those frameworks! Thanks so much. | 0 | 0 | 0 | 0 | false | 9,921,879 | 1 | 339 | 1 | 0 | 0 | 4,398,341 | Because Cassandra is written in Java, a client also in Java would likely have the best stability and maturity for your application.
As far as choosing between those 3 dynamic languages, I'd say whatever you're most comfortable with is best. I don't know of any significant differences between client libraries in those languages. | 1 | 0 | 0 | Scability of Ruby-PHP-Python on Cassandra/Hadoop on 500M+ users | 1 | php,python,ruby-on-rails,scalability,cassandra | 1 | 2010-12-09T12:46:00.000 |
The question pretty much says it all. The database is in MySQL using phpMyAdmin.
A little background: I'm writing the interface for a small non-profit organization. They need to be able to see which customers to ship to this month, which customers have recurring orders, etc. The current system is ancient, written in PHP 4, and I'm in charge of upgrading it. I spoke with the creator of the current system, and he agreed that it would be better to just write a new interface.
I'm new to Python, SQL and PHP, so this is a big learning opportunity for me. I'm pretty excited. I do have a lot of programming experience though (C, Java, Objective-C), and I don't anticipate any problems picking up Python.
So here I am!
Thanks in advance for all your help. | 0 | 0 | 0 | 0 | false | 4,413,898 | 0 | 186 | 1 | 0 | 0 | 4,413,840 | What can I say? Just download the various software, dig in and ask questions here when you run into specific problems. | 1 | 0 | 0 | I have a MySQL database, I want to write an interface for it using Python. Help me get started, please! | 3 | php,python,mysql,phpmyadmin | 1 | 2010-12-10T22:23:00.000 |
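To give a feel for the Python side, a minimal MySQLdb sketch; the connection details, table and column names are all hypothetical:

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='npo', passwd='secret', db='orders')
cur = conn.cursor()
cur.execute("SELECT name, address FROM customers WHERE ship_this_month = %s", (1,))
to_ship = cur.fetchall()     # list of (name, address) tuples for this month's shipments
cur.close()
conn.close()
```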
I am writing a Python logger script which writes to a CSV file in the following manner:
Open the file
Append data
Close the file (I think this is necessary to save the changes, to be safe after every logging routine.)
PROBLEM:
The file is very much accessible through Windows Explorer (I'm using XP). If the file is opened in Excel, access to it is locked by Excel. When the script tries to append data, obviously it fails, then it aborts altogether.
OBJECTIVE:
Is there a way to lock a file using Python so that any access to it remains exclusive to the script? Or perhaps my methodology is poor in the first place? | 1 | 0 | 0 | 0 | false | 4,427,958 | 0 | 2,562 | 1 | 0 | 0 | 4,427,936 | As far as I know, Windows does not support file locking. In other words, applications that don't know about your file being locked can't be prevented from reading a file.
But the remaining question is: how can Excel accomplish this?
You might want to try to write to a temporary file first (one that Excel does not know about) and replace the original file with it later on. | 1 | 0 | 0 | Prevent a file from being opened | 2 | python,logging,file-locking | 0 | 2010-12-13T10:36:00.000 |
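Another pragmatic option is to keep appending but retry for a while when Excel has the file locked, instead of aborting; a sketch using only the standard library, with arbitrary retry counts:

```python
import csv
import time

def append_row(path, row, retries=5, delay=2.0):
    """Append one CSV row, retrying if the file is locked (e.g. open in Excel)."""
    for _ in range(retries):
        try:
            f = open(path, 'ab')        # binary append mode for the csv module on Python 2
            try:
                csv.writer(f).writerow(row)
                return True
            finally:
                f.close()
        except IOError:
            time.sleep(delay)           # file busy; wait and try again
    return False                        # caller can log the failure instead of aborting
```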
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields like Name, Address
Now for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only. | 2 | 1 | 0.039979 | 0 | false | 4,428,933 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | I agree with Mchi, there is no problem storing "pickled" data if you don't need to search or do relational type operations.
Denormalisation is also an important tool that can scale up database performance when applied correctly.
It's probably a better idea to use JSON instead of pickles. It only uses a little more space, and makes it possible to use the database from languages other than Python | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields like Name, Address
Now for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only. | 2 | 2 | 1.2 | 0 | true | 4,429,509 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | Mixing SQL databases and pickling seems to ask for trouble. I'd go with either sticking all data in the SQL databases or using only pickling, in the form of the ZODB, which is a Python only OO database that is pretty damn awesome.
Mixing makes sense in some cases, but it is usually just more trouble than it's worth. | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields like Name, Address
Now for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only. | 2 | 0 | 0 | 0 | false | 4,432,349 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | I agree with @Lennart Regebro. You should probably see whether you need a Relational DB or an OODB. If RDBMS is your choice, I would suggest you stick with more tables. IMHO, pickling may have issues with scalability. If thats what you want, you should look at ZODB. It is pretty good and supports caching etc for better performance | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields like Name, Address
Now for managing Orders (each customer can have many), you can either create an Order table or store the orders in a serialized format in a separate column in the Customers table only. | 2 | 3 | 0.119427 | 0 | false | 4,428,635 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | Usually it's best to keep your data normalized (i.e. create more tables). Storing data 'pickled' as you say, is acceptable, when you don't need to perform relational operations on them. | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it maybe or mark it as such.)
I would like the tool to generate a tree for me, given the server name, db name, stored proc name, a "call tree", which includes:
Parent stored procedure.
Every other stored procedure that is being called as a child of the caller.
Every table that is being modified (updated or deleted from) as a child of the stored proc which does it.
Hopefully it is clear what I am after; if not - please do ask. If there is not a tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions?
EDIT: For the purposes of bounty Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :)
EDIT2: I would be OK if all I am missing is graphics. | 8 | 0 | 0 | 0 | false | 18,523,367 | 0 | 6,375 | 1 | 0 | 0 | 4,445,117 | SQL Negotiator Pro has a free lite version at www.aphilen.com
The full version is the only product out there that will find all dependencies and not stop after finding the first 10 child dependencies. Other products fail when there is a circular reference and just hang, these guys have covered this off. Also a neat feature is the ability to add notes to the diagram so that it can be easily distributed.
Full version is not cheap but has saved us plenty of hours usually required figuring out complex database procedures. apex also provide a neat tool | 1 | 0 | 0 | Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2? | 3 | sql-server-2008,stored-procedures,python-2.6,call-graph | 0 | 2010-12-14T22:54:00.000 |
I'm creating a hangman game with Silverlight and IronPython, and I use data in PostgreSQL for the random word, but I don't know how to access the PostgreSQL data from Silverlight.
How can or should it be done?
Thanks!! | 0 | 3 | 0.53705 | 0 | false | 4,470,466 | 0 | 950 | 1 | 0 | 0 | 4,470,073 | From Silverlight you cannot access a database directly (remember it's a web technology that actually runs locally on the client and the client cannot access your database directly over the internet).
To communicate with the server from Silverlight, you must create a separate WebService, for example with SOAP, WCF or RIA Services.
That WebService will expose your data on the web. Call the WebService method to get your data from your Silverlight program.
This WebService layer will be your middle tier that actually bridges your PostgreSQL database and your Silverlight application. | 1 | 0 | 0 | How to access PostgreSQL with Silverlight | 1 | silverlight,postgresql,silverlight-4.0,silverlight-3.0,ironpython | 0 | 2010-12-17T11:37:00.000 |
I need to implement a function that takes a lambda as the argument and queries the database. I use SQLAlchemy for ORM. Is there a way to pass the lambda, that my function receives, to SQLAlchemy to create a query?
Sincerely,
Roman Prykhodchenko | 2 | 2 | 1.2 | 0 | true | 4,470,921 | 0 | 1,681 | 1 | 0 | 0 | 4,470,481 | I guess you want to filter the data with the lambda, like a WHERE clause? Well, no: neither functions nor lambdas can be turned into a SQL query. Sure, you could just fetch all the data and filter it in Python, but that completely defeats the purpose of the database.
You'll need to recreate the logic you put into the lambda with SQLAlchemy. | 1 | 0 | 0 | Can I use lambda to create a query in SQLAlchemy? | 1 | python,sqlalchemy | 0 | 2010-12-17T12:33:00.000 |
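One workable compromise is to accept a callable that refines a SQLAlchemy Query instead of an arbitrary Python predicate; a self-contained sketch with a made-up User model:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):                      # hypothetical model, only for illustration
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    age = Column(Integer)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def run_query(session, criteria):
    # `criteria` takes a Query and returns a refined Query, built from SQLAlchemy
    # expressions (not plain Python tests), so it can still be compiled to SQL
    return criteria(session.query(User)).all()

adults = run_query(session, lambda q: q.filter(User.age >= 18))
```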
The desire is to have the user provide information in an OpenOffice Writer or MS Word file that is inserted into part of a ReportLab generated PDF. I am comfortable with ReportLab; but, I don't have any experience with using Writer or Word data in this way. How would you automate the process of pulling in the Writer/Word data? Is it possible to retain tables and graphs? | 1 | 0 | 0 | 0 | false | 4,691,989 | 0 | 147 | 1 | 0 | 0 | 4,478,478 | You can not embed such objects as is within a PDF, adobe specification does not support that. However you could always parse the data from the Office document and reproduce it as a table/graph/etc using reportlab in the output PDF. If you don't care about the data being an actual text you could always save it in the PDF as an image. | 1 | 0 | 0 | Is it possible to include OpenOffice Writer or MS Word data in a ReportLab generated PDF? | 1 | python,ms-word,reportlab,openoffice-writer | 0 | 2010-12-18T14:36:00.000 |
I am developing an application for managers that might be used in a large organisation. The app is improved and extended step by step on a frequent (irregular) basis. The app will have SQL connections to several databases and has a complex GUI.
What would you advise to deploy the app ?
Based on my current (limited) knowledge of apps in larger organisations, I prefer a setup where the app runs on a server and the user uses a thin client via the web. I prefer not to use a web browser because of (possible) limitations of the user GUI. The user experience should be as if the app were running on his own laptop/pc/tablet(?)
What opensource solution would you advise ?
Thanks ! | 1 | 1 | 0.197375 | 0 | false | 4,485,440 | 1 | 318 | 1 | 0 | 0 | 4,485,404 | If possible, make the application run without any installation procedure, and provide it on a network share (e.g. with a fixed UNC path). You didn't specify the client operating system: if it's Windows, create an MSI that sets up something in the start menu that will still make the application launch from the network share.
With that approach, updates will be as simple as replacing the files on the file server - yet it will always run on the client. | 1 | 0 | 0 | Deploy python application | 1 | python,client-server,rich-internet-application | 0 | 2010-12-19T22:13:00.000 |
Okay, so what I want to do is upload an excel sheet and display it on my website, in html. What are my options here ? I've found this xlrd module that allows you to read the data from spreadsheets, but I don't really need that right now. | 1 | 4 | 1.2 | 0 | true | 4,499,265 | 1 | 1,961 | 1 | 0 | 0 | 4,498,678 | Why don't you need xlrd? It sounds like exactly what you need.
Create a Django model with a FileField that holds the spreadsheet. Then your view uses xlrd to loop over the rows and columns and put them into an HTML table. Job done.
Possible complications: multiple sheets in one Excel file; formulas; styles. | 1 | 0 | 0 | Python/Django excel to html | 2 | python,html,django,excel | 0 | 2010-12-21T11:20:00.000 |
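A rough sketch of that xlrd loop (no styling, no formula handling, and in a real view the cell values should be HTML-escaped before rendering):

```python
import xlrd

def sheet_to_html(path):
    """Render the first worksheet of an .xls file as a bare HTML table."""
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)
    rows = []
    for r in range(sheet.nrows):
        cells = ''.join('<td>%s</td>' % sheet.cell_value(r, c)
                        for c in range(sheet.ncols))
        rows.append('<tr>%s</tr>' % cells)
    return '<table>%s</table>' % '\n'.join(rows)
```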
I have a CSV file which is about 1GB big and contains about 50 million rows of data. I am wondering whether it is better to keep it as a CSV file or to store it in some form of database. I don't know enough about MySQL to argue why I should use it or another database framework over just keeping the CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial "seed" set of 50 million, I use these as the first values in my queue.
Thanks, | 2 | 0 | 0 | 0 | false | 4,505,300 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | How about some key-value storages like MongoDB | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
I have a CSV file which is about 1GB big and contains about 50 million rows of data. I am wondering whether it is better to keep it as a CSV file or to store it in some form of database. I don't know enough about MySQL to argue why I should use it or another database framework over just keeping the CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial "seed" set of 50 million, I use these as the first values in my queue.
Thanks, | 2 | 3 | 1.2 | 0 | true | 4,505,218 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | I would say that there are a wide variety of benefits to using a database over a CSV for such large structured data so I would suggest that you learn enough to do so. However, based on your description you might want to check out non-server/lighter weight databases. Such as SQLite, or something similar to JavaDB/Derby... or depending on the structure of your data a non-relational (Nosql) database- obviously you will need one with some type of python support though. | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
I have a CSV file which is about 1GB big and contains about 50 million rows of data. I am wondering whether it is better to keep it as a CSV file or to store it in some form of database. I don't know enough about MySQL to argue why I should use it or another database framework over just keeping the CSV file. I am basically doing a Breadth-First Search with this dataset, so once I get the initial "seed" set of 50 million, I use these as the first values in my queue.
Thanks, | 2 | 1 | 0.039979 | 0 | false | 4,505,180 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | Are you just going to slurp in everything all at once? If so, then CSV is probably the way to go. It's simple and works.
If you need to do lookups, then something that lets you index the data, like MySQL, would be better. | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
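If the CSV stays, seeding the BFS queue is a straightforward streaming read; the file name and the assumption that the seed value sits in the first column are mine:

```python
import csv
from collections import deque

queue = deque()
with open('seeds.csv', 'rb') as f:     # binary mode for the csv module on Python 2
    for row in csv.reader(f):
        queue.append(row[0])           # assume the seed value is the first column

# the BFS then pops from the left with queue.popleft()
```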
I'm looking to write a small web app to utilise a dataset I already have stored in a MongoDB collection. I've been writing more Python than other languages lately and would like to broaden my repertoire and write a Python web app.
It seems however that most if not all of the current popular Python web development frameworks favour MySQL and others with no mention given to MongoDB.
I am aware that there are more than likely plugins written to allow Mongo be used with existing frameworks but so far have found little as to documentation that compares and contrasts them.
I was wondering what in people's experience is the Python web development framework with the best MongoDB support?
Many thanks in advance,
Patrick | 17 | 0 | 0 | 0 | false | 50,201,839 | 1 | 9,196 | 1 | 0 | 0 | 4,534,684 | There is no stable support for mongodb using django framework. I tried using mongoengine, but unlike models, provided for admin in django framework, there is no support for mongoengine.
Correct if I am wrong. | 1 | 0 | 0 | Python Web Framework with best Mongo support | 4 | python,mongodb | 0 | 2010-12-26T17:14:00.000 |
I'm new to MySQL, and I have a question about the memory.
I have a 200mb table(MyISAM, 2,000,000 rows), and I try to load all of it to the
memory.
I use python(actually MySQLdb in python) with sql: SELECT * FROM table.
However, from my linux "top" I saw this python process uses 50% of my memory(which is total 6GB)
I'm curious about why it uses about 3GB memory only for a 200 mb table.
Thanks in advance! | 1 | 0 | 0 | 0 | false | 4,559,691 | 0 | 4,116 | 2 | 0 | 0 | 4,559,402 | In pretty much any scripting language, a variable will always take up more memory than its actual contents would suggest. An INT might be 32 or 64bits, suggesting it would require 4 or 8 bytes of memory, but it will take up 16 or 32bytes (pulling numbers out of my hat), because the language interpreter has to attach various metadata to that value along the way.
The database might only require 200megabytes of raw storage space, but once you factor in the metadata, it will definitely occupy much much more. | 1 | 0 | 0 | the Memory problem about MySQL "SELECT *" | 4 | python,mysql | 0 | 2010-12-30T01:47:00.000 |
I'm new to MySQL, and I have a question about the memory.
I have a 200mb table(MyISAM, 2,000,000 rows), and I try to load all of it to the
memory.
I use python(actually MySQLdb in python) with sql: SELECT * FROM table.
However, from my linux "top" I saw this python process uses 50% of my memory(which is total 6GB)
I'm curious about why it uses about 3GB memory only for a 200 mb table.
Thanks in advance! | 1 | -1 | -0.049958 | 0 | false | 4,559,443 | 0 | 4,116 | 2 | 0 | 0 | 4,559,402 | This is almost certainly a bad design.
What are you doing with all that data in memory at once?
If it's for one user, why not pare the size down so you can support multiple users?
If you're doing a calculation on the middle tier, is it possible to shift the work to the database server so you don't have to bring all the data into memory?
You know you can do this, but the larger questions are (1) why? and (2) what else could you do? We'd need more context to answer these. | 1 | 0 | 0 | the Memory problem about MySQL "SELECT *" | 4 | python,mysql | 0 | 2010-12-30T01:47:00.000 |
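If the whole result set really has to be walked from Python, one way to avoid buffering every row client-side is MySQLdb's server-side cursor; a sketch with made-up connection details:

```python
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb',
                       cursorclass=MySQLdb.cursors.SSCursor)   # stream rows, don't buffer them
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")
row = cur.fetchone()
while row is not None:
    handle(row)            # hypothetical per-row processing
    row = cur.fetchone()
cur.close()
conn.close()
```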
How can I update multiple records in a queryset efficiently?
Do I just loop over the queryset, edit , and call save() for each one of them? Is it equivalent to psycopg2's executemany? | 1 | 6 | 1.2 | 0 | true | 4,601,203 | 1 | 6,395 | 1 | 0 | 0 | 4,600,938 | If you have to update each record with a different value, then of couse you have to iterate over each record. If you wish to do update them all with the same value, then just use the update method of the queryset. | 1 | 0 | 0 | Django: how can I update more than one record at once? | 3 | python,django | 0 | 2011-01-05T04:53:00.000 |
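Roughly, the two cases look like this; Entry and make_slug are hypothetical stand-ins for your own model and per-row logic:

```python
# Same new value for every matched row: a single UPDATE statement, no Python loop.
Entry.objects.filter(status='draft').update(status='published')

# A different value per row: iterate and save() each object individually.
for entry in Entry.objects.filter(status='draft'):
    entry.slug = make_slug(entry.title)
    entry.save()
```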
I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one person, with more than one computer, and some form of network... they would ideally need to be able to enter data (event registration and putting in scores) to a central database such as a MySQL or PostgreSQL server. I don't envision a need for synchronizing data between the local (sqlite) and remote databases, just a need to be able to switch via preferences or configuration file which kind of database the program should connect to the next time its started (along with the connection info for any remote database).
I'm fairly new at this kind of programming, and this will likely take me a good while to get where I want it... but I'd prefer to avoid going down the wrong path early on (at least on major things like this). Given my limited understanding of things like ORMs it seems like this would be a near-ideal use for something like SQLAlchemy, no? Or would the 'batteries included' python db-api be generic enough for this kind of task?
TIA,
Monte | 2 | 1 | 1.2 | 0 | true | 4,612,684 | 0 | 245 | 2 | 0 | 0 | 4,610,698 | Yes, SQLAlchemy will help you to be independent on what SQL database you use, and you get a nice ORM as well. Highly recommended. | 1 | 0 | 0 | creating a database-neutral app in python | 2 | python,database,sqlite,orm,sqlalchemy | 0 | 2011-01-06T00:31:00.000 |
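Concretely, with SQLAlchemy the switch can be just a connection URL read from the preferences file; the two URLs below are placeholders:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

LOCAL_URL = 'sqlite:///tournament.db'
REMOTE_URL = 'postgresql://scorer:secret@scorehost/tournament'   # hypothetical server

def make_session(db_url):
    engine = create_engine(db_url)
    return sessionmaker(bind=engine)()

session = make_session(LOCAL_URL)   # swap in REMOTE_URL when more than one machine is involved
```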
I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one person, with more than one computer, and some form of network... they would ideally need to be able to enter data (event registration and putting in scores) to a central database such as a MySQL or PostgreSQL server. I don't envision a need for synchronizing data between the local (sqlite) and remote databases, just a need to be able to switch via preferences or configuration file which kind of database the program should connect to the next time its started (along with the connection info for any remote database).
I'm fairly new at this kind of programming, and this will likely take me a good while to get where I want it... but I'd prefer to avoid going down the wrong path early on (at least on major things like this). Given my limited understanding of things like ORMs it seems like this would be a near-ideal use for something like SQLAlchemy, no? Or would the 'batteries included' python db-api be generic enough for this kind of task?
TIA,
Monte | 2 | -1 | -0.099668 | 0 | false | 4,610,735 | 0 | 245 | 2 | 0 | 0 | 4,610,698 | I don't see how those 2 use cases would use the same methods. Just create a wrapper module that conditionally imports either the sqlite or sqlalchemy modules or whatever else you need. | 1 | 0 | 0 | creating a database-neutral app in python | 2 | python,database,sqlite,orm,sqlalchemy | 0 | 2011-01-06T00:31:00.000 |
I'm developing a web app that uses stock data. The stock data can be stored in:
Files
DB
The structure of the data is simple: there's a daily set and a weekly set. If files are used, then I can store a file per symbol/set, such as GOOGLE_DAILY and GOOGLE_WEEKLY. Each set includes a simple list of (Date, open/high/low/close, volume, dividend) fields.
But how can I do it with a DB? Should I use a relational or some other kind of database? I thought about using two tables per symbol, but that would generate thousands of tables, which doesn't feel right.
Thanks. | 0 | 3 | 1.2 | 0 | true | 4,613,300 | 1 | 81 | 1 | 0 | 0 | 4,613,251 | You don't need a table per stock symbol, you just need one of the fields in the table to be the stock symbol. The table might be called StockPrices and its fields might be
ticker_symbol - the stock ticker symbol
time - the time of the stock quote
price - the price of the stock at that time
As long as ticker_symbol is an indexed field you can do powerful queries like SELECT time,price FROM StockPrices WHERE ticker_symbol='GOOG' ORDER BY time DESC and they will be very efficient. You can also store as many symbols as you like in this table.
You could add other tables for dividends, volume information and such. In all cases you probably have a composite key of ticker_symbol and time. | 1 | 0 | 0 | Help needed with db structure | 2 | python,django,data-structures | 0 | 2011-01-06T09:02:00.000 |
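In Django terms that single table might look something like this sketch (field sizes are guesses):

```python
from django.db import models

class StockPrice(models.Model):
    ticker_symbol = models.CharField(max_length=12, db_index=True)
    time = models.DateTimeField(db_index=True)
    price = models.DecimalField(max_digits=12, decimal_places=4)

    class Meta:
        unique_together = ('ticker_symbol', 'time')   # the composite key mentioned above

# e.g. StockPrice.objects.filter(ticker_symbol='GOOG').order_by('-time')
```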
I'm setting up a VM.
Both the host and the VM have MySQL.
How do I keep the VM MySQL synced to the host MySQL?
The host is running MySQL 5.5 on XP.
The VM is running MySQL 5.1 on Fedora 14.
1) I could DUMP to a "shared" folder and restore. Not sure if this will work.
2) I could network the host MySQL to the VM MySQL. Not sure how to do this.
How would I do this with Python 2.7?
I don't want them in sync after the set-up phase, but maybe sync some tables or stored procedures occasionally on rewrites. After I build out the Linux environment I would like to be able to convert V2P and have a dual-boot system. | 1 | 0 | 0 | 0 | false | 4,621,472 | 0 | 2,319 | 2 | 0 | 0 | 4,619,392 | Do you want it synced in realtime?
Why not just connect the guest's mysql process to the host? | 1 | 0 | 0 | How to Sync MySQL with python? | 3 | python,mysql,virtual-machine | 0 | 2011-01-06T20:15:00.000 |
I'm setting up a VM.
Both the host and the VM have MySQL.
How do I keep the VM MySQL synced to the host MySQL?
The host is running MySQL 5.5 on XP.
The VM is running MySQL 5.1 on Fedora 14.
1) I could DUMP to a "shared" folder and restore. Not sure if this will work.
2) I could network the host MySQL to the VM MySQL. Not sure how to do this.
How would I do this with Python 2.7?
I don't want them in sync after the set-up phase, but maybe sync some tables or stored procedures occasionally on rewrites. After I build out the Linux environment I would like to be able to convert V2P and have a dual-boot system. | 1 | 1 | 1.2 | 0 | true | 4,619,503 | 0 | 2,319 | 2 | 0 | 0 | 4,619,392 | You can use mysqldump to make snapshots of the database, and to restore it to known states after tests.
But instead of going into the complication of synchronizing different database instances, it would be best to open the host machine's instance to local network access, and have the applications in the virtual machine access that as if it were a remote server. Overall performance should improve too.
Even if you decide to run different databases for the host and the guest, run then both on the host's MySQL instance. Performance will be better, configuration management will be easier, and the apps in the guest will be tested against a realistic deployment environment. | 1 | 0 | 0 | How to Sync MySQL with python? | 3 | python,mysql,virtual-machine | 0 | 2011-01-06T20:15:00.000 |
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 0 | 0 | 0 | false | 9,090,731 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | MySQLDb is faster while SQLAlchemy makes code more user friendly -:) | 1 | 0 | 0 | What is the fastest/most performant SQL driver for Python? | 3 | python,mysql | 0 | 2011-01-06T21:56:00.000 |
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 5 | 0.321513 | 0 | false | 4,620,669 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | The fastest is SQLAlchemy.
"Say what!?"
Well, with a nice ORM, and I like SQLAlchemy, you will get your code finished much faster. If your code then runs 0.2 seconds slower, that isn't really going to make any noticeable difference. :)
Now if you get performance problems, then you can look into improving the code. But choosing the access module based on which one is in theory the "fastest" is premature optimization.
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 3 | 0.197375 | 0 | false | 4,620,433 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | The lightest possible way is to use ctypes and directly call into the MySQL API, of course, without using any translation layers. Now, that's ugly and will make your life miserable unless you also write C, so yes, the MySQLDb extension is the standard and most performant way to use MySQL while still using the Python Database API. Almost anything else will be built on top of that or one of its predecessors.
Of course, the connection layer is rarely where all of the database speed problems come from. That's mostly from misusing the API you have or building a bad database or queries. | 1 | 0 | 0 | What is the fastest/most performant SQL driver for Python? | 3 | python,mysql | 0 | 2011-01-06T21:56:00.000 |
I work with Oracle Database and the latest Django, but when I use the default user model the query is very slow.
What can I do? | 2 | 2 | 1.2 | 0 | true | 6,583,775 | 1 | 305 | 1 | 0 | 0 | 4,625,835 | The solution was to add an index. | 1 | 0 | 0 | how can i optimize a django oracle connection? | 1 | python,django,oracle,django-models,model | 0 | 2011-01-07T13:18:00.000 |
I know that with an InnoDB table, transactions are autocommit, however I understand that to mean for a single statement? For example, I want to check if a user exists in a table, and then if it doesn't, create it. However there lies a race condition. I believe using a transaction prior to doing the select, will ensure that the table remains untouched until the subsequent insert, and the transaction is committed. How can you do this with MySQLdb and Python? | 2 | 4 | 0.379949 | 0 | false | 4,656,098 | 0 | 363 | 1 | 0 | 0 | 4,637,886 | There exists a SELECT ... FOR UPDATE that allows you to lock the rows from being read by another transaction but I believe the records have to exist in the first place. Then you can do as you say, and unlock it once you commit.
In your case I think the best approach is to simply set a unique constraint on the username and try to insert. If you get a key exception you can notify the user that the name was already taken. | 1 | 0 | 0 | How do you create a transaction that spans multiple statements in Python with MySQLdb? | 2 | python,mysql | 0 | 2011-01-09T05:47:00.000 |
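A sketch of that unique-constraint approach with MySQLdb; the connection details and the users table are hypothetical, and users.username is assumed to be declared UNIQUE:

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')
cur = conn.cursor()
try:
    cur.execute("INSERT INTO users (username) VALUES (%s)", ('roman',))
    conn.commit()
except MySQLdb.IntegrityError:
    conn.rollback()
    taken = True          # duplicate key: the name already exists, with no race window
```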
I have a Django project with a long-running (~3 hour) management command.
In my production environment (Apache mod_wsgi) this process fails with a broken pipe (32) at the end, when trying to update the database.
Thank you | 1 | 1 | 1.2 | 0 | true | 4,644,443 | 1 | 571 | 1 | 0 | 0 | 4,644,317 | A broken pipe usually means that one end of the connection has been closed without notifying the other. In your case, I think it means that the database connection you established was closed from the database side, so when your code tries to use it, it raises the exception.
Usually the database connection has a timeout which you can configure by making it bigger to solve this kind of problem; check your database documentation to see how.
N.B.: you don't give us much detail, so I'm just making assumptions here.
Well, hope this can help. | 1 | 0 | 0 | django long running process database connection | 1 | python,django,apache,mod-wsgi | 0 | 2011-01-10T06:42:00.000 |
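One common workaround in long-running Django commands is to drop the (probably timed-out) connection just before the final writes and let Django reconnect; a sketch where compute and save stand in for your own code:

```python
from django.db import connection

def run_long_job(compute, save):
    results = compute()     # hours of work; MySQL may have dropped the idle connection by now
    connection.close()      # throw away the possibly dead connection
    save(results)           # Django opens a fresh connection on the next query
```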
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.
Ideally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible? | 2 | 2 | 1.2 | 0 | true | 4,663,353 | 1 | 803 | 2 | 1 | 0 | 4,663,071 | I don't think you're likely to find anything like that...surely not over blobstore. Because if all your data is stored in a single blob, you'd have to read the entire database into memory for any operation, and you said you can't do that.
Using the datastore as your backend is more plausible, but not much. The big issue with providing a SQLite driver there would be implementing transaction semantics, and since that's the key thing GAE takes away from you for the sake of high availability, it's hard to imagine somebody going to much trouble to write such a thing. | 1 | 0 | 0 | A Read-Only Relational Database on Google App Engine? | 3 | python,google-app-engine,sqlite,relational-database,non-relational-database | 0 | 2011-01-11T21:55:00.000 |
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I quickly hit the quota imposed by google. A final option is to use django-nonrel + djangoappengine, but I'm afraid that package is still in its infancy.
Ideally, I'd like to create a read-only sqlite database that uses a blobstore as its data source. Is this possible? | 2 | 2 | 0.132549 | 0 | false | 4,663,631 | 1 | 803 | 2 | 1 | 0 | 4,663,071 | django-nonrel does not magically provide an SQL database - so it's not really a solution to your problem.
Accessing a blobstore blob like a file is possible, but the SQLite module requires a native C extension, which is not enabled on App Engine. | 1 | 0 | 0 | A Read-Only Relational Database on Google App Engine? | 3 | python,google-app-engine,sqlite,relational-database,non-relational-database | 0 | 2011-01-11T21:55:00.000 |
I'm trying to use clr.AddReference to add sqlite3 functionality to a simple IronPython program I'm writing; but everytime I try to reference System.Data.SQLite I get this error:
Traceback (most recent call last):
File "", line 1, in
IOError: System.IO.IOException: Could not add reference to assembly System.Data.SQLite
at Microsoft.Scripting.Actions.Calls.MethodCandidate.Caller.Call(Object[] args, Boolean&shouldOptimize)
at IronPython.Runtime.Types.BuiltinFunction.BuiltinFunctionCaller2.Call1(CallSite site, CodeContext context, TFuncType func, T0 arg0)
at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2)
at CallSite.Target(Closure , CallSite , CodeContext , Object , Object )
at IronPython.Compiler.Ast.CallExpression.Invoke1Instruction.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Runtime.FunctionCode.Call(CodeContext context)
at IronPython.Runtime.Operations.PythonOps.QualifiedExec(CodeContext context, Object code, PythonDictionary globals, Object locals)
at Microsoft.Scripting.Interpreter.ActionCallInstruction4.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
I've been testing out the imports and references in the interpreter mainly, and these are the lines I test:
import sys
import clr
sys.path.append("C:/Program Files (x86)/SQLite.NET/bin")
clr.AddReference("System.Data.SQLite")
The error happens after the clr.AddReference line is entered. How would I add System.Data.SQLite properly? | 1 | 1 | 0.197375 | 0 | false | 4,696,478 | 0 | 1,695 | 1 | 0 | 0 | 4,682,960 | My first guess is that you're trying to load the x86 (32-bit) System.Data.SQLite.dll in a x64 (64-bit) process, or vice versa. System.Data.SQLite.dll contains the native sqlite3 library, which must be compiled for x86 or x64, so there is a version of System.Data.SQLite.dll for each CPU.
If you're using the console, ipy.exe is always 32-bit (even on 64-bit platforms) while ipy64.exe is AnyCPU, so it matches the current platform. If you're hosting IronPython, and the host app is AnyCPU, you need to load the right copy of System.Data.SQLite.dll for the machine you're running on (or just force the host app x86). | 1 | 1 | 0 | Adding System.Data.SQLite reference in IronPython | 1 | ado.net,ironpython,system.data.sqlite | 0 | 2011-01-13T17:11:00.000 |
I read somewhere that to save data to a SQLite3 database in Python, the method commit of the connection object should be called. Yet I have never needed to do this. Why? | 18 | 3 | 0.119427 | 0 | false | 15,967,816 | 0 | 20,808 | 1 | 0 | 0 | 4,699,605 | Python sqlite3 issues a BEGIN statement automatically before "INSERT" or "UPDATE". After that it automatically commits on any other command or db.close() | 1 | 0 | 0 | Why doesn’t SQLite3 require a commit() call to save data? | 5 | python,transactions,sqlite,autocommit | 0 | 2011-01-15T12:36:00.000 |
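A small sketch showing both behaviours with the standard sqlite3 module (the file name is arbitrary): with the default isolation level you do need commit(), while isolation_level=None puts the connection in autocommit mode.

```python
import sqlite3

conn = sqlite3.connect('example.db')                 # default mode: changes need commit()
conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES (?)", ('hello',))
conn.commit()                                        # without this the INSERT can be lost
conn.close()

auto = sqlite3.connect('example.db', isolation_level=None)   # autocommit mode
auto.execute("INSERT INTO notes VALUES ('world')")           # written immediately
auto.close()
```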
I've got a situation where I'm contemplating using Subversion/svn as the repository/version control system for a project. I'm trying to figure out if it's possible (and if so, how) to have the Subversion system, on a post-commit hook/process, write the user/file/time (and maybe the commit message) to either an external file (CSV) or to a MySQL db.
Once I can figure out how to invoke the post commit hook to write the output to a file, I can then modify my issue tracker/project app to then implement a basic workflow process based on the user role, as well as the success/failure of the repository files.
Short sample/pointers would be helpful.
My test env, is running subversion/svnserve on centos5. The scripting languages in use are Php/Python. | 1 | 0 | 1.2 | 0 | true | 4,701,984 | 0 | 1,005 | 2 | 0 | 0 | 4,701,902 | I would say that's possible, but you are going to need a bit of work to retrieve the username, date and commit message.
Subversion invokes the post-commit hook with the repo path and the revision number which was just committed as arguments.
In order to retrieve the information you're looking for, you will need to use an executable by the name of svnlook, which is bundled with Subversion.
See repo\hooks\post-commit.tmpl for a rather clear explanation about how to use it
Also, take a look at svnlook help, it's not difficult to use. | 1 | 0 | 0 | subversion post commit hooks | 2 | php,python,svn,hook,svn-hooks | 1 | 2011-01-15T20:14:00.000 |
I've got a situation where I'm contemplating using Subversion/svn as the repository/version control system for a project. I'm trying to figure out if it's possible (and if so, how) to have the Subversion system, on a post-commit hook/process, write the user/file/time (and maybe the commit message) to either an external file (CSV) or to a MySQL db.
Once I can figure out how to invoke the post commit hook to write the output to a file, I can then modify my issue tracker/project app to then implement a basic workflow process based on the user role, as well as the success/failure of the repository files.
Short sample/pointers would be helpful.
My test env, is running subversion/svnserve on centos5. The scripting languages in use are Php/Python. | 1 | 0 | 0 | 0 | false | 4,701,973 | 0 | 1,005 | 2 | 0 | 0 | 4,701,902 | Indeed it is very possible, in your repository root there should be a folder named hooks, inside which should be a file named post-commit (if not, create one), add whatever bash code you put there and it will execute after every commit.
Note, there are 2 variables that are passed into the script $1 is the repository, and $2 is the revision number (i think), you can use those two variables to execute some svn commands/queries, and pull out the required data, and do with it whatever your heart desires. | 1 | 0 | 0 | subversion post commit hooks | 2 | php,python,svn,hook,svn-hooks | 1 | 2011-01-15T20:14:00.000 |
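Putting the two answers together, a Python post-commit hook that shells out to svnlook and appends a CSV row could look like this sketch; the log path is made up, and the repository path and revision arrive as the hook's two arguments:

```python
#!/usr/bin/env python
# hooks/post-commit -- Subversion calls this as: post-commit REPOS REV
import csv
import subprocess
import sys

def svnlook(subcmd, repo, rev):
    out = subprocess.Popen(['svnlook', subcmd, repo, '-r', rev],
                           stdout=subprocess.PIPE).communicate()[0]
    return out.strip()

repo, rev = sys.argv[1], sys.argv[2]
row = [rev,
       svnlook('author', repo, rev),
       svnlook('date', repo, rev),
       svnlook('log', repo, rev)]

log = open('/var/svn/commits.csv', 'ab')   # hypothetical CSV path (or INSERT into MySQL instead)
csv.writer(log).writerow(row)
log.close()
```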
According to the Bigtable original article, a column key of a Bigtable is named using "family:qualifier" syntax where column family names must be printable but qualifiers may be arbitrary strings. In the application I am working on, I would like to specify the qualifiers using Chinese words (or phrase). Is it possible to do this in Google App Engine? Is there a Bigtable API other than provided datastore API? It seems Google is tightly protecting its platform for good reasons.
Thanks in advance.
Marvin | 1 | 2 | 0.379949 | 0 | false | 4,718,951 | 1 | 134 | 1 | 1 | 0 | 4,712,143 | The Datastore is the only interface to the underlying storage on App Engine. You should be able to use any valid UTF-8 string as a kind name, key name, or property name, however. | 1 | 0 | 0 | Is there an API of Google App Engine provided to better configure the Bigtable besides Datastore? | 1 | python,google-app-engine | 0 | 2011-01-17T10:32:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL. | 2 | 1 | 0.033321 | 0 | false | 10,204,815 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | I've used mongoengine with django but you need to create a file like mongo_models.py for example. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates whats stored in Mongo. Django forms are designed to plug into any data back end ( with a bit of craft )
BEWARE: If you have very well-defined and structured data that can be described in documents or models, then don't use Mongo. It's not designed for that, and something like PostgreSQL will work much better.
I use PostgreSQL for relational or well-structured data because it's good for that. Small memory footprint and good response.
I use Redis to cache or operate on in-memory queues/lists because it's very good for that. Great performance, provided you have the memory to cope with it.
I use Mongo to store large JSON documents and to perform map and reduce on them (if needed) because it's very good for that. Be sure to use indexing on certain fields if you can to speed up lookups.
Don't try to force a round peg into a square hole. It won't fit.
I've seen too many posts where someone wanted to swap a relational DB for Mongo because Mongo is a buzz word. Don't get me wrong, Mongo is really great... when you use it appropriately. I love using Mongo appropriately | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
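To make the mongo_models.py idea from that answer concrete, here is a rough MongoEngine sketch; the class, field names and database name are invented for illustration. A plain Django form can then call a save method that writes one of these documents.
# mongo_models.py -- illustrative only; names are made up
from mongoengine import Document, StringField, IntField, connect

connect('myapp_db')          # connects to a local mongod by default

class Report(Document):
    title = StringField(required=True, max_length=200)
    views = IntField(default=0)

# e.g. from a Django form's save():
# Report(title=form.cleaned_data['title']).save()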
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL. | 2 | 9 | 1.2 | 0 | true | 4,718,924 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | There's no reason why you can't use one of the standard RDBMSs for all the standard Django apps, and then Mongo for your app. You'll just have to replace all the standard ways of processing things from the Django ORM with doing it the Mongo way.
So you can keep urls.py and its neat pattern matching, views will still get parameters, and templates can still take objects.
You'll lose querysets because I suspect they are too closely tied to the RDBMS models - but they are just lazily evaluated lists really. Just ignore the Django docs on writing models.py and code up your database business logic in a Mongo paradigm.
Oh, and you won't have the Django Admin interface for easy access to your data. | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL. | 2 | -1 | -0.033321 | 0 | false | 4,719,398 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | Primary pitfall (for me): no JOINs! | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL. | 2 | 0 | 0 | 0 | false | 4,719,167 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | Upfront, it won't work for any existing Django app that ships its models. There's no backend for storing Django's Model data in mongodb or other NoSQL storages at the moment and, database backends aside, models themselves are somewhat of a moot point, because once you get into using someone's app (django.contrib apps included) that ships model-template-view triads, whenever you require a slightly different model for your purposes you either have to edit the application code (plain wrong), dynamically edit the contents of imported Python modules at runtime (magical), fork the application source altogether (cumbersome) or provide additional settings (good, but it's a rare encounter, with django.contrib.auth probably being the only widely known example of an application that allows you to dynamically specify which model it will use, as is the case with user profile models through the AUTH_PROFILE_MODULE setting).
This might sound bad, but what it really means is that you'll have to deploy SQL and NoSQL databases in parallel and decide on an app-by-app basis--like Spacedman suggested--and if mongodb is the best fit for a certain app, hell, just roll your own custom app.
There's a lot of fine Djangonauts with NoSQL storages on their minds. If you followed the streams from the past Djangocon presentations, every year there's been important discussions about how Django should leverage NoSQL storages. I'm pretty sure, in this year or the next, someone will refactor the apps and models API to pave the path to a clean design that can finally unify all the different flavors of NoSQL storages as part of the Django core. | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (Imagekit, Haystack, Registration, etc) and wanted to know if these apps will continue to work if I make the switch. Are there any known headaches that I will encounter, if so I might just keep banging my head with MySQL. | 2 | 0 | 0 | 0 | false | 4,728,500 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | I have recently tried this (although without Mongoengine). There are a huge number of pitfalls, IMHO:
No admin interface.
No Auth: django.contrib.auth relies on the DB interface.
Many things rely on django.contrib.auth.User. For example, the RequestContext class. This is a huge hindrance.
No Registration (Relies on the DB interface and django.contrib.auth)
Basically, search through the django interface for references to django.contrib.auth and you'll see how many things will be broken.
That said, it's possible that MongoEngine provides some support to replace/augment django.contrib.auth with something better, but there are so many things that depend on it that it's hard to say how you'd monkey patch something that much. | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like ".read" or ".import" but executescript is too slow for that many inserts.
I installed the sqlite3.exe shell in hopes of using ".read" or ".import" but I can't quite figure out how to use it. Running it through django in eclipse doesn't work because it expects the database to be at the root of my C drive which seems silly. Running it through the command line doesn't work because it can't find my database file (unless I'm doing something wrong)
Any tips?
Thanks! | 6 | 1 | 0.049958 | 0 | false | 4,724,461 | 0 | 1,859 | 2 | 0 | 0 | 4,719,836 | Use a parameterized query
and
Use a transaction. | 1 | 0 | 0 | Python and sqlite3 - adding thousands of rows | 4 | python,sql,django,sqlite | 0 | 2011-01-18T02:01:00.000 |
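A minimal sketch of those two suggestions (parameterized query plus a single transaction); the table, columns and generated rows are assumptions standing in for the .sql file's data. sqlite3 wraps the executemany in one transaction that is only written out on commit().
import sqlite3

rows = [(i, 'name-%d' % i) for i in range(10000)]   # pretend this came from the .sql file
con = sqlite3.connect('app.db')
cur = con.cursor()
# assumes the person table already exists
cur.executemany('INSERT INTO person (id, name) VALUES (?, ?)', rows)  # parameterized
con.commit()   # one commit for the whole batch instead of one per INSERT
con.close()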
I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like ".read" or ".import" but executescript is too slow for that many inserts.
I installed the sqlite3.exe shell in hopes of using ".read" or ".import" but I can't quite figure out how to use it. Running it through django in eclipse doesn't work because it expects the database to be at the root of my C drive which seems silly. Running it through the command line doesn't work because it can't find my database file (unless I'm doing something wrong)
Any tips?
Thanks! | 6 | 2 | 0.099668 | 0 | false | 13,787,939 | 0 | 1,859 | 2 | 0 | 0 | 4,719,836 | In addition to running the queries in bulk inside a single transaction, also try VACUUM and ANALYZEing the database file. It helped a similar problem of mine. | 1 | 0 | 0 | Python and sqlite3 - adding thousands of rows | 4 | python,sql,django,sqlite | 0 | 2011-01-18T02:01:00.000 |
At my organization, PostgreSQL databases are created with a 20-connection limit as a matter of policy. This tends to interact poorly when multiple applications are in play that use connection pools, since many of those open up their full suite of connections and hold them idle.
As soon as there are more than a couple of applications in contact with the DB, we run out of connections, as you'd expect.
Pooling behaviour is a new thing here; until now we've managed pooled connections by serializing access to them through a web-based DB gateway (?!) or by not pooling anything at all. As a consequence, I'm having to explain (literally, 5 trouble tickets from one person over the course of the project) over and over again how the pooling works.
What I want is one of the following:
A solid, inarguable rationale for increasing the number of available connections to the database in order to play nice with pools.
If so, what's a safe limit? Is there any reason to keep the limit to 20?
A reason why I'm wrong and we should cut the size of the pools down or eliminate them altogether.
For what it's worth, here are the components in play. If it's relevant how one of these is configured, please weigh in:
DB: PostgreSQL 8.2. No, we won't be upgrading it as part of this.
Web server: Python 2.7, Pylons 1.0, SQLAlchemy 0.6.5, psycopg2
This is complicated by the fact that some aspects of the system access data using SQLAlchemy ORM using a manually configured engine, while others access data using a different engine factory (Still sqlalchemy) written by one of my associates that wraps the connection in an object that matches an old PHP API.
Task runner: Python 2.7, celery 2.1.4, SQLAlchemy 0.6.5, psycopg2 | 3 | 2 | 0.379949 | 0 | false | 4,729,629 | 0 | 303 | 1 | 0 | 0 | 4,729,361 | I think it's reasonable to require one connection per concurrent activity, and it's reasonable to assume that concurrent HTTP requests are concurrently executed.
Now, the number of concurrent HTTP requests you want to process should scale with a) the load on your server, and b) the number of CPUs you have available. If all goes well, each request will consume CPU time somewhere (in the web server, in the application server, or in the database server), meaning that you couldn't process more requests concurrently than you have CPUs. In practice, it's not that all goes well: some requests will wait for IO at some point, and not consume any CPU. So it's ok to process some more requests concurrently than you have CPUs.
Still, assuming that you have, say, 4 CPUs, allowing 20 concurrent requests is already quite some load. I'd rather throttle HTTP requests than increasing the number of requests that can be processed concurrently. If you find that a single request needs more than one connection, you have a flaw in your application.
So my recommendation is to cope with the limit, and make sure that there are not too many idle connections (compared to the number of requests that you are actually processing concurrently). | 1 | 0 | 0 | How can I determine what my database's connection limits should be? | 1 | python,database,sqlalchemy,pylons,connection-pooling | 0 | 2011-01-18T21:40:00.000 |
The question is rather conceptual than direct.
What's the best solution to keep two different calendars synchronised? I can run a cron job, for example, every minute, and I can keep additional information in a database. How do I avoid event conflicts?
So far I have been thinking about these two solutions. The first one is keeping a database which gathers information from both calendars and each time compares whether something new appeared in either of them. Inside this database we can judge which events should be added, edited or removed and then send that information back to both calendars.
The second one is keeping two databases for both calendars and collecting information separately. Then, after those databases are compared, we can tell where the changes occurred and send information from database A to calendar B or from database B to calendar A. I'm afraid this solution leads to more conflicts when changes were made to both databases.
What do you think of these? To be more accurate, I mean two google calendars and a script written in python using gdata. Any idea of a simpler solution? | 0 | 0 | 1.2 | 0 | true | 4,738,228 | 0 | 516 | 1 | 0 | 0 | 4,737,852 | Most calendars, including the Google calendar, have ways to import and synchronize data. You can use those. Just import the gdata information (perhaps you need to make it into ics first, I don't know) into the Google calendar. | 1 | 0 | 1 | Two calendars synchronization | 1 | python,synchronization,calendar | 0 | 2011-01-19T16:27:00.000
I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.
Using either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.
For example,
SELECT * FROM table WHERE col1 = 'value'; works fine.
UPDATE table SET col2 = 'value' WHERE col1 = 'val'; does not work
Is this a known limitation with instant client, or is there a problem with my installation?
Thanks in advance for your help. | 2 | 1 | 0.099668 | 0 | false | 4,753,975 | 0 | 587 | 2 | 0 | 0 | 4,748,962 | Use the instant client with SQL*Plus and see if you can run the update. If there's a problem, SQL*Plus is production quality, so won't crash and it should give you a reasonable error message. | 1 | 0 | 0 | Oracle instant client can't execute sql update | 2 | python,oracle,pyodbc,cx-oracle,instantclient | 0 | 2011-01-20T15:26:00.000 |
I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.
Using either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.
For example,
SELECT * FROM table WHERE col1 = 'value'; works fine.
UPDATE table SET col2 = 'value' WHERE col1 = 'val'; does not work
Is this a known limitation with instant client, or is there a problem with my installation?
Thanks in advance for your help. | 2 | 0 | 0 | 0 | false | 4,749,022 | 0 | 587 | 2 | 0 | 0 | 4,748,962 | Sounds more like the user you are connecting with doesn't have those privileges on that table. Do you get an ORA error indicating insufficient permissions when performing the update? | 1 | 0 | 0 | Oracle instant client can't execute sql update | 2 | python,oracle,pyodbc,cx-oracle,instantclient | 0 | 2011-01-20T15:26:00.000
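For reference, a minimal cx_Oracle update sketch; the connect string, table and bind values are placeholders. If this also fails at the client, the permissions/SQL*Plus checks above are the next step, and note that the change only persists after commit().
import cx_Oracle

conn = cx_Oracle.connect('user/password@host:1521/orcl')   # placeholder DSN
cur = conn.cursor()
cur.execute("UPDATE mytable SET col2 = :newval WHERE col1 = :key",
            {'newval': 'value', 'key': 'val'})
conn.commit()   # without this the UPDATE is rolled back when the connection closes
conn.close()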
I have a massive data set of customer information (100s of millions of records, 50+ tables).
I am writing a python (twisted) app that I would like to interact with the dataset, performing table manipulation. What I really need is an abstraction of 'table', so I can add/remove/alter columns/tables without having to resort to only creating SQL.
Is there an ORM that will not add significant overhead to my application, considering the size of the dataset? | 1 | 0 | 1.2 | 0 | true | 4,764,551 | 0 | 858 | 1 | 0 | 0 | 4,764,476 | I thought that ORM solutions had to do with DQL (Data Query Language), not DDL (Data Definition Language). You don't use ORM to add, alter, or remove columns at runtime. You'd have to be able to add, alter, or remove object attributes and their types at the same time.
ORM is about dynamically generating SQL and making the developer's life easier, not what you're alluding to. | 1 | 0 | 0 | Python ORM for massive data set | 4 | python,orm | 0 | 2011-01-21T22:26:00.000
I'm trying to use sqlalchemy on Cygwin with a MSSQL backend but I cannot seem to get any of the MSSQL Python DB APIs installed on Cygwin. Is there one that is known to work? | 2 | 0 | 0 | 0 | false | 5,013,126 | 0 | 682 | 1 | 0 | 0 | 4,770,083 | FreeTDS + unixodbc + pyodbc stack will work on Unix-like systems and should therefore work just as well in Cygwin. You should use version 8.0 of TDS protocol. This can be configured in connection string. | 1 | 0 | 0 | Which Python (sqlalchemy) mssql DB API works in Cygwin? | 1 | python,sql-server,cygwin,sqlalchemy | 0 | 2011-01-22T19:34:00.000 |
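A sketch of what that connection might look like from pyodbc; the server name, credentials and the {FreeTDS} driver name are assumptions that depend on how FreeTDS is registered in your unixODBC configuration.
import pyodbc

# DSN-less connection through the FreeTDS ODBC driver, speaking TDS 8.0
conn = pyodbc.connect(
    'DRIVER={FreeTDS};SERVER=mssql.example.com;PORT=1433;'
    'DATABASE=mydb;UID=myuser;PWD=secret;TDS_Version=8.0')
cur = conn.cursor()
cur.execute('SELECT 1')
print cur.fetchone()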
A website I am making revolves around a search utility, and a want to have something on the homepage that lists the top 10 (or something) most searched queries of the day.
What would be the easiest / most efficient way of doing this?
Should I use a sql database, or just a text file containing the top 10 queries and a cronjob erasing the data every day?
Also, how would I avoid the problem of two users searching for something at the same and it only recording one of them, i.e multithreading?
The back-end of the site is all written in python | 0 | 2 | 1.2 | 0 | true | 4,778,081 | 0 | 98 | 1 | 0 | 0 | 4,778,058 | Put the queries in a table, with one row per distinct query, and a column to count. Insert if the query doesn't exist already, or otherwise increment the query row counter.
Put a cron job together that empties the table at 12 midnight. Use transactions to prevent two different requests from colliding. | 1 | 0 | 0 | How to make a "top queries" page | 2 | python,sql,multithreading | 0 | 2011-01-24T02:23:00.000
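One possible shape for that counter table, sketched here with sqlite3 (any DB-API backend works the same way; the table and column names are made up). The two statements run inside one transaction so concurrent requests don't lose counts; this assumes term is the table's unique/primary key.
import sqlite3

def record_search(term):
    con = sqlite3.connect('searches.db', timeout=10)
    with con:  # one transaction; writers are serialized by the database
        con.execute('INSERT OR IGNORE INTO query_counts (term, hits) VALUES (?, 0)', (term,))
        con.execute('UPDATE query_counts SET hits = hits + 1 WHERE term = ?', (term,))
    con.close()

def top_queries(n=10):
    con = sqlite3.connect('searches.db')
    rows = con.execute('SELECT term, hits FROM query_counts ORDER BY hits DESC LIMIT ?', (n,)).fetchall()
    con.close()
    return rows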
In some project I implement user-requested mapping (at runtime) of two tables which are connected by a 1-to-n relation (one table has a ForeignKey field).
From what I get from the documentation, the usual way is to add a orm.relation to the mapped properties with a mapped_collection as collection_class on the non-foreignkey table with a backref, so that in the end both table orm objects have each other mapped on an attribute (one has a collection through the collection_class of the orm.relation used on it, the other has an attribute placed on it by the backref).
I am in a situation where I sometimes do just want the ForeignKey-side to have a mapped attribute to the other table (that one, that is created by the backref), depending on what the user decides (he might just want to have that side mapped).
Now I'm wondering whether I can simply use an orm.relation on the ForeignKey table as well, so I'd probably end up with an orm.relation on the non-foreignkey table as before with a mapped_collection but no backref, and another orm.relation on the foreignkey table replacing that automagic backref (making two orm.relations on both tables mapping each other from both sides).
Will that get me into trouble? Is the result equivalent (to just one orm.relation on the non-foreignkey table with a backref)? Is there another way I could map just on the ForeignKey-side without having to map the dictionary on the non-ForeignKey table as well with that backref? | 0 | 1 | 0.197375 | 0 | false | 5,594,860 | 1 | 664 | 1 | 0 | 0 | 4,782,344 | I have found the answer myself in the meantime:
If you use an orm.relation on each side and no backrefs, you have to use back_populates; otherwise, if you change something on one side, the mapping on the other side won't be properly updated.
Therefore, an orm.relation from each side instead of an automated backref IS possible but you have to use back_populates accordingly. | 1 | 0 | 0 | SQLAlchemy - difference between mapped orm.relation with backref or two orm.relation from both sides | 1 | python,database,sqlalchemy,relation | 0 | 2011-01-24T13:10:00.000 |
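A small sketch of the two-sided mapping with back_populates, written in declarative style with invented class names; the same back_populates arguments apply if you configure classic mapper()/orm.relation mappings instead.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # dict keyed by the child's "name"; kept in sync with Child.parent via back_populates
    children = relationship('Child',
                            collection_class=attribute_mapped_collection('name'),
                            back_populates='parent')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    parent_id = Column(Integer, ForeignKey('parent.id'))
    parent = relationship('Parent', back_populates='children')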
I'm writing a network scheduling like program in Python 2.6+ in which I have a complex queue requirement: Queue should store packets, should retrieve by timestamp or by packet ID in O(1), should be able to retrieve all the packets below a certain threshold, sort packet by priorities etc. It should insert and delete with reasonable complexity as well.
Now I have two choices:
Combine a few data structures and synchronize them properly to fulfill my requirement.
Use some in-memory database so that I can perform all sorts of operations easily.
Any suggestions please? | 0 | 0 | 0 | 0 | false | 4,803,269 | 0 | 94 | 1 | 0 | 0 | 4,802,900 | A database is just some indexes and fancy algorithms wrapped around a single data structure -- a table. You don't have a lot of control about what happens under the hood.
I'd try using the built-in Python datastructures. | 1 | 0 | 0 | Need advice on customized datastructure vs using in-memory DB? | 2 | python,data-structures | 0 | 2011-01-26T09:20:00.000 |
I am new to python and its workings.
I have an Excel spreadsheet which was generated using some VBA macros.
Now I want to invoke Python to do some of the jobs...
My question then is: How can I use python script instead of VBA in an excel spreadsheet?
An example of such will be appreciated. | 2 | 0 | 0 | 0 | false | 4,872,985 | 0 | 2,789 | 1 | 0 | 0 | 4,829,509 | I've always done the manipulation of Excel spreadsheets and Word documents with standalone scripts which use COM objects to manipulate the documents. I've never come across a good use case for putting Python into a spreadsheet in place of VBA. | 1 | 0 | 1 | Use of python script instead of VBA in Excel | 2 | python,excel | 0 | 2011-01-28T14:49:00.000 |
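As an illustration of the standalone-script COM approach mentioned in that answer (this requires Windows, an installed Excel and the pywin32 package; the file path, cell addresses and colour are invented for the example):
import win32com.client

excel = win32com.client.Dispatch('Excel.Application')
excel.Visible = False
wb = excel.Workbooks.Open(r'C:\reports\report.xls')   # placeholder path
sheet = wb.Worksheets(1)
sheet.Cells(1, 1).Value = 'Filled from Python'
sheet.Cells(1, 1).Interior.ColorIndex = 6             # highlight the cell in yellow
wb.Save()
excel.Quit()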
How do I use the Werkzeug framework without any ORM like SQLAlchemy? In my case, it's a lot of effort to rewrite all the tables and columns in SQLAlchemy from existing tables & data.
How do I query the database and make an object from the database output?
In my case now, I use Oracle with cx_Oracle. If you have a solution for MySQL, too, please mention it.
Thanks. | 1 | 0 | 0 | 0 | false | 4,838,669 | 1 | 445 | 1 | 0 | 0 | 4,838,528 | Is it a problem to use the normal DB API, issue regular SQL queries, etc.? cx_Oracle even has connection pooling built in to help you manage connections. | 1 | 0 | 0 | Werkzeug without ORM | 3 | python,orm,werkzeug | 0 | 2011-01-29T18:09:00.000
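A bare-bones sketch of that plain DB-API route (the connect string and table name are placeholders): query with cx_Oracle and turn each row into a dict keyed by column name, which a Werkzeug view can then render without any ORM layer.
import cx_Oracle

def fetch_customers():
    conn = cx_Oracle.connect('scott/tiger@dbhost/orcl')   # placeholder credentials
    cur = conn.cursor()
    cur.execute('SELECT id, name, city FROM customers')
    columns = [d[0].lower() for d in cur.description]
    rows = [dict(zip(columns, row)) for row in cur]
    conn.close()
    return rows   # plain dicts instead of ORM objects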
I have several occasions where I want to collect data when in the field. This is in situations where I do not always have access to my postgres database.
To keep things in sync, it would be excellent if I could use psycopg2 functions offline to generate queries that can be held back and, once I am able to connect to the database, process everything that was held back.
One thing I am currently struggling with is that the psycopg2 cursor requires a connection to be constructed.
My question is:
Is there a way to use a cursor to do things like mogrify without an active connection object? Or with a connection object that is not connected to a database? I would then like to write the mogrify results temporarily to a file so they can be processed later. | 12 | 0 | 0 | 0 | false | 4,880,978 | 0 | 3,099 | 1 | 0 | 0 | 4,879,804 | It seems like it would be easier and more versatile to store the data to be inserted later in another structure. Perhaps a csv file. Then when you connect you can run through that file, but you can also easily do other things with that CSV if necessary. | 1 | 0 | 0 | Use psycopg2 to construct queries without connection | 2 | python,psycopg2,offline-mode | 0 | 2011-02-02T21:00:00.000
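A sketch of that idea (the file name and target table are made up): buffer the pending rows in a CSV while offline, then replay them with a parameterized executemany once a connection is available, so mogrify is never needed without a connection.
import csv
import psycopg2

PENDING = 'pending_rows.csv'

def queue_row(sample_id, value):
    # called in the field, no database needed
    with open(PENDING, 'ab') as f:
        csv.writer(f).writerow([sample_id, value])

def flush_queue(dsn):
    # called once the postgres server is reachable again
    rows = list(csv.reader(open(PENDING, 'rb')))
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.executemany('INSERT INTO field_samples (sample_id, value) VALUES (%s, %s)', rows)
    conn.commit()
    conn.close()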
After setting up a django site and running on the dev server, I have finally gotten around to figuring out deploying it in a production environment using the recommended mod_wsgi/apache22. I am currently limited to deploying this on a Windows XP machine.
My problem is that several django views I have written use the python subprocess module to run programs on the filesystem. I keep getting errors when running the subprocess.Popen I have seen several SO questions that have asked about this, and the accepted answer is to use WSGIDaemonProcess to handle the problem (due to permissions of the apache user, I believe).
The only problem with this is that WSGIDaemonProcess is not available for mod_wsgi on Windows. Is there any way that I can use mod_wsgi/apache/windows/subprocess together? | 6 | 1 | 0.099668 | 0 | false | 8,750,220 | 1 | 2,617 | 1 | 0 | 0 | 4,882,605 | I ran into a couple of issues trying to use subprocess under this configuration. Since I am not sure what specifically you had trouble with I can share a couple of things that were not easy for me to solve but in hindsight seem pretty trivial.
I was receiving permissions related errors when trying to execute an application. I searched quite a bit but was having a hard time finding Windows specific answers. This one was obvious: I changed the user under which Apache runs to a user with higher permissions. (Note, there are security implications with that so you want to be sure you understand what you are getting in to).
Django (depending on your configuration) may store strings as Unicode. I had a command line application I was trying to run with some parameters from my view which was crashing despite having the correct arguments passed in. After a couple hours of frustration I did a type(args) which returned <type 'unicode'> rather than my expected string. A quick conversion resolved that issue. | 1 | 0 | 0 | Django + Apache + Windows WSGIDaemonProcess Alternative | 2 | python,django,apache,subprocess,mod-wsgi | 0 | 2011-02-03T04:07:00.000 |
I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.
However, I want to do some queries that group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time zones for users can change, I wonder.
Thanks | 0 | 1 | 0.099668 | 0 | false | 4,928,246 | 0 | 123 | 2 | 0 | 0 | 4,928,220 | It's almost always better to save the time information in UTC, and convert it to local time when needed for presentation and display.
Otherwise, you will go stark raving mad trying to manipulate and compare dates and times in your system because you will have to convert each time to UTC time for comparison and manipulation. | 1 | 0 | 0 | How to handle time zones in a CMS? | 2 | python,mysql,timezone | 0 | 2011-02-08T00:10:00.000 |
I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.
However, I want to do some queries that group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time zones for users can change, I wonder.
Thanks | 0 | 3 | 1.2 | 0 | true | 4,928,244 | 0 | 123 | 2 | 0 | 0 | 4,928,220 | Generally always store in UTC and convert for display, it's the only sane way to do time differences etc. Or when somebody next year decides to change the summer time dates. | 1 | 0 | 0 | How to handle time zones in a CMS? | 2 | python,mysql,timezone | 0 | 2011-02-08T00:10:00.000 |
I am having problem when I do a query to mongodb using pymongo.
I do not know how to avoid getting the _id for each record.
I am doing something like this,
result = db.meta.find(filters, [
'model',
'fields.parent',
'fields.status',
'fields.slug',
'fields.firm',
'fields.properties'])
I do not want to iterate the cursor elements only to delete a field.
Thanks,
Joaquin | 1 | 0 | 0 | 0 | false | 4,941,686 | 0 | 803 | 1 | 0 | 0 | 4,937,817 | That doesn't make much sense. The object id is a core part of each document. Convert the BSON/JSON document to a native data structure (depending on your implementation language) and remove _id on this level. Apart from that, it does not make much sense what you are trying to accomplish. | 1 | 0 | 0 | PYMongo: Keep returning _id in every record after quering, How can I exclude this record? | 2 | python,mongodb,pymongo | 0 | 2011-02-08T20:08:00.000
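If the field really has to go, a short client-side sketch along the lines of that answer (db and filters are the objects from the question); depending on your pymongo version, passing a dict projection such as {'_id': False, 'model': True, ...} as the fields argument may also exclude it server-side, which is worth checking in the pymongo docs.
docs = []
for doc in db.meta.find(filters, ['model', 'fields.parent', 'fields.status',
                                  'fields.slug', 'fields.firm', 'fields.properties']):
    doc.pop('_id', None)   # drop the object id client-side
    docs.append(doc)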
I am in the middle of a project that involves grabbing numerous pieces of information out of 70GB worth of xml documents and loading them into a relational database (in this case postgres). I am currently using python scripts and psycopg2 to do the inserts and whatnot. I have found that as the number of rows in some of the tables increases (the largest of which is at around 5 million rows), the speed of the script (inserts) has slowed to a crawl. What was once taking a couple of minutes now takes about an hour.
What can I do to speed this up? Was I wrong in using python and psycopg2 for this task? Is there anything I can do to the database that may speed up this process? I get the feeling I am going about this in entirely the wrong way. | 4 | 2 | 0.057081 | 0 | false | 4,969,077 | 0 | 4,138 | 2 | 0 | 0 | 4,968,837 | Considering the process was fairly efficient before and only slowed down once the dataset grew, my guess is it's the indexes. You may try dropping the indexes on the table before the import and recreating them after it's done. That should speed things up. | 1 | 0 | 0 | Postgres Performance Tips Loading in billions of rows | 7 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-11T12:11:00.000
I am in the middle of a project that involves grabbing numerous pieces of information out of 70GB worth of xml documents and loading them into a relational database (in this case postgres). I am currently using python scripts and psycopg2 to do the inserts and whatnot. I have found that as the number of rows in some of the tables increases (the largest of which is at around 5 million rows), the speed of the script (inserts) has slowed to a crawl. What was once taking a couple of minutes now takes about an hour.
What can I do to speed this up? Was I wrong in using python and psycopg2 for this task? Is there anything I can do to the database that may speed up this process? I get the feeling I am going about this in entirely the wrong way. | 4 | 0 | 0 | 0 | false | 4,968,869 | 0 | 4,138 | 2 | 0 | 0 | 4,968,837 | I'd look at the rollback logs. They've got to be getting pretty big if you're doing this in one transaction.
If that's the case, perhaps you can try committing a smaller transaction batch size. Chunk it into smaller blocks of records (1K, 10K, 100K, etc.) and see if that helps. | 1 | 0 | 0 | Postgres Performance Tips Loading in billions of rows | 7 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-11T12:11:00.000 |
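A sketch of that batched-commit idea (the DSN, table and parse_xml_files generator are all invented placeholders): commit every N rows instead of holding one huge transaction open for the whole 70GB load.
import psycopg2

BATCH = 10000

conn = psycopg2.connect('dbname=mydb user=loader')   # placeholder DSN
cur = conn.cursor()
pending = 0
for record in parse_xml_files():                     # assumed generator over the XML data
    cur.execute('INSERT INTO items (name, value) VALUES (%s, %s)', record)
    pending += 1
    if pending >= BATCH:
        conn.commit()                                # keep each transaction small
        pending = 0
conn.commit()
conn.close()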
I am trying to find the best solution (performance/easy code) for the following situation:
Considering a database system with two tables, A (production table) and A'(cache table):
Future rows are added first into A' table in order to not disturb the production one.
When a timer says go (at midnight, for example), rows from A' are incorporated into A.
Dealing with duplicates, nonexistent rows, etc. has to be considered.
I've been reading a bit about Materialized Views, Triggers, etc. The problem is that I should not introduce so much noise in the production table because it is the reference table for a server (a PowerDNS server in fact).
So, what do you guys make of it? Would it be better to use triggers, MVs, or to do it programmatically outside of the database? (I'm using python, BTW)
Thanks in advance for helping me. | 0 | 1 | 0.197375 | 0 | false | 4,973,738 | 1 | 611 | 1 | 0 | 0 | 4,973,316 | The "best" solution according to the criteria you've laid out so far would just be to insert into the production table.
...unless there's actually something extremely relevant you're not telling us | 1 | 0 | 0 | Materialize data from cache table to production table [PostgreSQL] | 1 | python,postgresql,triggers,materialized-views | 0 | 2011-02-11T19:46:00.000 |
When exactly is the database transaction committed? Is it, for example, at the end of every response generation?
To explain the question: I need to develop a somewhat more sophisticated application where I have to control DB transactions more or less manually. Especially, I have to be able to design a set of forms with some complex logic behind the forms (some kind of 'wizard'), but the database operations must not be committed until the last form and the confirmation.
Of course I could put everything in the session without making any DB change, but that's not a solution; the changes are quite complex and really have to be performed. So the only way is to keep them uncommitted.
Now back to the question: if I understand how it works in web2py, it will be easier for me to decide if that's a good framework for me. I am a java and php programmer, I know python but I don't know web2py yet ...
If you know any web page where it's explained, I would also appreciate it.
Thanks! | 2 | 1 | 0.099668 | 0 | false | 5,443,158 | 1 | 1,224 | 1 | 0 | 0 | 4,979,392 | You can call db.commit() and db.rollback() pretty much everywhere. If you do not and the action does not raise an exception, it commits before returning a response to the client. If it raises an exception and it is not explicitly caught, it rolls back. | 1 | 0 | 0 | web2py and DB transactions | 2 | python,web2py | 0 | 2011-02-12T17:14:00.000
I am using TG2.1 on WinXP.
Python ver is 2.6.
Trying to use sqlautocode (0.5.2) for working with my existing MySQL schema.
SQLAlchemy ver is 0.6.6
import sqlautocode # works OK
While trying to reflect the schema ----
sqlautocode mysql:\\username:pswd@hostname:3306\schema_name -o tables.py
SyntaxError: invalid syntax
is raised.
Can someone please point out what's going wrong, & how to handle the same?
Thanks,
Vineet. | 2 | 1 | 0.099668 | 0 | false | 5,003,413 | 0 | 659 | 1 | 0 | 0 | 4,994,838 | Hey, I got it right somehow.
The problem seems to be version mismatch between SA 0.6 & sqlautocode 0.6
Seems that they don't work in tandem.
So I removed those & installed SA 0.5
Now it's working.
Thanks,
Vineet Deodhar. | 1 | 0 | 0 | sqlautocode for mysql giving syntax error | 2 | python,web-applications,turbogears2 | 0 | 2011-02-14T16:55:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet Deodhar | 0 | 0 | 0 | 0 | false | 5,292,555 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | We've succeeded in faking sqa if there's a combination of columns on the underlying table that uniquely identifies it.
If this is your own table and you're not live, add a primary key integer column or something.
We've even been able to map an existing legacy table in a database with a) no pk and b) no proxy for a primary key in the other columns. It was Oracle not MySQL but we were able to hack sqa to see Oracle's rowid as a pk, though this is only safe for insert and query...update is not possible since it can't uniquely identify which row it should be updating. But these are ugly hacks so if you can help it, don't go down that road. | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet Deodhar | 0 | 0 | 0 | 0 | false | 5,292,729 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | If the problem is that sqlautocode will not generate your class code because it cannot determine the PKs of the table, then you would probably be able to change that code to fit your needs (even if it means generating SQLA code that doesn't have PKs). Eventually, if you're using the ORM side of SQLA, you're going to need fields defined as PKs, even if the database doesn't explicitly label them as such. | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet Deodhar | 0 | 0 | 0 | 0 | false | 5,003,573 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | I don't think so. How is an ORM supposed to persist an object to the database without any way to uniquely identify records?
However, most ORMs accept a primary_key argument so you can indicate the key if it is not explicitly defined in the database. | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
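As an illustration of that primary_key argument (table and column names are invented), SQLAlchemy lets you declare key columns in the mapping even when the database table itself has no PRIMARY KEY constraint; here in the classic mapper() style that matches the SA 0.5 era.
from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.orm import mapper

metadata = MetaData()

# The real MySQL table has no primary key; we promise SQLAlchemy that
# (code, batch_no) uniquely identifies a row so the ORM can map it.
legacy = Table('legacy_stock', metadata,
               Column('code', String(20), primary_key=True),
               Column('batch_no', Integer, primary_key=True),
               Column('qty', Integer))

class LegacyStock(object):
    pass

mapper(LegacyStock, legacy)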
The python unit testing framework called nosetest has a plugin for sqlalchemy, however there is no documentation for it that I can find. I'd like to know how it works, and if possible, see a code example. | 3 | 0 | 0 | 0 | false | 10,268,378 | 0 | 340 | 1 | 0 | 0 | 5,009,112 | It is my understanding that this plugin is only meant for unit testing SQLAlchemy itself and not as a general tool. Perhaps that is why there are no examples or documentation? Posting to the SQLAlchemy mailing list is likely to give you a better answer "straight from the horse's mouth". | 1 | 0 | 0 | How does the nosetests sqlalchemy plugin work? | 1 | python,sqlalchemy,nosetests | 0 | 2011-02-15T20:26:00.000 |
We have a system which generates reports in XLS using Spreadsheet_Excel_Writer for smaller files and in case of huge files we just export them as CSVs.
We now want to export excel sheets which are multicolor etc. as a part of report generation, which in excel could be done through a few macros.
Is there any good exporter which generates the excel sheets with macros? (Spreadsheet_Excel_Writer can't do this.) If it exists for PHP it would be amazing, but if it exists for any other language that's fine; we could interface it. | 0 | 0 | 1.2 | 0 | true | 5,028,703 | 0 | 515 | 1 | 0 | 0 | 5,028,536 | It's your "excel sheets with macros" that is going to cause you no end of problems. If you're on a Windows platform, with Excel installed, then PHP's COM extension should allow you to do this. Otherwise, I'm not aware of any PHP library which allows you to create macros... not even PHPExcel. I suspect the same will apply with most languages, other than perhaps those running with .Net (and possibly Mono).
However, do you really need macros to play with colour? Can't you do this more simply with styles, and perhaps conditional formatting?
PS. What's your definition of "huge"? | 1 | 0 | 0 | Good xls exporter to generate excel sheets automatically with a few macros from any programming language? | 1 | java,php,python,macros,xls | 0 | 2011-02-17T11:49:00.000 |
Basically I'm looking for an equivalent of DataMapper.auto_upgrade! from the Ruby world.
In other words:
change the model
run some magic -> current db schema is investigated and changed to reflect the model
profit
Of course, there are cases when it's impossible for such an alteration to be non-destructive, e.g. when you delete some attribute. But I don't mean such cases. I'm looking for a general solution which doesn't get in the way when rapidly prototyping and changing the schema.
TIA | 0 | 0 | 0 | 0 | false | 5,037,471 | 1 | 220 | 1 | 0 | 0 | 5,036,118 | Sqlalchemy-migrate (http://packages.python.org/sqlalchemy-migrate/) is intended to help do these types of operations. | 1 | 0 | 0 | Can SQLAlchemy do a non-destructive alter of the db comparing the current model with db schema? | 1 | python,orm,sqlalchemy | 0 | 2011-02-17T23:44:00.000 |
I have recently converted my workspace file format for my application to sqlite. In order to ensure robust operation on NFS I've used a common update policy: I do all modifications to a copy stored in a temp location on the local harddisk. Only when saving do I modify the original file (potentially on NFS) by copying over the original file with the temp file. I only open the original file to keep an exclusive lock on it, so if someone else tries to open it they will be warned that someone else is using it.
The problem is this: When I go to save my temp file back over the original file I must release the lock on the orginal file, this provides a window for someone else to get in and take the original, albeit a small window.
I can think of a few ways around this:
(1) being to simply dump the contents of the temp into the original by using sql, i.e. drop tables on the original, vacuum the original, select from temp and insert into the original. I don't like doing sql operations on a sqlite file stored on NFS though. This scares me with corruption issues. Am I right to think like this?
(2) Use various extra files to act as a guard to prevent others from coming in while copying the temp over the original. Using files as a mutex is problematic at best. I also don't like the idea of having extra files hanging around if the application crashes.
I'm wondering if anyone has any different solutions for this. Again: how do I copy the temp file over the original file while ensuring other applications don't sneak in and grab the original file while doing so?
I'm using python2.5, sqlalchemy 0.6.6 and sqlite 3.6.20
Thanks,
Dean | 3 | 2 | 0.379949 | 0 | false | 5,095,693 | 0 | 2,749 | 1 | 0 | 0 | 5,043,327 | SQLite NFS issues are due to broken caching and locking. If your process is the only one accessing the file on NFS then you'll be ok.
The SQLite backup API was designed to solve exactly your problem. You can either backup directly to the NFS database or to another local temp file and then copy that. The backup API deals with all the locking and concurrency issues.
You can use APSW to get access to the backup API or the most recent version of pysqlite. (Disclosure: I am the APSW author.) | 1 | 0 | 0 | How to ensure a safe file sync with sqlite and NFS | 1 | python,sqlite,sqlalchemy,nfs | 0 | 2011-02-18T15:44:00.000 |
Consider this test case:
import sqlite3
con1 = sqlite3.connect('test.sqlite')
con1.isolation_level = None
con2 = sqlite3.connect('test.sqlite')
con2.isolation_level = None
cur1 = con1.cursor()
cur2 = con2.cursor()
cur1.execute('CREATE TABLE foo (bar INTEGER, baz STRING)')
con1.isolation_level = 'IMMEDIATE'
cur1.execute('INSERT INTO foo VALUES (1, "a")')
cur1.execute('INSERT INTO foo VALUES (2, "b")')
print cur2.execute('SELECT * FROM foo').fetchall()
con1.commit()
print cur2.execute('SELECT * FROM foo').fetchall()
con1.rollback()
print cur2.execute('SELECT * FROM foo').fetchall()
From my knowledge I was expecting to see this as a result:
[]
[(1, u'a'), (2, u'b')]
[]
But here it's resulting in this:
[]
[(1, u'a'), (2, u'b')]
[(1, u'a'), (2, u'b')]
So the call to the rollback() method on the first connection didn't revert the previously committed changes. Why? Shouldn't it roll them back?
Thank you in advance. | 0 | 3 | 1.2 | 0 | true | 5,051,345 | 0 | 488 | 1 | 0 | 0 | 5,051,151 | You can't both commit and rollback the same transaction. con1.commit() ends your transaction on that cursor. The next con1.rollback() is either being silently ignored or is rolling back an empty transaction. | 1 | 0 | 0 | Python sqlite3 module not rolling back transactions | 1 | python,sqlite,rollback | 0 | 2011-02-19T13:59:00.000 |
I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run on different machines.
What is the "best" way to architect the shared database, in a manner that supports changes to the database schema going forward. Should I try incorporating Django's ORM in the Twisted framework and used deferreds to make it non-blocking? Should I be stuck creating and maintaining two separate databases schemas / interfaces, one in Django's model and the other using twisted.enterprise.row?
Similarly, with user authentication, should I utilize twisted's user authentication functionality, or try to include Django modules into the gameserver to handle user authentication on the game side? | 9 | 2 | 0.197375 | 0 | false | 5,051,832 | 1 | 2,454 | 2 | 0 | 0 | 5,051,408 | I would just avoid the Django ORM, it's not all that and it would be a pain to access outside of a Django context (witness the work that was required to make Django support multiple databases). Twisted database access always requires threads (even with twisted.adbapi), and threads give you access to any ORM you choose. SQLalchemy would be a good choice. | 1 | 0 | 0 | Sharing a database between Twisted and Django | 2 | python,database,django,twisted | 0 | 2011-02-19T14:42:00.000 |
I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run on different machines.
What is the "best" way to architect the shared database, in a manner that supports changes to the database schema going forward. Should I try incorporating Django's ORM in the Twisted framework and used deferreds to make it non-blocking? Should I be stuck creating and maintaining two separate databases schemas / interfaces, one in Django's model and the other using twisted.enterprise.row?
Similarly, with user authentication, should I utilize twisted's user authentication functionality, or try to include Django modules into the gameserver to handle user authentication on the game side? | 9 | 10 | 1.2 | 0 | true | 5,051,760 | 1 | 2,454 | 2 | 0 | 0 | 5,051,408 | First of all I'd identify why you need both Django and Twisted. Assuming you are comfortable with Twisted using twisted.web and auth will easily be sufficient and you'll be able to reuse your database layer for both the frontend and backend apps.
Alternatively you could look at it the other way: what is Twisted doing better as a game server? Are you hoping to support more players (more simultaneous connections) or something else? Consider that if you must use threads within twisted to do blocking database access, you are most likely not going to be able to efficiently/reliably support hundreds of simultaneous threads. Remember python has a Global Interpreter Lock so threads are not necessarily the best way to scale.
You should also consider why you are looking to use a SQL Database and an ORM. Does your game have data that is really best suited to being stored in a relational database? Perhaps it's worth examining something like MongoDB or another key-value or object database for storing game state. Many of these NoSQL stores have both blocking drivers for use in Django and non-blocking drivers for use in Twisted (txmongo for example).
That said, if you're dead set on using both Django and Twisted there are a few techniques for embedding blocking DB access into a non-blocking Twisted server.
adbapi (uses twisted thread pool)
Direct use of the twisted thread pool using reactor.deferToThread
The Storm ORM has a branch providing Twisted support (it handles deferToThread calls internally)
SAsync is a library that tries to make SQLAlchemy work in an Async way
Have twisted interact via RPC with a process that manages the blocking DB
So you should be able to manage the Django ORM objects yourself by importing them in twisted and being very careful making calls to reactor.deferToThread. There are many possible issues when working with these objects within twisted in that some ORM objects can issue SQL when accessing/setting a property, etc.
I realize this isn't necessarily the answer you were expecting but perhaps more detail about what you're hoping to accomplish and why you are choosing these specific technologies will allow folks to get you better answers. | 1 | 0 | 0 | Sharing a database between Twisted and Django | 2 | python,database,django,twisted | 0 | 2011-02-19T14:42:00.000 |
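A tiny sketch of the deferToThread technique from that list (the lookup function is a placeholder for whatever blocking ORM or DB-API call you use): the blocking call runs in Twisted's thread pool and the result comes back as a Deferred.
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def blocking_lookup(player_id):
    # placeholder for any blocking ORM / DB-API query
    return {'player': player_id, 'score': 42}

def on_result(row):
    print 'got', row
    reactor.stop()

d = deferToThread(blocking_lookup, 7)   # runs in the reactor's thread pool
d.addCallback(on_result)
reactor.run()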
I am trying to set up MySQL for Python (the MySQLdb package) on Windows so that I can use it with the Django web framework.
I have just installed MySQL Community Server 5.5.9 and I have managed to run it and test it using the testing procedures suggested in the MySQL 5.5 Reference Manual. However, I discovered that I still don't have the MySQL AB folder, the subsequent MySQL Server 5.5 folder and regkey in the HKEY_LOCAL_MACHINE, which is needed to build the MySQLdb package.
From the MySQL 5.5 Reference Manual, it says that:
The MySQL Installation Wizard creates one Windows registry key in a typical install situation, located in HKEY_LOCAL_MACHINE\SOFTWARE\MySQL AB.
However, I do have the Start Menu short cut and all the program files installed. I have used the msi installation and installed without problems. Should I be getting the MySQL AB folder? Does anyone know what has happened and how I should get the MySQL AB/MySQL Server 5.5 folder and the regkey? | 5 | 2 | 1.2 | 0 | true | 5,412,380 | 1 | 854 | 1 | 0 | 0 | 5,059,883 | I found that the key was actually generated under HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE. Thanks. | 1 | 0 | 0 | MySQL AB, MySQL Server 5.5 Folder in HKEY_LOCAL_MACHINE not present | 1 | python,mysql,django | 0 | 2011-02-20T20:43:00.000 |
I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.
For example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a wide query, accessing several tables.
Now, from the object-oriented perspective, it would be great if the User class implemented a method that returned that information. From a database model perspective, I don't like at all the idea of adding functionality to a "row instance" that needs to query other tables. I would like to keep all properties and methods in the Model classes relevant to that single row, so as to avoid scattering the business logic all over the place.
How should I go about implementing database-wide queries that, from an object-oriented standpoint, belong to a single object? Should I have an external kinda God-object that knows how to collect and organize this information? Or is there a better, more elegant solution? | 3 | 1 | 1.2 | 0 | true | 5,064,564 | 1 | 110 | 2 | 0 | 0 | 5,063,658 | I recommend extending Django's Model-Template-View approach with a controller. I usually have a controller.py within my apps which is the only interface to the data sources. So in your above case I'd have something like get_all_parties_and_people_for_user(user).
This is especially useful when your "data taken from several tables in the database" becomes "data taken from several tables in SEVERAL databases" or even "data taken from various sources, e.g. databases, cache backends, external apis, etc.". | 1 | 0 | 0 | Correct way of implementing database-wide functionality | 2 | python,database,django,django-models,coding-style | 0 | 2011-02-21T08:17:00.000 |
I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.
For example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a wide query, accessing several tables.
Now, from the object-oriented perspective, it would be great if the User class implemented a method that returned that information. From a database model perspective, I don't like at all the idea of adding functionality to a "row instance" that needs to query other tables. I would like to keep all properties and methods in the Model classes relevant to that single row, so as to avoid scattering the business logic all over the place.
How should I go about implementing database-wide queries that, from an object-oriented standpoint, belong to a single object? Should I have an external kinda God-object that knows how to collect and organize this information? Or is there a better, more elegant solution? | 3 | 0 | 0 | 0 | false | 5,065,280 | 1 | 110 | 2 | 0 | 0 | 5,063,658 | User.get_attended_birthday_parties() or Event.get_attended_parties(user) work fine: it's an interface that makes sense when you use it. Creating an additional "all-purpose" object will not make your code cleaner or easier to maintain. | 1 | 0 | 0 | Correct way of implementing database-wide functionality | 2 | python,database,django,django-models,coding-style | 0 | 2011-02-21T08:17:00.000 |
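A hedged sketch of both suggestions above (the models, fields and helper names are invented, not the asker's): either a plain helper that hangs the query off the user, or a controller-style module function; both just issue ORM queries across the related tables.
# models.py (illustrative models, not from the question)
from django.db import models
from django.contrib.auth.models import User

class Party(models.Model):
    date = models.DateField()
    attendees = models.ManyToManyField(User, related_name='parties')

# option 1: simple per-object interface
def parties_attended(user):
    return user.parties.all()

# option 2: controller-style helper module
def get_all_parties_and_people_for_user(user):
    parties = user.parties.all()
    return [(party, party.attendees.exclude(pk=user.pk)) for party in parties]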
So I am trying to take a large number of xml files (None are that big in particular and I can split them up as I see fit.) In all there is about 70GB worth of data. For the sake of reference the loading script is written in python and uses psycopg2 to interface with a postgres table.
Anyway, what I am trying to do is to deal with data that works something like this. The relation count is the number of times the two tags are seen together, and the tag count is the number of times the tag has been seen. I have all the tags already; it's just getting the times they appear and the times they appear together out of the xml that has become a problem.
Tag Table | Relations Table
TagID TagName TagCount | tag1 tag2 relationCount
1 Dogs 20 | 1 2 5
2 Beagles 10 | 1 3 2
3 Birds 11 | 2 3 7
The problem I am encountering is getting the data to load in a reasonable amount of time. I have been iterating over the update methods as I count how often the tags come up in the xml files.
I suppose I am asking if anyone has any ideas. Should I create some sort of buffer to hold the update info and try to use cur.executeall() periodically and/or should I restructure the database somehow. Anyway, any and all thoughts on this issue are appreciated. | 0 | 3 | 0.53705 | 0 | false | 5,066,699 | 0 | 126 | 1 | 0 | 0 | 5,066,569 | If I understand this "...I have been iterating over the update methods" it sounds like you are updating the database rows as you go? If this is so, consider writing some code that passes the XML, accumulates the totals you are tracking, outputs them to a file, and then loads that file with COPY.
If you are updating existing data, try something like this:
1) Pass the XML file(s) to generate all new totals from the new data
2) COPY that into a working table - a table that you clear out before and after every batch
3) Issue an INSERT from the working table to the real tables for all rows that cannot be found, inserting zeros for all values
4) Issue an UPDATE from the working table to the real tables to increment the counters.
5) Truncate the working table. | 1 | 0 | 0 | Efficiently creating a database to analyze relationships between information | 1 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-21T13:30:00.000 |
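A minimal sketch of that parse -> accumulate -> COPY -> upsert flow with psycopg2 might look as follows. The staging and real table names, the column names and the connection string are assumptions, not part of the original schema.

```python
# Accumulate pair counts in memory, bulk-load them with COPY, then merge.
import io
from collections import Counter

import psycopg2

pair_counts = Counter()          # (tag1_id, tag2_id) -> count
# ... parse the XML files here and fill pair_counts ...

buf = io.StringIO()
for (t1, t2), n in pair_counts.items():
    buf.write("%d\t%d\t%d\n" % (t1, t2, n))
buf.seek(0)

conn = psycopg2.connect("dbname=tags")                     # placeholder DSN
cur = conn.cursor()
cur.execute("TRUNCATE relations_staging")
cur.copy_from(buf, "relations_staging", columns=("tag1", "tag2", "relationcount"))

# Insert rows that do not exist yet, with zero counts.
cur.execute("""
    INSERT INTO relations (tag1, tag2, relationcount)
    SELECT s.tag1, s.tag2, 0
    FROM relations_staging s
    LEFT JOIN relations r ON r.tag1 = s.tag1 AND r.tag2 = s.tag2
    WHERE r.tag1 IS NULL
""")
# Then increment the counters from the staging table.
cur.execute("""
    UPDATE relations r
    SET relationcount = r.relationcount + s.relationcount
    FROM relations_staging s
    WHERE r.tag1 = s.tag1 AND r.tag2 = s.tag2
""")
cur.execute("TRUNCATE relations_staging")
conn.commit()
```

The key point is that the per-row update loop disappears: totals are accumulated in memory, bulk-loaded with COPY, and then merged with two set-based statements.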
I am using Python 2.7 and trying to get a Django project running on a MySQL backend.
I have downloaded MySQLdb and followed the guide here: http://cd34.com/blog/programming/python/mysql-python-and-snow-leopard/
Yet when I go to run the Django project, the following traceback occurs:
Traceback (most recent call last):
File "/Users/andyarmstrong/Documents/workspace/BroadbandMapper/src/BroadbandMapper/manage.py", line 11, in
execute_manager(settings)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 209, in execute
translation.activate('en-us')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 66, in activate
return real_activate(language)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/functional.py", line 55, in _curried
return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 36, in delayed_loader
return getattr(trans, real_name)(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 193, in activate
_active[currentThread()] = translation(language)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 176, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 159, in _fetch
app = import_module(appname)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 1, in
from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/helpers.py", line 1, in
from django import forms
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/forms/__init__.py", line 17, in
from models import *
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/forms/models.py", line 6, in
from django.db import connections
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 77, in
connection = connections[DEFAULT_DB_ALIAS]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 92, in __getitem__
backend = load_backend(db['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 33, in load_backend
return import_module('.base', backend_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 14, in
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/andyarmstrong/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib
Referenced from: /Users/andyarmstrong/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp/_mysql.so
Reason: image not found
I have also tried the following: http://whereofwecannotspeak.wordpress.com/2007/11/02/mysqldb-python-module-quirk-in-os-x/
adding a link between the mysql lib directory and somewhere else...
Help! | 2 | 2 | 1.2 | 0 | true | 5,072,940 | 1 | 1,535 | 2 | 1 | 0 | 5,072,066 | I eventually managed to solve the problem by installing Python 2.7 with MacPorts and installing MySQLdb through MacPorts as well - it was pretty simple after that. | 1 | 0 | 0 | Python mysqldb on Mac OSX 10.6 not working | 2 | python,mysql,django,macos | 0 | 2011-02-21T22:35:00.000
I am using Python 2.7 and trying to get a Django project running on a MySQL backend.
I have downloaded MySQLdb and followed the guide here: http://cd34.com/blog/programming/python/mysql-python-and-snow-leopard/
Yet when I go to run the Django project, the following traceback occurs:
Traceback (most recent call last):
File "/Users/andyarmstrong/Documents/workspace/BroadbandMapper/src/BroadbandMapper/manage.py", line 11, in
execute_manager(settings)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 209, in execute
translation.activate('en-us')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 66, in activate
return real_activate(language)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/functional.py", line 55, in _curried
return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 36, in delayed_loader
return getattr(trans, real_name)(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 193, in activate
_active[currentThread()] = translation(language)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 176, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 159, in _fetch
app = import_module(appname)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 1, in
from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/helpers.py", line 1, in
from django import forms
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/forms/__init__.py", line 17, in
from models import *
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/forms/models.py", line 6, in
from django.db import connections
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 77, in
connection = connections[DEFAULT_DB_ALIAS]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 92, in __getitem__
backend = load_backend(db['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 33, in load_backend
return import_module('.base', backend_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 14, in
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/andyarmstrong/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp/_mysql.so, 2): Library not loaded: libmysqlclient.16.dylib
Referenced from: /Users/andyarmstrong/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-x86_64.egg-tmp/_mysql.so
Reason: image not found
I have also tried the following: http://whereofwecannotspeak.wordpress.com/2007/11/02/mysqldb-python-module-quirk-in-os-x/
adding a link between the mysql lib directory and somewhere else...
Help! | 2 | 0 | 0 | 0 | false | 5,305,496 | 1 | 1,535 | 2 | 1 | 0 | 5,072,066 | You need to add the directory containing the MySQL client libraries to the dynamic linker's search path (on Mac OS X that is DYLD_LIBRARY_PATH, the counterpart of LD_LIBRARY_PATH on Linux). | 1 | 0 | 0 | Python mysqldb on Mac OSX 10.6 not working | 2 | python,mysql,django,macos | 0 | 2011-02-21T22:35:00.000
I'm building my first app with GAE to allow users to run elections, and I create an Election entity for each election.
To avoid storing too much data, I'd like to automatically delete an Election entity after a certain period of time -- say three months after the end of the election. Is it possible to do this automatically in GAE? Or do I need to do this manually?
If it matters, I'm using the Python interface. | 3 | 5 | 1.2 | 0 | true | 5,079,939 | 1 | 2,483 | 1 | 1 | 0 | 5,079,885 | Assuming you have a DateProperty on the entities indicating when the election ended, you can have a cron job search for any older than 3 months every night and delete them. | 1 | 0 | 0 | Automatic deletion or expiration of GAE datastore entities | 3 | python,google-app-engine,google-cloud-datastore | 0 | 2011-02-22T15:09:00.000 |
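A sketch of that cron-driven cleanup with the classic db API is shown below; the Election model's end_date property, the handler name and the batch size are assumptions for illustration.

```python
# Hypothetical cleanup handler; wire it to a nightly job in cron.yaml.
import datetime

from google.appengine.ext import db, webapp


class Election(db.Model):            # assumed model, shown only for completeness
    end_date = db.DateTimeProperty()


class CleanupOldElections(webapp.RequestHandler):
    def get(self):
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=90)
        keys = Election.all(keys_only=True).filter('end_date <', cutoff).fetch(500)
        db.delete(keys)              # delete in batches; the nightly cron catches the rest
```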
I know that once I figure this one out, or somebody shows me, it'll be a forehead slapper. Before posting any question, I try for at least three hours and do quite a bit of searching. There are several hints that come close, but nothing I have adopted/tried seems to work.
I am taking a byte[] from Java and passing it via JSON (with Gson) to a Python JSON endpoint using Flask. This byte[] is stored in a Python object as a list of integers when received, but now I need to send it to MySQLdb and store it as a BLOB. The contents are binary file data.
How do I convert a Python integer list [1,2,-3,-143....] to something that I can store in MySQL? I have tried bytearray() and array.array(), but those choke when I access the list directly from the object and try to convert it to a string to store through MySQLdb.
Any links or hints are greatly appreciated. | 1 | 0 | 0 | 0 | false | 31,187,500 | 0 | 7,408 | 1 | 0 | 0 | 5,088,671 | I found ''.join(map(lambda x: chr(x % 256), data)) to be painfully slow (~4 minutes) for my data on python 2.7.9, where a small change to str(bytearray(map(lambda x: chr(x % 256), data))) only took about 10 seconds. | 1 | 0 | 1 | Convert Java byte array to Python byte array | 3 | java,python,mysql,binary,byte | 0 | 2011-02-23T08:47:00.000 |
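For what it's worth, a hedged sketch of the conversion plus insert could look like this; the sample list, table, column and connection details are placeholders, not values from the original question.

```python
# Turn a JSON integer list of Java signed bytes (-128..127) into a binary
# string that MySQLdb can store in a BLOB column via a parameterized insert.
import MySQLdb

data = [1, 2, -3, -112]              # stand-in for the integer list received via JSON


def to_blob(int_list):
    # Mask to 0..255 so negative Java bytes map back to their unsigned values.
    return bytes(bytearray(b & 0xFF for b in int_list))


conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="files")
cur = conn.cursor()
cur.execute("INSERT INTO uploads (name, payload) VALUES (%s, %s)",
            ("example.bin", to_blob(data)))
conn.commit()
```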
I have installed MySQLdb through a precompiled .exe, and it is stored in site-packages. Now I don't know how to test whether it is accessible, and the main problem is how to import it in my application, i.e. import MySQLdb. I am very new to Python and just want to work with my existing MySQL database. Thanks in advance... | 0 | 3 | 1.2 | 0 | true | 5,090,944 | 0 | 165 | 1 | 0 | 0 | 5,090,870 | Just open your CMD/console, type python, press Enter, then type import MySQLdb and press Enter again.
If no error is shown, you're ok! | 1 | 0 | 0 | how to import mysqldb | 1 | python,mysql | 0 | 2011-02-23T12:23:00.000 |
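Beyond the bare import test, a short connectivity check might look like this; the host, user, password and database name are placeholders for your own MySQL settings.

```python
# Minimal MySQLdb connectivity check.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())                # prints the server version if everything works
conn.close()
```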
I want to retain the flexibility of switching between MySQL and PostgreSQL without the awkwardness of using an ORM - SQL is a fantastic language and I would like to retain its power without the additional overhead of an ORM.
So... is there a best practice for abstracting the database layer of a Python application to provide the stated flexibility?
Thanks community! | 1 | 1 | 0.099668 | 0 | false | 5,090,938 | 0 | 965 | 1 | 0 | 0 | 5,090,901 | Have a look at SQLAlchemy. You can use it to execute literal SQL on several RDBMS, including MySQL and PostgreSQL. It wraps the DB-API adapters with a common interface, so they will behave as similarly as possible.
SQLAlchemy also offers programmatic generation of SQL, with or without the included ORM, which you may find very useful. | 1 | 0 | 0 | Starting new project: database abstraction in Python, best practice for retaining option of MySQL or PostgreSQL without ORM | 2 | python,mysql,database,postgresql,abstraction | 0 | 2011-02-23T12:25:00.000 |
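As an illustration of that approach, here is a small sketch using a recent SQLAlchemy: the literal SQL and bind parameters stay the same, and switching backends is just a matter of changing the URL. The table, column names and credentials are made up.

```python
# Literal SQL through SQLAlchemy Core; only the engine URL is backend-specific.
from sqlalchemy import create_engine, text

# engine = create_engine("mysql://user:pw@localhost/mydb")
engine = create_engine("postgresql://user:pw@localhost/mydb")

with engine.connect() as conn:
    rows = conn.execute(text("SELECT id, name FROM customers WHERE name = :n"),
                        {"n": "Alice"})
    for row in rows:
        print(row.id, row.name)
```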
I'm looking for a library that lets me run SQL-like queries on python "object databases". With object database I mean a fairly complex structure of python objects and lists in memory. Basically this would be a "reverse ORM" - instead of providing an object oriented interface to a relational database, it would provide a SQL-ish interface to an object database.
C#'s LINQ is very close. Python's list comprehensions are very nice, but the syntax gets hairy when doing complex things (sorting, joining, etc.). Also, I can't (easily) create queries dynamically with list comprehensions.
The actual syntax could either be string based, or use a object-oriented DSL (a la from(mylist).select(...)). Bonus points if the library would provide some kind of indices to speed up search.
Does this exist or do I have to invent it? | 17 | 2 | 0.057081 | 0 | false | 5,127,794 | 0 | 9,831 | 1 | 0 | 0 | 5,126,776 | One major difference between what SQL does and what you can do in idiomatic Python: in SQL, you tell the evaluator what information you are looking for, and it works out the most efficient way of retrieving it based on the structure of the data it holds. In Python, you can only tell the interpreter how you want the data retrieved; there is no equivalent of a query planner.
That said, there are a few extra tools above and beyond list comprehensions that help a lot.
First, use constructs that closely resemble the declarative nature of SQL, many of which are builtins: map, filter, reduce, zip, all, any and sorted, as well as the contents of the operator, functools and itertools packages, all offer a fairly concise way of expressing data queries. | 1 | 0 | 1 | Query language for python objects | 7 | python,object-oriented-database | 0 | 2011-02-26T12:03:00.000
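To make that concrete, here is a small sketch of SELECT/WHERE/ORDER BY/GROUP BY-style queries over plain objects using only builtins and itertools; the Person records are invented for the example.

```python
# In-memory "queries" over plain Python objects.
from collections import namedtuple
from itertools import groupby
from operator import attrgetter

Person = namedtuple("Person", "name city age")
people = [Person("Ann", "Oslo", 34), Person("Bob", "Oslo", 28),
          Person("Cid", "Lima", 41)]

# SELECT name FROM people WHERE age > 30 ORDER BY age DESC
names = [p.name for p in sorted((p for p in people if p.age > 30),
                                key=attrgetter("age"), reverse=True)]

# GROUP BY city (groupby needs its input pre-sorted on the grouping key)
by_city = {k: list(g) for k, g in groupby(sorted(people, key=attrgetter("city")),
                                          key=attrgetter("city"))}
```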
I have three tables, 1-Users, 2-Softwares, 3-UserSoftwares.
Suppose the Users table has 6 user records (say U1, U2, ..., U6) and the Softwares table has 4 different softwares (say S1, S2, S3, S4), and UserSoftwares stores a reference only if a user has requested the given software.
For example, UserSoftwares (5 records) has only two columns (userid, softwareid), which reference the other two tables, and the data is:
U1 S1
U2 S2
U2 S3
U3 S3
U4 S1
Now I am expecting the following results (if the currently logged-in user is U2):
S1 Disable
S2 Enable
S3 Enable
S4 Disable
Here, the first column is the software id or name and the second column is a status, which has only two values (Enable/Disable) based on the UserSoftwares table (model). Note that status is not a field of any model (table).
"My Logic is:
1. loop through each software in softwares model
2. look for the softwareid with the current logged-in userid (U2) in the UserSoftwares model:
if it is found then set status='Enable'
if it is not found then set status='Disable'
3. add this status property to software object.
4. repeat this procedure for all softwares.
"
What should the query be in Python on Google App Engine to achieve the above result? | 1 | 0 | 0 | 0 | false | 5,143,851 | 1 | 873 | 1 | 1 | 0 | 5,142,192 | If you are looking for a join - there are no joins in GAE. However, it is pretty easy to make two simple queries (Softwares and UserSoftwares) and calculate all the additional data manually. | 1 | 0 | 0 | Querying on multiple tables using google apps engine (Python) | 3 | python,google-app-engine,model | 0 | 2011-02-28T12:48:00.000
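A sketch of that two-query approach with the classic db API could look like the following; the model classes and property names are guesses based on the question's description, not real code from the project.

```python
# Hypothetical models mirroring the question's Softwares/UserSoftwares tables.
from google.appengine.ext import db


class Softwares(db.Model):
    name = db.StringProperty()


class UserSoftwares(db.Model):
    user = db.ReferenceProperty(collection_name='requested_softwares')
    software = db.ReferenceProperty(Softwares, collection_name='requesting_users')


def software_statuses(current_user):
    # Query 1: keys of the software this user has requested, read from the
    # reference property without dereferencing each entity.
    requested = set(UserSoftwares.software.get_value_for_datastore(us)
                    for us in UserSoftwares.all().filter('user =', current_user))
    # Query 2: every software, marked Enable/Disable in memory.
    return [(sw.name, 'Enable' if sw.key() in requested else 'Disable')
            for sw in Softwares.all()]
```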