Column schema (name: type, observed range)
Question: string, length 25 – 7.47k
Q_Score: int64, 0 – 1.24k
Users Score: int64, -10 – 494
Score: float64, -1 – 1.2
Data Science and Machine Learning: int64, 0 – 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k – 72.5M
Web Development: int64, 0 – 1
ViewCount: int64, 15 – 1.37M
Available Count: int64, 1 – 9
System Administration and DevOps: int64, 0 – 1
Networking and APIs: int64, 0 – 1
Q_Id: int64, 39.1k – 48M
Answer: string, length 16 – 5.07k
Database and SQL: int64, 1 – 1
GUI and Desktop Applications: int64, 0 – 1
Python Basics and Environment: int64, 0 – 1
Title: string, length 15 – 148
AnswerCount: int64, 1 – 32
Tags: string, length 6 – 90
Other: int64, 0 – 1
CreationDate: string, length 23 – 23
I have a SQL dump of a legacy DB and a folder with images that are referenced by some rows of certain tables, and I need to migrate that data to the new Django models. The specific problem is how to "perform" the upload, but in a management command. When the table with the referencing field is migrated to its corresponding model, I also need to set the image field of the model and process the filename according to the upload_to parameter of the ImageField. How do I programmatically populate the image field from a file path or a file descriptor?
1
0
0
0
false
8,280,938
1
438
1
0
0
8,280,859
One approach would be to create a utility Django project specifying your legacy database in settings.py. Then use the inspectdb management command to create a Django model representation of your legacy database, and finally use dumpdata to get your data in JSON format. You could then write your own script that reads that JSON and inserts your old data into your new models (a rough sketch of setting the ImageField from a file path follows below this record).
1
0
0
Migrate a legacy DB to Django, with image files
1
python,django,data-migration,filefield
0
2011-11-26T19:05:00.000
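A hedged sketch of the ImageField part, for a management command: the model name (Photo), its field names, and the hard-coded rows standing in for data parsed from the SQL dump are all hypothetical; the point is that FieldFile.save() with a django.core.files.File wrapper respects upload_to and copies the bytes into storage.

import os
from django.core.files import File
from django.core.management.base import BaseCommand

from myapp.models import Photo  # hypothetical target model with an ImageField named "image"


class Command(BaseCommand):
    help = "Attach legacy image files to the migrated rows"

    def handle(self, *args, **options):
        # Stand-in for rows parsed out of the legacy SQL dump: (path on disk, title).
        rows = [("/tmp/legacy_images/img_001.jpg", "First legacy item")]
        for path, title in rows:
            photo = Photo(title=title)
            with open(path, "rb") as fh:
                # FieldFile.save() runs the name through upload_to and copies the
                # bytes into the configured storage, then saves the model instance.
                photo.image.save(os.path.basename(path), File(fh), save=True)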
Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?
0
0
0
0
false
8,299,759
0
98
2
0
0
8,299,614
Your strategy is reasonable, though I would first look at doing as much of the work as possible in the database query, using LIKE and other SQL functions. It should be possible to make a query that matches complex criteria (a small parameterised example follows below this record).
1
0
0
Could someone give me their two cents on this optimization strategy
3
python,mysql
1
2011-11-28T17:14:00.000
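A hedged sketch of pushing the coarse filter into SQL: the table and column names (customers, last_name, first_name) and connection details are invented; the point is the parameterised LIKE query that returns a small candidate set to fuzzy-match in Python.

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="crm")
cur = conn.cursor()
cur.execute(
    "SELECT id, first_name, last_name FROM customers "
    "WHERE last_name = %s AND first_name LIKE %s",
    ("Smith", "J%"),          # driver-side escaping, no string formatting
)
candidates = cur.fetchall()   # small candidate set to refine in Python
conn.close()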
Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?
0
2
0.132549
0
false
8,299,780
0
98
2
0
0
8,299,614
Regarding: "would this be faster:" The behind-the-scenes logistics of the SQL engine are really optimized for this sort of thing. You might need to create an SQL PROCEDURE or a fairly complex query, however. Caveat, if you're not particularly good at or fond of maintaining SQL, and this isn't a time-sensitive query, then you might be wasting programmer time over CPU/IO time in getting it right. However, if this is something that runs often or is time-sensitive, you should almost certainly be building some kind of JOIN logic in SQL, passing in the appropriate values (possibly wildcards), and letting the database do the filtering in the relational data set, instead of collecting a larger number of "wrong" records and then filtering them out in procedural code. You say the database is "pretty slow." Is this because it's on a distant host, or because the tables aren't indexed for the types of searches you're doing? … If you're doing a complex query against columns that aren't indexed for it, that can be a pain; you can use various SQL tools including ANALYZE to see what might be slowing down a query. Most SQL GUI's will have some shortcuts for such things, as well.
1
0
0
Could someone give me their two cents on this optimization strategy
3
python,mysql
1
2011-11-28T17:14:00.000
I am making an internal API in Python (pardon my terms) that provides a layer over MySQL and Solr (databases) with only simple computing. A Python program that spawns from scratch waits 80 ms for Solr while taking negligible time by itself. I am worried about the incomplete threading support of Python. So which of the modern Thrift servers allows high-performance request handling? In Python, I could make a WSGI app under Apache workers that gives me: pooling of resources such as DB connection objects, high performance with a minimum of processes, graceful dropping of requests, (relatively) graceful code reloading, and a keep-alive mechanism (restart the application if it crashes).
1
1
0.197375
0
false
8,842,820
0
1,136
1
0
0
8,308,610
Apparently TProcessPoolServer is a good server: it forks separate worker processes, avoiding threading issues (a rough setup sketch follows below this record).
1
0
0
Performance of TNonblockingServer, TThreadPoolServer for DB-bound server in Python
1
python,thrift
0
2011-11-29T09:42:00.000
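A hedged sketch of what a TProcessPoolServer setup might look like, written from memory of the Thrift Python package layout; MyService (the module generated by the Thrift compiler) and MyHandler (your handler implementation) are hypothetical stand-ins, and the port is arbitrary.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TProcessPoolServer

from myservice import MyService          # hypothetical generated code
from myservice_handler import MyHandler  # hypothetical handler class

processor = MyService.Processor(MyHandler())
transport = TSocket.TServerSocket(port=9090)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()

server = TProcessPoolServer.TProcessPoolServer(processor, transport, tfactory, pfactory)
server.setNumWorkers(8)   # one DB-bound request per worker process, no GIL contention
server.serve()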
The idea is to make a script that would get stored procedure and UDF contents (code) every hour (for example) and add it to an SVN repo. As a result we have an SQL version control system. Does anyone know how to back up stored procedure code using Python (SQLAlchemy, pyodbc or something similar)? I've done this via C# before using SQL Management Objects. Thanks in advance!
0
1
0.197375
0
false
8,414,549
1
244
1
0
0
8,412,636
There is no easy way to access SMO from Python (because there is no generic solution for accessing .NET from Python), so I would write a command-line tool in C# and call it from Python using the subprocess module (see the sketch below this record). Perhaps you could do something with ctypes, but I have no idea if that's feasible. But perhaps a more important question is why you want or need to do this. Does the structure of your database really change so often? If so, presumably you have no real control over it, so what benefit does source control have in that scenario? How do you deploy database changes in the first place? Usually changes go from source control into production, not the other way around, so the 'master' source of DDL (including tables, indexes etc.) is SVN, not the database. But you haven't given much information about what you really need to achieve, so perhaps there is a good reason for needing to do this in your environment.
1
0
0
backup msSQL stored proc or UDF code via Python
1
c#,python,sql-server,stored-procedures,smo
0
2011-12-07T09:00:00.000
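A hedged sketch of the subprocess approach: "ScriptDbObjects.exe" is a hypothetical C#/SMO command-line tool (you would build it yourself) that writes each procedure/UDF definition to an output directory, after which the files are committed to SVN.

import datetime
import subprocess

out_dir = "sql_scripts"
result = subprocess.call(
    ["ScriptDbObjects.exe", "--server", "myserver", "--database", "mydb", "--out", out_dir]
)
if result != 0:
    raise RuntimeError("SMO scripting tool failed with exit code %d" % result)

# Then snapshot the dumped files into version control.
subprocess.call(["svn", "add", "--force", out_dir])
subprocess.call(["svn", "commit", out_dir, "-m",
                 "automated proc/UDF snapshot %s" % datetime.datetime.now().isoformat()])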
I am parsing the USDA's food database and storing it in SQLite for query purposes. Each food has associated with it the quantities of the same 162 nutrients. It appears that the list of nutrients (name and units) has not changed in quite a while, and since this is a hobby project I don't expect to follow any sudden changes anyway. But each food does have a unique quantity associated with each nutrient. So, how does one go about storing this kind of information sanely? My priorities are being friendly to multiple programming languages (Python and C++ having preference), sanity for me as the coder, and ease of retrieving nutrient sets to sum or plot over time. The two things that I had thought of so far were 162 columns (which I'm not particularly fond of, but it does make the queries simpler), or a food table that has a link to a nutrient_list table that then links to a static table with the nutrient name and units. The second seems more flexible in case my expectations are wrong, but I wouldn't even know where to begin on writing the queries for sums and time series. Thanks
2
4
0.379949
0
false
8,431,705
0
373
1
0
0
8,431,451
Use the second (more normalized) approach. You could even get away with fewer tables than you mentioned:
tblNutrients: NutrientID, NutrientName, NutrientUOM (unit of measure), other stuff
tblFood: FoodID, FoodName, other stuff
tblFoodNutrients: FoodID (FK), NutrientID (FK), UOMCount
It will be a nightmare to maintain a 160+ column database. If there is a time element involved too (can measurements change?), then you could add a date field to the nutrient and/or the foodnutrient table, depending on what could change. (A sketch of this schema and a sum query follows below this record.)
1
0
0
How to store data with large number (constant) of properties in SQL
2
c++,python,sql,sqlite
0
2011-12-08T13:07:00.000
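A hedged sketch of the normalised layout suggested above, using the table and column names from the answer; the nutrient name and food IDs in the sum query are invented placeholders, not real USDA identifiers.

import sqlite3

conn = sqlite3.connect("food.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS tblNutrients (
    NutrientID   INTEGER PRIMARY KEY,
    NutrientName TEXT,
    NutrientUOM  TEXT
);
CREATE TABLE IF NOT EXISTS tblFood (
    FoodID   INTEGER PRIMARY KEY,
    FoodName TEXT
);
CREATE TABLE IF NOT EXISTS tblFoodNutrients (
    FoodID     INTEGER REFERENCES tblFood(FoodID),
    NutrientID INTEGER REFERENCES tblNutrients(NutrientID),
    UOMCount   REAL
);
""")

# Sum one nutrient over an arbitrary set of foods: the kind of query
# the normalised layout makes straightforward.
total = conn.execute("""
    SELECT SUM(fn.UOMCount)
    FROM tblFoodNutrients fn
    JOIN tblNutrients n ON n.NutrientID = fn.NutrientID
    WHERE n.NutrientName = ? AND fn.FoodID IN (?, ?)
""", ("Protein", 1001, 1002)).fetchone()[0]
print(total)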
I have a question about deciding whether to use a MySQL database or a Mongo database; my decision depends heavily on these things: I want to select records between two dates (a period); is this possible? My application won't do any complex queries, just basic CRUD. It has Facebook integration, so sometimes I have to JOIN the users table in the current setup.
1
1
1.2
0
true
8,437,321
0
295
2
0
0
8,437,213
MySQL (SQL) or MongoDB (NoSQL): both can work for your needs. The idea behind choosing an RDBMS versus NoSQL is the requirements of your application: if your application cares about speed, no relations between the data are necessary, and your data schema changes very frequently, you can choose MongoDB. It is faster since no joins are needed and every piece of data is stored as a document. Otherwise, go for MySQL.
1
0
0
MongoDB or MySQL database
3
mysql,mongodb,mysql-python
0
2011-12-08T20:25:00.000
I have a question about deciding whether to use a MySQL database or a Mongo database; my decision depends heavily on these things: I want to select records between two dates (a period); is this possible? My application won't do any complex queries, just basic CRUD. It has Facebook integration, so sometimes I have to JOIN the users table in the current setup.
1
3
0.197375
0
false
8,437,280
0
295
2
0
0
8,437,213
Either DB will allow you to filter between dates, and I wouldn't use that requirement to make the decision (a small date-range sketch follows below this record). Some questions you should answer: Do you need to store your data in a relational system, like MySQL? Relational databases are better at cross-entity joining. Will your data be very complicated while you only make simple queries (e.g. by an ID)? If so, MongoDB may be a better fit, as storing and retrieving complex data is a cinch. Who will be querying the data, and from where? MySQL uses SQL for querying, which is a much more widely known skill than Mongo's JSON query syntax. These are just three questions to ask; in order to make a recommendation, we'd need to know more about your application.
1
0
0
MongoDB or MySQL database
3
mysql,mongodb,mysql-python
0
2011-12-08T20:25:00.000
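A hedged sketch of the between-two-dates filter mentioned above, on the MongoDB side with pymongo; the database, collection, and field names are invented.

import datetime
from pymongo import MongoClient

transactions = MongoClient()["myapp"]["transactions"]
start = datetime.datetime(2011, 12, 1)
end = datetime.datetime(2012, 1, 1)

docs = list(transactions.find({"created_at": {"$gte": start, "$lt": end}}))
print(len(docs), "transactions in the period")

# The MySQL equivalent would be along the lines of:
#   SELECT * FROM transactions WHERE created_at >= %s AND created_at < %s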
I'm looking for a flat-file, portable key-value store in Python. I'll be using strings for keys and either strings or lists for values. I looked at ZODB but I'd like something which is more widely used and more actively developed. Do any of the dbm modules in Python require system libraries or a database server (like MySQL or the like), or can I write to a file with any of them? If a dbm does not support Python lists, I imagine that I can just serialize them?
2
2
0.099668
0
false
8,528,030
0
4,198
1
0
0
8,528,001
You can look at the shelve module. It uses pickle under the hood and lets you create a key-value lookup that persists between launches (a short example follows below this record). Additionally, the json module, with its dump and load functions, would probably work pretty well too.
1
0
1
Flat file key-value store in python
4
python
0
2011-12-15T23:22:00.000
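A minimal sketch of the shelve suggestion above; the filename and keys are placeholders. Keys must be strings, and values (including lists) are pickled transparently.

import shelve

db = shelve.open("store.db")        # creates/opens a flat file next to the script
db["colors"] = ["red", "green"]     # lists are fine as values
db["owner"] = "alice"
db.close()

db = shelve.open("store.db")
print(db["colors"])                 # ['red', 'green'] persists between runs
db.close()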
I'm a user of a Python application that has poorly indexed tables, and I was wondering if it's possible to improve performance by converting the SQLite database into an in-memory database upon application startup. My thinking is that it would minimize the issue of full table scans, especially since SQLite might be creating autoindexes, as the documentation says that is enabled by default. How can this be accomplished using the SQLAlchemy ORM (which is what the application uses)?
2
0
0
0
false
8,549,565
0
1,450
2
0
0
8,549,326
Whenever you set a variable in Python you are instantiating an object. This means you are allocating memory for it. When you query sqlite you are simply reading information off the disk into memory. sqlalchemy is simply an abstraction. You read the data from disk into memory in the same way, by querying the database and setting the returned data to a variable.
1
0
0
Is it possible to read a SQLite database into memory with SQLAlchemy?
2
python,sqlite,sqlalchemy
0
2011-12-18T01:59:00.000
I'm a user of a Python application that has poorly indexed tables, and I was wondering if it's possible to improve performance by converting the SQLite database into an in-memory database upon application startup. My thinking is that it would minimize the issue of full table scans, especially since SQLite might be creating autoindexes, as the documentation says that is enabled by default. How can this be accomplished using the SQLAlchemy ORM (which is what the application uses)?
2
1
0.099668
0
false
10,002,601
0
1,450
2
0
0
8,549,326
At the start of the program, move the database file to a ramdisk, point SQLAlchemy at it and do your processing, and then move the SQLite file back to non-volatile storage. It's not a great solution, but it'll help you determine whether caching your database in memory is worthwhile (a rough sketch follows below this record).
1
0
0
Is it possible to read a SQLite database into memory with SQLAlchemy?
2
python,sqlite,sqlalchemy
0
2011-12-18T01:59:00.000
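A hedged sketch of the ramdisk idea above: /dev/shm is the usual Linux tmpfs mount, and the file paths are placeholders for whatever the application actually uses.

import shutil
from sqlalchemy import create_engine

DISK_PATH = "/var/lib/myapp/app.db"
RAM_PATH = "/dev/shm/app.db"

shutil.copy2(DISK_PATH, RAM_PATH)                 # load the database onto RAM-backed storage
engine = create_engine("sqlite:///" + RAM_PATH)   # point SQLAlchemy at the copy

# ... run the application / ORM session against `engine` here ...

shutil.copy2(RAM_PATH, DISK_PATH)                 # persist any changes back to disk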
I've been learning python by building a webapp on google app engine over the past five or six months. I also just finished taking a databases class this semester where I learned about views, and their benefits. Is there an equivalent with the GAE datastore using python?
2
5
1.2
0
true
8,570,245
1
106
1
1
0
8,570,066
Read-only views (the most common type) are basically queries against one or more tables to present the illusion of new tables. If you took a college-level database course, you probably learned about relational databases, and I'm guessing you're looking for something like relational views. The short answer is No. The GAE datastore is non-relational. It doesn't have tables. It's essentially a very large distributed hash table that uses composite keys to present the (very useful) illusion of Entities, which are easy at first glance to mistake for rows in a relational database. The longer answer depends on what you'd do if you had a view.
1
0
0
Is there an equivalent of a SQL View in Google App Engine Python?
2
python,sql,google-app-engine
0
2011-12-20T02:21:00.000
I am writing a quick and dirty script which requires interaction with a database (PG). The script is a pragmatic, tactical solution to an existing problem; however, I envisage that the script will evolve over time into a more "refined" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pore over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg. The advantages of psycopg2 (as I currently understand it) are that: it is written in C, so faster than SQLAlchemy (written in Python)?; there is no abstraction layer over the DBAPI since it works with one db and one db only (implication -> fast); (for now) I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight). Disadvantages: I KNOW that I will want an ORM further down the line; psycopg2 is ("dated"?) - I don't know how long it will remain around for. Are my perceptions of SQLAlchemy (slow/interpreted, bloated, steep learning curve) true - is there any way I can use SQLAlchemy in the "rough and ready" way I want to use psycopg - namely: execute SQL statements directly without having to mess about with the ORM layer, etc.? Any examples of doing this available?
57
108
1.2
0
true
8,588,766
0
40,683
2
0
0
8,588,126
SQLAlchemy is an ORM, psycopg2 is a database driver. These are completely different things: SQLAlchemy generates SQL statements and psycopg2 sends SQL statements to the database. SQLAlchemy depends on psycopg2 or other database drivers to communicate with the database! As a rather complex software layer SQLAlchemy does add some overhead, but it is also a huge boost to development speed, at least once you have learned the library. SQLAlchemy is an excellent library and will teach you the whole ORM concept, but if you don't want to generate SQL statements to begin with then you don't want SQLAlchemy. (That said, you can also use SQLAlchemy just to execute plain SQL; a sketch follows below this record.)
1
0
0
SQLAlchemy or psycopg2?
2
python,postgresql,sqlalchemy,psycopg2
0
2011-12-21T10:08:00.000
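A hedged sketch of using SQLAlchemy without the ORM: the engine/connection layer just sends parameterised SQL through psycopg2. The connection string, table, and column names are invented.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:secret@localhost/mydb")

with engine.connect() as conn:
    rows = conn.execute(
        text("SELECT id, name FROM customers WHERE name = :name"),
        {"name": "Smith"},
    ).fetchall()

for row in rows:
    print(row)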
I am writing a quick and dirty script which requires interaction with a database (PG). The script is a pragmatic, tactical solution to an existing problem; however, I envisage that the script will evolve over time into a more "refined" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pore over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg. The advantages of psycopg2 (as I currently understand it) are that: it is written in C, so faster than SQLAlchemy (written in Python)?; there is no abstraction layer over the DBAPI since it works with one db and one db only (implication -> fast); (for now) I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight). Disadvantages: I KNOW that I will want an ORM further down the line; psycopg2 is ("dated"?) - I don't know how long it will remain around for. Are my perceptions of SQLAlchemy (slow/interpreted, bloated, steep learning curve) true - is there any way I can use SQLAlchemy in the "rough and ready" way I want to use psycopg - namely: execute SQL statements directly without having to mess about with the ORM layer, etc.? Any examples of doing this available?
57
11
1
0
false
8,589,254
0
40,683
2
0
0
8,588,126
To talk to a database, you need a driver for it. If you are using a client like SQL*Plus for Oracle or the mysql CLI for MySQL, it runs your queries directly and comes bundled with the database server. To communicate from outside, in a language like Java, C, Python or C#, you need a driver for that database: psycopg2 is the driver for running queries against PostgreSQL from Python. SQLAlchemy is an ORM, which is not the same thing as a database driver. It gives you flexibility, so you can write your code without tying it to one database's specifics; an ORM provides database independence for the programmer. If you call object.save in an ORM, it checks which database is associated with that object and generates the insert query for that backend database.
1
0
0
SQLAlchemy or psycopg2?
2
python,postgresql,sqlalchemy,psycopg2
0
2011-12-21T10:08:00.000
I have to compare massive database dumps in xls format to parse for changes day-to-day (gross, right?). I'm currently doing this in the most backwards way possible, and using xlrd to turn the xls into csv files, and then I'm running diffs to compare them. Since it's a database, and I don't have a means of knowing if the data ever stays in the same order after something like an item deletion, I can't do a compare x line to x line between the files, so doing lists of tuples or something wouldn't make the most sense to me. I basically need to find every single change that could have happened on any row REGARDLESS of that row's position in the actual dump, and the only real "lookup" I could think of is SKU as a unique ID (it's a product table from an ancient DB system), but I need to know a lot more than just products being deleted or added, because they could modify pricing or anything else in that item. Should I be using sets? And once I've loaded 75+ thousand lines of this database file into a "set", is my ram usage going to be hysterical? I thought about loading in each row of the xls as a big concatenated string to add to a set. Is that an efficient idea? I could basically get a list of rows that differ between sets and then go back after those rows in the original db file to find my actual differences. I've never worked with data parsing on a scale like this. I'm mostly just looking for any advice to not make this process any more ridiculous than it has to be, and I came here after not really finding something that seemed specific enough to my case to feel like good advice. Thanks in advance.
2
0
0
0
false
8,609,909
0
19,719
1
0
0
8,609,737
You could load the data into a database and compare the databases, if you think that is easier. The key question you might need to think about is: can you sort the data somehow? Sorted sets are much easier to handle. (A simple keyed-diff sketch follows below this record.) P.S. 75,000 lines is not very much; anything that fits into the main memory of a regular computer is not much. Add a couple of zeros.
1
0
0
Python comparing two massive sets of data in the MOST efficient method possible
3
python,database,diff,set,compare
0
2011-12-22T21:08:00.000
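A hedged sketch of a keyed diff using the SKU as the unique ID, on the CSV files already produced via xlrd: read both days' exports into dicts keyed by SKU, then compare row by row regardless of row order. The file paths and the "sku" column name are assumptions.

import csv

def load(path):
    with open(path, newline="") as fh:
        return {row["sku"]: row for row in csv.DictReader(fh)}

old, new = load("dump_monday.csv"), load("dump_tuesday.csv")

added = set(new) - set(old)
removed = set(old) - set(new)
changed = {sku for sku in set(old) & set(new) if old[sku] != new[sku]}

print(len(added), "added,", len(removed), "removed,", len(changed), "modified")
# 75k rows held as dicts is tens of MB at most, comfortably within RAM.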
I am using PostgreSQL on Ubuntu, and now I am working in Python. I want to connect PostgreSQL with an Android application. Is there any way to connect PostgreSQL with an Android application? Any reply would be appreciated.
0
2
0.379949
0
false
8,613,304
0
377
1
0
0
8,613,246
A better way is to use a RESTful API or web service as the front end for your Android device to connect to your PostgreSQL backend. I am not sure it is even possible to connect your Android device directly to PostgreSQL.
1
0
0
what is the way to connect postgresql with android in ubuntu
1
android,python,web-services,postgresql,ubuntu-10.04
0
2011-12-23T07:11:00.000
I'm writing a script whereby a user registers his/her username, but a function checks whether this username is already in the db or not. But I'm stuck on how to match my query with the input. Here is the code:
def checker(self, insane):
    t = (insane,)
    cmd = "SELECT admin_user FROM admin_db where admin_user = \"%s\";" % t
    self.cursor.execute(cmd)
    namer = self.cursor.fetchone()
    print namer
    if namer == insane:
        print("Username is already chosen!")
        exit(1)
    else:
        pass
Since namer comes back as (u'maverick',), it doesn't match the input. How should I go about implementing that?
0
1
1.2
0
true
8,617,856
0
1,309
1
0
0
8,617,837
The DB fetch methods return a tuple for each row. Since you've only selected a single field, you can simply access namer[0] to get the actual value. (A sketch using a parameterised query is shown below this record.)
1
0
0
Matching user's input with query from sqlite3 db in python
1
python,sql,sqlite
0
2011-12-23T15:40:00.000
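A hedged sketch of the fix, rewritten as a standalone function so it runs on its own: unpack the one-column tuple, and let the driver do the quoting via a ? placeholder instead of string formatting (which also avoids SQL injection). The in-memory table setup is only there to make the example self-contained.

import sqlite3

def is_taken(cursor, username):
    # Parameterised query: the driver handles quoting/escaping.
    cursor.execute("SELECT admin_user FROM admin_db WHERE admin_user = ?", (username,))
    row = cursor.fetchone()          # None if no match, otherwise a 1-tuple
    return row is not None and row[0] == username

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE admin_db (admin_user TEXT)")
conn.execute("INSERT INTO admin_db VALUES ('maverick')")
print(is_taken(conn.cursor(), "maverick"))   # True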
I've looked in the documentation and haven't seen (from first sight) anything about cache in Pyramid. Maybe I missed something... Or maybe there are some third party packages to help with this. For example, how to cache db query (SQLAlchemy), how to cache views? Could anyone give some link to examples or documentation? Appreciate any help! EDITED: How to use memcache or database type cache or filebased cache?
3
6
1
0
false
14,859,955
0
4,486
1
0
0
8,651,061
Your options are pyramid_beaker and dogpile.cache. pyramid_beaker was written to offer Beaker caching for sessions; it also lets you configure Beaker cache regions, which can be used elsewhere. dogpile.cache is a replacement for Beaker; it hasn't been integrated to offer session support or environment.ini-based setup yet, but it addresses a lot of miscellaneous issues and shortcomings in Beaker. You can't/shouldn't cache a SQLAlchemy query or its results: weird and bad things will happen, because the SQLAlchemy objects are bound to a database session. It's much better to convert the SQLAlchemy results into another object/dict and cache those. (A minimal dogpile.cache sketch follows below this record.)
1
0
0
How to cache using Pyramid?
2
python,sqlalchemy,pyramid
0
2011-12-28T01:50:00.000
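A hedged sketch of a dogpile.cache region, using the in-memory backend for brevity (a memcached backend string would slot in the same way); the function name and its body are invented, and the key point is that it returns plain dicts rather than live SQLAlchemy objects.

from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.memory",
    expiration_time=300,          # seconds
)

@region.cache_on_arguments()
def heavy_lookup(user_id):
    # Run the SQLAlchemy query here, then convert rows to plain dicts before returning.
    return {"id": user_id, "name": "example"}

print(heavy_lookup(42))   # computed once, then served from the cache for 5 minutes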
I'm engaged in developing a turn-based casual MMORPG game server. The low-level engine (NOT written by us), which handles networking, multi-threading, timers, inter-server communication, the main game loop etc., was written in C++. The high-level game logic was written in Python. My question is about the data model design in our game. At first we simply tried to load all of a player's data into RAM and a shared data cache server when the client logs in, and scheduled a timer to periodically flush data into the data cache server, which would persist the data into the database. But we found this approach has some problems: 1) Some data needs to be saved or checked instantly, such as quest progress, level up, item & money gain etc. 2) According to game logic, sometimes we need to query some offline player's data. 3) Some global game world data needs to be shared between different game instances, which may be running on a different host or in a different process on the same host. This is the main reason we need a data cache server that sits between the game logic server and the database. 4) Players need to freely switch between game instances. Below are the difficulties we encountered in the past: 1) All data access operations should be asynchronous to avoid network I/O blocking the main game logic thread. We have to send a message to the database or cache server and then handle the data reply message in a callback function and continue the game logic. It quickly becomes painful to write moderately complex game logic that needs to talk several times with the db, and the game logic being scattered across many callback functions makes it hard to understand and maintain. 2) The ad-hoc data cache server makes things more complex; we found it hard to maintain data consistency and to effectively update/load/refresh data. 3) In-game data queries are inefficient and cumbersome; the game logic needs to query a lot of information such as inventory, item info, avatar state etc. Some transaction mechanism is also needed, for example, if one step fails the entire operation should be rolled back. We tried to design a good data model system in RAM, building a lot of complex indexes to ease the numerous information queries, adding transaction support etc. I quickly realized that what we were building was an in-memory database system; we were reinventing the wheel... Finally I turned to Stackless Python and we removed the cache server. All data is saved in the database, and the game logic server queries the database directly. With Stackless Python's micro tasklets and channels, we can write game logic in a synchronous way. It is far easier to write and understand, and productivity has greatly improved. In fact, the underlying DB access is still asynchronous: one client tasklet issues a request to a dedicated DB I/O worker thread and the tasklet is blocked on a channel, but the entire main game logic is not blocked; other clients' tasklets will be scheduled and run freely. When the DB data reply arrives, the blocked tasklet will be woken up and continue to run at the 'break point' (continuation?). With the above design, I have some questions: 1) DB access will be more frequent than with the previous cached solution; can the DB support such frequent query/update operations? Will some mature cache solution such as Redis or memcached be needed in the near future? 2) Are there any serious pitfalls in my design? Can you give me some better suggestions, especially on in-game data management patterns? Any suggestion would be appreciated, thanks.
8
2
0.197375
0
false
8,660,848
0
3,298
2
0
0
8,660,622
It's difficult to comment on the entire design/datamodel without greater understanding of the software, but it sounds like your application could benefit from an in-memory database.* Backing up such databases to disk is (relatively speaking) a cheap operation. I've found that it is generally faster to: A) Create an in-memory database, create a table, insert a million** rows into the given table, and then back-up the entire database to disk than B) Insert a million** rows into a table in a disk-bound database. Obviously, single record insertions/updates/deletions also run faster in-memory. I've had success using JavaDB/Apache Derby for in-memory databases. *Note that the database need not be embedded in your game server. **A million may not be an ideal size for this example.
1
0
0
Need suggestion about MMORPG data model design, database access and stackless python
2
python,database,python-stackless
0
2011-12-28T19:57:00.000
I'm engaged in developing a turn-based casual MMORPG game server. The low-level engine (NOT written by us), which handles networking, multi-threading, timers, inter-server communication, the main game loop etc., was written in C++. The high-level game logic was written in Python. My question is about the data model design in our game. At first we simply tried to load all of a player's data into RAM and a shared data cache server when the client logs in, and scheduled a timer to periodically flush data into the data cache server, which would persist the data into the database. But we found this approach has some problems: 1) Some data needs to be saved or checked instantly, such as quest progress, level up, item & money gain etc. 2) According to game logic, sometimes we need to query some offline player's data. 3) Some global game world data needs to be shared between different game instances, which may be running on a different host or in a different process on the same host. This is the main reason we need a data cache server that sits between the game logic server and the database. 4) Players need to freely switch between game instances. Below are the difficulties we encountered in the past: 1) All data access operations should be asynchronous to avoid network I/O blocking the main game logic thread. We have to send a message to the database or cache server and then handle the data reply message in a callback function and continue the game logic. It quickly becomes painful to write moderately complex game logic that needs to talk several times with the db, and the game logic being scattered across many callback functions makes it hard to understand and maintain. 2) The ad-hoc data cache server makes things more complex; we found it hard to maintain data consistency and to effectively update/load/refresh data. 3) In-game data queries are inefficient and cumbersome; the game logic needs to query a lot of information such as inventory, item info, avatar state etc. Some transaction mechanism is also needed, for example, if one step fails the entire operation should be rolled back. We tried to design a good data model system in RAM, building a lot of complex indexes to ease the numerous information queries, adding transaction support etc. I quickly realized that what we were building was an in-memory database system; we were reinventing the wheel... Finally I turned to Stackless Python and we removed the cache server. All data is saved in the database, and the game logic server queries the database directly. With Stackless Python's micro tasklets and channels, we can write game logic in a synchronous way. It is far easier to write and understand, and productivity has greatly improved. In fact, the underlying DB access is still asynchronous: one client tasklet issues a request to a dedicated DB I/O worker thread and the tasklet is blocked on a channel, but the entire main game logic is not blocked; other clients' tasklets will be scheduled and run freely. When the DB data reply arrives, the blocked tasklet will be woken up and continue to run at the 'break point' (continuation?). With the above design, I have some questions: 1) DB access will be more frequent than with the previous cached solution; can the DB support such frequent query/update operations? Will some mature cache solution such as Redis or memcached be needed in the near future? 2) Are there any serious pitfalls in my design? Can you give me some better suggestions, especially on in-game data management patterns? Any suggestion would be appreciated, thanks.
8
6
1.2
0
true
8,660,935
0
3,298
2
0
0
8,660,622
I've worked with one MMO engine that operated in a somewhat similar fashion. It was written in Java, however, not Python. With regards to your first set of points: 1) async db access We actually went the other route, and avoided having a “main game logic thread.” All game logic tasks were spawned as new threads. The overhead of thread creation and destruction was completely lost in the noise floor compared to I/O. This also preserved the semantics of having each “task” as a reasonably straightforward method, instead of the maddening chain of callbacks that one otherwise ends up with (although there were still cases of this.) It also meant that all game code had to be concurrent, and we grew increasingly reliant upon immutable data objects with timestamps. 2) ad-hoc cache We employed a lot of WeakReference objects (I believe Python has a similar concept?), and also made use of a split between the data objects, e.g. “Player”, and the “loader” (actually database access methods) e.g. “PlayerSQLLoader;” the instances kept a pointer to their Loader, and the Loaders were called by a global “factory” class that would handle cache lookups versus network or SQL loads. Every “Setter” method in a data class would call the method changed, which was an inherited boilerplate for myLoader.changed (this); In order to handle loading objects from other active servers, we employed “proxy” objects that used the same data class (again, say, “Player,”) but the Loader class we associated was a network proxy that would (synchronously, but over gigabit local network) update the “master” copy of that object on another server; in turn, the “master” copy would call changed itself. Our SQL UPDATE logic had a timer. If the backend database had received an UPDATE of the object within the last ($n) seconds (we typically kept this around 5), it would instead add the object to a “dirty list.” A background timer task would periodically wake and attempt to flush any objects still on the “dirty list” to the database backend asynchronously. Since the global factory maintained WeakReferences to all in-core objects, and would look for a single instantiated copy of a given game object on any live server, we would never attempt to instantiate a second copy of one game object backed by a single DB record, so the fact that the in-RAM state of the game might differ from the SQL image of it for up to 5 or 10 seconds at a time was inconsequential. Our entire SQL system ran in RAM (yes, a lot of RAM) as a mirror to another server who tried valiantly to write to disc. (That poor machine burned out RAID drives on average of once every 3-4 months due to “old age.” RAID is good.) Notably, the objects had to be flushed to database when being removed from cache, e.g. due to exceeding the cache RAM allowance. 3) in-memory database … I hadn't run across this precise situation. We did have “transaction-like” logic, but it all occurred on the level of Java getters/setters. And, in regards to your latter points: 1) Yes, PostgreSQL and MySQL in particular deal well with this, particularly when you use a RAMdisk mirror of the database to attempt to minimize actual HDD wear and tear. In my experience, MMO's do tend to hammer the database more than is strictly necessary, however. Our “5 second rule”* was built specifically to avoid having to solve the problem “correctly.” Each of our setters would call changed. 
In our usage pattern, we found that an object typically had either 1 field changed, and then no activity for some time, or else had a “storm” of updates happen, where many fields changed in a row. Building proper transactions or so (e.g. informing the object that it was about to accept many writes, and should wait for a moment before saving itself to the DB) would have involved more planning, logic, and major rewrites of the system; so, instead, we bypassed the situation. 2) Well, there's my design above :-) In point of fact, the MMO engine I'm presently working on uses even more reliance upon in-RAM SQL databases, and (I hope) will be doing so a bit better. However, that system is being built using an Entity-Component-System model, rather than the OOP model that I described above. If you already are based on an OOP model, shifting to ECS is a pretty paradigm shift and, if you can make OOP work for your purposes, it's probably better to stick with what your team already knows. *- “the 5 second rule” is a colloquial US “folk belief” that after dropping food on the floor, it's still OK to eat it if you pick it up within 5 seconds.
1
0
0
Need suggestion about MMORPG data model design, database access and stackless python
2
python,database,python-stackless
0
2011-12-28T19:57:00.000
Apologies for the longish description. I want to run a transform on every doc in a large-ish MongoDB collection with roughly 10 million records (approximately 10 GB). Specifically, I want to apply a geoip transform to the ip field in every doc and either append the resulting record to that doc or just create a whole other record linked to this one by, say, id (the linking is not critical; I can just create a whole separate record). Then I want to count and group by, say, city (I do know how to do the last part). The major reason I believe I can't use map-reduce is that I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists). So the central question is: how do I run through each record in the collection and apply the transform in the most efficient way? Batching via limit/skip is out of the question, as it does a "table scan" and is going to get progressively slower. Any suggestions? Python or JS preferred, just because I have these geoip libs, but code examples in other languages are welcome.
3
1
1.2
0
true
8,666,791
0
477
2
0
0
8,663,432
Since you have to go over "each record", you'll do one full table scan anyway, so a simple cursor (find()), perhaps fetching only a few fields (_id, ip), should do it (a sketch follows below this record). The Python driver will do the batching under the hood, so you can give a hint on what the optimal batch size is (batch_size) if the default is not good enough. If you add a new field and it doesn't fit the previously allocated space, Mongo will have to move the document to another place, so you might be better off creating a new document.
1
0
0
How do I transform every doc in a large Mongodb collection without map/reduce?
2
python,mongodb,mapreduce
0
2011-12-29T02:33:00.000
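A hedged sketch of a single pass over a projected cursor: geolocate() is a hypothetical stand-in for whatever geoip library is actually available, and the database/collection names are invented. New documents are written to a separate collection, as suggested above, so nothing is grown in place.

from pymongo import MongoClient

db = MongoClient()["logs"]

def geolocate(ip):
    # Hypothetical stand-in for the real geoip lookup.
    return {"city": "Unknown", "ip": ip}

batch = []
for doc in db["hits"].find({}, {"_id": 1, "ip": 1}).batch_size(1000):
    rec = geolocate(doc["ip"])
    rec["source_id"] = doc["_id"]          # link back to the original document
    batch.append(rec)
    if len(batch) >= 1000:
        db["hits_geo"].insert_many(batch)  # new docs, so no in-place document moves
        batch = []
if batch:
    db["hits_geo"].insert_many(batch)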
Apologies for the longish description. I want to run a transform on every doc in a large-ish MongoDB collection with roughly 10 million records (approximately 10 GB). Specifically, I want to apply a geoip transform to the ip field in every doc and either append the resulting record to that doc or just create a whole other record linked to this one by, say, id (the linking is not critical; I can just create a whole separate record). Then I want to count and group by, say, city (I do know how to do the last part). The major reason I believe I can't use map-reduce is that I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists). So the central question is: how do I run through each record in the collection and apply the transform in the most efficient way? Batching via limit/skip is out of the question, as it does a "table scan" and is going to get progressively slower. Any suggestions? Python or JS preferred, just because I have these geoip libs, but code examples in other languages are welcome.
3
0
0
0
false
8,677,503
0
477
2
0
0
8,663,432
Actually I am also attempting another approach in parallel (as plan B) which is to use mongoexport. I use it with --csv to dump a large csv file with just the (id, ip) fields. Then the plan is to use a python script to do a geoip lookup and then post back to mongo as a new doc on which map-reduce can now be run for count etc. Not sure if this is faster or the cursor is. We'll see.
1
0
0
How do I transform every doc in a large Mongodb collection without map/reduce?
2
python,mongodb,mapreduce
0
2011-12-29T02:33:00.000
I have a row of data in dict format. Is there an easy way to insert it into a mysql table. I know that I can write a custom function to convert dict into a custom sql query, but I am looking for a more direct alternative.
2
2
1.2
0
true
8,674,504
0
1,650
2
0
0
8,674,426
MySQLdb does not come with anything which allows a direct operation like that. This is a common problem with a variety of answers, including a custom function for this purpose. In my experience, it is best to buckle down and just write the parameterized SQL most of the time. If you have the same thing going on a lot, then I would consider factoring it into a utility function. HOWEVER, if you are hand-writing static SQL using parameters, then most of the security and bug related issues are taken care of. When you start basing your SQL on a dictionary of data that came from who knows where, you need to be much more careful. In summary, your code will likely be more readable, maintainable and secure if you simply write the queries, use parameters, and document well. (Note: some proponents of ORMs, etc. may disagree... this is an opinion based on a lot of experience of what was simple, reliable, and worked for our team.)
1
0
0
MySQLdb inserting a dict into a table
2
python,dictionary,insert,mysql-python
0
2011-12-29T22:52:00.000
I have a row of data in dict format. Is there an easy way to insert it into a mysql table. I know that I can write a custom function to convert dict into a custom sql query, but I am looking for a more direct alternative.
2
5
0.462117
0
false
8,674,547
0
1,650
2
0
0
8,674,426
Well... according to the documentation for paramstyle: "Set to 'format' = ANSI C printf format codes, e.g. '...WHERE name=%s'. If a mapping object is used for conn.execute(), then the interface actually uses 'pyformat' = Python extended format codes, e.g. '...WHERE name=%(name)s'. However, the API does not presently allow the specification of more than one style in paramstyle." So it should be just a matter of: curs.execute("INSERT INTO foo (col1, col2, ...) VALUES (%(key1)s, %(key2)s, ...)", dictionary) where key1, key2, etc. would be keys from the dictionary. Disclaimer: I haven't tried this myself :) Edit: yeah, tried it. It works. (A slightly fuller sketch that builds the column list from the dict follows below this record.)
1
0
0
MySQLdb inserting a dict into a table
2
python,dictionary,insert,mysql-python
0
2011-12-29T22:52:00.000
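A hedged sketch of a small helper built around the pyformat style shown above; the table and column names are invented. Only the dict values go through the driver's escaping, so the table and key names must still come from trusted code, not from user input.

import MySQLdb

def insert_dict(cursor, table, row):
    cols = ", ".join(row)                                  # e.g. "name, age"
    placeholders = ", ".join("%%(%s)s" % k for k in row)   # e.g. "%(name)s, %(age)s"
    cursor.execute(
        "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, placeholders),
        row,                                               # values bound by the driver
    )

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="test")
cur = conn.cursor()
insert_dict(cur, "people", {"name": "Ada", "age": 36})
conn.commit()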
Assuming I have a schema with the name "my_schema", how can I create tables with "django syncdb" for that particular schema? Or is there any other alternatives for quickly creating tables from my django models? I think, by default django creates tables for the "public" schema.
7
0
0
0
false
42,578,440
1
4,285
1
0
0
8,680,673
I used the following settings and they work for me:
'default': {
    'ENGINE': 'django.db.backends.postgresql_psycopg2',
    'NAME': 'dab_name',
    'USER': 'username',
    'PASSWORD': 'password',
    'HOST': 'localhost',
    'PORT': '5432',
    'OPTIONS': {
        'options': '-c search_path=tours'  # schema name
    }
}
Tested on PostgreSQL 9 and Django 1.10.2. Thanks @romke
1
0
0
How to specify schema name while running "syncdb" in django?
2
python,database,django
0
2011-12-30T14:52:00.000
For example in CQL, SELECT * from abc_dimension ORDER BY key ASC; seems to be not working. Any help?
2
0
0
1
false
43,361,173
0
3,328
1
0
0
8,751,293
Latest versions of Cassandra support aggregations within single partition only.
1
0
0
does cassandra cql support aggregation functions, like group by and order by
3
python,cassandra,cql
0
2012-01-05T23:13:00.000
We are sketching out how a system would work. The problem is that have a set of items with a computed value for each item. Say for example you like players in the nba and there are a certain set of players that you have shown preferences about. Examples might be: number of games played rebounding scoring assists minutes played per game players that your other friends like likelihood of being traded - you often want players that are going to be traded There's approx 500 players in the nba. From a performance pov, querying is cost prohibitive - throwing in other people' s preferences etc.... We have been thinking of doing an alternative approaches. One approach is a NoSQL where each user gets written a document of each player. To be honest, this seems like too many unkowns as I have zero experience. Another approach is where each person in the system would get a table dedicated to them. Perhaps write out the table definition via cron on a nightly basis and when the user logs in, do a create table statement and then have a dedicated query against that. This sounds really ugly to me too (although feasible). We could also certainly have a single table where each user has a row for each player. I'd rather not premise the whole system off self-joins though. It seems to take querying off-line and we could feasibly measure 1000 players against these different parameters. Are there other ideas that I'm missing? I don't want anything too esoteric - preferably just MySQL and Python. Would be using InnoDB and not so concerned about splitting up the tables per database per host issue. Any other ideas or realword experience would be appreciated? I'm sure this has been solved many times before. thx
0
0
0
0
false
8,754,480
0
49
1
0
0
8,753,559
I'm using mongodb right now for the first time and I find it to be really awesome in the way it lets you represent a document pretty much like an object oriented class structure. You can easily have a document per user that stores any number of embedded documents. Your players can be in a nested dictionary or a list, and you can index on player names. Then when you request a user, there are NO joins. You have all your data. Plus there is a flexible schema so you can always just add more fields whenever you want. As for mysql table-per-user, I agree its really messy and can get out of control. Another alternative is you could look into a key-value store like Redis for caching purposes since its in memory, it would be fast as well.
1
0
0
Caching computed results for each user - would having a table dedicated to each user make sense?
1
python,mysql,database-design
0
2012-01-06T04:56:00.000
Are there any python3 modules for importing(reading) excel .xls files or is there any work in progress on porting one of the python2 modules? All I'm coming up with is xlrd for python2.
1
0
0
0
false
8,797,959
0
530
1
0
0
8,788,041
I believe the maintainer of xlrd is working on porting it, but it's not yet ready.
1
0
0
python3 module for importing xls files
1
excel,python-3.x,xls
0
2012-01-09T11:55:00.000
I use xlrd to read data from Excel files. For integers stored in the files, let's say 63, xlrd interprets it as 63.0 of type number. Why can't xlrd recognize 63 as an integer? Assume sheet.row(1)[0].value gives us 63.0. How can I convert it back to 63?
20
5
0.16514
0
false
8,826,320
0
30,994
1
0
0
8,825,681
I'm reminded of this gem from the xlrd docs: "Dates in Excel spreadsheets: In reality, there are no such things. What you have are floating point numbers and pious hope." The same is true of integers, perhaps minus the pious hope. (A small conversion sketch follows below this record.)
1
0
0
Integers from excel files become floats?
6
python,xlrd
0
2012-01-11T19:48:00.000
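A hedged sketch of turning whole-number floats from xlrd back into ints; the file path and cell position are placeholders.

import xlrd

book = xlrd.open_workbook("data.xls")
sheet = book.sheet_by_index(0)
cell = sheet.cell(1, 0)

value = cell.value
if cell.ctype == xlrd.XL_CELL_NUMBER and value == int(value):
    value = int(value)          # 63.0 -> 63; 63.5 stays a float

print(value)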
I have a regular desktop application which is written in Python/GTK with SQLObject as the ORM. My goal is to create a web interface where a user can log in and sync/edit the database. My application is split up into different modules, so the database and GTK code are completely separate, and I would like to run the same database code on the webserver too. So, I would like to know if there's a web framework which could handle these criteria: user authentication; use my own database code/SQLObject; some widgets to build a basic UI. This would be my first web project, so I'm a bit confused by all the search results. CherryPy, TurboGears, web2py, Pyramid? I would be happy if someone could give me some pointers on what would be a good framework in my situation.
0
0
0
0
false
8,896,107
1
375
1
0
0
8,895,942
Try Pyramid; unlike Django, it does not impose anything on you, and it has a wealth of features for building web applications at any level.
1
0
0
Choosing Python/SQLObject webframework
2
python,sqlobject
0
2012-01-17T14:05:00.000
Well, I have a question that I feel has been answered several times, from what I found here. However, as a newbie, I can't really understand how to perform a really basic operation. Here's the thing: I have an .xls file, and when I use xlrd to get a value I'm just using sh.cell(0,0) (assuming that sh is my sheet); if what is in the cell is a string I get something like text:u'MyName' and I only want to keep the string 'MyName'; if what is in the cell is a number I get something like number:201.0 and I only want to keep the integer 201. If anyone can indicate what I should do to extract only the value, formatted as I want, thank you.
9
0
0
0
false
43,124,531
0
27,320
1
0
0
8,909,342
The correct answer to this is to simply use the Cell.value function. This will return a number or a Unicode string depending on what the cell contains.
1
0
0
Python xlrd : how to convert an extracted value?
4
python,xlrd
0
2012-01-18T11:29:00.000
We recently moved a Zeo instance over to a new server environment and one of the changes was the file system now has the database files stored on an NFS share. When trying to start zeo, we've been getting lock file errors which after researching seems to be because of a known issue of lock files being created on an NFS share. My question is, can we maintain the data (.fs) files on the share but have the lock files created on the server's filesystem? We want to maintain the data being stored on the SAN so moving the data over to box is really not an option. Any help would be greatly appreciated!
0
1
0.197375
0
false
8,916,946
0
260
1
0
0
8,914,644
This is likely not a good setup. Your best bet is to work-around NFS in spite of it: maybe a loopback ext3 filesystem mounted on a regular file on the NFS volume -- NFSv3 should have few practical limits to filesize that you won't have natively. Only you will be able to measure if this performs well enough. Otherwise, you should know that (generally) no networked database performs well or without side-effects over NFS.
1
0
0
Zeo/ZODB lock file location, possible to change?
1
python,zope,zodb
0
2012-01-18T17:36:00.000
I need to provide individuals with their financial statement, and I am using S3. So far what I am doing is making the file public-read and creating a unique Key, using uuid.uuid4(). Would this be acceptable, or how else could I make this more secure? Sending authentication keys for each individual is not an option.
1
1
0.066568
0
false
8,935,272
0
260
1
0
0
8,935,191
Even though version 4 UUIDs are supposed to incorporate random data, I wouldn't want to rely on the RNG used by Python's uuid.uuid4() being securely random. The Python docs make no mention of the quality of the randomness, so I'd be afraid that you might end up with guessable UUIDs. I'm not a crypto expert, so I won't suggest a specific alternative, but I would suggest using something that is designed to produce crypto-quality random data, and transforming that into something that can be used as an S3 key (I'm not sure what the requirements on S3 key data might be, but I'd guess they're supposed to be something like a filename; a small sketch follows below this record). To be honest, having no security other than an unguessable name still leaves me with a bad feeling. It seems too easy to have an unintentional leak of the names, as Ian Clelland suggests in his comment.
1
0
0
Sending 'secure' financial statements on S3
3
python,security,amazon-s3
0
2012-01-20T00:08:00.000
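A hedged sketch of an unguessable key built from os.urandom, which draws from the operating system's CSPRNG rather than a general-purpose RNG; the key prefix and .pdf extension are placeholders.

import os
import binascii

def statement_key():
    token = binascii.hexlify(os.urandom(32)).decode("ascii")   # 256 bits of randomness
    return "statements/%s.pdf" % token

print(statement_key())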
In Python, I am firing SPARQL queries to get data from DBpedia. At the point of firing approximately 7,000 queries, my script hangs at the line results = sparql.query().convert(), which has already executed at least 5,000 times in the loop. Any idea what the issue could be?
1
3
0.53705
0
false
8,965,215
0
157
1
0
1
8,965,123
try splitting up the .query() and .convert() into two separate lines. I would guess that .query() is where it's hanging, and I would further guess that you are being rate-limited by DBPedia, but I can't find any information on what their limits might be.
1
0
0
python script hangs at results = sparql.query().convert()
1
python,sparql,mysql-python,dbpedia
0
2012-01-22T22:02:00.000
First of all, I am sorry if this question doesn't belong to SO since I don't know where else to post it, anyway... I am looking for a decent python based database development RAD framework with nice data aware widgets and grids. A desktop framework would be much preferable to a web framework (I've developed heavy DB-centric apps in django but the web dev experience is still painful compared to a desktop one), although a web framework will do as long as there are powerful data-centric widgets to go along with it. Ideally, it should be as useful as say Delphi or MSAccess / VBA (I used to develop using those a long time ago). For the record, I have very good development experience in django and wxPython and as I've said developing heavy data-centric web apps is tough and wxPython although very powerful lacks DB-related widgets. Please note that the use of Python is mandatory because I've been using this language exclusively for all my projects in the last few years and I can't bear the idea of switching back to more mundane languages. Thanks for any suggestion...
3
1
0.099668
0
false
15,711,432
1
684
1
0
0
9,045,723
I am also looking for something similar to Kexi. Unfortunately Python scripting is not supported in Kexi for Windows. I would like to find something better than MS Access, and it does not have to be based on Python. So far I have looked at quite a few IDEs but have not found anything where a GUI and database application can be built as quickly as in Access. Of all the ones I have seen, I think the best is Alpha Five. There could be something based on NetBeans but I really do not know. Oracle APEX is another one I have heard about, but it doesn't support desktop applications (as far as I know).
1
1
0
Python database widgets/environment like MSAccess
2
python,database,widget,rad
0
2012-01-28T13:49:00.000
I am building a web App with mongoDB as the backend. Some of the documents need to store a collection of items in some sort of list, and then the system will need to frequently check if a specified item is present in that list. Using Python's 'in' operator takes Big-O(N) time, n being the size of the list. Since these list can get quite large, I want something faster than that. Python's 'set' type does this operation in constant time (and enforces uniqueness, which is good in my case), but is considered an invalid data type to put in MongoDB. So what's the best way to do this? Is there some way to just use a regular list and exploit mongo's indexing features? Again, I want to know, for a given document in a collection, does a list inside that document contain particular element?
5
6
1.2
0
true
9,116,463
0
2,402
1
0
0
9,115,979
You can represent a set using a dictionary: your elements become the keys, and all the values can be set to a constant such as 1. The in operator then checks for the existence of a key. EDIT: MongoDB stores a dict as a BSON document, where the keys must be strings (with some additional restrictions), so the above advice is of limited use. (A small sketch of the adapted dict-as-set approach with pymongo follows below this record.)
1
0
1
Mongodb with Python's "set()" type
1
python,mongodb
0
2012-02-02T16:29:00.000
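A hedged sketch of the dict-as-set idea adapted to MongoDB's "keys must be strings" rule; the database, collection, and field names are invented, and keys may not contain "." or "$".

from pymongo import MongoClient

coll = MongoClient()["myapp"]["users"]

# Add "sku123" to the user's set; the constant value 1 is just a placeholder.
coll.update_one(
    {"_id": "alice"},
    {"$set": {"items.sku123": 1}},
    upsert=True,
)

# Membership test without pulling the whole structure into Python.
present = coll.find_one({"_id": "alice", "items.sku123": {"$exists": True}}) is not None
print(present)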
I have Ubuntu and installed Python 3, since my script is written in it. Since I use MySQL with MySQLdb, I installed apt-get install python-mysqldb; however, this installed MySQLdb for Python (which is 2.6 on Ubuntu) and not for Python 3. How can I install MySQLdb for Python 3? Should I use it at all, or switch to PyMySQL? Sorry, I have just started working with Python today...
9
1
0.028564
0
false
51,553,512
0
14,751
1
0
0
9,146,320
If you are planning to switch from MySQLdb, then I recommend you use MySQL Connector/Python. Why? It works with both Python 2 and 3, and it is the official Oracle driver for MySQL for working with Python. It is written purely in Python (you can also use the C extension to connect to MySQL). MySQL Connector/Python implements the MySQL client/server protocol completely in Python, which means you don't have to compile anything and MySQL doesn't even have to be installed on the machine. It also has great support for connection pooling. (A minimal connection sketch follows below this record.)
1
0
0
install python MySQLdb to python3 not python
7
python,mysql,ubuntu,installation
0
2012-02-05T02:06:00.000
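A hedged sketch of MySQL Connector/Python usage, which works on Python 3; the connection details are placeholders.

import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="test"
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone()[0])
conn.close()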
I have a project in which I use JDBC with MySQL to store some user information, Java REST for the server and Python REST for the client. My question is: by default (I haven't changed anything in the configuration), are the HTTP requests from the client serialized on the server's side? I ask this because I'd like to know if I need to make the database insert/delete queries thread-safe or something.
0
1
1.2
0
true
9,158,709
1
253
1
0
0
9,158,578
Of course they need to be thread safe. You should be writing your Java server as if it were single threaded, because a Java EE app server will assign a thread per incoming request. You also need to think about database isolation and table locking. Will you allow "dirty reads" or should your transactions be serializable? Should you SELECT FOR UPDATE? That's a database setting, separate from threading considerations.
1
0
0
Serialization with JDBC with MySQL, JAVA REST and Python REST
1
java,python,mysql,rest,jdbc
0
2012-02-06T10:16:00.000
Why use py manage.py test ? What's the point? It creates the table anyway... if I wanted to test it, then I wouldn't want it to create the actual table!!!
0
0
1.2
0
true
9,161,223
0
126
1
0
0
9,160,411
Test is meant to perform both the upgrade and the downgrade steps. You want to verify that the application is usable in both states. So the idea would be to upgrade, run tests, downgrade, run tests, and verify you don't break things. If the test run fails, it gives you a chance to clean it up, reset, and try again. Usually, I'd say that the test run must complete cleanly before the migration is considered "good" and able to be committed to the code base.
1
0
0
In SQLAlchemy-migrate, what's the point of using "test"?
1
python,mysql,database,sqlalchemy
0
2012-02-06T12:53:00.000
I wanted your advice for the best design approach for the following Python project. I am building a web service system that is split into 2 parts: This part grabs realtime data from a 3rd party API and puts the data in a DB. This part exposes a json API to access data from the DB mentioned in 1). Some background info - 2) runs on django, and exposes the API via view methods. It uses SQLAlchemy instead of the django ORM. My questions are: - Should 1) and 2) run on the same machine, considering that they both access the same MySQL DB? - What should 1) run on? I was thinking about just running cron jobs with Python scripts that also use SQLAlchemy. This is because I don't see a need for an entire web framework here, especially because this needs to work super fast. Is this the best approach? - Data size - 1) fetches about 60,000 entries and puts them in the DB every 1 minute (an entry consists of about 12 float values and a few dates and integers). What is the best way to deal with the ever growing amount of data here? Would you split the DB? If so, into what? Thanks!
2
0
1.2
0
true
9,189,761
1
319
1
0
0
9,182,936
I would say, run the two on the same machine to start with, and see how the performance goes. Why spend money on a second machine if you don't have to? As for "dealing with the ever growing amount of data": do you need to keep old data around? If not, your second task can simply delete old data when it's done with it. Provided all the records are properly time-stamped, you don't need to worry about race conditions between the two tasks.
1
0
0
Realtime data server architecture
1
python,real-time
0
2012-02-07T19:56:00.000
I am working on a system where a bunch of modules connect to a MS SqlServer DB to read/write data. Each of these modules are written in different languages (C#, Java, C++) as each language serves the purpose of the module best. My question however is about the DB connectivity. As of now, all these modules use the language-specific Sql Connectivity API to connect to the DB. Is this a good way of doing it ? Or alternatively, is it better to have a Python (or some other scripting lang) script take over the responsibility of connecting to the DB? The modules would then send in input parameters and the name of a stored procedure to the Python Script and the script would run it on the database and send the output back to the respective module. Are there any advantages of the second method over the first ? Thanks for helping out!
2
0
0
0
false
9,456,223
0
112
1
0
0
9,202,562
If we assume that each language you use will have an optimized set of classes to interact with databases, then there shouldn't be a real need to pass all database calls through a centralized module. Using a "middle-ware" for database manipulation does offer a very significant advantage. You can control, monitor and manipulate your database calls from a central and single location. So, for example, if one day you wake up and decide that you want to log certain elements of the database calls, you'll need to apply the logical/code change only in a single piece of code (the middle-ware). You can also implement different caching techniques using middle-ware, so if the different systems share certain pieces of data, you'd be able to keep that data in the middle-ware and serve it as needed to the different modules. The above is a very advanced edge-case and it's not commonly used in small applications, so please evaluate the need for the above in your specific application and decide if that's the best approach. Doing things the way you do them now is fine (if we follow the above assumption) :)
1
0
0
DB Connectivity from multiple modules
1
python,sql-server,architecture
0
2012-02-08T22:32:00.000
Are there any tutorials about how to set up sqlalchemy for Windows? I went to www.sqlalchemy.org and they don't have clear instructions about setup for Windows. When I opened the zipped package, I see distribute_setup, ez_setup and setup.py among other files, but it doesn't seem to install sqlalchemy.
1
1
0.099668
0
false
29,831,712
0
9,330
1
0
0
9,221,888
The Command pip install sqlalchemy will download the necessary files and run setup.py install for you.
1
0
0
Configuring sqlalchemy for windows
2
python,sql,database,orm,sqlalchemy
0
2012-02-10T02:21:00.000
There have been many questions along these lines but I'm struggling to apply them to my scenario. Any help would be greatly appreciated! We currently have a functioning mySQL database hosted on a website, data is entered from a website and via PHP it is put into the database. At the same time we want to now create a python application that works offline. It should carry out all the same functions as the web version and run totally locally, this means it needs a copy of the entire database to run locally and when changes are made to such local database they are synced next time there is an internet connection available. First off I have no idea what the best method would be to run such a database offline. I was considering just setting up a localhost, however this needs to be distributable to many machines. Hence setting up a localhost via an installer of some sort may be impractical no? Secondly synchronization? Not a clue on how to go about this! Any help would be very very very appreciated. Thank you!
2
0
1.2
0
true
9,237,543
0
2,804
2
0
0
9,237,481
How high-performance does your local application need to be? Also, how reliable is the locally available internet connection? If you don't need extremely high performance, why not just leave the data in the remote MySQL server? If you're sure you need access to local data I'd look at MySQL's built-in replication for synchronization. It's really simple to setup/use and you could use it to maintain a local read-only copy of the remote database for quick data access. You'd simply build into your application the ability to perform write queries on the remote server and do read queries against the local DB. The lag time between the two servers is generally very low ... like on the order of milliseconds ... but you do still have to contend with network congestion preventing a local slave database from being perfectly in-sync with the master instantaneously. As for the python side of things, google mysql-python because you'll need a python mysql binding to work with a MySQL database. Finally, I'd highly recommend SQLalchemy as an ORM with python because it'll make your life a heck of a lot easier. I would say an ideal solution, however, would be to set up a remote REST API web service and use that in place of directly accessing the database. Of course, you may not have the in-house capabilities, the time or the inclination to do that ... which is also okay :)
1
0
0
Python sync with mySQL for local application?
3
php,python,mysql,localhost,sync
0
2012-02-11T03:04:00.000
There have been many questions along these lines but I'm struggling to apply them to my scenario. Any help would be greatly appreciated! We currently have a functioning mySQL database hosted on a website, data is entered from a website and via PHP it is put into the database. At the same time we want to now create a python application that works offline. It should carry out all the same functions as the web version and run totally locally, this means it needs a copy of the entire database to run locally and when changes are made to such local database they are synced next time there is an internet connection available. First off I have no idea what the best method would be to run such a database offline. I was considering just setting up a localhost, however this needs to be distributable to many machines. Hence setting up a localhost via an installer of some sort may be impractical no? Secondly synchronization? Not a clue on how to go about this! Any help would be very very very appreciated. Thank you!
2
0
0
0
false
9,237,521
0
2,804
2
0
0
9,237,481
Are you planning to run mysql on your local python offline apps? I would suggest something like sqlite. As for keeping things in sync, it also depends on the type of data that needs to be synchronized. One question that needs to be answered: is the data generated by these python apps something that is opaque? If yes (i.e. it doesn't have any relations to other entities), then you can queue the data locally and push it up to the centrally hosted website.
1
0
0
Python sync with mySQL for local application?
3
php,python,mysql,localhost,sync
0
2012-02-11T03:04:00.000
I'm building a database front-end with python and glade. I need to present SQL query results in the form of database tables inside my app's window (schema followed by tuples/records). Both the schema and the database entries are dynamic because the schema could be that of a join operation or in general altered, and the number of tuples could be any valid number. One possible solution could be to format a given table with python, create a text object in my GUI and change its value to that produced by python. Advice and suggestions are very welcome.
3
3
1.2
0
true
9,302,750
0
2,104
1
0
0
9,299,934
Given that the number and name of the columns to display isn't known beforehand, you could just create a gtk.TreeView widget in glade and modify it as you need in the application code. This widget could be updated to use a new model using gtk.TreeView.set_model, and the columns could be adapted to match the information to be displayed with the gtk.TreeView.{append,remove,insert}_column methods. Regarding the model, you could create a new gtk.ListStore with appropriate columns depending on the results from the database. I hope this helps.
1
1
0
GUI for database tables with pygtk and glade
1
python,user-interface,gtk,pygtk,glade
0
2012-02-15T19:28:00.000
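A rough sketch of rebuilding a gtk.TreeView from an arbitrary query result, along the lines of the answer above; the function name is invented, all values are shown as strings for simplicity, and the cursor usage in the comment is a hypothetical example.
    import gtk

    def show_result(treeview, headers, rows):
        # Drop the columns left over from the previous query.
        for col in treeview.get_columns():
            treeview.remove_column(col)
        # One text column per result column.
        store = gtk.ListStore(*([str] * len(headers)))
        for i, title in enumerate(headers):
            treeview.append_column(
                gtk.TreeViewColumn(title, gtk.CellRendererText(), text=i))
        for row in rows:
            store.append([str(v) for v in row])
        treeview.set_model(store)

    # Hypothetical usage with a DB-API cursor:
    # headers = [d[0] for d in cursor.description]
    # show_result(my_treeview, headers, cursor.fetchall())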
I'm having a problem with the sessions in my python/wsgi web app. There is a different, persistent mysqldb connection for each thread in each of 2 wsgi daemon processes. Sometimes, after deleting old sessions and creating a new one, some connections still fetch the old sessions in a select, which means they fail to validate the session and ask for login again. Details: Sessions are stored in an InnoDB table in a local mysql database. After authentication (through CAS), I delete any previous sessions for that user, create a new session (insert a row), commit the transaction, and redirect to the originally requested page with the new session id in the cookie. For each request, a session id in the cookie is checked against the sessions in the database. Sometimes, a newly created session is not found in the database after the redirect. Instead, the old session for that user is still there. (I checked this by selecting and logging all of the sessions at the beginning of each request). Somehow, I'm getting cached results. I tried selecting the sessions with SQL_NO_CACHE, but it made no difference. Why am I getting cached results? Where else could the caching occur, and how can stop it or refresh the cache? Basically, why do the other connections fail to see the newly inserted data?
9
16
1.2
0
true
9,318,495
0
5,316
1
0
0
9,318,347
MySQL defaults to the isolation level "REPEATABLE READ" which means you will not see any changes in your transaction that were done after the transaction started - even if those (other) changes were committed. If you issue a COMMIT or ROLLBACK in those sessions, you should see the changed data (because that will end the transaction that is "in progress"). The other option is to change the isolation level for those sessions to "READ COMMITTED". Maybe there is an option to change the default level as well, but you would need to check the manual for that.
1
0
0
Why are some mysql connections selecting old data from the mysql database after a delete + insert?
2
python,mysql,session,caching,wsgi
0
2012-02-16T20:09:00.000
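A small sketch of the fix suggested above for the session-checking side, assuming MySQLdb and a hypothetical sessions table; ending the reader's open transaction before each lookup makes it see rows committed by other connections under REPEATABLE READ.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                           db="sessions")   # placeholder credentials

    def find_session(session_id):
        # End whatever transaction this connection has open so the next
        # SELECT starts a fresh snapshot and sees newly committed rows.
        conn.commit()          # conn.rollback() would work equally well
        cur = conn.cursor()
        cur.execute("SELECT user_id FROM sessions WHERE id = %s", (session_id,))
        return cur.fetchone()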
I am looking to store information in a database table that will be constantly receiving and sending data back and forth to an iPhone App/Python Socket. The problem is, if I were to have my own servers, what is the maximum queries I can sustain? The reason I'm asking is because if I were to have thousands of people using the clients and multiple queries are going a second, I'm afraid something will go wrong. Is there a different way of storing user information without MySQL? Or is MySQL OK for what I am doing? Thank you!
4
3
0.291313
0
false
9,322,806
0
1,960
1
0
0
9,322,523
The maximum load is going to vary based on the design of your application and the power of the hardware that you put it on. A well designed application on reasonable hardware will far outperform what you need to get the project off the ground. If you are unexpectedly successful, you will have money to put into real designers, real programmers and a real business plan. Until then, just have fun hacking away and see if you can bring your idea to reality.
1
0
0
How many SQL queries can be run at a time?
2
iphone,python,mysql,objective-c,database
0
2012-02-17T03:37:00.000
I will start a project ( not commercial, just for learning ) but I would like to choose to work with the right tools as I would if I were doing it for a company. First of all small description of what I will need. It would be a server-client(s) application. For the server: - GUI for Windows - ORM - Database without installation (sqlite ???) - GUI builder (RAD Tool) - Ability to create easily REST Services Clients would be android devices - GUI for android mobile Clients would connect to the server and get some initial settings and then start to send information to the server. Server should be able to display properly the information collected from the clients and edit them if needed. Open source technologies are mandatory. First I am thinking to use sqlite ( I should not make any installation except the program). Any alternatives here? For the server maybe python with a gui library and sql alchemy. What about Camelot? And for the clients (android) java. I think there are no other options here. Can you make some comments on the above choices? Maybe you can suggest something different which will make the development faster...
0
0
0
0
false
9,354,732
0
657
1
0
0
9,354,695
As you have asserted: client is java only. On server: GUI for Windows : WPF ORM - Database without installation : SQLCE 4.0 - Maybe use codefirst GUI builder (RAD Tool) : Visual Studio lets you do that for WPF apps Ability to create easily REST Services : Use WCF hope that helps
1
1
0
Right tools for GUI windows program
2
android,python,client-server
0
2012-02-20T00:15:00.000
I'm new to Python (relatively new to programing in general) and I have created a small python script that scrape some data off of a site once a week and stores it to a local database (I'm trying to do some statistical analysis on downloaded music). I've tested it on my Mac and would like to put it up onto my server (VPS with WiredTree running CentOS 5), but I have no idea where to start. I tried Googling for it, but apparently I'm using the wrong terms as "deploying" means to create an executable file. The only thing that seems to make sense is to set it up inside Django, but I think that might be overkill. I don't know... EDIT: More clarity
1
1
0.066568
0
false
9,357,006
1
865
1
1
0
9,356,926
Copy the script to the server. Test the script manually on the server. Set cron ("crontab -e") to a value that will run it soon; once you've debugged any issues, set cron to the appropriate time.
1
0
0
Deploying a Python Script on a Server (CentOS): Where to start?
3
python,django,centos
0
2012-02-20T06:14:00.000
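A possible crontab entry for the last two steps described in the answer above; the interpreter path, script path and schedule are all placeholders.
    # crontab -e  (runs the scraper every Monday at 03:15, appending output to a log)
    15 3 * * 1 /usr/bin/python /home/youruser/scraper.py >> /home/youruser/scraper.log 2>&1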
I have a column heading Fee. Using xlwt in python, I successfully generated the required excel.This column is always blank at the creation of Excel file. Is it possible to have the Fee column preformatted to 'Currency' and 'two decimal places', so that when I write manually in the Fee column of the Excel file after downloading, 23 should change into $23.00 ??
5
11
1.2
0
true
9,376,306
0
2,721
1
0
0
9,375,637
I got it working like this: currency_style = xlwt.XFStyle() currency_style.num_format_str = "[$$-409]#,##0.00;-[$$-409]#,##0.00" sheet.write(row+2, col, val, style=currency_style)
1
0
0
Preformat to currency and two decimal places in python using xlwt for excel
1
python,excel,xlwt
0
2012-02-21T10:10:00.000
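A self-contained version of the snippet in the answer above; the workbook layout is invented. xlwt can only attach the currency format to cells it actually writes, so one hedged approach to "preformatting" is to write the to-be-filled cells as empty strings carrying the style (assuming Excel then honours the cell's number format when a value is typed in later).
    import xlwt

    currency_style = xlwt.XFStyle()
    currency_style.num_format_str = "[$$-409]#,##0.00;-[$$-409]#,##0.00"

    wb = xlwt.Workbook()
    ws = wb.add_sheet("Fees")
    ws.write(0, 0, "Fee")                  # header
    ws.write(1, 0, 23, currency_style)     # renders as $23.00
    # Pre-format cells that will be filled in by hand later.
    for row in range(2, 20):
        ws.write(row, 0, "", currency_style)
    wb.save("fees.xls")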
I have a static folder that is managed by apache where images are stored. I wonder if it's possible by configuring apache to send all files from that folder as downloadable files, not opening them as images inside browser? I suppose I can do it by creating a special view in Flask, but I think it would be nicer if I could do it with some more simple solution.
4
1
0.099668
0
false
9,378,819
1
219
1
0
0
9,378,664
You can force the contents to be a downloadable attachment using http headers. In PHP that would be: $fileName = 'dummy.jpg'; header("Content-Disposition: attachment; filename=$fileName"); Then, the script dumps the raw contents of the file.
1
0
0
Send image as an attachment in browser
2
python,browser,flask
0
2012-02-21T13:46:00.000
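The same Content-Disposition idea as the PHP snippet in the answer above, sketched for Flask since that is what the question uses; the directory and route are placeholders. Flask's send_from_directory sets the attachment header for you when as_attachment=True.
    from flask import Flask, send_from_directory

    app = Flask(__name__)
    IMAGE_DIR = "/srv/app/static/images"   # placeholder path

    @app.route("/download/<path:filename>")
    def download_image(filename):
        # Served with Content-Disposition: attachment, so the browser
        # offers a download instead of rendering the image inline.
        return send_from_directory(IMAGE_DIR, filename, as_attachment=True)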
In order to demonstrate the security feature of Oracle one has to call OCIServerVersion() or OCIServerRelease() when the user session has not yet been established. While having the database parameter sec_return_server_release_banner = false. I am using Python cx_Oracle module for this, but I am not sure how to get the server version before establishing the connection. Any ideas?
7
0
0
0
false
21,155,146
0
1,064
1
0
0
9,389,381
Without establishing a connection, no, you can never ask for anything. It's like going to the Google page (internet architecture, whether you call it sessionless or session-based). As for authentication, if no permissions are set, Oracle uses the username 'nobody' and thus gives every user a session. I am a user of Oracle APEX, and I use Python and PL/SQL regularly. That's one nice question. Thanks.
1
0
0
python cx_oracle and server information
2
python,cx-oracle
0
2012-02-22T05:08:00.000
I'd like to add a feature to my behind the firewall webapp that exposes and ODBC interface so users can connect with a spreadsheet program to explore our data. We don't use a RDBMS so I want to emulate the server side of the connection. I've searched extensively for a library or framework that helps to implement the server side component of an ODBC connection with no luck. Everything I can find is for the other side of the equation - connecting one's client program to a database using an ODBC driver. It would be great to use Python but at this point language preference is secondary, although it does have to run on *nix.
1
0
0
0
false
9,432,486
0
59
1
0
0
9,432,332
The server side of ODBC is already done, it is your RDBMS. ODBC is a client side thing, most implementations are just a bridge between the ODBC interface and the native client interface for you-name-your-RDBMS-here. That is why you will not find anything about the server side of ODBC... :-) Implementing an RDBMS (even with a subset of SQL) is no easy quest. My advice is to expose your underlying database storage; the best solution depends on what database you are using. If it's a read-only interface, expose a database mirror using some sort of asynchronous replication. If you want it read/write, trust me, you'd better not. If your customer is savvy, expose an API; if he isn't, you don't want him fiddling with your database. :-) [updated] If your data is not stored on an RDBMS, IMHO there is no point in exposing it through a relational interface like ODBC. The advice to use some sort of asynchronous replication with a relational database is still valid and probably the easiest approach. Otherwise you will have to reinvent the wheel, implementing an SQL parser, network connection, authentication and related logic. If you think it's worth it, go for it!
1
0
0
Are there any libraries (or frameworks) that aid in implementing the server side of ODBC?
1
python,odbc
0
2012-02-24T14:26:00.000
I am using Plone to build my site. In one page template, I have <input type="file" name="file"> and this form: <form method="post" action="addintoDb" enctype="multipart/form-data">. The addintoDb is a python script that saves my information into the db: context.addParam(name=request.name, path=request['file']). In my db I have the name, and in the path I get <ZPublisher.HTTPRequest.FileUpload instance at 0x081F98C8>, but I want to have the path where the file was uploaded (like c:...). I hope somebody can help me.
2
1
0.197375
0
false
9,456,924
1
890
1
0
0
9,446,769
You are not saving the file on the filesystem, but in the Zope object database. You'd have to use python code (not a python script) to open a filepath with the open built-in function to save the data to.
1
0
0
upload file with python script in plone
1
python,plone,zope
0
2012-02-25T18:27:00.000
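A rough sketch of what the answer above means, with an assumed target directory; the FileUpload object gives you the uploaded bytes and the client-side filename, and where it ends up on disk is entirely up to your code. Run this from python code (a browser view or external method), not a restricted Python Script.
    import os

    upload = request['file']                        # ZPublisher FileUpload
    filename = os.path.basename(getattr(upload, 'filename', 'upload.bin'))
    target_dir = '/var/plone/uploads'               # assumed directory
    path = os.path.join(target_dir, filename)
    with open(path, 'wb') as out:
        out.write(upload.read())
    # Store the filesystem path in the db instead of the FileUpload object.
    context.addParam(name=request.name, path=path)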
I have a handful of servers all connected over WAN links (moderate bandwidth, higher latency) that all need to be able to share info about connected clients. Each client can connect to any of the servers in the 'mesh'. Im looking for some kind of distributed database each server can host and update. It would be important that each server is able to get updated with the current state if its been offline for any length of time. If I can't find anything, the alternative will be to pick a server to host a MySQL DB all the servers can insert to; but I'd really like to remove this as a single-point-of-failure if possible. (and the downtime associated with promoting a slave to master) Is there any no-single-master distributed data store you have used before and would recommend? It would most useful if any solution has Python interfaces.
1
0
0
0
false
9,466,181
0
981
1
1
0
9,456,954
What you describe reminds me of an Apache Cassandra cluster configured so that each machine hosts a copy of the whole dataset and reads and writes succeed when they reach a single node (I never did that, but I think it's possible). Nodes should be able to remain functional when WAN links are down and receive pending updates as soon as they get back on-line. Still, there is no magic - if conflicting updates are issued on different servers or outdated replicas are used to generate new data, consistency problems will arise on any architecture you select. A second issue is that for every local write, you'll get n-1 remote writes and your servers may spend a lot of time and bandwidth debating who has the latest record. I strongly suggest you fire up a couple EC2 instances and play with their connectivity to check if everything works the way you expect. This seems to be in the "creative misuse" area and your mileage may vary wildly, if you get any at all.
1
0
0
Distributed state
5
python,database,linux,datastore,distributed-system
0
2012-02-26T20:41:00.000
I am playing around with python and thought I would make a simple language learning program... ie: languageA | languageB | type of word | synonym | antonym | basically flash cards... I have made a crude version using python and json, and have just started playing with sqlite3... Is a database a better way to organize the information, and for pulling things out and referencing against each other and against user input? Or would it be easier to use nested dictionaries?
2
0
0
0
false
9,484,859
0
371
2
0
0
9,484,814
If your data fits in memory and you only require to access elements by key, a dictionary is probably just enough for your needs.
1
0
1
json or sqlite3 for a dictionary
2
python,json,sqlite
0
2012-02-28T15:32:00.000
I am playing around with python and thought I would make a simple language learning program... ie: languageA | languageB | type of word | synonym | antonym | basically flash cards... I have made a crude version using python and json, and have just started playing with sqlite3... Is a database a better way to organize the information, and for pulling things out and referencing against each other and against user input? Or would it be easier to use nested dictionaries?
2
1
1.2
0
true
9,485,889
0
371
2
0
0
9,484,814
Who is going to modify your data? If you plan to only modify the word set yourself (as a developer, not a user of an application), you can use JSON to keep the data on the disk If you want to allow users of your application to add/edit/remove flashcards, you should use a database (sqlite3 is OK), because otherwise you would have to save the whole data file after each small change made by the user. You could, of course, split the data into separate JSON files, add thread locks, etc., but that's what database engines are for.
1
0
1
json or sqlite3 for a dictionary
2
python,json,sqlite
0
2012-02-28T15:32:00.000
I am fetching records from a GAE model using cursor() and with_cursor() logic as used in paging, but I am not sure how to check that there is no other record in the db pointed to by the cursor. I am fetching these records in chunks over several iterations. When I get my required results in the first iteration, then in the next iteration I want to check that there is no record left in the model, but I do not get any empty/None value for the cursor at this stage. Please let me know how to perform this check with cursors in Google App Engine with Python.
0
0
0
0
false
9,521,520
1
550
1
1
0
9,521,289
I am not 100% sure about that, but what I used to do is compare the last cursor with the current cursor; I think I noticed that they were the same, so I came to the conclusion that it was the last cursor.
1
0
0
cursor and with_cursor() in GAE
3
python,google-app-engine
0
2012-03-01T17:47:00.000
I'm new to Django and have only been using sqlite3 as a database engine in Django. Now one of the applications I'm working on is getting pretty big, both in terms of models' complexity and requests/second. How do database engines supported by Django compare in terms of performance? Any pitfalls in using any of them? And the last but not least, how easy is it to switch to another engine once you've used one for a while?
48
6
1
0
false
9,540,685
1
41,120
1
0
0
9,540,154
MySQL and PostgreSQL work best with Django. I would highly suggest that when you choose one, you change your development settings to use it while developing (as opposed to using sqlite3 in dev mode and a "real" database in prod), as there are subtle behavioral differences that can cause lots of headaches in the future.
1
0
0
Which database engine to choose for Django app?
4
python,database,django,sqlite
0
2012-03-02T20:49:00.000
I am trying to think through a script that I need to create. I am most likely going to be using php unless there would be a better language to do this with e.g. python or ror. I only know a little bit of php so this will definitely be a learning experience for me and starting fresh with a different language wouldn't be a problem if it would help in the long run. What I am wanting to do is create a website where people can sign up for WordPress hosting. Right now I have the site set up with WHMCS. If I just leave it how it is I will have manually go in and install WordPress every time a customer signs up. I would like an automated solution that creates a database and installs WordPress as soon as the customer signs up. With WHMCS I can run a script as soon as a customer signs up and so far I understand how to create a database, download WordPress, and install WordPress. The only thing is I can't figure out how to make it work with more than one customer because with each customer there will be a new database. What I need the script to do is when customer A signs up, the script will create a database name "customer_A" (that name is just an example) and when, lets say my second customer signs up, the script will create a database named "customer_B". Is there a possible solution to this? Thanks for the help
0
0
0
0
false
9,543,194
0
90
1
0
0
9,543,171
I did this yesterday. My process was to add a row to a master accounts table, get the auto-increment id, and use that along with the company name to create the db name. So in my case the db's are Root_1companyname1, Root_2companyname2, ... The Root_ prefix is optional of course. Ask if you have any questions.
1
0
0
Automate database creation with incremental name?
1
php,python,wordpress
1
2012-03-03T03:30:00.000
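The flow described in the answer above, sketched in Python with MySQLdb; the table, column names and credentials are all hypothetical (the question itself is PHP-based, so this is just the idea, not the actual WHMCS hook).
    import re
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="root", passwd="secret",
                           db="master")              # placeholder credentials
    cur = conn.cursor()

    def provision(company):
        cur.execute("INSERT INTO accounts (company) VALUES (%s)", (company,))
        conn.commit()
        account_id = cur.lastrowid                   # the auto-increment id
        safe = re.sub(r"[^A-Za-z0-9]", "", company).lower()
        db_name = "customer_%d_%s" % (account_id, safe)
        # db_name is built only from the id and a sanitised slug, so it is
        # safe to interpolate into the DDL statement.
        cur.execute("CREATE DATABASE `%s`" % db_name)
        return db_name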
I have a Python Flask app I'm writing, and I'm about to start on the backend. The main part of it involves users POSTing data to the backend, usually a small piece of data every second or so, to later be retrieved by other users. The data will always be retrieved within under an hour, and could be retrieved in as low as a minute. I need a database or storage solution that can constantly take in and store the data, purge all data that was retrieved, and also perform a purge on data that's been in storage for longer than an hour. I do not need any relational system; JSON/key-value should be able to handle both incoming and outgoing data. And also, there will be very constant reading, writing, and deleting. Should I go with something like MongoDB? Should I use a database system at all, and instead write to a directory full of .json files constantly, or something? (Using only files is probably a bad idea, but it's kind of the extent of what I need.)
3
1
1.2
0
true
9,545,480
1
168
1
0
0
9,544,618
You might look at mongoengine; we use it in production with Flask (there's an extension) and it has suited our needs well. There's also mongoalchemy, which I haven't tried but seems to be decently popular. The downside to using Mongo is that there is no automatic expiry; having said that, you might take a look at using Redis, which has the ability to auto-expire items. There are a few ORMs out there that might suit your needs.
1
0
0
In need of a light, changing database/storage solution
1
python,database,flask
0
2012-03-03T08:17:00.000
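A tiny sketch of the Redis auto-expiry idea mentioned above, with hypothetical key names; each piece of posted data gets a one-hour TTL so anything not retrieved is purged automatically, and retrieved items are deleted explicitly.
    import json
    import redis

    r = redis.StrictRedis(host="localhost", port=6379, db=0)

    def store(item_id, payload):
        key = "item:%s" % item_id
        r.set(key, json.dumps(payload))
        r.expire(key, 3600)          # purge automatically after an hour

    def fetch(item_id):
        key = "item:%s" % item_id
        raw = r.get(key)
        if raw is not None:
            r.delete(key)            # purge as soon as it has been read
            return json.loads(raw)
        return None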
there's something I'm struggling to understand with SQLAlchemy from its documentation and tutorials. I see how to autoload classes from a DB table, and I see how to design a class and create from it (declaratively or using the mapper()) a table that is added to the DB. My question is how does one write code that both creates the table (e.g. on first run) and then reuses it? I don't want to have to create the database with one tool or one piece of code and have separate code to use the database. Thanks in advance, Peter
0
0
0
0
false
9,554,925
0
77
1
0
0
9,554,204
I think you're perhaps over-thinking the situation. If you want to create the database afresh, you normally just call Base.metadata.create_all() or equivalent, and if you don't want to do that, you don't call it. You could try calling it every time and handling the exception if it goes wrong, assuming that the database is already set up. Or you could try querying for a certain table and if that fails, call create_all() to put everything in place. Every other part of your app should work in the same way whether you perform the db creation or not.
1
0
0
SQLAlchemy Database Construction & Reuse
2
python,database,sqlalchemy
0
2012-03-04T10:41:00.000
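A minimal sketch of the create-and-reuse pattern described in the answer above, with a made-up table; Base.metadata.create_all() only creates tables that don't already exist, so the same code path works both on first run and on every run after that.
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Address(Base):                 # hypothetical table
        __tablename__ = "addresses"
        id = Column(Integer, primary_key=True)
        street = Column(String(200))

    engine = create_engine("sqlite:///app.db")
    Base.metadata.create_all(engine)     # no-op if the tables already exist
    Session = sessionmaker(bind=engine)
    session = Session()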
I'm developing a multi-player game in Python with a Flask frontend, and I'm using it as an opportunity to learn more about the NoSQL way of doing things. Redis seems to be a good fit for some of the things I need for this app, including storage of server-side sessions and other transient data, e.g. what games are in progress, who's online, etc. There are also several good Flask/Redis recipes that have made things very easy so far. However, there are still some things in the data model that I would prefer lived inside a traditional RDBMS, including user accounts, logs of completed games, etc. It's not that Redis can't do these things, but I just think the RDBMS is more suited to them, and since Redis wants everything in memory, it seems to make sense to "warehouse" some of this data on disk. The one thing I don't quite have a good strategy for is how to make these two data stores live happily together. Using ORMs like SQLAlchemy and/or redisco seems right out, because the ORMs are going to want to own all the data that's part of their data model, and there are inevitably times I'm going to need to have classes from one ORM know about classes from the other one (e.g. "users are in the RDBMS, but games are in Redis, and games have users participating in them.) Does anyone have any experience deploying python web apps using a NoSQL store like Redis for some things and an RDBMS for others? If so, do you have any strategies for making them work together?
4
3
1.2
0
true
9,557,895
1
532
1
0
0
9,557,552
You should have no problem using an ORM because, in the end, it just stores strings, numbers and other values. So you could have a game in progress, and keep its state in Redis, including the players' IDs from the SQL player table, because the ID is just a unique integer.
1
0
0
Redis and RDBMS coexistence (hopefully cooperation) in Flask applications
1
python,nosql,redis,rdbms,flask
0
2012-03-04T18:17:00.000
I have a game where each player has a score. I would like to have a global scoreboard where players can compare their scores, see how well they are placed and browse the scoreboard. Unfortunately I cannot find an efficient way to program this: storing the current player position in the scoreboard means I have to update a large part of the scoreboard when a player improves his score, and not storing the position means I have to recompute it each time I need it (which would also require a lot of computations). Is there a better solution to this problem? Or is one of the above solutions "good enough" to be used practically with a lot of users and a lot of updates?
0
0
0
0
false
9,576,946
0
857
1
0
0
9,576,578
The ORDER BY clause was made for that and doesn't look so slow.
1
0
0
Scoreboard using Python and SQL
3
python,sql
0
2012-03-06T01:20:00.000
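One way to act on the ORDER BY suggestion above without storing positions, sketched with sqlite3 and a hypothetical scores table; the board is paged with ORDER BY/LIMIT, and a single player's rank is derived by counting strictly higher scores.
    import sqlite3

    conn = sqlite3.connect("game.db")    # placeholder database
    cur = conn.cursor()

    def scoreboard_page(page, per_page=20):
        cur.execute("SELECT player, score FROM scores "
                    "ORDER BY score DESC LIMIT ? OFFSET ?",
                    (per_page, page * per_page))
        return cur.fetchall()

    def rank_of(player):
        # 1 + number of players with a strictly better score.
        cur.execute("SELECT 1 + COUNT(*) FROM scores "
                    "WHERE score > (SELECT score FROM scores WHERE player = ?)",
                    (player,))
        return cur.fetchone()[0]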
I have a table address. This table is constantly getting new row inserts, approx 1 row per second. Let's call it Process1. In parallel, I need to iterate over SELECT * from address results inserted till now via Process1. This is Process2. It should wait for Process1 to insert new rows if it reaches the end, i.e., there are no more rows to process (iterate) in address. Both Process1 and Process2 are very long. Several hours or maybe days. What should Process2 look like in python?
0
0
1.2
0
true
9,607,771
0
112
1
0
0
9,607,711
Add a TIMESTAMP column and select rows with a newer timestamp than the latest processed.
1
0
0
python dynamically select rows from mysql
1
python,mysql
0
2012-03-07T19:21:00.000
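A hedged sketch of what Process2 could look like following the TIMESTAMP suggestion above; the table name comes from the question, while the column names, credentials and handler are invented. Committing before each poll matters with InnoDB so the reader's snapshot picks up rows Process1 has committed in the meantime.
    import time
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                           db="mydb")               # placeholder credentials

    def handle(row):
        pass                                         # your processing here

    def process2(poll_seconds=5):
        last_seen = "1970-01-01 00:00:00"
        while True:
            conn.commit()                            # refresh the snapshot
            cur = conn.cursor()
            cur.execute("SELECT id, created_at, data FROM address "
                        "WHERE created_at > %s ORDER BY created_at",
                        (last_seen,))
            rows = cur.fetchall()
            if not rows:
                time.sleep(poll_seconds)             # wait for Process1
                continue
            for row in rows:
                handle(row)
                last_seen = row[1]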
I am trying rewrite a simple Rails application I made a while ago with cherrypy and Python3. So far I have been unable to find a Python replacement for ActiveRecord (the persistence part of the application). Most of the recommendations I've found on StackOverflow have been for SQL Alchemy. I looked into this and it seems much too complicated to get up and running. After reading its online docs and a book from Amazon, It's still not clear how to even proceed; not a good sign. So my question is, what are developers using to persist data in their python3 web applications? Also, I looked into Django but python3 is a requirement so that's out. Thanks
2
0
0
0
false
9,832,722
1
456
2
0
0
9,678,989
I have developed a transparent persistent storage system for Python; it is currently in an alpha stage. Once you create a persistent object, you can access and modify its attributes using standard python syntax (obj.x=3;) and the persistence is done behind the scenes (by overloading the setattr methods, etc.). Contact me if you are interested in learning more. -Stefan
1
0
0
Persistence for a python (cherrypy) web application?
3
python,web-applications,persistence,cherrypy
0
2012-03-13T05:49:00.000
I am trying rewrite a simple Rails application I made a while ago with cherrypy and Python3. So far I have been unable to find a Python replacement for ActiveRecord (the persistence part of the application). Most of the recommendations I've found on StackOverflow have been for SQL Alchemy. I looked into this and it seems much too complicated to get up and running. After reading its online docs and a book from Amazon, It's still not clear how to even proceed; not a good sign. So my question is, what are developers using to persist data in their python3 web applications? Also, I looked into Django but python3 is a requirement so that's out. Thanks
2
1
0.066568
0
false
9,679,132
1
456
2
0
0
9,678,989
SQLAlchemy is the industry standard; there is really no other choice. But it's not as difficult as it seems at first sight.
1
0
0
Persistence for a python (cherrypy) web application?
3
python,web-applications,persistence,cherrypy
0
2012-03-13T05:49:00.000
I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. Is this correct? If so, is there still a charge for datastore reads on these queries?
3
1
0.066568
0
false
9,689,883
1
1,313
2
1
0
9,689,588
I think that app engine does not cache anything for you. While it could be that, internally, it caches some things for a split second, I don't think you should rely on that. I think you will be charged the normal number of read operations for every entity you read from every query.
1
0
0
Does app engine automatically cache frequent queries?
3
python,google-app-engine,memcached,bigtable
0
2012-03-13T18:06:00.000
I seem to remember reading somewhere that google app engine automatically caches the results of very frequent queries into memory so that they are retrieved faster. Is this correct? If so, is there still a charge for datastore reads on these queries?
3
1
0.066568
0
false
9,690,080
1
1,313
2
1
0
9,689,588
No, it doesn't. However depending on what framework you use for access to the datastore, memcache will be used. Are you developing in java or python? On the java side, Objectify will cache GETs automatically but not Queries. Keep in mind that there is a big difference in terms of performance and cachability between gets and queries in both python and java. You are not charged for datastore reads for memcache hits.
1
0
0
Does app engine automatically cache frequent queries?
3
python,google-app-engine,memcached,bigtable
0
2012-03-13T18:06:00.000
I have a spreadsheet with about 1.7m lines, totalling 1 GB, and need to perform various queries on it. Being most comfortable with Python, my first approach was to hack together a bunch of dictionaries keyed in a way that would facilitate the queries I was trying to make. E.g. if I needed to be able to access everyone with a particular area code and age, I would make an areacode_age 2-dimensional dict. I ended up needing quite a few of these, which multiplied my memory footprint (to the order of ~10GB), and even though I had enough RAM to support this, the process was still quite slow. At this point, it seemed like I was playing a sucker's game. "Well this is what relational databases were made for, right?", I thought. I imported sqlite3 and imported my data into an in-memory database. I figure databases are built for speed and this will solve my problems. It turns out though, that doing a query like "SELECT (a, b, c) FROM foo WHERE date1<=d AND date2>e AND name=f" takes 0.05 seconds. Doing this for my 1.7m rows would take 24 hours of compute time. My hacky approach with dictionaries was about 3 orders of magnitude faster for this particular task (and, in this example, I couldn't key on date1 and date2 obviously, so I was getting every row that matched name and then filtering by date). So, my question is, why is this so slow, and how can I make it fast? And what is the Pythonic approach? Possibilities I've been considering: sqlite3 is too slow, and I need something more heavyweight I need to somehow change my schema or my queries to be more... optimized? the approaches I've tried so far are entirely wrong and I need a whole new tool of some kind I read somewhere that, in sqlite 3, doing repeated calls to cursor.execute is much slower than using cursor.executemany. It turns out that executemany isn't even compatible with select statements though, so I think this was a red herring. Thanks.
2
4
1.2
0
true
9,695,095
0
572
1
0
0
9,694,967
sqlite3 is too slow, and I need something more heavyweight: First, sqlite3 is fast, sometimes faster than MySQL. Second, you have to use an index; putting a compound index on (date1, date2, name) will speed things up significantly.
1
0
0
Querying (pretty) big relational data in Python in a reasonable amount of time?
3
python,database,sqlite,indexing,bigdata
0
2012-03-14T02:05:00.000
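A sketch of the indexing suggestion above with sqlite3, assuming the foo table and columns from the question already exist; the parameter values are placeholders. For a WHERE clause with an equality on name and ranges on the dates, leading the compound index with name usually helps most, but the exact column order is something to verify with EXPLAIN QUERY PLAN on your own data.
    import sqlite3

    conn = sqlite3.connect("data.db")      # or ":memory:" as in the question
    cur = conn.cursor()

    # Compound index covering the query's filter columns (name first because
    # it is the equality predicate; check with EXPLAIN QUERY PLAN).
    cur.execute("CREATE INDEX IF NOT EXISTS idx_foo_name_dates "
                "ON foo (name, date1, date2)")

    d, e, f = "2012-01-01", "2011-01-01", "smith"   # placeholder parameters
    cur.execute("SELECT a, b, c FROM foo "
                "WHERE date1 <= ? AND date2 > ? AND name = ?",
                (d, e, f))
    rows = cur.fetchall()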
I am new to Python and I want to read an Office 2010 Excel file without changing its style. Currently it is working fine, but it changes the date format. I want the dates as they are in the Excel file.
0
1
0.197375
0
false
9,757,506
0
719
1
0
0
9,757,361
i want it as they are in excel file. A date is recorded in an Excel file (both 2007+ XLSX files and earlier XLS files) as a floating point number of days (and fraction thereof) since some date in 1899/1900 or 1904. Only the "number format" that is recorded against the cell can be used to distinguish whether a date or a number was intended. You will need to be able to retrieve the actual float value and the "number format" and apply the format to the float value. If the "number format" being used is one of the standard ones, this should be easy enough to do. Customised number formats are another matter. Locale-dependant formats likewise. To get detailed help, you will need to give examples of what raw data you have got and what you want to "see" and how it is now being presented ("changing date format").
1
0
1
How to read office 2010 excelfile using openpyxl without changing style
1
python,openpyxl
0
2012-03-18T09:49:00.000
I'm creating a python app using a relatively big SQL database (250k rows). The application needs a GUI where the most important part would be to present results of SQL queries. So I'm looking for the best way to quickly present data in tables in the GUI. Most preferably I'd be using wx, as it has a seamless connection to the main application I'm working with. And what I need is the least effort between an SQL query and populating the GUI table. I used wx.grid once, but it seemed to have limited functionality. Also I know of wx.grid.pygridtablebase - what is the difference? What would be the easiest way to do this?
1
1
0.197375
0
false
9,771,997
0
420
1
0
0
9,762,841
You could use wx.grid or one of the ListCtrls. There's an example of a grid with 100 million cells in the wxPython demo that you could use for guidance on projects with lots of information. For ListCtrls, you would want to use a Virtual ListCtrl using the wx.LC_VIRTUAL flag. There's an example of that in the demo as well.
1
1
0
Most seamless way to present data in gui
1
python,sqlite,user-interface,wxpython,wxwidgets
0
2012-03-18T22:22:00.000
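A small sketch of the virtual ListCtrl route mentioned in the answer above; the class name is invented and the rows would typically come straight from cursor.fetchall(). The control only asks for the cells it actually draws, which keeps large result sets responsive.
    import wx

    class ResultList(wx.ListCtrl):
        def __init__(self, parent, headers, rows):
            wx.ListCtrl.__init__(self, parent,
                                 style=wx.LC_REPORT | wx.LC_VIRTUAL)
            self.rows = rows
            for i, title in enumerate(headers):
                self.InsertColumn(i, title)
            self.SetItemCount(len(rows))   # tell the control how many rows exist

        def OnGetItemText(self, item, col):
            # Called lazily, only for the cells currently visible.
            return str(self.rows[item][col])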
>>> _cursor.execute("select * from bitter.test where id > 34") 1L >>> _cursor.fetchall() ({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},) >>> _cursor.execute("select * from bitter.test where id > 34") 1L >>> _cursor.fetchall() ({'priority': 1L, 'default': 0, 'id': 35L, 'name': 'chinanet'},) >>> the first time, i run cursor.execute and cursor.fetchall, i got the right result. before the second time i run execute and fetchall i insert data into mysql which id id 36, i also run commit command in mysql but cursor.execute/fetchall counld only get the data before without new data
2
2
1.2
0
true
9,765,239
0
677
1
0
0
9,764,963
I guess you're using InnoDB. This is default for an InnoDB transaction. REPEATABLE READ This is the default isolation level for InnoDB. For consistent reads, there is an important difference from the READ COMMITTED isolation level: All consistent reads within the same transaction read the snapshot established by the first read. This convention means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 13.2.8.2, “Consistent Nonlocking Reads”. I haven't tested yet but forcing MySQLdb to start a new transaction by issuing a commit() on the current connection or create a new connection might solve the issue.
1
0
0
cursor fetch wrong records from mysql
2
python,mysql-python
0
2012-03-19T04:13:00.000
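The two fixes suggested in the answer above, sketched for MySQLdb with placeholder credentials: either end the reader's transaction before re-querying, or switch that session to READ COMMITTED so each SELECT sees the latest committed rows.
    import MySQLdb
    import MySQLdb.cursors

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                           db="bitter")             # placeholder credentials
    cur = conn.cursor(MySQLdb.cursors.DictCursor)

    # Option 1: end the implicit transaction so the next SELECT starts a
    # fresh snapshot and sees rows committed by other connections.
    conn.commit()
    cur.execute("SELECT * FROM bitter.test WHERE id > 34")
    print(cur.fetchall())

    # Option 2: relax the isolation level for this session instead.
    cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")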
Is there a new way to connect to MySQL from Python with Mac OS X Lion (10.7.x)? All the material I can find only seems to support Snow Leopard (10.6) and older. I've tried installing pyodbc, but can't get the odbc drivers to register with the operating system (maybe a 10.6 -> 10.7 compatibility issue?)
1
0
1.2
0
true
9,792,010
0
391
1
0
0
9,791,587
Turns out the newest MySql_python worked great. just had to run sudo python setup.py install
1
0
0
Python MySQL On Mac OS X Lion
2
python,mysql,macos
0
2012-03-20T17:09:00.000
Are there any alternative to xlrd, xlwt and xlutils for handling MS Excel in python? As far as I know, their licensing does not allow it to be used for commercial purpose and I was wondering if there are any alternative to that other than using COM.
2
1
0.099668
0
false
10,435,892
0
3,189
1
0
0
9,805,426
openpyxl is definitely worth a test drive, but keep in mind that it supports only XLSX files, while xlrd/xlwt support only XLS files.
1
0
0
Alternative to xlrd, xlwt and xlutils in python
2
python
0
2012-03-21T13:17:00.000
I am writing a music catalogue application with PyQt using to display GUI. I have a problem about choosing database engine. There are simply too many options. I can use: -PyQt build-in QSql -sqlite3 -SQLAlchemy (Elixir) -SQLObject -Python DB-API Probably there are far more options, this list is what I got from google (I'm open for any other propositions). If I decide to use some ORM which database system should I use? MySql, PostgreSQL or other? I know some MySql, but I heard a lot of good thing about PostgreSQL, on the other hand sqlite3 seems be most popular in desktop applications. I would be grateful for any advice. EDIT: An application is meant to work on Linux and Windows. I think database size should be around 100-10k entries.
2
1
0.099668
0
false
9,860,250
0
475
2
0
0
9,859,343
You would be better off using an ORM (Object-Relational Mapping) library that would allow you to design in an OOP way and let it take care of the database for you. There are many advantages, but one of the greatest is that you won't be tied to a database engine. You can use sqlite for development and keep your project compatible with PostgreSQL, MySQL and even Oracle DB, depending on a single change in a configuration parameter. Given that, my ORM of choice is SQLAlchemy, due to its maturity and for being well known and used (but others could be fine as well).
1
0
0
Databases and python
2
python,database,sqlite,sqlalchemy
0
2012-03-25T10:09:00.000
I am writing a music catalogue application with PyQt using to display GUI. I have a problem about choosing database engine. There are simply too many options. I can use: -PyQt build-in QSql -sqlite3 -SQLAlchemy (Elixir) -SQLObject -Python DB-API Probably there are far more options, this list is what I got from google (I'm open for any other propositions). If I decide to use some ORM which database system should I use? MySql, PostgreSQL or other? I know some MySql, but I heard a lot of good thing about PostgreSQL, on the other hand sqlite3 seems be most popular in desktop applications. I would be grateful for any advice. EDIT: An application is meant to work on Linux and Windows. I think database size should be around 100-10k entries.
2
4
0.379949
0
false
9,859,423
0
475
2
0
0
9,859,343
SQLite3 has the advantage of shipping with Python, so it doesn't require any installation. It has a number of nice features (easy-of-use, portability, ACID, storage in a single file, and it is reasonably fast). SQLite makes a good starting-off point. Python's DB API assures a consistent interface to all the popular DBs, so it shouldn't be difficult to switch to another DB later if you change you mind. The decision about whether to use an ORM is harder and it is more difficult to change your mind later. If you can isolate the DB access in just a few functions, then you may not need an ORM at all.
1
0
0
Databases and python
2
python,database,sqlite,sqlalchemy
0
2012-03-25T10:09:00.000
I have a small issue(for lack of a better word) with MySQL db. I am using Python. So I have this table in which rows are inserted regularly. As regularly as 1 row /sec. I run two Python scripts together. One that simulates the insertion at 1 row/sec. I have also turned autocommit off and explicitly commit after some number of rows, say 10. The other script is a simple "SELECT count(*) ..." query on the table. This query doesn't show me the number of rows the table currently has. It is stubbornly stuck at whatever number of rows the table had initially when the script started running. I have even tried "SELECT SQL_NO_CACHE count(*) ..." to no effect. Any help would be appreciated.
2
1
1.2
0
true
9,868,793
0
392
2
0
0
9,866,319
If autocommit is turned off in the reader as well, then it will be doing the reads inside a transaction and thus not seeing the writes the other script is doing.
1
0
0
Python MySQL- Queries are being unexpectedly cached
3
python,mysql
0
2012-03-26T03:37:00.000
I have a small issue(for lack of a better word) with MySQL db. I am using Python. So I have this table in which rows are inserted regularly. As regularly as 1 row /sec. I run two Python scripts together. One that simulates the insertion at 1 row/sec. I have also turned autocommit off and explicitly commit after some number of rows, say 10. The other script is a simple "SELECT count(*) ..." query on the table. This query doesn't show me the number of rows the table currently has. It is stubbornly stuck at whatever number of rows the table had initially when the script started running. I have even tried "SELECT SQL_NO_CACHE count(*) ..." to no effect. Any help would be appreciated.
2
0
0
0
false
9,867,231
0
392
2
0
0
9,866,319
My guess is that either the reader or writer (most likely the writer) is operating inside a transaction which hasn't been committed. Try ensuring that the writer is committing after each write, and try a ROLLBACK from the reader to make sure that it isn't inside a transaction either.
1
0
0
Python MySQL- Queries are being unexpectedly cached
3
python,mysql
0
2012-03-26T03:37:00.000
I have several Shelve i.e. .db files that I wish to merge together into one single database. The only method I could think of was to iterate through each database rewriting each iteration to the new database, but this takes too long. Is there a better way to do this?
0
0
0
0
false
9,915,108
0
320
1
0
0
9,915,062
Shelves are mappings, and mappings have an update() method.
1
0
0
How can I merge Shelve files/databases?
1
python,database,shelve
0
2012-03-28T20:20:00.000
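A short sketch of the update() suggestion from the answer above; the filenames are placeholders. Note that for overlapping keys the last shelf processed wins, which is exactly dict.update() semantics.
    import shelve

    sources = ["part1.db", "part2.db", "part3.db"]   # placeholder filenames
    merged = shelve.open("merged.db")
    for path in sources:
        src = shelve.open(path, flag="r")
        merged.update(src)     # Shelf is a mapping, so update() just works
        src.close()
    merged.close()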
I have some *.xls (excel 2003) files, and I want to convert those files into xlsx (excel 2007). I use the uno python package, when I save the documents, I can set the Filter name: MS Excel 97 But there is no Filter name like 'MS Excel 2007', How can set the the filter name to convert xls to xlsx ?
43
0
0
0
false
67,111,357
0
99,140
1
0
0
9,918,646
This is a solution for MacOS with old xls files (e.g. Excel 97-2004). The best way I found to deal with this format, if Excel is not an option, is to open the file in OpenOffice and save it to another format, such as CSV.
1
0
0
how to convert xls to xlsx
17
python,uno
0
2012-03-29T03:20:00.000
I have to create Excel spreadsheet with nice format from Python. I thought of doing it by: I start in Excel as it is very easy to format: I write in Excel the model I want, with the good format I read this from Python I create from Python an Excel spreadsheet with the same format In the end, the purpose is to create from Python Excel spreadsheets, but formatting with xlwt takes a lot of time, so I thought of formatting first in Excel to help. I have researched for easy ways to doing this but haven't found any. I can stick to my current working solution, using xlwt in Python to create formatted Excel, but it is quite awkward to use. Thanks for any reply
3
0
0
0
false
10,001,613
0
4,524
1
0
0
9,920,935
You said: formatting with xlwt takes a lot of time and it is quite awkward to use Perhaps you are not using easyxf? If so, check out the tutorial that you can access via www.python-excel.org, and have a look at examples/xlwt_easyxf_simple_demo.py in your xlwt installation.
1
0
0
Easily write formatted Excel from Python: Start with Excel formatted, use it in Python, and regenerate Excel from Python
2
python,excel,format,xlwt
0
2012-03-29T07:32:00.000
I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance. Currently, I only know PHP and Python, but I wonder if Python is good for scalability. Or is this even possible in Python? The reasons I don't use an existing system is, I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design) EDIT: I already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, what limits the use of other databases even more. EDIT 2: Forgot to note, the database tables are several hundred gigabytes.
0
1
1.2
0
true
9,927,520
0
284
3
0
0
9,927,372
The development of a scalable database is language-independent. I cannot say much about PHP, but I can tell you good things about Python: it's easy to read, easy to learn, etc. In my opinion it makes the code much cleaner than other languages.
1
0
0
Programming a scalable database
4
python,programming-languages,database-programming
0
2012-03-29T14:25:00.000
I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance. Currently, I only know PHP and Python, but I wonder if Python is good for scalability. Or is this even possible in Python? The reasons I don't use an existing system is, I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design) EDIT: I already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, what limits the use of other databases even more. EDIT 2: Forgot to note, the database tables are several hundred gigabytes.
0
0
0
0
false
9,927,811
0
284
3
0
0
9,927,372
Since this is clearly a request for "opinion", I thought I'd offer my $.02 We looked at MongoDB 12-months ago, and started to really like it...but for one issue. MongoDB limits the largest database to amount of physical RAM installed on the MongoDB server. For our tests, this meant we were limited to 4 GB databases. This didn't fit our needs, so we walked away (too bad really, because Mongo looked great). We moved back to home turf, and went with PostgreSQL for our project. It is an exceptional system, with lots to like. But we've kept an eye on the NoSQL crowd ever since, and it looks like Riak is doing some really interesting work. (fyi -- it's also possible the MongoDB project has resolved the DB size issue -- we haven't kept up with that project).
1
0
0
Programming a scalable database
4
python,programming-languages,database-programming
0
2012-03-29T14:25:00.000
I want to ask you what programming language I should use to develop a horizontally scalable database. I don't care too much about performance. Currently, I only know PHP and Python, but I wonder if Python is good for scalability. Or is this even possible in Python? The reasons I don't use an existing system is, I need deep insight into the system, and there is no database out there that can store indexes the way I want. (It's a mix of non relational, sparse free multidimensional, and graph design) EDIT: I already have most of the core code written in Python and investigated ways to improve adding data for that type of database design, what limits the use of other databases even more. EDIT 2: Forgot to note, the database tables are several hundred gigabytes.
0
0
0
0
false
9,927,445
0
284
3
0
0
9,927,372
Between PHP & Python, definitely Python. Where I work, the entire system is written in Python and it scales quite well. P.S.: Do take a look at MongoDB though.
1
0
0
Programming a scalable database
4
python,programming-languages,database-programming
0
2012-03-29T14:25:00.000
I have a web application built in Django + Python that interacts with web services (written in Java). All the database management is done by the web services, i.e. all CRUD operations on the actual database are done by the web services. Now I have to track all user activities done on my website in some log table. For example, if a user posts a new article, a new row is created in the Articles table by the web services, and side by side I need to add a new row to the log table, something like "User: Raman has posted a new article (with ID, title etc)". I have to do this for all objects in my database like "Article", "Media", "Comments" etc. Note: I am using PostgreSQL. So what is the best way to achieve this? (Should I do it in PostgreSQL or Java? And how?)
0
0
0
0
false
9,942,327
1
2,693
2
0
0
9,942,206
In your log table you can have various columns, including: user_id (the user that did the action), activity_type (the type of activity, such as view or commented_on), object_id (the actual object that it concerns, such as the Article or Media), and object_type (the type of object; this can be used later, in combination with object_id, to look up the object in the database). This way, you can keep track of all actions the users do. You'd need to update this table whenever something happens that you wish to track.
1
0
0
How to store all user activities in a website..?
5
java,python,django,postgresql,user-activity
0
2012-03-30T11:39:00.000
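To make the suggested log-table layout concrete, here is a minimal sketch of how it could be expressed as a Django model. The model name ActivityLog and the created_at field are illustrative assumptions, not part of the original answer, which only names the four columns.

```python
# Hypothetical Django model for the activity log described above.
from django.db import models
from django.contrib.auth.models import User


class ActivityLog(models.Model):
    """One row per tracked user action (post, comment, upload, ...)."""
    user = models.ForeignKey(User, on_delete=models.CASCADE)   # who did it
    activity_type = models.CharField(max_length=50)            # e.g. "created", "commented_on"
    object_type = models.CharField(max_length=50)              # e.g. "Article", "Media"
    object_id = models.PositiveIntegerField()                  # primary key of the affected object
    created_at = models.DateTimeField(auto_now_add=True)       # when it happened

    def __str__(self):
        return f"{self.user} {self.activity_type} {self.object_type} #{self.object_id}"
```

With this layout, each service call that changes an Article, Media or Comment would also create one ActivityLog row pointing at it via object_type and object_id.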
I have a web application built in Django + Python that interacts with web services (written in Java). All the database management is done by the web services, i.e. all CRUD operations on the actual database are done by the web services. Now I have to track all user activities done on my website in some log table. For example, if a user posts a new article, a new row is created in the Articles table by the web services, and side by side I need to add a new row to the log table, something like "User: Raman has posted a new article (with ID, title etc)". I have to do this for all objects in my database like "Article", "Media", "Comments" etc. Note: I am using PostgreSQL. So what is the best way to achieve this? (Should I do it in PostgreSQL or Java? And how?)
0
1
1.2
0
true
9,942,819
1
2,693
2
0
0
9,942,206
So, you have UI <-> Web Services <-> DB. Since the web services talk to the DB, and the web services contain the business logic (i.e. I guess you validate stuff there, create your queries and execute them), the best place to 'log' activities is in the services themselves. IMO, logging PostgreSQL transactions is a different thing; it's not the same as logging 'user activities'. EDIT: This still means you create a DB schema for 'logs' and write them to the DB. Second EDIT: Catching log-worthy events in the UI and then logging them from there might not be the best idea either. You would have to rewrite the logging if you ever decide to replace the UI, or, for example, write an alternate UI for, say, mobile devices, or something else.
1
0
0
How to store all user activities in a website..?
5
java,python,django,postgresql,user-activity
0
2012-03-30T11:39:00.000
I'm trying to implement a script that reads the content (files and folders) of a certain directory and writes it into a database. My goal is to create software that allows me to organize those files and folders by relating descriptions and tags to them, without affecting the corresponding physical files on disk. But for now I'm facing a logical problem: how do I make a direct connection between a physical file and its record in the database? I want the software, even if the physical file is edited or moved to another folder inside the root directory, to still be able to relate that file to its original record in the database. My first idea was to use a checksum hash to identify every file but, I'm guessing, if the file is edited, so does the hash change, doesn't it? Besides that, I also think that a folder itself can't be checked that way. Another solution that came to my mind was applying a unique key to the beginning of every file and folder name in the directory. That may work, but it seems to me like an improvised solution and, therefore, I'm hoping that there may be another way to do it that I haven't considered yet. Does anyone have any advice on that?
0
0
0
0
false
9,978,197
0
114
1
0
0
9,977,888
You can't. It looks like there is no way to identify the file: neither by content nor by pathname. One workaround might be: use the path as the id (and use it as the reference in the DB) and do not use system tools (like mv) to move files, but your own script which updates the file system and the database.
1
0
0
How to link a file to a database?
2
php,python,database,windows,linux
0
2012-04-02T14:00:00.000
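A rough sketch of that workaround follows, assuming a sqlite3 database with a hypothetical files table keyed by path (the table and column names are made up for illustration): every move goes through one helper that touches both the filesystem and the database.

```python
# Sketch: never move tracked files with mv; use a helper that keeps
# the filesystem and the database row in sync.
import shutil
import sqlite3


def move_file(conn: sqlite3.Connection, old_path: str, new_path: str) -> None:
    shutil.move(old_path, new_path)                  # move on disk
    conn.execute(
        "UPDATE files SET path = ? WHERE path = ?",  # update the DB reference
        (new_path, old_path),
    )
    conn.commit()
```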
I am requesting a web page and want to cache the page data as a raw HTML string. (First I escaped the data string.) I use sqlite3 to save my data. When I tried to give the byte string in a dictionary or tuple, using placeholders in the request, it raised a "Programming Error" saying to convert the application to use unicode strings. I save it as the SQLite3 TEXT datatype. I tried data.encode("utf-8") and encode("utf-8"); both raise the same error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xf6 in position 11777: invalid start byte. I know it contains a strange character, 'ö'. How can I solve this problem? Do I need to use the BLOB datatype of sqlite3?
0
0
0
0
false
9,991,929
1
241
1
0
0
9,991,854
You should .decode with the correct encoding, in this case Latin-1 or CP1252. »ö« is obviously not 0xf6 in UTF-8, so why should it work?
1
0
0
How to convert a stringbyte(raw html string) to sqlite3 TEXT supporting unicode in Python
1
python,unicode,utf-8,sqlite
0
2012-04-03T10:57:00.000
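A minimal sketch of that decode step, assuming the page bytes really are Latin-1 (the table and column names are placeholders): decode the raw bytes to a unicode string first, then hand the string to sqlite3 as TEXT.

```python
# 0xf6 is 'ö' in Latin-1 but an invalid start byte in UTF-8,
# so decode the fetched bytes with the right codec before storing.
import sqlite3

raw_bytes = b"K\xf6ln"                   # example of the problematic byte
page_text = raw_bytes.decode("latin-1")  # -> "Köln", a proper unicode string

conn = sqlite3.connect("cache.db")
conn.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES (?, ?)", ("http://example.com", page_text))
conn.commit()
```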
Ok, so I have a script that connects to an MSSQL DB and I need to run it as a service, which I have already accomplished, but when I run it as a service it overrides the credentials that I put in when I connect to the DB with the AD computer account. It runs perfectly when I run it on its own and not as a service. My connection string is: 'DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDB;UID=DOMAIN\myusername;PWD=A;Trusted_Connection=True' The error is: Error: ('28000', "[28000] [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user 'DOMAIN\COMPUTERNAME') Any advice?
4
2
0.197375
0
false
10,001,004
0
7,735
1
0
0
10,000,256
In the last project I worked on, I found that DRIVER={SQL Server};SERVER=SERVERNAME;DATABASE=DBName is sufficient to initiate a DB connection in trusted mode. If it still does not work, it is probably either 1) the account DEEPTHOUGHT on the MSSQL server is not set up properly, or 2) the runAs account in the service is not set up properly (why does the error message mention 'ComputerName' instead of 'DEEPTHOUGHT'?).
1
0
0
Failed to Login as 'Domain\ComputerName' pyodbc with py2exe
2
python,sql-server,py2exe,pyodbc
0
2012-04-03T19:46:00.000
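For illustration, here is a small pyodbc sketch of a trusted (Windows-authenticated) connection along the lines suggested above; the server and database names are placeholders, and whether it logs in as the intended account still depends on which account the service runs under.

```python
# Trusted connection: no UID/PWD, authenticate as the process's Windows account.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=MyServer;"
    "DATABASE=MyDB;"
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT SYSTEM_USER")   # shows which account actually logged in
print(cursor.fetchone()[0])
```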
I need a unique datastore key for users authenticated via OpenID with the Python 2.7 runtime on Google App Engine. Should I use User.federated_identity() or User.federated_provider() + User.federated_identity()? In other words, is User.federated_identity() unique for ALL providers or just one specific provider?
2
2
1.2
0
true
10,023,490
1
194
1
1
0
10,002,209
User.federated_identity() "Returns the user's OpenID identifier", which is unique by definition (it's a URL that uniquely identifies the user).
1
0
0
Generating a unique data store key from a federated identity
1
python,google-app-engine,authentication
0
2012-04-03T22:13:00.000
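A hedged sketch of how this could look with the legacy Python 2.7 App Engine APIs the question refers to (the Account model and its fields are illustrative assumptions): the OpenID identifier alone is used as the datastore key name.

```python
# Sketch only: assumes the legacy google.appengine users and ndb APIs.
from google.appengine.api import users
from google.appengine.ext import ndb


class Account(ndb.Model):
    nickname = ndb.StringProperty()


def account_for_current_user():
    user = users.get_current_user()           # None if not signed in
    # federated_identity() is the OpenID URL, unique across all providers,
    # so it can serve as the key name by itself.
    return Account.get_or_insert(user.federated_identity())
```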
I'm writing an Oracle of Bacon type website that involves a breadth first search on a very large directed graph (>5 million nodes with an average of perhaps 30 outbound edges each). This is also essentially all the site will do, aside from displaying a few mostly text pages (how it works, contact info, etc.). I currently have a test implementation running in Python, but even using Python arrays to efficiently represent the data, it takes >1.5 GB of RAM to hold the whole thing. Clearly Python is the wrong language for a low-level algorithmic problem like this, so I plan to rewrite most of it in C using the Python/C bindings. I estimate that this'll take about 300 MB of RAM. Based on my current configuration, this will run through mod_wsgi in Apache 2.2.14, which is set to use mpm_worker_module. Each child Apache server will then load up the whole Python setup (which loads the C extension), thus using 300 MB, and I only have 4 GB of RAM. This'll take time to load and it seems like it'd potentially keep the number of server instances lower than it could otherwise be. If I understand correctly, data-heavy (and not client-interaction-heavy) tasks like this would typically get divorced from the server by setting up an SQL database or something of the sort that all the server processes could then query. But I don't know of a database framework that'd fit my needs. So, how to proceed? Is it worth trying to set up a database divorced from the webserver, or in some other way move the application a step farther out than mod_wsgi, in order to maybe get a few more server instances running? If so, how could this be done? My first impression is that the database, and not the server, is always going to be the limiting factor. It looks like the typical Apache mpm_worker_module configuration has ServerLimit 16 anyway, so I'd probably only get a few more servers. And if I did divorce the database from the server I'd have to have some way to run multiple instances of the database as well (I already know that just one probably won't cut it for the traffic levels I want to support) and make them play nice with the server. So I've perhaps mostly answered my own question, but this is a kind of odd situation so I figured it'd be worth seeing if anyone's got a firmer handle on it. Anything I'm missing? Does this implementation make sense? Thanks in advance! Technical details: it's a Django website that I'm going to serve using Apache 2.2.14 on Ubuntu 10.04.
2
1
1.2
0
true
10,020,054
1
214
1
0
0
10,017,645
First up, look at daemon mode of mod_wsgi and don't use embedded mode, as then you can control the number of Python WSGI application processes separately from the Apache child processes. Secondly, you would be better off putting the memory-hungry bits in a separate backend process. You might use XML-RPC or another message queueing system to communicate with the backend processes, or even perhaps see if you can use Celery in some way.
1
0
0
Maximizing apache server instances with large mod_wsgi application
1
python,database,django,apache,mod-wsgi
1
2012-04-04T19:06:00.000
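The separate-backend idea could be wired up in many ways; one rough sketch, assuming an XML-RPC backend that keeps the graph in memory (module path shown is Python 3's xmlrpc.server; Python 2 shipped it as SimpleXMLRPCServer), with a placeholder function standing in for the real BFS:

```python
# One long-lived process holds the memory-hungry graph and exposes the
# search over XML-RPC, so the WSGI processes stay small.
from xmlrpc.server import SimpleXMLRPCServer


def shortest_path(start_id, end_id):
    # placeholder for the real breadth-first search over the in-memory graph
    return [start_id, end_id]


server = SimpleXMLRPCServer(("127.0.0.1", 8001), allow_none=True)
server.register_function(shortest_path)
server.serve_forever()
```

The Django views would then call this backend with a short RPC per request instead of loading the graph themselves.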
I've been doing some HA testing of our database and in my simulation of server death I've found an issue. My test uses Django and does this: (1) connect to the database, (2) do a query, (3) pull out the network cord of the server, (4) do another query. At this point everything hangs indefinitely within the mysql_ping function. As far as my app is concerned it is connected to the database (because of the previous query); it's just that the server is taking a long time to respond... Does anyone know of any ways to handle this kind of situation? connect_timeout doesn't work as I'm already connected. read_timeout seems like a somewhat too blunt instrument (and I can't even get that working with Django anyway). Setting the default socket timeout also doesn't work (and would be vastly too blunt as this would affect all socket operations and not just MySQL). I'm seriously considering doing my queries within threads and using Thread.join(timeout) to perform the timeout. In theory, if I can do this timeout then reconnect logic should kick in and our automatic failover of the database should work perfectly (kill -9 on affected processes currently does the trick but is a bit manual!).
2
0
0
0
false
10,192,810
1
181
1
0
0
10,018,055
I would think this would be more in line with setting a read_timeout on your front-facing webserver. Any number of reasons could exist to hold up your Django app indefinitely. While you have found one specific case, there could be many more (code errors, cache difficulties, etc.).
1
0
0
How can I detect total MySQL server death from Python?
1
python,django,mysql-python
0
2012-04-04T19:35:00.000
I am having a problem and am not sure if this is possible at all, so if someone could point me in the right direction. I need to open a file from a webpage, open it in Excel and save the file. The problem I am running into is that the website shows the file name (not an active link) and then has a "download" button that is not specific to the file I need to download. So instead of the download button being "file1todaysdate", there is nothing about it that I could use from day to day. Is there a way I could locate the file name, then grab the file from the download icon and save it in Excel? If not, sorry for wasting your time.
2
0
0
0
false
10,023,435
1
1,137
1
0
0
10,023,418
Examine the Content-Disposition header of the response to discover what the server wants you to call the file.
1
0
0
Python File Download
3
python
0
2012-04-05T06:05:00.000
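A small sketch of that suggestion, assuming a hypothetical download URL: read the Content-Disposition header to recover the server-supplied filename, then write the body to that file. The regex is a rough parse, not a full RFC-compliant one.

```python
# Save a download under the name the server suggests in Content-Disposition.
import re
import urllib.request

DOWNLOAD_URL = "http://example.com/download"   # placeholder

with urllib.request.urlopen(DOWNLOAD_URL) as resp:
    disposition = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', disposition)
    filename = match.group(1) if match else "download.xls"
    with open(filename, "wb") as out:
        out.write(resp.read())
```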
I am trying to write a parser that reads several Excel files. I need values that are usually at the bottom of a row, where you find a sum of all the upper elements. So the cell value is actually "=sum()" or "A5*0.5", let's say... To a user that opens this file with Excel it appears like a number, which is fine. But if I try to read this value with ws.cell(x, y).value I do not get anything. So my question is how to read this kind of field with xlrd, and whether it is possible to read it like ws.cell(x, y).value or something similar? Thanks
2
0
0
0
false
10,029,868
0
2,953
1
0
0
10,029,641
As per the link I have posted above for your question, the author of xlrd says, 'The work is in progress but is not likely to be available soon as the focus of xlrd lies elsewhere.' By this, I assume that there is not much you can do about it. Note: this is based on the author's comment from Jan 2011.
1
0
0
how to read formulas with xlrd
1
python,excel,xlrd
0
2012-04-05T13:36:00.000
Let's say I have Python process 1 on machine 1 and Python process 2 on machine 2. Both processes are the same and process data sent by a load balancer. Both processes need to interact with a database - in my case Postgres so each process needs to know what database it should talk to, it needs to have the right models on each machine etc. It's just too tightly coupled. The ideal would be to have a separate process dealing with the database stuff like connections, keeping up with db model changes, requests to the databases etc. What my process 1 and process 2 should do is just say I have some JSON data that needs to be saved or updated on this table or I need this data in json format. Maybe I'm asking the impossible but is there any Python solution that would at least make life a little easier when it comes to having distributed processes interacting with relational databases in the most decoupled way possible?
1
1
0.099668
0
false
10,045,103
0
157
1
0
0
10,044,862
If it's DB connection information you're interested in, I recently wrote a service for this. Each process has token(s) set in configuration and uses those to query the service for DB connection info. The data layer uses that info to create connections; no DSNs are stored. On the server side, you just maintain a dictionary of token->DSN mappings. You could do connection pooling with bpgergo's suggestion, but you should still include an authentication or identification method. That way, if there's a network intrusion, malicious clients may not be able to impersonate one of the clients. The service implementation is broken into a few parts: (1) a RESTful service that supports calls of the form http://192.168.1.100/getConnection?token=mytokenstring; (2) a key-value storage system that stores a mapping like {'mytokenstring': {'dbname': 'db', 'ip': '192.168.1.101', 'user': 'dbuser', 'password': 'password', ..}} (this system shouldn't be on the front-end network, but if your web tier is compromised, this approach doesn't buy you any protection for the DB); (3) a DB object that, on instantiation, retrieves a DSN using an appropriate token and creates a new DB connection. You should re-use this connection object for the rest of the page response if you can. The response time from the service will be fast, but there's a lot more overhead required for DB connections. Once implemented, some care is required for handling schema incompatibilities when switching the DSN info behind a token. You may be able to resolve this by pinning a token to a user session, etc.
1
0
1
Any Python solution for having distributed processes interact with relational databases in the most decoupled way possible?
2
python,database,distributed-computing
0
2012-04-06T14:25:00.000
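For illustration, here is a hedged sketch of the client side of that token -> DSN lookup, assuming the question's PostgreSQL backend; the service URL, token and JSON field names mirror the example mapping above and are otherwise assumptions.

```python
# Fetch connection details for this process's token, then connect.
import json
import urllib.request

import psycopg2

SERVICE_URL = "http://192.168.1.100/getConnection?token=mytokenstring"

with urllib.request.urlopen(SERVICE_URL) as resp:
    info = json.load(resp)   # e.g. {'dbname': ..., 'ip': ..., 'user': ..., 'password': ...}

conn = psycopg2.connect(
    dbname=info["dbname"],
    host=info["ip"],
    user=info["user"],
    password=info["password"],
)
```

Each worker process then stays ignorant of the actual DSN; rotating credentials only requires updating the mapping behind the token.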
The documentation for Pandas has numerous examples of best practices for working with data stored in various formats. However, I am unable to find any good examples for working with databases like MySQL for example. Can anyone point me to links or give some code snippets of how to convert query results using mysql-python to data frames in Pandas efficiently ?
97
4
0.061461
0
false
27,531,471
0
123,656
1
0
0
10,065,051
pandas.io.sql.frame_query is deprecated. Use pandas.read_sql instead.
1
0
0
python-pandas and databases like mysql
13
python,pandas
0
2012-04-08T18:01:00.000
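A minimal sketch of the suggested replacement, with placeholder connection details and table name:

```python
# Query MySQL straight into a DataFrame with pandas.read_sql.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")
df = pd.read_sql("SELECT * FROM transactions", engine)
print(df.head())
```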
I am developing an application designed for a secretary to use. She has a stack of hundreds of ballot forms which have a number of questions on them, and wishes to input this data into a program to show the total votes for each answer. Each question has a number of answers. For example: Q: "Re-elect current president of the board" A: Choice between "Yes" or "No" or "Neutral" Year on year the questions can change, as well as the answers, but the current application used in the company is hard coded with the questions and answers of last year. My aim is to create an app (in Django/Python) which allows the secretary to add/delete questions and answers as she wishes. I am relatively new to Django... I have created an app in University and know how to create basic models and implement the Twitter bootstrap for the GUI. But I'm a little confused about how to enable the secretary to add custom fields in (which are obviously defined in SQL). Does anyone have any small tips on how to get started? By the way, I recognize that this could be achievable using the admin part of website and would welcome any suggestions about that. Thank you.
0
5
1.2
0
true
10,066,588
1
72
1
0
0
10,066,573
You really don't want to implement each question/answer as a separate DB field. Instead, make a table of questions and a table of answers, and have a field in the answers table (in general, a ForeignKey) to indicate which question a given answer is associated with.
1
0
0
Adding field to SQL table from Django Application
1
python,sql,django
0
2012-04-08T21:14:00.000
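As a sketch of that two-table layout in Django terms (model and field names here are illustrative assumptions, not prescribed by the answer):

```python
# Questions and answers live in data rows, not in hard-coded model fields,
# so the secretary can add or delete them through the admin each year.
from django.db import models


class Question(models.Model):
    text = models.CharField(max_length=255)
    year = models.PositiveIntegerField()


class Answer(models.Model):
    question = models.ForeignKey(Question, related_name="answers",
                                 on_delete=models.CASCADE)
    text = models.CharField(max_length=255)        # e.g. "Yes", "No", "Neutral"
    votes = models.PositiveIntegerField(default=0)  # running tally from the ballots
```

Both models can be registered with the Django admin so new ballot questions and answer choices are plain data entry rather than schema changes.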
I have to build a web application which uses Python, PHP and MongoDB. Python - for offline database populating on my local home machine and then exporting the DB to the VPS. Later I am planning to schedule this job using cron. PHP - for web scripting. The VPS I wish to buy supports Python and the LAMP stack but not MongoDB (myhosting.com LAMP stack VPS) by default. Since MongoDB isn't supported by default, I would have to install MongoDB manually on the VPS. So what I want to know is: had my VPS supported MongoDB by default, would I have benefited in terms of performance and scalability? Also, can someone please suggest a VPS suitable for my case.
1
1
0.099668
0
false
10,074,035
0
810
1
0
0
10,073,934
If the VPS you are looking at restricts the packages you can install, and you need something that they prohibit, I would look for another VPS. Both Rackspace and Amazon offer a range of instances and numerous supported OSes. With either of them you choose your operating system and are free to install whatever you want.
1
0
0
Performance of MongoDB on VPS or cloud service not having mongoDB installed
2
php,python,mongodb,vps
1
2012-04-09T13:28:00.000
I have a desktop python application whose data backend is a MySQL database, but whose previous database was a network-accessed xml file(s). When it was xml-powered, I had a thread spawned at the launch of the application that would simply check the xml file for changes and whenever the date modified changed (due to any user updating it), the app would refresh itself so multiple users could use and see the changes of the app as they went about their business. Now that the program has matured and is venturing toward an online presence so it can be used anywhere. Xml is out the window and I'm using MySQL with SQLAlchemy as the database access method. The plot thickens, however, because the information is no longer stored in one xml file but rather it is split into multiple tables in the SQL database. This complicates the idea of some sort of 'last modified' table value or structure. Thus the question, how do you inform the users that the data has changed and the app needs to refresh? Here are some of my thoughts: Each table needs a last-modified column (this seems like the worst option ever) A separate table that holds some last modified column? Some sort of push notification through a server? It should be mentioned that I have the capability of running perhaps a very small python script on the same server hosting the SQL db that perhaps the app could connect to and (through sockets?) it could pass information to and from all connected clients? Some extra information: The information passed back and forth would be pretty low-bandwidth. Mostly text with the potential of some images (rarely over 50k). Number of clients at present is very small, in the tens. But the project could be picked up by some bigger companies with client numbers possibly getting into the hundreds. Even still the bandwidth shouldn't be a problem for the foreseeable future. Anyway, somewhat new territory for me, so what would you do? Thanks in advance!
0
0
0
0
false
10,091,535
0
450
1
0
0
10,091,108
As I understand it, this is not a client-server application, but rather an application that has common remote storage. One idea would be to change to web services (this would solve most of your problems in the long run). Another idea (if you don't want to switch to the web) is to periodically refresh the data in your interface by using a timer. Another way (more complicated) would be to have a server that receives all the updates, stores them in the database and then pushes the changes to the other connected clients. The first 2 ideas you mentioned will have maintenance, scalability and design ugliness issues. The last 2 are a lot better in my opinion, but I still stick to web services as being the best.
1
0
0
Best way to inform user of an SQL Table Update?
1
python,mysql,notifications
0
2012-04-10T14:55:00.000
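As a rough illustration of the timer-based refresh mentioned above, here is a sketch that polls a single hypothetical data_version counter through SQLAlchemy (which the question already uses) and triggers a UI refresh when it changes; the table name, column and interval are all assumptions.

```python
# Poll one version counter instead of per-table "last modified" columns.
import threading

from sqlalchemy import text


def watch_for_changes(engine, refresh_callback, interval=5.0):
    state = {"version": None}

    def poll():
        with engine.connect() as conn:
            current = conn.execute(text("SELECT MAX(version) FROM data_version")).scalar()
        if state["version"] is not None and current != state["version"]:
            refresh_callback()                       # tell the UI to reload its data
        state["version"] = current
        threading.Timer(interval, poll).start()      # re-arm the timer

    poll()
```

The bump to data_version would happen in the same place the application writes its normal rows, so every client notices one counter change regardless of which table was touched.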