Question | Q_Score | Users Score | Score | Data Science and Machine Learning | is_accepted | A_Id | Web Development | ViewCount | Available Count | System Administration and DevOps | Networking and APIs | Q_Id | Answer | Database and SQL | GUI and Desktop Applications | Python Basics and Environment | Title | AnswerCount | Tags | Other | CreationDate
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
I had a PostgreSQL query where I need to take a column defined as character from a table and then pass this value to a function that only accepts an integer. How can I solve this? Can anyone help? | 0 | 0 | 0 | 0 | false | 5,148,795 | 0 | 126 | 1 | 0 | 0 | 5,148,790 | ord(val) will give you the integer value of a character. int(val) will cast a value into an integer. | 1 | 0 | 1 | how to convert value of column defined as character into integer in python | 2 | python,casting | 0 | 2011-02-28T23:23:00.000
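The accepted answer above is terse, so here is a minimal sketch of the difference between int() and ord(), assuming the column value arrives in Python as a string (the variable names are illustrative only):

```python
# Value fetched from the character column, e.g. via a DB-API cursor
raw = "42"

as_int = int(raw)       # parses the string "42" into the integer 42
code_point = ord("A")   # numeric code of a single character, 65

# int() is what you want when the column holds digits stored as text;
# ord() only makes sense when you need a single character's code point.
```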
As part of artifacts delivery, our developers give the data and structure scripts in .sql files. I usually "double click" on these files to open in "Microsoft SQL Server Management Studio". Management studio will prompt me for entering database server and user/pwd. I enter them manually and click on Execute button to execute these scripts.
These scripts contain structure and data sql commands. Each script may contain more than one data command (like select, insert, update, etc). Structure and data scripts are provided in separate .sql files.
These scripts also contain stored procedures and functions, etc. They also contain comments / description.
I want to automate the execution of these scripts through Python. I looked at pyodbc and pymssql, but they don't seem to solve my issue on their own.
With pyodbc, I would need to read each .sql file, extract the individual SQL commands, and execute them one by one. As the files may contain comments, descriptions, stored procedures, etc., parsing the files will be a little difficult.
Can anyone give suggestion on how to automate this?
Thanks in advance. | 1 | 4 | 0.379949 | 0 | false | 5,174,307 | 0 | 3,938 | 1 | 0 | 0 | 5,174,269 | You could just run them using sqlcmd. Sqlcmd is a command line utility that will let you run .sql scripts from the command line, which I'm sure you can kick off through python. | 1 | 0 | 0 | Execute .sql files that are used to run in SQL Management Studio in python | 2 | python,sql,sql-server | 0 | 2011-03-02T22:22:00.000 |
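Following the sqlcmd suggestion in the answer above, a minimal Python sketch that shells out to sqlcmd for each delivered script might look like this (the server name, credentials, and file names are assumptions):

```python
import subprocess

scripts = ["structure.sql", "data.sql"]  # the delivered .sql files

for script in scripts:
    # -S server, -d database, -U user, -P password, -i input file
    subprocess.check_call([
        "sqlcmd", "-S", "myserver", "-d", "mydb",
        "-U", "myuser", "-P", "mypassword", "-i", script,
    ])
```

sqlcmd handles comments, GO batch separators, and stored procedure bodies itself, which is exactly the parsing work the question wants to avoid doing in Python.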
I'm using SQLAlchemy 0.6.6 against a Postgres 8.3 DB on Windows 7 an PY 2.6. I am leaving the defaults for configuring pooling when I create my engine, which is pool_size=5, max_overflow=10.
For some reason, the connections keep piling up and I intermittently get "Too many clients" from PG. I am positive that connections are being closed in a finally block as this application is only accessed via WSGI (CherryPy) and uses a connection/request pattern. I am also logging when connections are being closed just to make sure.
I've tried to see what's going on by adding echo_pool=true during my engine creation, but nothing is being logged. I can see SQL statement rolling through the console when I set echo=True, but nothing for pooling.
Anyway, this is driving me crazy because my co-worker who is on a Mac doesn't have any of these issues (I know, get a Mac), so I'm trying to see if this is the result of a bug or something. Google has yielded nothing so I'm hoping to get some help here.
Thanks,
cc | 1 | 0 | 1.2 | 0 | true | 5,195,465 | 0 | 1,198 | 1 | 0 | 0 | 5,185,438 | Turns out there was ScopedSession being used outside the normal application usage and the close wasn't in a finally. | 1 | 0 | 0 | SQLAlchemy Connection Pooling Problems - Postgres on Windows | 1 | python,postgresql,sqlalchemy,connection-pooling,cherrypy | 0 | 2011-03-03T19:21:00.000 |
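For reference, the pattern the accepted answer alludes to is making sure every request-scoped session is closed in a finally block. A rough sketch, assuming SQLAlchemy's scoped_session (the connection URL, handler, and model names are assumptions):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("postgresql://user:pw@localhost/mydb")  # assumed URL
Session = scoped_session(sessionmaker(bind=engine))

def handle_request():
    session = Session()
    try:
        return session.query(SomeModel).all()  # hypothetical mapped class
    finally:
        # Always return the connection to the pool, even when an error occurs
        Session.remove()
```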
I have about 30 MB of textual data that is core to the algorithms I use in my web application.
On the one hand, the data is part of the algorithm and changes to the data can cause an entire algorithm to fail. This is why I keep the data in text files in my source control, and all changes are auto-tested (pre-commit). I currently have a good level of control. Distributing the data along with the source as we spawn more web instances is a non-issue because it tags along with the source. I currently have these problems:
I often develop special tools to manipulate the files, replicating database access tool functionality
I would like to give non-developers controlled web-access to this data.
On the other hand, it is data, and it "belongs" in a database. I wish I could place it in a database, but then I would have these problems:
How do I sync this database to the source? A release contains both code and data.
How do I ship it with the data as I spawn a new instance of the web server?
How do I manage pre-commit testing of the data?
Things I have considered thus-far:
Sqlite (does not solve the non-developer access)
Building an elaborate pre-production database, which data-users will edit to create "patches" to the "real" database, which developers will accept, test and commit. Sounds very complex.
I have not fully designed this yet and I sure hope I'm reinventing the wheel here and some SO user will show me the error of my ways...
BTW: I have a "regular" database as well, with things that are not algorithmic-data.
BTW2: I added the Python tag because I currently use Python, Django, Apache and Nginx, Linux (and some lame developers use Windows).
Thanks in advance!
UPDATE
Some examples of the data (the algorithms are natural language processing stuff):
World Cities and their alternate names
Names of currencies
Coordinates of hotels
The list goes on and on, but imagine trying to parse the sentence "Romantic hotel for 2 in Rome arriving in Italy next Monday" if someone changes the coordinates that teach me that Rome is in Italy, or if someone adds 'romantic' as an alternate name for Las Vegas (yes, the example is lame, but I hope you get the drift). | 4 | 1 | 0.099668 | 0 | false | 5,210,383 | 1 | 127 | 1 | 0 | 0 | 5,210,318 | Okay, here's an idea:
Ship all the data as is done now.
Have the installation script install it in the appropriate databases.
Let users modify this database and give them a button "restore to original" that simply reinstalls from the text file.
Alternatively, this route may be easier, esp. when upgrading an installation:
Ship all the data as is done now.
Let users modify the data and store the modified versions in the appropriate database.
Let the code look in the database, falling back to the text files if the appropriate data is not found. Don't let the code modify the text files in any way.
In either case, you can keep your current testing code; you just need to add tests that make sure the database properly overrides text files. | 1 | 0 | 0 | What's the best way to handle source-like data files in a web application? | 2 | python | 0 | 2011-03-06T11:56:00.000 |
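A minimal sketch of the "database first, text file fallback" lookup described in the second alternative above; the table name, file layout, and helper names are all assumptions, not part of the original answer:

```python
import csv

def load_overrides(db_connection):
    # User-modified rows live in the database (read-only from this code path)
    cur = db_connection.cursor()
    cur.execute("SELECT name, value FROM algorithm_data")  # assumed table
    return dict(cur.fetchall())

def load_defaults(path="algorithm_data.csv"):
    # The version-controlled text file shipped with the source (two columns assumed)
    with open(path) as f:
        return dict(csv.reader(f))

def get_value(key, db_connection):
    overrides = load_overrides(db_connection)
    # Fall back to the shipped text file when the database has no override
    return overrides.get(key, load_defaults().get(key))
```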
I'm not sure if this is an issue specific to SQLite databases, but after adding some fields to a model I ran syncdb successfully; still, the columns were not added to the database, and when I try to access the model in the admin I get a "no such column" error.
Why is this happening and how do I overcome this issue?
Details: Django 1.3, Python 2.6, OSX 10.6, PyCharm. | 4 | 3 | 0.197375 | 0 | false | 5,211,417 | 1 | 5,639 | 1 | 0 | 0 | 5,211,340 | As always, syncdb does not migrate the existing schema. | 1 | 0 | 0 | Django manage.py syncdb doing nothing when used with sqlite3 | 3 | python,django,sqlite | 0 | 2011-03-06T15:26:00.000 |
I am trying to modify the guestbook example webapp to reduce the amount of database writes.
What I am trying to achieve is to load all the guestbook entries into memcache which I have done.
However I want to be able to directly update the memcache with new guestbook entries and then write all changes to the database as a batch put.() every 30 seconds.
Has anyone got an example of how I could achieve the above? it would really help me!
Thanks :) | 2 | 6 | 1.2 | 0 | true | 5,222,081 | 1 | 1,124 | 1 | 1 | 0 | 5,221,977 | This is a recipe for lost data. I have a hard time believing that a guest book is causing enough write activity to be an issue. Also, the bookkeeping involved in this would be tricky, since memcache isn't searchable. | 1 | 0 | 0 | Limit amount of writes to database using memcache | 3 | python,google-app-engine,caching,memcached,google-cloud-datastore | 0 | 2011-03-07T16:12:00.000 |
I'm using sqlalchemy with reflection, a couple of partial indices in my DB make it dump warnings like this:
SAWarning: Predicate of partial index i_some_index ignored during reflection
into my logs and keep cluttering. It does not hinder my application behavior. I would like to keep these warnings while developing, but not at production level. Does anyone know how to turn this off? | 35 | 12 | 1 | 0 | false | 5,331,129 | 0 | 14,820 | 1 | 0 | 0 | 5,225,780 | the warning means you did a table or metadata reflection, and it's reading in postgresql indexes that have some complex condition which the SQLAlchemy reflection code doesn't know what to do with. This is a harmless warning, as whether or not indexes are reflected doesn't affect the operation of the application, unless you wanted to re-emit CREATE statements for those tables/indexes on another database. | 1 | 0 | 0 | Turn off a warning in sqlalchemy | 2 | python,postgresql,sqlalchemy | 0 | 2011-03-07T22:00:00.000 |
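One way to silence just this warning in production while keeping it during development is Python's standard warnings filter; a small sketch (the DEBUG flag that distinguishes the two environments is an assumption):

```python
import warnings
from sqlalchemy import exc as sa_exc

DEBUG = False  # hypothetical flag: True in development, False in production

if not DEBUG:
    # Hide only SQLAlchemy's SAWarning (e.g. the partial-index notice),
    # leaving every other warning visible.
    warnings.filterwarnings("ignore", category=sa_exc.SAWarning)
```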
I'm working on a client machine running suse linux and python 2.4.2. I'm not allowed to dowload anything from the net including any external libraries. So, is there any way I can connect to a database (oracle) using only the default libraries? | 0 | 1 | 1.2 | 0 | true | 5,228,737 | 0 | 360 | 1 | 0 | 0 | 5,228,728 | No. There is nothing in the standard library for connecting to database servers. | 1 | 0 | 0 | Python: Connecting to db without any external libraries | 2 | python,database,oracle | 0 | 2011-03-08T05:32:00.000 |
So I have a Django web application and I need to add a payment module to it.
Basically a user will prepay for a certain amount of service and this will slowly reduce over as the user uses the service. I'm wondering what is the best practice to facilitate this? I can process payments using Satchmo, but then just storing the USD value in a database and having my code interacting with that value directly seems kind of risky. Sure I can do that but I am wondering if there is a well tested solution to this problem out there already? | 3 | 1 | 0.099668 | 0 | false | 5,236,907 | 1 | 453 | 2 | 0 | 0 | 5,236,855 | My language agnostic recommendation would be to make sure that the database that communicates with the web app is read only; at least for the table(s) that deal with these account balances. So, you process payments, and manage the reduction of account balances in a database that is not accessible to anyone other than you (i.e. not connected to the internet, or this web app). You can periodically take snapshots of that database and update the one that interacts with the webapp, so your users have a read copy of their balance. This way, even if a user is able to modify the data to increase their balance by a million bucks, you know that you have their true balance in a separate location. Basically, you'd never have to trust the data on the webapp side - it would be purely informational for your users. | 1 | 0 | 0 | Securely storing account balances in a database? | 2 | python,database,django,web-applications | 0 | 2011-03-08T18:45:00.000 |
So I have a Django web application and I need to add a payment module to it.
Basically a user will prepay for a certain amount of service and this will slowly reduce over as the user uses the service. I'm wondering what is the best practice to facilitate this? I can process payments using Satchmo, but then just storing the USD value in a database and having my code interacting with that value directly seems kind of risky. Sure I can do that but I am wondering if there is a well tested solution to this problem out there already? | 3 | 6 | 1.2 | 0 | true | 5,236,901 | 1 | 453 | 2 | 0 | 0 | 5,236,855 | I don't know about a "well-tested solution" as you put it, but I would strongly caution against just storing a dollar value in the database and increasing or decreasing that dollar value. Instead, I would advise storing transactions that can be audited if anything goes wrong. Calculate the amount available from the credit and debit transactions in the user account rather than storing it directly.
For extra safety, you would want to ensure that your application cannot delete any transaction records. If you cannot deny write permissions on the relevant tables for some reason, try replicating the transactions to a second database (that the application does not touch) as they are created. | 1 | 0 | 0 | Securely storing account balances in a database? | 2 | python,database,django,web-applications | 0 | 2011-03-08T18:45:00.000 |
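A tiny sketch of computing the balance from credit/debit rows instead of storing it directly, as the answer above recommends; the schema and the %s placeholder style are assumptions:

```python
def account_balance(cursor, user_id):
    # Credits are stored as positive amounts and debits as negative amounts,
    # so the current balance is simply the sum of all transaction rows.
    cursor.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM transactions WHERE user_id = %s",
        [user_id],
    )
    return cursor.fetchone()[0]
```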
Is there a way in cx_Oracle to capture the stdout output from an oracle stored procedure? These show up when using Oracle's SQL Developer or SQL Plus, but there does not seem to be a way to fetch it using the database drivers. | 3 | 4 | 1.2 | 0 | true | 5,247,755 | 0 | 2,934 | 1 | 0 | 0 | 5,244,517 | You can retrieve dbms_output with DBMS_OUTPUT.GET_LINE(buffer, status). Status is 0 on success and 1 when there's no more data.
You can also use get_lines(lines, numlines). numlines is input-output. You set it to the max number of lines and it is set to the actual number on output. You can call this in a loop and exit when the returned numlines is less than your input. lines is an output array. | 1 | 0 | 0 | Capturing stdout output from stored procedures with cx_Oracle | 4 | python,oracle10g,cx-oracle | 0 | 2011-03-09T10:36:00.000 |
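A sketch of the GET_LINE loop described above, using cx_Oracle output variables; the connection string, buffer size, and procedure name are assumptions:

```python
import cx_Oracle

connection = cx_Oracle.connect("user/password@tns")  # assumed credentials
cursor = connection.cursor()

cursor.callproc("dbms_output.enable", [None])  # None = unlimited buffer

line_var = cursor.var(cx_Oracle.STRING)
status_var = cursor.var(cx_Oracle.NUMBER)

cursor.callproc("my_procedure")  # hypothetical procedure that calls dbms_output.put_line

while True:
    cursor.callproc("dbms_output.get_line", (line_var, status_var))
    if status_var.getvalue() != 0:   # non-zero status means no more lines
        break
    print(line_var.getvalue())
```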
I'm trying to serialize an array in Python to insert it into a MySQL database... I tried the pickle.dump() method but it returns bytes... what can I use? Thanks!!
(I'm working in Python 3) | 3 | 1 | 0.049958 | 0 | false | 5,259,400 | 0 | 1,312 | 1 | 0 | 0 | 5,259,329 | Pickle is a binary serialization, that's why you get a byte string.
Pros:
more compact
can express most of Python's objects.
Con's:
bytes can be harder to handle
Python only.
JSON is more universal, so you're not tied to reading data with Python. It's also mostly ASCII, so it's easier to handle. the con is that it's limited to numbers, strings, arrays and dicts. usually enough, but even datetimes have to be converted to string representation before encoding. | 1 | 0 | 1 | Serizalize an array in Python | 4 | python,mysql | 0 | 2011-03-10T12:01:00.000 |
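A minimal sketch of the JSON route from the answer above, storing the serialized list in a text column (the table name and placeholder style are assumptions):

```python
import json

data = [1, 2.5, "three"]

serialized = json.dumps(data)       # -> '[1, 2.5, "three"]', plain text
# e.g. cursor.execute("INSERT INTO items (payload) VALUES (%s)", [serialized])

restored = json.loads(serialized)   # back to the original list on the way out
```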
I need to use a Python ORM with a MS-Access database (in Windows).
My first searches are not really succesfull :
SQLAlchemy: no MS Access support in the last two versions.
DAL from Web2Py : no Access (??)
Storm : no MS Access
sqlobject: no MS Access
dejavu: seems OK for MS Access, but is the project still alive?
Any ideas or information are welcome...
For what it's worth, I've successfully used the web2py DAL as a stand-alone module with SQL Server after initially trying and giving up on SQLAlchemy. In fairness to SQLAlchemy, I had used the web2py DAL as part of the framework and was already comfortable with it. | 1 | 0 | 0 | Python ORM for MS-Access | 2 | python,ms-access,orm | 0 | 2011-03-10T16:08:00.000 |
Is opening/closing db cursor costly operation? What is the best practice, to use a different cursor or to reuse the same cursor between different sql executions? Does it matter if a transaction consists of executions performed on same or different cursors belonging to same connection?
Thanks. | 2 | 1 | 1.2 | 0 | true | 5,275,401 | 0 | 659 | 1 | 0 | 0 | 5,275,236 | This will depend a lot on your database as well as your chose python implementation - have you tried profiling a few short test operations? | 1 | 0 | 0 | db cursor - transaction in python | 1 | python,database,transactions,cursor | 0 | 2011-03-11T15:58:00.000 |
I have a very large dataset - millions of records - that I want to store in Python. I might be running on 32-bit machines so I want to keep the dataset down in the hundreds-of-MB range and not ballooning much larger than that.
These records - represent a M:M relationship - two IDs (foo and bar) and some simple metadata like timestamps (baz).
Some foo have nearly all bar in them, and some bar have nearly all foo. But there are many bar that have almost no foos and many foos that have almost no bar.
If this were a relational database, a M:M relationship would be modelled as a table with a compound key. You can of course search on either component key individually comfortably.
If you store the rows in a hashtable, however, you need to maintain three hashtables as the compound key is hashed and you can't search on the component keys with it.
If you have some kind of sorted index, you can abuse lexical sorting to iterate the first key in the compound key, and need a second index for the other key; but its less obvious to me what actual data-structure in the standard Python collections this equates to.
I am considering a dict of foo where each value is automatically moved from tuple (a single row) to list (of row tuples) to dict depending on some thresholds, and another dict of bar where each is a single foo, or a list of foo.
Are there more efficient - speedwise and spacewise - ways of doing this? Any kind of numpy for indices or something?
(I want to store them in Python because I am having performance problems with databases - both SQL and NoSQL varieties. You end up being IPC memcpy and serialisation-bound. That is another story; however the key point is that I want to move the data into the application rather than get recommendations to move it out of the application ;) ) | 1 | 2 | 0.099668 | 0 | false | 5,303,400 | 0 | 240 | 1 | 0 | 0 | 5,302,816 | What you describe sounds like a sparse matrix, where the foos are along one axis and the bars along the other one. Each non-empty cell represents a relationship between one foo and one bar, and contains the "simple metadata" you describe.
There are efficient sparse matrix packages for Python (scipy.sparse, PySparse) you should look at. I found these two just by Googling "python sparse matrix".
As to using a database, you claim that you've had performance problems. I'd like to suggest that you may not have chosen an optimal representation, but without more details on what your access patterns look like, and what database schema you used, it's awfully hard for anybody to contribute useful help. You might consider editing your post to provide more information. | 1 | 0 | 1 | Efficient large dicts of dicts to represent M:M relationships in Python | 4 | python,data-structures | 0 | 2011-03-14T18:34:00.000 |
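A small sketch of the sparse-matrix idea with scipy.sparse, as suggested in the answer above. It assumes the foo and bar IDs have already been mapped to 0-based row/column indices, and the metadata stored per cell (a timestamp here) is an assumption:

```python
from scipy import sparse

n_foo, n_bar = 100000, 100000
m = sparse.dok_matrix((n_foo, n_bar))        # dict-of-keys: cheap to build incrementally

m[12, 345] = 1302300000                      # e.g. a timestamp for the (foo, bar) pair
m[12, 700] = 1302300050

csr = m.tocsr()                              # fast row slicing: all bars of one foo
csc = m.tocsc()                              # fast column slicing: all foos of one bar

bars_of_foo_12 = csr[12, :].nonzero()[1]     # column indices holding data for foo 12
foos_of_bar_700 = csc[:, 700].nonzero()[0]   # row indices holding data for bar 700
```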
When I put my database file (which is a .sdb) into a directory and try to access it from that directory, I receive an error. The error reads "unable to open database file". For example, let's say my .sdb file is in the "data" directory and I use the command "con = lite.connect('data\noktalar.sdb')", this error occurs. Why is that so?
Thanks. | 3 | 1 | 0.099668 | 0 | false | 5,321,757 | 0 | 324 | 1 | 0 | 0 | 5,321,699 | Where is your python process running from? Try to point to the absolute path of the file. And when pointing to path use raw string r'c:\\mypath\data\notktalar.sub' | 1 | 0 | 0 | Python Database Error | 2 | python,sqlite | 0 | 2011-03-16T06:14:00.000 |
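A sketch of building the path relative to the script instead of the current working directory, which is the usual cause of this "unable to open database file" error; the lite alias follows the question's own import:

```python
import os
import sqlite3 as lite

# Resolve 'data/noktalar.sdb' relative to this script file, not to wherever
# the process happens to be started from.
base_dir = os.path.dirname(os.path.abspath(__file__))
db_path = os.path.join(base_dir, "data", "noktalar.sdb")

con = lite.connect(db_path)
```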
I'm using the Python version of Google App Engine and Datastore. What is a good way to load a table that will contain lookup data?
By look up data I mean that after the initial load no rows will need to be inserted, deleted, or updated
Blowing away all rows and reloading the table is not acceptable if it destroys referential integrity with other rows referring to it.
Here is an example of a couple kinds that I am using that I want to load lookup data into
class Badge(db.Model):
    name = db.StringProperty()
    level = db.IntegerProperty()

class Achievement(db.Model):
    name = db.StringProperty()
    level = db.IntegerProperty()
    badge = db.ReferenceProperty(reference_class=Badge)
Here is an example of a kind not holding look up data but referring to it
class CamperAchievement(db.Model):
    camper = db.ReferenceProperty(reference_class=Camper)
    achievement = db.ReferenceProperty(reference_class=Achievement)
    session = db.ReferenceProperty(reference_class=Session)
    passed = db.BooleanProperty(default=True)
I'm looking to find out two things:
What should the code to load the data look like?
What should trigger the loading code to execute? | 2 | 2 | 1.2 | 0 | true | 5,331,814 | 1 | 319 | 1 | 0 | 0 | 5,328,112 | If it's really created once and never changes within the lifetime of a deployment, and it's relatively small (a few megs or less), store it with your app as data files. Have the app load the data into memory initially, and cache it there. | 1 | 0 | 0 | Need Pattern for lookup tables in Google App Engine | 2 | python,google-app-engine,google-cloud-datastore | 0 | 2011-03-16T16:05:00.000 |
I'm designing a python application which works with a database. I'm planning to use sqlite.
There are 15000 objects, and each object has a few attributes. every day I need to add some data for each object.(Maybe create a column with the date as its name).
However, I would like to easily delete the data which is too old but it is very hard to delete columns using sqlite(and it might be slow because I need to copy the required columns and then delete the old table)
Is there a better way to organize this data other than creating a column for every date? Or should I use something other than sqlite? | 0 | 0 | 0 | 0 | false | 5,339,473 | 0 | 113 | 2 | 0 | 0 | 5,335,330 | If your database is pretty much a collection of almost-homogenic data, you could as well go for a simpler key-value database. If the main action you perform on the data is scanning through everything, it would perform significantly better.
Python library has bindings for popular ones as "anydbm". There is also a dict-imitating proxy over anydbm in shelve. You could pickle your objects with the attributes using any serializer you want (simplejson, yaml, pickle) | 1 | 0 | 0 | Please help me design a database schema for this: | 3 | python,sqlite,data-modeling | 0 | 2011-03-17T05:42:00.000 |
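A minimal sketch of the shelve suggestion, keying each object by ID and storing its attributes plus dated values as a dict (the file name and field names are assumptions):

```python
import shelve

db = shelve.open("objects.db")

# Store today's attributes for object 42; shelve keys must be strings.
db["42"] = {"name": "sensor-42", "2011-04-01": 17.5, "2011-04-02": 18.1}

# Dropping old data is just rewriting the value without the stale keys.
record = db["42"]
record.pop("2011-04-01", None)
db["42"] = record   # reassign so the shelf notices the change

db.close()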
I'm designing a python application which works with a database. I'm planning to use sqlite.
There are 15000 objects, and each object has a few attributes. every day I need to add some data for each object.(Maybe create a column with the date as its name).
However, I would like to easily delete the data which is too old but it is very hard to delete columns using sqlite(and it might be slow because I need to copy the required columns and then delete the old table)
Is there a better way to organize this data other than creating a column for every date? Or should I use something other than sqlite? | 0 | 0 | 0 | 0 | false | 5,335,386 | 0 | 113 | 2 | 0 | 0 | 5,335,330 | For that size of a db, I would use something else. I've used sqlite once for a media library with about 10k objects and it was slow, like 5 minutes to query it all and display, searches were :/, switching to postgres made life so much easier. This is just on the performance issue only.
It also might be better to create an index that contains the date and the data/column you want to add and a pk reference to the object it belongs and use that for your deletions instead of altering the table all the time. This can be done in sqlite if you give the pk an int type and save the pk of the object to it, instead of using a Foreign Key like you would with mysql/postgres. | 1 | 0 | 0 | Please help me design a database schema for this: | 3 | python,sqlite,data-modeling | 0 | 2011-03-17T05:42:00.000 |
is it possible to Insert a python tuple in a postgresql database | 0 | 1 | 0.049958 | 0 | false | 5,342,409 | 0 | 4,015 | 2 | 0 | 0 | 5,342,359 | Really we need more information. What data is inside the tuple? Is it just integers? Just strings? Is it megabytes of images?
If you had a Python tuple like (4,6,2,"Hello",7) you could insert the string '(4,6,2,"Hello",7)' into a Postgres database, but that's probably not the answer you're looking for.
You really need to figure out what data you're really trying to store before you can figure out how/where to store it.
EDIT: So the short answer is "no", you cannot store an arbitrary Python tuple in a postgres database, but there's probably some way to take whatever is inside the tuple and store it somewhere useful. | 1 | 0 | 0 | is it possible to Insert a python tuple in a postgresql database | 4 | python,database,postgresql | 0 | 2011-03-17T16:47:00.000 |
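If the tuple is simply one row of column values, a hedged sketch with psycopg2's parameter binding shows the "store it somewhere useful" route; the table, columns, and DSN are assumptions:

```python
import psycopg2

row = (4, 6, 2, "Hello", 7)

conn = psycopg2.connect("dbname=test user=postgres")  # assumed DSN
cur = conn.cursor()
# One placeholder per tuple element; the driver handles quoting and escaping.
cur.execute(
    "INSERT INTO things (a, b, c, d, e) VALUES (%s, %s, %s, %s, %s)",
    row,
)
conn.commit()
```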
is it possible to Insert a python tuple in a postgresql database | 0 | 1 | 0.049958 | 0 | false | 5,342,419 | 0 | 4,015 | 2 | 0 | 0 | 5,342,359 | This question does not make any sense. You can insert using SQL whatever is supported by your database model. If you need a fancy mapper: look at an ORM like SQLAlchemy. | 1 | 0 | 0 | is it possible to Insert a python tuple in a postgresql database | 4 | python,database,postgresql | 0 | 2011-03-17T16:47:00.000 |
I am attempting to install OpsCenter for Cassandra, using the standard RHEL image. I can't figure out how to get this to work. Another version of EPEL perhaps?
yum install opscenter....
Error: Package: python26-rrdtool-1.2.27-1.i386 (opscenter)
Requires: librrd.so.2 | 2 | 0 | 0 | 0 | false | 5,344,716 | 0 | 410 | 1 | 1 | 0 | 5,344,641 | Try installing rrdtool via yum, that should contain librrd.so.2 and correct your issue. | 1 | 0 | 0 | Amazon Linux AMI EC2 - librrd.so.2 dependency issue | 2 | python,linux,centos,cassandra,yum | 0 | 2011-03-17T20:10:00.000 |
I have a web server running in Python. It gets some data from some apps and needs to store it in MongoDB. My MongoDB is sharded.
Now I want my web server to know how many shards MongoDB has. At the moment it reads this from a config file. There is a statement in MongoDB named printShardingStatus where you can see all shards, so I tried to call this statement from my Python server, but it seems that it is not possible: I don't find such a function in the PyMongo API.
So my question is, is there an chance to run an MongoDB Statement in Python, so that it is directly passed and executed in MongoDB ? | 0 | 0 | 0 | 0 | false | 5,377,084 | 0 | 1,576 | 1 | 0 | 0 | 5,350,599 | You can simply get config databasr and
execute find() on shards collection
just like normal collection. | 1 | 0 | 0 | Execute MongoDb Statements in Python | 3 | python,mongodb,pymongo | 0 | 2011-03-18T10:21:00.000 |
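A sketch of reading the shard list from the config database with PyMongo, as the answer suggests; the host/port are assumptions, and Connection is the client class of the PyMongo 1.x era used in the question:

```python
from pymongo import Connection

conn = Connection("localhost", 27017)   # connect to the mongos router

shards = list(conn["config"]["shards"].find())
print("number of shards: %d" % len(shards))
for shard in shards:
    print("%s -> %s" % (shard["_id"], shard["host"]))
```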
I'm creating a server with Apache2 + mod_python + Django for development and would like to know how to use Mercurial to manage application development.
My idea is to make the folder where the Mercurial stores the project be the same folder to deploy Django.
Thank you for your attention! | 0 | 0 | 1.2 | 0 | true | 5,397,870 | 1 | 627 | 1 | 0 | 0 | 5,397,528 | I thought about this, good idea for development.
Use mercurial in common way. And of course you need deploy mercurial server before.
If you update your django project, it will be compiled on the fly.
My workflow:
Set up mercurial server or use bitbucket
Init repo locally
Push repo to central repo
On server pull repo in some target dir
Edit something locally and push to the central repo
Pull repo on server and everything is fine | 1 | 0 | 0 | How to use Mercurial to deploy Django applications? | 1 | python,django,mercurial,apache2,mod-python | 0 | 2011-03-22T20:37:00.000 |
I have my database in MS Access 2000 .mdb format, which I downloaded from the net, and now I want to access that database from my program, which is a Python script.
Can I query its tables from my program?
it would be very grateful if anyone of you please suggest me what to do | 2 | 0 | 0 | 0 | false | 5,402,549 | 0 | 5,861 | 1 | 0 | 0 | 5,402,463 | Create an ODBC DSN wit hthis MDB. Python can access ODBC data sources. | 1 | 0 | 0 | How do I access a .mdb file from python? | 3 | python,ms-access | 0 | 2011-03-23T08:16:00.000 |
I am using the function open_workbook() to open an excel file. But I cannot find any function to close the file later in the xlrd module. Is there a way to close the xls file using xlrd?
Or is it not required at all? | 23 | 6 | 1 | 0 | false | 5,404,018 | 0 | 22,147 | 1 | 0 | 0 | 5,403,781 | open_workbook calls release_resources (which closes the mmapped file) before returning. | 2 | 0 | 0 | Is there a way to close a workbook using xlrd | 2 | python,xlrd | 0 | 2011-03-23T10:24:00.000
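In other words, no explicit close is normally needed. If you open the workbook lazily with on_demand=True, you can release resources yourself when you are done; a minimal sketch (the file name is an assumption):

```python
import xlrd

book = xlrd.open_workbook("report.xls", on_demand=True)  # load sheets lazily
sheet = book.sheet_by_index(0)
value = sheet.cell_value(0, 0)

book.unload_sheet(0)        # drop the parsed sheet from memory
book.release_resources()    # close the underlying file/mmap explicitly
```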
Scenario
Entity1 (id,itmname)
Entity2 (id,itmname,price)
Entity3 (id,itmname,profit)
profit and price are both IntegerProperty
I want to count all the items with price more than 500 and profit more than 10.
I know this is a join operation and joins are not supported by Google. I tried my best to find a way other than executing the queries separately and performing the count, but I didn't get anywhere.
The reason for not executing queries separately is query execution time. In each query I am getting more then 50000 records as result so it takes nearly 20 seconds in fetching records from first query. | 1 | 0 | 0 | 0 | false | 5,415,555 | 1 | 372 | 1 | 1 | 0 | 5,415,342 | The standard solution to this problem is denormalization. Try storing a copy of price and profit in Entity1 and then you can answer your question with a single, simple query on Entity1. | 1 | 0 | 0 | Optimizing join query performance in google app engine | 2 | python,google-app-engine | 0 | 2011-03-24T05:58:00.000 |
I have to re-design an existing application which uses Pylons (Python) on the backend and GWT on the frontend.
In the course of this re-design I can also change the backend system.
I tried to read up on the advantages and disadvantages of various backend systems (Java, Python, etc) but I would be thankful for some feedback from the community.
Existing application:
The existing application was developed with GWT 1.5 (runs now on 2.1) and is a multi-host-page setup.
The Pylons MVC framework defines a set of controllers/host pages in which GWT widgets are embedded ("classical website").
Data is stored in a MySQL database and accessed by the backend with SQLAlchemy/Elixir. Server/client communication is done with RequestBuilder (JSON).
The application is not a typical business-like application with complex CRUD functionality (transactions, locking, etc) or a sophisticated permission system (though a simple ACL is required).
The application is used for visualization (charts, tables) of scientific data. The client interface is primarily used to display data in read-only mode. There might be some CRUD functionality but it's not the main aspect of the app.
Only a subset of the scientific data is going to be transfered to the client interface but this subset is generated out of large datasets.
The existing backend uses numpy/scipy to read data from db/files, create matrices and filter them.
The numbers of users accessing or using the app is relatively small, but the burden on the backend for each user/request is pretty high because it has to read and filter large datasets.
Requirements for the new system:
I want to move away from the multi-host-page setup to the MVP architecture (one single host page).
So the backend only serves one host page and acts as data source for AJAX calls.
Data will be still stored in a relational database (PostgreSQL instead of MySQL).
There will be a simple ACL (defines who can see what kind of data) and maybe some CRUD functionality (but it's not a priority).
The size of the datasets is going to increase, so the burden on the backend is probably going to be higher. There won't be many concurrent requests but the few ones have to be handled by the backend quickly. Hardware (RAM and CPU) for the backend server is not an issue.
Possible backend solutions:
Python (SQLAlchemy, Pylons or Django):
Advantages:
Rapid prototyping.
Re-Use of parts of the existing application
Numpy/Scipy for handling large datasets.
Disadvantages:
Weakly typed language -> debugging can be painful
Server/Client communication (JSON parsing or using 3rd party libraries).
Python GIL -> scaling with concurrent requests ?
Server language (python) <> client language (java)
Java (Hibernate/JPA, Spring, etc)
Advantages:
One language for both client and server (Java)
"Easier" to debug.
Server/client communication (RequestFactory, RPC) easier to implement.
Performance, multi-threading, etc
Object graph can be transfered (RequestFactory).
CRUD "easy" to implement
Multitier architecture (features)
Disadvantages:
Multitier architecture (complexity, requires a lot of configuration)
Handling of arrays/matrices (not sure if there is an equivalent to numpy/scipy in Java).
Not all features of the Java web application layers/frameworks used (overkill?).
I didn't mention any other backend systems (RoR, etc) because I think these two systems are the most viable ones for my use case.
To be honest I am not new to Java but relatively new to Java web application frameworks. I know my way around Pylons though in the new setup not much of the Pylons features (MVC, templates) will be used because it probably only serves as AJAX backend.
If I go with a Java backend I have to decide whether to do a RESTful service (and clearly separate client from server) or use RequestFactory (tighter coupling). There is no specific requirement for "RESTfulness". In case of a Python backend I would probably go with a RESTful backend (as I have to take care of client/server communication anyways).
Although mainly scientific data is going to be displayed (not part of any Domain Object Graph) also related metadata is going to be displayed on the client (this would favor RequestFactory).
In case of python I can re-use code which was used for loading and filtering of the scientific data.
In case of Java I would have to re-implement this part.
Both backend-systems have its advantages and disadvantages.
I would be thankful for any further feedback.
Maybe somebody has experience with both backend and/or with that use case.
thanks in advance | 4 | 1 | 1.2 | 0 | true | 5,421,810 | 1 | 1,559 | 1 | 1 | 0 | 5,417,372 | We had the same dilemma in the past.
I was involved in designing and building a system that had a GWT frontend and Java (Spring, Hibernate) backend. Some of our other (related) systems were built in Python and Ruby, so the expertise was there, and a question just like yours came up.
We decided on Java mainly so we could use a single language for the entire stack. Since the same people worked on both the client and server side, working in a single language reduced the need to context-switch when moving from client to server code (e.g. when debugging). In hindsight I feel that we were proven right and that that was a good decision.
We used RPC, which as you mentioned yourself definitely eased the implementation of c/s communication. I can't say that I liked it much though. REST + JSON feels more right, and at the very least creates better decoupling between server and client. I guess you'll have to decide based on whether you expect you might need to re-implement either client or server independently in the future. If that's unlikely, I'd go with the KISS principle and thus with RPC which keeps it simple in this specific case.
Regarding the disadvantages for Java that you mention, I tend to agree on the principle (I prefer RoR myself), but not on the details. The multitier and configuration architecture isn't really a problem IMO - Spring and Hibernate are simple enough nowadays. IMO the advantage of using Java across client and server in this project trumps the relative ease of using python, plus you'll be introducing complexities in the interface (i.e. by doing REST vs the native RPC).
I can't comment on Numpy/Scipy and any Java alternatives. I've no experience there. | 1 | 0 | 0 | Feedback on different backends for GWT | 1 | java,python,gwt,architecture,web-frameworks | 0 | 2011-03-24T09:56:00.000 |
Is there a way to reduce the I/O's associated with either mysql or a python script? I am thinking of using EC2 and the costs seem okay except I can't really predict my I/O usage and I am worried it might blindside me with costs.
I basically developed a Python script to parse data and upload it into MySQL. Once it's in MySQL, I do some fairly heavy analytics on it (creating new columns and tables, basically a lot of math and financial analysis on a large dataset). So are there any design best practices to avoid heavy I/O?
I am running the scripts fine right now on another host with 2 gigs of ram, but the ec2 instance I was looking at had about 8 gigs so I was wondering if I could use the extra memory to save me some money. | 2 | 0 | 0 | 0 | false | 5,426,527 | 0 | 202 | 1 | 1 | 0 | 5,425,289 | You didn't really specify whether it was writes or reads. My guess is that you can do it all in a mysql instance in a ramdisc (tmpfs under Linux).
Operations such as ALTER TABLE and copying big data around end up creating a lot of IO requests because they move a lot of data. This is not the same as if you've just got a lot of random (or more predictable queries).
If it's a batch operation, maybe you can do it entirely in a tmpfs instance.
It is possible to run more than one mysql instance on the machine, it's pretty easy to start up an instance on a tmpfs - just use mysql_install_db with datadir in a tmpfs, then run mysqld with appropriate params. Stick that in some shell scripts and you'll get it to start up. As it's in a ramfs, it won't need to use much memory for its buffers - just set them fairly small. | 1 | 0 | 0 | reducing I/O on application and database | 2 | python,mysql,amazon-ec2,mysql-management | 1 | 2011-03-24T20:44:00.000 |
I'm using MySQLdb to access a MySQL database from Python. I need to know whether the connection to the database is still alive... is there any attribute or method for doing this?
thanks!! | 0 | 0 | 0 | 0 | false | 5,430,722 | 0 | 146 | 1 | 0 | 0 | 5,430,652 | To be honest, I haven't used mysqldb in python in a very long time.
That being said, I would suggest using an execute("now()") (or "select 1", any other "dummy" SQL command) and handle any errors.
edit: That should also probably be part of a class you're using. Don't fill your entire project with .execute("now()") on every other line. ;) | 1 | 0 | 0 | Verify the connection with MySQL database | 1 | python-3.x,mysql-python | 0 | 2011-03-25T09:31:00.000 |
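A small sketch of the "dummy query" check from the answer above, wrapped so the caller just gets a boolean:

```python
import MySQLdb

def is_alive(connection):
    # Return True if the MySQL connection still responds to a trivial query.
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT 1")
        cursor.fetchone()
        cursor.close()
        return True
    except MySQLdb.OperationalError:
        return False
```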
save_or_update has been removed in 0.6. Are there alternatives to use them in 0.6 and above?
I noticed the existence of the method _save_or_update_state for session objects, but there are no docs on this method. | 2 | 1 | 0.066568 | 0 | false | 5,469,880 | 0 | 4,986 | 2 | 0 | 0 | 5,442,825 | Session.merge() works fine for both new and existing object. But you have to remember, that merge() returns object bound to the session as opposed to add() (and save_or_update() in old versions) which puts object passed as argument into the session. This behavior is required to insure there is a single object for each identity in the session. | 1 | 0 | 0 | save_or_update using SQLalchemy 0.6 | 3 | python,sql,sqlalchemy | 0 | 2011-03-26T14:05:00.000 |
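A minimal sketch of the merge() pattern the answer describes; note that you must keep working with the returned object. The engine setup and the mapped class are assumptions:

```python
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)       # engine configured elsewhere (assumption)
session = Session()

detached = MyModel(id=1, name="widget")   # hypothetical mapped class

merged = session.merge(detached)          # INSERTs if new, UPDATEs if it already exists
session.commit()

# Keep using `merged`; the original `detached` object is not bound to the session.
```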
save_or_update has been removed in 0.6. Are there alternatives to use them in 0.6 and above?
I noticed the existence of the method _save_or_update_state for session objects, but there are no docs on this method. | 2 | -1 | -0.066568 | 0 | false | 11,861,997 | 0 | 4,986 | 2 | 0 | 0 | 5,442,825 | session.merge() will not work if you have your db setup as a master-slave, where you typically want to query from the slave, but write to the master. I have such a setup, and ended up re-querying from the master just before the writing, then using a session.add() if the data is indeed not there on the master. | 1 | 0 | 0 | save_or_update using SQLalchemy 0.6 | 3 | python,sql,sqlalchemy | 0 | 2011-03-26T14:05:00.000 |
I'm actually working in a search engine project. We are working with python + mongoDb.
I have a pymongo cursor after executing a find() command against the Mongo DB. The pymongo cursor has around 20k results.
I have noticed that the iteration over the pymongo cursor is really slow compared with a normal iteration over for example a list of the same size.
I did a little benchmark:
iteration over a list of 20k strings: 0.001492 seconds
iteration over a pymongo cursor with 20k results: 1.445343 seconds
The difference is really a lot. Maybe not a problem with this amounts of results, but if I have millions of results the time would be unacceptable.
Has anyone got an idea of why pymongo cursors are so slow to iterate?
Any idea of how can I iterate the cursor in less time?
Some extra info:
Python v2.6
PyMongo v1.9
MongoDB v1.6 32 bits | 10 | 1 | 0.049958 | 0 | false | 7,828,897 | 0 | 14,362 | 2 | 0 | 0 | 5,480,340 | the default cursor size is 4MB, and the maximum it can go to is 16MB. you can try to increase your cursor size until that limit is reached and see if you get an improvement, but it also depends on what your network can handle. | 1 | 0 | 1 | Python + MongoDB - Cursor iteration too slow | 4 | python,mongodb,performance,iteration,database-cursor | 0 | 2011-03-29T23:52:00.000 |
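If per-batch round trips are the bottleneck, PyMongo lets you ask the server for larger batches; a hedged sketch (the host, database, and collection names are assumptions):

```python
from pymongo import Connection

db = Connection("localhost", 27017)["searchengine"]   # assumed host and db name

# Ask the server for bigger batches so fewer round trips are needed.
cursor = db.documents.find().batch_size(1000)
results = list(cursor)
```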
I'm actually working in a search engine project. We are working with python + mongoDb.
I have a pymongo cursor after excecuting a find() command to the mongo db. The pymongo cursor has around 20k results.
I have noticed that the iteration over the pymongo cursor is really slow compared with a normal iteration over for example a list of the same size.
I did a little benchmark:
iteration over a list of 20k strings: 0.001492 seconds
iteration over a pymongo cursor with 20k results: 1.445343 seconds
The difference is really a lot. Maybe not a problem with this amounts of results, but if I have millions of results the time would be unacceptable.
Has anyone got an idea of why pymongo cursors are too slow to iterate?
Any idea of how can I iterate the cursor in less time?
Some extra info:
Python v2.6
PyMongo v1.9
MongoDB v1.6 32 bits | 10 | -4 | -1 | 0 | false | 5,480,531 | 0 | 14,362 | 2 | 0 | 0 | 5,480,340 | You don't provide any information about the overall document sizes. Fetch such an amount of document requires both network traffic and IO on the database server.
The performance is sustained "bad" even in "hot" state with warm caches? You can use "mongosniff" in order to inspect the "wire" activity and system tools like "iostat" to monitor the disk activity on the server. In addition "mongostat" gives a bunch of valuable information". | 1 | 0 | 1 | Python + MongoDB - Cursor iteration too slow | 4 | python,mongodb,performance,iteration,database-cursor | 0 | 2011-03-29T23:52:00.000 |
I'm in the planning phase of an Android app which synchronizes to a web app. The web side will be written in Python with probably Django or Pyramid while the Android app will be straightforward java. My goal is to have the Android app work while there is no data connection, excluding the social/web aspects of the application.
This will be a run-of-the-mill app so I want to stick to something that can be installed easily through one click in the market and not require a separate download like CloudDB for Android.
I haven't found any databases that support this functionality, so I will write it myself. One caveat with writing the sync logic is that there will be some shared data that multiple users will be able to write to. This is a solo project, so I thought I'd throw this up here to see if I'm totally off base.
The app will process local saves to the local sqlite database and then send messages to a service which will attempt to synchronize these changes to the remote database.
The sync service will alternate between checking for messages for the local app, i.e. changes to shared data by other users, and writing the local changes to the remote server.
All data will have a timestamp for tracking changes
When writing from the app to the server, if the server has newer information, the user will be warned about the conflict and prompted to overwrite what the server has or abandon the local changes. If the server has not been updated since the app last read the data, process the update.
When data comes from the server to the app, if the server has newer data overwrite the local data otherwise discard it as it will be handled in the next go around by the app updating the server.
Here's some questions:
1) Does this sound like overkill? Is there an easier way to handle this?
2) Where should this processing take place? On the client or the server? I'm thinking the advantage of the client is less processing on the server but if it's on the server, this makes it easier to implement other clients.
3) How should I handle the updates from the server? Incremental polling or comet/websocket? One thing to keep in mind is that I would prefer to go with a minimal installation on Webfaction to begin with as this is the startup.
Once these problems are tackled I do plan on contributing the solution to the geek community. | 4 | 1 | 0.197375 | 0 | false | 11,871,778 | 1 | 1,618 | 1 | 0 | 0 | 5,544,689 | 1) Looks like this is pretty good way to manage your local & remote changes + support offline work. I don't think this is overkill
2) I think you should cache the user's changes locally, with a local timestamp, until synchronization is finished. Then the server should manage all processing: track the current version, commit, and roll back update attempts. Less processing on the client = better for you! (Easier to support and implement.)
3) I'd choose polling if I wanted to support offline mode, because while offline you can't keep your socket open, and you would have to reopen it every time the Internet connection is restored.
PS: Looks like this is VEEERYY OLD question... LOL | 1 | 0 | 0 | Android app database syncing with remote database | 1 | python,android | 0 | 2011-04-04T21:42:00.000 |
when I launch my application with apache2+modwsgi
I catch
Exception Type: ImportError
Exception Value: DLL load failed: The specified module could not be found.
in line
from lxml import etree
with Django dev server all works fine
Visual C++ Redistributable 2008 installed
Dependency Walker reported that msvcrt90.dll is missing,
but the situation is the same with cx_Oracle, yet cx_Oracle's DLL loads correctly.
any ideas?
windows 2003 server 64bit and windows XP sp3 32bit
python 2.7 32 bit
cx_Oracle 5.0.4 32bit
UPD:
I downloaded libxml2-2.7.7 and libxslt-1.1.26
and tried to build with setup.py build --compiler mingw32
Building lxml version 2.3.
Building with Cython 0.14.1.
ERROR: 'xslt-config' is not recognized as an internal or external command,
operable program or batch file.
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running build
running build_py
running build_ext
skipping 'src/lxml\lxml.etree.c' Cython extension (up-to-date)
building 'lxml.etree' extension
C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c src/lxml\lxml.etree.c -o build\temp.win32-2.7\Release\src\lxml\lxml.et
ree.o -w
writing build\temp.win32-2.7\Release\src\lxml\etree.def
C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s build\temp.win32-2.7\Release\src\lxml\lxml.etree.o build\temp.win32-2.7\Release\src\lxml\etree.def -LC:\Python27\lib
s -LC:\Python27\PCbuild -llibxslt -llibexslt -llibxml2 -liconv -lzlib -lWS2_32 -lpython27 -lmsvcr90 -o build\lib.win32-2.7\lxml\etree.pyd
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xd11): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xd24): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x1ee92): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x1eed6): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2159e): undefined reference to `_imp__xmlMalloc'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2e741): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x2e784): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f157): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f19a): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f4ac): undefined reference to `_imp__xmlFree'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0x3f4ef): more undefined references to `_imp__xmlFree' follow
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xb1ad5): undefined reference to `xsltLibxsltVersion'
build\temp.win32-2.7\Release\src\lxml\lxml.etree.o:lxml.etree.c:(.text+0xb1b9a): undefined reference to `xsltDocDefaultLoader'
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
UPD2:
I understand why import cx_Oracle works fine: cx_Oracle.pyd contains "MSVCRT.dll" dependence etree.pyd doesn't have it | 2 | 2 | 1.2 | 0 | true | 5,559,988 | 1 | 1,054 | 1 | 0 | 0 | 5,552,162 | It is indeed because of 'msvcrt90.dll'. From somewhere in micro patch revisions of Python 2.6 they stopped building in automatic dependencies on the DLL for extension modules and relied on Python executable doing it. When embedded in other systems however you are then dependent on that executable linking to DLL and in the case of Apache it doesn't. The change in Python has therefore broken many systems which embed Python on Windows and the only solution is for every extension module to have their own dependencies on required DLLs which many don't. The psycopg2 extension was badly affected by this and they have change their builds to add the dependency back in themselves now. You might go searching about the problem as it occurred for psycopg2. One of the solutions was to rebuild extensions with MinGW compiler on Windows instead. | 1 | 0 | 0 | problem with soaplib (lxml) with apache2 + mod_wsgi | 1 | python,apache2,mingw,lxml,cx-oracle | 0 | 2011-04-05T12:55:00.000 |
I'm currently working on a proof of concept application using Python 3.2 via SQLAlchemy with a MS SQL Server back end. Thus far, I'm hitting a brick wall looking for ways to actually do the connection. Most discussions point to using pyODBC, however it does not support Python 3.x yet.
Does anyone have any connection examples for MS SQL and SQLAlchemy, under Python 3.2?
This is under Windows 7 64bit also.
Thanks. | 0 | 0 | 1.2 | 0 | true | 5,559,890 | 0 | 867 | 1 | 0 | 0 | 5,559,645 | At this moment none of the known Python drivers to connect to Sql Server had a compatible python 3000 version.
PyODBC
mxODBC
pymssql
zxjdbc
AdoDBAPI | 1 | 0 | 0 | SQLAlchemy 3.2 and MS SQL Connectivity | 1 | python,sql-server,sqlalchemy | 0 | 2011-04-05T23:10:00.000 |
I'm working on a python server which concurrently handles transactions on a number of databases, each storing performance data about a different application. Concurrency is accomplished via the Multiprocessing module, so each transaction thread starts in a new process, and shared-memory data protection schemes are not viable.
I am using SQLite as my DBMS, and have opted to set up each application's DB in its own file. Unfortunately, this introduces a race condition on DB creation: if two processes attempt to create a DB for the same new application at the same time, both will create the file where the DB is to be stored. My research leads me to believe that one cannot lock a file before it is created; is there some other mechanism I can use to ensure that the file is not created and then written to concurrently?
Thanks in advance,
David | 0 | 0 | 0 | 0 | false | 5,559,724 | 0 | 327 | 2 | 0 | 0 | 5,559,660 | You could capture the error when trying to create the file in your code and in your exception handler, check if the file exists and use the existing file instead of creating it. | 1 | 0 | 1 | Prevent a file from being created in python | 5 | python,sqlite | 0 | 2011-04-05T23:13:00.000 |
I'm working on a python server which concurrently handles transactions on a number of databases, each storing performance data about a different application. Concurrency is accomplished via the Multiprocessing module, so each transaction thread starts in a new process, and shared-memory data protection schemes are not viable.
I am using SQLite as my DBMS, and have opted to set up each application's DB in its own file. Unfortunately, this introduces a race condition on DB creation: if two processes attempt to create a DB for the same new application at the same time, both will create the file where the DB is to be stored. My research leads me to believe that one cannot lock a file before it is created; is there some other mechanism I can use to ensure that the file is not created and then written to concurrently?
Thanks in advance,
David | 0 | 0 | 0 | 0 | false | 5,559,768 | 0 | 327 | 2 | 0 | 0 | 5,559,660 | You didn't mention the platform, but on linux open(), or os.open() in python, takes a flags parameter which you can use. The O_CREAT flag creates a file if it does not exist, and the O_EXCL flag gives you an error if the file already exists. You'll also be needing O_RDONLY, O_WRONLY or O_RDWR for specifying the access mode. You can find these constants in the os module.
For example: fd = os.open(filename, os.O_RDWR | os.O_CREAT | os.O_EXCL) | 1 | 0 | 1 | Prevent a file from being created in python | 5 | python,sqlite | 0 | 2011-04-05T23:13:00.000 |
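Building on that answer, a sketch that treats "file already exists" as the signal that another process won the race:

```python
import errno
import os

def create_db_file(path):
    # Atomically create the file; return True if we created it,
    # False if another process already had.
    try:
        fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False        # someone else created it first
        raise
    os.close(fd)
    return True
```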
Any idea on how I could run a bunch of .sql files that contains lots of functions from within sqlalchemy, after I create the schema ? I've tried using DDL(), engine.text(<text>).execute(), engine.execute(<text>). None of them work, they are either failing because improper escape or some other weird errors. I am using sqlalchemy 0.6.6 | 2 | 1 | 0.197375 | 0 | false | 5,564,716 | 0 | 1,729 | 1 | 0 | 0 | 5,563,437 | You can't do that. You must parse the file and split it into individual SQL commands, and then execute each one separately in a transaction. | 1 | 0 | 0 | run .sql files from within sqlalchemy | 1 | python,sqlalchemy | 0 | 2011-04-06T08:25:00.000 |
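A naive sketch of the split-and-execute approach from the answer above. It assumes statements end with ';' and will break on function bodies that themselves contain semicolons (e.g. PL/pgSQL), which would need a real parser or a custom statement delimiter:

```python
def run_sql_file(engine, path):
    with open(path) as f:
        script = f.read()
    for statement in script.split(";"):
        statement = statement.strip()
        if statement:                  # skip blank chunks between semicolons
            engine.execute(statement)
```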
I have been trying to generate data in Excel.
I generated .CSV file.
So up to that point it's easy.
But generating a graph is quite hard in Excel...
I am wondering, is Python able to generate both the data AND the graph in Excel?
If there are examples or code snippets, feel free to post it :)
Alternatively, a workaround could be to use Python to generate the graph in a graphical format like .jpg (a .pdf file is also OK), as long as the workaround doesn't need a dependency such as the Boost library. | 6 | 2 | 0.07983 | 1 | false | 5,568,485 | 0 | 48,351 | 1 | 0 | 0 | 5,568,319 | I suggest you try gnuplot for drawing graphs from data files. | 5 | python,excel,charts,export-to-excel | 0 | 2011-04-06T14:47:00.000
I have a python program which makes use of MySQL database.
I am getting the following error.
I would be very grateful if someone could help me find a solution.
Traceback (most recent call last):
File "version2_1.py", line 105, in
refine(wr,w)#function for replacement
File "version2_1.py", line 49, in refine
wrds=db_connect.database(word)
File "/home/anusha/db_connect.py", line 6, in database
db = MySQLdb.connect("localhost","root","localhost","anusha" )
File "/usr/lib/pymodules/python2.6/MySQLdb/_init_.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'root'@'localhost' (using password: YES)") | 1 | 0 | 0 | 0 | false | 5,606,690 | 0 | 3,356 | 1 | 0 | 0 | 5,606,665 | Looks like you have an incorrect username/password for MySQL. Try creating a user in MySQL and use that to connect. | 1 | 0 | 0 | Error when trying to execute a Python program that uses MySQL | 3 | python,mysql,mysql-error-1045 | 0 | 2011-04-09T17:37:00.000 |
From the interpreter I can issue >>> from MySQLdb just fine, so I'm assuming the module did actually load. My source looks as follows:
from Tkinter import *
from MySQLdb import *
"""
Inventory control for Affordable Towing
Functions:
connection() - Controls database connection
delete() - Remove item from database
edit() - Edit item's attributes in database
lookup() - Lookup an item
new() - Add a new item to database
receive() - Increase quantity of item in database
remove() - Decrease quantity of item in database
report() - Display inventory activity
transfer() - Remove item from one location, receive item in another
"""
def control():
....dbInfo = { 'username':'livetaor_atowtw', 'password':'spam', \
....'server':'eggs.com', 'base':'livetaor_towing', 'table':'inventory' }
....def testConnection():
........sql = MySQLdb.connect(user=dbInfo[username], passwd=dbInfo[password], \
........host=dbInfo[server], db=dbInfo[base])
........MySQLdb.mysql_info(sql)
....testConnection()
control()
this gives me:
brad@brads-debian:~/python/towing/inventory$ python inventory.py
Traceback (most recent call last):
..File "inventory.py", line 53, in
....control()
..File "inventory.py", line 26, in control
....testConnection()
..File "inventory.py", line 22, in testConnection
....sql = MySQLdb.connect(user=dbInfo[username], passwd=dbInfo[password], \
NameError: global name 'MySQLdb' is not defined
1) where am I going wrong?
2) any other gotcha's that you folks see?
3) any advice on how to check for a valid connection to the database, (not just the server)? | 0 | 1 | 0.099668 | 0 | false | 5,609,341 | 0 | 7,873 | 1 | 0 | 0 | 5,609,322 | from MySQLdb import * and import MySQLdb do very different things. | 1 | 0 | 0 | python2.6 with MySQLdb, NameError 'MySQLdb' not defined | 2 | python,mysql,programming-languages,network-programming | 0 | 2011-04-10T02:12:00.000 |
Is there some module to allow for easy DB provider configuration via connection string, similar to PHP's PDO where I can nicely say "psql://" or "mysql://" or, in this python project, am I just going to have to code some factory classes that use MySQLdb, psycopg2, etc? | 0 | 0 | 0 | 0 | false | 5,617,901 | 0 | 413 | 1 | 0 | 0 | 5,617,246 | There's something not quite as nice in logilab.database, but which works quite well (http://www.logilab.org/project/logilab-database). Supports sqlite, mysql, postgresql and some versions of mssql, and some abstraction mechanisms on the SQL understood by the different backend engines. | 1 | 0 | 0 | python and DB connection abstraction? | 2 | python | 0 | 2011-04-11T05:49:00.000 |
Python hangs on
lxml.etree.XMLSchema(tree)
when I use it on apache server + mod_wsgi (Windows)
When I use Django dev server - all works fine
If you know of another nice XML validation solution against XSD, please tell me.
Update:
I'm using soaplib, which uses lxml
logger.debug("building schema...")
self.schema = etree.XMLSchema(etree.parse(f))
logger.debug("schema %r built, cleaning up..." % self.schema)
I see "building schema..." in apache logs, but I don't see "schema %r built, cleaning up..."
Update 2:
I built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010. | 5 | 1 | 0.066568 | 0 | false | 6,176,299 | 1 | 1,123 | 2 | 0 | 0 | 5,617,599 | I had a similar problem on a Linux system. Try installing a more recent version of libxml2 and reinstalling lxml, at least that's what did it for me. | 1 | 0 | 0 | Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi | 3 | python,apache,mod-wsgi,lxml,xml-validation | 0 | 2011-04-11T06:34:00.000 |
Python hangs on
lxml.etree.XMLSchema(tree)
when I use it on apache server + mod_wsgi (Windows)
When I use Django dev server - all works fine
If you know of another nice XML validation solution against XSD, please tell me.
Update:
I'm using soaplib, which uses lxml
logger.debug("building schema...")
self.schema = etree.XMLSchema(etree.parse(f))
logger.debug("schema %r built, cleaning up..." % self.schema)
I see "building schema..." in apache logs, but I don't see "schema %r built, cleaning up..."
Update 2:
I built lxml 2.3 with MSVS 2010 visual C++; afterwards it crashes on this line self.schema = etree.XMLSchema(etree.parse(f)) with Unhandled exception at 0x7c919af2 in httpd.exe: 0xC0000005: Access violation writing location 0x00000010. | 5 | 2 | 0.132549 | 0 | false | 6,685,198 | 1 | 1,123 | 2 | 0 | 0 | 5,617,599 | I had the same problem (lxml 2.2.6, mod_wsgi 3.2). A work around for this is to pass a file or filename to the constructor: XMLSchema(file=). | 1 | 0 | 0 | Python hangs on lxml.etree.XMLSchema(tree) with apache + mod_wsgi | 3 | python,apache,mod-wsgi,lxml,xml-validation | 0 | 2011-04-11T06:34:00.000 |
I have a sentence like the cat sat on the mat stored as a single sql field. I want to periodically search for keywords which are not in a stop list, in this case cat sat mat. What's the best way to store them in an SQL table for quick searching?
As far as I can see it I see the following options
Up to [n] additional columns per row, one for each word.
Store all of the interesting words in a single, comma separated field.
A new table, linked to the first with either of the above options.
Do nothing and search for a match each time I have a new word to search on.
Which is best practice and which is fastest for searching for word matches? I'm using sqlite in python if that makes a difference. | 1 | 1 | 0.066568 | 0 | false | 5,627,582 | 0 | 290 | 2 | 0 | 0 | 5,627,140 | I do something similar with SQLite too. In my experience it's not as fast as other db's in this type of situation so it pays to make your schema as simple as possible.
Up to [n] additional columns per row, one for each word.
Store all of the interesting words in a single, comma separated field.
A new table, linked to the first with either of the above options.
Do nothing and search for a match each time I have a new word to search on.
Of your 4 options, 2) and 4) may be too slow if you're looking to scale and matching using LIKE. Matching using full text is faster though, so that's worth looking into. 1) looks to be bad database design, what if there's more words than columns ? And if there's less, it's just wasted space. 3) is best IMO, if you make the words the primary key in their own table the searching speed should be acceptably fast. | 1 | 0 | 0 | Storing interesting words from a sentence | 3 | python,sql,sqlite | 0 | 2011-04-11T20:26:00.000 |
I have a sentence like the cat sat on the mat stored as a single sql field. I want to periodically search for keywords which are not in a stop list, in this case cat sat mat. What's the best way to store them in an SQL table for quick searching?
As far as I can see it I see the following options
Up to [n] additional columns per row, one for each word.
Store all of the interesting words in a single, comma separated field.
A new table, linked to the first with either of the above options.
Do nothing and search for a match each time I have a new word to search on.
Which is best practice and which is fastest for searching for word matches? I'm using sqlite in python if that makes a difference. | 1 | 1 | 1.2 | 0 | true | 5,627,243 | 0 | 290 | 2 | 0 | 0 | 5,627,140 | I would suggest giving your sentences a key, likely IDENTITY. I would then create a second table linking to your sentence table, with a row for each interesting word.
If you'd like to search for, say, words starting with ca-, then with these words stored in a comma-delimited field you'd have to wildcard both the start and the end, whereas if they are each in a separate row you can skip the leading wildcard.
Also, assuming you find a match, in a comma separated list you'd have to parse out which word is actually a hit. With the second table you simply return the word itself. Not to mention the fact that storing multiple values in one field is a major no-no in a relational database. | 1 | 0 | 0 | Storing interesting words from a sentence | 3 | python,sql,sqlite | 0 | 2011-04-11T20:26:00.000
Beginner question- what is the difference between sqlite and sqlalchemy? | 37 | 66 | 1 | 0 | false | 5,632,745 | 0 | 24,446 | 1 | 0 | 0 | 5,632,677 | They're apples and oranges.
Sqlite is a database storage engine, which can be better compared with things such as MySQL, PostgreSQL, Oracle, MSSQL, etc. It is used to store and retrieve structured data from files.
SQLAlchemy is a Python library that provides an object relational mapper (ORM). It does what it suggests: it maps your databases (tables, etc.) to Python objects, so that you can more easily and natively interact with them. SQLAlchemy can be used with sqlite, MySQL, PostgreSQL, etc.
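As a rough sketch of the difference (table, class and file names here are invented for the example): with the sqlite3 module you write the SQL yourself, while with SQLAlchemy you define a mapped class and let the library emit the SQL for whichever engine you point it at.
import sqlite3
conn = sqlite3.connect("example.db")          # plain sqlite: hand-written SQL
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):                             # SQLAlchemy: work with mapped classes
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///example.db")   # the URL could just as well point at MySQL or PostgreSQL
session = sessionmaker(bind=engine)()
print(session.query(User).filter_by(name="Alice").first())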
So, an ORM provides a set of tools that let you interact with your database models consistently across database engines. | 1 | 0 | 0 | What is the difference between sqlite3 and sqlalchemy? | 2 | python,sqlite,sqlalchemy | 0 | 2011-04-12T08:54:00.000 |
I have a really large Excel file and I need to delete about 20,000 rows, contingent on meeting a simple condition, and Excel won't let me delete such a complex range when using a filter. The condition is:
If the first column contains the value, X, then I need to be able to delete the entire row.
I'm trying to automate this using python and xlwt, but am not quite sure where to start. Seeking some code snippets to get me started...
Grateful for any help that's out there! | 5 | 12 | 1.2 | 0 | true | 5,635,203 | 0 | 51,378 | 1 | 0 | 0 | 5,635,054 | Don't delete. Just copy what you need.
read the original file
open a new file
iterate over rows of the original file (if the first column of the row does not contain the value X, add this row to the new file)
close both files
rename the new file into the original file | 1 | 0 | 0 | Python to delete a row in excel spreadsheet | 6 | python,excel,xlwt | 0 | 2011-04-12T12:19:00.000 |
After much study and investigation, I've decided to do my Python development with pyQT4 using Eric5 as the editor. However, I've run into a brick wall with trying to get MySQL to work. It appears that there's an issue with the QMySQL driver. From the discussions that I've seen so far, the only fix is to install the pyQT SDK and then recompile the MySQL driver. A painful process that I really don't want to have to go through. I would actually prefer to use MS SQL but I'm not finding any drivers for pyQT with MSSQL support.
So, my question is: What is the best approach for using pyQT with either mySQL or MSSQL, that actually works?
While waiting for an answer, I might just tinker with SQLAlchemy and mySQL.Connector to see if it will co-exist with pyQT. | 1 | 2 | 0.132549 | 0 | false | 5,643,057 | 0 | 6,327 | 1 | 0 | 0 | 5,642,537 | yes that will work, I do the same thing. I like a programming API, like what SQLAlchemy provides over the Raw SQL version of Qt's QtSql module. It works fine and nice, just populate a subclassed QAbstractTableModel with data from your sqlalchemy queries, like you would with data from any other python object. This though means you're handling caching and database queries, losing the niceness of QtSqlTableModel. But shouldn't be too bad. | 1 | 0 | 0 | pyQT and MySQL or MSSQL Connectivity | 3 | python,mysql,sql-server,pyqt | 0 | 2011-04-12T22:50:00.000 |
I am very new to python and Django, was actually thrown in to finish off some coding for my company since our coder left for overseas.
When I run python manage.py syncdb I receive the following error
psycopg2.OperationalError: FATAL: password authentication failed for user "winepad"
I'm not sure why I am being prompted for user "winepad" as I've created no such user by that name, I am running the sync from a folder named winepad. In my pg_hba.conf file all I have is a postgres account which I altered with a new password.
Any help would be greatly appreciated as the instructions I left are causing me some issues.
Thank you in advance | 0 | 1 | 0.099668 | 0 | false | 5,643,247 | 1 | 3,164 | 1 | 0 | 0 | 5,643,201 | Check your settings.py file. The most likely reason for this issue is that the username for the database is set to "winepad". Change that to the appropriate value and rerun python manage.py syncdb That should fix the issue. | 1 | 0 | 0 | python manage.py syncdb | 2 | python,django,postgresql | 0 | 2011-04-13T00:37:00.000 |
I have just begun learning Python. Eventually I will learn Django, as my goal is to able to do web development (video sharing/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you. | 1 | 0 | 0 | 0 | false | 5,643,494 | 1 | 1,826 | 2 | 0 | 0 | 5,643,400 | Django uses its own ORM, so I guess it's not completely necessary to learn MySQL first, but I suspect it would help a fair bit to know what's going on behind the scenes, and it will help you think in the correct way to formulate your queries.
I would start learning MySQL (or any other SQL), after you've got a pretty good grip on Python, but probably before you start learning Django, or perhaps alongside. You won't need a thorough understanding of SQL. At least, not to get started.
Err... ORM/Object Relational Mapper, it hides/abstracts the complexities of SQL and lets you access your data through the simple objects/models you define in Python. For example, you might have a "Person" model with Name, Age, etc. That Name and Age could be stored and retrieved from the database transparently just be accessing the object, without having to write any SQL. (Just a simple .save() and .get()) | 1 | 0 | 0 | Beginning MySQL/Python | 4 | python,mysql,django,new-operator | 0 | 2011-04-13T01:16:00.000 |
I have just begun learning Python. Eventually I will learn Django, as my goal is to able to do web development (video sharing/social networking). At which point should I begin learning MySQL? Do I need to know it before I even begin Django? If so, how much should I look to know before diving into Django? Thank you. | 1 | 0 | 0 | 0 | false | 5,654,701 | 1 | 1,826 | 2 | 0 | 0 | 5,643,400 | As Django documents somehow Recommends, It is better to learning PostgreSQL.
PostgreSQL works pretty well with Django; I never had any problem with Django/PostgreSQL.
All I know is that sometimes I get weird errors when working with MySQL. | 1 | 0 | 0 | Beginning MySQL/Python | 4 | python,mysql,django,new-operator | 0 | 2011-04-13T01:16:00.000
I am new to Python and having some rudimentary problems getting MySQLdb up and running. I'm hoping somebody out there can help me.
When I first tried to install the module using setup.py, the setup terminated because it was unable to find mysql_config. This is because I didn't realize the module expected MySQL to be installed on the local machine. I am only trying to connect to a remote MySQL server.
My question is twofold:
1) How should I use MySQLdb on a machine that doesn't have MySQL installed, only to connect to a remote server?
2) How can I roll back what appears to be a corrupt installation of MySQLdb? Whenever I try to import MySQLdb from a script, I get the error "no module named _mysql", which according to the documentation, indicates a faulty install.
BTW: I am on a Mac running Snow Leopard/Python 2.6.1
Thank you! | 0 | 0 | 0 | 0 | false | 5,644,390 | 0 | 125 | 1 | 0 | 0 | 5,644,374 | Install the MySQL client libraries.
Install the MySQL client library development files, and build again. | 1 | 0 | 0 | Help with MySQLdb module: corrupted installation and connecting to remote servers | 1 | python,mysql,python-module,mysql-python,setup.py | 0 | 2011-04-13T04:23:00.000 |
I am facing a problem where I am trying to add data from a python script to a mysql database with the InnoDB engine; it works fine with the myisam engine of the mysql database. But the problem with the myisam engine is that it doesn't support foreign keys, so I'll have to add extra code each place where I want to insert/delete records in the database.
Does anyone know why InnoDB doesn't work with python scripts, and possible solutions for this problem? | 3 | 6 | 1.2 | 0 | true | 5,654,733 | 0 | 1,182 | 1 | 0 | 0 | 5,654,107 | InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
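A minimal sketch of what that looks like with MySQLdb (connection parameters and table/column names are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="password", db="mydb")
cur = conn.cursor()
cur.execute("INSERT INTO child (parent_id, name) VALUES (%s, %s)", (1, "example"))
conn.commit()    # without this, InnoDB discards the insert when the connection closes
cur.close()
conn.close()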
Edit: you can call connection.autocommit(True) to turn on autocommit. | 1 | 0 | 0 | Problem in insertion from python script in mysql database with innondb engine | 2 | python,mysql,innodb,myisam | 0 | 2011-04-13T18:55:00.000 |
I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration.
When I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).
What can I do to speed this up? What causes this slowness? | 7 | 3 | 0.148885 | 0 | false | 5,656,734 | 1 | 4,372 | 2 | 0 | 0 | 5,656,238 | 1500 records is far from being a large dataset, and seven seconds is really too much. There is probably some problem in your models, you can easily check it by getting (as Brandon says) the values() query, and then create explicitly the 1500 object by iterating the dictionary. Just convert the ValuesQuerySet into a list before the construction to factor out the db connection. | 1 | 0 | 0 | How do I speed up iteration of large datasets in Django | 4 | python,django | 0 | 2011-04-13T22:03:00.000 |
I have a query set of approximately 1500 records from a Django ORM query. I have used the select_related() and only() methods to make sure the query is tight. I have also used connection.queries to make sure there is only this one query. That is, I have made sure no extra queries are getting called on each iteration.
When I run the query cut and paste from connection.queries it runs in 0.02 seconds. However, it takes seven seconds to iterate over those records and do nothing with them (pass).
What can I do to speed this up? What causes this slowness? | 7 | 1 | 0.049958 | 0 | false | 5,657,066 | 1 | 4,372 | 2 | 0 | 0 | 5,656,238 | Does your model's Meta declaration tell it to "order by" a field that is stored off in some other related table? If so, your attempt to iterate might be triggering 1,500 queries as Django runs off and grabs that field for each item, and then sorts them. Showing us your code would help us unravel the problem! | 1 | 0 | 0 | How do I speed up iteration of large datasets in Django | 4 | python,django | 0 | 2011-04-13T22:03:00.000 |
I have a group of related companies that share items they own with one-another. Each item has a company that owns it and a company that has possession of it. Obviously, the company that owns the item can also have possession of it. Also, companies sometimes permanently transfer ownership of items instead of just lending it, so I have to allow for that as well.
I'm trying to decide how to model ownership and possession of the items. I have a Company table and an Item table.
Here are the options as I see them:
Inventory table with entries for each Item - Company relationship. Has a company field pointing to a Company and has Boolean fields is_owner and has_possession.
Inventory table with entries for each Item. Has an owner_company field and a possessing_company field that each point to a Company.
Two separate tables: ItemOwner and ItemHolder**.
So far I'm leaning towards option three, but the tables are so similar it feels like duplication. Option two would have only one row per item (cleaner than option one in this regard), but having two fields on one table that both reference the Company table doesn't smell right (and it's messy to draw in an ER diagram!).
Database design is not my specialty (I've mostly used non-relational databases), so I don't know what the best practice would be in this situation. Additionally, I'm brand new to Python and Django, so there might be an obvious idiom or pattern I'm missing out on.
What is the best way to model this without Company and Item being polluted by knowledge of ownership and possession? Or am I missing the point by wanting to keep my models so segregated? What is the Pythonic way?
Update
I've realized I'm focusing too much on database design. Would it be wise to just write good OO code and let Django's ORM do it's thing? | 2 | 0 | 0 | 0 | false | 5,656,695 | 1 | 198 | 1 | 0 | 0 | 5,656,345 | Option #1 is probably the cleanest choice. An Item has only one owner company and is possessed by only one possessing company.
Put two FKs to Company in Item, and remember to explicitly define the related_name of the two inverses to be different from each other.
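A sketch of that two-foreign-key layout (model and field names are invented for the example; on newer Django versions ForeignKey also needs an on_delete argument):
from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=100)

class Item(models.Model):
    name = models.CharField(max_length=100)
    owner_company = models.ForeignKey(Company, related_name="owned_items")
    possessing_company = models.ForeignKey(Company, related_name="held_items")

# company.owned_items.all() and company.held_items.all() then give the two inverse sets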
As you want to avoid touching the Item model, either add the FKs from outside, like in field.contribute_to_class(), or put a new model with a one-to-one rel to Item, plus the foreign keys.
The second method is easier to implement but the first will be more natural to use once implemented. | 1 | 0 | 0 | How to model lending items between a group of companies | 3 | django,design-patterns,database-design,django-models,python | 0 | 2011-04-13T22:17:00.000 |
I am working on a realtime data website that has a data-mining backend side to it. I am highly experienced in both Python and C++/C#, and wondering which one would be preferable for the backend development.
I am strongly leaning towards Python for its available libraries and ease of use. But am I wrong? If so, why?
As I side question, would you recommend using SQLAlchemy? Are there any drawback to it (performance is crucial) compared to _mysql or MySQLdb?
Thanks! | 1 | 1 | 0.197375 | 0 | false | 5,658,631 | 0 | 1,302 | 1 | 0 | 0 | 5,658,529 | We do backend development based on Zope, Python and other Python-related stuff since almost 15 years. Python gives you great flexibility and all-batteries included (likely true for C#, not sure about C++).
If you do RDBMS development with Python, SQLAlchemy is the way to go. It provides a huge amount of functionality and has saved my a** a couple of times over the years... SQLAlchemy can be complex and complicated, but the advantage is that you can hide a complex database schema behind an OO facade; very handy, like any ORM in general.
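For illustration only (class, table and connection names are invented), the kind of facade meant here lets callers navigate objects while the ORM deals with the joins:
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    orders = relationship("Order", backref="customer")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"))
    total = Column(Integer)

engine = create_engine("mysql://user:password@localhost/shop")   # placeholder URL
session = sessionmaker(bind=engine)()
customer = session.query(Customer).filter_by(name="ACME").one()
for order in customer.orders:       # the ORM issues the underlying queries for you
    print(order.total)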
_mysql vs MySQLdb...I only know of the python-mysql package. | 1 | 0 | 0 | Designing a Website Backend - Python or C++/C#? | 1 | c#,c++,python,backend | 1 | 2011-04-14T04:27:00.000 |
I am trying to insert a query that contains é - or \xe9 (INSERT INTO tbl1 (text) VALUES ("fiancé")) into a MySQL table in Python using the _mysql module.
My query is in unicode, and when I call _mysql.connect(...).query(query) I get a UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position X: ordinal not in range(128).
Obviously the call to query causes a conversion of the unicode string to ASCII somehow, but the question is why? My DB is in utf8 and the connection is opened with the flags use_unicode=True and charset='utf8'. Is unicode simply not supported with _mysql or MySQLdb? Am I missing something else?
Thanks! | 0 | 0 | 0 | 0 | false | 5,658,972 | 0 | 522 | 1 | 0 | 0 | 5,658,737 | I know this doesn't directly answer your question, but why aren't you using prepared statements? That will do two things: probably fix your problem, and almost certainly fix the SQLi bug you've almost certainly got.
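A sketch of that with MySQLdb-style parameter binding (the table name comes from the question; connection details are placeholders). The driver encodes the unicode value for you:
# -*- coding: utf-8 -*-
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="password",
                       db="mydb", use_unicode=True, charset="utf8")
cur = conn.cursor()
cur.execute("INSERT INTO tbl1 (text) VALUES (%s)", (u"fianc\xe9",))
conn.commit()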
If you won't do that, are you absolutely certain your string itself is unicode? If you're just naively using strings in python 2.7, it probably is being forced into an ASCII string. | 1 | 0 | 0 | Python MySQL Unicode Error | 1 | python,mysql,unicode | 0 | 2011-04-14T04:55:00.000 |
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. What is the standard practice for getting and closing cursors? In particular, how long should my cursors last? Should I get a new cursor for each transaction?
I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal? | 95 | -6 | -1 | 0 | false | 5,670,056 | 0 | 99,138 | 1 | 0 | 0 | 5,669,878 | I suggest to do it like php and mysql. Start i at the beginning of your code before printing of the first data. So if you get a connect error you can display a 50x(Don't remember what internal error is) error message. And keep it open for the whole session and close it when you know you wont need it anymore. | 1 | 0 | 0 | When to close cursors using MySQLdb | 5 | python,mysql,mysql-python | 0 | 2011-04-14T21:23:00.000 |
I am now working on a big backend system for a real-time and history tracking web service.
I am highly experienced in Python and intend to use it with sqlalchemy (MySQL) to develop the backend.
I don't have any major experience developing robust and sustainable backend systems, and I was wondering if you guys could point me to some documentation / books about backend design patterns? I basically need to feed data to a database by querying different services (over HTML / SOAP / JSON) in realtime, and to keep a history of that data.
Thanks! | 0 | 0 | 0 | 0 | false | 5,671,966 | 1 | 4,070 | 1 | 0 | 0 | 5,670,639 | Use Apache, Django and Piston.
Use REST as the protocol.
Write as little code as possible.
Django models, forms, and admin interface.
Piston wrappers for your resources. | 1 | 0 | 0 | Python Backend Design Patterns | 2 | python,backend | 0 | 2011-04-14T22:55:00.000
I want to generate compound charts (e.g: Bar+line) from my database using python.
How can I do this?
Thanks in Advance | 0 | 1 | 0.049958 | 1 | false | 6,272,840 | 0 | 277 | 1 | 0 | 0 | 5,693,151 | Pretty easy to do with pygooglechart -
You can basically follow the bar chart examples that ship with the software and then use the add_data_line method to make the lines on top of the bar chart | 1 | 0 | 0 | Compoud charts with python | 4 | python,charts | 0 | 2011-04-17T11:09:00.000 |
I'm running a django site with MySQL as the DB back-end.
Finally I've got 3 million rows in the django_session table. Most of them are expired, thus I want to remove them.
But if I manually run delete from django_session where expire_date < "2011-04-18", the whole site seems to hang - it cannot be accessed via a browser.
Why such kind of blocking is possible? How to avoid it? | 1 | 1 | 0.049958 | 0 | false | 5,703,375 | 1 | 513 | 2 | 0 | 0 | 5,703,308 | I am not MySQL expert, but I guess MySQL locks the table for the deleting and this might be MySQL transaction/backend related. When deleting is in progress MySQL blocks the access to the table from other connections. MyISAM and InnoDB backend behavior might differ. I suggest you study MySQL manual related to this: the problem is not limited to Django domain, but generally how to delete MySQL rows without blocking access to the table.
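One common workaround is to delete in small batches so each statement only holds the lock briefly; a sketch with MySQLdb (connection details are placeholders, cutoff date taken from the question):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="password", db="mydb")
cur = conn.cursor()
while True:
    cur.execute("DELETE FROM django_session WHERE expire_date < %s LIMIT 10000",
                ("2011-04-18",))
    conn.commit()
    if cur.rowcount == 0:      # nothing left to delete
        break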
For future reference I suggest you set up a session cleaner task which will clear the sessions, let's say once a day, from cron, so that you don't end up with such a huge table. | 1 | 0 | 0 | MySQL&django hangs on huge session delete | 4 | python,mysql,django | 0 | 2011-04-18T13:03:00.000
I'm running a django site with MySQL as the DB back-end.
Finally I've got 3 million rows in the django_session table. Most of them are expired, thus I want to remove them.
But if I manually run delete from django_session where expire_date < "2011-04-18", the whole site seems to hang - it cannot be accessed via a browser.
Why such kind of blocking is possible? How to avoid it? | 1 | 5 | 1.2 | 0 | true | 5,703,378 | 1 | 513 | 2 | 0 | 0 | 5,703,308 | If your table is MyISAM, DELETE operations lock the table and it is not accessible by the concurrent queries.
If there are many records to delete, the table is locked for too long.
Split your DELETE statement into several shorter batches. | 1 | 0 | 0 | MySQL&django hangs on huge session delete | 4 | python,mysql,django | 0 | 2011-04-18T13:03:00.000 |
I have had a virtualenv for Trunk up and running for a while, but now I am trying to branch, and get things setup on another virtualenv for my 'refactor' branch.
Everything looks to be setup correctly, but when I try to run any manage.py commands, I get this error:
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'brian'@'localhost' (using password: NO)")
I just don't understand why it's not attempting to use the password I have set in my django settings file. Is there some additional mysql setup I could have overlooked? Does this issue ring any bells for anyone?
Thanks in advance. | 1 | 1 | 1.2 | 0 | true | 8,742,834 | 1 | 1,450 | 1 | 0 | 0 | 5,726,440 | I found the problem I was having.
Django was importing a different settings.py file.
I had another django project inside my django project, like myproject/myproject/.
Instead of importing myproject/settings.py, it was importing myproject/myproject/settings.py
I assume that Aptana Studio created that project there. If you use eclipse you are also likely to have this problem. | 1 | 0 | 0 | Can't access MySQL database in Django VirtualEnv on localhost | 1 | python,mysql,django,mysql-error-1045 | 0 | 2011-04-20T06:39:00.000 |
In Python, is there a way to get notified that a specific table in a MySQL database has changed? | 10 | 1 | 0.039979 | 0 | false | 5,771,943 | 0 | 30,571 | 2 | 0 | 0 | 5,771,925 | Not possible with standard SQL functionality. | 1 | 0 | 0 | python: how to get notifications for mysql database changes? | 5 | python,mysql | 0 | 2011-04-24T17:03:00.000 |
In Python, is there a way to get notified that a specific table in a MySQL database has changed? | 10 | 10 | 1 | 0 | false | 5,771,988 | 0 | 30,571 | 2 | 0 | 0 | 5,771,925 | It's theoretically possible but I wouldn't recommend it:
Essentially you have a trigger on the table that calls a UDF which communicates with your Python app in some way.
Pitfalls include what happens if there's an error?
What if it blocks? Anything that happens inside a trigger should ideally be near-instant.
What if it's inside a transaction that gets rolled back?
I'm sure there are many other problems that I haven't thought of as well.
A better way if possible is to have your data access layer notify the rest of your app. If you're looking for when a program outside your control modifies the database, then you may be out of luck.
Another way that's less ideal, but IMO better than calling another program from within a trigger, is to set up some kind of "LastModified" table that gets updated by triggers. Then in your app just check whether that datetime is greater than when you last checked. | 1 | 0 | 0 | python: how to get notifications for mysql database changes? | 5 | python,mysql | 0 | 2011-04-24T17:03:00.000
I am developing a database based django application and I have installed apache, python and django using macport on a snow leopard machine. I ran into issues installing MySQL with macport. But I was able to successfully install a standalone MySQL server (from MySQL.com). Is it possible to remove the MysQL package installed along with py26-MySQL? | 0 | 2 | 1.2 | 0 | true | 5,782,960 | 1 | 182 | 1 | 0 | 0 | 5,782,875 | To use py26-mysql you don't need the entire server distribution for MySQL. You do need the client libs, at the very least. If you remove the server, you need to make sure you re-install the base libraries needed by the Python module to function. | 1 | 0 | 0 | Is it possible to install py26-mysql without installing mysql5 package? | 1 | python,mysql,macports | 0 | 2011-04-25T20:26:00.000 |
I have some database structure; as most of it is irrelevant for us, i'll describe just some relevant pieces. Let's lake Item object as example:
items_table = Table("invtypes", gdata_meta,
Column("typeID", Integer, primary_key = True),
Column("typeName", String, index=True),
Column("marketGroupID", Integer, ForeignKey("invmarketgroups.marketGroupID")),
Column("groupID", Integer, ForeignKey("invgroups.groupID"), index=True))
mapper(Item, items_table,
properties = {"group" : relation(Group, backref = "items"),
"_Item__attributes" : relation(Attribute, collection_class = attribute_mapped_collection('name')),
"effects" : relation(Effect, collection_class = attribute_mapped_collection('name')),
"metaGroup" : relation(MetaType,
primaryjoin = metatypes_table.c.typeID == items_table.c.typeID,
uselist = False),
"ID" : synonym("typeID"),
"name" : synonym("typeName")})
I want to achieve some performance improvements in the sqlalchemy/database layer, and have a couple of ideas:
1) Requesting the same item twice:
item = session.query(Item).get(11184)
item = None (reference to item is lost, object is garbage collected)
item = session.query(Item).get(11184)
Each request generates and issues a SQL query. To avoid this, I use 2 custom maps for an item object:
itemMapId = {}
itemMapName = {}
@cachedQuery(1, "lookfor")
def getItem(lookfor, eager=None):
if isinstance(lookfor, (int, float)):
id = int(lookfor)
if eager is None and id in itemMapId:
item = itemMapId[id]
else:
item = session.query(Item).options(*processEager(eager)).get(id)
itemMapId[item.ID] = item
itemMapName[item.name] = item
elif isinstance(lookfor, basestring):
if eager is None and lookfor in itemMapName:
item = itemMapName[lookfor]
else:
# Items have unique names, so we can fetch just first result w/o ensuring its uniqueness
item = session.query(Item).options(*processEager(eager)).filter(Item.name == lookfor).first()
itemMapId[item.ID] = item
itemMapName[item.name] = item
return item
I believe sqlalchemy does similar object tracking, at least by primary key (item.ID). If it does, I can wipe both maps (although wiping the name map will require minor modifications to the application which uses these queries) so as not to duplicate functionality, and use the stock methods. The actual question is: if there's such functionality in sqlalchemy, how do I access it?
2) Eager loading of relationships often helps to save a lot of requests to the database. Say, I'll definitely need the following set of item=Item() properties:
item.group (Group object, according to groupID of our item)
item.group.items (fetch all items from items list of our group)
item.group.items.metaGroup (metaGroup object/relation for every item in the list)
If i have some item ID and no item is loaded yet, i can request it from the database, eagerly loading everything i need: sqlalchemy will join group, its items and corresponding metaGroups within single query. If i'd access them with default lazy loading, sqlalchemy would need to issue 1 query to grab an item + 1 to get group + 1*#items for all items in the list + 1*#items to get metaGroup of each item, which is wasteful.
2.1) But what if i already have Item object fetched, and some of the properties which i want to load are already loaded? As far as i understand, when i re-fetch some object from the database - its already loaded relations do not become unloaded, am i correct?
2.2) If i have Item object fetched, and want to access its group, i can just getGroup using item.groupID, applying any eager statements i'll need ("items" and "items.metaGroup"). It should properly load group and its requested relations w/o touching item stuff. Will sqlalchemy properly map this fetched group to item.group, so that when i access item.group it won't fetch anything from the underlying database?
2.3) If i have following things fetched from the database: original item, item.group and some portion of the items from the item.group.items list some of which may have metaGroup loaded, what would be best strategy for completing data structure to the same as eager list above: re-fetch group with ("items", "items.metaGroup") eager load, or check each item from items list individually, and if item or its metaGroup is not loaded - load them? It seems to depend on the situation, because if everything has already been loaded some time ago - issuing such heavy query is pointless. Does sqlalchemy provide a way to track if some object relation is loaded, with the ability to look deeper than just one level?
As an illustration to 2.3 - i can fetch group with ID 83, eagerly fetching "items" and "items.metaGroup". Is there a way to determine from an item (which has groupID of an 83), does it have "group", "group.items" and "group.items.metaGroup" loaded or not, using sqlalchemy tools (in this case all of them should be loaded)? | 7 | 7 | 1.2 | 0 | true | 5,819,858 | 0 | 3,854 | 1 | 0 | 0 | 5,795,492 | To force loading lazy attributes just access them. This the simplest way and it works fine for relations, but is not as efficient for Columns (you will get separate SQL query for each column in the same table). You can get a list of all unloaded properties (both relations and columns) from sqlalchemy.orm.attributes.instance_state(obj).unloaded.
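For example (sketch; session and the Item mapping are taken from the question's code):
from sqlalchemy.orm.attributes import instance_state

item = session.query(Item).get(11184)
state = instance_state(item)
print(state.unloaded)                 # attribute names that have not been loaded yet
if "group" in state.unloaded:
    item.group                        # simply touching the attribute triggers the lazy load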
You don't use deferred columns in your example, but I'll describe them here for completeness. The typical scenario for handling deferred columns is the following:
Decorate selected columns with deferred(). Combine them into one or several groups by using group parameter to deferred().
Use undefer() and undefer_group() options in query when desired.
Accessing a deferred column put in a group will load all the columns in this group.
Unfortunately this doesn't work in reverse: you can combine columns into groups without deferring their loading by default with column_property(Column(…), group=…), but the defer() option won't affect them (it works for Columns only, not column properties, at least in 0.6.7).
To force loading of deferred column properties, session.refresh(obj, attribute_names=…) suggested by Nathan Villaescusa is probably the best solution. The only disadvantage I see is that it expires the attributes first, so you have to ensure there are no already-loaded attributes among those passed as the attribute_names argument (e.g. by using an intersection with state.unloaded).
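Something along these lines (sketch; force_load is just an invented helper name):
from sqlalchemy.orm.attributes import instance_state

def force_load(session, obj, names):
    state = instance_state(obj)
    to_load = set(names) & state.unloaded      # only refresh what is not loaded yet
    if to_load:
        session.refresh(obj, attribute_names=list(to_load))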
Update
1) SQLAlchemy does track loaded objects. That's how the ORM works: there must be only one object in the session for each identity. Its internal cache is weak by default (use weak_identity_map=False to change this), so the object is expunged from the cache as soon as there is no reference to it in your code. SQLAlchemy won't issue a SQL request for query.get(pk) when the object is already in the session. But this works for the get() method only, so query.filter_by(id=pk).first() will do a SQL request and refresh the object in the session with the loaded data.
2) Eager loading of relations will lead to fewer requests, but it's not always faster. You have to check this for your database and data.
2.1) Refetching data from database won't unload objects bound via relations.
2.2) item.group is loaded using the query.get() method, so it won't lead to a SQL request if the object is already in the session.
2.3) Yes, it depends on the situation. In most cases it's best to hope SQLAlchemy will use the right strategy :). For an already loaded relation you can check whether the related objects' relations are loaded via state.unloaded, and so on recursively to any depth. But when a relation is not loaded yet, you can't know whether the related objects and their relations are already loaded: even when a relation is not yet loaded, the related object[s] might already be in the session (just imagine you request the first item, load its group and then request another item that has the same group). For your particular example I see no problem with just checking state.unloaded recursively. | 1 | 0 | 0 | Completing object with its relations and avoiding unnecessary queries in sqlalchemy | 2 | python,sqlalchemy,eager-loading | 0 | 2011-04-26T19:35:00.000
We're currently in the process of implementing a CRM-like solution internally for a professional firm. Due to the nature of the information stored, and the varying values and keys for the information we decided to use a document storage database, as it suited the purposes perfectly (In this case we chose MongoDB).
As part of this CRM solution we wish to store relationships and associations between entities, examples include storing conflicts of interest information, shareholders, trustees etc. Linking all these entities together in the most effective way we determined a central model of "relationship" was necessary. All relationships should have history information attached to them ( commencement and termination dates), as well as varying meta data; for example a shareholder relationship would also contain number of shares held.
As traditional RDBMS solutions didn't suit our former needs, using them in our current situation is not viable. What I'm trying to determine is whether using a graph database is more pertinent in our case, or if in fact just using mongo's built-in relational information is appropriate.
The relationship information is going to be used quite heavily throughout the system. An example of some of the informational queries we wish to perform are:
Get all 'key contact' people of companies who are 'clients' of 'xyz limited'
Get all other 'shareholders' of companies where 'john' is a shareholder
Get all 'Key contact' people of entities who are 'clients' of 'abc limited' and are clients of 'trust us bank limited'
Given this "tree" structure of relationships, is using a graph database (such as Neo4j) more appropriate? | 16 | 1 | 0.049958 | 0 | false | 5,821,550 | 0 | 2,546 | 2 | 0 | 0 | 5,817,182 | stay with mongodb. Two reasons - 1. its better to stay in the same domain if you can to reduce complexity and 2. mongodb is excellent for querying and requires less work than redis, for example. | 1 | 0 | 0 | Using MongoDB as our master database, should I use a separate graph database to implement relationships between entities? | 4 | python,django,mongodb,redis,neo4j | 0 | 2011-04-28T10:28:00.000 |
We're currently in the process of implementing a CRM-like solution internally for a professional firm. Due to the nature of the information stored, and the varying values and keys for the information we decided to use a document storage database, as it suited the purposes perfectly (In this case we chose MongoDB).
As part of this CRM solution we wish to store relationships and associations between entities, examples include storing conflicts of interest information, shareholders, trustees etc. Linking all these entities together in the most effective way we determined a central model of "relationship" was necessary. All relationships should have history information attached to them ( commencement and termination dates), as well as varying meta data; for example a shareholder relationship would also contain number of shares held.
As traditional RDBMS solutions didn't suit our former needs, using them in our current situation is not viable. What I'm trying to determine is whether using a graph database is more pertinent in our case, or if in fact just using mongo's built-in relational information is appropriate.
The relationship information is going to be used quite heavily throughout the system. An example of some of the informational queries we wish to perform are:
Get all 'key contact' people of companies who are 'clients' of 'xyz limited'
Get all other 'shareholders' of companies where 'john' is a shareholder
Get all 'Key contact' people of entities who are 'clients' of 'abc limited' and are clients of 'trust us bank limited'
Given this "tree" structure of relationships, is using a graph database (such as Neo4j) more appropriate? | 16 | 6 | 1 | 0 | false | 5,836,158 | 0 | 2,546 | 2 | 0 | 0 | 5,817,182 | The documents in MongoDB very much resemble nodes in Neo4j, minus the relationships. They both hold key-value properties. If you've already made the choice to go with MongoDB, then you can use Neo4j to store the relationships and then bridge the stores in your application. If you're choosing new technology, you can go with Neo4j for everything, as the nodes can hold property data just as well as documents can.
As for the relationship part, Neo4j is a great fit. You have a graph, not unrelated documents. Using a graph database makes perfect sense here, and the sample queries have graph written all over them.
Honestly though, the best way to find out what works for you is to do a PoC - low cost, high value.
Disclaimer: I work for Neo Technology. | 1 | 0 | 0 | Using MongoDB as our master database, should I use a separate graph database to implement relationships between entities? | 4 | python,django,mongodb,redis,neo4j | 0 | 2011-04-28T10:28:00.000 |
I built a previous program that took client info and stored it in a folder of txt files (impractical much) but now I want to upgrade the program to be more efficient and put the info into a database of some sort...
How can I take the info from the text files and add them to the new database without having to manually do each one. I know this is vague but I need more so the method/logic instead of the exact code, Also if I don't use SQL what is another method for making a db (Not using another commercial Db)
btw the txt files are in simple format (name,city,age) all on separate lines for easy iteration | 2 | 0 | 0 | 0 | false | 15,442,076 | 0 | 5,085 | 1 | 0 | 0 | 5,823,236 | The main reason for DB to have a SQL is to make it separate and generic from the application that you are developing.
To have your own DB built you need a storage mechanism, which could be files on the hard disk, with search options so that you can access data immediately with the keywords that you are interested in. On top of this you have to have a layer that initiates queues, reads them and translates them to the lower-level file read and write functions. You need this queue layer because, let's say, you have 100 applications all trying to read and write the same file at the same time; you can imagine what can happen to the file: access denied, somebody using it, data corrupted, etc. So you need to put all these requests in a queue and let the queue layer translate things for you.
To start with, work out different ways of reading/writing/sorting data in the file, plus a queue layer. From there you can build applications.
The queue layer here is similar to the client that is trying to push the data into the communication port in most of the available databases. | 1 | 0 | 0 | If I want to build a custom database, how could I? | 5 | python,database | 0 | 2011-04-28T18:21:00.000 |
Is it possible to save my in-memory sqlite database to hard disk?
If it is possible, some python code would be awesome.
Thanks in advance.
EDIT:
I succeeded this task by using apsw . It works like a charm. Thanks for your contribution. | 19 | 6 | 1 | 0 | false | 5,832,180 | 0 | 13,227 | 3 | 0 | 0 | 5,831,548 | Yes. When you create the connection to the database, replace :memory: with the path where you want to save the DB.
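In other words (sketch; the file name is arbitrary):
import sqlite3

# con = sqlite3.connect(":memory:")          # what an in-memory setup looks like
con = sqlite3.connect("/path/to/mydata.db")  # same API, but the data persists on disk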
sqlite uses caches for file based DBs, so this shouldn't be (much) slower. | 1 | 0 | 0 | python save in memory sqlite | 7 | python,sqlite | 0 | 2011-04-29T11:43:00.000 |
Is it possible to save my in-memory sqlite database to hard disk?
If it is possible, some python code would be awesome.
Thanks in advance.
EDIT:
I succeeded this task by using apsw . It works like a charm. Thanks for your contribution. | 19 | 13 | 1 | 0 | false | 5,925,061 | 0 | 13,227 | 3 | 0 | 0 | 5,831,548 | (Disclosure: I am the APSW author)
The only safe way to make a binary copy of a database is to use the backup API that is part of SQLite and is exposed by APSW. This does the right thing with ordering, locking and concurrency.
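Roughly like this with APSW (sketch; the destination file name is a placeholder):
import apsw

source = apsw.Connection(":memory:")
# ... populate source ...
destination = apsw.Connection("backup.db")
with destination.backup("main", source, "main") as b:
    while not b.done:
        b.step(100)        # copy 100 pages at a time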
To make a SQL (text) copy of the database, use the APSW shell, which includes a .dump implementation that is very complete. You can use cursor.execute() to turn the SQL back into a database.
On recent platforms you are unlikely to see much of a difference between a memory database and a disk one (assuming you turned journaling off for the disk) as the operating system maintains a file system cache. Older operating systems like Windows XP did have a default configuration of only using 10MB of memory for file cache no matter how much RAM you have. | 1 | 0 | 0 | python save in memory sqlite | 7 | python,sqlite | 0 | 2011-04-29T11:43:00.000 |
Is it possible to save my in-memory sqlite database to hard disk?
If it is possible, some python code would be awesome.
Thanks in advance.
EDIT:
I succeeded this task by using apsw . It works like a charm. Thanks for your contribution. | 19 | 1 | 0.028564 | 0 | false | 5,831,644 | 0 | 13,227 | 3 | 0 | 0 | 5,831,548 | Open a disk based database and just copy everything from one to the other. | 1 | 0 | 0 | python save in memory sqlite | 7 | python,sqlite | 0 | 2011-04-29T11:43:00.000 |
In MySQL, I have two different databases -- let's call them A and B.
Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?
If so, how do I go about it, programatically, in python? (I am using python's MySQLDB to separately interact with each one of the databases). | 29 | 4 | 0.26052 | 0 | false | 5,832,825 | 0 | 32,061 | 2 | 0 | 0 | 5,832,787 | It is very simple - select data from one server, select data from another server and aggregate using Python. If you would like to have SQL query with JOIN - put result from both servers into separate tables in local SQLite database and write SELECT with JOIN. | 1 | 0 | 0 | MySQL -- Joins Between Databases On Different Servers Using Python? | 3 | python,mysql | 0 | 2011-04-29T13:36:00.000 |
In MySQL, I have two different databases -- let's call them A and B.
Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A, to a table that is in database B?
If so, how do I go about it, programatically, in python? (I am using python's MySQLDB to separately interact with each one of the databases). | 29 | 3 | 0.197375 | 0 | false | 5,832,954 | 0 | 32,061 | 2 | 0 | 0 | 5,832,787 | No. It is not possible to do the join as you would like. But you may be able to sort something out by replicating one of the servers to the other for the individual database.
One data set is under the control of one copy of MySQL and the other dataset is under the control of the other copy of MySQL. The query can only be processed by one of the (MySQL) servers.
If you create a copy of the second database on the first server or vice versa (the one that gets the fewest updates is best) you can set up replication to keep the copy up to date. You will then be able to run the query as you want. | 1 | 0 | 0 | MySQL -- Joins Between Databases On Different Servers Using Python? | 3 | python,mysql | 0 | 2011-04-29T13:36:00.000 |
Here's what I want to do.
Develop a Django project on a development server with a development database. Run the south migrations as necessary when I change the model.
Save the SQL from each migration, and apply those to the production server when I'm ready to deploy.
Is such a thing possible with South? (I'd also be curious what others do to get your development database changes on production when working with Django) | 28 | 50 | 1 | 0 | false | 5,897,509 | 1 | 10,745 | 2 | 0 | 0 | 5,833,418 | You can at least inspect the sql generated by doing manage.py migrate --db-dry-run --verbosity=2. This will not do anything to the database and will show all the sql. I would still make a backup though, better safe than sorry. | 1 | 0 | 0 | Django - South - Is There a way to view the SQL it runs? | 5 | python,database,migration,django-south | 0 | 2011-04-29T14:32:00.000 |
Here's what I want to do.
Develop a Django project on a development server with a development database. Run the south migrations as necessary when I change the model.
Save the SQL from each migration, and apply those to the production server when I'm ready to deploy.
Is such a thing possible with South? (I'd also be curious what others do to get your development database changes on production when working with Django) | 28 | 2 | 0.07983 | 0 | false | 5,932,967 | 1 | 10,745 | 2 | 0 | 0 | 5,833,418 | I'd either do what Lutger suggested (and maybe write a log parser to strip out just the SQL), or I'd run my migration against a test database with logging enabled on the test DB.
Of course, if you can run it against the test database, you're just a few steps away from validating the migration. If it passes, run it again against production. | 1 | 0 | 0 | Django - South - Is There a way to view the SQL it runs? | 5 | python,database,migration,django-south | 0 | 2011-04-29T14:32:00.000 |
In my server process, it looks like this:
Main backend processes:
Processes a huge list of files and records them inside MySQL.
On every 500 files done, it writes "Progress Report" to a separate file /var/run/progress.log like this "200/5000 files done"
It is multi-processed with 4 children, each made sure to run on a separate file.
Web server process:
Read the output of /var/run/progress.log every 10 seconds via Ajax and report to progress bar.
When processing a very large list of files (e.g. over 3 GB archive), the processes lock up after about 2 hours of processing.
I can't find what is going on. Does that mean that /var/run/progress.log caused an I/O deadlock? | 1 | 0 | 0 | 0 | false | 12,211,059 | 1 | 773 | 1 | 1 | 0 | 5,848,184 | Quick advice, make sure (like, super sure) that you do close your file.
So ALWAYS use a try/except/finally block for this
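For example (sketch, using the path from the question):
f = open("/var/run/progress.log", "w")
try:
    f.write("200/5000 files done\n")
finally:
    f.close()          # runs even if the write raises

# or let a context manager do the closing:
with open("/var/run/progress.log", "w") as f:
    f.write("200/5000 files done\n")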
Remember that the contents of a finally block will ALWAYS be executed; that will save you a lot of head pain :) | 1 | 0 | 0 | If I open and read the file which is periodically written, can I/O deadlock occur? | 2 | python,linux,performance,io,deadlock | 0 | 2011-05-01T11:55:00.000
I'm working with Tornado and MongoDB and I would like to send a confirmation email to the user when he creates an account in my application.
For the moment, I use a simple XHTML page with a form and I send information to my MongoDB database using Tornado. I would like to have an intermediate step which sends an email to the user before inserting the data into the database.
I would like to know how could I send this email and insert the user account only after the user receives the email and confirms his registration. | 5 | 6 | 1 | 0 | false | 7,483,440 | 1 | 3,734 | 1 | 1 | 0 | 5,862,238 | I wonder why you would handle registration like that. The usual way to handle registration is:
Write the user info to the database, but with an 'inactive' label attached to the user.
Send an email to the user.
If the user confirms the registration, then switch the user to 'active'.
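A rough sketch of that flow with pymongo (collection/field names, the token scheme and the send_confirmation_email helper are all invented for the example, and the method names follow current pymongo):
import uuid
import pymongo

db = pymongo.MongoClient()["myapp"]

def register(email, password_hash):
    token = uuid.uuid4().hex
    db.users.insert_one({"email": email, "password": password_hash,
                         "active": False, "confirm_token": token})
    send_confirmation_email(email, token)   # hypothetical helper that mails a link containing the token

def confirm(token):
    db.users.update_one({"confirm_token": token}, {"$set": {"active": True}})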
If you don't want to write to the database, you can write to a cache (like memcache, redis), then when the user confirms the registration, you can get the user info from the cache and write it to the database. | 1 | 0 | 0 | How can I send a user registration confirmation email using Tornado and MongoDB? | 2 | python,email,tornado | 0 | 2011-05-02T20:41:00.000 |
The identity map and unit of work patterns are part of the reasons sqlalchemy is much more attractive than django.db. However, I am not sure how the identity map would work, or if it works when an application is configured as wsgi and the orm is accessed directly through api calls, instead of a shared service. I would imagine that apache would create a new thread with its own python instance for each request. Each instance would therefore have their own instance of the sqlalchemy classes and not be able to make use of the identity map. Is this correct? | 6 | 0 | 0 | 0 | false | 5,869,588 | 1 | 2,298 | 1 | 0 | 0 | 5,869,514 | So this all depends on how you setup your sqlalchemy connection. Normally what you do is to manage each wsgi request to have it's own threadlocal session. This session will know about all of the goings-on of it, items added/changed/etc. However, each thread is not aware of the others. In this way the loading/preconfiguring of the models and mappings is shared during startup time, however each request can operate independent of the others. | 1 | 0 | 0 | sqlalchemy identity map question | 2 | python,sqlalchemy,identity-map | 0 | 2011-05-03T12:35:00.000 |
I have a data model called Game.
In the Game model, I have two properties called player1 and player2 which are their names.
I want to find a player in Game, but I don't know how to build the query because GQL does not support the OR clause, so I can't use a select * from Game where player1 = 'tom' or player2 = 'tom' statement.
So, how can I solve this problem?
Do I have to modify my data model? | 3 | 0 | 0 | 0 | false | 10,265,451 | 1 | 631 | 1 | 0 | 0 | 5,875,881 | Note that there is no gain of performance in using Drew's schema, because queries in list properties must check for equality against all the elements of the list. | 1 | 0 | 0 | Google app engine gql query two properties with same string | 3 | python,google-app-engine | 0 | 2011-05-03T21:24:00.000 |
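For reference, the list-property schema the answer is commenting on, and the two-query alternative that avoids changing the model, look roughly like this; the extra players field is an assumption for illustration, not something from the question:

    from google.appengine.ext import db

    class Game(db.Model):
        player1 = db.StringProperty()
        player2 = db.StringProperty()
        # denormalized copy of both names, kept in sync on every put()
        players = db.StringListProperty()

    # a single equality filter on a list property matches any element of the list
    games_with_tom = Game.all().filter('players =', 'tom').fetch(100)

    # the alternative without touching the model: run two queries and merge them
    by_p1 = Game.all().filter('player1 =', 'tom').fetch(100)
    by_p2 = Game.all().filter('player2 =', 'tom').fetch(100)
    games = dict((g.key(), g) for g in by_p1 + by_p2).values()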
Can I use Berkeley DB Python classes on a mobile phone directly? I mean, are the DB Python classes and methods ready to be used on any common mobile phone like Nokia or Samsung (Windows Mobile), etc.?
If a phone system supports python language, does that mean that it is easy and straightforward to use Berkeley DB on it... | 0 | 1 | 1.2 | 0 | true | 5,888,966 | 0 | 159 | 1 | 0 | 0 | 5,888,854 | Berkeley DB is a library that needs to be available. What you may have is Python bindings to Berkeley DB. If the library is not present, having Python will not help.
Look for SQLite, which may be present (it is for iPhone) as it has SQL support and its library size is smaller than Berkeley DB, which makes it better suited for mobile OSes. | 1 | 0 | 0 | Can use Berkeley DB in mobile phone | 1 | python,database,mobile,windows-mobile,berkeley-db | 0 | 2011-05-04T19:30:00.000 |
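If the phone's Python port ships the standard library, the built-in sqlite3 module is usually the path of least resistance; a minimal sketch (the table and data are made up, and sqlite3's availability on a given handset is an assumption you would need to verify):

    import sqlite3

    conn = sqlite3.connect('contacts.db')   # creates the file if it does not exist
    conn.execute('CREATE TABLE IF NOT EXISTS contacts (name TEXT, phone TEXT)')
    conn.execute('INSERT INTO contacts VALUES (?, ?)', ('Alice', '555-1234'))
    conn.commit()
    for row in conn.execute('SELECT name, phone FROM contacts'):
        print(row)
    conn.close()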
A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.
This will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.
I have a few related questions:
Is there a reliable library or module that can be used?
What has your experience been using Access and python?
Any tips, tricks, or must avoids I need to know about?
Thanks :) | 7 | 2 | 0.049958 | 0 | false | 5,925,032 | 0 | 3,427 | 4 | 0 | 0 | 5,891,359 | The various answers to the duplicate question suggest that your "primary goal" of creating an MS Access database on a linux server is not attainable.
Of course, such a goal is of itself not worthwhile at all. If you tell us what the users/consumers of the Access db are expected to do with it, maybe we can help you. Possibilities: (1) create a script and a (set of) file(s) which the user downloads and runs to create an Access DB (2) if it's just for casual user examination/manipulation, an Excel file may do. | 1 | 0 | 0 | Building an MS Access database using python | 8 | python,linux,ms-access | 0 | 2011-05-05T00:26:00.000 |
A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.
This will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.
I have a few related questions:
Is there a reliable library or module that can be used?
What has your experience been using Access and python?
Any tips, tricks, or must avoids I need to know about?
Thanks :) | 7 | 0 | 0 | 0 | false | 5,954,299 | 0 | 3,427 | 4 | 0 | 0 | 5,891,359 | Could you create a self-extracting file to send to the Windows user who has Microsoft Access installed?
Include a blank .mdb file.
Dynamically build XML documents with tables, schema and data.
Include an import executable that will take all of the XML docs and import them into the Access .mdb file.
It's an extra step for the user, but you get to rely on their existing drivers, software and desktop. | 1 | 0 | 0 | Building an MS Access database using python | 8 | python,linux,ms-access | 0 | 2011-05-05T00:26:00.000 |
A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.
This will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.
I have a few related questions:
Is there a reliable library or module that can be used?
What has your experience been using Access and python?
Any tips, tricks, or must avoids I need to know about?
Thanks :) | 7 | 2 | 0.049958 | 0 | false | 5,964,496 | 0 | 3,427 | 4 | 0 | 0 | 5,891,359 | If you know this well enough:
Python, it's database modules, and ODBC configuration
then you should know how to do this:
open a database, read some data, insert it in to a different database
If so, then you are very close to your required solution. The trick is, you can open an MDB file as an ODBC datasource. Now: I'm not sure if you can "CREATE TABLES" with ODBC in an MDB file, so let me propose this recipe:
Create an MDB file with name "TARGET.MDB" -- with the necessary tables, forms, reports, etc. (Put some dummy data in and test that it is what the customer would want.)
Set up an ODBC datasource to the file "TARGET.MDB". Test to make sure you can read/write.
Remove all the dummy data -- but leave the table defs intact. Rename the file "TEMPLATE.MDB".
When you need to generate a new MDB file: with Python copy TEMPLATE.MDB to TARGET.MDB.
Open the datasource to write to TARGET.MDB. Create/copy required records.
Close the datasource, rename TARGET.MDB to TODAYS_REPORT.MDB... or whatever makes sense for this particular data export.
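A rough sketch of steps 4-6, assuming a Windows host with pyodbc and the Access ODBC driver available; the driver string, file names and table layout are illustrative only:

    import shutil
    import pyodbc

    # step 4: start from the empty template
    shutil.copyfile('TEMPLATE.MDB', 'TARGET.MDB')

    # step 5: open the copy through ODBC and write the required records
    rows = [('Acme', 100.0), ('Globex', 250.5)]   # stand-in for the real backend data
    conn = pyodbc.connect(r'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=TARGET.MDB')
    cur = conn.cursor()
    for customer, amount in rows:
        cur.execute('INSERT INTO sales (customer, amount) VALUES (?, ?)',
                    (customer, amount))
    conn.commit()
    conn.close()

    # step 6: rename to whatever this particular export should be called
    shutil.move('TARGET.MDB', 'TODAYS_REPORT.MDB')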
Would that work for you?
It would almost certainly be easier to do that all on Windows as the support for ODBC will be most widely available. However, I think in principle you could do this on Linux, provided you find the right ODBC components to access MDB via ODBC. | 1 | 0 | 0 | Building an MS Access database using python | 8 | python,linux,ms-access | 0 | 2011-05-05T00:26:00.000 |
A primary goal of a project I plan to bid on involves creating a Microsoft Access database using python. The main DB backend will be postgres, but the plan is to export an Access image.
This will be a web app that'll take input from the user and go through a black box and output the results as an access db. The web app will be built on a linux server.
I have a few related questions:
Is there a reliable library or module that can be used?
What has your experience been using Access and python?
Any tips, tricks, or must avoids I need to know about?
Thanks :) | 7 | 0 | 0 | 0 | false | 5,972,450 | 0 | 3,427 | 4 | 0 | 0 | 5,891,359 | Well, looks to me like you need a copy of vmware server on the linux box running windows, a web service in the vm to write to access, and communications to it from the main linux box. You aren't going to find a means of creating an access db on Linux. Calling it a requirement isn't going to make it technically possible. | 1 | 0 | 0 | Building an MS Access database using python | 8 | python,linux,ms-access | 0 | 2011-05-05T00:26:00.000 |
I'm working with the BeautifulSoup python library.
I used the urllib2 library to download the HTML code from a page, and then I parsed it with BeautifulSoup.
I want to save some of the HTML content into a MySql table, but I'm having some problems with the encoding. The MySql table is encoded with 'utf-8' charset.
Some examples:
When I download the HTML code and parse it with BeautifulSoup I have something like:
"Ver las \xc3\xbaltimas noticias. Ent\xc3\xa9rate de las noticias de \xc3\xbaltima hora con la mejor cobertura con fotos y videos"
The correct text would be:
"Ver las últimas noticias. Entérate de las noticias de última hora con la mejor cobertura con fotos y videos"
I have tried to encode and decode that text with multiple charsets, but when I insert it into MySql I get something like:
"Ver las últimas noticias y todos los titulares de hoy en Yahoo! Noticias Argentina. Entérate de las noticias de última hora con la mejor cobertura con fotos y videos"
I'm having problems with the encoding, but I don't know how to solve them.
Any suggestion? | 2 | 2 | 0.197375 | 0 | false | 5,903,100 | 1 | 693 | 1 | 0 | 0 | 5,902,914 | BeautifulSoup returns all data as unicode strings. First, triple-check that the unicode strings are correct. If not, then there is some issue with the encoding of the input data. | 1 | 0 | 0 | Wrong encoding with Python BeautifulSoup + MySql | 2 | python,mysql,encoding,urllib2,beautifulsoup | 0 | 2011-05-05T19:12:00.000
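A small sketch of that triple check, plus the connection-level charset setting that often matters here; note that the charset/use_unicode arguments are a common MySQLdb fix added here as an assumption, not something stated in the answer, and the table and credentials are placeholders:

    # -*- coding: utf-8 -*-
    from BeautifulSoup import BeautifulSoup
    import MySQLdb

    html = '<html><head><title>Entérate de las noticias</title></head></html>'
    soup = BeautifulSoup(html)

    # 1. inspect what BeautifulSoup gives you: repr() shows whether it is a proper
    #    unicode string (u'Ent\xe9rate...') or raw UTF-8 bytes ('Ent\xc3\xa9rate...')
    title = soup.title.string
    print(repr(title))

    # 2. make the connection speak UTF-8 and accept unicode objects directly
    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                           db='news', charset='utf8', use_unicode=True)
    cur = conn.cursor()
    cur.execute('INSERT INTO pages (title) VALUES (%s)', (title,))
    conn.commit()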
I'm using PyCrypto to store some files inside a SQLITE database.
I'm using 4 fields :
the name of the file,
the length of the file (in bytes)
the SHA512 hash of the file
the encrypted file (with AES and then base64 to ASCII).
I need all the fields to show some info about the file without decrypting it.
The question is : is it secure to store the data like this ?
For example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?
If it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)
(I use python, but you can give examples in any language) | 3 | 3 | 0.148885 | 0 | false | 5,919,875 | 0 | 484 | 3 | 0 | 0 | 5,919,819 | Data encrypted with AES has the same length as the plain data (give or take some block padding), so giving original length away doesn't harm security. SHA512 is a strong cryptographic hash designed to provide minimal information about the original content, so I don't see a problem here either.
Therefore, I think your scheme is quite safe. Any information "exposed" by it is negligible. Key management will probably be a much bigger concern anyway. | 1 | 0 | 1 | Storing encrypted files inside a database | 4 | python,database,security,encryption | 0 | 2011-05-07T07:56:00.000 |
I'm using PyCrypto to store some files inside a SQLITE database.
I'm using 4 fields :
the name of the file,
the length of the file (in bytes)
the SHA512 hash of the file
the encrypted file (with AES and then base64 to ASCII).
I need all the fields to show some info about the file without decrypting it.
The question is : is it secure to store the data like this ?
For example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?
If it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)
(I use python, but you can give examples in any language) | 3 | 1 | 1.2 | 0 | true | 5,920,346 | 0 | 484 | 3 | 0 | 0 | 5,919,819 | To avoid any problems with the first few bytes being the same, you should use AES in a chained block cipher mode (such as CBC) with a random IV. This ensures that even if the first block (always 16 bytes for AES, regardless of key size) of two plaintext files is exactly the same, the ciphertext will be different.
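A hedged PyCrypto sketch of that idea: CBC mode with a fresh random IV prepended to the ciphertext before the base64 step; the simple padding shown is for illustration only:

    import base64
    from Crypto.Cipher import AES
    from Crypto import Random

    BS = AES.block_size   # always 16 bytes for AES

    def encrypt_blob(key, data):
        iv = Random.new().read(BS)                # fresh random IV for every file
        pad = BS - len(data) % BS
        cipher = AES.new(key, AES.MODE_CBC, iv)
        return base64.b64encode(iv + cipher.encrypt(data + chr(pad) * pad))

    def decrypt_blob(key, blob):
        raw = base64.b64decode(blob)
        iv, body = raw[:BS], raw[BS:]
        cipher = AES.new(key, AES.MODE_CBC, iv)
        padded = cipher.decrypt(body)
        return padded[:-ord(padded[-1])]          # strip the padding again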
If you do that, I see no problem with your approach. | 1 | 0 | 1 | Storing encrypted files inside a database | 4 | python,database,security,encryption | 0 | 2011-05-07T07:56:00.000 |
I'm using PyCrypto to store some files inside a SQLITE database.
I'm using 4 fields :
the name of the file,
the length of the file (in bytes)
the SHA512 hash of the file
the encrypted file (with AES and then base64 to ASCII).
I need all the fields to show some info about the file without decrypting it.
The question is : is it secure to store the data like this ?
For example, the first characters of a ZIP file, or executable file are always the same, and if you already know the hash and the length of the file ... is it possible to decrypt the file, maybe partially ?
If it's not secure, how can I store some information about the file to index the files without decrypting them ? (information like length, hash, name, tags, etc)
(I use python, but you can give examples in any language) | 3 | 0 | 0 | 0 | false | 5,933,351 | 0 | 484 | 3 | 0 | 0 | 5,919,819 | You really need to think about what attacks you want to protect against, and the resources of the possible attackers.
In general, storing some data encrypted is only useful if it satisfies your exact requirements. In particular, if there is a way an attacker could compromise the key at the same time as the data, then the encryption is effectively useless. | 1 | 0 | 1 | Storing encrypted files inside a database | 4 | python,database,security,encryption | 0 | 2011-05-07T07:56:00.000 |
I have large text files upon which all kinds of operations need to be performed, mostly involving row by row validations. The data are generally of a sales / transaction nature, and thus tend to contain a huge amount of redundant information across rows, such as customer names. Iterating and manipulating this data has become such a common task that I'm writing a library in C that I hope to make available as a Python module.
In one test, I found that out of 1.3 million column values, only ~300,000 were unique. Memory overhead is a concern, as our Python based web application could be handling simultaneous requests for large data sets.
My first attempt was to read in the file and insert each column value into a binary search tree. If the value has never been seen before, memory is allocated to store the string, otherwise a pointer to the existing storage for that value is returned. This works well for data sets of ~100,000 rows. Much larger and everything grinds to a halt, and memory consumption skyrockets. I assume the overhead of all those node pointers in the tree isn't helping, and using strcmp for the binary search becomes very painful.
This unsatisfactory performance leads me to believe I should invest in using a hash table instead. This, however, raises another point -- I have no idea ahead of time how many records there are. It could be 10, or ten million. How do I strike the right balance of time / space to prevent resizing my hash table again and again?
What are the best data structure candidates in a situation like this?
Thank you for your time. | 3 | 1 | 1.2 | 0 | true | 5,931,175 | 0 | 501 | 1 | 0 | 0 | 5,931,151 | Hash table resizing isn't a concern unless you have a requirement that each insert into the table should take the same amount of time. As long as you always expand the hash table size by a constant factor (e.g. always increasing the size by 50%), the computational cost of adding an extra element is amortized O(1). This means that n insertion operations (when n is large) will take an amount of time that is proportionate to n - however, the actual time per insertion may vary wildly (in practice, one of the insertions will be very slow while the others will be very fast, but the average of all operations is small). The reason for this is that when you insert an extra element that forces the table to expand from e.g. 1000000 to 1500000 elements, that insert will take a lot of time, but now you've bought yourself 500000 extremely fast future inserts before you need to resize again. In short, I'd definitely go for a hash table. | 1 | 0 | 0 | BST or Hash Table? | 3 | python,c,data-structures,file-io | 0 | 2011-05-08T23:41:00.000 |
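To illustrate the question's value-interning idea with the hash table the answer recommends, here is the same thing expressed in Python (a dict is the hash table; the eventual C library would do the equivalent with its own growing table). The file name and comma-separated layout are assumptions:

    def intern_column_values(lines):
        seen = {}    # hash table: value -> the single shared string object
        rows = []
        for line in lines:
            row = []
            for value in line.rstrip('\n').split(','):
                # store each distinct string once; repeats reuse the same object
                row.append(seen.setdefault(value, value))
            rows.append(row)
        return rows, seen

    rows, uniques = intern_column_values(open('sales.txt'))
    print('%d rows, %d unique values' % (len(rows), len(uniques)))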
When I tried to install python-mysql today, I got a number of compilation errors complaining that /Developer/SDKs/MacOSX10.4u.sdk was not found, like the following:
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-i386-2.6/MySQLdb
running build_ext
building '_mysql' extension
Compiling with an SDK that doesn't seem to exist: /Developer/SDKs/MacOSX10.4u.sdk
Please check your Xcode installation
However, I have already installed the latest Xcode 4.0, which does include the latest GCC and SDK.
I tried to find out where the 10.4u.sdk is specified, but could not find it in the system environment, the program source, or the setuptools source.
I tried to export
export SDK=/Developer/SDKs/MacOSX10.5.sdk
export SDKROOT=/Developer/SDKs/MacOSX10.5.sdk
but still had no luck.
So, does anyone have any idea where this is specified on Mac Snow Leopard?
thx | 0 | 0 | 1.2 | 0 | true | 5,936,425 | 0 | 358 | 1 | 1 | 0 | 5,935,910 | Check your environment for CFLAGS or LDFLAGS. Both of these can include the -isysroot argument that influences the SDK selection. The other place to start at is to look at the output of python2.6-config --cflags --ldflags since (I believe) that this influences the Makefile generation. Make sure to run easy_install with --verbose and see if it yields any additional insight. | 1 | 0 | 0 | mac snow leopard setuptools stick to MacOSX10.4u.sdk when trying to install python-mysql | 1 | python,mysql,macos,osx-snow-leopard,compilation | 0 | 2011-05-09T10:58:00.000 |
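One more place the 10.4u.sdk string tends to hide is in the flags Python itself was built with, which distutils reuses when compiling extensions; a quick check (run it with the same interpreter you use for easy_install):

    from distutils import sysconfig

    for var in ('CFLAGS', 'LDFLAGS', 'BASECFLAGS', 'LDSHARED'):
        print('%s = %s' % (var, sysconfig.get_config_var(var)))
    # if MacOSX10.4u.sdk shows up here, the setting comes from Python's own
    # Makefile (e.g. a python.org build), not from your shell environment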
The Facts:
I am working on a NoteBook with Intel Core 2 Duo 2,26 GHz and 4 Gigabyte of Ram.
It has an Apache server and a MySQL server running.
My Server (I did lshw | less) shows a 64 Bit CPU with 2,65 GHz and 4 Gigabyte Ram, too. It has the XAMPP-Package running on it.
The Database structures (tables, indices, ...) are identical and so is the Python script I am running.
The Problem:
While the script runs in approximately 30 seconds on my MacBook, it takes 11 minutes on the server!
What are the points you would check first for a bottleneck?
The Solution:
There were two indices missing on one of the machines. I added them and voilá: Everything was super! The `EXPLAIN' keyword of MySQL was worth a mint. =) | 2 | 2 | 1.2 | 0 | true | 5,944,478 | 0 | 101 | 2 | 0 | 0 | 5,944,433 | What kind of server? If you're renting a VPS or similar you're contending with other users for CPU time.
What platform is running on both? Tell us more about your situation! | 1 | 0 | 0 | How do I find why a python scripts runs in significantly different running times on different machines? | 2 | python,mysql,runtime | 0 | 2011-05-10T02:07:00.000 |
The Facts:
I am working on a NoteBook with Intel Core 2 Duo 2,26 GHz and 4 Gigabyte of Ram.
It has an Apache server and a MySQL server running.
My Server (I did lshw | less) shows a 64 Bit CPU with 2,65 GHz and 4 Gigabyte Ram, too. It has the XAMPP-Package running on it.
The Database structures (tables, indices, ...) are identical and so is the Python script I am running.
The Problem:
While the script runs in approximately 30 seconds on my MacBook, it takes 11 minutes on the server!
What are the points you would check first for a bottleneck?
The Solution:
There were two indices missing on one of the machines. I added them and voilá: Everything was super! The `EXPLAIN' keyword of MySQL was worth a mint. =) | 2 | 0 | 0 | 0 | false | 5,956,131 | 0 | 101 | 2 | 0 | 0 | 5,944,433 | I would check that the databases in question are of similar scope. You say they're the same structure, but are they sized similarly? If your test case only has 100 entries when production has 100000000, that's one huge potential area for performance problems. | 1 | 0 | 0 | How do I find why a python scripts runs in significantly different running times on different machines? | 2 | python,mysql,runtime | 0 | 2011-05-10T02:07:00.000 |
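Since the asker's own fix came from MySQL's EXPLAIN, here is a small sketch of checking a query plan from Python; MySQLdb plus the table, column and credentials are assumptions for illustration:

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
    cur = conn.cursor()
    cur.execute('EXPLAIN SELECT * FROM orders WHERE customer_id = %s', (42,))
    for row in cur.fetchall():
        # watch the 'key' and 'rows' columns: a NULL key with a huge row estimate
        # on one machine but not the other points at a missing index
        print(row)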
sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 minutes. I would like to do the same operation with 2 million rows while keeping a good time. So I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 minutes, but I got a result of about 7 minutes... twice the first result. From what I have read on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data into a local node (client and server on the same computer), into only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level : ONE
- Replication factor: 1 | 0 | 0 | 0 | 1 | false | 5,950,881 | 0 | 1,686 | 4 | 0 | 0 | 5,950,427 | It's possible you're hitting the Python GIL, but more likely you're doing something wrong.
For instance, putting 2M rows in a single batch would be Doing It Wrong. | 1 | 0 | 0 | Insert performance with Cassandra | 4 | python,multithreading,insert,cassandra | 0 | 2011-05-10T13:02:00.000 |
sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database locally, on one node. Each row has 10 columns and I insert those into only one column family.
With one thread, that operation took around 3 minutes. I would like to do the same operation with 2 million rows while keeping a good time. So I tried with 2 threads to insert 2 million rows, expecting a similar result of around 3-4 minutes, but I got a result of about 7 minutes... twice the first result. From what I have read on different forums, multithreading is recommended to improve performance.
That is why I am asking this question: is it useful to use multithreading to insert data into a local node (client and server on the same computer), into only one column family?
Some information:
- I use pycassa
- I have separated the commitlog directory and the data directory onto different disks
- I use batch insert for each thread
- Consistency Level : ONE
- Replication factor: 1 | 0 | 0 | 0 | 1 | false | 5,956,519 | 0 | 1,686 | 4 | 0 | 0 | 5,950,427 | Try running multiple clients in multiple processes, NOT threads.
Then experiment with different insert sizes.
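A hedged sketch of that advice using the multiprocessing module with pycassa; the keyspace, column family, batch size and worker count are illustrative, and the exact pycassa batch API may differ slightly between versions:

    import multiprocessing
    import pycassa

    ROWS_PER_WORKER = 250000    # 4 workers x 250k rows = 1M rows total

    def insert_range(worker_id):
        # each worker process gets its own connection pool, so no GIL contention
        pool = pycassa.ConnectionPool('MyKeyspace')
        cf = pycassa.ColumnFamily(pool, 'MyColumnFamily')
        batch = cf.batch(queue_size=200)    # flush every 200 rows, not millions at once
        start = worker_id * ROWS_PER_WORKER
        for i in xrange(start, start + ROWS_PER_WORKER):
            columns = dict(('col%d' % c, 'value') for c in range(10))
            batch.insert('row%d' % i, columns)
        batch.send()
        pool.dispose()

    if __name__ == '__main__':
        workers = [multiprocessing.Process(target=insert_range, args=(w,))
                   for w in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()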
1M inserts in 3 mins is about 5500 inserts/sec, which is pretty good for a single local client. On a multi-core machine you should be able to get several times this amount provided that you use multiple clients, probably inserting small batches of rows, or individual rows. | 1 | 0 | 0 | Insert performance with Cassandra | 4 | python,multithreading,insert,cassandra | 0 | 2011-05-10T13:02:00.000 |