Dataset columns:
Question: string (lengths 25 to 7.47k)
Q_Score: int64 (0 to 1.24k)
Users Score: int64 (-10 to 494)
Score: float64 (-1 to 1.2)
Data Science and Machine Learning: int64 (0 to 1)
is_accepted: bool (2 classes)
A_Id: int64 (39.3k to 72.5M)
Web Development: int64 (0 to 1)
ViewCount: int64 (15 to 1.37M)
Available Count: int64 (1 to 9)
System Administration and DevOps: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
Q_Id: int64 (39.1k to 48M)
Answer: string (lengths 16 to 5.07k)
Database and SQL: int64 (1 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
Title: string (lengths 15 to 148)
AnswerCount: int64 (1 to 32)
Tags: string (lengths 6 to 90)
Other: int64 (0 to 1)
CreationDate: string (length 23)
Apologies for my English in advance. I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database running locally on one node. Each row has 10 columns, and I insert them into only one column family. With one thread, that operation takes around 3 minutes. I would like to do the same with 2 million rows while keeping a good time, so I tried inserting 2 million rows with 2 threads, expecting a similar result of around 3-4 minutes. But I got a result of about 7 minutes, twice the first result. From what I have read on different forums, multithreading is recommended to improve performance. That is why I am asking this question: is it useful to use multithreading to insert data into a local node (client and server on the same computer), into only one column family? Some information: I use pycassa; I have separated the commitlog directory and the data directory onto different disks; I use batch inserts in each thread; consistency level: ONE; replication factor: 1.
0
0
0
1
false
6,078,703
0
1,686
4
0
0
5,950,427
You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case.
1
0
0
Insert performance with Cassandra
4
python,multithreading,insert,cassandra
0
2011-05-10T13:02:00.000
Apologies for my English in advance. I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database running locally on one node. Each row has 10 columns, and I insert them into only one column family. With one thread, that operation takes around 3 minutes. I would like to do the same with 2 million rows while keeping a good time, so I tried inserting 2 million rows with 2 threads, expecting a similar result of around 3-4 minutes. But I got a result of about 7 minutes, twice the first result. From what I have read on different forums, multithreading is recommended to improve performance. That is why I am asking this question: is it useful to use multithreading to insert data into a local node (client and server on the same computer), into only one column family? Some information: I use pycassa; I have separated the commitlog directory and the data directory onto different disks; I use batch inserts in each thread; consistency level: ONE; replication factor: 1.
0
0
0
1
false
8,491,215
0
1,686
4
0
0
5,950,427
The time taken doubled because you inserted twice as much data. Is it possible that you are I/O bound?
1
0
0
Insert performance with Cassandra
4
python,multithreading,insert,cassandra
0
2011-05-10T13:02:00.000
I'm using this JavaScript library (http://valums.com/ajax-upload/) to upload a file to a Tornado web server, but I don't know how to get the file content. The JavaScript library uploads using XHR, so I assume I have to read the raw POST data to get the file content, but I don't know how to do that with Tornado. Their documentation doesn't help with this, as usual :( In PHP there is something like this: $input = fopen("php://input", "r"); so what's the equivalent in Tornado?
3
2
1.2
0
true
5,989,216
1
1,836
1
1
0
5,983,032
I got the answer. I need to use self.request.body to get the raw POST data. I also need to pass in the correct _xsrf token, otherwise Tornado will raise a 403 error. So that's about it. (See the sketch after this record.)
1
0
0
asynchronous file upload with ajaxupload to a tornado web server
1
python,file-upload,tornado,ajax-upload
0
2011-05-12T19:00:00.000
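A minimal sketch of the self.request.body approach from the answer above; the handler class, route, and output file name are hypothetical, and if xsrf_cookies is enabled the client must also send the _xsrf token.

import tornado.ioloop
import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        raw = self.request.body            # raw POST body sent by the XHR upload
        with open("uploaded_file", "wb") as f:
            f.write(raw)
        self.write("ok")

application = tornado.web.Application([(r"/upload", UploadHandler)])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()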
Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL?
15
-2
-0.049958
0
false
16,633,853
0
35,088
2
0
0
5,987,371
Since this question is now over two years old, I guess this is more for future reference :) What I like to do is max('', mightBeNoneVar) or max(0, mightBeNoneVar), depending on the context. A more elaborate example: print max('', col1).ljust(width1) + ' ==> ' + max('', col2).ljust(width2)
1
0
1
Python equivalent for MySQL's IFNULL
8
python
0
2011-05-13T04:54:00.000
Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL?
15
1
0.024995
0
false
50,119,942
0
35,088
2
0
0
5,987,371
nvl(v1, v2) returns v1 if it is not null, otherwise it returns v2: nvl = lambda a, b: a or b (see the sketch after this record).
1
0
1
Python equivalent for MySQL's IFNULL
8
python
0
2011-05-13T04:54:00.000
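A small sketch expanding on the two answers above; note that the a or b idiom also replaces falsy values such as 0 or '', so an explicit None check is the stricter equivalent of IFNULL. The variable names are illustrative.

nvl = lambda a, b: a or b                 # replaces None, but also 0, '' and []

def ifnull(value, default):
    # strict equivalent of MySQL's IFNULL: only None is replaced
    return default if value is None else value

print(nvl(None, "fallback"))      # fallback
print(nvl(0, "fallback"))         # fallback (may be surprising)
print(ifnull(0, "fallback"))      # 0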
I have been running Davical on a CentOS 5 box for a while now with no problems. Yesterday, however, I installed the Trac bug tracker, which eventually forced me to run a full update via Yum, which updated a whole heap of packages. I can't seem to work out exactly what the issue is, and time spent googling didn't seem to bring about much in the way of ideas. Has anyone had the same problem, or could anyone indicate a way to better identify what's going on? Many thanks! Full error readout: [Wed May 11 17:52:53 2011] [error] davical: LOG: always: Query: QF: SQL error "58P01" - ERROR: could not load library "/usr/lib/pgsql/plpgsql.so": /usr/lib/pgsql/plpgsql.so: undefined symbol: PinPortal" Checking to see if file exists [@shogun ~]# tree -a /usr/lib/pgsql/ | grep "plpgsql" |-- plpgsql.so Version of pg installed [@shogun ~]# pg_config | grep "VERSION" VERSION = PostgreSQL 8.1.23 [@shogun postgresql-8.3.8]# yum list installed | grep 'post' postgresql.i386 8.1.23-1.el5_6.1 installed postgresql-devel.i386 8.1.23-1.el5_6.1 installed postgresql-libs.i386 8.1.23-1.el5_6.1 installed postgresql-python.i386 8.1.23-1.el5_6.1 installed postgresql-server.i386 8.1.23-1.el5_6.1 installed
6
8
1.2
0
true
5,998,204
0
5,259
1
0
0
5,994,297
I have had this problem before, although with 8.4 instead of 8.1, but the issue is the same, I believe. A recent minor upgrade of all supported maintenance branches of PostgreSQL introduced the function PinPortal in the server, and made PL/pgSQL use it. So if you use a plpgsql.so from the newer version with a server from the older version, you will get this error. In your case, the change happened between 8.1.21 and 8.1.22. And even if all your installed packages show the newer version, you need to restart the server to make sure you actually use the newer version. The problem is, as soon as you install the newer PL/pgSQL, it will get used by the next session that is started, but the newer server binary won't get used until you restart the server. So if your upgrade process doesn't restart the server immediately, you will invariably get these errors as soon as something tries to use PL/pgSQL. If this actually turns out to be the problem, you might want to review why your server wasn't restarted.
1
0
0
Could not load library "/usr/lib/pgsql/plpgsql.so" & undefined symbol: PinPortal
1
python,postgresql,centos,trac
0
2011-05-13T15:38:00.000
What is the recommended way to interact between Python and MySQL? Currently I am using MySQLdb, and I have heard of oursql. But I wonder whether there is a more appropriate way to manage this.
3
0
0
0
false
40,519,201
0
234
1
0
0
6,002,147
I personally use pymysql, but I have heard a lot of people use MySQLdb. Both are very similar in the way they behave and could easily be interchanged. Personally (working in Python/MySQL QA), I have yet to hear of, let alone work with, OurSQL. With that said, it honestly depends on what you want to accomplish. Python has a lot of connectors and plenty of bells and whistles to complete almost anything; as such, it is important to always look at how popular the component is, as well as how frequently it gets updated. (A pymysql sketch follows this record.)
1
0
0
Is there a recommended way for interaction between python and MySQL?
2
python,mysql,interaction
0
2011-05-14T13:32:00.000
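A minimal connection sketch with pymysql, as mentioned in the answer above; the host, credentials, and database name are placeholders.

import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", database="mydb")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())          # e.g. ('5.5.62',)
finally:
    conn.close()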
I have set up an Apache server with mod_wsgi, python_sql, MySQL and Django. Everything works fine, except that if I make some code changes, they are not reflected immediately, though I thought everything is compiled on the fly when it comes to Python/mod_wsgi. I have to shut down the server and start it again to see the changes. Can someone point me to how hot deployment can be achieved with the above setup? Thanks, Neeraj
4
3
0.291313
0
false
6,007,285
1
768
1
0
0
6,006,666
Just touching the WSGI file has always worked for me (see the sketch after this record).
1
0
0
Hot deployment using mod_wsgi,python and django on Apache
2
python,django,apache2,mod-wsgi,hotdeploy
0
2011-05-15T05:26:00.000
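A tiny sketch of the "touch the WSGI file" trick from the answer above; the path is a placeholder, and this only triggers an automatic reload when mod_wsgi runs the application in daemon mode.

import os

# updating the script file's modification time makes mod_wsgi (daemon mode)
# reload the Django application on the next request
os.utime("/srv/www/mysite/apache/django.wsgi", None)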
I need to insert rows into PG (PostgreSQL); one of the fields is a date and time with a timestamp. This is the time of the incident, so I cannot use the current_timestamp function of Postgres at the time of insertion. How can I insert the time and date that I collected earlier into a PG row in the same format it would have had if it had been created by current_timestamp at that point in time?
64
-4
-1
0
false
18,624,640
0
190,399
1
0
0
6,018,214
Just use now() or CURRENT_TIMESTAMP. I prefer the latter, as I like not having the additional parentheses, but that's just personal preference. (A sketch follows this record.)
1
0
1
How to insert current_timestamp into Postgres via python
7
python,postgresql,datetime
0
2011-05-16T13:34:00.000
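For the asker's case of a timestamp collected before the insert, here is a sketch assuming the psycopg2 driver, which adapts Python datetime objects to PostgreSQL timestamps; the table and column names are hypothetical.

from datetime import datetime
import psycopg2

incident_time = datetime(2011, 5, 16, 13, 34, 0)   # collected earlier

conn = psycopg2.connect("dbname=mydb user=user password=secret")
cur = conn.cursor()
cur.execute("INSERT INTO incidents (occurred_at) VALUES (%s)", (incident_time,))
conn.commit()
conn.close()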
I'm trying to run a python script using python 2.6.4. The hosting company has 2.4 installed so I compiled my own 2.6.4 on a similar server and then moved the files over into ~/opt/python. that part seems to be working fine. anyhow, when I run the script below, I am getting ImportError: No module named _sqlite3 and I'm not sure what to do to fix this. Most online threads mention that sqlite / sqlite3 is included in python 2.6 - so I'm not sure why this isn't working. -jailshell-3.2$ ./pyDropboxValues.py Traceback (most recent call last): File "./pyDropboxValues.py", line 21, in import sqlite3 File "/home/myAccount/opt/lib/python2.6/sqlite3/__init__.py", line 24, in from dbapi2 import * File "/home/myAccount/opt/lib/python2.6/sqlite3/dbapi2.py", line 27, in from _sqlite3 import * ImportError: No module named _sqlite3 I think I have everything set up right as far as the directory structure. -jailshell-3.2$ find `pwd` -type d /home/myAccount/opt /home/myAccount/opt/bin /home/myAccount/opt/include /home/myAccount/opt/include/python2.6 /home/myAccount/opt/lib /home/myAccount/opt/lib/python2.6 /home/myAccount/opt/lib/python2.6/distutils /home/myAccount/opt/lib/python2.6/distutils/command /home/myAccount/opt/lib/python2.6/distutils/tests /home/myAccount/opt/lib/python2.6/compiler /home/myAccount/opt/lib/python2.6/test /home/myAccount/opt/lib/python2.6/test/decimaltestdata /home/myAccount/opt/lib/python2.6/config /home/myAccount/opt/lib/python2.6/json /home/myAccount/opt/lib/python2.6/json/tests /home/myAccount/opt/lib/python2.6/email /home/myAccount/opt/lib/python2.6/email/test /home/myAccount/opt/lib/python2.6/email/test/data /home/myAccount/opt/lib/python2.6/email/mime /home/myAccount/opt/lib/python2.6/lib2to3 /home/myAccount/opt/lib/python2.6/lib2to3/pgen2 /home/myAccount/opt/lib/python2.6/lib2to3/fixes /home/myAccount/opt/lib/python2.6/lib2to3/tests /home/myAccount/opt/lib/python2.6/xml /home/myAccount/opt/lib/python2.6/xml/parsers /home/myAccount/opt/lib/python2.6/xml/sax /home/myAccount/opt/lib/python2.6/xml/etree /home/myAccount/opt/lib/python2.6/xml/dom /home/myAccount/opt/lib/python2.6/site-packages /home/myAccount/opt/lib/python2.6/logging /home/myAccount/opt/lib/python2.6/lib-dynload /home/myAccount/opt/lib/python2.6/sqlite3 /home/myAccount/opt/lib/python2.6/sqlite3/test /home/myAccount/opt/lib/python2.6/encodings /home/myAccount/opt/lib/python2.6/wsgiref /home/myAccount/opt/lib/python2.6/multiprocessing /home/myAccount/opt/lib/python2.6/multiprocessing/dummy /home/myAccount/opt/lib/python2.6/curses /home/myAccount/opt/lib/python2.6/bsddb /home/myAccount/opt/lib/python2.6/bsddb/test /home/myAccount/opt/lib/python2.6/idlelib /home/myAccount/opt/lib/python2.6/idlelib/Icons /home/myAccount/opt/lib/python2.6/tmp /home/myAccount/opt/lib/python2.6/lib-old /home/myAccount/opt/lib/python2.6/lib-tk /home/myAccount/opt/lib/python2.6/hotshot /home/myAccount/opt/lib/python2.6/plat-linux2 /home/myAccount/opt/lib/python2.6/ctypes /home/myAccount/opt/lib/python2.6/ctypes/test /home/myAccount/opt/lib/python2.6/ctypes/macholib /home/myAccount/opt/share /home/myAccount/opt/share/man /home/myAccount/opt/share/man/man1 And finally the contents of the sqlite3 directory: -jailshell-3.2$ find `pwd` /home/myAccount/opt/lib/python2.6/sqlite3 /home/myAccount/opt/lib/python2.6/sqlite3/__init__.pyo /home/myAccount/opt/lib/python2.6/sqlite3/dump.pyc /home/myAccount/opt/lib/python2.6/sqlite3/__init__.pyc /home/myAccount/opt/lib/python2.6/sqlite3/dbapi2.pyo /home/myAccount/opt/lib/python2.6/sqlite3/dbapi2.pyc 
/home/myAccount/opt/lib/python2.6/sqlite3/dbapi2.py /home/myAccount/opt/lib/python2.6/sqlite3/dump.pyo /home/myAccount/opt/lib/python2.6/sqlite3/__init__.py /home/myAccount/opt/lib/python2.6/sqlite3/dump.py I feel like I need to add something into the sqlite3 directory - maybe sqlite3.so? But I don't know where to get that. What am I doing wrong here? Please remember that I'm using a shared host so that means installing / compiling on another server and then copying the files over. Thanks! :) Update Just wanted to confirm that the answer from @samplebias did work out very well. I needed to have the dev package installed on the machine I was compiling from to get it to add in sqlite3.so and related files. Also found the link in the answer very helpful. Thanks @samplebias !
2
0
0
0
false
6,026,507
0
1,962
1
0
0
6,026,485
In general, the first thing to do is to ask your host. It seems a bit odd that SQLite is not installed (or not installed properly), so they'll likely fix it quite fast if you ask them.
1
0
0
How can I get sqlite working on a shared hosting server?
4
python,linux,unix,sqlite
0
2011-05-17T05:10:00.000
I need to have some references in my table and a bunch of "deferrable initially deferred" modifiers, but I can't find a way to make this work in the default generated Django code. Is it safe to create the table manually and still use Django models?
2
2
0.132549
0
false
6,053,509
1
112
1
0
0
6,053,426
Yes. I don't see why not, but that would be most unconventional and breaking convention usually leads to complications down the track. Describe the problem you think it will solve and perhaps someone can offer a more conventional solution.
1
0
0
Is it safe to write your own table creation SQL for use with Django, when the generated tables are not enough?
3
python,sql,django,postgresql
0
2011-05-19T03:30:00.000
I am creating software with a user + password. After authentication, the user can access some semi-public services, but also encrypt some files that only the user can access. The username must be stored as is, without modification, if possible. After auth, the user and the password are kept in memory as long as the software is running (I don't know if that's okay either). The question is: how should I store this user + password combination in a potentially insecure database? I don't really understand what I should expose. Let's say I create an enhanced key like this: salt = random 32-character string (is that okay?) key = hash(usr password + salt) for 1 to 65000 do key = hash(key + usr password + salt) Should I store the [plaintext user], [the enhanced key] and [the salt] in the database? Also, what should I use to encrypt (with AES or Blowfish) some files using a new password every time? Should I generate a new salt and create a new enhanced key using (the password stored in the program's memory + the salt)? And in this case, if I store the encrypted file in the database, I should probably only store the salt. The database is the same one where I store the user + password combination. The file can only be decrypted if someone can generate the key, but he doesn't know the password. Right? I use Python with PyCrypto, but it's not really important; a general example is just fine. I have read a few similar questions, but they are not very explicit. Thank you very very much!
10
2
0.197375
0
false
6,058,858
0
3,307
1
0
0
6,058,019
If you use a different salt for each user, you must store it somewhere (ideally in a different place). If you use the same salt for every user, you can hardcode it in your app, but that can be considered less secure. If you don't keep the salt, you will not be able to match a given password against the one in your database. The aim of the salt is to make brute-force or dictionary attacks a lot harder. That is why it is more secure if stored separately, to avoid someone having both the hashed passwords and the corresponding salts. (A sketch follows this record.)
1
0
0
Storing user and password in a database
2
python,security,passwords
0
2011-05-19T11:37:00.000
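A small sketch of per-user salting as discussed above, using the standard library's PBKDF2 (hashlib.pbkdf2_hmac, available in Python 2.7.8+/3.4+) instead of a hand-rolled hash loop; the salt length and iteration count are illustrative choices.

import os
import hashlib

def make_key(password, salt=None, iterations=65000):
    if salt is None:
        salt = os.urandom(16)                     # fresh random salt per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations)
    return salt, key                              # store both next to the username

salt, key = make_key("user password")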
The sqlite docs says that using the pragma default_cache_size is deprecated. I looked, but I couldn't see any explanation for why. Is there a reason for this? I'm working on an embedded python program, and we open and close connections a lot. Is the only alternative to use the pragma cache_size on every database connection?
5
2
1.2
0
true
6,175,144
0
890
1
0
0
6,062,999
As Firefox uses SQLite heavily, I wouldn't be surprised if this request came from their camp, to prevent any kind of third-party interference (e.g. "trashing" with large/small/invalid/obscure values) by this kind of pragma propagating through all database connections. Hence, my strong belief is that there is no alternative and that you really need to set cache_size for each database connection (see the sketch after this record).
1
0
0
Alternative to deprecated sqlite pragma "default_cache_size"
1
python,sqlite
0
2011-05-19T18:10:00.000
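A tiny sketch of setting the cache size on each connection, as the answer above suggests; the database path and cache size are placeholders.

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("PRAGMA cache_size = 2000")   # must be reissued on every new connection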
I have a website where people post comments, pictures, and other content. I want to add a feature so users can like/unlike these items. I use a database to store all the content. There are a few approaches I am looking at: Method 1: Add a 'like_count' column to the table, and increment it whenever someone likes an item. Add a 'user_likes' table to keep track of everything the user has liked. Pros: Simple to implement, minimal queries required. Cons: The item needs to be refreshed with each change in like count. I have a whole list of items cached, which will break. Method 2: Create a new table 'like_summary' and store the total likes of each item in that table. Add a 'user_likes' table to keep track of everything the user has liked. Cache the like_summary data in memcache, and only flush it if the value changes. Pros: Less load on the main items table; it can be cached without worrying. Cons: Too many hits on memcache (a page shows 20 items, which need to be loaded from memcache), might be slow. Any suggestions?
0
1
0.066568
0
false
6,067,968
1
104
2
0
0
6,067,919
You will actually only need the user_likes table; the like_count is calculated from that table. You only need to store the count if you need to gain performance, but since you're using memcached, it may be a good idea not to store the aggregated value in the database, and to keep it only in memcached (see the sketch after this record).
1
0
0
What would be a good strategy to implement functionality similar to facebook 'likes'?
3
python,architecture
1
2011-05-20T05:46:00.000
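A rough sketch of the approach in the answer above: derive the like count from the user_likes table and cache the aggregate in memcached rather than storing it in the database. This assumes the python-memcached package and a DB-API connection (e.g. MySQLdb); the key format and table/column names are hypothetical.

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def like_count(db_conn, item_id):
    key = "like_count:%d" % item_id
    count = mc.get(key)
    if count is None:
        cur = db_conn.cursor()
        cur.execute("SELECT COUNT(*) FROM user_likes WHERE item_id = %s", (item_id,))
        count = cur.fetchone()[0]
        mc.set(key, count)            # flush/update this key whenever a like changes
    return count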
I have a website where people post comments, pictures, and other content. I want to add a feature so users can like/unlike these items. I use a database to store all the content. There are a few approaches I am looking at: Method 1: Add a 'like_count' column to the table, and increment it whenever someone likes an item. Add a 'user_likes' table to keep track of everything the user has liked. Pros: Simple to implement, minimal queries required. Cons: The item needs to be refreshed with each change in like count. I have a whole list of items cached, which will break. Method 2: Create a new table 'like_summary' and store the total likes of each item in that table. Add a 'user_likes' table to keep track of everything the user has liked. Cache the like_summary data in memcache, and only flush it if the value changes. Pros: Less load on the main items table; it can be cached without worrying. Cons: Too many hits on memcache (a page shows 20 items, which need to be loaded from memcache), might be slow. Any suggestions?
0
1
0.066568
0
false
6,067,953
1
104
2
0
0
6,067,919
One relation table that does a many-to-many mapping between user and item should do the trick.
1
0
0
What would be a good strategy to implement functionality similar to facebook 'likes'?
3
python,architecture
1
2011-05-20T05:46:00.000
I'm very new to Python and I have Python 3.2 installed on a Win 7 32-bit workstation. I'm trying to connect to MS SQL Server 2005 using adodbapi-2.4.2.2, the latest update to that package. The code/connection string looks like this: conn = adodbapi.connect('Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=XXX;Data Source=123.456.789'); From adodbapi I continually get this error (this is the entire error message from the Wing IDE shell): Traceback (most recent call last): File "D:\Program Files\Wing IDE 4.0\src\debug\tserver_sandbox.py", line 2, in if __name__ == '__main__': File "D:\Python32\Lib\site-packages\adodbapi\adodbapi.py", line 298, in connect raise InterfaceError #Probably COM Error adodbapi.adodbapi.InterfaceError: I can trace through the code and see the exception as it happens. I also tried using connection strings with the OLEDB provider and integrated Windows security, with the same results. All of these connection strings work fine from a UDL file on my workstation, and from SSMS, but they fail with the same error in adodbapi. How do I fix this?
6
2
0.132549
0
false
21,480,454
0
8,199
1
0
0
6,086,341
I had the same problem, and I tracked it down to a failure to load win32com.pyd because of some system DLLs that were not in the DLL load path, such as msvcp100.dll. I solved the problem by copying a lot of these DLLs (probably too many) into C:\WinPython-64bit-3.3.3.2\python-3.3.3.amd64\Lib\site-packages\win32
1
0
0
Connecting to SQLServer 2005 with adodbapi
3
python,database,sql-server-2005,adodbapi
0
2011-05-22T06:03:00.000
My company has decided to implement a datamart using [Greenplum], and I have the task of figuring out how to go about it. A ballpark figure for the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB. I would like to know: 1) Is the Greenplum DB the same as vanilla [PostgreSQL]? (I've worked on Postgres AS 8.3.) 2) Are there any (free) tools available for this task (extract and import)? 3) I have some knowledge of Python. Is it feasible, even easy, to do this in a reasonable amount of time? I have no idea how to do this, so any advice, tips and suggestions will be hugely welcome.
0
0
0
0
false
7,550,497
0
2,294
2
0
0
6,110,384
Many of Greenplum's utilities are written in Python, and the current DBMS distribution comes with Python 2.6.2 installed, including the pygresql module, which you can use to work inside the GPDB. For data transfer into Greenplum, I've written Python scripts that connect to the source (Oracle) DB using cx_Oracle and then dump that output either to flat files or named pipes. gpfdist can read from either sort of source and load the data into the system. (A sketch of the dump step follows this record.)
1
0
0
Transferring data from a DB2 DB to a greenplum DB
4
python,postgresql,db2,datamart,greenplum
0
2011-05-24T12:28:00.000
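A sketch of the dump step described in the answer above: read rows from the source database with cx_Oracle and write them to a flat file that gpfdist can serve. The connection string, query, and file path are placeholders.

import csv
import cx_Oracle

conn = cx_Oracle.connect("user/password@source-db")
cur = conn.cursor()
cur.execute("SELECT * FROM source_table")

with open("/data/export/source_table.csv", "w") as out:
    writer = csv.writer(out)
    while True:
        rows = cur.fetchmany(10000)     # stream in batches rather than all at once
        if not rows:
            break
        writer.writerows(rows)

conn.close()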
My company has decided to implement a datamart using [Greenplum], and I have the task of figuring out how to go about it. A ballpark figure for the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB. I would like to know: 1) Is the Greenplum DB the same as vanilla [PostgreSQL]? (I've worked on Postgres AS 8.3.) 2) Are there any (free) tools available for this task (extract and import)? 3) I have some knowledge of Python. Is it feasible, even easy, to do this in a reasonable amount of time? I have no idea how to do this, so any advice, tips and suggestions will be hugely welcome.
0
0
0
0
false
23,668,974
0
2,294
2
0
0
6,110,384
Generally, it is really slow if you use SQL INSERT or MERGE to import big bulk data. The recommended way is to use external tables that you define to use file-based, web-based or gpfdist-protocol hosted files. Greenplum also has a utility named gpload, which can be used to define your transfer jobs, i.e. source, output and mode (insert, update or merge).
1
0
0
Transferring data from a DB2 DB to a greenplum DB
4
python,postgresql,db2,datamart,greenplum
0
2011-05-24T12:28:00.000
I have a lot of objects which form a network by keeping references to other objects. All objects (nodes) have a dict which holds their properties. Now I'm looking for a fast way to store these objects (in a file?) and reload all of them into memory later (I don't need random access). The data is about 300 MB in memory, which takes 40 s to load from my SQL format, but I now want to cache it to have faster access. Which method would you suggest? (My pickle attempt failed with recursion errors despite trying to mess around with __getstate__ :( maybe there is something fast anyway? :))
2
0
0
0
false
6,130,718
0
252
1
0
0
6,128,458
Perhaps you could set up some layer of indirection where the objects are actually held within, say, another dictionary, and an object referencing another object stores the key of the object being referenced and then accesses it through the dictionary. If the object for the stored key is not in the dictionary, it is loaded into the dictionary from your SQL database, and when it no longer seems to be needed, the object can be removed from the dictionary/memory (possibly with an update of its state to the database before the in-memory version is removed). This way you don't have to load all the data from your database at once, and you can keep a number of the objects cached in memory for quicker access to those. The downside is the additional overhead required for each access to the main dict. (A rough sketch follows this record.)
1
0
1
Store and load a large number linked objects in Python
3
python,persistent-storage
0
2011-05-25T17:31:00.000
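A rough sketch of the indirection layer described in the answer above: objects refer to each other by key, and a registry loads an object from the database on first access. The class name and the load_from_db callable are hypothetical.

class NodeRegistry(object):
    def __init__(self, load_from_db):
        self._cache = {}
        self._load = load_from_db        # callable: key -> object, backed by SQL

    def get(self, key):
        if key not in self._cache:
            self._cache[key] = self._load(key)   # fetched on demand
        return self._cache[key]

    def evict(self, key):
        # optionally persist the object's state back to the database first
        self._cache.pop(key, None)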
I have written a very small web-based survey using CGI with Python (this is my first web app). The questions are extracted from a MySQL database table and the results are supposed to be saved in the same database. I created the database along with its table locally. My app works fine on my local computer (localhost). To create the DB and table and do other transactions with MySQL, I had to import MySQLdb in my code. Now I want to upload everything to my personal hosting. As far as I know, my hosting supports Python and CGI and has a MySQL database. And I know that I have to change some parameters in the connection string in my code so I can connect to the database, but I have two problems: I remember that I installed MySQLdb as an extra for my Python, and I am using it in my code; how would I know whether my hosting's Python interpreter has this installed, do I even need it, or do I have to use another library? How do I upload my database onto my hosting? Thanks
0
0
0
0
false
6,139,936
0
513
1
0
0
6,139,777
You can write a simple script that does import MySQLdb and catches any errors, to see whether the required package is installed (see the sketch after this record). If this fails, you can ask the hosting provider to install the package, typically via a ticket. The hosting providers typically also provide URLs to connect to the MySQL tables they provision for you, and some tools like phpMyAdmin to load database dumps into the hosted MySQL instance.
1
0
0
Uploading a mysql database to a webserver supporting python
3
python,mysql,database-connection,cpanel
0
2011-05-26T13:59:00.000
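A minimal sketch of the availability check suggested in the answer above.

try:
    import MySQLdb
    print("MySQLdb is available, version %s" % MySQLdb.__version__)
except ImportError:
    print("MySQLdb is not installed on this host")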
I'm building a real-time service, so my real-time data needs to be stored in memcached before being fetched into the DB (to avoid reading/writing the MySQL DB too much). I want to flush data to the MySQL DB when certain events occur, e.g. before data expires or when there is least-recently-used (LRU) data. What is the solution for my problem? My system uses memcached, MySQL, Django and python-memcache. Thanks
1
1
0.197375
0
false
6,146,042
1
329
1
0
0
6,143,748
Memcached is not a persistent store, so if you need your data to be durable at all then you will need to store them in a persistent store immediately. So you need to put them somewhere - possibly a MySQL table - as soon as the data arrive, and make sure they are fsync'd to disc. Storing them in memcached as well only speeds up access, so it is a nice to have. Memcached can discard any objects at any time for any reason, not just when they expire.
1
0
0
Can auto transfer data from memcached to mysql DB?
1
python,mysql,django,memcached
0
2011-05-26T19:06:00.000
Can someone please point me in the right direction on how I can connect to MS SQL Server with Python? What I want to do is read a text file, extract some values, and then insert the values from the text file into a table in my SQL Server database. I am using Python 3.1.3, and it seems some of the modules I have come across in online research are not included in the library. Am I missing something? Is there a good third-party module I should know about? Any help would be greatly appreciated. I am using Windows. Thanks
3
1
0.066568
0
false
6,193,973
0
8,530
1
0
0
6,154,069
I found a module called ceODBC that I was able to use with Python 3 after doing some research. It looks like they will also be releasing a Python 3 compatible version of pyodbc soon. Thanks for all your help.
1
0
0
Connecting to Sql Server with Python 3 in Windows
3
python,sql,python-3.x,database-connection,python-module
0
2011-05-27T14:59:00.000
I have to write a Python program that communicates with a MySQL database to write in data. I have written the code, however it does not enter all the data, as it says there are duplicates. Is there a way to just include them?
0
0
0
0
false
6,162,872
0
38
1
0
0
6,162,827
You should provide more information, like your SQL and database schema. It sounds like you are trying to insert items with the same primary key. If you remove the primary key you should be able to insert the data, or change the insert statement to not insert the field which is the primary key.
1
0
0
Is there a way to write into python code a command to include duplicate entries into a My SQL database
1
python,sql
0
2011-05-28T16:14:00.000
I am planning to develop a web-based application which could crawl Wikipedia to find relations and store them in a database. By relations, I mean searching for a name, say 'Bill Gates', finding his page, downloading it, and pulling out the various pieces of information from the page and storing them in a database. Information may include his date of birth, his company and a few other things. But I need to know whether there is any way to find these unique data from the page, so that I can store them in a database. Any specific books or algorithms would be greatly appreciated. Mentions of good open-source libraries would also be helpful. Thank you
2
2
0.132549
0
false
6,171,789
0
3,092
1
0
0
6,171,764
You mention Python and open source, so I would investigate the NLTK (Natural Language Toolkit). Text mining and natural language processing is one of those things where you can do a lot with a dumb algorithm (e.g. pattern matching), but if you want to go a step further and do something more sophisticated, i.e. trying to extract information that is stored in a flexible manner or trying to find information that might be interesting but is not known a priori, then natural language processing should be investigated. NLTK is intended for teaching, so it is a toolkit. This approach suits Python very well. There are a couple of books for it as well; the O'Reilly book is also published online with an open license. See NLTK.org. (A small sketch follows this record.)
1
0
0
Mining Wikipedia for mapping relations for text mining
3
python,pattern-matching,data-mining,wikipedia,text-mining
0
2011-05-30T02:24:00.000
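A small sketch of the NLTK direction suggested above: tokenize a sentence, tag it, and pull out named-entity chunks. The sentence is illustrative, and the relevant NLTK corpora/models must be downloaded first (e.g. via nltk.download()).

import nltk

sentence = "Bill Gates was born on October 28, 1955 and co-founded Microsoft."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)      # named-entity chunks: PERSON, ORGANIZATION, ...
print(tree)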
The CSV file was created correctly, but the name and address fields contain every piece of punctuation there is available. So when you try to import it into MySQL you get parsing errors. For example, the name field could look like this: "john ""," doe". I have no control over the data I receive, so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right, but of course MySQL, Excel, LibreOffice, etc. see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss, as I have 17 million records to import. I have Windows and Linux, so whatever solution you can think of, please let me know.
2
7
1
0
false
6,172,230
0
3,332
5
0
0
6,172,123
This may not be a usable answer, but someone needs to say it: you shouldn't have to do this. CSV is a file format with an expected data encoding. If someone is supplying you a CSV file, then it should be delimited and escaped properly; otherwise it's a corrupted file and you should reject it. Make the supplier re-export the file properly from whatever data store it was exported from. If you asked someone to send you a JPG and they sent what was a proper JPG file with every 5th byte omitted or junk bytes inserted, you wouldn't accept that and say "oh, I'll reconstruct it for you".
1
0
0
What is an easy way to clean an unparsable csv file
6
php,python,mysql,csv
0
2011-05-30T03:54:00.000
The CSV file was created correctly, but the name and address fields contain every piece of punctuation there is available. So when you try to import it into MySQL you get parsing errors. For example, the name field could look like this: "john ""," doe". I have no control over the data I receive, so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right, but of course MySQL, Excel, LibreOffice, etc. see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss, as I have 17 million records to import. I have Windows and Linux, so whatever solution you can think of, please let me know.
2
0
0
0
false
6,172,324
0
3,332
5
0
0
6,172,123
First of all, find all the kinds of mistakes, and then just replace them with empty strings. Just do it! If you need this corrupted data, only you can recover it.
1
0
0
What is an easy way to clean an unparsable csv file
6
php,python,mysql,csv
0
2011-05-30T03:54:00.000
The CSV file was created correctly, but the name and address fields contain every piece of punctuation there is available. So when you try to import it into MySQL you get parsing errors. For example, the name field could look like this: "john ""," doe". I have no control over the data I receive, so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right, but of course MySQL, Excel, LibreOffice, etc. see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss, as I have 17 million records to import. I have Windows and Linux, so whatever solution you can think of, please let me know.
2
0
0
0
false
6,172,154
0
3,332
5
0
0
6,172,123
MySQL import has many parameters, including escape characters. Given the example, I think the quotes are escaped by putting a quote in front. So an import with ESCAPED BY '"' should work (see the sketch after this record).
1
0
0
What is an easy way to clean an unparsable csv file
6
php,python,mysql,csv
0
2011-05-30T03:54:00.000
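A sketch of the import suggested in the answer above, run through MySQLdb; the file path, table name and column layout are placeholders. With ENCLOSED BY '"' and ESCAPED BY '"', a doubled quote inside a quoted field is read as a literal quote.

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                       db="mydb", local_infile=1)
cur = conn.cursor()
cur.execute(r"""
    LOAD DATA LOCAL INFILE '/tmp/records.csv'
    INTO TABLE people
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '"'
    LINES TERMINATED BY '\n'
""")
conn.commit()
conn.close()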
The CSV file was created correctly, but the name and address fields contain every piece of punctuation there is available. So when you try to import it into MySQL you get parsing errors. For example, the name field could look like this: "john ""," doe". I have no control over the data I receive, so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right, but of course MySQL, Excel, LibreOffice, etc. see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss, as I have 17 million records to import. I have Windows and Linux, so whatever solution you can think of, please let me know.
2
0
0
0
false
6,172,145
0
3,332
5
0
0
6,172,123
That's a really tough issue. I don't know of any real way to solve it, but maybe you could try splitting on ",", cleaning up the items in the resulting array (unicorns :) ) and then re-joining the row?
1
0
0
What is an easy way to clean an unparsable csv file
6
php,python,mysql,csv
0
2011-05-30T03:54:00.000
The CSV file was created correctly, but the name and address fields contain every piece of punctuation there is available. So when you try to import it into MySQL you get parsing errors. For example, the name field could look like this: "john ""," doe". I have no control over the data I receive, so I'm unable to stop people from inputting garbage data. From the example above you can see that if you consider the outside quotes to be the enclosing quotes then it is right, but of course MySQL, Excel, LibreOffice, etc. see a whole new field. Is there a way to fix this problem? Some fields I found even have a backslash before the last enclosing quote. I'm at a loss, as I have 17 million records to import. I have Windows and Linux, so whatever solution you can think of, please let me know.
2
1
0.033321
0
false
6,172,224
0
3,332
5
0
0
6,172,123
You don't say whether you have control over the creation of the CSV file. I am assuming you do, as if not, the CSV file is corrupt and cannot be recovered without human intervention, or some very clever algorithms to "guess" the correct delimiters vs. the user-entered ones. Convert user-entered tabs (assuming there are some) to spaces and then export the data using a TAB separator. If the above is not possible, you need to implement an escape sequence to ensure that user-entered data is not treated as a delimiter.
1
0
0
What is an easy way to clean an unparsable csv file
6
php,python,mysql,csv
0
2011-05-30T03:54:00.000
I'm writing the server for a Javascript app that has a syncing feature. Files and directories being created and modified by the client need to be synced to the server (the same changes made on the client need to be made on the server, including deletes). Since every file is on the server, I'm debating the need for a MySQL database entry corresponding to each file. The following information needs to be kept on each file/directory for every user: Whether it was deleted or not (since deletes need to be synced to other clients) The timestamp of when every file was last modified (so I know whether the file needs updating by the client or not) I could keep both of those pieces of information in files (e.g. .deleted file and .modified file in every user's directory containing file paths + timestamps in the latter) or in the database. However, I also have to fit under an 80mb memory constraint. Between file storage and database storage, which would be more memory-efficient for this purpose? Edit: Files have to be stored on the filesystem (not in a database), and users have a quota for the storage space they can use.
1
0
0
0
false
6,181,167
0
216
1
0
0
6,180,732
In my opinion, the only real way to be sure is to build a test system and compare the space requirements. It shouldn't take that long to generate some random data programmatically. One might think the file system would be more efficient, but databases can and might compress the data or deduplicate it, or whatever. Don't forget that a database would also make it easier to implement new features, perhaps access control.
1
0
0
Memory usage of file versus database for simple data storage
3
python,django,memory
0
2011-05-30T21:04:00.000
I need a job scheduler (a library) that queries a db every 5 minutes and, based on time, triggers events which have expired and rerun on failure. It should be in Python or PHP. I researched and came up with Advanced Python Scheduler but it is not appropriate because it only schedules the jobs in its job store. Instead, I want that it takes jobs from a database. I also found Taskforest, which exactly fits my needs except it is a text-file based scheduler meaning the jobs have to be added to the text-file either through the scheduler or manually, which I don't want to do. Could anyone suggest me something useful?
0
1
1.2
0
true
6,184,556
1
710
1
0
0
6,184,491
Here's a possible solution: a script, either in PHP or Python, that performs your database tasks, plus a scheduler (cron on Linux, or the Windows Task Scheduler) where you set the frequency of your jobs. I'm using this solution for multiple projects, and it is very easy to set up. (A sketch follows this record.)
1
0
0
Database Based Job scheduler
2
php,python,database
0
2011-05-31T07:50:00.000
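A sketch of the cron-driven approach from the answer above: a standalone script that checks the database for due jobs on each run and is scheduled every five minutes (e.g. via cron or the Windows Task Scheduler). The table and column names are hypothetical, and MySQLdb is assumed as the driver.

import MySQLdb

def run_due_jobs():
    conn = MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="scheduler")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, command FROM jobs "
                    "WHERE run_at <= NOW() AND state = 'pending'")
        for job_id, command in cur.fetchall():
            # ... run the job here; on failure leave state as 'pending' to rerun ...
            cur.execute("UPDATE jobs SET state = 'done' WHERE id = %s", (job_id,))
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    run_due_jobs()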
When accessing a MySQL database at a low level using Python, I use the MySQLdb module. I create a connection instance, then a cursor instance, and then I pass it to every function that needs the cursor. Sometimes I have many nested function calls, all wanting the MySQL cursor. Would it hurt to initialise the connection as a global variable, so I can save a parameter for each function that needs the cursor? I can provide an example if my explanation is insufficient.
2
1
1.2
0
true
6,191,102
0
301
1
0
0
6,190,982
I think that database cursors are scarce resources, so passing them around can limit your scalability and cause management issues (e.g. which method is responsible for closing the connection?). I'd recommend pooling connections and keeping them open for the shortest time possible. Check out the connection, perform the database operation, map any results to objects or data structures, and close the connection. Pass the object or data structure with the results around rather than passing the cursor itself; the cursor scope should be narrow. (A sketch follows this record.)
1
0
0
What is the best way to handle connections (e.g. to mysql server using MySQLdb) in python, needed by multiple nested functions?
1
python,connection,global-variables
0
2011-05-31T17:03:00.000
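A sketch of the narrow-scope pattern described in the answer above: open a connection, run the query, map the rows to plain data structures, and close the connection before returning. The connection parameters, table and columns are placeholders.

import MySQLdb

def fetch_users():
    conn = MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="mydb")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM users")
        return [{"id": row[0], "name": row[1]} for row in cur.fetchall()]
    finally:
        conn.close()          # callers get plain dicts, never the cursor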
I am trying to push user account data from an Active Directory to our MySQL-Server. This works flawlessly but somehow the strings end up showing an encoded version of umlauts and other special characters. The Active Directory returns a string using this sample format: M\xc3\xbcller This actually is the UTF-8 encoding for Müller, but I want to write Müller to my database not M\xc3\xbcller. I tried converting the string with this line, but it results in the same string in the database: tempEntry[1] = tempEntry[1].decode("utf-8") If I run print "M\xc3\xbcller".decode("utf-8") in the python console the output is correct. Is there any way to insert this string the right way? I need this specific format for a web developer who wants to have this exact format, I don't know why he is not able to convert the string using PHP directly. Additional info: I am using MySQLdb; The table and column encoding is utf8_general_ci
37
0
0
0
false
7,720,395
0
77,413
1
0
0
6,202,726
Does db.set_character_set('utf8') imply that use_unicode=True?
1
0
0
Writing UTF-8 String to MySQL with Python
8
python,unicode,utf-8
0
2011-06-01T14:23:00.000
I'd like to be able to include python code snippets in Excel (ideally, in a nice format -- all colors/formats should be kept the same). What would be the best way to go about it? EDIT: I just want to store python code in an Excel spreadsheet for an easy overview -- I am not going to run it -- just want it to be nicely visible/formatted as part of an Excel worksheet.
2
1
0.099668
0
false
6,220,700
0
486
1
0
0
6,216,278
While Excel itself does not support scripting languages other than VBA, the open source OpenOffice and LibreOffice packages, which include a spreadsheet, can be scripted with Python. Still, they won't allow Python code to be pasted into the cells out of the box, but it is possible to write Python code which can act on the spreadsheet contents (and do all the other things Python can do).
1
0
1
Include Python Code In Excel?
2
python,excel
0
2011-06-02T14:58:00.000
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database. Thanks!
6
8
1.2
0
true
6,224,703
0
15,473
3
0
0
6,222,381
Well, conceptually you can store a list as a bunch of rows in a table using a one-to-many relation, or you can focus on how to store a list in a particular database backend. For example, Postgres can store an array in a particular cell using the sqlalchemy.dialects.postgresql.ARRAY data type, which can serialize a Python list into a Postgres array column (see the sketch after this record).
1
0
1
The best way to store a python list to a database?
4
python,database,sqlalchemy,pyramid
0
2011-06-03T02:22:00.000
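A short sketch of the PostgreSQL array column mentioned in the answer above; the table and column names are illustrative.

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Measurement(Base):
    __tablename__ = "measurements"
    id = Column(Integer, primary_key=True)
    values = Column(ARRAY(Integer))       # stores e.g. [4, 7, 10, 39, 91]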
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database. Thanks!
6
0
0
0
false
40,277,177
0
15,473
3
0
0
6,222,381
sqlalchemy.types.PickleType can store a list.
1
0
1
The best way to store a python list to a database?
4
python,database,sqlalchemy,pyramid
0
2011-06-03T02:22:00.000
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database. Thanks!
6
0
0
0
false
6,224,600
0
15,473
3
0
0
6,222,381
Use a string (VARCHAR). From the Zen of Python: "Simple is better than complex."
1
0
1
The best way to store a python list to a database?
4
python,database,sqlalchemy,pyramid
0
2011-06-03T02:22:00.000
I'm writing a small Python CGI script that captures the User-Agent, parses the OS, browser name and version, maps it to a database, and returns a device grade (integer). Since this is only one table, it's a pretty simple operation, but I will likely have substantial traffic (10,000+ hits a day, potentially scaling much higher in the near and far future). Which noSQL database would you recommend for this sort of application? I would also like to build an admin interface which allows for manual input and is searchable. I'm fairly new to Python and completely new to noSQL, and I'm having trouble finding any good info or libraries. Any suggestions?
2
2
0.132549
0
false
6,231,573
0
776
1
0
0
6,230,793
It depends on your use-case. Are you planning on caching the records temporarily or do you want the records to persist? If the former, Redis would be the best choice because of its speed. If the latter, it would be better to choose either CouchDB or MongoDB because they can handle large datasets.
1
0
0
Recommendations for a noSQL database for use with Python
3
python,nosql
0
2011-06-03T17:53:00.000
MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there any light server-side hooks for mongodb, just like server-side hooks for svn, git? ex, post-commit, pre-commit, ...
2
0
0
0
false
19,877,756
0
1,855
2
0
0
6,273,573
FWIW, one of the messages in the web UI seems to imply that some hooks do exist ("adding sharding hook to enable versioning and authentication to remote servers"), but they might only be available within the compiled binaries, not to clients.
1
0
0
What is suggested way to have server-side hooks over mongodb?
2
python,mongodb,hook,server-side,3-tier
0
2011-06-08T02:26:00.000
MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there any light server-side hooks for mongodb, just like server-side hooks for svn, git? ex, post-commit, pre-commit, ...
2
2
0.197375
0
false
6,277,024
0
1,855
2
0
0
6,273,573
No, there are no features currently available in MongoDB equivalent to hooks or triggers. It'd be best to handle this sort of thing from within your application logic.
1
0
0
What is suggested way to have server-side hooks over mongodb?
2
python,mongodb,hook,server-side,3-tier
0
2011-06-08T02:26:00.000
I have a script with several functions that all need to make database calls. I'm trying to get better at writing clean code rather than just throwing together scripts with horrible style. What is generally considered the best way to establish a global database connection that can be accessed anywhere in the script but is not susceptible to errors such as accidentally redefining the variable holding a connection. I'd imagine I should be putting everything in a module? Any links to actual code would be very useful as well. Thanks.
3
0
0
0
false
6,282,794
0
317
1
0
0
6,281,732
Use a model system/ORM system.
1
0
0
Proper way to establish database connection in python
2
python,database,coding-style,mysql-python
0
2011-06-08T15:58:00.000
I'm developing Python code that uses SQLite in a multi-threaded program. A remote host calls some XML-RPC functions and new threads are created. Each function, which runs in a new thread, uses SQLite either to insert data into or read data from the database. My problem is that when I call the server more than 5 times at the same time, the server breaks with a "segmentation fault", and the output doesn't provide any other information. Any idea what can cause the problem?
0
2
0.197375
0
false
6,289,986
0
1,212
2
0
0
6,289,821
If you read the SQLite documentation (http://www.sqlite.org/threadsafe.html), you'll see that it says: SQLite supports three different threading modes: Single-thread. In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once. Multi-thread. In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads. Serialized. In serialized mode, SQLite can be safely used by multiple threads with no restriction. So it would seem that you're either in single-thread mode, or in multi-thread mode and reusing connections. Reusing a connection across threads is only safe in serialized mode (which is slow). Now, the Python documentation states that it should not allow you to share connections. Are you using the python-sqlite3 module, or are you natively interfacing with the database? (A sketch follows this record.)
1
0
1
Segmentation Fault in Python multi-threaded Sqlite use!
2
python,multithreading,sqlite
0
2011-06-09T08:04:00.000
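A small sketch related to the answer above: give each thread its own sqlite3 connection instead of sharing one across threads. The database path, table and column are placeholders.

import sqlite3
import threading

_local = threading.local()

def get_connection():
    # one connection per thread; never handed to another thread
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect("app.db")
    return _local.conn

def insert_row(value):
    conn = get_connection()
    conn.execute("INSERT INTO items (value) VALUES (?)", (value,))
    conn.commit()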
I'm developing Python code that uses SQLite in a multi-threaded program. A remote host calls some XML-RPC functions and new threads are created. Each function, which runs in a new thread, uses SQLite either to insert data into or read data from the database. My problem is that when I call the server more than 5 times at the same time, the server breaks with a "segmentation fault", and the output doesn't provide any other information. Any idea what can cause the problem?
0
1
0.099668
0
false
6,313,973
0
1,212
2
0
0
6,289,821
My APSW module is threadsafe and you can use that. The standard Python SQLite cannot be safely used concurrently across multiple threads.
1
0
1
Segmentation Fault in Python multi-threaded Sqlite use!
2
python,multithreading,sqlite
0
2011-06-09T08:04:00.000
Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3. Can a Python script that runs on a small EC2 instance directly access and modify that SQLite file, or must I first copy the file to the EC2 instance, change it there and then copy it over to S3? Will the I/O be efficient? Here's what I am trying to do. As I wrote, I have a 500[MB] SQLite file in S3. I'd like to start, say, 10 different Amazon EC2 instances that will each read a subset of the file and do some processing (every instance will handle a different subset of the 500[MB] SQLite file). Then, once processing is done, every instance will update only the subset of the data it dealt with (as explained, there will be no overlap of data among processes). For example, suppose that the SQLite file has say 1M rows: instance 1 will deal with (and update) rows 0 - 100000, instance 2 will deal with (and update) rows 100001 - 200000, ......................... instance 10 will deal with (and update) rows 900001 - 1000000. Is it at all possible? Does it sound OK? Any suggestions / ideas are welcome.
4
0
0
0
false
38,705,012
0
7,758
2
0
0
6,301,795
Amazon EFS can be shared among EC2 instances; it's a managed NFS share. SQLite will still lock the whole DB on write, and the SQLite website does not recommend NFS shares, though. But depending on the application you can share the DB read-only among several EC2 instances and store the results of your processing somewhere else, then concatenate the results in the next step.
1
0
0
Amazon EC2 & S3 When using Python / SQLite?
5
python,sqlite,amazon-s3,amazon-ec2
0
2011-06-10T03:54:00.000
Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3. Can a Python script that runs on a small EC2 instance directly access and modify that SQLite file, or must I first copy the file to the EC2 instance, change it there and then copy it over to S3? Will the I/O be efficient? Here's what I am trying to do. As I wrote, I have a 500[MB] SQLite file in S3. I'd like to start, say, 10 different Amazon EC2 instances that will each read a subset of the file and do some processing (every instance will handle a different subset of the 500[MB] SQLite file). Then, once processing is done, every instance will update only the subset of the data it dealt with (as explained, there will be no overlap of data among processes). For example, suppose that the SQLite file has say 1M rows: instance 1 will deal with (and update) rows 0 - 100000, instance 2 will deal with (and update) rows 100001 - 200000, ......................... instance 10 will deal with (and update) rows 900001 - 1000000. Is it at all possible? Does it sound OK? Any suggestions / ideas are welcome.
4
2
0.07983
0
false
6,301,870
0
7,758
2
0
0
6,301,795
Since S3 cannot be directly mounted, your best bet is to create an EBS volume containing the SQLite file and work directly with the EBS volume from another (controller) instance. You can then create snapshots of the volume, and archive it into S3. Using a tool like boto (Python API), you can automate the creation of snapshots and the process of moving the backups into S3.
1
0
0
Amazon EC2 & S3 When using Python / SQLite?
5
python,sqlite,amazon-s3,amazon-ec2
0
2011-06-10T03:54:00.000
Suppose that I have a huge SQLite file (say, 500[MB]). Can 10 different Python instances access this file at the same time and update different records of it? Note, the emphasis here is on different records. For example, suppose that the SQLite file has say 1M rows: instance 1 will deal with (and update) rows 0 - 100000, instance 2 will deal with (and update) rows 100001 - 200000, ......................... instance 10 will deal with (and update) rows 900001 - 1000000. Meaning, each Python instance will only be updating a unique subset of the file. Will this work, or will I have serious integrity issues?
2
4
1.2
0
true
6,301,903
0
1,234
1
0
0
6,301,816
Updated, thanks to André Caron. You can do that, but only read operations support concurrency in SQLite, since the entire database is locked on any write operation. SQLite will return SQLITE_BUSY status in this situation (if it exceeds the default timeout for access). Also consider that this depends heavily on how well file locking is implemented for the given OS and file system. In general I wouldn't recommend the proposed solution, especially considering that the DB file is quite large, but you can try. It would be better to use a server-process-based database (MySQL, PostgreSQL, etc.) to implement the desired app behaviour.
1
0
1
SQLite Concurrency with Python?
2
python,sqlite,concurrency
0
2011-06-10T03:58:00.000
I have the Collective Intelligence book, but I'm not sure how it can be applied in practice. Let's say I have a PHP website with a MySQL database. Users can insert articles with a title and content into the database. For the sake of simplicity, we just compare the titles: How to Make Coffee? 15 Things About Coffee. The Big Question. How to Sharpen A Pencil? Guy Getting Hit in Balls. We open the 'How to Make Coffee?' article, and because there are similarities in words with the second and fourth titles, they will be displayed in the Related Articles section. How can I implement this using PHP and MySQL? It's OK if I have to use Python. Thanks in advance.
7
0
0
0
false
47,667,603
0
21,630
1
0
0
6,302,184
This can be achieved simply by using wildcards in SQL queries. If you have larger texts and the wildcard seems unable to capture the middle part of the text, then check whether a substring of one matches the other. I hope this helps. BTW, your question title asks about implementing a recommendation system, while the question description just asks about matching a field among database records. Recommendation systems are a broad topic and come with many interesting algorithms (e.g., collaborative filtering, content-based methods, matrix factorization, neural networks, etc.). Please feel free to explore these advanced topics if your project is at that scale.
1
0
0
How to Implement A Recommendation System?
3
php,python,mysql,recommendation-engine
0
2011-06-10T05:05:00.000
I'm using MySQLdb in Python. I have an update that may succeed or fail: UPDATE table SET reserved_by = PID, state = "unavailable" WHERE state = "available" AND id = REQUESTED_ROW_ID LIMIT 1; As you may be able to infer, multiple processes are using the database, and I need processes to be able to securely grab rows for themselves, without race conditions causing problems. My theory (perhaps incorrect) is that only one process will be able to succeed with this query (.rowcount=1) -- the others will fail (.rowcount=0) or get a different row (.rowcount=1). The problem is, it appears that everything that happens through MySQLdb happens in a virtual world -- .rowcount reads =1, but you can't really know whether anything really happened, until you perform a .commit(). My questions: In MySQL, is a single UPDATE atomic within itself? That is, if the same UPDATE above (with different PID values, but the same REQUESTED_ROW_ID) were sent to the same MySQL server at "once," am I guaranteed that one will succeed and the other will fail? Is there a way to know, after calling "conn.commit()", whether there was a meaningful change or not? (Can I get a reliable .rowcount for the actual commit operation?) Does the .commit operation send the actual query (SETs and WHERE conditions intact), or does it just perform the SETs on affected rows, independent of the WHERE clauses that inspired them? Is my problem solved neatly by .autocommit?
0
0
0
0
false
6,339,210
0
576
1
0
0
6,337,798
Turn autocommit on. The commit operation just "confirms" updates already done. The alternative is rollback, which "undoes" any updates already made.
1
0
0
How do I get the actual cursor.rowcount upon .commit?
1
python,mysql,connect,mysql-python,rowcount
0
2011-06-14T00:11:00.000
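As a hedged sketch of the suggestion above (the table and column names are hypothetical), MySQLdb lets you enable autocommit on the connection and read cursor.rowcount right after execute():

```python
# Sketch: with autocommit on, the UPDATE is applied immediately and rowcount is meaningful.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="jobs")
conn.autocommit(True)

cur = conn.cursor()
affected = cur.execute(
    "UPDATE tasks SET reserved_by = %s, state = 'unavailable' "
    "WHERE state = 'available' AND id = %s LIMIT 1",
    (1234, 42),
)
# affected (== cur.rowcount) is 1 if this process grabbed the row, 0 otherwise.
print(affected)
```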
I want to do the following: Have software running written in Python 2.7. This software connects to a database (currently a MySQL database). This software listens for connections on a port X over TCP. When a connection is established, a client requests or commands something, then the software uses the database to store, remove or fetch information (based on the request or command). What I currently have in mind is the classic approach of connecting to the database, storing the connection to the database in an object (as a variable) that is passed to the threads that are spawned by the connection listener, then these threads use the variable in the object to do what they need to do with the database connection. (I know that multi-processing is better than multi-threading in Python, but it's not related to my question at this time.) Now my question: how should I use SQLAlchemy in this context? I am quite confused even though I have been reading quite a lot of documentation about it, and there don't seem to be "good" examples on how to handle this kind of situation specifically, even though I have been searching quite a lot.
0
1
1.2
0
true
6,338,431
0
183
1
0
0
6,337,812
What is the problem here? SQLAlchemy maintains a thread-local connection pool... what else do you need?
1
0
1
How to use SQLAlchemy in this context
1
python,sqlalchemy
0
2011-06-14T00:14:00.000
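To make that answer concrete, here is a minimal sketch (the model logic and database URL are hypothetical) of the usual SQLAlchemy pattern for threaded servers: one engine per process and a thread-local scoped_session that each handler thread uses and removes when done:

```python
# Sketch: one engine per process, thread-local sessions per handler thread.
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("mysql://user:pw@localhost/appdb", pool_recycle=3600)
Session = scoped_session(sessionmaker(bind=engine))

def handle_request(payload):
    session = Session()          # returns this thread's session
    try:
        # ... store/remove/fetch using session.add(), session.query(), etc.
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        Session.remove()         # give the connection back to the pool
```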
I'm trying to make large changes to a number of Excel workbooks (over 20). Each workbook contains about 16 separate sheets, and I want to write a script that will loop through each workbook and the sheets contained inside and write/modify the cells that I need. I need to keep all string validation, macros, and formatting. All the workbooks are in 2007 format. I've already looked at Python Excel libraries and PHPExcel, but macros, buttons, formulas, string validation, and formatting are not kept when the new workbook is written. Is there an easy way to do this, or will I have to open up each workbook individually and commit the changes? I'm trying to avoid creating a macro in VBScript and having to open up each workbook separately to commit the changes I need.
4
0
0
0
false
6,361,909
0
3,284
1
0
0
6,348,011
You can also use the PyWin32 libraries to script this with Python using typical COM techniques. This lets you use Python to do your processing, and still save all of the extra parts of each workbook that other Python Excel libraries may not handle.
1
0
0
Scripting changes to multiple excel workbooks
3
python,vba,scripting,excel
0
2011-06-14T18:09:00.000
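A hedged sketch of the COM approach mentioned above (file paths and the cell edit are placeholders); because the real Excel application performs the save, macros, validation and formatting are preserved:

```python
# Sketch: drive Excel itself through COM so nothing in the workbook is lost.
import glob
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False

for path in glob.glob(r"C:\books\*.xlsx"):
    wb = excel.Workbooks.Open(path)
    for i in range(1, wb.Sheets.Count + 1):
        wb.Sheets(i).Cells(1, 1).Value = "updated"   # whatever edits you need
    wb.Save()
    wb.Close()

excel.Quit()
```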
I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes. (The benefits I can imagine are: schemaless history, snapshots of the entire system's state, standard tools for analysis, playback and migration, efficient storage, a separate system, keeping the DB clean. The database is not write-heavy and history is not a core feature, it's more for the sake of having an audit trail. Oh and I like trying crazy new approaches to problems.) I'm not an expert with these systems (I only have basic git familiarity), so I'm not sure how difficult it would be to implement. I'm thinking of taking mercurial's approach, but possibly storing the file contents/manifests/changesets in a key-value data store, not using actual files. Data rows would be serialised to json; each "file" could be a row. Alternatively an entire table could be stored in a "file", with each row residing on the line number equal to its primary key (assuming the tables aren't too big, I'm expecting all to have less than 4000 or so rows). This might mean that the changesets could be automatically generated, without consulting the rest of the table "file". (But I doubt it, because I think we need a SHA-1 hash of the whole file. The files could perhaps be split up by a predictable number of lines, e.g. 0 < primary key < 1000 in file 1, 1000 < primary key < 2000 in file 2 etc, keeping them smallish.) Is there anyone familiar with the internals of DVCSs or data structures in general who might be able to comment on an approach like this? How could it be made to work, and should it even be done at all? I guess there are two aspects to a system like this: 1) mapping SQL data to a DVCS system and 2) storing the DVCS data in a key/value data store (not files) for efficiency. (NB the json serialisation bit is covered by my ORM)
2
0
0
0
false
6,380,661
0
386
2
0
0
6,380,623
If the database is not write-heavy (as you say), why not just implement the actual database tables in a way that achieves your goal? For example, add a "version" column. Then never update or delete rows, except for this special column, which you can set to NULL to mean "current," 1 to mean "the oldest known", and go up from there. When you want to update a row, set its version to the next higher one, and insert a new one with no version. Then when you query, just select rows with the empty version.
1
0
0
Using DVCS for an RDBMS audit trail
3
python,git,mercurial,rdbms,audit-trail
0
2011-06-17T01:55:00.000
I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes. (The benefits I can imagine are: schemaless history, snapshots of the entire system's state, standard tools for analysis, playback and migration, efficient storage, a separate system, keeping the DB clean. The database is not write-heavy and history is not a core feature, it's more for the sake of having an audit trail. Oh and I like trying crazy new approaches to problems.) I'm not an expert with these systems (I only have basic git familiarity), so I'm not sure how difficult it would be to implement. I'm thinking of taking mercurial's approach, but possibly storing the file contents/manifests/changesets in a key-value data store, not using actual files. Data rows would be serialised to json; each "file" could be a row. Alternatively an entire table could be stored in a "file", with each row residing on the line number equal to its primary key (assuming the tables aren't too big, I'm expecting all to have less than 4000 or so rows). This might mean that the changesets could be automatically generated, without consulting the rest of the table "file". (But I doubt it, because I think we need a SHA-1 hash of the whole file. The files could perhaps be split up by a predictable number of lines, e.g. 0 < primary key < 1000 in file 1, 1000 < primary key < 2000 in file 2 etc, keeping them smallish.) Is there anyone familiar with the internals of DVCSs or data structures in general who might be able to comment on an approach like this? How could it be made to work, and should it even be done at all? I guess there are two aspects to a system like this: 1) mapping SQL data to a DVCS system and 2) storing the DVCS data in a key/value data store (not files) for efficiency. (NB the json serialisation bit is covered by my ORM)
2
2
0.132549
0
false
6,396,514
0
386
2
0
0
6,380,623
I've looked into this a little on my own, and here are some comments to share. Although I had thought using mercurial from python would make things easier, there's a lot of functionality that the DVCSs have that isn't necessary (esp branching, merging). I think it would be easier to simply steal some design decisions and implement a basic system for my needs. So, here's what I came up with. Blobs: The system makes a json representation of the record to be archived, and generates a SHA-1 hash of this (a "node ID" if you will). This hash represents the state of that record at a given point in time and is the same as git's "blob". Changesets: Changes are grouped into changesets. A changeset takes note of some metadata (timestamp, committer, etc) and links to any parent changesets and the current "tree". Trees: Instead of using Mercurial's "Manifest" approach, I've gone for git's "tree" structure. A tree is simply a list of blobs (model instances) or other trees. At the top level, each database table gets its own tree. The next level can then be all the records. If there are lots of records (there often are), they can be split up into subtrees. Doing this means that if you only change one record, you can leave the untouched trees alone. It also allows each record to have its own blob, which makes things much easier to manage. Storage: I like Mercurial's revlog idea, because it allows you to minimise the data storage (storing only changesets) and at the same time keep retrieval quick (all changesets are in the same data structure). This is done on a per record basis. I think a system like MongoDB would be best for storing the data (it has to be key-value, and I think Redis is too focused on keeping everything in memory, which is not important for an archive). It would store changesets, trees and revlogs. A few extra keys for the current HEAD etc and the system is complete. Because we're using trees, we probably don't need to explicitly link foreign keys to the exact "blob" they're referring to. Just using the primary key should be enough. I hope! Use case 1: Archiving a change. As soon as a change is made, the current state of the record is serialised to json and a hash is generated for its state. This is done for all other related changes and packaged into a changeset. When complete, the relevant revlogs are updated, new trees and subtrees are generated with the new object ("blob") hashes and the changeset is "committed" with meta information. Use case 2: Retrieving an old state. After finding the relevant changeset (MongoDB search?), the tree is then traversed until we find the blob ID we're looking for. We go to the revlog and retrieve the record's state or generate it using the available snapshots and changesets. The user will then have to decide if the foreign keys need to be retrieved too, but doing that will be easy (using the same changeset we started with). Summary: None of these operations should be too expensive, and we have a space efficient description of all changes to a database. The archive is kept separately to the production database allowing it to do its thing and allowing changes to the database schema to take place over time.
1
0
0
Using DVCS for an RDBMS audit trail
3
python,git,mercurial,rdbms,audit-trail
0
2011-06-17T01:55:00.000
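As a small illustration of the "blob" step described in the answer above (the record dict is hypothetical), hashing a canonical JSON serialisation gives a stable content ID:

```python
# Sketch: content-address a record by hashing its canonical JSON form.
import hashlib
import json

record = {"id": 42, "firstName": "John", "lastName": "Smith"}

blob = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
blob_id = hashlib.sha1(blob).hexdigest()   # same data -> same id, like a git blob
print(blob_id)
```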
I'm trying to figure out how to use python's mysqldb. I can do my job with my current knowledge, but I want to use the best practices. Should I close my cursor properly? Doesn't exiting the program close it automatically? (Shouldn't I expect the object destructor to do it anyway?) Should I create new cursors for every query, or is one cursor enough for multiple different queries in the same DB?
2
2
1.2
0
true
6,453,159
0
742
1
0
0
6,453,067
Should I close my cursor properly? Yes, you should. Explicit is better than implicit. Should I create new cursors for every query, or is one cursor enough for multiple different queries in the same DB? This depends on how you use the cursor. For simple tasks it is enough to use one cursor. For some complex applications it is better to create a separate cursor for each batch of SQL queries.
1
0
0
How to properly use mysqldb in python
1
python,cursor,mysql-python
0
2011-06-23T11:08:00.000
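A hedged sketch of the "explicit is better than implicit" advice (connection parameters and the table are placeholders), using contextlib.closing so cursors and the connection are always closed:

```python
# Sketch: close the cursor (and the connection) deterministically.
from contextlib import closing
import MySQLdb

with closing(MySQLdb.connect(host="localhost", user="u", passwd="p", db="mydb")) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("SELECT id, name FROM customers WHERE id = %s", (1,))
        print(cur.fetchone())
    conn.commit()
```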
How can Flask / SQLAlchemy be configured to create a new database connection if one is not present? I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fine, but it looks unprofessional to have this initial error. I'd like to know the correct way to handle this - advice like "make a really long time out", which would be about 4 days long in this case, doesn't seem correct. How can I test for the lack of a database connection and create one if needed?
64
6
1
0
false
58,821,330
1
33,654
2
0
0
6,471,549
The pessimistic approach described by @wim (pool_pre_ping=True) can now be enabled for Flask-SQLAlchemy using a config variable --> SQLALCHEMY_POOL_PRE_PING = True
1
0
0
Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy
7
python,mysql,sqlalchemy,flask,database-connection
0
2011-06-24T17:34:00.000
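For plain SQLAlchemy (outside Flask-SQLAlchemy's config layer), the same pessimistic check is a keyword on create_engine; the URL below is a placeholder:

```python
# Sketch: pool_pre_ping issues a lightweight "ping" before handing out a pooled
# connection, transparently replacing connections MySQL has dropped while idle.
from sqlalchemy import create_engine

engine = create_engine(
    "mysql://user:pw@localhost/appdb",
    pool_pre_ping=True,      # available in SQLAlchemy 1.2+
    pool_recycle=3600,       # belt-and-braces: recycle connections older than 1h
)
```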
How can Flask / SQLAlchemy be configured to create a new database connection if one is not present? I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fine, but it looks unprofessional to have this initial error. I'd like to know the correct way to handle this - advice like "make a really long time out", which would be about 4 days long in this case, doesn't seem correct. How can I test for the lack of a database connection and create one if needed?
64
2
0.057081
0
false
51,015,137
1
33,654
2
0
0
6,471,549
When I encountered this error I was storing a LONGBLOB / LargeBinary image ~1MB in size. I had to adjust the max_allowed_packet config setting in MySQL. I used mysqld --max-allowed-packet=16M
1
0
0
Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy
7
python,mysql,sqlalchemy,flask,database-connection
0
2011-06-24T17:34:00.000
I have to read incoming data from a barcode scanner using pyserial. Then I have to store the contents in a MySQL database. I have the database part but not the serial part. Can someone show me examples of how to do this? I'm using a Windows machine.
1
1
1.2
0
true
6,474,062
0
916
1
0
0
6,471,569
You will find it easier to use a USB scanner. These will decode the scan and send it as if it were typed on the keyboard, entered with a trailing return. The barcode is typically written with leading and trailing * characters, but these are not sent with the scan. Thus you print "*AB123*" using a 3 of 9 font, and when it is scanned sys.stdin.readline().strip() will return "AB123". There are more than a few options that can be set in the scanner, so you need to read the manual. I have shown the factory default above for a cheap nameless scanner I bought from Amazon.
1
0
0
Reading incoming data from barcode
1
python,pyserial
0
2011-06-24T17:35:00.000
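If the scanner really does present itself as a serial device, a minimal pyserial sketch (the port name, baud rate, and table are assumptions) that reads one scan per line and stores it could look like:

```python
# Sketch: read newline-terminated scans from a serial port and insert them.
import serial
import MySQLdb

ser = serial.Serial("COM3", 9600, timeout=1)          # port/baud depend on the scanner
conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="scans")
cur = conn.cursor()

while True:
    line = ser.readline().strip()
    if line:
        cur.execute("INSERT INTO barcodes (code) VALUES (%s)", (line,))
        conn.commit()
```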
I couldn't find any information about this in the documentation, but how can I get a list of tables created in SQLAlchemy? I used the class method to create the tables.
133
99
1
0
false
30,554,677
0
133,023
1
0
0
6,473,925
There is a method on the engine object to fetch the list of table names: engine.table_names()
1
0
0
SQLAlchemy - Getting a list of tables
14
python,mysql,sqlalchemy,pyramid
0
2011-06-24T21:25:00.000
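A hedged example of the call mentioned above (the database URL is a placeholder); in newer SQLAlchemy releases the inspector spelling is the non-deprecated equivalent:

```python
# Sketch: list the tables known to the connected database.
from sqlalchemy import create_engine, inspect

engine = create_engine("sqlite:///app.db")

print(engine.table_names())                 # older SQLAlchemy versions
print(inspect(engine).get_table_names())    # inspector-based equivalent
```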
I'm dealing with some big (tens of millions of records, around 10gb) database files using SQLite. I'm doing this via python's standard interface. When I try to insert millions of records into the database, or create indices on some of the columns, my computer slowly runs out of memory. If I look at the normal system monitor, it looks like the majority of the system memory is free. However, when I use top, it looks like I have almost no system memory free. If I sort the processes by their memory consumption, then none of them uses more than a couple percent of my memory (including the python process that is running sqlite). Where is all the memory going? Why do top and Ubuntu's system monitor disagree about how much system memory I have? Why does top tell me that I have very little memory free, and at the same time not show which process(es) is (are) using all the memory? I'm running Ubuntu 11.04, sqlite3, python 2.7.
2
0
1.2
0
true
6,491,966
0
1,060
1
0
0
6,491,856
The memory may be not assigned to a process, but it can be e.g. a file on tmpfs filesystem (/dev/shm, /tmp sometimes). You should show us the output of top or free (please note those tools do not show a single 'memory usage' value) to let us tell something more about the memory usage. In case of inserting records to a database it may be a temporary image created for the current transaction, before it is committed to the real database. Splitting the insertion into many separate transactions (if applicable) may help. I am just guessing, not enough data. P.S. It seems I mis-read the original question (I assumed the computer slows down) and there is no problem. sehe's answer is probably better.
1
0
0
Why does running SQLite (through python) cause memory to "unofficially" fill up?
2
python,sqlite,memory,ubuntu,memory-leaks
0
2011-06-27T11:03:00.000
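As a sketch of the "split the insertion into many separate transactions" suggestion above (the table layout and row source are hypothetical), committing every N rows keeps the pending transaction bounded:

```python
# Sketch: commit in batches so the uncommitted transaction never grows unbounded.
import sqlite3

conn = sqlite3.connect("big.db")
cur = conn.cursor()

rows = ((i, i * 2) for i in range(1000000))   # stand-in for the real data source
BATCH = 100000
for i, row in enumerate(rows, 1):
    cur.execute("INSERT INTO records (a, b) VALUES (?, ?)", row)
    if i % BATCH == 0:
        conn.commit()
conn.commit()
```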
I'm doing a project that is serial based and has to update a database when a barcode is read. Which programming language has better tools for working with a MySQL database and serial communication? I'm debating right now between Python and REALbasic.
0
3
0.291313
0
false
6,498,450
0
1,691
2
0
0
6,498,272
It's hard to imagine that Realbasic is a better choice than Python for any project.
1
0
0
What language is better for serial programming and working with MySQL database? Python? Realbasic?
2
python,mysql,serial-port,realbasic
0
2011-06-27T20:02:00.000
I'm doing a project that is serial based and has to update a database when a barcode is read. Which programming language has better tools for working with a MySQL database and serial communication? I'm debating right now between Python and REALbasic.
0
3
1.2
0
true
6,498,607
0
1,691
2
0
0
6,498,272
Python is a general purpose language with tremendous community support and a "batteries-included" philosophy that leads to simple designs that focus on the business problem at hand. It is a good choice for a wide variety of projects. The only reasons not to choose Python would be: You (or your team) have greater experience in another general purpose language with good library and community support. You have a particular problem that is handled best by a specialty language that was written with that sort of problem in mind. The only thing I know about RealBASIC is that I hadn't heard of it until now, so it's a lock that it doesn't have quite the community of Python. (Exhibit A: 60,000 Python questions on SO, only 49 RealBASIC questions.) And if it is a derivative of BASIC, it would not be a specialty language. Python seems a clear choice here, unless it means learning a new language and you are proficient with RealBASIC.
1
0
0
What language is better for serial programming and working with MySQL database? Python? Realbasic?
2
python,mysql,serial-port,realbasic
0
2011-06-27T20:02:00.000
When building a website, one has to decide how to store the session info when a user is logged in. What are the pros and cons of storing each session in its own file versus storing it in a database?
7
3
1.2
0
true
6,510,307
0
2,180
2
0
0
6,510,075
I generally wouldn't ever store this information in a file - you run the risk of the file being swapped in and out of memory (yes, it could be cached at times), and you would be using something fairly nonstandard rather than an in-memory mechanism designed for the job. In ASP.NET you can use an in-memory collection that is good for use on a single server. If you need multiple load-balanced web servers (a web farm), where a user could hit any server on each request, this option is not good. If the web process restarts, the sessions are lost. They can also time out. You can use a state server in ASP.NET for multiple-server access - this runs outside of your web server's process. If the web process restarts you are OK, and multiple servers can access it. The traffic going to the state server is not encrypted, so in a more secure environment you would ideally use IPSEC policies to secure it. You can use SQL Server to manage state (automatically) by setting up web.config to use SQL Server as your session database. This gives the advantage of a high-performance database and multi-server access. You can use your own sessions in a database if you need them to persist for a long time outside of the normal mechanism and want tighter control over the database fields (maybe you want to query specific fields). Also, just out of curiosity - maybe you are referring to sessions as user preferences? In that case research ASP.NET profiles.
1
0
0
What is the pros/cons of storing session data in file vs database?
3
php,asp.net,python,ruby-on-rails,ruby
0
2011-06-28T16:45:00.000
When building a website, one has to decide how to store the session info when a user is logged in. What are the pros and cons of storing each session in its own file versus storing it in a database?
7
1
0.066568
0
false
6,527,272
0
2,180
2
0
0
6,510,075
I'm guessing, based on your previous questions, that this is being asked in the context of using perl's CGI::Application module, with CGI::Application::Plugin::Session. If you use that module with the default settings, it will write the session data into files stored in the /tmp directory - which is very similar to what PHP does. If your app is running in a shared hosting environment, you probably do NOT want to do this, for security reasons, since other users may be able to view/modify data in /tmp. You can fix this by writing the files into a directory that only you have permission to read/write (i.e., not /tmp). While developing, I much prefer to use YAML for serialization, rather than the default (storable), since it is human-readable. If you have your own web server, and you're able to run your database (mysql) server on the same machine, then storing the session data in a database instead of a file will usually yield higher performance - especially if you're able to maintain a persistent database connection (i.e. using mod_perl or fastcgi). BUT - if your database is on a remote host, and you have to open a new connection each time you need to update session data, then performance may actually be worse, and writing to a file may be better. Note that you can also use sqlite, which looks like a database to your app, but is really just a file on your local file system. Regardless of performance, the database option may be undesirable in shared-host environments because of bandwidth limitations, and other resource restrictions. The performance difference is also probably negligible for a low-traffic site (i.e., a few thousand hits per day).
1
0
0
What is the pros/cons of storing session data in file vs database?
3
php,asp.net,python,ruby-on-rails,ruby
0
2011-06-28T16:45:00.000
I have a pyramid project that uses mongodb for storage. Now I'm trying to write a test, but how do I specify the connection to mongodb? More specifically, which database should I connect to (test?) and how do I use fixtures? In Django it creates a temporary database, but how does it work in Pyramid?
2
2
0.379949
0
false
6,934,811
0
594
1
0
0
6,515,160
Just create a database in your TestCase.setUp and delete it in TestCase.tearDown. You need mongodb running, because there is no "mongolite3" equivalent of sqlite3 for SQL. I doubt that Django is able to create a temporary file to store a mongodb database; it probably just uses sqlite:/// which creates a database with in-memory storage.
1
0
0
How do i create unittest in pyramid with mongodb?
1
python,mongodb,pyramid
1
2011-06-29T02:30:00.000
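A hedged sketch of that setUp/tearDown pattern (the database name and the pymongo spelling are assumptions; older pymongo versions used Connection instead of MongoClient and insert instead of insert_one):

```python
# Sketch: give each test run its own throwaway MongoDB database.
import unittest
import pymongo

class MyViewTests(unittest.TestCase):
    def setUp(self):
        self.client = pymongo.MongoClient("localhost", 27017)
        self.db = self.client["myapp_test"]
        self.db.users.insert_one({"name": "fixture user"})   # simple fixture

    def tearDown(self):
        self.client.drop_database("myapp_test")
```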
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row. Is there any way that I can return an entire column in a django query set?
0
0
0
0
false
6,539,716
1
223
2
0
0
6,539,687
You can use cursor.fetchall() instead of cursor.fetchone() to retrieve all rows, and then extract the necessary field: raw_items = cursor.fetchall(); items = [row[0] for row in raw_items] (raw cursor rows are tuples, so index the column position rather than an attribute).
1
0
0
How do I use django db API to save all the elements of a given column in a dictionary?
2
python,django
0
2011-06-30T18:54:00.000
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row. Is there any way that I can return an entire column in a django query set?
0
1
1.2
0
true
6,539,798
1
223
2
0
0
6,539,687
dict(MyModel.objects.values_list('id', 'my_column')) will return a dictionary with all elements of my_column with the row's id as the key. But probably you're just looking for a list of all the values, which you should receive via MyModel.objects.values_list('my_column', flat=True)!
1
0
0
How do I use django db API to save all the elements of a given column in a dictionary?
2
python,django
0
2011-06-30T18:54:00.000
I have a photo gallery with an album model (just title and date and stuff) and a photo model with a foreign key to the album and three ImageFields in it (regular, mid and thumb). When a user deletes an album I need to delete all the photos related to the album (from the server), then all the DB records that point to the album, and then the album itself... I couldn't find anything about this and actually found so many answers where one says the opposite of the other. Can anyone please clarify this point - how is this done in the real world? Thank you very much, Erez
1
0
1.2
0
true
6,553,381
1
80
1
0
0
6,550,003
Here is a possible answer to the question I figured out: getting the list of albums as a string, in my case separated by commas. You need to import shutil, then:

    @login_required
    def remove_albums(request):
        if request.is_ajax():
            if request.method == 'POST':
                # if the ajax call for delete was ok we get the list of albums to delete
                albums_list = request.REQUEST['albums_list'].rsplit(',')
                for album in albums_list:
                    obj_album = Album.objects.get(id=int(album))
                    # getting the directory for the images that need to be deleted
                    dir_path = MEDIA_ROOT + '/images/galleries/%d' % obj_album.id
                    # deleting the DB record
                    obj_album.delete()
                    # testing if there is a folder (there might be a record with no folder
                    # if no file was uploaded - deleting the album before uploading images)
                    try:
                        # deleting the folder and all the files in it
                        shutil.rmtree(dir_path)
                    except OSError:
                        pass
        return HttpResponse('')

Have fun and good luck :-)
1
0
0
How to delete an object and all related objects with all imageFields insite them (photo gallery)
1
python,django,django-models,django-views
0
2011-07-01T15:30:00.000
(1) What's the fastest way to check if an item I'm about to "insert" into a MongoDB collection is unique (and if so not insert it) (2) For an existing database, what's the fastest way to look at all the entries and remove duplicates but keep one copy i.e. like a "set" function: {a,b,c,a,a,b} -> {a,b,c} I am aware that technically speaking each entry is unique, since they get a unique ObjectID You may assume the entries are completely flat key:value lists Solutions with indexing are fine I prefer Python code (i.e. mongo python API) if possible Thanks!
0
2
0.379949
0
false
6,567,552
0
824
1
0
0
6,567,511
(1) Create a unique index on the related columns and catch the error at insertion time.
1
0
1
Fastest Way to (1) not insert duplicate entry (2) consolidate duplicates in Mongo DB?
1
python,mongodb
0
2011-07-04T05:11:00.000
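A hedged pymongo sketch of part (1) above (the field and collection names are hypothetical): a unique index makes MongoDB reject duplicates at insert time, and the error can simply be caught:

```python
# Sketch: let a unique index enforce uniqueness; duplicates raise DuplicateKeyError.
import pymongo
from pymongo.errors import DuplicateKeyError

client = pymongo.MongoClient()
coll = client.mydb.items

coll.create_index([("a", pymongo.ASCENDING), ("b", pymongo.ASCENDING)], unique=True)

try:
    coll.insert_one({"a": 1, "b": 2})
except DuplicateKeyError:
    pass  # an identical entry already exists; skip it
```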
Our site has two separate projects connected to the same database. This is implemented by importing the models from project1 into project2 and using it to access and manipulate data. This works fine on our test server, but we are planning deployment and we decided we would rather have the projects on two separate machines, with the database on a third one. I have been looking around for ideas on how to import the model from a project on another machine but that doesn't seem to be possible. An obvious solution would be to put the models in a separate app and have it on both boxes, but that means code is duplicated and changes have to be applied twice. I'm looking for suggestions on how to deal with this and am wondering if other people have encountered similar issues. We'll be deploying on AWS if that helps. Thanks.
2
1
0.197375
0
false
6,574,660
1
253
1
0
0
6,572,203
This isn't really a Django question. It is more a Python question. However, to answer your question: Django is going to have to be able to import these files one way or another. If they are on separate machines, you really should refactor the code out into its own app and then install this app on each of the machines. The only other way I can think of to do this is to make your own import hook that can import a file from across a network, but that is a really bad idea for a multitude of reasons.
1
0
0
separate django projects on different machines using a common database
1
python,django,deployment,architecture,amazon-web-services
0
2011-07-04T13:31:00.000
I'm writing my first web site and am dealing with user registration. One common problem, for me as for everyone else, is detecting whether a user already exists. I am writing the app with Python, with Postgres as the database. I have currently come up with 2 ideas: 1) lock(mutex) u = select from db where name = input_name if u == null insert into db (name) values (input_name) else return 'user already exist' unlock(mutex) 2) try: insert into db (name) values(input) except: return 'user already exist' The first way is to use a mutex lock for clear logic, while the second way uses an exception to indicate the user's existence. Can anyone discuss the pros and cons of both methods?
0
2
0.197375
0
false
6,580,794
0
235
2
0
0
6,580,723
I think both will work, and both are equally bad ideas. :) My point is that implementing user authentication in python/pg has been done so many times in the past that there's hardly justification for writing it yourself. Have you had a look at Django, for example? It will take care of this for you, and much more, and let you focus your efforts on your particular application.
1
0
0
Detect users already exist in database on user registration
2
python,sql,database
0
2011-07-05T09:47:00.000
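For completeness, option 2 in the question only works safely if the column carries a UNIQUE constraint; a hedged psycopg2 sketch (the table and DSN are placeholders) of that approach:

```python
# Sketch: rely on a UNIQUE constraint and catch the violation on insert.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")
cur = conn.cursor()
# one-time setup elsewhere: CREATE TABLE users (name text UNIQUE)

try:
    cur.execute("INSERT INTO users (name) VALUES (%s)", ("alice",))
    conn.commit()
except psycopg2.IntegrityError:
    conn.rollback()
    print("user already exists")
```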
I'm writing my first web site and am dealing with user registration. One common problem, for me as for everyone else, is detecting whether a user already exists. I am writing the app with Python, with Postgres as the database. I have currently come up with 2 ideas: 1) lock(mutex) u = select from db where name = input_name if u == null insert into db (name) values (input_name) else return 'user already exist' unlock(mutex) 2) try: insert into db (name) values(input) except: return 'user already exist' The first way is to use a mutex lock for clear logic, while the second way uses an exception to indicate the user's existence. Can anyone discuss the pros and cons of both methods?
0
0
0
0
false
6,580,784
0
235
2
0
0
6,580,723
Slightly differently, I usually do a select query via AJAX to determine if a username already exists. That way I can display a message on the UI explaining that the name is already taken and suggest another before they submit the registration form.
1
0
0
Detect users already exist in database on user registration
2
python,sql,database
0
2011-07-05T09:47:00.000
I'm trying to use Spyder with pyodbc to connect to MySQL using a PyQt4 GUI framework. I have the pyodbc part in Spyder figured out. How do I use PyQt4 to get the info into GUIs? I'm looking to use the GUI on Fedora and Win x64. Edit: I figured out the Fedora driver. Can anyone help me with the QMYSQL driver?
0
0
0
0
false
6,644,662
0
842
1
0
0
6,582,404
Have you considered using PyQt's built-in MySQL support? This could make it a bit easier to display DB info, depending on what you want the interface to look like.
1
0
0
How connect Spyder to mysql on Winx64 and Fedora?
2
python,pyqt4,spyder
0
2011-07-05T12:07:00.000
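A hedged sketch of PyQt4's built-in MySQL support mentioned above (it requires the QMYSQL Qt SQL driver plugin to be installed; connection details and the table are placeholders):

```python
# Sketch: query MySQL through Qt's own SQL layer and show the result in a widget.
import sys
from PyQt4 import QtGui, QtSql

app = QtGui.QApplication(sys.argv)

db = QtSql.QSqlDatabase.addDatabase("QMYSQL")
db.setHostName("localhost")
db.setDatabaseName("mydb")
db.setUserName("user")
db.setPassword("secret")

if db.open():
    model = QtSql.QSqlTableModel()
    model.setTable("customers")
    model.select()

    view = QtGui.QTableView()
    view.setModel(model)          # the table contents are now visible in the GUI
    view.show()
    sys.exit(app.exec_())
```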
I'm trying to have a purely in-memory SQLite database in Django, and I think I have it working, except for an annoying problem: I need to run syncdb before using the database, which isn't too much of a problem. The problem is that it needs to create a superuser (in the auth_user table, I think) which requires interactive input. For my purposes, I don't want this -- I just want to create it in memory, and I really don't care about the password because I'm the only user. :) I just want to hard-code a password somewhere, but I have no idea how to do this programmatically. Any ideas?
1
3
1.2
0
true
6,600,219
1
1,200
1
0
0
6,599,716
Disconnect django.contrib.auth.management.create_superuser from the post_syncdb signal, and instead connect your own function that creates and saves a new superuser User with the desired password.
1
0
0
Django In-Memory SQLite3 Database
3
python,database,django,sqlite,in-memory-database
0
2011-07-06T16:20:00.000
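A hedged sketch of that answer for old (pre-1.7, syncdb-era) Django, which is what it assumes; the credentials are placeholders and the code would live somewhere that is imported before syncdb fires the signal:

```python
# Sketch: skip the interactive superuser prompt and create a fixed superuser instead.
from django.db.models import signals
from django.contrib.auth import models as auth_models
from django.contrib.auth.management import create_superuser

def make_test_superuser(app, created_models, verbosity, **kwargs):
    if auth_models.User in created_models:
        auth_models.User.objects.create_superuser(
            "admin", "admin@example.com", "password")

signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_models,
    dispatch_uid="django.contrib.auth.management.create_superuser")
signals.post_syncdb.connect(make_test_superuser, sender=auth_models)
```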
Is there an elegant way to do an INSERT ... ON DUPLICATE KEY UPDATE in SQLAlchemy? I mean something with a syntax similar to inserter.insert().execute(list_of_dictionaries) ?
44
-1
-0.022219
0
false
17,374,720
0
60,718
1
0
0
6,611,563
As none of these solutions seems all that elegant, a brute-force way is to query to see if the row exists. If it does, delete the row and then insert; otherwise just insert. Obviously there is some overhead involved, but it does not rely on modifying the raw SQL and it works on non-ORM stuff.
1
0
1
SQLAlchemy ON DUPLICATE KEY UPDATE
9
python,mysql,sqlalchemy
0
2011-07-07T13:43:00.000
I have a python loader using Andy McCurdy's python library that opens multiple Redis DB connections and sets millions of keys, looping through files of lines each containing an integer that is the redis-db number for that record. Altogether, only 20 databases are open at the present time, but eventually there may be as many as 100 or more. I notice that the redis log (set to verbose) always tells me there are "4 clients connected (0 slaves)", though I know that my 20 are open and are being used. So I'm guessing this is about the connection pooling support built into the python library. Am I correct in that guess? If so the real question is: is there a way to increase the pool size -- I have plenty of machine resources, a lot dedicated to Redis? Would increasing the pool size help performance as the number of virtual connections I'm making goes up? At this point, I am actually hitting only ONE connection at a time though I have many open as I shuffle input records among them. But eventually there will be many scripts (2 dozen?) hitting Redis in parallel, mostly reading, and I am wondering what effect increasing the pool size would have. Thanks, Matthew
1
1
0.197375
0
false
6,703,919
0
1,371
1
0
0
6,628,953
So I'm guessing this is about the connection pooling support built into the python library. Am I correct in that guess? Yes. If so the real question is: is there a way to increase the pool size? Not needed - it will increase connections up to 2**31 by default (Andy's lib), so your connections are idle anyway. If you want to increase performance, you will need to change the application using Redis. And I am wondering what effect increasing the pool size would have. None, at least not in this case. If Redis becomes the bottleneck at some point and you have a multi-core server, you must run multiple Redis instances to increase performance, as it only runs on a single core. When you run multiple instances and you are doing mostly reads, the slave feature can increase performance, as the slaves can be used for all the reads.
1
0
0
configuring connection-pool size with Andy McCurdy's python-for-redis library
1
python,configuration,redis,connection-pooling
0
2011-07-08T18:43:00.000
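For reference, a hedged redis-py sketch showing the pool being shared explicitly (host and db numbers are placeholders); the pool hands out connections lazily, which is why the server only reports the few that are actually in use:

```python
# Sketch: one explicit pool shared by every client object created for that db.
import redis

pool = redis.ConnectionPool(host="localhost", port=6379, db=0)

r = redis.Redis(connection_pool=pool)   # many Redis() objects can share one pool
r.set("key", "value")
print(r.get("key"))
```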
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient. Is this even a plausible strategy? Or is this the way they do it: When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine? If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?
0
2
0.197375
0
false
6,639,561
1
2,127
2
0
0
6,639,247
I don't think this question has anything to do with SQL, really--except that you might retrieve an expiration time from SQL. What you really care about is just how to display the timeout real-time in the browser, right? Obviously the easiest way is just to send a "seconds remaining" counter to the page, either on the initial load, or as part of an AJAX request, then use Javascript to display the timer, and update it every second with the current value. I would opt for using a "seconds remaining" counter rather than an "end datetime", because you can't trust a browser's clock to be set correctly--but you probably can trust it to count down seconds correctly. If you don't trust Javascript, or the client's clock, to be accurate, you could periodically re-send the current "seconds remaining" value to the browser via AJAX. I wouldn't do this every second, maybe every 15 or 60 seconds at most. As for deleting/moving data when the clock expires, you'll need to do all of that in Javascript. I'm not 100% sure I answered all of your questions, but your questions seem a bit scattered anyway. If you need more clarification on the theory of operation, please ask.
1
0
0
Live countdown clock with django and sql?
2
javascript,python,django,time,countdown
0
2011-07-10T04:52:00.000
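A hedged sketch of the server side of that suggestion (the model and URL wiring are hypothetical): compute the seconds remaining on the server and hand the counter to the page's JavaScript via AJAX:

```python
# Sketch: the view returns "seconds remaining"; the ticking happens in JavaScript.
import json
from datetime import datetime
from django.http import HttpResponse
from myapp.models import Auction          # hypothetical model with an end_time field

def seconds_remaining(request, auction_id):
    auction = Auction.objects.get(pk=auction_id)
    remaining = max(0, int((auction.end_time - datetime.now()).total_seconds()))
    return HttpResponse(json.dumps({"seconds": remaining}),
                        content_type="application/json")
```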
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient. Is this even a plausible strategy? Or is this the way they do it: When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine? If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock?
0
0
0
0
false
6,639,878
1
2,127
2
0
0
6,639,247
I also encountered this problem a while ago. First of all, your problem is related neither to Django nor to SQL; it is a general concept, and it is not very easy to implement because of the overhead on the server. One solution that comes to mind is keeping the start time of the process in the database. When someone requests to see the remaining time, read it from the database, subtract the current time, and serve that value; in your browser, initialize your JavaScript function with that value and count down (say, for 15 seconds). After that, do the same operation with AJAX without waiting for the user's request. However, there could be other implementations depending on your application - if you explain your application in detail, there could be other solutions. For example, if you implement a questionnaire with a limited time, then on every answer submit you should pass the calculated JavaScript value for that second.
1
0
0
Live countdown clock with django and sql?
2
javascript,python,django,time,countdown
0
2011-07-10T04:52:00.000
I have a Django project which has a mysql database backend. How can I export contents from my db to an Excel (xls, xlsx) format?
0
0
0
0
false
6,650,011
1
1,028
1
0
0
6,649,990
phpMyAdmin has an Export tab, and you can export in CSV. This can be imported into Excel.
1
0
0
MySQLdb to Excel
4
python,mysql,django,excel
0
2011-07-11T12:20:00.000
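If you'd rather do the export from Python itself, a hedged sketch (connection details and the query are placeholders) that writes a CSV Excel can open directly is only a few lines:

```python
# Sketch: dump a query result to CSV, which Excel (xls/xlsx) imports cleanly.
import csv
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="mysite")
cur = conn.cursor()
cur.execute("SELECT id, title, created FROM blog_post")

with open("export.csv", "wb") as f:            # use "w", newline="" on Python 3
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])   # header row
    writer.writerows(cur.fetchall())
```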
I try to connect to database in a domain from my virtual machine. It works on XP, but somehow does not work on Win7 and quitting with: "OperationalError: (1042, "Can't get hostname for your address")" Now I tried disable Firewall and stuff, but that doesn't matter anyway. I don't need the DNS resolving, which will only slow everything down. So I want to use the option "skip-name-resolve", but there is no my.ini or my.cnf when using MySQLdb for Python, so how can I still use this option? Thanks for your help -Alex
12
1
0.099668
0
false
6,668,116
0
60,490
1
0
0
6,668,073
This is an option which needs to be set in the MySQL configuration file on the server. It can't be set by client APIs such as MySQLdb. This is because of the potential security implications. That is, I may want to deny access from a particular hostname. With skip-name-resolve enabled, this won't work. (Admittedly, access control via hostname is probably not the best idea anyway.)
1
0
0
How to use the option skip-name-resolve when using MySQLdb for Python?
2
python,mysql,mysql-python,resolve
0
2011-07-12T17:00:00.000
I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help. Initially, when I would add an object to the database I would insert it in a branch dedicated to that object type. In order to prevent multiple objects representing the same entity I added a method that would iterate over existing objects in the branch in order to find duplicates. This worked at first, but as the database grew in size the time it took to load each object into memory and check attributes grew exponentially and unacceptably. To solve that issue I started to create indexes based on the attributes in the object, so that when an object was added it would be saved in the type branch as well as within an attribute value index branch. For example, say I was saving a person object with attributes firstName = 'John' and lastName = 'Smith'; the object would be appended to the person object type branch and would also be appended to lists within the attribute index branch with keys 'John' and 'Smith'. This saved a lot of time with duplicate checking, since the new object could be analysed and only the set of objects which intersect within the attribute indexes would need to be checked. However, I quickly ran into another issue with regards to updating objects. The indexes would need to be updated to reflect the fact that they may not be accurate any more. This requires either remembering old values so that they can be directly accessed and the object removed, or iterating over all values of an attribute type in order to find and then remove the object. Either way performance is quickly beginning to degrade again and I can't figure out a way to solve it. Have you had this kind of issue before? What did you do to solve it, or is this just something that I have to deal with when using OODBMSs? Thanks in advance for the help.
5
8
1.2
0
true
6,674,416
0
601
2
0
0
6,668,234
Yes, repoze.catalog is nice, and well documented. In short: don't make indexing part of your site structure! Look at using a container/item hierarchy to store and traverse content item objects; plan to be able to traverse content by either (a) path (graph edges look like a filesystem) or (b) identifying singleton containers at some distinct location. Identify your content using either RFC 4122 UUIDs (uuid.UUID type) or 64-bit integers. Use a central catalog to index (e.g. repoze.catalog); the catalog should be at a known location relative to the root application object of your ZODB. Your catalog will likely index attributes of objects and return record-ids (usually integers) on query. Your job is to map those integer ids (perhaps indirecting via UUIDs) to some physical traversal path in the database where you are storing content. It helps if you use zope.location and zope.container for common interfaces for traversal of your object graph from root/application downward. Use zope.lifecycleevent handlers to index content and keep things fresh. The problem, generalized: ZODB is too flexible - it is just a persistent object graph with transactions, but this leaves room for you to sink or swim in your own data-structures and interfaces. The solution, generalized: usually, just picking pre-existing idioms from the community around the ZODB will work: zope.lifecycleevent handlers, "containerish" traversal using zope.container and zope.location, and something like repoze.catalog. More particularly: only when you exhaust the generalized idioms and know why they won't work, try to build your own indexes using the various flavors of BTrees in ZODB. I actually do this more than I care to admit, but usually have good cause. In all cases, keep your indexes (search, discovery) and site (traversal and storage) structure distinct. The idioms for the problem domain: master ZODB BTrees. You likely want: To store content objects as subclasses of Persistent in containers that are subclasses of OOBTree providing container interfaces (see below). To store BTrees for your catalog or global indexes, or use packages like repoze.catalog and zope.index to abstract that detail away (hint: catalog solutions typically store indexes as OIBTrees that will yield integer record ids for search results; you then typically have some sort of document mapper utility that translates those record ids into something resolvable in your application, like a uuid (provided you can traverse the graph to the UUID) or a path (the way the Zope2 catalog does)). IMHO, don't bother working with intids and key-references and such (these are less idiomatic and more difficult if you don't need them). Just use a Catalog and DocumentMap from repoze.catalog to get results in integer-to-uuid or path form, and then figure out how to get your object. Note, you likely want some utility/singleton that has the job of retrieving your object given an id or uuid returned from a search. Use zope.lifecycleevent or a similar package that provides synchronous event callback (handler) registrations. These handlers are what you should call whenever an atomic edit is made on your object (likely once per transaction, but not in transaction machinery).
Learn the Zope Component Architecture; not an absolute requirement, but surely helpful, even if just to understand the zope.interface interfaces of upstream packages like zope.container. It also helps to understand how Zope2 (ZCatalog) does this: a catalog fronts for multiple indexes of various sorts, which each search for a query, each have specialized data structures, and each return integer record id sequences. These are merged across indexes by the catalog doing set intersections and returned as a lazy mapping of "brain" objects containing metadata stubs (each brain has a getObject() method to get the actual content object). Getting actual objects from a catalog search relies upon the Zope2 idiom of using paths from the root application object to identify the location of the item cataloged.
1
0
1
Method for indexing an object database
2
python,indexing,zodb,object-oriented-database
0
2011-07-12T17:14:00.000
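If you do end up hand-rolling indexes with BTrees as the answer describes, the key to the update problem in the question is to look up the old attribute value before changing it; a hedged sketch (the index layout and object ids are assumptions):

```python
# Sketch: a per-attribute index mapping value -> set of object ids, kept in the ZODB root.
from BTrees.OOBTree import OOBTree, OOTreeSet

def add_to_index(idx, value, obj_id):
    if value not in idx:
        idx[value] = OOTreeSet()
    idx[value].insert(obj_id)

def reindex_lastname(root, person_id, person, new_lastname):
    idx = root['by_lastName']               # OOBTree: lastName -> set of ids
    idx[person.lastName].remove(person_id)  # find the entry via the *old* value first
    person.lastName = new_lastname
    add_to_index(idx, new_lastname, person_id)
```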
I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help. Initially, when I would add an object to the database I would insert it in a branch dedicated to that object type. In order to prevent multiple objects representing the same entity I added a method that would iterate over existing objects in the branch in order to find duplicates. This worked at first, but as the database grew in size the time it took to load each object into memory and check attributes grew exponentially and unacceptably. To solve that issue I started to create indexes based on the attributes in the object, so that when an object was added it would be saved in the type branch as well as within an attribute value index branch. For example, say I was saving a person object with attributes firstName = 'John' and lastName = 'Smith'; the object would be appended to the person object type branch and would also be appended to lists within the attribute index branch with keys 'John' and 'Smith'. This saved a lot of time with duplicate checking, since the new object could be analysed and only the set of objects which intersect within the attribute indexes would need to be checked. However, I quickly ran into another issue with regards to updating objects. The indexes would need to be updated to reflect the fact that they may not be accurate any more. This requires either remembering old values so that they can be directly accessed and the object removed, or iterating over all values of an attribute type in order to find and then remove the object. Either way performance is quickly beginning to degrade again and I can't figure out a way to solve it. Have you had this kind of issue before? What did you do to solve it, or is this just something that I have to deal with when using OODBMSs? Thanks in advance for the help.
5
0
0
0
false
6,668,904
0
601
2
0
0
6,668,234
Think about using an attribute hash (something like Java's hashCode()), then use the 32-bit hash value as the key. Python has a hash function, but I am not really familiar with it.
1
0
1
Method for indexing an object database
2
python,indexing,zodb,object-oriented-database
0
2011-07-12T17:14:00.000
I am looking at the Flask tutorial, and it suggests creating a new database connection for each web request. Is that the right way to do things? I always thought that the database connection should be created only once for each thread. Can that be done, while keeping the application thread-safe, with Flask or other Python web servers?
21
0
0
0
false
6,698,054
1
11,438
1
0
0
6,688,413
In my experience, it's often a good idea to close connections frequently. In particular, MySQL likes to close connections that have been idle for a while, and sometimes that can leave the persistent connection in a stale state that can make the application unresponsive. What you really want to do is optimize the "dead connection time", the fraction of the time a connection is up but isn't doing any work. In the case of creating a new connection with every request, that dead time is really just the setup and teardown time. If only make a connection once (per thread), and it never goes bad, then the dead time is the idle time. When your application is serving only a few requests, the number of connections that occur will also be small, and so there's not much advantage of keeping a connection open, but idle. On the other extreme, when the application is very busy, connections are almost never idle, and closing a connection that will just be reopened immediately is also wasted. In the middle, when new requests sometimes follow in flight requests, but sometimes not, you'll have to do some performance tuning on things like pool size, request timeout, and so on. A very busy app, which uses a connection pool to keep connections open will only ever see one kind of dead time; waiting for requests that will never return because the connection has gone bad. A simple solution to this problem is to execute a known, good query (which in MySQL is spelled SELECT 1) before providing a connection from the pool to a request and recycle the connection if it doesn't return quickly.
1
0
0
How to preserve database connection in a python web server
3
python,mysql,flask
0
2011-07-14T04:13:00.000
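For concreteness, a hedged sketch of the per-request pattern the tutorial uses (the database file and table are placeholders); the connection is opened lazily on first use and always closed when the request ends:

```python
# Sketch: one connection per request, opened on demand and closed on teardown.
import sqlite3
from flask import Flask, g

app = Flask(__name__)

def get_db():
    if not hasattr(g, "db"):
        g.db = sqlite3.connect("app.db")
    return g.db

@app.teardown_appcontext
def close_db(exc):
    db = getattr(g, "db", None)
    if db is not None:
        db.close()

@app.route("/")
def index():
    count = get_db().execute("SELECT count(*) FROM entries").fetchone()[0]
    return "%d entries" % count
```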
The datetime module does date validation and math, which is fine when you care about reality. I need an object that holds dates as given, even if they were invalid. datetime is way too strict, as sometimes I know the year only, or the year and month only, and sometimes I have a date like 2011-02-30. Is there a module out there that is like datetime but that can handle invalid dates? If not, what's the best way to handle this while duplicating as little functionality as possible and still allowing date math when it is possible to perform? UPDATE: Motivation for this is integration with multiple systems that use dates and don't care about invalid dates (MySQL and Perl), in addition to wanting the ability to tell basic general ranges of time. For fuzzy date math, start from the beginning of the known unit of time (if I know year and month but not day, use the first; if I know year but no month or day, use January first). This last bit is not necessary but would be nice, and I get why it is not common, as people who need special-case date math will probably build it themselves. One of the major issues I have is loading dates from MySQL into Python using SQLAlchemy and mysqldb -- if you load a value from a date column in MySQL that looks like '2011-01-00', you get None in Python. This is not cool by any stretch.
6
0
0
0
false
12,212,950
0
2,334
1
0
0
6,697,770
I haven't heard of such a module out there and don't think there is one. I would probably end up storing two dates for every instance: 1. the original input as a string, which could contain anything, even "N/A", just for showing back the original value, and 2. a parsed and "normalized" datetime object which is the closest representation of the input. Depending on the purpose I would allow Null/None objects where it really couldn't be estimated (like the mentioned "N/A" case) or not. This solution will allow you to revert/change the "estimation", as you do not lose any information. If you don't care about it so much, SQLAlchemy allows declaring your own column and data types for transparently converting such values back and forth into a string column in the DB.
1
0
1
allowing invalid dates in python datetime
2
python,datetime
0
2011-07-14T17:56:00.000
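A hedged sketch of the two-field idea from the answer above (the parsing rules are assumptions matching the question's "start of the known unit" convention):

```python
# Sketch: keep the raw string plus a best-effort datetime for math/sorting.
from datetime import datetime

class FuzzyDate(object):
    def __init__(self, raw):
        self.raw = raw                      # always preserved verbatim
        self.approx = self._parse(raw)      # None if nothing can be salvaged

    @staticmethod
    def _parse(raw):
        parts = (raw.split("-") + ["0", "0"])[:3]
        try:
            year, month, day = (int(p) for p in parts)
        except ValueError:
            return None
        month = month or 1                  # unknown month -> January
        day = day or 1                      # unknown day -> the 1st
        try:
            return datetime(year, month, day)
        except ValueError:
            return datetime(year, month, 1)  # e.g. 2011-02-30 -> start of the month

print(FuzzyDate("2011-01-00").approx)   # -> 2011-01-01 00:00:00
print(FuzzyDate("2011-02-30").approx)   # -> 2011-02-01 00:00:00
```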
Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae. Preferably I would like to do this on Linux if possible, but can do it from in a VM if there is no other way.
22
5
0.16514
0
false
21,573,501
0
37,522
2
0
0
6,698,229
A long time after the original question, but the last answer pushed it to the top of the feed again. Others might benefit from my experience using Python and Excel. I use Excel and Python quite a bit. Instead of using the xlrd and xlwt modules directly, I normally use pandas. I think pandas uses these modules as imports, but I find it much easier to use the pandas-provided framework to create and read the spreadsheets. pandas's DataFrame structure is very "spreadsheet-like" and makes life a lot easier in my opinion. The other option that I use (not in direct answer to your problem) is DataNitro. It allows you to use Python directly within Excel. It's a different use case, but you would use it where you would normally have to write VBA code in Excel.
1
0
0
Excel Python API
6
python,excel
0
2011-07-14T18:34:00.000
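A tiny sketch of the pandas route described above; the file name, sheet name and columns are invented for the example, and it assumes a reasonably recent pandas plus an Excel writer engine such as openpyxl installed.

import pandas as pd

# write: build a DataFrame and dump it to a sheet in a new workbook
df = pd.DataFrame({"item": ["a", "b", "c"], "qty": [1, 2, 3]})
df.to_excel("report.xlsx", sheet_name="inventory", index=False)

# read: each sheet comes back as a DataFrame
df2 = pd.read_excel("report.xlsx", sheet_name="inventory")
print(df2.head())

Note that pandas is geared towards data rather than formulae; for formulae you would drop down to xlwt/openpyxl or the COM approach in the next answer.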
Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae. Preferably I would like to do this on Linux if possible, but can do it from a VM if there is no other way.
22
3
0.099668
0
false
6,698,343
0
37,522
2
0
0
6,698,229
It's surely possible through the Excel object model via COM: just use the win32com modules for Python. I can't remember more, but I once controlled the Media Player through COM from Python. It was a piece of cake.
1
0
0
Excel Python API
6
python,excel
0
2011-07-14T18:34:00.000
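For the record, a rough sketch of what the COM approach looks like — Windows only, and it assumes the pywin32 package plus an installed copy of Excel; the output path is just an example.

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False                      # drive Excel in the background
wb = excel.Workbooks.Add()                 # a new workbook
ws = wb.Worksheets(1)
ws.Name = "data"
ws.Cells(1, 1).Value = 10
ws.Cells(1, 2).Value = 32
ws.Cells(1, 3).Formula = "=A1+B1"          # formulae are written as plain strings
wb.SaveAs(r"C:\temp\demo.xlsx")
wb.Close(SaveChanges=False)
excel.Quit()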
Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e. CSV, SQL, etc.) en masse. So far I've been looking at PHP, since it has a library devoted to Paradox data processing. However, while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions), I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported. Also, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size. This is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use), so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this. Thanks very much in advance for any insights and assistance.
1
0
1.2
0
true
6,711,276
0
734
2
0
0
6,709,833
If you intend to just convert the data, which I guess is a process you do only once, you will run the script locally as a command-line script. For that you don't need a web site, and thus no XAMPP. What language you take is secondary, except you say that PHP has a library. Does Python or another language have one? About your concern of error detection: why not test your script with only one file first? If that conversion is successful you can build your loop and test it on maybe five files, i.e. have a counter that ends the process after that number. If that is still okay you can go on with the rest. You can also write log data and dump a result for every 100 files processed. This way you can see if your script is doing something or idling.
1
0
0
Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts
2
php,python,mysql,xampp,php-gtk
0
2011-07-15T15:56:00.000
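To make the "test on a few files first, then log progress" advice above concrete, here is a sketch in Python; convert_table is a deliberate placeholder for whichever Paradox-reading routine you end up using, and the .db extension and paths are assumptions.

import glob
import logging
import os

logging.basicConfig(filename="conversion.log", level=logging.INFO)

def convert_table(db_path, csv_path):
    """Placeholder: plug in the actual Paradox-to-CSV conversion here."""
    raise NotImplementedError

def run(source_dir, target_dir, limit=None):
    files = glob.glob(os.path.join(source_dir, "*.db"))       # Paradox tables
    for count, db_path in enumerate(files, start=1):
        if limit is not None and count > limit:
            break                                              # trial run: stop early
        csv_path = os.path.join(target_dir, os.path.basename(db_path) + ".csv")
        try:
            convert_table(db_path, csv_path)
        except Exception:
            logging.exception("failed on %s", db_path)         # record the failure, keep going
            continue
        if count % 100 == 0:
            logging.info("processed %d of %d files", count, len(files))

# start with run(src, dst, limit=1), then limit=5, then drop the limit entirely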
Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e. CSV, SQL, etc.) en masse. So far I've been looking at PHP, since it has a library devoted to Paradox data processing. However, while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions), I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported. Also, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size. This is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use), so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this. Thanks very much in advance for any insights and assistance.
1
1
0.099668
0
false
39,728,385
0
734
2
0
0
6,709,833
This is doubtless far too late to help you, but for posterity... If one has a Corel Paradox working environment, one can just use it to ease the transition. We moved the Corel Paradox 9 tables we had into an Oracle schema we built by connecting to the schema (using an alias such as SCHEMA001) and then writing this procedure in a script from inside Paradox:

Proc writeTable(targetTable String)
   errorTrapOnWarnings(Yes)
   try
      tc.open(targetTable)
      tc.copy(":SCHEMA001:" + targetTable)
      tc.close()
   onFail
      errorShow()
   endTry
endProc

One could highly refine this with more Paradox programming, but you get the idea. One thing we discovered, though, is that Paradox uses double quotes for the column names when it creates the Oracle version, which means you can get lower-case letters in column names in Oracle, which is a pain. We corrected that by writing a quick Oracle query to upper() all the resulting column names. We called the procedure like so:

Var
   targetTable String
   tc TCursor
endVar

method run(var eventInfo Event)
   targetTable = "SomeTableName"
   writeTable(targetTable)
   msgInfo("TransferData.ssl--script finished", "That's all, folks!")
   return
endMethod
1
0
0
Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts
2
php,python,mysql,xampp,php-gtk
0
2011-07-15T15:56:00.000
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general. I need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.
9
0
0
0
false
7,404,552
0
14,347
4
0
0
6,713,715
SQLAlchemy does the sanitizing for you if you use regular queries. Maybe the problem is that you use a LIKE clause: LIKE requires additional escaping for the wildcard symbols _ and %. Thus you will need a replace method of your own if you want to quote a LIKE expression.
1
0
0
Escaping special characters in filepaths using SQLAlchemy
4
python,mysql,sqlalchemy,escaping,filepath
0
2011-07-15T22:12:00.000
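A small sketch of the extra escaping the answer above refers to. The helper escapes the LIKE wildcards plus the escape character itself; the File model, its path column and the session are hypothetical names, and the query line assumes SQLAlchemy's like() operator with its escape argument.

def escape_like(value, esc="\\"):
    """Escape the escape character, % and _ so they match literally in a LIKE pattern."""
    return (value.replace(esc, esc + esc)
                 .replace("%", esc + "%")
                 .replace("_", esc + "_"))

pattern = "%" + escape_like(r"C:\My_Files\report 100%.txt") + "%"
# with a hypothetical File model and an open session:
#     session.query(File).filter(File.path.like(pattern, escape="\\"))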
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general. I need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.
9
-3
-0.148885
0
false
6,720,094
0
14,347
4
0
0
6,713,715
Why do you need to escape the file paths? As long as you are not manually writing select / insert queries, SQLAlchemy will take care of the escaping when it generates the query internally. The file paths can be inserted as they are into the database.
1
0
0
Escaping special characters in filepaths using SQLAlchemy
4
python,mysql,sqlalchemy,escaping,filepath
0
2011-07-15T22:12:00.000
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general. I need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.
9
-4
-1
0
false
7,435,697
0
14,347
4
0
0
6,713,715
You don't need to do anything; SQLAlchemy will do it for you.
1
0
0
Escaping special characters in filepaths using SQLAlchemy
4
python,mysql,sqlalchemy,escaping,filepath
0
2011-07-15T22:12:00.000
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of sanitizing filepaths with SQLAlchemy or dealing with filepaths in MySQL in general. I need the solution to be OS-agnostic and established. It does not need to be implemented in SA. I'll accept any procedure for encoding that works; failing that, I need a list of all chars that need to be escaped and a smart choice of an escape char.
9
-3
-0.148885
0
false
7,479,678
0
14,347
4
0
0
6,713,715
As I know there isn’t what you are looking for in SQLAlchemy. Just go basestring.replace() method by yourself.
1
0
0
Escaping special characters in filepaths using SQLAlchemy
4
python,mysql,sqlalchemy,escaping,filepath
0
2011-07-15T22:12:00.000
I am working in Python and using xlwt. I have got a sample Excel sheet and have to generate the same Excel sheet from Python. Now the problem is that the heading columns are highlighted using some color from the Excel color palette and I am not able to find the name of the color. I need to generate an exact copy of the sample given to me. Is there any function in xlwt which lets me read the color of a cell of one sheet and then put that color in my sheet?
3
0
0
0
false
15,435,059
0
1,582
1
0
0
6,723,242
It's best to read the colours from the sample given to you with xlrd. If there are only a few different colours and they stay the same over time, you can also open the file in Excel and use a colour-picker tool to get the RGB values of the relevant cells.
1
0
0
Reading background color of a cell of an excel sheet from python?
1
python,excel,colors,cell,xlwt
0
2011-07-17T10:29:00.000
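A sketch of the xlrd side of this, which should work for .xls files when the workbook is opened with formatting_info=True; the file name and cell position are examples.

import xlrd

book = xlrd.open_workbook("sample.xls", formatting_info=True)  # needed to get colour info
sheet = book.sheet_by_index(0)

row, col = 0, 0                                    # the heading cell to inspect
xf = book.xf_list[sheet.cell_xf_index(row, col)]   # the cell's extended format record
colour_index = xf.background.pattern_colour_index
print(colour_index, book.colour_map.get(colour_index))  # palette index and (r, g, b) or None

On the writing side, xlwt's easyxf, e.g. easyxf('pattern: pattern solid, fore_colour yellow'), can then apply a matching fill.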
So I installed the Bitnami Django stack, hoping for the proclaimed 'ready-to-run' versions of Python and MySQL. However, I can't get Python to syncdb: "Error loading MySQLdb module: No module named MySQLdb". I thought the Bitnami package would already install everything necessary on Windows to make MySQL and Python work together? Is this not true? I don't want to have to deal with installing mysql-python components, as that can be frustrating to get working alone, as I have tried before.
0
2
1.2
0
true
6,738,365
1
1,094
3
0
0
6,738,310
You'll need to install MySQL for Python (MySQLdb), as Django needs this to do the connecting. Once you have the package installed you shouldn't need to configure it, though, as Django just needs to import from it. Edit: from your comments there is a setuptools bundled, but it has been replaced by the package distribute; install this Python package and you should have access to easy_install, which makes it really easy to get new packages. Assuming you've added PYTHONPATH/scripts to your environment variables, you can call easy_install mysql_python
1
0
0
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
3
python,mysql,django,mysql-python,bitnami
0
2011-07-18T19:29:00.000
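Once the driver is importable, the only Django-side piece is pointing the project at MySQL; all the values below are placeholders.

# settings.py -- placeholder credentials
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",   # backend that imports MySQLdb
        "NAME": "myproject",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "127.0.0.1",
        "PORT": "3306",
    }
}

A quick sanity check is to run import MySQLdb in the same interpreter the stack launches; if that succeeds, syncdb should no longer complain.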
So I installed the Bitnami Django stack, hoping for the proclaimed 'ready-to-run' versions of Python and MySQL. However, I can't get Python to syncdb: "Error loading MySQLdb module: No module named MySQLdb". I thought the Bitnami package would already install everything necessary on Windows to make MySQL and Python work together? Is this not true? I don't want to have to deal with installing mysql-python components, as that can be frustrating to get working alone, as I have tried before.
0
0
0
0
false
6,981,742
1
1,094
3
0
0
6,738,310
The BitNami DjangoStack already includes the mysql-python components. I guess you selected MySQL as the database when installing the BitNami Stack, right? (It also includes PostgreSQL and SQLite.) Do you receive the error at installation time, or later, working with your Django project? On which platform are you using the BitNami DjangoStack?
1
0
0
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
3
python,mysql,django,mysql-python,bitnami
0
2011-07-18T19:29:00.000
So I installed the Bitnami Django stack, hoping for the proclaimed 'ready-to-run' versions of Python and MySQL. However, I can't get Python to syncdb: "Error loading MySQLdb module: No module named MySQLdb". I thought the Bitnami package would already install everything necessary on Windows to make MySQL and Python work together? Is this not true? I don't want to have to deal with installing mysql-python components, as that can be frustrating to get working alone, as I have tried before.
0
0
0
0
false
12,083,825
1
1,094
3
0
0
6,738,310
So I got this error after installing the Bitnami Django stack on Windows Vista. It turns out that I had all components installed, but easy_install mysql_python didn't unwrap the entire package for some reason. I installed and uninstalled multiple times, but no combination (using mysql for the startup project) made any difference. In the end, I simply renamed the egg file (in this case MySQL_python-1.2.3-py2.7-win32.egg) to .zip and extracted the missing parts into a directory on my PYTHONPATH, and everything worked like a charm.
1
0
0
Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb"
3
python,mysql,django,mysql-python,bitnami
0
2011-07-18T19:29:00.000
I am developing a Django app that is a web frontend to an Oracle database, with another local DB keeping the app's data such as Guardian permissions. The problem is that the Oracle DB can be modified from different places that I don't have control of. Let's say we have 3 models: User, Thesis and UserThesis. UserThesis is a table specifying the relationship between Thesis and User (User being a co-author of Thesis). Scenario: User is removed as an author of Thesis by some other app removing the entry in UserThesis. User tries to modify Thesis using our Django app. And he succeeds, because Guardian and Django do not know about the change in UserThesis. I thought about some solutions: Having some cron job look for changes in UserThesis by checking the modification date of entries. Easy to check for additions; removals would require looking at all relationships again. Modifying the Oracle DB schema to add Guardian DB tables and creating triggers on the UserThesis table. I wouldn't like to do this, because the Oracle DB is shared among a number of different apps. Manually checking for the relationship in views and templates (heavier load on Oracle). Which one is the best? Any other ideas?
1
0
1.2
0
true
7,011,483
1
159
1
0
0
6,775,359
I decided to go with manually checking the permissions, caching them whenever I can. I ended up with a get_perms_from_cache(self, user) model method, which helps me a lot.
1
0
0
Django-guardian on DB with shared (non-exclusive) access
1
python,django,database-permissions,django-permissions
0
2011-07-21T11:34:00.000
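For anyone curious, a sketch of what such a method could look like, using django-guardian's get_perms shortcut and Django's cache framework; the model, cache key format and 60-second timeout are arbitrary choices for illustration, not the original author's code.

from django.core.cache import cache
from django.db import models
from guardian.shortcuts import get_perms

class Thesis(models.Model):
    title = models.CharField(max_length=255)

    def get_perms_from_cache(self, user, timeout=60):
        """Object permissions for `user`, cached briefly to spare the DB."""
        key = "thesis-perms-%s-%s" % (self.pk, user.pk)
        perms = cache.get(key)
        if perms is None:
            perms = get_perms(user, self)      # hits the permission tables
            cache.set(key, perms, timeout)
        return perms

A short timeout keeps the window small in which an external change to UserThesis can go unnoticed.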
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. Setting files could be created, altered or deleted without the django application being restarted. For every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use. My first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity. Is there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?
1
2
1.2
0
true
6,782,234
1
1,027
2
0
0
6,780,827
Rereading the file is a heavy penalty to pay when it's unlikely that the file has changed. My usual approach is to use inotify to watch for configuration file changes, rather than trying to read the file on every request. Additionally, I tend to keep a "current" configuration, parsed from the file, and only replace it with a new value once I've finished parsing the config file and I'm certain it's valid. You could resolve some of your concerns about thread safety by setting the current configuration on each incoming request, so that the configuration can't change mid-way through a request.
1
0
0
Dynamic per-request database connections in Django
2
python,django
0
2011-07-21T18:26:00.000
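A sketch of that pattern using the watchdog package as the inotify-style watcher (one possible choice, not the only one); the config path, JSON format and module-level state are assumptions made for the example.

import json
import threading

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

CONFIG_PATH = "/etc/myapp/db_credentials.json"    # placeholder path

_current = {}                  # the last known-good parsed configuration
_lock = threading.Lock()

def _reload():
    global _current
    try:
        with open(CONFIG_PATH) as fh:
            parsed = json.load(fh)                # parse fully before swapping in
    except (IOError, ValueError):
        return                                    # keep the previous good config
    with _lock:
        _current = parsed

class _ConfigChanged(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path == CONFIG_PATH:
            _reload()

def current_config():
    """Grab this once per incoming request and keep using that snapshot."""
    with _lock:
        return _current

_reload()
observer = Observer()
observer.schedule(_ConfigChanged(), path="/etc/myapp")
observer.start()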
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting files outside of the respective applications. Setting files could be created, altered or deleted without the django application being restarted. For every request to the django application, there will be a http header or a url parameter which can be used to deduce which setting file to look at to determine which database credentials to use. My first thought is to use a custom django middleware that would parse the settings files (possibly with caching) and create a new connection object on each request, patching it into django.db before any ORM activity. Is there a more graceful method to handle this situation? Are there any thread safety issues I should consider with the middleware approach?
1
0
0
0
false
6,780,942
1
1,027
2
0
0
6,780,827
You could start different instances with different settings.py files (by setting different DJANGO_SETTINGS_MODULE) on different ports, and redirect the requests to the specific apps. Just my 2 cents.
1
0
0
Dynamic per-request database connections in Django
2
python,django
0
2011-07-21T18:26:00.000
I use Windows 7 64-bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7. When I import the cx_Oracle module I get this error: Traceback (most recent call last): File "C:\Osebno\test.py", line 1, in import cx_oracle ImportError: No module named cx_oracle Can someone please tell me what is wrong?
3
0
0
0
false
6,788,993
0
17,773
3
0
0
6,788,937
It's not finding the module. Things to investigate: Do you have several Python installations? Did it go to the right one? Do a global search for cx_Oracle and see if it's in the correct place. Check your PYTHONPATH variable. Check Python's registry values under HKLM\Software\Python\PythonCore. Are they correct?
1
0
0
Error when importing cx_Oracle module [Python]
5
python,windows-7,oracle10g
0
2011-07-22T10:51:00.000
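A quick way to check the first few points above (which installation is actually running and where it looks for modules) is to ask the interpreter itself:

import sys

print(sys.version)        # the build that is really running (should report 64 bit / AMD64)
print(sys.executable)     # the interpreter cx_Oracle must have been installed into
for p in sys.path:
    print(p)              # the directories searched when importing cx_Oracle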
I use Windows 7 64-bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7. When I import the cx_Oracle module I get this error: Traceback (most recent call last): File "C:\Osebno\test.py", line 1, in import cx_oracle ImportError: No module named cx_oracle Can someone please tell me what is wrong?
3
4
0.158649
0
false
6,789,312
0
17,773
3
0
0
6,788,937
Have you tried import cx_Oracle (upper-case O) instead of import cx_oracle?
1
0
0
Error when importing cx_Oracle module [Python]
5
python,windows-7,oracle10g
0
2011-07-22T10:51:00.000
I use Windows 7 64-bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7. When I import the cx_Oracle module I get this error: Traceback (most recent call last): File "C:\Osebno\test.py", line 1, in import cx_oracle ImportError: No module named cx_oracle Can someone please tell me what is wrong?
3
1
0.039979
0
false
16,885,226
0
17,773
3
0
0
6,788,937
After installing cx_Oracle, download the Instant Client from Oracle with all the DLLs, then copy them into the same directory as cx_Oracle.pyd; it will work directly. Tried it and it worked for me.
1
0
0
Error when importing cx_Oracle module [Python]
5
python,windows-7,oracle10g
0
2011-07-22T10:51:00.000