Dataset columns (type and observed range):

| Column | Type | Range / length |
|---|---|---|
| Question | string | lengths 25 to 7.47k |
| Q_Score | int64 | 0 to 1.24k |
| Users Score | int64 | -10 to 494 |
| Score | float64 | -1 to 1.2 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| is_accepted | bool | 2 classes |
| A_Id | int64 | 39.3k to 72.5M |
| Web Development | int64 | 0 to 1 |
| ViewCount | int64 | 15 to 1.37M |
| Available Count | int64 | 1 to 9 |
| System Administration and DevOps | int64 | 0 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| Q_Id | int64 | 39.1k to 48M |
| Answer | string | lengths 16 to 5.07k |
| Database and SQL | int64 | 1 to 1 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Title | string | lengths 15 to 148 |
| AnswerCount | int64 | 1 to 32 |
| Tags | string | lengths 6 to 90 |
| Other | int64 | 0 to 1 |
| CreationDate | string | length 23 |

Each record below is one answer row of this table, flattened: the question text, then the numeric/metadata column values separated by |, then the answer text, then the remaining columns (category flags, title, answer count, tags and creation date).
I'm trying to use PyODBC to connect to an Access database. It works fine on Windows, but running it under OS X I get—
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "access.py", line 10, in __init__
self.connection = connect(driver='{Microsoft Access Driver (*.mdb)}', dbq=path, pwd=password)
pyodbc.Error: ('00000', '[00000] [iODBC][Driver Manager]dlopen({Microsoft Access Driver (*.mdb)}, 6): image not found (0) (SQLDriverConnect)')
Do I have to install something else? Have I installed PyODBC wrong?
Thanks
| 1 | 3 | 0.53705 | 0 | false | 11,155,551 | 0 | 1,896 | 1 | 0 | 0 | 11,154,965 |
pyodbc allows connecting to ODBC data sources, but it does not actually implement drivers.
I'm not familiar with OS X, but on Linux ODBC sources are typically described in odbcinst.ini file (location is determined by ODBCSYSINI variable).
You will need to install Microsoft Access ODBC driver for OS X. | 1 | 0 | 0 | PyODBC "Image not found (0) (SQLDriverConnect)" | 1 | python,ms-access,pyodbc | 0 | 2012-06-22T11:03:00.000 |
Trying to set up Flask and SQLAlchemy on Windows but I've been running into issues.
I've been using Flask-SQLAlchemy along with PostgreSQL 9.1.4 (32 bit) and the Psycopg2 package. Here are the relevant bits of code, I created a basic User model just to test that my DB is connecting, and committing.
The three bits of code would come from the __init__.py file of my application, the models.py file and my settings.py file.
When I try opening up my interactive prompt and try the code in the following link out I get a ProgrammingError exception (details in link).
What could be causing this? I followed the documentation and I'm simply confused as to what I'm doing wrong especially considering that I've also used Django with psycopg2 and PostgreSQL on Windows. | 1 | 4 | 0.26052 | 0 | false | 11,210,290 | 1 | 7,080 | 1 | 0 | 0 | 11,167,518 | At the time you execute create_all, models.py has never been imported, so no class is declared. Thus, create_all does not create any table.
To solve this problem, import models before running create_all or, even better, don't separate the db object from the model declaration. | 1 | 0 | 0 | Setting up Flask-SQLAlchemy | 3 | python,sqlalchemy,flask,flask-sqlalchemy | 0 | 2012-06-23T06:47:00.000 |
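For illustration, a minimal sketch of the second suggestion: keep the model declarations next to the db object so create_all can see them. The model, columns and connection string are made up, and the Flask-SQLAlchemy import path and application-context requirements vary by version.

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy   # older releases used flask.ext.sqlalchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:password@localhost/mydb'
db = SQLAlchemy(app)

class User(db.Model):                      # declared before create_all, so it gets registered
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))

db.create_all()                            # now the users table is actually created
```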
I need to load fixtures into the system when a new VM is up. I have dumped MongoDB and Postgres. But I can't just sit in front of the PC whenever a new machine is up. I want to be able to just "issue" a command or the script automatically does it.
But a command like pg_dump to dump PostgreSQL will require a password. The problem is, the script that I use to deploy these fixtures should be under version control. The file that contains this password (if that's the only way to do automation) will not be committed. If it needs to be committed, the deploy repository is restricted for internal developers only.
My question is... what do you consider a good practice in this situation? I am thinking of using Python's Popen to issue these commands.
Thanks.
I also can put it in the cache server... but not sure if it's the only "better" way...
| 1 | 0 | 0 | 0 | false | 11,174,357 | 0 | 70 | 1 | 0 | 0 | 11,174,324 |
You have to give the user that loads the fixture the privileges to write to the database, regardless of which way you are going to load the data.
With Postgres you can give login permission without password to specific users and eliminate the problem of a shared password or you can store the password in the pgpass file within the home directory.
Personally I find fabric a very nice tool for deploys; in this specific case I would use it to connect to the remote machine and issue a psql -f 'dump_data.sql' -1 command.
| 1 | 0 | 0 | Security concerns while loading fixtures | 1 | python,deployment,fixtures | 0 | 2012-06-24T01:20:00.000 |
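A rough sketch of that fabric idea, written against the fabric 1.x API; the host, database name and dump file are placeholders rather than values from the original post.

```python
# fabfile.py
from fabric.api import env, run

env.hosts = ['deploy@newvm.example.com']   # hypothetical target VM

def load_fixtures():
    # -1 runs the whole load inside a single transaction
    run("psql -d myapp -f dump_data.sql -1")
```

Run with `fab load_fixtures`; the password question then reduces to SSH access plus a .pgpass file (or passwordless local login) on the target machine.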
I have a desktop app that has 65 modules, about half of which read from or write to an SQLite database. I've found that there are 3 ways that the database can throw an SQliteDatabaseError:
SQL logic error or missing database (happens unpredictably every now and then)
Database is locked (if it's being edited by another program, like SQLite Database Browser)
Disk I/O error (also happens unpredictably)
Although these errors don't happen often, when they do they lock up my application entirely, and so I can't just let them stand.
And so I've started re-writing every single access of the database to be a pointer to a common "database-access function" in its own module. That function then can catch these three errors as exceptions and thereby not crash, and also alert the user accordingly. For example, if it is a "database is locked error", it will announce this and ask the user to close any program that is also using the database and then try again. (If it's the other errors, perhaps it will tell the user to try again later...not sure yet). Updating all the database accesses to do this is mostly a matter of copy/pasting the redirect to the common function--easy work.
The problem is: it is not sufficient to just provide this database-access function and its announcements, because at all of the points of database access in the 65 modules there is code that follows the access that assumes the database will successfully return data or complete a write--and when it doesn't, that code has to have a condition for that. But writing those conditionals requires carefully going into each access point and seeing how best to handle it. This is laborious and difficult for the couple of hundred database accesses I'll need to patch in this way.
I'm willing to do that, but I thought I'd inquire if there were a more efficient/clever way or at least heuristics that would help in finishing this fix efficiently and well.
(I should state that there is no particular "architecture" of this application...it's mostly what could be called "ravioli code", where the GUI and database calls and logic are all together in units that "go together". I am not willing to re-write the architecture of the whole project in MVC or something like this at this point, though I'd consider it for future projects.) | 2 | 1 | 1.2 | 0 | true | 11,215,911 | 1 | 237 | 1 | 0 | 0 | 11,215,535 | Your gut feeling is right. There is no way to add robustness to the application without reviewing each database access point separately.
You still have a lot of important choices about how the application should react to errors, depending on factors like:
Is it attended, or sometimes completely unattended?
Is delay OK, or is it important to report database errors promptly?
What are relative frequencies of the three types of failure that you describe?
Now that you have a single wrapper, you can use it to do some common configuration and error handling, especially:
set reasonable connect timeouts
set reasonable busy timeouts
enforce command timeouts on client side
retry automatically on errors, especially on SQLITE_BUSY (insert large delays between retries, fail after a few retries)
use exceptions to reduce the number of application level handlers. You may be able to restart the whole application on database errors. However, do that only if you are confident about the state in which you are aborting the application; consistent use of transactions may ensure that the restart method does not leave inconsistent data behind.
ask a human for help when you detect a locking error
...but there comes a moment where you need to bite the bullet and let the error out into the application, and see what all the particular callers are likely to do with it. | 1 | 0 | 0 | Efficient approach to catching database errors | 1 | python,database,sqlite,error-handling | 0 | 2012-06-26T20:38:00.000 |
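One possible shape for that single database-access function, combining a busy timeout with a small retry loop before the error is surfaced to the callers; the path, retry count and delay are arbitrary choices for illustration.

```python
import sqlite3
import time

DB_PATH = "app.db"   # placeholder

def run_query(sql, params=(), retries=3, delay=2.0):
    """Single point of database access: connect, execute, commit, retry on lock."""
    for attempt in range(retries):
        conn = sqlite3.connect(DB_PATH, timeout=10)   # let sqlite itself wait on short locks
        try:
            rows = conn.execute(sql, params).fetchall()
            conn.commit()
            return rows
        except sqlite3.OperationalError as exc:
            if "locked" in str(exc) and attempt < retries - 1:
                time.sleep(delay)        # another program holds the lock; wait and retry
                continue
            raise                        # surface the final failure to the caller / top-level handler
        finally:
            conn.close()
```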
I'm just curious why there are modern systems out there that default to something other than UTF-8. I've had a person blocked for an entire day by the multiple locations where a MySQL system can have a different encoding. Very frustrating.
Is there any good reason not to use UTF-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentative, just curious.
thx | 8 | 6 | 1 | 0 | false | 11,219,610 | 0 | 519 | 2 | 0 | 0 | 11,219,060 | Once upon a time there was no unicode or UTF-8, and disparate encoding schemes were in use throughout the world.
It wasn't until back in 1988 that the initial Unicode proposal was issued, with the goal of encoding all the world's characters in a common encoding.
The first release in 1991 covered many character representations; however, it wasn't until 2006 that Balinese, Cuneiform, N'Ko, Phags-pa, and Phoenician were added.
Until then the Phoenicians, and the others, were unable to represent their language in UTF-8, pissing off many programmers who wondered why everything was not just defaulting to UTF-8.
I'm just curious why there are modern systems out there that default to something other than UTF-8. I've had a person blocked for an entire day by the multiple locations where a MySQL system can have a different encoding. Very frustrating.
Is there any good reason not to use UTF-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentative, just curious.
thx | 8 | -1 | -0.099668 | 0 | false | 11,219,088 | 0 | 519 | 2 | 0 | 0 | 11,219,060 | Some encodings have different byte orders (little and big endian) | 1 | 0 | 0 | why doesn't EVERYTHING default to UTF-8? | 2 | python,mysql,ruby,utf-8 | 0 | 2012-06-27T03:37:00.000 |
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.
| 0 | 1 | 0.049958 | 0 | false | 11,224,222 | 0 | 1,125 | 3 | 0 | 0 | 11,223,147 |
If you're not after just parameter substitution, but full construction of the SQL, you have to do that using string operations on your end. The ? replacement always just stands for a value. Internally, the SQL string is compiled to SQLite's own bytecode (you can find out what it generates by running EXPLAIN on the SQL) and ? replacements are done by just storing the value at the correct place in the value stack; varying the query structurally would require different bytecode, so just replacing a value wouldn't be enough.
Yes, this does mean you have to be ultra-careful. If you don't want to allow updates, try opening the DB connection in read-only mode. | 1 | 0 | 0 | Python + Sqlite 3. How to construct queries? | 4 | python,sqlite | 0 | 2012-06-27T09:27:00.000 |
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed. | 0 | 1 | 1.2 | 0 | true | 11,224,475 | 0 | 1,125 | 3 | 0 | 0 | 11,223,147 | If you're trying to transmit changes to the database to another computer, why do they have to be expressed as SQL strings? Why not pickle the query string and the parameters as a tuple, and have the other machine also use SQLite parameterization to query its database? | 1 | 0 | 0 | Python + Sqlite 3. How to construct queries? | 4 | python,sqlite | 0 | 2012-06-27T09:27:00.000 |
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed. | 0 | 0 | 0 | 0 | false | 11,224,003 | 0 | 1,125 | 3 | 0 | 0 | 11,223,147 | I want how to get the parsed 'sql param'.
It's all open source, so you have full access to the code doing the parsing / sanitization. Why not just read this code, find out how it works, and see if there's some (possibly undocumented) implementation that you can reuse?
| 1 | 0 | 0 | Python + Sqlite 3. How to construct queries? | 4 | python,sqlite | 0 | 2012-06-27T09:27:00.000 |
There is a worksheet.title method but not workbook.title method. Looking in the documentation there is no explicit way to find it, I wasn't sure if anyone knew a workaround or trick to get it. | 3 | 2 | 1.2 | 0 | true | 11,233,362 | 0 | 10,098 | 1 | 0 | 0 | 11,233,140 | A workbook doesn't really have a name - normally you'd just consider it to be the basename of the file it's saved as... slight update - yep, even in VB WorkBook.Name just returns "file on disk.xls" | 1 | 0 | 0 | Is there a way to get the name of a workbook in openpyxl | 1 | python,excel,openpyxl | 0 | 2012-06-27T18:54:00.000 |
I am running a webapp on google appengine with python and my app lets users post topics and respond to them and the website is basically a collection of these posts categorized onto different pages.
Now I only have around 200 posts and 30 visitors a day, but that is already taking up nearly 20% of my reads and 10% of my writes with the datastore. I am wondering if it is more efficient to use the google app engine's built in get_by_id() function to retrieve posts by their IDs or if it is better to build my own. For some of the queries I will simply have to use GQL or the built in query language because they are retrieved on more than just an ID, but I wanted to see which was better.
Thanks!
| 0 | 0 | 0 | 0 | false | 11,270,908 | 1 | 82 | 1 | 1 | 0 | 11,270,434 |
I'd suggest using pre-existing code and building around that instead of re-inventing the wheel.
| 1 | 0 | 0 | use standard datastore index or build my own | 2 | python,google-app-engine,indexing,google-cloud-datastore | 0 | 2012-06-30T00:18:00.000 |
We are using Python Pyramid with SQLAlchemy and MySQL to build a web application. We would like to have user-specific database connections, so every web application user has their own database credentials. This is primarily for security reasons, so each user only has privileges for their own database content. We would also like to maintain the performance advantage of connection pooling. Is there a way we can setup a new engine at login time based on the users credentials, and reuse that engine for requests made by the same user? | 0 | 0 | 0 | 0 | false | 11,300,227 | 1 | 343 | 1 | 0 | 0 | 11,299,182 | The best way to do this that I know is to use the same database with multiple schemas. Unfortunately I don't think this works with MySQL. The idea is that you connection pool engines to the same database and then when you know what user is associated with the request you can switch schemas for that connection. | 1 | 0 | 0 | How to manage user-specific database connections in a Pyramid Web Application? | 1 | python,sqlalchemy,pyramid | 0 | 2012-07-02T18:29:00.000 |
I have successfully installed py27-mysql from MacPorts and MySQL-python-1.2.3c1 on a machine running Snow Leopard. Because I have MySQL 5.1.48 in an odd location (/usr/local/mysql/bin/mysql/), I had to edit the setup.cfg file when I installed mysql-python. However, now that it's installed, I'm still getting the error "ImportError: No module named MySQLdb" when I run "import MySQLdb" in python. What is left to install? Thanks. | 0 | 0 | 0 | 0 | false | 12,535,972 | 0 | 67 | 1 | 0 | 0 | 11,304,019 | MacPorts' py27-mysql, MySQL-python, and MySQLdb are all synonyms for the same thing. If you successfully installed py27-mysql, you should not need anything else, and it's possible you've messed up your python site-packages. Also, make sure you are invoking the right python binary, i.e. MacPorts' python27 and not the one that comes with Mac OS X. | 1 | 0 | 0 | setting up mysql-python on Snow Leopard | 1 | mysql-python | 0 | 2012-07-03T03:19:00.000 |
Now, on writing the path as sys.path.insert(0,'/home/pooja/Desktop/mysite'), it ran fine, asked me for the word to be searched, and gave this error:
Traceback (most recent call last):
  File "call.py", line 32, in <module>
    s.save()
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 463, in save
    self.save_base(using=using, force_insert=force_insert, force_update=force_update)
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 524, in save_base
    manager.using(using).filter(pk=pk_val).exists())):
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 562, in exists
    return self.query.has_results(using=self.db)
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 441, in has_results
    return bool(compiler.execute_sql(SINGLE))
  File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 818, in execute_sql
    cursor.execute(sql, params)
  File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 40, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python2.6/dist-packages/django/db/backends/sqlite3/base.py", line 337, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.DatabaseError: no such table: search_keywords
Please help!!
| 1 | 1 | 1.2 | 0 | true | 11,308,029 | 1 | 768 | 1 | 0 | 0 | 11,307,928 |
The exception says: no such table: search_keywords, which is quite self-explanatory and means that there is no database table with that name. So:
You may be using a relative path to the db file in settings.py, which resolves to a different db depending on where you execute the script. Try to use an absolute path and see if it helps.
You have not synced your models with the database. Run manage.py syncdb to generate the database tables. | 1 | 0 | 0 | error in accessing table created in django in the python code | 1 | python,django,linux,sqlite,ubuntu-10.04 | 0 | 2012-07-03T09:18:00.000 |
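For the first point, a common Django 1.x-era pattern is to build the sqlite path from the settings file's own location, so it resolves identically no matter where the script is started; the database file name is only an example.

```python
# settings.py
import os

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(PROJECT_ROOT, 'mysite.db'),  # absolute path to the db file
    }
}
```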
I'm running a multi-tenant GAE app where each tenant could have from a few thousand to 100k documents.
At this moment I'm trying to make an MVC JavaScript client app (the admin part of my app, with spine.js) and I need CRUD endpoints and the ability to get a big amount of serialized objects at once. For this specific job appengine is way too slow. I tried to store serialized objects in the blobstore, but between reading/writing and updating stuff to the blobstore it takes too much time and the app gets really slow.
I thought of using a nosql db on an external machine to do these operations over appengine.
A few options would be mongodb, couchdb or redis, but I am not sure about how good they perform with that much data and concurrent requests/inserts from different tenants.
Let's say I have 20 tenants and each tenant has 50k docs. Are these dbs capable of handling this load?
is this even the right way to go? | 1 | 0 | 0 | 0 | false | 11,319,983 | 1 | 239 | 2 | 1 | 0 | 11,319,890 | The overhead of making calls from appengine to these external machines is going to be worse than the performance you're seeing now (I would expect). why not just move everything to a non-appengine machine?
I can't speak for couch, but mongo or redis are definitely capable of handling serious load as long as they are set up correctly and with enough horsepower for your needs. | 1 | 0 | 0 | key/value store with good performance for multiple tenants | 2 | javascript,python,google-app-engine,nosql,multi-tenant | 0 | 2012-07-03T22:08:00.000 |
I'm running a multi-tenant GAE app where each tenant could have from a few thousand to 100k documents.
At this moment I'm trying to make an MVC JavaScript client app (the admin part of my app, with spine.js) and I need CRUD endpoints and the ability to get a big amount of serialized objects at once. For this specific job appengine is way too slow. I tried to store serialized objects in the blobstore, but between reading/writing and updating stuff to the blobstore it takes too much time and the app gets really slow.
I thought of using a nosql db on an external machine to do these operations over appengine.
A few options would be mongodb, couchdb or redis, but I am not sure about how good they perform with that much data and concurrent requests/inserts from different tenants.
Let's say I have 20 tenants and each tenant has 50k docs. Are these dbs capable of handling this load?
is this even the right way to go?
| 1 | 2 | 1.2 | 0 | true | 11,323,377 | 1 | 239 | 2 | 1 | 0 | 11,319,890 |
Why not use the much faster regular appengine datastore instead of blobstore? Simply store your documents in regular entities as a Blob property. Just make sure the entity size doesn't exceed 1 MB, in which case you have to split up your data into more than one entity. I run an application with millions of large Blobs that way.
To further speed things up, use memcache or even an in-memory cache. Consider fetching your entities with eventual consistency, which is MUCH faster. Run as many database ops in parallel as possible using either bulk operations or the async API.
| 1 | 0 | 0 | key/value store with good performance for multiple tenants | 2 | javascript,python,google-app-engine,nosql,multi-tenant | 0 | 2012-07-03T22:08:00.000 |
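A sketch of that datastore-plus-memcache approach using the old db API; the model name, fields and key scheme are assumptions, and the splitting of payloads larger than 1 MB is left out.

```python
from google.appengine.ext import db
from google.appengine.api import memcache

class SerializedDoc(db.Model):           # hypothetical model
    tenant = db.StringProperty()
    payload = db.BlobProperty()          # serialized object; must stay under ~1 MB per entity

def get_doc(key_name):
    data = memcache.get(key_name)        # cheap cache in front of the datastore
    if data is None:
        doc = SerializedDoc.get_by_key_name(key_name)
        data = doc.payload if doc else None
        if data is not None:
            memcache.set(key_name, data)
    return data
```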
Not sure if the title is a great way to word my actual problem and I apologize if this is too general of a question but I'm having some trouble wrapping my head around how to do something.
What I'm trying to do:
The idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monitoring. This would give a historical record of downtime and an easy way to retroactively tell what happened. The database will be queried by a fairly simple PHP form where one could browse these outages by date or server hostname etc.
What I have so far:
I have a python script that runs as a cron periodically to call the Pingdom API to get a list of current down alerts reported by the pingdom service. For each down alert, a row is inserted into a database containing a hostname, time stamp, pingdom check id, etc. I then have a simple php form that works fine to query for down alerts.
The problem:
What I have now is missing some important features and isn't quite what I'm looking for. Currently, querying this database would give me a simple list of down alerts like this:
Pingdom alerts for Test_Check from 2012-05-01 to 2012-06-30:
test_check was reported DOWN at 2012-05-24 00:11:11
test_check was reported DOWN at 2012-05-24 00:17:28
test_check was reported DOWN at 2012-05-24 00:25:24
test_check was reported DOWN at 2012-05-24 00:25:48
What I would like instead is something like this:
test_check was reported down for 15 minutes (2012-05-24 00:11:11 to 2012-05-24 00:25:48)(link to comment on this outage)(link to info on this outage).
In this ideal end result, there would be one row containing an outage ID, the hostname of the server pingdom is reporting down, the timestamp for when that box was reported down originally and the timestamp for when it was reported up again, along with a 'comment' field I (and other admins) would use to add notes about this particular event after the fact.
I'm a little lost as to how I will go about combining several down alerts that occur within a short period of time into a single 'outage' that would be inserted into a separate table in the existing MySQL database where individual down alerts are currently being stored. This would allow me to comment and add specific details for future reference and would generally make this thing a lot more usable. I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table and I'm not quite sure how I would work out either of those options.
I've been wracking my brain trying to figure out how to do this. It seems like a simple concept but I'm a somewhat inexperienced programmer (I'm a Linux admin by profession) and I'm stumped at this point.
I'm looking for any thoughts, advice, examples or even just a more technical explanation of what I'm trying to do here to help point me in the right direction. I hope this makes sense. Thanks in advance for any advice :) | 0 | 0 | 0 | 0 | false | 11,329,769 | 0 | 131 | 1 | 0 | 0 | 11,329,588 | The most basic solution with the setup you have now would be to:
Get a list of all events, ordered by server ID and then by time of the event
Loop through that list and record the start of a new event / end of an old event for your new database when:
the server ID changes
the time between the current event and the previous event from the same server is bigger than a certain threshold you set.
Store the old event you were monitoring in your new database
The only complication I see, is that the next time you run the script, you need to make sure that you continue monitoring events that were still taking place at the time you last ran the script. | 1 | 0 | 0 | How can I combine rows of data into a new table based on similar timestamps? (python/MySQL/PHP) | 2 | php,python,mysql,json,pingdom | 1 | 2012-07-04T12:56:00.000 |
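A sketch of that grouping pass; it assumes the alerts arrive sorted by server and then by timestamp, and the 30-minute gap threshold is an arbitrary choice.

```python
from datetime import timedelta

GAP = timedelta(minutes=30)   # alerts closer together than this count as one outage

def group_outages(alerts):
    """alerts: iterable of (server, timestamp) tuples sorted by server, then timestamp."""
    outages = []
    current = None
    for server, ts in alerts:
        if current and current['server'] == server and ts - current['end'] <= GAP:
            current['end'] = ts                    # same outage; extend its end time
        else:
            if current:
                outages.append(current)            # close out the previous outage
            current = {'server': server, 'start': ts, 'end': ts}
    if current:
        outages.append(current)
    return outages
```

Each resulting dict maps onto one row of the new outage table, with the comment field left empty for later editing.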
I'm writing a bit of Python code that watches a certain directory for new files, and inserts new files into a database using the cx_Oracle module. This program will be running as a service. At a given time there could be many files arriving at once, but there may also be periods of up to an hour where no files are received. Regarding good practice: is it bad to keep a database connection open indefinitely? On one hand something tells me that it's not a good idea, but on the other hand there is a lot of overhead in creating a new database object for every file received and closing it afterwards, especially when many files are received at once. Any suggestions on how to approach this would be greatly appreciated. | 3 | 2 | 1.2 | 0 | true | 11,347,776 | 0 | 1,153 | 1 | 0 | 0 | 11,346,224 | If you only need one or two connections, I see no harm in keeping them open indefinitely.
With Oracle, creating a new connection is an expensive operation, unlike in some other databases, such as MySQL where it is very cheap to create a new connection. Sometimes it can even take a few seconds to connect which can become a bit of a bottleneck for some applications if they close and open connections too frequently.
An idle connection on Oracle uses a small amount of memory, but aside from that, it doesn't consume any other resources while it sits there idle.
To keep your DBAs happy, you will want to make sure you don't have lots of idle connections left open, but I'd be happy with one or two. | 1 | 0 | 0 | Keeping database connection open - good practice? | 1 | python,oracle | 0 | 2012-07-05T14:17:00.000 |
I am trying to query ODBC compliant databases using pyodbc on Ubuntu. For that, I have installed the driver (say mysql-odbc-driver). After installation the odbcinst.ini file with the configurations gets created in the location /usr/share/libmyodbc/odbcinst.ini
When I try to connect to the database using my pyodbc connection code, I get a driver not found error message.
Now when I copy the contents of the file to /etc/odbcinst.ini, it works!
This means pyodbc searches for the driver information in the file /etc/odbcinst.ini.
How can I change the location where it searches for the odbcinst.ini file for the driver information?
Thanks. | 5 | 6 | 1.2 | 0 | true | 11,393,468 | 0 | 7,504 | 1 | 0 | 0 | 11,393,269 | Assuming you are using unixODBC here was some possibilities:
rebuild unixODBC from scratch and set --sysconfdir
export ODBCSYSINI env var pointing to a directory and unixODBC will look here for odbcinst.ini and odbc.ini system dsns
export ODBCINSTINI and point it at your odbcinst.ini file
BTW, I doubt pyodbc looks anything up in the odbcinst.ini file but unixODBC will. There is a list of ODBC Driver manager APIs which can be used to examine ODBC ini files. | 1 | 0 | 0 | setting the location where pyodbc searches for odbcinst.ini file | 1 | python,odbc,pyodbc | 0 | 2012-07-09T10:30:00.000 |
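If rebuilding unixODBC is not an option, the environment-variable route can also be taken from the Python side, provided the variable is set before the driver manager reads its configuration; the directory and connection details below are only examples.

```python
import os

# Point unixODBC at the directory that holds odbcinst.ini (and the system odbc.ini)
os.environ['ODBCSYSINI'] = '/usr/share/libmyodbc'

import pyodbc   # imported after the variable is set
conn = pyodbc.connect('DRIVER={MySQL};SERVER=localhost;DATABASE=test;UID=user;PWD=secret')
```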
There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency.
Each time each instance shall take out one item, delete it from the list and process it with some procedures.
First I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.
Is there any way that makes each instance will get an unique item to process?
I could use other data structure (other database or just file) if needed.
By the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)?
| 0 | 0 | 0 | 0 | false | 20,908,479 | 0 | 484 | 2 | 0 | 0 | 11,430,276 |
Why not read in all the items from the database and put them in a queue? You can have a worker thread get an item, process it and move on to the next one.
| 1 | 0 | 1 | Concurrency on sqlite database using python | 4 | python,database,sqlite,concurrency,locking | 0 | 2012-07-11T10:07:00.000 |
There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency.
Each time each instance shall take out one item, delete it from the list and process it with some procedures.
First I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.
Is there any way that makes each instance will get an unique item to process?
I could use other data structure (other database or just file) if needed.
By the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)? | 0 | 0 | 1.2 | 0 | true | 11,430,479 | 0 | 484 | 2 | 0 | 0 | 11,430,276 | How about another field in db as a flag (e.g. PROCESSING, UNPROCESSED, PROCESSED)? | 1 | 0 | 1 | Concurrency on sqlite database using python | 4 | python,database,sqlite,concurrency,locking | 0 | 2012-07-11T10:07:00.000 |
I'm writing a script to be run as a cron and I was wondering, is there any difference in speed between the Ruby MySQL or Python MySQL in terms of speed/efficiency? Would I be better of just using PHP for this task?
The script will get data from a mysql database with 20+ fields and store them in another table every X amount of minutes. Not much processing of the data will be necessary. | 0 | 7 | 1.2 | 0 | true | 11,431,795 | 0 | 254 | 1 | 0 | 0 | 11,431,679 | Just pick the language you feel most comfortable with. It shouldn't make a noticeable difference.
After writing the application, you can search for bottlenecks and optimize that | 1 | 0 | 0 | Python MySQL vs Ruby MySQL | 2 | python,mysql,ruby | 1 | 2012-07-11T11:28:00.000 |
since it is not possible to access mysql remotely on GAE, without the google cloud sql,
could I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3?
Thanks. | 1 | 0 | 1.2 | 0 | true | 11,498,320 | 1 | 2,078 | 2 | 1 | 0 | 11,462,291 | No. SQLite requires native code libraries that aren't available on App Engine. | 1 | 0 | 0 | Google App Engine + Google Cloud Storage + Sqlite3 + Django/Python | 3 | python,django,sqlite,google-app-engine,google-cloud-storage | 0 | 2012-07-12T23:44:00.000 |
since it is not possible to access mysql remotely on GAE, without the google cloud sql,
could I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3?
Thanks. | 1 | 0 | 0 | 0 | false | 11,463,047 | 1 | 2,078 | 2 | 1 | 0 | 11,462,291 | Google Cloud SQL is meant for this, why don't you want to use it?
If you have every frontend instance load the DB file, you'll have a really hard time synchronizing them. It just doesn't make sense. Why would you want to do this? | 1 | 0 | 0 | Google App Engine + Google Cloud Storage + Sqlite3 + Django/Python | 3 | python,django,sqlite,google-app-engine,google-cloud-storage | 0 | 2012-07-12T23:44:00.000 |
Is it possible to determine the fields available in a table (MySQL DB) programmatically at runtime using SQLAlchemy or any other Python library? Any help on this would be great.
Thanks.
| 4 | 0 | 0 | 0 | false | 11,500,397 | 0 | 103 | 1 | 0 | 0 | 11,500,239 |
You can run SHOW COLUMNS FROM tablename (or DESCRIBE tablename) and get the columns of the table.
| 1 | 0 | 0 | How to determine fields in a table using SQLAlchemy? | 3 | python,sqlalchemy | 0 | 2012-07-16T08:00:00.000 |
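SQLAlchemy can also reflect the table at runtime instead of issuing the MySQL statement by hand; a sketch using the older autoload style, with a placeholder connection string and table name:

```python
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("mysql://user:password@localhost/mydb")   # placeholder URL
meta = MetaData()

# autoload pulls the column definitions from the live database
users = Table("users", meta, autoload=True, autoload_with=engine)

for column in users.columns:
    print("%s %s" % (column.name, column.type))
```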
I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui. | 0 | 1 | 0.099668 | 0 | false | 11,510,083 | 0 | 161 | 2 | 0 | 0 | 11,508,670 | I believe you're asking about whether Python or PHP (what I think you mean by browser?) is more efficient at making a database call.
The answer? It depends on the specific code and calls, but it's going to be largely the same. Both Python and PHP are interpreted languages and interpret the code at run time. If either of the languages you were using were compiled (say, like, if you used C), I'd say you might see a speed advantage of one over the other, but with the current information you've given us, I can't really judge that.
I would use the language you are most comfortable in or feel would best fit the task - they're both going to connect to a MySQL database and do the same exact commands and queries, so just write the code in the easiest way possible for you to do it.
Also, your question as posed doesn't make much sense. Browsers don't interact with a MySQL database, PHP, which is executed by a server when you request a page, does. | 1 | 0 | 0 | browser query vs python MySQLdb query | 2 | php,python,mysql,pyqt4,mysql-python | 0 | 2012-07-16T16:38:00.000 |
I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui. | 0 | 0 | 0 | 0 | false | 11,509,874 | 0 | 161 | 2 | 0 | 0 | 11,508,670 | Browsers don't perform database queries (unless you consider the embedded SQLite database), so not only is your question nonsensical, it is in fact completely irrelevant. | 1 | 0 | 0 | browser query vs python MySQLdb query | 2 | php,python,mysql,pyqt4,mysql-python | 0 | 2012-07-16T16:38:00.000 |
Trying to do HelloWorld on GoogleAppEngine, but getting the following error.
C:\LearningGoogleAppEngine\HelloWorld>dev_appserver.py helloworld
WARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 133, in <module>
    run_file(__file__, globals())
  File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 129, in run_file
    execfile(script_path, globals_)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 694, in <module>
    sys.exit(main(sys.argv))
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 582, in main
    root_path, {}, default_partition=default_partition)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3217, in LoadAppConfig
    raise AppConfigNotFoundError
google.appengine.tools.dev_appserver.AppConfigNotFoundError
I've found posts on GoogleCode, StackO regarding this issue. But no matter what I try, I still can't overcome this error.
Python version installed on Windows 7 machine is: 2.7.3
GAE Launcher splash screen displays the following:
Release 1.7.0
Api versions: ['1']
Python: 2.5.2
wxPython : 2.8.8.1(msw-unicode)
Can someone help? | 3 | 1 | 0.066568 | 0 | false | 11,533,684 | 1 | 1,146 | 2 | 1 | 0 | 11,520,573 | it's been a while, but I believe I've previously fixed this by adding import rdbms to dev_appserver.py
hmm.. or was that import MySQLdb? (more likely) | 1 | 0 | 0 | GoogleAppEngine error: rdbms_mysqldb.py:74 | 3 | google-app-engine,python-2.7 | 0 | 2012-07-17T10:25:00.000 |
Trying to do HelloWorld on GoogleAppEngine, but getting the following error.
C:\LearningGoogleAppEngine\HelloWorld>dev_appserver.py helloworld
WARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 133, in <module>
    run_file(__file__, globals())
  File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 129, in run_file
    execfile(script_path, globals_)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 694, in <module>
    sys.exit(main(sys.argv))
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 582, in main
    root_path, {}, default_partition=default_partition)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3217, in LoadAppConfig
    raise AppConfigNotFoundError
google.appengine.tools.dev_appserver.AppConfigNotFoundError
I've found posts on GoogleCode, StackO regarding this issue. But no matter what I try, I still can't overcome this error.
Python version installed on Windows 7 machine is: 2.7.3
GAE Launcher splash screen displays the following:
Release 1.7.0
Api versions: ['1']
Python: 2.5.2
wxPython : 2.8.8.1(msw-unicode)
Can someone help? | 3 | 0 | 0 | 0 | false | 12,513,978 | 1 | 1,146 | 2 | 1 | 0 | 11,520,573 | just had the exact same error messages: I found that restarting Windows fixed everything and I did not have to deviate from the YAML or py file given on the google helloworld python tutorial. | 1 | 0 | 0 | GoogleAppEngine error: rdbms_mysqldb.py:74 | 3 | google-app-engine,python-2.7 | 0 | 2012-07-17T10:25:00.000 |
I would like to get the suggestion on using No-SQL datastore for my particular requirements.
Let me explain:
I have to process five csv files. Each csv contains 5 million rows, and the common id field is present in each csv. So, I need to merge all the csvs by iterating over the 5 million rows. I went with a python dictionary to merge all files based on the common id field. But here the bottleneck is that you can't store the 5 million keys in memory (< 1 gig) with a python dictionary.
So, I decided to use NoSQL. I think it might be helpful for processing the 5 million key-value records, but I still don't have clear thoughts on this.
Anyway, we can't reduce the iteration, since we have five csvs and each has to be iterated to update the values.
Are there simple steps to go about this?
If this is the way, could you suggest a NoSQL datastore to process the key-value pairs?
Note: We have the values as list type also. | 1 | 0 | 0 | 1 | false | 11,522,576 | 0 | 347 | 1 | 0 | 0 | 11,522,232 | If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case.
You could also try to optimize your scripts by reordering the items in several runs, then running over the 5 files synchronized using iterators so that you don't have to keep everything in memory at the same time.
| 1 | 0 | 0 | Process 5 million key-value data in python.Will NoSql solve? | 3 | python,nosql | 0 | 2012-07-17T12:15:00.000 |
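A sketch of that synchronized-iterator idea: if each CSV is first sorted by the shared id (for example with a command-line sort, so ids compare as strings), the five files can be merged in a single pass while holding only one row per file in memory. The column layout (id in the first column) is an assumption.

```python
import csv
import heapq

def rows_by_id(path):
    """Yield (id, row) pairs from a CSV that is already sorted by its first column."""
    with open(path, 'rb') as f:
        for row in csv.reader(f):
            yield row[0], row

def merged_records(paths):
    """Group the rows of several sorted CSVs by their shared id, one id at a time."""
    merged = heapq.merge(*[rows_by_id(p) for p in paths])
    current_id, group = None, []
    for key, row in merged:
        if key != current_id and group:
            yield current_id, group          # everything known about one id, from all files
            group = []
        current_id = key
        group.append(row)
    if group:
        yield current_id, group
```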
I am working with the XLWT, XLRD and XLUTILS packages. Whenever I write to a new sheet, all the formulas are obliterated.
I tried the following fixes, but they all failed:
Re-write all the formulas in with a loop:
Failure: XLWT Formula does not support advanced i.e. VLOOKUP Formulas
Doing the calculations all in Python: this is ridiculous
How can I preserve the formulas using the above packages? Can I use some other packages to solve my problem? Or, do I need to code my own solution? | 1 | 1 | 0.197375 | 0 | false | 11,596,820 | 0 | 1,819 | 1 | 0 | 0 | 11,527,100 | (a) xlrd does not currently support extracting formulas.
(b) You say "XLWT Formula does not support advanced i.e. VLOOKUP Formulas". This is incorrect. If you are the same person that I seem to have convinced that xlwt supports VLOOKUP etc after a lengthy exchange of private emails over the last few days, please say so. Otherwise please supply a valid (i.e. Excel accepts it) formula that xlwt won't parse correctly.
(c) Doing the calculations in Python is not ridiculous if the output is only for display. | 1 | 0 | 0 | Preserving Formula in Excel Python XLWT | 1 | python,excel,formula,xlrd,xlwt | 0 | 2012-07-17T16:44:00.000 |
How do I specify the column that I want in my query using a model (it selects all columns by default)? I know how to do this with the SQLAlchemy session: session.query(self.col1), but how do I do it with models? I can't do SomeModel.query(). Is there a way?
| 183 | 2 | 0.039979 | 0 | false | 68,064,416 | 1 | 191,024 | 1 | 0 | 0 | 11,530,196 |
result = ModelName.query.add_columns(ModelName.colname, ModelName.colname)
| 1 | 0 | 0 | Flask SQLAlchemy query, specify column names | 10 | python,sqlalchemy,flask-sqlalchemy | 0 | 2012-07-17T20:16:00.000 |
I have a data organization issue. I'm working on a client/server project where the server must maintain a copy of the client's filesystem structure inside of a database that resides on the server. The idea is to display the filesystem contents on the server side in an AJAX-ified web interface. Right now I'm simply uploading a list of files to the database where the files are dumped sequentially. The problem is how to recapture the filesystem structure on the server end once they're in the database. It doesn't seem feasible to reconstruct the parent->child structure on the server end by iterating through a huge list of files. However, when the file objects have no references to each other, that seems to be the only option.
I'm not entirely sure how to handle this. As near as I can tell, I would need to duplicate some type of filesystem data structure on the server side (in a Btree perhaps?) with objects maintaining pointers to their parents and/or children. I'm wondering if anyone has had any similar past experiences they could share, or maybe some helpful resources to point me in the right direction. | 4 | 2 | 0.197375 | 0 | false | 11,554,828 | 1 | 1,195 | 1 | 0 | 0 | 11,554,676 | I suggest to follow the Unix way. Each file is considered a stream of bytes, nothing more, nothing less. Each file is technically represented by a single structure called i-node (index node) that keeps all information related to the physical stream of the data (including attributes, ownership,...).
The i-node does not contain anything about the readable name. Each i-node is given a unique number (forever) that acts for the file as its technical name. You can use similar number to give the stream of bytes in database its unique identification. The i-nodes are stored on the disk in a separate contiguous section -- think about the array of i-node structures (in the abstract sense), or about the separate table in the database.
Back to the file. This way it is represented by a unique number. For your database representation, the number will be the unique key. If you need the other i-node information (file attributes), you can add the other columns to the table. One column will be of the blob type, and it will represent the content of the file (the stream of bytes). For AJAX, I guess that the files will be rather small; so, you should not have a problem with the size limits of the blob.
So far, the files are stored in a flat structure (as the physical disk is, and as the relational database is).
The structure of directory names and file names of the files is kept separately, in other files (kept in the same structure, together with the other files, represented also by their i-node). Basically, the directory file captures tuples (bare_name, i-node number). (This is how hard links are implemented in Unix -- two names are paired with the same i-node number.)
| 1 | 0 | 0 | Data structures in python: maintaining filesystem structure within a database | 2 | python,database,data-structures,filesystems | 0 | 2012-07-19T05:50:00.000 |
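A sketch of that two-table layout: a flat "i-node" table for the byte streams and attributes, plus a directory table of (parent, bare name, i-node) tuples. SQLite is used only to keep the example self-contained, and reserving i-node 1 for the root is an arbitrary convention.

```python
import sqlite3

ROOT_INODE = 1   # fixed technical name for the root directory

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- one row per stream of bytes, independent of any readable name
    CREATE TABLE inodes (
        inode   INTEGER PRIMARY KEY,
        is_dir  INTEGER NOT NULL,
        mode    INTEGER,
        content BLOB
    );
    -- the naming layer: (directory, bare name) -> inode, like Unix directory files
    CREATE TABLE dir_entries (
        parent_inode INTEGER NOT NULL REFERENCES inodes(inode),
        name         TEXT NOT NULL,
        inode        INTEGER NOT NULL REFERENCES inodes(inode),
        PRIMARY KEY (parent_inode, name)
    );
""")
conn.execute("INSERT INTO inodes (inode, is_dir) VALUES (?, 1)", (ROOT_INODE,))
conn.commit()
```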
I have a huge database in sqlite3 of 41 million rows in a table. However, it takes around 14 seconds to execute a single query. I need to significantly improve the access time! Is this a hard disk hardware limit or a processor limit? If it is a processor limit then I think I can use the 8 processors I have to parallelise the query. However I never found a way to parallelize queries in SQLite for python. Is there any way to do this? Can I have a coding example? Or are other database programs more efficient in access? If so then which ones? | 0 | 1 | 1.2 | 0 | true | 11,557,147 | 0 | 204 | 1 | 0 | 0 | 11,556,783 | Firstly, make sure any relevant indexes are in place to assist in efficient queries -- which may or may not help...
Other than that, SQLite is meant to be a (strangely) lite embedded SQL DB engine - 41 million rows is probably pushing it depending on number and size of columns etc...
You could take your DB and import it to PostgreSQL or MySQL, which are both open-source RDBMSs with Python bindings and extensive feature sets. They'll be able to handle queries, indexing, caching, memory management etc... on large data effectively. (Or at least, since they're designed for that purpose, more effectively than SQLite which wasn't...)
| 1 | 0 | 0 | reducing SQLITE3 access time in Python (Parallelization?) | 1 | python,sqlite,parallel-processing | 0 | 2012-07-19T08:25:00.000 |
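To illustrate the first point about indexes: with 41 million rows an unindexed filter means a full table scan, which by itself can account for multi-second queries. A hedged sketch with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect("big.db")

# One-time: index the column(s) your WHERE clauses filter on
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_user ON events(user_id)")
conn.commit()

# EXPLAIN QUERY PLAN shows whether SQLite actually uses the index
for row in conn.execute("EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?", (42,)):
    print(row)
```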
I am parsing .xlsx files using openpyxl. While writing into the xlsx files I need to maintain the same font colour as well as cell colour as was present in the cells of my input .xlsx files. Any idea how to extract the colour coding from a cell and then apply the same in another Excel file? Thanks in advance
colour = ws.cell(row=id,column=id).style.font.color
I am not sure how to access the cell colour though. | 1 | 0 | 0 | How to detect colours and then apply colours while working with .xlsx(excel-2007) files on python 3.2(windows 7) | 1 | python,python-3.x,excel-2007,openpyxl | 0 | 2012-07-19T13:47:00.000 |
I have a mysql table with columns name, perf, date_time. How can I retrieve only the most recent MySQL row?
| 3 | -1 | -0.033321 | 0 | false | 11,566,922 | 0 | 1,336 | 2 | 0 | 0 | 11,566,537 |
select top 1 * from tablename order by date_and_time DESC (for SQL Server)
select * from tablename order by date_and_time DESC limit 1 (for MySQL)
| 1 | 0 | 0 | Retrieving only the most recent row in MySQL | 6 | python,mysql | 0 | 2012-07-19T17:50:00.000 |
I have a mysql table with columns name, perf, date_time. How can I retrieve only the most recent MySQL row?
| 3 | 3 | 0.099668 | 0 | false | 11,566,549 | 0 | 1,336 | 2 | 0 | 0 | 11,566,537 |
SELECT * FROM table ORDER BY date, time LIMIT 1
| 1 | 0 | 0 | Retrieving only the most recent row in MySQL | 6 | python,mysql | 0 | 2012-07-19T17:50:00.000 |
I am creating a GUI that is dependent on information from a MySQL table. What I want to be able to do is display a message every time the table is updated with new data. I am not sure how to do this or even if it is possible. I have code that retrieves the newest MySQL update, but I don't know how to show a message every time new data comes into a table. Thanks!
| 1 | 3 | 1.2 | 0 | true | 11,567,806 | 0 | 1,004 | 1 | 0 | 0 | 11,567,357 |
Quite a simple and straightforward solution would be just to poll the latest autoincrement id from your table, and compare it with what you've seen at the previous poll. If it is greater -- you have new data. This is called 'active polling'; it's simple to implement and will suffice if you don't do this too often. So you have to store the last id value somewhere in your GUI. And note that this stored value will reset when you restart your GUI application -- be sure to think about what to do at the start of the GUI. Probably you will need to track only insertions that occur while the GUI is running -- then, at GUI startup you just need to poll and store the current id value, and then poll periodically and react to its changes.
| 1 | 0 | 0 | Scanning MySQL table for updates Python | 2 | python,mysql | 0 | 2012-07-19T18:48:00.000 |
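A sketch of that active polling, assuming an AUTO_INCREMENT id column and using MySQLdb; the connection details and table name are placeholders, and the function would typically be driven by the GUI toolkit's timer rather than a sleep loop.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
last_seen = None

def poll_for_new_rows():
    """Return the rows inserted since the previous poll (empty on the first call)."""
    global last_seen
    cur = conn.cursor()
    cur.execute("SELECT MAX(id) FROM messages")        # placeholder table
    (newest,) = cur.fetchone()
    if newest is None:
        return []                                      # table is empty
    if last_seen is None or newest <= last_seen:
        last_seen = newest                             # first poll, or nothing new: just remember it
        return []
    cur.execute("SELECT id, body FROM messages WHERE id > %s ORDER BY id", (last_seen,))
    rows = cur.fetchall()
    last_seen = newest
    return rows
```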
Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B?
If so, how do I go about it programmatically?
| 0 | 0 | 0 | 0 | false | 11,585,571 | 0 | 214 | 2 | 0 | 0 | 11,585,494 |
Without doing something like replicating database A onto the same server as database B and then doing the JOIN, this would not be possible.
| 1 | 0 | 0 | MySQL Joins Between Databases On Different Servers Using Python? | 2 | mysql,python-2.7 | 0 | 2012-07-20T19:04:00.000 |
Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B?
If so, how do I go about it programmatically?
| 0 | 0 | 0 | 0 | false | 11,585,697 | 0 | 214 | 2 | 0 | 0 | 11,585,494 |
I don't know Python, so I'm going to assume that when you do a query it comes back to Python as an array of rows.
You could query table A and after applying whatever filters you can, return that result to the application. Same to table B. Create a 3rd Array, loop through A, and if there is a joining row in B, add that joined row to the 3rd array. In the end the 3rd array would have the equivalent of a join of the two tables. It's not going to be very efficient, but might work okay for small recordsets. | 1 | 0 | 0 | MySQL Joins Between Databases On Different Servers Using Python? | 2 | mysql,python-2.7 | 0 | 2012-07-20T19:04:00.000 |
I'm working on my thesis, where a Python application that connects to other Linux servers over SSH is implemented. The question is about storing the passwords in the database (whatever kind, let's say MySQL for now). Keeping them unencrypted is certainly a bad idea. But what can I do to feel comfortable storing this kind of confidential data and using it later to connect to other servers? When I encrypt a password I won't be able to use it to log in to the other machine.
Is the public/private keys set the only option in this case? | 1 | 3 | 1.2 | 0 | true | 11,588,240 | 0 | 454 | 1 | 0 | 0 | 11,587,845 | In my opinion using key authentication is the best and safest in my opinion for the SSH part and is easy to implement.
Now to the meat of your question. You want to store these keys, or passwords, into a database and still be able to use them. This requires you to have a master password that can decrypt them from said database. This points a point of failure into a single password which is not ideal. You could come up with any number of fancy schemes to encrypt and store these master passwords, but they are still on the machine that is used to log into the other servers and thus still a weak point.
Instead of looking at this from the password storage point of view, look at it from a server security point of view. If someone has access to the server with the python daemon running on it then they can log into any other server thus this is more of a environment security issue than a password one.
If you can think of a way to get rid of this singular point of failure then encrypting and storing the passwords in a remote database will be fine as long as the key/s used to encrypt them are secure and unavailable to anyone else which is outside the realm of the python/database relationship. | 1 | 0 | 0 | Storing encrypted passwords storage for remote linux servers | 1 | python,mysql,ssh,password-protection | 0 | 2012-07-20T22:40:00.000 |
I am using python and sqlite3 to handle a website. I need all timezones to be in localtime, and I need daylight savings to be accounted for. The ideal method to do this would be to use sqlite to set a global datetime('now') to be +10 hours.
If I can work out how to change sqlite's 'now' with a command, then I was going to use a cronjob to adjust it (I would happily go with an easier method if anyone has one, but cronjob isn't too hard) | 1 | 2 | 0.197375 | 0 | false | 21,014,456 | 0 | 3,357 | 1 | 0 | 0 | 11,590,082 | you can try this code, I am in Taiwan , so I add 8 hours:
DateTime('now','+8 hours') | 1 | 0 | 0 | sqlite timezone now | 2 | python,sqlite,timezone,dst | 0 | 2012-07-21T06:48:00.000 |
I am looking for a method of checking all the fields in a MySQL table. Let's say I have a MySQL table with the fields One Two Three Four Five and Big One. These are fields that contains numbers that people enter in, sort of like the Mega Millions. Users enter numbers and it inserts the numbers they picked from least to greatest.
Numbers would be drawn and I need a way of checking if any of the numbers that each user picked matched the winning numbers drawn, same for the Big One. If any matched, I would have it do something specific, like if one number or all numbers matched.
I hope you understand what I am saying. Thank you. | 1 | 0 | 0 | 0 | false | 11,919,262 | 0 | 82 | 1 | 0 | 0 | 11,597,835 | I would imagine MySQL has some sort of 'set' logic in it, but if it's lacking, I know Python has sets, so I'll use an example of those in my solution:
Create a set with the numbers of the winning ticket:
winners = set([11, 22, 33, 44, 55])
For each row fetched from the query, put all of its numbers into a set too:
current_user = set([row[0], row[1], row[2], row[3], row[4]])
Print out how many overlapping numbers there are:
print len(winners.intersection(current_user))
And finally, for the 'big one', use an if statement.
Let me know if this helps. | 1 | 0 | 0 | Python MySQL Number Matching | 1 | python,mysql | 0 | 2012-07-22T04:49:00.000 |
I'm making a company back-end that should include a password-safe type feature. Obviously the passwords need to be plain text so the users can read them, or at least "reversible" to plain text somehow, so I can't use hashes.
Is there anything more secure I can do than just placing the passwords in plain-text into the database?
Note: These are (mostly) auto-generated passwords that are never re-used for anything except the purpose they are saved for, which is mostly FTP server credentials.
| 0 | 2 | 0.197375 | 0 | false | 11,603,255 | 0 | 105 | 2 | 0 | 0 | 11,603,136 |
You can use MySQL's ENCODE(), DES_ENCRYPT() or AES_ENCRYPT() functions, and store the keys used to encrypt in a secure location.
| 1 | 0 | 0 | What security measures can I take to secure passwords that can't be hashed in a database? | 2 | python,mysql,hash,passwords | 0 | 2012-07-22T18:59:00.000 |
I am wondering what the most reliable way to generate a timestamp is using Python. I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it.
I imagine it is either datetime, or the time module, but I can't figure out which I'd use in this circumstance, nor the method. | 0 | 0 | 0 | 0 | false | 11,642,138 | 0 | 390 | 2 | 0 | 0 | 11,642,105 | For a database, your best bet is to store it in the database-native format, assuming its precision matches your needs. For a SQL database, the DATETIME type is appropriate.
EDIT: Or TIMESTAMP. | 1 | 0 | 1 | Most reliable way to generate a timestamp with Python | 3 | python,time | 0 | 2012-07-25T03:04:00.000 |
I am wondering what the most reliable way to generate a timestamp is using Python. I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it.
I imagine it is either datetime, or the time module, but I can't figure out which I'd use in this circumstance, nor the method. | 0 | 0 | 0 | 0 | false | 11,642,253 | 0 | 390 | 2 | 0 | 0 | 11,642,105 | if it's just a simple timestamp that needs to be read by multiple programs, but which doesn't need to "mean" anything in sql, and you don't care about different timezones for different users or anything like that, then seconds from the unix epoch (start of 1970) is a simple, common standard, and is returned by time.time().
python actually returns a float (at least on linux), but if you only need accuracy to the second store it as an integer.
if you want something that is more meaningful in sql then use a sql type like datetime or timestamp. that lets you do more "meaningful" queries (like query for a particular day) more easily (you can do them with seconds from epoch too, but it requires messing around with conversions), but it also gets more complicated with timezones and converting into different formats in different languages. | 1 | 0 | 1 | Most reliable way to generate a timestamp with Python | 3 | python,time | 0 | 2012-07-25T03:04:00.000 |
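For illustration, a small sketch combining both answers above (the table and column names are placeholders); most DB-API drivers accept a datetime object for a DATETIME/TIMESTAMP column, while an epoch column just stores an integer:
import time
from datetime import datetime

epoch_seconds = int(time.time())      # portable integer, easy for other languages to parse
utc_now = datetime.utcnow()           # maps onto a SQL DATETIME/TIMESTAMP column

# e.g. with a MySQLdb or psycopg2 cursor (parameter style %s):
# cur.execute("INSERT INTO events (created_epoch, created_at) VALUES (%s, %s)",
#             (epoch_seconds, utc_now))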
I am building a system where entries are added to a SQL database sporadically throughout the day. I am trying to create a system which imports these entries to SOLR each time.
I cant seem to find any infomation about adding individual records to SOLR from SQL. Can anyone point me in the right direction or give me a bit more information to get me going?
Any help would be much appreciated,
James | 1 | 0 | 0 | 0 | false | 11,679,439 | 0 | 606 | 1 | 0 | 0 | 11,647,112 | Besides DIH, you could setup a trigger in your db to fire Solr's REST service that would update changed docs for all inserted/updated/deleted documents.
Also, you could setup a Filter (javax.servlet spec) in your application to intercept server requests and push them to Solr before they even reach database (it can even be done in the same transaction, but there's rarely a real need for that, eventual consistency is usually fine for search engines). | 1 | 0 | 0 | SOLR - Adding a single entry at a time | 4 | python,search,solr | 0 | 2012-07-25T09:52:00.000 |
I am trying to connect to MySQL in Django. It asked me to install the module, and the module prerequisites are "MySQL 3.23.32 or higher" etc. Do I really need to install MySQL locally, or can I just connect to a remote one? | 0 | 4 | 0.664037 | 0 | false | 11,653,215 | 1 | 62 | 1 | 0 | 0 | 11,653,040 | You need to install the client libraries. The Python module is a wrapper around the client libraries. You don't need to install the server.
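In practice that means installing MySQL-python (which needs the client libraries to build) and pointing Django's settings at the remote server; the host name and credentials below are placeholders:
# settings.py (Django 1.3/1.4 era)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'db.example.com',   # the remote MySQL server
        'PORT': '3306',
    }
}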
when I try to install the pyodbc by using "python setup.py build install", it shows up with some errors like the following:
gcc -pthread -fno-strict-aliasing -DNDEBUG -march=i586 -mtune=i686 -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fwrapv -fPIC -DPYODBC_VERSION=3.0.3 -I/usr/include/python2.6 -c /root/Desktop/pyodbc-3.0.3/src/sqlwchar.cpp -o build/temp.linux-i686-2.6/root/Desktop/pyodbc-3.0.3/src/sqlwchar.o -Wno-write-strings
In file included from /root/Desktop/pyodbc-3.0.3/src/sqlwchar.cpp:2:
/root/Desktop/pyodbc-3.0.3/src/pyodbc.h:41:20: error: Python.h: No such file or directory
/root/Desktop/pyodbc-3.0.3/src/pyodbc.h:42:25: error: floatobject.h: No such file or directory
/root/Desktop/pyodbc-3.0.3/src/pyodbc.h:43:24: error: longobject.h: No such file or directory
/root/Desktop/pyodbc-3.0.3/src/pyodbc.h:44:24: error: boolobject.h: No such file or directory
and few more lines with similar feedback, in the end of the reply is like:
/root/Desktop/pyodbc-3.0.3/src/pyodbccompat.h:106: error: expected ‘,’ or ‘;’ before ‘{’ token
error: command 'gcc' failed with exit status 1
and I have searched around for the solutions, everyone says to install python-devel and it will be fine, but I got this working on a 64bit opensuse without the python-devel,but it doesn't work on the 32bit one, and I couldn't found the right version for python2.6.0-8.12.2 anywhere on the internet... so I'm quite confused, please help! thanks in advance. | 1 | 2 | 1.2 | 0 | true | 11,691,895 | 0 | 4,644 | 1 | 1 | 0 | 11,691,039 | I don't see a way around having the Python header files (which are part of python-devel package). They are required to compile the package.
Maybe there was a pre-compiled egg for the 64bit version somewhere, and this is how it got installed.
Why are you reluctant to install python-devel? | 1 | 0 | 0 | Error when installing pyodbc on opensuse | 2 | python,pyodbc,opensuse | 0 | 2012-07-27T15:32:00.000 |
I'm writing an application that makes heavy use of geodjango (on PostGis) and spatial lookups. Distance queries on database side work great, but now I have to calculate distance between two points on python side of application (these points come from models obtained using separate queries).
I can think of many ways that would calculate this distance, but I want to know do it in manner that is consistent with what the database will output.
Is there any magic python function that calculates distance between two points given in which SRID they are measured? If not what other approach could you propose. | 2 | 0 | 0 | 0 | false | 11,703,980 | 0 | 1,898 | 1 | 0 | 0 | 11,703,407 | Use the appropriate data connection to execute the SQL function that you're already using, then retrieve that... Keeps everything consistent. | 1 | 0 | 0 | How to calculate distance between points on python side of my application in way that is consistent in what database does | 3 | python,django,gis,postgis,geodjango | 0 | 2012-07-28T17:57:00.000 |
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc.
Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128) | 1 | 0 | 0 | 1 | false | 18,619,898 | 0 | 1,903 | 1 | 0 | 0 | 11,705,114 | Are all these "junk" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft "Smart Quotes" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usual effect is that the text display gets cut off (Latin-1) or you see a ?-in-black-diamond-invalid-character (UTF-8).
Note that Word and Outlook, and some other MS products, provide a UTF-8 version of the text for clipboard use. Instead of <80> to <9F> codes, Smart Quotes characters will be proper multibyte UTF-8 sequences. If your Web page is in UTF-8, you should normally get a proper UTF-8 character instead of the Smart Quote in Windows-125x encoding. Also note that this is not guaranteed behavior, but "seems to work pretty consistently". It all depends on a UTF-8 version of the text being available, and properly handled (i.e., you didn't paste into, say, gvim on the PC, and then copy/paste into a Web text form). This may well also work for various PC applications, so long as they are looking for UTF-8-encoded text. | 1 | 0 | 0 | Junk characters (smart quotes, etc.) in output file | 4 | python,mysql,vim,encoding,smart-quotes | 0 | 2012-07-28T22:20:00.000 |
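A hedged Python 2 fix along the lines of the answer above, assuming the bytes coming out of MySQL really are Windows-1252: decode them as cp1252 (which gives the <80>-<9F> range its smart-quote meanings) and re-encode as UTF-8 before writing the CSV:
def clean(raw_bytes):
    # cp1252 is a superset of latin-1 that maps <80>-<9F> to curly quotes, dashes, etc.
    return raw_bytes.decode('cp1252').encode('utf-8')

# row = [clean(field) if isinstance(field, str) else field for field in row]
# csv_writer.writerow(row)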
I have an items table that is related to an item_tiers table. The second table consists of inventory receipts for an item in the items table. There can be 0 or more records in the item_tiers table related to a single record in the items table. How can I, using query, get only records that have 1 or more records in item tiers....
results = session.query(Item).filter(???).join(ItemTier)
Where the filter piece, in pseudo code, would be something like ...
if the item_tiers table has one or more records related to item. | 1 | 1 | 1.2 | 0 | true | 11,747,157 | 1 | 144 | 1 | 0 | 0 | 11,746,610 | If there is a foreign key defined between tables, SA will figure the join condition for you, no need for additional filters.
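So the pseudo-filter from the question is not needed; the inner join itself restricts the result to items that have at least one tier. A sketch, reusing the question's names (distinct() guards against duplicate Item rows when an item has several tiers):
results = session.query(Item).join(ItemTier).distinct().all()
# If a relationship named "tiers" is configured on Item, an EXISTS-style
# alternative would be: session.query(Item).filter(Item.tiers.any())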
There is, and i was really over thinking this. Thanks for the fast response. – Ominus | 1 | 0 | 0 | SQLAlchemy - Query show results where records exist in both table | 2 | python,sqlalchemy | 0 | 2012-07-31T18:24:00.000 |
I currently run my own server "in the cloud" with PHP using mod_fastcgi and mod_vhost_alias. My mod_vhost_alias config uses a VirtualDocumentRoot of /var/www/%0/htdocs so that I can serve any domain that routes to my server's IP address out of a directory with that name.
I'd like to begin writing and serving some Python projects from my server, but I'm unsure how to configure things so that each site has access to the appropriate script processor.
For example, for my blog, dead-parrot.com, I'm running a PHP blog platform (Habari, not WordPress). But I'd like to run an app I've written in Flask on not-dead-yet.com.
I would like to enable Python execution with as little disruption to my mod_vhost_alias configuration as possible, so that I can continue to host new domains on this server simply by adding an appropriate directory. I'm willing to alter the directory structure, if necessary, but would prefer not to add additional, specific vhost config files for every new Python-running domain, since apart from being less convenient than my current setup with just PHP, it seems kind of hacky to have to name these earlier alphabetically to get Apache to pick them up before the single mod_vhost_alias vhost config.
Do you know of a way that I can set this up to run Python and PHP side-by-side as conveniently as I do just PHP? Thanks! | 3 | 0 | 0 | 0 | false | 36,646,397 | 1 | 6,266 | 1 | 0 | 0 | 11,796,126 | Even I faced the same situation, and initially I was wondering in google but later realised and fixed it, I'm using EC2 service in aws with ubuntu and I created alias to php and python individually and now I can access both. | 1 | 0 | 0 | Can I run PHP and Python on the same Apache server using mod_vhost_alias and mod_wsgi? | 2 | php,python,apache,mod-vhost-alias | 1 | 2012-08-03T12:53:00.000 |
I have really big collection of files, and my task is to open a couple of random files from this collection treat their content as a sets of integers and make an intersection of it.
This process is quite slow due to long times of reading files from disk into memory so I'm wondering whether this process of reading from file can be speed up by rewriting my program in some "quick" language. Currently I'm using python which could be inefficient for this kind of job. (I could implement tests myself if I knew some other languages beside python and javascript...)
Also will putting all the date into database help? Files wont fit the RAM anyway so it will be reading from disk again only with database related overhead.
The content of files is the list of long integers. 90% of the files are quite small, less than a 10-20MB, but 10% left are around 100-200mb. As input a have filenames and I need read each of the files and output integers present in every file given.
I've tried to put this data in mongodb but that was as slow as plain files based approach because I tried to use mongo index capabilities and mongo does not store indexes in RAM.
Now I just cut the 10% of the biggest files and store rest in the redis, sometimes accessing those big files. This is, obviously temporary solution because my data grows and amount of RAM available does not. | 4 | 3 | 1.2 | 0 | true | 11,805,422 | 0 | 213 | 1 | 0 | 0 | 11,805,309 | One thing you could try is calculating intersections of the files on a chunk-by-chunk basis (i.e., read x-bytes into memory from each, calculate their intersections, and continue, finally calculating the intersection of all intersections).
Or, you might consider using some "heavy-duty" libraries to help you. Consider looking into PyTables (with HDF storage)/using numpy for calculating intersections. The benefit there is that the HDF layer should help deal with not keeping the entire array structure in memory all at once---though I haven't tried any of these tools before, it seems like they offer what you need. | 1 | 0 | 1 | Is speed of file opening/reading language dependent? | 2 | python,file,file-io,io,filesystems | 0 | 2012-08-04T01:52:00.000 |
I have some SQL Server tables that contain Image data types.
I want to make it somehow usable in PostgreSQL. I'm a python programmer, so I have a lot of learn about this topic. Help? | 0 | 0 | 0 | 0 | false | 15,846,639 | 0 | 297 | 1 | 0 | 0 | 11,805,709 | What you need to understand first is that the interfaces at the db level are likely to be different. Your best option is to write an abstraction layer for the blobs (and maybe publish it open source for the dbs you want to support).
On the PostgreSQL side you need to figure out whether you want to go with bytea or lob. These are very different and have different features and limitations. If you are enterprising, you might build in at least support in the spec for selecting between them. In general, bytea is better for smaller files, while lob has more management overhead but supports both larger files and chunking, seeking, etc. | 1 | 0 | 0 | How can I select and insert BLOB between different databases using python? | 1 | python,sql-server,postgresql,blob | 0 | 2012-08-04T03:36:00.000
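For the bytea route mentioned above, a minimal psycopg2 sketch (the table name and file are hypothetical); psycopg2.Binary wraps the raw bytes pulled out of the SQL Server image column:
import psycopg2

image_bytes = open('logo.png', 'rb').read()   # e.g. the value read from SQL Server

pg = psycopg2.connect("dbname=target user=me")
cur = pg.cursor()
cur.execute("INSERT INTO images (name, data) VALUES (%s, %s)",
            ("logo.png", psycopg2.Binary(image_bytes)))
pg.commit()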
In the High-Replication Datastore (I'm using NDB), the consistency is eventual. In order to get a guaranteed complete set, ancestor queries can be used. Ancestor queries also provide a great way to get all the "children" of a particular ancestor with kindless queries. In short, being able to leverage the ancestor model is hugely useful in GAE.
The problem I seem to have is rather simplistic. Let's say I have a contact record and a message record. A given contact record is being treated as the ancestor for each message. However, it is possible that two contacts are created for the same person (user error, different data points, whatever). This situation produces two contact records, which have messages related to them.
I need to be able to "merge" the two records, and bring put all the messages into one big pile. Ideally, I'd be able to modify ancestor for one of the record's children.
The only way I can think of doing this, is to create a mapping and make my app check to see if record has been merged. If it has, look at the mappings to find one or more related records, and perform queries against those. This seems hugely inefficient. Is there more of "by the book" way of handling this use case? | 5 | 9 | 1.2 | 0 | true | 11,855,209 | 1 | 1,422 | 1 | 1 | 0 | 11,854,137 | The only way to change the ancestor of an entity is to delete the old one and create a new one with a new key. This must be done for all child (and grand child, etc) entities in the ancestor path. If this isn't possible, then your listed solution works.
This is required because the ancestor path of an entity is part of its unique key. Parents of entities (i.e., entities in the ancestor path) need not exist, so changing a parent's key will leave the children in the datastore with no parent. | 1 | 0 | 0 | How to change ancestor of an NDB record? | 1 | python,google-app-engine,google-cloud-datastore | 0 | 2012-08-07T21:04:00.000 |
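A hedged sketch of that delete-and-recreate step with NDB, assuming Message is an ndb.Model for the message records in the question (to_dict() copies plain properties, so computed properties would need special handling):
from google.appengine.ext import ndb

def reparent_message(message, new_contact_key):
    # The key (and therefore the ancestor path) is immutable, so copy then delete.
    clone = Message(parent=new_contact_key, **message.to_dict())
    clone.put()
    message.key.delete()
    return clone

# To merge two contacts, move every child message under the surviving contact:
# for msg in Message.query(ancestor=old_contact_key):
#     reparent_message(msg, surviving_contact_key)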
I am looking for a pure-python SQL library that would give access to both MySQL and PostgreSQL.
The only requirement is to run on Python 2.5+ and be pure-python, so it can be included with the script and still run on most platforms (no-install).
In fact I am looking for a simple solution that would allow me to write SQL and export the results as CSV files. | 3 | 1 | 0.066568 | 0 | false | 11,870,176 | 0 | 2,182 | 1 | 0 | 0 | 11,868,582 | Use SQL-Alchemy. It will work with most database types, and certainly does work with postgres and MySQL. | 1 | 0 | 0 | Pure python SQL solution that works with PostgreSQL and MySQL? | 3 | python,mysql,postgresql | 0 | 2012-08-08T16:03:00.000 |
I am relatively new to Django and one thing that has been on my mind is changing the database that will be used when running the project.
By default, the DATABASES 'default' is used to run my test project. But in the future, I want to be able to define a 'production' DATABASES configuration and have it use that instead.
In a production environment, I won't be able to "manage.py runserver" so I can't really set the settings.
I read a little bit about "routing" the database to use another database, but is there an easier way so that I won't need to create a new router every time I have another database I want to use (e.g. I can have test database, production database, and development database)? | 0 | 1 | 0.066568 | 0 | false | 11,878,547 | 1 | 304 | 1 | 0 | 0 | 11,878,454 | You can just use a different settings.py in your production environment.
Or - which is a bit cleaner - you might want to create a file settings_local.py next to settings.py where you define a couple of settings that are specific for the current machine (like DEBUG, DATABASES, MEDIA_ROOT etc.) and do a from settings_local import * at the beginning of your generic settings.py file. Of course settings.py must not overwrite these imported settings. | 1 | 0 | 0 | How do I make Django use a different database besides the 'default'? | 3 | python,database,django,configuration | 0 | 2012-08-09T07:14:00.000 |
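A sketch of that layout; the file name is just a convention, not something Django mandates, and the database values are placeholders:
# settings_local.py -- machine-specific, not committed to version control
DEBUG = False
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myapp_production',
        'USER': 'myapp',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    }
}

# settings.py -- shared settings; pull in the machine-specific ones first
# from settings_local import *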
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.
Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections? | 8 | 1 | 0.039979 | 0 | false | 11,889,137 | 1 | 9,082 | 3 | 0 | 0 | 11,889,104 | I think connection pooling is the best thing to do if this application is to serve multiple clients and concurrently. | 1 | 0 | 0 | Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request? | 5 | python,postgresql,web-applications,flask,psycopg2 | 0 | 2012-08-09T17:48:00.000 |
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.
Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections? | 8 | 3 | 0.119427 | 0 | false | 11,889,659 | 1 | 9,082 | 3 | 0 | 0 | 11,889,104 | The answer depends on how many such requests will happen, and how many of them concurrently, in your web app. Connection pooling is usually a better idea if you expect your web app to be busy with hundreds or even thousands of users concurrently logged in. If you are only doing this as a side project and expect fewer than a few hundred users, you can probably get away without pooling.
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly.
Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections? | 8 | 0 | 0 | 0 | false | 61,078,209 | 1 | 9,082 | 3 | 0 | 0 | 11,889,104 | Pooling seems to be totally impossible in context of Flask, FastAPI and everything relying on wsgi/asgi dedicated servers with multiple workers.
The reason for this behaviour is simple: you have no control over the pooling in the master thread/process.
A pooling instance is only usable for a single thread serving a set of clients - so for just one worker. Any other worker will get its own pool, and therefore there cannot be any sharing of established connections.
Logically it's also impossible, because you cannot share these object states across threads/processes in multi core env with python (2.x - 3.8). | 1 | 0 | 0 | Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request? | 5 | python,postgresql,web-applications,flask,psycopg2 | 0 | 2012-08-09T17:48:00.000 |
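For completeness, the per-process pooling that the earlier answers recommend is available out of the box in psycopg2; a minimal per-request pattern might look like this, with the DSN as a placeholder:
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(1, 10,
                              dsn="dbname=app user=app password=secret host=localhost")

def handle_request():
    conn = pool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        return cur.fetchone()
    finally:
        pool.putconn(conn)      # return the connection instead of closing it
Each worker process would create its own pool, which is usually acceptable as long as maxconn times the worker count stays below PostgreSQL's connection limit.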
I've searched and I can't seem to find anything.
Here is the situation:
t1 = table 1
t2 = table 2
v = view of table 1 and table 2 joined
1.) User 1 is logged into database. Does SELECT * FROM v;
2.) User 2 is logged into same database and does INSERT INTO t1 VALUES(1, 2, 3);
3.) User 1 does another SELECT * FROM v; User 1 can't see the inserted row from User 2 until logging out and logging back in.
Seems like views don't get sync'd across "sessions"? How can I make it so User 1 can see the INSERT?
FYI I'm using python and mysqldb. | 2 | 1 | 1.2 | 0 | true | 11,979,334 | 0 | 542 | 1 | 0 | 0 | 11,979,276 | Instead of logging out and logging back in, user 2 could simply commit their transaction.
MySQL InnoDB tables use transactions, requiring a BEGIN before one or more SQL statements, and either COMMIT or ROLLBACK afterwards, resulting in all your updates/inserts/deletes either happening or not. But there's a "feature" that results in an automatic BEGIN if not explicitly issued, and an automatic COMMIT when the connection is closed. This is why you see the changes after the other user closes the connection.
You should really get into the habit of explicitly beginning and committing your transactions, but there's also another way: set connection.autocommit = True, which will result in every sql update/insert/delete being wrapped in its own implicit transaction, resulting in the behavior you originally expected.
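With MySQLdb specifically, that looks roughly like this (note that MySQLdb exposes autocommit as a method on the connection rather than a plain attribute):
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='u', passwd='p', db='test')
conn.autocommit(True)        # every statement commits immediately
# ...or keep autocommit off and commit explicitly:
cur = conn.cursor()
cur.execute("INSERT INTO t1 VALUES (1, 2, 3)")
conn.commit()                # other sessions (and the view) now see the row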
Don't take what I said above to be entirely factually correct, but it suffices to explain the fundamentals of what's going on and how to control it. | 1 | 0 | 0 | MySQL view doesn't update when underlaying table changes across different users | 2 | python,mysql,mysql-python | 0 | 2012-08-16T00:51:00.000 |
I am fairly new to databases and have just figured out how to use MongoDB in python2.7 on Ubuntu 12.04. An application I'm writing uses multiple python modules (imported into a main module) that connect to the database. Basically, each module starts by opening a connection to the DB, a connection which is then used for various operations.
However, when the program exits, the main module is the only one that 'knows' about the exiting, and closes its connection to MongoDB. The other modules do not know this and have no chance of closing their connections. Since I have little experience with databases, I wonder if there are any problems leaving connections open when exiting.
Should I:
Leave it like this?
Instead open the connection before and close it after each operation?
Change my application structure completely?
Solve this in a different way? | 3 | 3 | 1.2 | 0 | true | 11,989,459 | 0 | 1,207 | 1 | 0 | 0 | 11,989,408 | You can use one pymongo connection across different modules. You can open it in a separate module and import it into the other modules on demand. After the program has finished working, you can close it. This is the best option.
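A sketch of that shared-module pattern (pymongo of that era exposes Connection; newer releases call it MongoClient, and the database name below is hypothetical):
# db.py -- the only place that opens the connection
from pymongo import Connection

connection = Connection('localhost', 27017)
db = connection['myapp']

# any other module:
# from db import db
# db.users.find_one({'name': 'alice'})

# main module, on shutdown:
# from db import connection
# connection.disconnect()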
About other questions:
You can leave it like this (all connections will be closed when the script finishes execution), but leaving something unclosed is bad form.
You can open/close a connection for each operation (but establishing a connection is a time-expensive operation).
That's what I'd advise (see this answer's first paragraph).
I think this point can be merged with 3. | 1 | 0 | 0 | When to disconnect from mongodb | 1 | python,mongodb,pymongo | 0 | 2012-08-16T14:29:00.000 |
Could any one shed some light on how to migrate my MongoDB to PostgreSQL? What tools do I need, what about handling primary keys and foreign key relationships, etc?
I had MongoDB set up with Django, but would like to convert it back to PostgreSQL. | 2 | 1 | 0.099668 | 0 | false | 15,858,338 | 1 | 1,475 | 1 | 0 | 0 | 12,034,390 | Whether the migration is easy or hard depends on a very large number of things including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages:
Ensure that all the Mongo data is consistent in structure with your RDBMS model and that the data structure versions are all the same.
Move your data. Expect that problems will be found and you will have to go back to step 1.
The primary problems you can expect are data validation problems because you are moving from a less structured data platform to a more structured one.
Depending on what you are doing regarding MapReduce you may have some work there as well. | 1 | 0 | 0 | From MongoDB to PostgreSQL - Django | 2 | python,django,mongodb,database-migration,django-postgresql | 0 | 2012-08-20T08:25:00.000 |
I have two programs: the first only writes to the SQLite db, and the second only reads. Can I be sure that there will never be any errors, or how do I avoid them (in Python)? | 3 | 1 | 0.099668 | 0 | false | 12,047,988 | 0 | 383 | 1 | 0 | 0 | 12,046,760 | Generally, it is safe if there is only one program writing to the SQLite db at any one time.
(If not, it will raise an exception like "database is locked" when two write operations try to write at the same time.)
By the way, there is no way to guarantee the program will never have errors. Using try ... except to handle exceptions will make the program much safer. | 1 | 0 | 0 | sqlite3: safe multitask read & write - how to? | 2 | python,concurrency,sqlite | 0 | 2012-08-20T23:51:00.000
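A sketch of the defensive pattern the answer above suggests: give SQLite a busy timeout and catch the "database is locked" error explicitly (the table name is hypothetical):
import sqlite3

con = sqlite3.connect('shared.db', timeout=10)    # wait up to 10s on a locked database
try:
    with con:                                     # commits on success, rolls back on error
        con.execute("INSERT INTO log (msg) VALUES (?)", ("hello",))
except sqlite3.OperationalError as e:             # e.g. "database is locked"
    print "write failed:", e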
So I'm using xlrd to pull data from an Excel sheet. I get it open and it pulls the data perfectly fine.
My problem is the sheet updates automatically with data from another program. It is updating stock information using an rtd pull.
Has anyone ever figured out any way to pull data from a sheet like this that is up-to-date? | 0 | 1 | 0.197375 | 0 | false | 12,049,844 | 0 | 364 | 1 | 0 | 0 | 12,049,067 | Since all that xlrd can do is read a file, I'm assuming that the excel file is saved after each update.
If so, use os.stat() on the file before reading it with xlrd and save the results (or at least those of os.stat().st_mtime). Then periodically use os.stat() again, and check if the file modification time (os.stat().st_mtime) has changed, indicating that the file has been changed. If so, re-read the file with xlrd. | 1 | 0 | 0 | Pulling from an auto-updating Excel sheet | 1 | python,excel | 0 | 2012-08-21T05:48:00.000 |
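A sketch of that polling loop with xlrd; the file name and poll interval are placeholders, and it still relies on Excel actually saving the workbook to disk:
import os
import time
import xlrd

path = 'stocks.xlsx'
last_mtime = None

while True:
    mtime = os.stat(path).st_mtime
    if mtime != last_mtime:
        last_mtime = mtime
        book = xlrd.open_workbook(path)    # re-read only when the file changed
        sheet = book.sheet_by_index(0)
        print sheet.cell_value(0, 0)
    time.sleep(5)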
I am currently sitting in front of a more specific problem which has to do with fail-over support / redundancy for a specific web site which will be hosted over @ WebFaction. Unfortunately replication at the DB level is not an option as I would have to install my own local PostgreSQL instances for every account and I am worried about performance amongst other things. So I am thinking about using Django's multi-db feature and routing all writes to all (shared) databases and the balance the reads to the nearest db.
My problem is now that all docs I read seem to indicate that this would most likely not be possible. To be more precise what I would need:
route all writes to a specific set of dbs (same type, version, ...)
if one write fails, all the others will be rolled back (transactions)
route all reads to the nearest db (could be statically configured)
Is this currently possible with Django's multi-db support?
Thanks a lot in advance for any help/hints... | 1 | 1 | 0.197375 | 0 | false | 12,934,130 | 1 | 345 | 1 | 0 | 0 | 12,070,031 | I was looking for something similar. What I found is:
1) Try something like Xeround cloud DB - it's built on MySQL and is compatible but doesn't support savepoints. You have to disable this in (a custom) DB engine. The good thing is that they replicate at the DB level and provide automatic scalability and failover. Your app works as if there's a single DB. They are having some connectivity issues at the moment though which are blocking my migration.
2) django-synchro package - looks promising for replication at the app layer, but I have some concerns about it. It doesn't work on objects.update(), which I use a lot in my code. | 1 | 0 | 0 | Django multi-db: Route all writes to multiple databases | 1 | python,django,redundancy,webfaction,django-orm | 0 | 2012-08-22T09:24:00.000
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo?
I've read all about both but there's a lot I don't understand. Mongo sounds better/easier but at the same time it sounds better/easier for those that already know Relational Databases very well and are looking for something more agile. | 3 | 1 | 0.049958 | 0 | false | 12,078,992 | 1 | 5,188 | 2 | 0 | 0 | 12,078,928 | Postgres is a great database for Django in production. sqlite is amazing to develop with. You will be doing a lot of work to try to not use a RDBMS on your first Django site.
One of the greatest strengths of Django is the smooth full-stack integration, great docs, contrib apps, app ecosystem. Choosing Mongo, you lose a lot of this. GeoDjango also assumes SQL and really loves postgres/postgis above others - and GeoDjango is really awesome.
If you want to use Mongo, I might recommend that you start with something like bottle, flask, tornado, cyclone, or other that are less about the full-stack integration and less assuming about you using a certain ORM. The Django tutorial, for instance, assumes that you are using the ORM with a SQL DB. | 1 | 0 | 0 | First time Django database SQL or NoSQL? | 4 | python,sql,django,nosql | 0 | 2012-08-22T18:07:00.000 |
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo?
I've read all about both but there's a lot I don't understand. Mongo sounds better/easier but at the same time it sounds better/easier for those that already know Relational Databases very well and are looking for something more agile. | 3 | 0 | 0 | 0 | false | 12,079,233 | 1 | 5,188 | 2 | 0 | 0 | 12,078,928 | sqlite is the simplest to start with. If you already know SQL toss a coin to choose between MySQL and Postgres for your first project! | 1 | 0 | 0 | First time Django database SQL or NoSQL? | 4 | python,sql,django,nosql | 0 | 2012-08-22T18:07:00.000 |
I need python and php support. I am currently using mongodb and it is great for my data (test results), but I need to store results of a different type of test which are over 32 MB and exceed mongo limit of 16 MB.
Currently each test is a big Python dictionary and I retrieve and represent them with PHP. | 2 | 0 | 1.2 | 0 | true | 12,090,898 | 0 | 128 | 1 | 0 | 0 | 12,090,204 | You can store up to 16MB of data per MongoDB BSON document (e.g. using the pymongo Binary datatype). For arbitrarily large data you want to use GridFS, which basically stores your data as chunks plus extra metadata. When you use MongoDB with its replication features (replica sets) you will have a kind of distributed binary store (don't mix this up with a distributed filesystem; there is no integration with the local filesystem).
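A minimal GridFS sketch with pymongo for the oversized test results described above (database, collection, and file names are placeholders):
import gridfs
from pymongo import Connection

db = Connection()['results']
fs = gridfs.GridFS(db)

big_blob = open('run-42.pickle', 'rb').read()          # the >16MB payload

# store the payload in GridFS and keep its id in the normal document
file_id = fs.put(big_blob, filename='run-42.pickle')
db.tests.insert({'name': 'run-42', 'payload': file_id})

# later, stream it back
data = fs.get(file_id).read()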
I am using MySQLdb. I am developing a simple GUI application using Rpy2. What my program does?
- User can input the static data and mathematical operations will be computed using those data.
- Another thing where I am lost is, user will give the location of their database and the program will computer maths using the data from the remote database.
I have accomplished the result using the localhost.
How can I do it from the remote database? Any idea?
Thanx in advance! | 0 | 0 | 0 | 0 | false | 12,091,455 | 0 | 187 | 1 | 0 | 0 | 12,091,413 | When you establish the MySQL connection, use the remote machines IP address / hostname and corresponding credentials (username, password). | 1 | 0 | 0 | How to take extract data from the remote database in Python? | 1 | python,database | 0 | 2012-08-23T12:17:00.000 |
I use
python 2.7
pyodbc module
google app engine 1.7.1
I can use pyodbc with Python, but the Google App Engine can't load the module. I get a "no module named pyodbc" error.
How can I fix this error, or how can I use an MS-SQL database with my local Google App Engine? | 3 | 0 | 0 | 0 | false | 12,116,542 | 1 | 2,793 | 1 | 1 | 0 | 12,108,816 | You could, at least in theory, replicate your data from the MS-SQL to the Google Cloud SQL database. It is possible to create triggers in the MS-SQL database so that every transaction is reflected on your App Engine application via a REST API you will have to build.
I am trying to copy and use the example 'User Authentication with PostgreSQL database' from the web.py cookbook. I can not figure out why I am getting the following errors.
at /login
'ThreadedDict' object has no attribute 'login'
at /login
'ThreadedDict' object has no attribute 'privilege'
Here is the error output to the terminal for the second error. (the first is almost identical)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 239, in process
return self.handle()
File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 230, in handle
return self._delegate(fn, self.fvars, args)
File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 420, in _delegate
return handle_class(cls)
File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 396, in handle_class
return tocall(*args)
File "/home/erik/Dropbox/Python/Web.py/Code.py", line 44, in GET
render = create_render(session.privilege)
File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/session.py", line 71, in __getattr__
return getattr(self._data, name)
AttributeError: 'ThreadedDict' object has no attribute 'privilege'
127.0.0.1:36420 - - [25/Aug/2012 01:12:38] "HTTP/1.1 GET /login" - 500 Internal Server Error
Here is my code.py file. Pretty much cut-n-paste from the cookbook. I tried putting all of the class and def on top of the main code. I have also tried launching python with sudo as mentioned in another post.
import web
class index:
def GET(self):
todos = db.select('todo')
return render.index(todos)
class add:
def POST(self):
i = web.input()
n = db.insert('todo', title=i.title)
raise web.seeother('/')
def logged():
return False #I added this to test error #1, Now I get error #2
#if session.login==1:
# return True
#else:
# return False
def create_render(privilege):
if logged():
if privilege == 0:
render = web.template.render('templates/reader')
elif privilege == 1:
render = web.template.render('templates/user')
elif privilege == 2:
render = web.template.render('templates/admin')
else:
render = web.template.render('templates/communs')
else:
render = web.template.render('templates/communs')
return render
class Login:
def GET(self):
if logged():
render = create_render(session.privilege)
return '%s' % render.login_double()
else:
# This is where error #2 is
render = create_render(session.privilege)
return '%s' % render.login()
def POST(self):
name, passwd = web.input().name, web.input().passwd
ident = db.select('users', where='name=$name', vars=locals())[0]
try:
if hashlib.sha1("sAlT754-"+passwd).hexdigest() == ident['pass']:
session.login = 1
session.privilege = ident['privilege']
render = create_render(session.privilege)
return render.login_ok()
else:
session.login = 0
session.privilege = 0
render = create_render(session.privilege)
return render.login_error()
except:
session.login = 0
session.privilege = 0
render = create_render(session.privilege)
return render.login_error()
class Reset:
def GET(self):
session.login = 0
session.kill()
render = create_render(session.privilege)
return render.logout()
#web.config.debug = False
render = web.template.render('templates/', base='layout')
urls = (
'/', 'index',
'/add', 'add',
'/login', 'Login',
'/reset', 'Reset'
)
app = web.application(urls, globals())
db = web.database(dbn='postgres', user='hdsfgsdfgsd', pw='dfgsdfgsdfg', db='postgres', host='fdfgdfgd.com')
store = web.session.DiskStore('sessions')
# Too me, it seems this is being ignored, at least the 'initializer' part
session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})
if __name__ == "__main__": app.run() | 0 | 0 | 0 | 0 | false | 12,137,859 | 1 | 2,157 | 1 | 0 | 0 | 12,120,539 | Okay, I was able to figure out what I did wrong. Total newbie stuff and all part of the learning process. This code now works, well mostly. The part that I was stuck on is now working. See my comments in the code
Thanks
import web
web.config.debug = False
render = web.template.render('templates/', base='layout')
urls = (
'/', 'index',
'/add', 'add',
'/login', 'Login',
'/reset', 'Reset'
)
app = web.application(urls, globals())
db = web.database(blah, blah, blah)
store = web.session.DiskStore('sessions')
session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0})
class index:
def GET(self):
todos = db.select('todo')
return render.index(todos)
class add:
def POST(self):
i = web.input()
n = db.insert('todo', title=i.title)
raise web.seeother('/')
def logged():
if session.get('login', False):
return True
else:
return False
def create_render(privilege):
if logged():
if privilege == 0:
render = web.template.render('templates/reader')
elif privilege == 1:
render = web.template.render('templates/user')
elif privilege == 2:
render = web.template.render('templates/admin')
else:
render = web.template.render('templates/communs')
else:
## This line is key, i do not have a communs folder, thus returning an unusable object
#render = web.template.render('templates/communs') #Original code from example
render = web.template.render('templates/', base='layout')
return render
class Login:
def GET(self):
if logged():
## Using session.get('something') instead of session.something does not blow up when it does not exit
render = create_render(session.get('privilege'))
return '%s' % render.login_double()
else:
render = create_render(session.get('privilege'))
return '%s' % render.login()
def POST(self):
name, passwd = web.input().name, web.input().passwd
ident = db.select('users', where='name=$name', vars=locals())[0]
try:
if hashlib.sha1("sAlT754-"+passwd).hexdigest() == ident['pass']:
session.login = 1
session.privilege = ident['privilege']
render = create_render(session.get('privilege'))
return render.login_ok()
else:
session.login = 0
session.privilege = 0
render = create_render(session.get('privilege'))
return render.login_error()
except:
session.login = 0
session.privilege = 0
render = create_render(session.get('privilege'))
return render.login_error()
class Reset:
def GET(self):
session.login = 0
session.kill()
render = create_render(session.get('privilege'))
return render.logout()
if __name__ == "__main__": app.run() | 1 | 0 | 0 | web.py User Authentication with PostgreSQL database example | 1 | python,session,login,web.py | 0 | 2012-08-25T08:47:00.000 |
I am using PyMongo and gevent together, from a Django application. In production, it is hosted on Gunicorn.
I am creating a single Connection object at startup of my application. I have some background task running continuously and performing a database operation every few seconds.
The application also serves HTTP requests as any Django app.
The problem I have is the following. It only happens in production, I have not been able to reproduce it on my dev environment. When I let the application idle for a little while (although the background task is still running), on the first HTTP request (actually the first few), the first "find" operation I perform never completes. The greenlet actually never resumes. This causes the first few HTTP requests to time-out.
How can I fix that? Is that a bug in gevent and/or PyMongo? | 3 | 4 | 0.664037 | 0 | false | 12,163,744 | 1 | 862 | 1 | 0 | 0 | 12,157,350 | I found what the problem is. By default PyMongo has no network timeout defined on the connections, so what was happening is that the connections in the pool got disconnected (because they aren't used for a while). Then when I try to reuse a connection and perform a "find", it takes a very long time for the connection be detected as dead (something like 15 minutes). When the connection is detected as dead, the "find" call finally throws an AutoReconnectError, and a new connection is spawned up to replace to stale one.
The solution is to set a small network timeout (15 seconds), so that the call to "find" blocks the greenlet for 15 seconds, raises an AutoReconnectError, and when the "find" is retried, it gets a new connection, and the operation succeeds. | 1 | 0 | 0 | Deadlock with PyMongo and gevent | 1 | python,mongodb,pymongo,gevent,greenlets | 0 | 2012-08-28T10:26:00.000 |
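In code that looked roughly like this; network_timeout was the Connection keyword in pymongo 2.x at the time (newer releases spell it socketTimeoutMS on MongoClient):
from pymongo import Connection

# any call that exceeds 15s now raises AutoReconnect instead of hanging the greenlet
connection = Connection('localhost', 27017, network_timeout=15)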
I'm trying to install mysql-python package on a machine with Centos 6.2 with Percona Server.
However I'm running into EnvironmentError: mysql_config not found error.
I've carefully searched information regarding this error but all I found is that one needs to add path to mysql_config binary to the PATH system variable.
But it looks like, with my percona installation, a don't have mysql_config file at all
find / -type f -name mysql_config returns nothing. | 1 | 0 | 1.2 | 0 | true | 12,202,936 | 0 | 1,025 | 1 | 0 | 0 | 12,202,303 | mysql_config is a part of mysql-devel package. | 1 | 0 | 0 | mysql-python with Percona Server installation | 1 | python,percona | 0 | 2012-08-30T17:27:00.000 |
I have a collection that is potentially going to be very large. Now I know MongoDB doesn't really have a problem with this, but I don't really know how to go about designing a schema that can handle a very large dataset comfortably. So I'm going to give an outline of the problem.
We are collecting large amounts of data for our customers. Basically, when we gather this data it is represented as a 3-tuple, lets say (a, b, c), where b and c are members of sets B and C respectively. In this particular case we know that the B and C sets will not grow very much over time. For our current customers we are talking about ~200,000 members. However, the A set is the one that keeps growing over time. Currently we are at about ~2,000,000 members per customer, but this is going to grow (possibly rapidly.) Also, there are 1->n relations between b->a and c->a.
The workload on this data set is basically split up into 3 use cases. The collections will be periodically updated, where A will get the most writes, and B and C will get some, but not many. The second use case is random access into B, then aggregating over some number of documents in C that pertain to b \in B. And the last usecase is basically streaming a large subset from A and B to generate some new data.
The problem that we are facing is that the indexes are getting quite big. Currently we have a test setup with about 8 small customers, the total dataset is about 15GB in size at the moment, and indexes are running at about 3GB to 4GB. The problem here is that we don't really have any hot zones in our dataset. It's basically going to get an evenly distributed load amongst all documents.
Basically we've come up with 2 options to do this. The one that I described above, where all data for all customers is piled into one collection. This means we'd have to create an index om some field that links the documents in that collection to a particular customer.
The other options is to throw all b's and c's together (these sets are relatively small) but divide up the C collection, one per customer. I can imangine this last solution being a bit harder to manage, but since we rarely access data for multiple customers at the same time, it would prevent memory problems. MongoDB would be able to load the customers index into memory and just run from there.
What are your thoughts on this?
P.S.: I hope this wasn't too vague, if anything is unclear I'll go into some more details. | 2 | 1 | 1.2 | 0 | true | 12,216,914 | 0 | 239 | 1 | 0 | 0 | 12,210,307 | It sounds like the larger set (A if I followed along correctly), could reasonably be put into its own database. I say database rather than collection, because now that 2.2 is released you would want to minimize lock contention between the busier database and the others, and to do that a separate database would be best (2.2 introduced database level locking). That is looking at this from a single replica set model, of course.
Also the index sizes sound a bit out of proportion to your data size - are you sure they are all necessary? Pruning unneeded indexes, combining and using compound indexes may well significantly reduce the pain you are hitting in terms of index growth (it would potentially make updates and inserts more efficient too). This really does need specifics and probably belongs in another question, or possibly a thread in the mongodb-user group so multiple eyes can take a look and make suggestions.
If we look at it with the possibility of sharding thrown in, then the truly important piece is to pick a shard key that allows you to make sure locality is preserved on the shards for the pieces you will frequently need to access together. That would lend itself more toward a single sharded collection (preserving locality across multiple related sharded collections is going to be very tricky unless you manually split and balance the chunks in some way). Sharding gives you the ability to scale out horizontally as your indexes hit the single instance limit etc. but it is going to make the shard key decision very important.
Again, specifics for picking that shard key are beyond the scope of this more general discussion, similar to the potential index review I mentioned above. | 1 | 0 | 0 | Split large collection into smaller ones? | 1 | python,mongodb | 0 | 2012-08-31T06:56:00.000 |
Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python.
Some hacked solutions that came to mind:
Display the uploaded image from a local directory until the S3 image is ready, then "hand it off" and update the database to reflect the change.
Display a "waiting" progress indicator as a background gif and the image will just appear when it's ready (w/ JavaScript) | 0 | 1 | 0.197375 | 0 | false | 12,242,133 | 1 | 323 | 1 | 0 | 1 | 12,241,945 | I'd save time and not do anything. The wait times are pretty fast.
If you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload.
If you really felt like you had to... I'd probably go with a javascript solution like this:
have a 'timestamp uploaded' column in your data store
if the upload time is under 1 minute, instead of rendering an img=src tag... render some javascript that polls the s3 bucket in 15s intervals
Again, chances are most users will never experience this - and if they do, they won't really care. The UX expectations of user generated content are pretty low ( just look at Facebook ); if this is an admin backend for an 'enterprise' service that would make workflow better, you may want to invest time on the 'optimal' solution. For a public facing website though, i'd just forget about it. | 1 | 0 | 0 | What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard? | 1 | python,amazon-s3,amazon-web-services | 0 | 2012-09-03T04:22:00.000 |
I'm creating a game mod for Counter-Strike in python, and it's basically all done. The only thing left is to code a REAL database, and I don't have any experience on sqlite, so I need quite a lot of help.
I have a Player class with attribute self.steamid, which is unique for every Counter-Strike player (received from the game engine), and self.entity, which holds in an "Entity" for player, and Entity-class has lots and lots of more attributes, such as level, name and loads of methods. And Entity is a self-made Python class).
What would be the best way to implement a database, first of all, how can I save instances of Player with an other instance of Entity as it's attribute into a database, powerfully?
Also, I will need to get that users data every time he connects to the game server, (I have player_connect event), so how would I receive the data back?
All the tutorials I found only taught about saving strings or integers, but nothing about whole instances. Will I have to save every attribute on all instances (Entity instance has few more instances as it's attributes, and all of them have huge amounts of attributes...), or is there a faster, easier way?
Also, it's going to be a locally saved database, so I can't really use any other languages than sql. | 1 | 0 | 1.2 | 0 | true | 12,268,131 | 0 | 356 | 1 | 0 | 0 | 12,266,016 | You need an ORM. Either you roll your own (which I never suggest), or you use one that exists already. Probably the two most popular in Python are sqlalchemy, and the ORM bundled with Django. | 1 | 0 | 0 | Python sqlite3, saving instance of a class with an other instance as it's attribute? | 2 | python,database,sqlite,instance | 0 | 2012-09-04T14:47:00.000 |
I have a few a few model classes such as a user class which is passed a dictionary, and wraps it providing various methods, some of which communicate with the database when a value needs to be changed. The dictionary itself is made from an sqlalchemy RowProxy, so all its keys are actually attribute names taken directly from the sql user table. (attributes include user_id, username, email, passwd, etc)
If a user is logged in, should I simply save this dictionary to a redis key value store, and simply call a new user object when needed and pass it this dictionary from redis(which should be faster than only saving a user id in a session and loading the values again from the db based on that user_id)?
Or should I somehow serialize the entire object and save it in redis? I'd appreciate any alternate methods of managing model and session objects that any of you feel would be better as well.
In case anyone is wondering I'm only using the sqlalchemy expression language, and not the orm. I'm using the model classes as interfaces, and coding against those. | 1 | 4 | 1.2 | 0 | true | 12,320,928 | 1 | 241 | 1 | 0 | 0 | 12,292,277 | Unless you're being really careful, serializing the entire object into redis is going to cause problems. You're effectively treating it like a cache, so you have to be careful that those values are expired if the user changes something about themselves. You also have to make sure that all of the values are serializable (likely via pickle). You didn't specify whether this is a premature optimization so I'm going to say that it probably is and recommend that you just track the user id and reload his information when you need it from your database. | 1 | 0 | 0 | How do I go about storing session objects? | 2 | python,session,sqlalchemy,session-state,pyramid | 0 | 2012-09-06T02:42:00.000 |
Hi I intend to draw a chart with data in an xlsx file.
In order to keep the style, I HAVE TO draw it within excel.
I found a package named win32com, which can give a support to manipulate excel file with python on win32 platform, but I don't know where is the doc.....
Another similar question is how to change the style of cells, such as font, back-color ?
So maybe all I wanna know is the doc, you know how to fish is more useful than fishes.... and an example is better. | 0 | 1 | 0.049958 | 0 | false | 13,086,152 | 0 | 2,868 | 1 | 0 | 0 | 12,296,563 | Documentation for win32com is next to non-existent as far as I know. However, I use the following methods to understand the commands.
MS-Excel
In Excel, record a macro of whatever action you intend to, say plotting a chart. Then go to the Macro menu and use View Macro to get the underlying commands. More often than not, the commands used would guide you to the corresponding commands in python that you need to use.
Pythonwin
You can use pythonwin to browse the underlying win32com defined objects (in your case Microsoft Excel Objects). In pythonwin (which can be found at \Lib\site-packages\pythonwin\ in your python installation), go to Tools -> COM Makepy Utility, select your required Library (in this case, Microsoft Excel 14.0 Object Library) and press Ok. Then when the process is complete, go to Tools -> COM Browser and open the required library under Registered Libraries. Note the ID no. as this would correspond to the source file. You can browse the various components of the library in the COM Browser.
Source
Go to \Lib\site-packages\win32com\ in your python installation folder. Run makepy.py and choose the required library. After this, the source file of the library can be found at \Lib\site-packages\win32com\gen_py . It is one of those files with the wacky name. The name corresponds to that found in Pythonwin. Open the file, and search for the commands you saw in the Excel Macro. (#2 and #3 maybe redundant, I am not sure) | 1 | 0 | 0 | How to draw a chart with excel using python? | 4 | python,excel,win32com | 0 | 2012-09-06T09:03:00.000 |
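Once the macro recorder has shown the underlying object calls, driving them from Python is mostly a transliteration. A hedged sketch; the workbook path, source range, and chart-type constant are assumptions of the kind a recorded macro would reveal:
import win32com.client

excel = win32com.client.Dispatch('Excel.Application')
excel.Visible = True
wb = excel.Workbooks.Open(r'C:\data\prices.xlsx')   # hypothetical workbook
ws = wb.Worksheets(1)

chart = wb.Charts.Add()
chart.ChartType = 51                                # xlColumnClustered
chart.SetSourceData(ws.Range('A1:B10'))
chart.HasTitle = True
chart.ChartTitle.Text = 'Prices'

wb.Save()
Because the chart is created by Excel itself, the workbook's existing styles are preserved, which is what the question requires.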
I use Python with SQLAlchemy for some relational tables. For the storage of some larger data-structures I use Cassandra. I'd prefer to use just one technology (cassandra) instead of two (cassandra and PostgreSQL). Is it possible to store the relational data in cassandra as well? | 6 | 3 | 0.197375 | 0 | false | 12,302,894 | 0 | 8,349 | 1 | 0 | 0 | 12,297,847 | playOrm supports JOIN on noSQL so that you CAN put relational data into noSQL but it is currently in java. We have been thinking of exposing a S-SQL language from a server for programs like yours. Would that be of interest to you?
The S-SQL would look like this(if you don't use partitions, you don't even need anything before the SELECT statement piece)...
PARTITIONS t(:partId) SELECT t FROM TABLE as t INNER JOIN t.security as s WHERE s.securityType = :type and t.numShares = :shares")
This allows relational data in a noSQL environment AND IF you partition your data, you can scale as well very nicely with fast queries and fast joins.
If you like, we can quickly code up a prototype server that exposes an interface where you send in S-SQL requests and we return some form of json back to you. We would like it to be different than SQL result sets which was a very bad idea when left joins and inner joins are in the picture.
ie. we would return results on a join like so (so that you can set a max results that actually works)...
tableA row A - tableB row45
- tableB row65
- tableB row 78
tableA row C - tableB row46
- tableB row93
NOTICE that we do not return multiple row A's so that if you have max results 2 you get row A and row C where as in ODBC/JDBC, you would get ONLY rowA two times with row45 and row 65 because that is what the table looks like when it is returned (which is kind of stupid when you are in an OO language of any kind).
just let playOrm team know if you need anything on the playOrm github website.
Dean | 1 | 0 | 0 | Can I use SQLAlchemy with Cassandra CQL? | 3 | python,sqlalchemy,cassandra | 0 | 2012-09-06T10:17:00.000 |
I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on.
So far, a few mysql tables have been enough to handle this, but i fear it will quickly become messy as i add more features.
My question: how can i best represent and persist this kind of multi dimensional data?
Python specific tips would be helpful.
Thank you | 1 | 0 | 0 | 0 | false | 12,369,285 | 0 | 289 | 2 | 0 | 0 | 12,355,416 | An SQL database should work fine in your case. In fact, you can store all the training examples in just one database, each row representing a particular training example and each column representing a feature. You can add features by adding columns as and when required. In a relational database, you might come across access errors when querying your data, for various inconsistency reasons; if so, try using a NoSQL database. I personally use MongoDB and PyMongo in Python to store the training examples as dicts in JSON format. (Easier for web apps this way.)
I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on.
So far, a few MySQL tables have been enough to handle this, but I fear it will quickly become messy as I add more features.
My question: how can I best represent and persist this kind of multi-dimensional data?
Python specific tips would be helpful.
Thank you | 1 | 0 | 0 | 0 | false | 24,491,488 | 0 | 289 | 2 | 0 | 0 | 12,355,416 | I recommend using tensors, which are multidimensional arrays. You can use any data table or simply text files to store a tensor. Each line or row is a record / transaction with all of its features listed. | 1 | 0 | 0 | Multi feature recommender system representation | 2 | python,numpy,scipy,data-mining | 0 | 2012-09-10T16:05:00.000
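A tiny hedged sketch of the tensor idea with NumPy; the dimension sizes and axis meanings are invented for illustration.

import numpy as np

n_users, n_docs, n_contexts = 1000, 500, 4    # hypothetical sizes
ratings = np.zeros((n_users, n_docs, n_contexts))

# user 42 rated document 7 with a 4, in context 2 (e.g. "mobile")
ratings[42, 7, 2] = 4.0

# mean rating per document, ignoring unrated (zero) cells
per_doc = ratings.sum(axis=(0, 2)) / np.maximum((ratings != 0).sum(axis=(0, 2)), 1)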
Trying to set up some basic data I/O scripts in python that read and write from a local sqlite db. I'd like to use the command line to verify that my scripts work as expected, but they don't pick up on any of the databases or tables I'm creating.
My first script writes some data from a dict into the table, and the second script reads it and prints it.
Write:
# first part of script creates a dict called 'totals'
import sqlite3 as lite
con = lite.connect('test.db')
with con:
    cur = con.cursor()
    cur.execute("DROP TABLE IF EXISTS testtbl")
    cur.execute("CREATE TABLE testtbl(Date TEXT PRIMARY KEY, Count INT, AverageServerTime REAL, TotalServerTime REAL, AverageClientTime REAL, TotalClientTime REAL)")
    cur.execute('INSERT INTO testtbl VALUES("2012-09-08", %s, %s, %s, %s, %s)' % (float(totals['count()']), float(totals['serverTime/count()']), float(totals['serverTime']), float(totals['totalLoadTime/count()']), float(totals['totalLoadTime'])))
Read:
import sqlite3 as lite
con = lite.connect('test.db')
with con:
    cur = con.cursor()
    cur.execute("SELECT * FROM testtbl")
    rows = cur.fetchall()
    for row in rows:
        print row
These scripts are separate and both work fine. However, if I navigate to the directory in the command line and activate sqlite3, nothing further works. I've tried '.databases', '.tables', '.schema' commands and can't get it to respond to this particular db. I can create dbs within the command line and view them, but not the ones created by my script. How do I link these up?
Running stock Ubuntu 12.04, Python 2.7.3, SQLite 3.7.9. I also installed libsqlite3-dev but that hasn't helped. | 1 | 2 | 1.2 | 0 | true | 12,360,397 | 0 | 1,252 | 1 | 1 | 0 | 12,360,279 | Are you putting the DB file name in the command ?
$ sqlite3 test.db | 1 | 0 | 0 | sqlite3 command line tools don't work in Ubuntu | 1 | python,linux,sqlite,ubuntu | 0 | 2012-09-10T22:23:00.000 |
Okay,
I kinda asked this question already, but noticed that I might not have been as clear as I could have been, and might have made some errors myself.
I have also noticed many people having the same or similar problems with sqlite3 in Python. So I thought I would ask this as clearly as I could, so it could possibly help others with the same issues as well.
What does Python need to find when compiling, so the module is enabled and working?
(In detail, I mean exact files, not just "sqlite dev-files")?
And if it needs a library, it probably needs to be compiled with the right architecture? | 0 | 0 | 0 | 0 | false | 12,420,541 | 0 | 3,032 | 1 | 0 | 0 | 12,420,338 | As I understand it, you would like to install Python from sources. To make the sqlite3 module available you have to install the sqlite package and its dev files (for example sqlite-devel for CentOS). That's it. You have to re-configure your sources after installing the required packages.
Btw, you will face the same problem with some other modules.
For my database project, I am using SQL Alchemy. I have a unit test that adds the object to the table, finds it, updates it, and deletes it. After it goes through that, I assumed I would call the session.rollback method in order to revert the database changes. It does not work because my sequences are not reverted. My plan for the project is to have one database, I do not want to create a test database.
I could not find anything in the SQLAlchemy documentation on how to properly roll back the database changes. Does anyone know how to roll back the database transaction? | 4 | -3 | 1.2 | 0 | true | 12,443,800 | 0 | 3,017 | 1 | 0 | 0 | 12,440,044 | Postgres does not roll back advances in a sequence even if the sequence is used in a transaction which is rolled back. (To see why, consider what should happen if, before one transaction is rolled back, another using the same sequence is committed.)
But in any case, an in-memory database (SQLite makes this easy) is the best choice for unit tests. | 1 | 0 | 0 | How to rollback the database in SQL Alchemy? | 1 | python,unit-testing,sqlalchemy,rollback | 0 | 2012-09-15T18:39:00.000 |
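A minimal sketch of that suggestion, assuming a declarative Base and a User model already exist in your project (the myapp.models module and model names are placeholders): each test gets a throwaway in-memory SQLite database, so nothing ever needs to be undone in the real database.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from myapp.models import Base, User      # hypothetical module holding your models

engine = create_engine("sqlite://")      # in-memory database, discarded when the test ends
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(User(name="test"))           # hypothetical model
session.commit()
session.close()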
I am provided with text files containing data that I need to load into a postgres database.
The files are structured in records (one per line) with fields separated by a tilde (~). Unfortunately it happens that every now and then a field content will include a tilde.
As the files are not tidy CSV, and the tildes are not escaped, this results in records containing too many fields, which causes the database to throw an exception and stop loading.
I know what the record should look like (text, integer, float fields).
Does anyone have suggestions on how to fix the overlong records? I code in Perl but I am happy with suggestions in Python, JavaScript, or plain English. | 1 | 0 | 0 | 0 | false | 12,553,211 | 1 | 111 | 1 | 0 | 0 | 12,553,197 | If you know what each field is supposed to be, perhaps you could write a regular expression which would match that field type only (ignoring tildes) and capture the match, then replace the original string in the file? | 1 | 0 | 0 | Messed up records - separator inside field content | 2 | python,perl,language-agnostic | 0 | 2012-09-23T14:36:00.000
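A rough Python sketch of that regex idea, under the made-up assumption that each record is text ~ integer ~ float and only the text field can contain stray tildes: the trailing numeric fields anchor the match, so the greedy text group swallows any extra tildes.

import re

line_pat = re.compile(r"^(?P<text>.*)~(?P<count>\d+)~(?P<value>\d+(?:\.\d+)?)\s*$")

def repair(line):
    m = line_pat.match(line)
    if m is None:
        raise ValueError("unparseable record: %r" % line)
    return m.group("text"), int(m.group("count")), float(m.group("value"))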
Is there any feasible way to upload a file which is generated dynamically to Amazon S3 directly, without first creating a local file and then uploading to the S3 server? I use Python. Thanks | 38 | 0 | 0 | 0 | false | 56,126,467 | 1 | 52,339 | 2 | 0 | 1 | 12,570,465 | Given that encryption at rest is a much desired data standard now, smart_open does not support this afaik | 1 | 0 | 0 | How to upload a file to S3 without creating a temporary local file | 12 | python,amazon-s3,amazon | 0 | 2012-09-24T18:09:00.000
Is there any feasible way to upload a file which is generated dynamically to Amazon S3 directly, without first creating a local file and then uploading to the S3 server? I use Python. Thanks | 38 | 2 | 0.033321 | 0 | false | 12,570,568 | 1 | 52,339 | 2 | 0 | 1 | 12,570,465 | I assume you're using boto. boto's Key.set_contents_from_file() will accept a StringIO object, and any code you have written to write data to a file should be easily adaptable to write to a StringIO object. Or if you generate a string, you can use set_contents_from_string(). | 1 | 0 | 0 | How to upload a file to S3 without creating a temporary local file | 12 | python,amazon-s3,amazon | 0 | 2012-09-24T18:09:00.000
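A hedged sketch of that approach with the boto 2 API; the bucket name, key name and the data being written are placeholders.

from cStringIO import StringIO
import boto

conn = boto.connect_s3()                     # credentials from environment / boto config
bucket = conn.get_bucket("my-bucket")
key = bucket.new_key("reports/output.csv")

buf = StringIO()
buf.write("col1,col2\n1,2\n")                # your dynamically generated content
buf.seek(0)
key.set_contents_from_file(buf)
# or, if you already have the whole payload as a string:
# key.set_contents_from_string(data)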
I have the below setup
A 2-node Hadoop/HBase cluster with a Thrift server running on HBase.
HBase has a table with 10 million rows.
I need to run aggregate queries like sum() on the HBase table
to show the results on the web (for charting purposes).
For now I am using Python (Thrift client) to get the dataset and display it.
I am looking for a database-level (HBase) aggregation function to use on the web.
Any thoughts? | 0 | 0 | 0 | 0 | false | 21,502,085 | 1 | 1,019 | 1 | 0 | 0 | 12,585,286 | Phoenix is a good solution for low-latency results from HBase tables, better than Hive.
It is better at range scans than plain HBase scanners because it uses secondary indexes and skip scans.
In your case you use Python, and the Phoenix API only has JDBC connectors.
Otherwise, try HBase coprocessors, which provide SUM, MAX, COUNT and AVG functions.
You can enable coprocessors while creating the table and then use the coprocessor functions.
You can also try Impala, which provides ODBC and JDBC connectors. Impala uses the Hive metastore and executes queries as massively parallel batches.
You need to create a Hive metastore table for your HBase table. | 1 | 0 | 0 | Hadoop Hbase query | 3 | java,python,hadoop,hbase,thrift | 0 | 2012-09-25T14:35:00.000
I am finding Neo4j slow to add nodes and relationships/arcs/edges when using the REST API via py2neo for Python. I understand that this is due to each REST API call executing as a single self-contained transaction.
Specifically, adding a few hundred pairs of nodes with relationships between them takes a number of seconds, running on localhost.
What is the best approach to significantly improve performance whilst staying with Python?
Would using bulbflow and Gremlin be a way of constructing a bulk insert transaction?
Thanks! | 18 | 2 | 0.07983 | 0 | false | 31,026,259 | 0 | 12,651 | 1 | 0 | 1 | 12,643,662 | Well, I myself needed massive performance from neo4j. I ended up doing the following things to improve graph performance.
Ditched py2neo, since there were a lot of issues with it. Besides, it is very convenient to use the REST endpoint provided by neo4j; just make sure to use requests sessions.
Use raw Cypher queries for bulk inserts, instead of any OGM (Object-Graph Mapper). That is crucial if you need a high-performance system.
Performance was still not enough for my needs, so I ended up writing a custom system that merges 6-10 queries together using WITH * and UNION clauses. That improved performance by a factor of 3 to 5.
Use a larger transaction size, with at least 1000 queries per transaction. | 1 | 0 | 0 | Fastest way to perform bulk add/insert in Neo4j with Python? | 5 | python,neo4j,py2neo | 0 | 2012-09-28T16:15:00.000
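A hedged sketch of points 1-3: parameterised Cypher statements batched into one request to Neo4j's transactional REST endpoint, sent with a requests session. The endpoint path is the Neo4j 2.x one and may differ for your server version; the label, relationship type and data are invented.

import requests

session = requests.Session()
url = "http://localhost:7474/db/data/transaction/commit"

def create_pairs(pairs):
    # one HTTP round trip for the whole batch instead of one per node/relationship
    statements = [
        {
            "statement": "CREATE (a:Item {name: {a}})-[:LINKED]->(b:Item {name: {b}})",
            "parameters": {"a": a, "b": b},
        }
        for a, b in pairs
    ]
    resp = session.post(url, json={"statements": statements})
    resp.raise_for_status()
    return resp.json()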
I see plenty of examples of importing a CSV into a PostgreSQL db, but what I need is an efficient way to import 500,000 CSVs into a single PostgreSQL db. Each CSV is a bit over 500KB (so a grand total of approx 272GB of data).
The CSVs are identically formatted and there are no duplicate records (the data was generated programmatically from a raw data source). I have been searching and will continue to search online for options, but I would appreciate any direction on getting this done in the most efficient manner possible. I do have some experience with Python, but will dig into any other solution that seems appropriate.
Thanks! | 9 | 0 | 0 | 0 | false | 12,646,923 | 0 | 10,104 | 1 | 0 | 0 | 12,646,305 | Nice chunk of data you have there. I'm not 100% sure about Postgres, but at least MySQL provides SQL commands to feed a CSV directly into a table. This bypasses any insert checks and so on and is therefore more than an order of magnitude faster than ordinary insert operations.
So probably the fastest way to go is to create a simple Python script telling your Postgres server which CSV files, in which order, to hungrily devour into its endless tables. | 1 | 0 | 0 | Efficient way to import a lot of csv files into PostgreSQL db | 3 | python,csv,import,postgresql-9.1 | 0 | 2012-09-28T19:38:00.000
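PostgreSQL's analogue of that bulk-load command is COPY, which psycopg2 can drive from Python. A hedged sketch (connection string, table name and paths are placeholders):

import glob
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

for path in glob.iglob("/data/csv/*.csv"):
    with open(path) as f:
        cur.copy_expert("COPY measurements FROM STDIN WITH CSV", f)
    conn.commit()   # or commit every N files if per-file commits prove too slow

cur.close()
conn.close()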
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site
and tried to install it, but it gives this error:
Python v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.
I am using official Python v2.7.3 on Windows XP SP3 with MySQL Essential 5.1.66
Need Help ??? | 12 | 10 | 1 | 0 | false | 13,899,478 | 0 | 19,218 | 2 | 0 | 0 | 12,702,146 | I met a similar problem under Windows 7 when installing mysql-connector-python-1.0.7-py2.7.msi and mysql-connector-python-1.0.7-py3.2.msi.
After changing from "Install only for yourself" to "Install for all users" when installing Python for Windows, the "python 3.2 not found" problem disappeared and mysql-connector-python-1.0.7-py3.2.msi was successfully installed.
I guess the problem is that the MySQL connector installer only looks for HKEY_LOCAL_MACHINE entries, and the things it looks for might be under HKEY_CURRENT_USER etc. So the solution of changing the registry directly also works. | 1 | 0 | 0 | mysql for python 2. 7 says Python v2.7 not found | 8 | python,mysql,python-2.7,mysql-connector-python | 0 | 2012-10-03T04:57:00.000
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site
and tried to install it, but it gives this error:
Python v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.
I am using official Python v2.7.3 on Windows XP SP3 with MySQL Essential 5.1.66
Need Help ??? | 12 | 0 | 0 | 0 | false | 19,051,115 | 0 | 19,218 | 2 | 0 | 0 | 12,702,146 | I solved this problem by using 32bit python | 1 | 0 | 0 | mysql for python 2. 7 says Python v2.7 not found | 8 | python,mysql,python-2.7,mysql-connector-python | 0 | 2012-10-03T04:57:00.000 |
I have a server which files get uploaded to, and I want to forward these on to S3 using boto; I have to do some processing on the data, basically, as it gets uploaded to S3.
The problem I have is the way they get uploaded I need to provide a writable stream that incoming data gets written to and to upload to boto I need a readable stream. So it's like I have two ends that don't connect. Is there a way to upload to s3 with a writable stream? If so it would be easy and I could pass upload stream to s3 and it the execution would chain along.
If there isn't, I have two loose ends which need something in between, a sort of buffer that can read from the upload to keep it moving and expose a read method that I can give to boto. But doing this I'm sure I'd need to thread the S3 upload part, which I'd rather avoid as I'm using Twisted.
I have a feeling I'm way overcomplicating things but I can't come up with a simple solution. This has to be a common-ish problem; I'm just not sure how to put it into words very well to search for it. | 4 | 3 | 1.2 | 0 | true | 12,716,129 | 1 | 636 | 1 | 1 | 1 | 12,714,965 | boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrent operation that Twisted provides you with (just as you would have to use threads to have any concurrency when using boto ''without'' Twisted; i.e., Twisted does not help make boto non-blocking or concurrent).
Instead, you could use txAWS, a Twisted-oriented library for interacting with AWS. txaws.s3.client provides methods for interacting with S3. If you're familiar with boto or AWS, some of these should already look familiar. For example, create_bucket or put_object.
txAWS would be better if it provided a streaming API so you could upload to S3 as the file is being uploaded to you. I think that this is currently in development (based on the new HTTP client in Twisted, twisted.web.client.Agent) but perhaps not yet available in a release. | 1 | 0 | 0 | Boto reverse the stream | 2 | python,stream,twisted,boto | 0 | 2012-10-03T18:54:00.000 |
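As a rough sketch of the first point (boto is blocking, so under Twisted it has to run in a thread), here is one hedged way to wrap a boto upload in deferToThread; the bucket and key names are placeholders, and txAWS would be the non-blocking alternative described above.

from twisted.internet import threads
import boto

def _blocking_upload(data, key_name):
    conn = boto.connect_s3()
    bucket = conn.get_bucket("my-bucket")
    key = bucket.new_key(key_name)
    key.set_contents_from_string(data)

def upload(data, key_name):
    # runs the blocking boto call in Twisted's thread pool and returns a Deferred
    return threads.deferToThread(_blocking_upload, data, key_name)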
I want to select data from multiple tables, so I just want to know whether I can use simple SQL queries for that. If yes, then please give me an example (i.e., where to use these queries and how).
Thanks. | 0 | 1 | 0.099668 | 0 | false | 12,740,533 | 1 | 77 | 1 | 0 | 0 | 12,740,424 | Try this.
https://docs.djangoproject.com/en/dev/topics/db/sql/ | 1 | 0 | 0 | Can I used simple sql commands in django | 2 | python,sql,django,django-queryset | 0 | 2012-10-05T06:05:00.000 |
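For illustration, two common ways the linked docs describe for running hand-written SQL in Django; the model, app and table names below are made up.

from django.db import connection
from myapp.models import Order              # hypothetical model

# 1. Map raw SQL onto model instances
orders = Order.objects.raw(
    "SELECT o.* FROM myapp_order o "
    "JOIN myapp_customer c ON c.id = o.customer_id "
    "WHERE c.country = %s", ["SE"])

# 2. Bypass the ORM entirely
cursor = connection.cursor()
cursor.execute(
    "SELECT c.name, COUNT(*) FROM myapp_order o "
    "JOIN myapp_customer c ON c.id = o.customer_id GROUP BY c.name")
rows = cursor.fetchall()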
Background:
I'm working on dataview, and many of the reports are generated by very long running queries. I've written a small query caching daemon in python that accepts a query, spawns a thread to run it, and stores the result when done as a pickled string. The results are generally various aggregations broken down by month, or other factors, and the result sets are consequently not large. So my caching daemon can check whether it has the result already, and return it immediately, otherwise it sends back a 'pending' message (or 'error' or 'failed' or various other messages). The point being, that the client, which is a django web server would get back 'pending' and query again in 5~10 seconds, in the meanwhile putting up a message for the user saying 'your report is being built, please be patient'.
The problem:
I would like to add the ability for the user to cancel a long running query, assuming it hasn't been cached already. I know I can kill a query thread in MySQL using KILL, but is there a way to get the thread/query/process id of the query in a manner similar to getting the id of the last inserted row? I'm doing this through the python MySQLdb module, and I can't see any properties/methods of the cursor object that would return this. | 0 | 2 | 1.2 | 0 | true | 12,743,439 | 0 | 1,016 | 1 | 0 | 0 | 12,743,436 | There is a property of the connection object called thread_id, which returns an id to be passed to KILL. MySQL has a thread for each connection, not for each cursor, so you are not killing queries, but are instead killing connections. To kill an individual query you must run each query in its own connection, and then kill the connection using the result from thread_id. | 1 | 0 | 0 | Get process id (of query/thread) of most recently run query in mysql using python mysqldb | 1 | python,mysql | 0 | 2012-10-05T09:32:00.000
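A hedged sketch of that pattern with MySQLdb (note that MySQLdb exposes the id as a method, conn.thread_id(); the credentials and the register callback are placeholders):

import MySQLdb

def run_cancellable(sql, register):
    conn = MySQLdb.connect(host="localhost", user="report", passwd="secret", db="stats")
    register(conn.thread_id())          # e.g. store it keyed by the cache entry
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchall()

def cancel(thread_id):
    admin = MySQLdb.connect(host="localhost", user="admin", passwd="secret")
    admin.cursor().execute("KILL %d" % thread_id)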
I'm reading conflicting reports about using PostgreSQL on Amazon's Elastic Beanstalk for python (Django).
Some sources say it isn't possible: (http://www.forbes.com/sites/netapp/2012/08/20/amazon-cloud-elastic-beanstalk-paas-python/). I've been through a dummy app setup, and it does seem that MySQL is the only option (amongst other ones that aren't Postgres).
However, I've found fragments around the place mentioning that it is possible - even if they're very light on detail.
I need to know the following:
Is it possible to run a PostgreSQL database with a Django app on Elastic Beanstalk?
If it's possible, is it worth the trouble?
If it's possible, how would you set it up? | 5 | 5 | 0.462117 | 0 | false | 21,391,684 | 1 | 2,422 | 1 | 0 | 0 | 12,850,550 | PostgreSQL is now selectable from the AWS RDS configurations. Validated through the Elastic Beanstalk application setup on 2014-01-27. | 1 | 0 | 0 | PostgreSQL for Django on Elastic Beanstalk | 2 | python,django,postgresql,amazon-elastic-beanstalk | 0 | 2012-10-12T00:21:00.000
I have 10000 files in an S3 bucket. When I list all the files it takes 10 minutes. I want to implement a search module using boto (the Python interface to AWS) which searches files based on user input. Is there a way I can search for specific files in less time? | 2 | 3 | 0.291313 | 0 | false | 12,907,767 | 1 | 5,534 | 1 | 0 | 1 | 12,904,326 | There are two ways to implement the search...
Case 1. As suggested by John, you can specify a prefix for the S3 key in your list method. That will return only the S3 keys which start with the given prefix.
Case 2. If you want to search for S3 keys which end with a specific suffix (an extension, say), then you can pass that suffix as the delimiter. Remember, it will only give you a correct result when the search term really ends with that string.
Otherwise the delimiter is used as a path separator.
I would suggest Case 1, but if you want a faster search with a specific suffix then you can try Case 2. | 1 | 0 | 0 | Search files(key) in s3 bucket takes longer time | 2 | python,amazon-s3,boto | 0 | 2012-10-15T21:29:00.000
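A small hedged example of case 1 with the boto 2 API (bucket name and prefix are placeholders): only the keys under the prefix are listed instead of the whole bucket.

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket("my-bucket")

# list only keys whose names start with the given prefix
for key in bucket.list(prefix="reports/2012-09/"):
    print key.name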
A user accesses his contacts on his mobile device. I want to send back to the server all the phone numbers (say 250), and then query for any User entities that have matching phone numbers.
A user has a phone field which is indexed. So I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats, and this is damn expensive. It cost me 250 reads for this one operation, and this is something I expect a user to do often.
What are some alternatives? I suppose I can set the User entity's id value to be his phone number (i.e when creating a user I'd do user = User(id = phone_number)), and then get directly by keys via ndb.get_multi(phones), but I also want to perform this same query with emails too.
Any ideas? | 3 | 0 | 0 | 0 | false | 12,980,347 | 1 | 176 | 1 | 1 | 0 | 12,976,652 | I misunderstood part of your problem, I thought you were issuing a query that was giving you 250 entities.
I see what the problem is now: you're issuing an IN query with a list of 250 phone numbers. Behind the scenes, the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops.
I can't think of a way to avoid this. I'd recommend avoiding searching on long lists of phone numbers. This seems like something you'd need to do only once, the first time the user logs in using that phone. Try to find some way to store the results and avoid the query again. | 1 | 0 | 0 | Efficient way to do large IN query in Google App Engine? | 3 | python,google-app-engine | 0 | 2012-10-19T14:43:00.000 |
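For reference, the key-based lookup the question mentions would look roughly like this (a hedged sketch, assuming User entities are created with the phone number as their id): ndb.get_multi fetches by key and avoids the 250 separate index queries that IN() triggers.

from google.appengine.ext import ndb

phone_list = ["+15551234567", "+15557654321"]   # uploaded from the device
keys = [ndb.Key('User', phone) for phone in phone_list]
users = [u for u in ndb.get_multi(keys) if u is not None]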