C++ destructor calling of boost::python wrapped objects
Question: Does boost::python provide any guarantee when the C++ destructor of a wrapped
object is called considering the moment of reaching the zero reference count
of the corresponding python object?
I am concerned about a C++ object that opens a file for writing and performs
the file closing in its destructor. Is it guaranteed that the file is written
when all python references to the object are deleted or out of scope?
I mean:
A=MyBoostPythonObject()
del A # Is the C++ destructor of MyBoostPythonObject called here?
My experience suggests that the destructor is always called at this point, but I could not find any guarantee for this.
Answer: Boost.Python makes the guarantee that if the Python object has ownership of
the wrapped C++ object, then when the Python object is deleted, the wrapped
C++ object will be deleted. The Python object's lifetime is dictated by
Python, wherein when an object’s reference count reaches zero, the object
_may_ be immediately destroyed. For non-simplistic cases, such as cyclic
references, the objects will be managed by the garbage collector, and _may_ be
destroyed before the program exits.
One Pythonic solution may be to expose a type that implements the [context manager protocol](https://docs.python.org/2/library/stdtypes.html#context-manager-types). The context manager protocol is made up of a pair of methods: one which will be invoked when entering a runtime context, and one which will be invoked when exiting a runtime context. By using a context manager, one could control the scope in which a file is open.
>>> with MyBoostPythonObject() as A: # opens file.
... A.write(...) # file remains open while in scope.
... # A destroyed once context's scope is exited.
* * *
Here is an example [demonstrating](http://coliru.stacked-crooked.com/a/245d68e4bcea05f0) exposing a RAII-type class to Python as a context manager:
#include <boost/python.hpp>
#include <iostream>
// Legacy API.
struct spam
{
spam(int x) { std::cout << "spam(): " << x << std::endl; }
~spam() { std::cout << "~spam()" << std::endl; }
void perform() { std::cout << "spam::perform()" << std::endl; }
};
/// @brief Python Context Manager for the Spam class.
class spam_context_manager
{
public:
spam_context_manager(int x): x_(x) {}
void perform() { return impl_->perform(); }
// context manager protocol
public:
// Use a static member function to get a handle to the self Python
// object.
static boost::python::object enter(boost::python::object self)
{
namespace python = boost::python;
spam_context_manager& myself =
python::extract<spam_context_manager&>(self);
// Construct the RAII object.
myself.impl_ = std::make_shared<spam>(myself.x_);
// Return this object, allowing caller to invoke other
// methods exposed on this class.
return self;
}
bool exit(boost::python::object type,
boost::python::object value,
boost::python::object traceback)
{
// Destroy the RAII object.
impl_.reset();
return false; // Do not suppress the exception.
}
private:
std::shared_ptr<spam> impl_;
int x_;
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<spam_context_manager>("Spam", python::init<int>())
.def("perform", &spam_context_manager::perform)
.def("__enter__", &spam_context_manager::enter)
.def("__exit__", &spam_context_manager::exit)
;
}
Interactive usage:
>>> import example
>>> with example.Spam(42) as spam:
... spam.perform()
...
spam(): 42
spam::perform()
~spam()
|
How to scrape PHP Ajax using Python?
Question: I'm a beginner with Python. I'm trying to build a program that will scrape
product descriptions from <http://turnpikeshoes.com/shop/TCF00003>. Python has
many libraries and I'm sure there are many approaches to achieving my goal. I've done a
few successful scrapes using requests, however the fields I was looking for are
not showing up. Using Chrome's inspector I found an Ajax POST request.
Here is my code
from lxml import html
import requests
url = 'http://turnpikeshoes.com/shop/TCF00003'
#URL
headers = {'user-agent': 'my-app/0.0.1'}
#Header info sent to server
page = requests.get(url, headers=headers)
#Get response
tree = html.fromstring(page.content)
#Page Content
ShortDsc = tree.xpath('//span[@itemprop="reviewBody"]/text()')
LongDsc = tree.xpath('//li[@class="productLongDescription"]/text()')
print 'ShortDsc:', ShortDsc
print 'LongDsc:', LongDsc
I think I need to send a request directly to admin-ajax.php
Any help is greatly appreciated
Answer: You should try selenium in this case if you want to scrape javascript content:
from selenium import webdriver
import time
driver = webdriver.PhantomJS()
driver.get("http://turnpikeshoes.com/shop/TCF00003")
time.sleep(5)
LongDsc = driver.find_element_by_class_name("productLongDescription").text
print 'LongDsc:', LongDsc
Btw, you should also install PhantomJS as a headless browser.
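If you'd rather stay with requests, the data usually comes from the Ajax endpoint itself. Below is a hedged sketch of POSTing to admin-ajax.php directly; the action name and field names here are assumptions and must be copied from the actual POST request shown in Chrome's Network tab:
import requests
url = 'http://turnpikeshoes.com/wp-admin/admin-ajax.php'
payload = {
    'action': 'get_product',  # hypothetical: copy the real action name from the Network tab
    'product': 'TCF00003',    # hypothetical: copy the real field name and value
}
headers = {'user-agent': 'my-app/0.0.1'}
response = requests.post(url, data=payload, headers=headers)
print 'Response:', response.text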
|
Transform list into dictionary with counter values
Question: I have a list containing project names:
`my_list = ['a', 'b', 'c', 'a', 'd', 'a', 'a']`
I want to put the letters into a dictionary, with the values counting how many
times each letter appears in the list:
my_dict = {'a' : 4, 'b' : 1, 'c' : 1, 'd' : 1}
How can I do this in python?
Answer: This is exactly what
[`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter)
is for:
> A Counter is a dict subclass for counting hashable objects. It is an
> unordered collection where elements are stored as dictionary keys and their
> counts are stored as dictionary values.
>>> from collections import Counter
>>> my_list = ['a', 'b', 'c', 'a', 'd', 'a', 'a']
>>> Counter(my_list)
Counter({'a': 4, 'c': 1, 'b': 1, 'd': 1})
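Since `Counter` is itself a `dict` subclass, it can be used anywhere a dict is expected; if you specifically need a plain `dict`, the conversion is trivial:
>>> my_dict = dict(Counter(my_list))
>>> my_dict
{'a': 4, 'c': 1, 'b': 1, 'd': 1}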
|
Generating a CSRF token manually with Flask WTF-Forms
Question: I'd like to create and fill out a Flask WTF-Form using only python code.
However, the form doesn't automatically generate a CSRF token when I create it
with python code. Is there any way to do this manually?
The form in question:
from flask_wtf import Form
from wtforms import StringField
from wtforms.validators import DataRequired, URL
class URLForm(Form):
url = StringField('url', validators=[DataRequired(), URL(), Level3Url()])
the code I use to generate the form:
from forms import URLForm
form = URLForm()
if 'url' in request.args:
url = request.args.get('url')
form.url.data = url
if form.validate():
...
Answer: Since you're not using CSRF protection, disable it. You can also pass
`request.args` as the source of data.
form = URLForm(request.args, csrf_enabled=False)
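For context, here is a minimal sketch of a full view using that line (the route and app setup are illustrative, not from the question; newer Flask-WTF releases spell the option `meta={'csrf': False}` instead of `csrf_enabled=False`):
from flask import Flask, request
from forms import URLForm

app = Flask(__name__)

@app.route('/check')  # illustrative route name
def check():
    # Populate the form from the query string, with CSRF disabled.
    form = URLForm(request.args, csrf_enabled=False)
    if form.validate():
        return 'valid: ' + form.url.data
    return 'invalid'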
|
Xpath - obtaining 2 nodes with 1 node having default value if missing
Question: I am using xpath in Python 2.7 with lxml:
from lxml import html
...
tree = html.fromstring(source)
results = tree.xpath(...xpath string...)
Now the problem is the xpath string, and I am getting quite lost in this. I am
trying to get all the nodes from one path like this:
//a[@class="hyperlinkClass"]/span/text() (1)
There are no missing entries in this part and this works fine. But I'm also
trying to get a part relative to this as well, like so:
//a[@class="hyperlinkClass"]/span/following-sibling::div[@class="divClassName"]/span[@class="spanClassName"]/text() (2)
This works fine by itself, but (2) may or may not have nodes for each node in
(1). What I would like is a default value, say "absent", for when (2) is
missing/empty for a given (1). This sounds straightforward and maybe it is,
but I'm hitting a brick wall here.
By doing '(1) | (2)' I get all the values needed, but no way to match them up. If I do '(1) | concat((2), "absent")', this doesn't work either; concat doesn't seem to work from Python, though I've read that it is valid XPath. I saw [here](http://stackoverflow.com/questions/4489976/xpath-to-return-default-value-if-node-not-present) the "Becker method", but that doesn't work either (or I can't get it to).
Hopefully, someone can shine a light on how to get this working or if it's
even possible.
Answer: Don't make this more complicated than it is:
path1 = '//a[@class="hyperlinkClass"]/span'
path2 = './following-sibling::div[@class="divClassName"]/span[@class="spanClassName"]'
for link in tree.xpath(path1):
other_node = link.xpath(path2)
if len(other_node):
print(link.text, other_node[0].text)
else:
print(link.text, 'n/a')
|
Xpath following siblings until another sibling
Question: I'm new to using XPath. I'm trying to parse some data in Python using XPath,
from the following HTML:
<table>
<tr>
<td class="DT">29-04-14</td>
<td class="Regio">Text</td>
<td class="Md">Text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="SomeClass">Some other text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="SomeOtherClass">Some more text</td>
</tr>
<tr>
<td class="DT">22-04-14</td>
<td class="Regio">Text</td>
<td class="Md">Text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="OmsAm">more text</td>
</tr>
<tr>
<td class="DT">30-04-14</td>
<td class="Regio">Text</td>
<td class="Md">Text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="OmsBr">Some other Text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="OmsBr">More Text</td>
</tr>
<tr>
<td></td>
<td></td>
<td class="OmsBr">Some different text</td>
</tr>
</table>
I need all `<td>` in the following sibling `<tr>`s after a `<tr>` with some values
in its `<td>`s, but _until_ the next `<tr>` with some values in all `<td>`s.
E.g. assuming my current position is the first `<tr>`, I would need these
table cells:
<td class="SomeClass">Some other text</td>
<td class="SomeOtherClass">Some more text</td>
Assuming my current position is table row 4:
<tr>
<td class="DT">22-04-14</td>
<td class="Regio">Text</td>
<td class="Md">Text</td>
</tr>
I would only need
<td class="OmsAm">more text</td>
This is the XPath I'm using to get all sibling `<tr>`, but it gets me _all_
following siblings, and does not stop at the sibling where it should:
`./following-sibling::tr/td[1][not(text()[1])]/..`
I think I have to implement the Kayesian method, but I don't understand it
in my case. Any help would be really appreciated!
Answer: I may be misinterpreting the question, but if, for each `<tr><td
class="DT">xx-xx-xx</td>`, you want all `<tr>` after it, and before the next
`<tr><td class="DT">xx-xx-xx</td>`, one pattern is to loop on these "boundary"
`<tr><td class="DT">xx-xx-xx</td>` elements, and selecting following sibling
rows with a condition on how many "boundaries" are found before.
Let's use `lxml` to illustrate. First, we create a document from your sample
input:
>>> import lxml.html
>>> t = '''<table>
... <tr>
... <td class="DT">29-04-14</td>
... <td class="Regio">Text</td>
... <td class="Md">Text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="SomeClass">Some other text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="SomeOtherClass">Some more text</td>
... </tr>
... <tr>
... <td class="DT">22-04-14</td>
... <td class="Regio">Text</td>
... <td class="Md">Text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="OmsAm">more text</td>
... </tr>
... <tr>
... <td class="DT">30-04-14</td>
... <td class="Regio">Text</td>
... <td class="Md">Text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="OmsBr">Some other Text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="OmsBr">More Text</td>
... </tr>
... <tr>
... <td></td>
... <td></td>
... <td class="OmsBr">Some different text</td>
... </tr>
... </table>'''
>>> doc = lxml.html.fromstring(t)
Now, let's count these `<tr><td class="DT">xx-xx-xx</td>`:
>>> doc.xpath('//table/tr[td/@class="DT"]')
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab005e8>, <Element tr at 0x7f948ab00638>]
>>> doc.xpath('count(//table/tr[td/@class="DT"])')
3.0
>>> list(enumerate(doc.xpath('//table/tr[td/@class="DT"]'), start=1))
[(1, <Element tr at 0x7f948ab00548>), (2, <Element tr at 0x7f948ab005e8>), (3, <Element tr at 0x7f948ab00638>)]
We can loop on these rows and select the rows that come after in the document
(we'll select text nodes to "see" which rows these are):
>>> for cnt, row in enumerate(doc.xpath('//table/tr[td/@class="DT"]'), start=1):
... print( row.xpath('./following-sibling::tr/td/text()') )
...
['Some other text', 'Some more text', '22-04-14', 'Text', 'Text', 'more text', '30-04-14', 'Text', 'Text', 'Some other Text', 'More Text', 'Some different text']
['more text', '30-04-14', 'Text', 'Text', 'Some other Text', 'More Text', 'Some different text']
['Some other Text', 'More Text', 'Some different text']
We're selecting too many rows in each iteration, all the rows until the end of
the `<table>`. We need an additional "end" condition for following rows.
We're counting the `tr[td/@class="DT"]` in the loop, so we can check how many
preceding `tr[td/@class="DT"]` each row has:
For the 1st set:
row.xpath('./following-sibling::tr[count(./preceding-sibling::tr[td/@class="DT"])=1]')
For the 2nd:
row.xpath('./following-sibling::tr[count(./preceding-sibling::tr[td/@class="DT"])=2]')
etc.
So, in the loop, we can use the current count with an XPath variable with lxml
([an underrated XPath feature supported by
lxml](http://lxml.de/xpathxslt.html)):
>>> for cnt, row in enumerate(doc.xpath('//table/tr[td/@class="DT"]'), start=1):
... print( row.xpath('./following-sibling::tr[count(./preceding-sibling::tr[td/@class="DT"])=$count]', count=cnt) )
...
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab005e8>, <Element tr at 0x7f948ec02f98>]
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab00638>]
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab005e8>, <Element tr at 0x7f948ab00688>]
>>>
Hm, we're selecting one row too many in each iteration. That's because
`<tr><td class="DT">30-04-14</td>` also has 1 preceding
`<tr><td class="DT">`.
We can add an extra predicate to select only rows that do NOT have a
`<td class="DT">`:
>>> for cnt, row in enumerate(doc.xpath('//table/tr[td/@class="DT"]'), start=1):
... print( row.xpath('''
... ./following-sibling::tr[count(./preceding-sibling::tr[td/@class="DT"])=$count]
... [not(td/@class="DT")]''', count=cnt) )
...
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab005e8>]
[<Element tr at 0x7f948ab00548>]
[<Element tr at 0x7f948ab00548>, <Element tr at 0x7f948ab005e8>, <Element tr at 0x7f948ab00688>]
>>>
The number of results per iteration looks right. Let's finally check using
text nodes:
>>> for cnt, row in enumerate(doc.xpath('//table/tr[td/@class="DT"]'), start=1):
... print( row.xpath('''
... ./following-sibling::tr[count(./preceding-sibling::tr[td/@class="DT"])=$count]
... [not(td/@class="DT")]
... /td/text()''', count=cnt) )
...
['Some other text', 'Some more text']
['more text']
['Some other Text', 'More Text', 'Some different text']
>>>
|
Python3 test import error
Question: I am using python3 to try and get a test file for a sample application working,
yet it keeps throwing `ImportError: No module named 'calculate'`.
my file structure is:
/calculate
__init__.py
calculate.py
test/
__init__.py
calculate_test.py
I cannot figure out why this is the case, any help would be much appreciated.
The `__init__.py` files are empty.
**calculate.py** contains:
class Calculate(object):
def add(self, x, y):
return x + y
if __name__ == '__main__':
calc = Calculate()
result = calc.add(2, 2)
print(result)
**calculate_test.py** contains:
import unittest
from calculate import Calculate
class TestCalculate(unittest.TestCase):
def setUp(self):
self.calc = Calculate()
def test_add_method_returns_correct_result(self):
self.assertEqual(4, self.calc.add(2,2))
if __name__ == '__main__':
unittest.main()
I am running `python test/calculate_test.py` from the root `/calculate` folder
and am getting the error
Traceback (most recent call last):
File "test/calculate_test.py", line 2, in <module>
from calculate import Calculate
ImportError: No module named 'calculate'
I have been fiddling around with different structures and cannot understand
what the problem is.
Answer: Your project's structure is the reason: the test script doesn't have the outer
directory in its search path when you start it. Here are some ways to fix that:
1. Move the test file into the same directory that contains the module it imports. That will require no changes in the test file.
2. Use this structure
./project/
calculate_test.py
calculate/
__init__.py
calculate.py
This will require you to change the import statement in calculate_test.py to
something like `from calculate import calculate`.
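Alternatively, keeping the original layout, you can run the test as a module from the root folder (`python -m test.calculate_test`), or put the parent directory on the search path at the top of the test file. A sketch of the latter, assuming the structure from the question:
import os
import sys

# Make the folder containing calculate.py importable from test/calculate_test.py.
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from calculate import Calculate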
|
How to cast float to string with no decimal places
Question: I'm using `openpyxl` to read values from a spreadsheet. These values are being
read as floats; I am not entirely sure why.
import openpyxl as opx
wb = opx.load_workbook(SKU_WORKBOOK_PATH, use_iterators=True, data_only=True)
ws = wb.worksheets[0]
for row in ws.iter_rows():
foo = str(int(row[1].internal_value))
This is throwing the error:
ValueError: invalid literal for int() with base 10: '6978279.0'
Normally, openpyxl reads in integer values as `int`, but this time it has read
one in as a float cast to a string. In the spreadsheet, the value of this cell is
`6978279`.
I am converting this to the string I want with `foo = str(int(float(foo)))`
which results in `'6978279'` as intended. I could also do `foo = foo[:-2]`,
but this worries me that another cell, which may be read as an `int` or with
more decimal places, would screw things up.
This feels like a terrible, messy way of mashing what I have into what I want.
Is there a more pythonic way to do this? Am I reading the `xlsx` in a way that
forces floats? How can I do this without triple casting?
Answer: If you will never have values like `0.0`, you can use `str.rstrip`: it will turn
`6978279.0` into `6978279`, and removing zeros from the end of any other decimal is not
going to change its value; `1.12300` becomes `1.123`:
In [20]: "1.234200".rstrip("0.")
Out[20]: '1.2342'
In [21]: "1.0".rstrip("0.")
Out[21]: '1'
If you could have `0.0` etc., you can catch the case where the stripped string is empty:
In [22]: s = "0.0".rstrip("0.") or "0"
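One caveat: `rstrip("0.")` strips a set of characters rather than a suffix, so a value like `"100.0"` would collapse to `"1"`. If the cells always hold whole numbers, a safer sketch is to round-trip through `float`:
>>> str(int(float('6978279.0')))
'6978279'
>>> '{:.0f}'.format(float('100.0'))
'100'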
|
Read text file into dictionary to be used later for adding/modifying/deleting
Question: Let me preface by saying I'm not 100% sure if using a dictionary is the best
course of action for this task but that is what I believe I need to use to
accomplish this.
I have a .txt file that is formatted like this:
first_name last_name rate hours
first_name last_name rate hours
first_name last_name rate hours
first_name last_name rate hours
There is a single space between each item. Each line represents a person.
For my program I need to be able to:
* print out all the people at once
* be able to search for a person by first or last name and print out their information
* modify a person (first name, last name, hours, rate)
* delete a person (all their information)
When it gets printed I **_DO NOT_** need to see the [rate] and [hours] but
[gross pay] instead (gross pay = rate * hours).
I am fairly new to file processing with python, so my first attempt was just to
read every line from the file and print it out on the screen, but I ran into
the problem of not being able to display [gross pay].
# 'print_emp', display only a single employee's data chosen by the user displayed as
# firstname, lastname, grosspay (on one line of output)
def print_emp():
menu_name = ' '*int(OFFSET/2) + "EMPLOYEE LOOKUP"
dotted = (OFFSET+len(menu_name))*'-'
try:
with open('employees.txt') as file:
print('{} \n{} \n{}'.format(dotted, menu_name, dotted))
emp_name = input("Employee Name: ")
print('{0:20} {1:20} {2}'.format("First Name", "Last Name", "Gross Pay"))
for line in file:
if emp_name in line:
print (line.strip())
#print("\nEmployee", emp_name, "does not exist. Try again.\n")
#break
except FileNotFoundError:
print("Error: File not found.")
# 'print_all_emp', display all employee data in format firstname, lastname,
# grosspay (on one line of output per employee)
def print_all_emps():
menu_name = ' '*int(OFFSET/2) + "EMPLOYEE LIST"
dotted = (OFFSET+len(menu_name))*'-'
try:
with open('employees.txt', 'r') as file:
print('{} \n{} \n{}'.format(dotted, menu_name, dotted))
print('{0:20} {1:20} {2}'.format("First Name", "Last Name", "Gross Pay"))
for line in file:
print(line.strip())
print(dotted)
except FileNotFoundError:
print("Error: File not found.")
I am not sure how I go about reading my .txt file into a dictionary (if that's
what I need to do) where I assign a key to each person that includes their
first name, last name, rate, and hours and then multiplying the rate * hours
to create the gross pay and then displaying that gross pay.
I will be creating three more functions where I can add, delete, and modify
the people in the .txt file.
**EDIT :**
I believe what I am going for as an end program looks something like this:
<https://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3/File_IO>
But without the load and save functions...
Answer: Presuming you have space delimited data, you can just use the csv library.
import csv
labels = ['first_name', 'last_name', 'rate', 'hours']
data = csv.DictReader(open('./test.txt'), delimiter=' ', fieldnames=labels)
result = []
for row in data:
result.append(row)
print result
You will wind up with an array of dictionaries that each have the labels as
key names.
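From there, gross pay is just a cast-and-multiply over each row; a minimal sketch, assuming rate and hours are plain numbers in the file:
for person in result:
    gross_pay = float(person['rate']) * float(person['hours'])
    print '{0:20} {1:20} {2:.2f}'.format(person['first_name'], person['last_name'], gross_pay)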
|
Python doesn't register in MySQL server
Question: There’s something wrong in my python script: when I try to put some data in my
database and print it, it looks like it’s working, but when I rerun the code,
or if I check phpMyAdmin, there’s no data saved in the db. Does anyone
have some idea how to solve this problem?
import mysql.connector
from mysql.connector import errorcode
def connect():
""" Connect to MySQL database """
try:
conn = mysql.connector.connect(host='localhost',
database='Temperature',
user='Temperature',
password='mypass')
if conn.is_connected():
print('Connected to MySQL database')
cur = conn.cursor()
query = "INSERT INTO Temp(temp, humi) " \
"VALUES(315, 55)"
try:
cur.execute(query)
except MySQLdb.ProgrammingError as e:
print(e)
query = "SELECT * FROM Temp"
try:
cur.execute(query)
for reading in cur.fetchall():
print (str(reading[0])+" "+str(reading[1]))
except MySQLdb.ProgrammingError as e:
print(e)
except Error as e:
print(e)
finally:
conn.close()
if __name__ == '__main__':
connect()
Answer: You will need to add `conn.commit()` before `conn.close()`. That should solve
the problem.
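Concretely, the commit goes right after the INSERT executes (only the relevant lines shown):
cur.execute(query)  # the INSERT
conn.commit()       # flush the transaction so the row is actually stored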
|
invalid syntax when using pymysql
Question: I'm learning to use Python with MySQL. The same query works differently between
the MySQL console and pymysql. For example, in the MySQL console I have a table named "pages":
INSERT INTO pages (title,content) VALUES ("test title","test content")
And it works. In Python, I import pymysql, create a connection and a cursor
"cur", and type:
cur.execute("INSERT INTO pages (title,content) VALUES ("test title","test content")")
It raises an error: invalid syntax. But when I try this with a placeholder:
cur.execute("insert into pages (title,content) values (%s,%s)",("test title","test content"))
It works! So I am confused why the same query works in the MySQL console but not in
Python. Does this mean I should always use placeholders when using pymysql? Thanks
Answer: First of all, the last version (the _parameterized_ query) should be the
preferred one, since you don't need to put quotes around the placeholders
at all, letting the database driver figure out the type conversion and
quoting automatically. This way you are also safe from [SQL
injection](https://en.wikipedia.org/wiki/SQL_injection) attacks.
Now, to answering your question. If you want to use double quotes inside
double quotes, you need to _escape_ them:
cur.execute("INSERT INTO pages (title,content) VALUES (\"test title\", \"test content\")")
Or, use single quotes outside and double quotes inside:
cur.execute('INSERT INTO pages (title,content) VALUES ("test title", "test content")')
|
Django channels - Echo example not working
Question: I'm following the instructions in the [documentation site](http://channels.readthedocs.io/en/latest/getting-started.html), but I
got stuck on the echo example. The websocket is created correctly and connects
to the server, but when I send anything to the server I don't get any response.
(The example says I should see an alert window with the same message that I
send into the socket, but I don't; I've changed the alert to a console.log,
but still nothing.) What am I doing wrong?
In _settings.py_ :
INSTALLED_APPS = {
...
'channels',
'myapp',
...
}
...
# Channels settings
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgiref.inmemory.ChannelLayer",
"ROUTING": "myapp.routing.channel_routing",
},
}
In _routing.py_ :
from channels.routing import route
from myapp.consumers import *
channel_routing = [
route("websocket.receive", ws_receive),
]
In _consumers.py_ :
def ws_receive(message):
# ASGI WebSocket packet-received and send-packet message types
# both have a "text" key for their textual data.
message.reply_channel.send({
"text": message.content['text'],
})
In asgi.py
import os
from channels.asgi import get_channel_layer
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
channel_layer = get_channel_layer()
Then I run: python manage.py runserver, and in my browser I go to the server
url and in the console I put the following:
socket = new WebSocket("ws://" + window.location.host + "/chat/");
socket.onmessage = function(e) {
alert(e.data);
}
socket.onopen = function() {
socket.send("hello world");
}
Again, at this point I should see an alert window (or the console.log message)
but I get nothing.
[![response](http://i.stack.imgur.com/LqTBA.png)](http://i.stack.imgur.com/LqTBA.png)
The requests that I made have a status of pending (although I read
[here](http://meta.stackexchange.com/questions/189350/websockets-stuck-in-pending-on-stack-overflow) that this is normal, per the first comment).
[![requests](http://i.stack.imgur.com/NaxTa.png)](http://i.stack.imgur.com/NaxTa.png)
And the server output looks like this:
[![server-
output](http://i.stack.imgur.com/EfFfL.png)](http://i.stack.imgur.com/EfFfL.png)
Every time I've tried to send something through the websocket in the
browser, the server just prints CONNECT, and no log from the js console
shows up.
### Edit: I've tested websockets in my browser against echo.websocket.org and
I got the answer as expected:
[![test-
websocket](http://i.stack.imgur.com/VLjX3.png)](http://i.stack.imgur.com/VLjX3.png)
Answer: I changed to an older version of twisted and it fixed it. Hope that helps.
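For anyone hitting the same issue, downgrading is a one-liner with pip; the exact version below is only an illustration, not taken from the original answer, so pick one known to work with your channels release:
pip install "twisted==16.2.0"  # version number is an example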
|
SyntaxError in if/elif block
Question: So I tried making a basic Rock Paper Scissors game with Python 3 and a random
AI.
import random
x=0
InvalidInput="Invalid Input, please use a capital letter at the start of your input"
while x==0:
AI=random.randint(1,3)
UserInput=input("Rock, Paper, or Scissors?")
if AI==1:
AI="Rock"
elif AI==2:
AI="Paper"
elif AI==3:
AI="Scissors"
print("You chose {}, Computer chose {}".format(UserInput, AI)
#Quit
elif UserInput=="Quit":
break
#Tie
elif AI==UserInput:
print("Its a tie!")
#AI chooses Rock
elif AI=="Rock":
if UserInput=="Paper":
print("You Win!")
elif UserInput=="Scissors":
print("You Lose")
else:
print(InvalidInput)
#AI chooses Paper
elif AI=="Paper":
if UserInput=="Scissors":
print("You Win!")
elif UserInput=="Rock":
print("You Lose")
else:
print(InvalidInput)
#AI chooses Scissors
elif AI=="Scissors":
if UserInput=="Rock":
print("You Win!")
elif UserInput=="Paper":
print("You Lose")
else:
print(InvalidInput)
and this is the error I get:
> Traceback (most recent call last): File "python", line 16 elif
> UserInput=="Quit": ^ SyntaxError: invalid syntax
Answer: I made some changes: indented the print, added the closing bracket for print,
moved Quit to a separate if, and added a check for capital letters in Rock,
Paper, Scissors and Quit.
import random
x=0
InvalidInput="Invalid Input, please use a capital letter at the start of your input"
while x==0:
AI=random.randint(1,3)
if AI==1:
AI="Rock"
elif AI==2:
AI="Paper"
elif AI==3:
AI="Scissors"
UserInput=input("Rock, Paper, or Scissors?")
if UserInput[0] not in 'RPSQ':
print(InvalidInput)
continue
else:
#Quit
if UserInput=="Quit":
break
print("You chose {}, Computer chose {}".format(UserInput, AI))
#Tie
if AI==UserInput:
print("Its a tie!")
#AI chooses Rock
elif AI=="Rock":
if UserInput=="Paper":
print("You Win!")
elif UserInput=="Scissors":
print("You Lose")
else:
print(InvalidInput)
#AI chooses Paper
elif AI=="Paper":
if UserInput=="Scissors":
print("You Win!")
elif UserInput=="Rock":
print("You Lose")
else:
print(InvalidInput)
#AI chooses Scissors
elif AI=="Scissors":
if UserInput=="Rock":
print("You Win!")
elif UserInput=="Paper":
print("You Lose")
else:
print(InvalidInput)
|
Running bash in subprocess breaks stdout of tty if interrupted while waiting on `read -s`?
Question: As @Bakuriu points out in the comments, this is basically the same problem as
in [BASH: Ctrl+C during input breaks current terminal](http://stackoverflow.com/questions/31808863/bash-ctrlc-during-input-breaks-current-terminal). However, I can only reproduce the problem when bash
is run as a subprocess of another executable, and not directly from bash,
where it seems to handle terminal cleanup fine. I would be interested in an
answer as to why bash seems to be broken in this regard.
I have a Python script meant to log the output of a subprocess that is started
by that script.
point reads user input by calling the `read -s` built-in (the `-s`, which
prevents echoing of entered characters, being key), and the user interrupts
the script (i.e. by Ctrl-C), then bash fails to restore output to the tty,
even though it continues to accept input.
I whittled this down to a simple example:
$ cat test.py
#!/usr/bin/python
import subprocess as sp
p = sp.Popen(['bash', '-c', 'read -s foo; echo $foo'])
p.wait()
Upon running `./test.py` it will wait for some input. If you type some input
and press Enter, the script returns and echoes your input as expected, and there
is no issue. However, if you immediately hit "Ctrl-C", Python displays a
traceback for the `KeyboardInterrupt` and then returns to the bash prompt,
but nothing you type is displayed to the terminal. Typing `reset<enter>`
successfully resets the terminal, however.
I'm somewhat at a loss as to exactly what's happening here.
**Update:** I managed to reproduce this without Python in the mix either. I
was trying to run bash in strace to see if I could glean anything that was
going on. With the following bash script:
$ cat read.sh
#!/bin/bash
read -s foo
echo $foo
Running `strace ./read.sh` and immediately hitting Ctrl-C produces:
...
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon -echo ...}) = 0
brk(0x1a93000) = 0x1a93000
read(0, Process 25487 detached
<detached ...>
Where PID 25487 was `read.sh`. This leaves the terminal in the same broken
state. However, `strace -I1 ./read.sh` simply interrupts the `./read.sh`
process and returns to a normal, non-broken terminal.
Answer: It seems like this is related to the fact that `bash -c` starts a **non-
interactive** shell. This probably prevents it from restoring the terminal
state.
To explicitly start an interactive shell you can just pass the `-i` option to
bash.
$ cat test_read.py
#!/usr/bin/python3
from subprocess import Popen
p = Popen(['bash', '-c', 'read -s foo; echo $foo'])
p.wait()
$ diff test_read.py test_read_i.py
3c3
< p = Popen(['bash', '-c', 'read -s foo; echo $foo'])
---
> p = Popen(['bash', '-ic', 'read -s foo; echo $foo'])
When I run and press `Ctrl`+`C`:
$ ./test_read.py
I obtain:
Traceback (most recent call last):
File "./test_read.py", line 4, in <module>
p.wait()
File "/usr/lib/python3.5/subprocess.py", line 1648, in wait
(pid, sts) = self._try_wait(0)
File "/usr/lib/python3.5/subprocess.py", line 1598, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
and the terminal isn't properly restored.
If I run the `test_read_i.py` file in the same way I just get:
$ ./test_read_i.py
$ echo hi
hi
no error, and terminal works.
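Another workaround is to have the Python parent save the tty settings before spawning bash and restore them afterwards; a sketch using the standard termios module:
#!/usr/bin/python
import subprocess as sp
import sys
import termios

# Save the terminal attributes before handing the tty to the child.
old_attrs = termios.tcgetattr(sys.stdin.fileno())
try:
    p = sp.Popen(['bash', '-c', 'read -s foo; echo $foo'])
    p.wait()
finally:
    # Restore the attributes even if a KeyboardInterrupt unwinds the stack.
    termios.tcsetattr(sys.stdin.fileno(), termios.TCSADRAIN, old_attrs)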
|
How can I take integer regex?
Question: I'm trying to use regex in Python to take some parts of a text. From the text
I need to take this kind of substring: '2016-049172'. What's the equivalent
regex? Thank you very much.
Here's a piece of code:
import re
pattern = re.compile(r"\s-\s[0-9]+[0-9]$]")
my_string = 'Ticketing TSX - 2016-049172'
matches = re.findall(pattern,my_string)
print matches
Of course, my output is an empty list. (I apologize for the initial bad post, I'm
new.)
Answer: The regex to use is this:
\d{4}-\d{6}
Updating your example code, this will do it for you:
import re
pattern = re.compile(r"\d{4}-\d{6}")
my_string = 'Ticketing TSX - 2016-049172'
matches = re.findall(pattern,my_string)
print matches
|
Phoenix Channel sending messages from a client outside the project
Question: I wanted to send a message to my user channel of my Phoenix Application. I
have joined a user_token with the channel as `users:user_token` in the
`user_channel.ex` . I was successful doing it from another controller called
the `toy_controller` by calling a broadcast method. The broadcast method is in
the user channel. And I have written a jQuery file to handle the events. I was
looking for something which can send messages to the same channel from outside
of the project, because I wanted to do some IoT stuff. I have tried a python
module called `occamy.socket` and the JS client of Phoenix that it uses
internally. Then, I found a disconnection always. I can't figure out the exact
address of the websocket connection from Phoenix. If I am trying it with that
Phoenix npm library in that project folder itself, it says `ReferenceError:
window is not defined` always. And, I think it is because of the
initialization part of the socket in the `web/static/js/socket.js` file where
it's written as
let socket = new Socket("/socket", {params: {token: window.userToken}})
, but I am not sure. The thing that I have tried is below
var Socket = require("phoenix-socket").Socket;
var socket = new Socket("ws://localhost:4000/socket");
In the python client, I was also trying to connect to this address and got a
disconnection error. I want to do this for IoT purposes, where I want to monitor
the sensor data of a user. Each user will have their own sensors to be
monitored. So, I have configured the `topic:subtopic` channel as
`users:user_token`. I need to send messages from my Raspberry Pi to this
channel using those unique user tokens. My user_channel, user.js,
app.js and socket.js are given below.
//web/static/js/socket.js
import {Socket} from "phoenix"
let socket = new Socket("/socket", {params: {token: window.userToken}})
socket.connect()
export default socket
//web/static/app.js
import "phoenix_html"
import user from "./user"
#web/channels/user_channel.ex
defmodule Tworit.UserChannel do
use Tworit.Web, :channel
def join("users:" <> user_token, payload, socket) do
if authorized?(payload) do
{:ok, "Joined To User:#{user_token}", socket}
else
{:error, %{reason: "unauthorized"}}
end
end
def handle_in("ping", payload, socket) do
{:reply, {:ok, payload}, socket}
end
def handle_in("shout", payload, socket) do
broadcast socket, "shout", payload
{:noreply, socket}
end
def handle_out(event, payload, socket) do
push socket, event, payload
{:noreply, socket}
end
defp authorized?(_payload) do
true
end
def broadcast_change(toy, current_user) do
payload = %{
"name" => toy.name,
"body" => toy.body
}
Tworit.Endpoint.broadcast("users:#{current_user.token}", "change", payload)
end
end
//web/static/js/user.js
import socket from "./socket"
$(function() {
let ul = $("ul#em")
if (ul.length) {
var token = ul.data("id")
var topic = "users:" + token
// Join the topic
let channel = socket.channel(topic, {})
channel.join()
.receive("ok", data => {
console.log("Joined topic", topic)
})
.receive("error", resp => {
console.log("Unable to join topic", topic)
})
channel.on("change", toy => {
console.log("Change:", toy);
$("#message").append(toy["name"])
})
}
});
Answer: Finally, I am able to send and receive messages asynchronously from a python
program. It uses the websockets asyncio module from python. I figured out the
various events required for phoenix channels, like 'phx_join' for joining a
topic. So, the following program worked.
import asyncio
import websockets
import json
import time
from random import randint
import serial
from pyfirmata import Arduino, util
board = Arduino('/dev/ttyACM1')
it = util.Iterator(board)
it.start()
board.analog[0].enable_reporting()
board.analog[1].enable_reporting()
board.analog[2].enable_reporting()
board.analog[3].enable_reporting()
import RPi.GPIO as gpio
gpio.setmode(gpio.BCM)
gpio.setup(14, gpio.OUT)
async def main():
async with websockets.connect('ws://IP_addr:4000/socket/websocket') as websocket:
data = dict(topic="users:user_token", event="phx_join", payload={}, ref=None)
#this method joins the phoenix channel
await websocket.send(json.dumps(data))
print("Joined")
while True:
msg = await retrieve() # waits for data from arduino analog pins
await websocket.send(json.dumps(msg)) # sends the sensor output to phoenix channel
print("sent")
call = await websocket.recv() # waits for anything from the phoenix server
control = json.loads(call)
# I have sent values from 2 buttons for swicthing a led with event 'control'
if(control['event'] == "control"):
event(control['payload']['val']) #swiches the led as per the input from event 'control'
print("< {}".format(call))
def event(val):
if(val == "on"):
gpio.output(14, True)
if(val == "off"):
gpio.output(14, False)
async def retrieve():
#analog read
load = board.analog[0].read()
pf = board.analog[1].read()
reading = board.analog[2].read()
thd = board.analog[3].read()
output = {"load": load, "pf": pf, "reading": reading,"thd": thd}
# the phoenix server displays the data on a page for the "sensor_output" event
msg = dict(topic="users:user_token", event="sensor_output", payload=output, ref=None)
print(msg)
return(msg)
asyncio.get_event_loop().run_until_complete(main())
asyncio.get_event_loop().run_forever()
|
formatting the return in python - print out different types of values
Question: I have the following code
$ipython
> import csv
> with open('q1_4.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter = ' ', quotechar = '|')
for row in reader:
print [tuple(row)]
In each row I have four values: a string, then three numbers. How could I
print out "string", num, num, num for each row, instead of "string",
"string", "string", "string"?
**UPDATED** I have this modification based on the comments below:
import csv
from itertools import chain
result = []
with open("q1_4.csv", "rb") as csvfile:
reader = csv.reader(csvfile, delimiter = ",", quotechar = "|")
for row in reader:
result.append(tuple(chain([row[0]], map(float, row[1:4]))))
print result
As I am interested in collecting the tuple results into a list, I added result =
[] and the modifications above. However, this gives me as many duplicates
as the number of rows in my dataset. How could this be improved? Thank you!!
Answer: If formatting is all you are concerned about
with open('q1_4.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter = ' ', quotechar = '|')
for row in reader:
print '"{}", {}, {}, {}'.format(*row)
Test:
>>> row = ["string", "1", "2", "3"]
>>> print '"{}", {}, {}, {}'.format(*row)
"string", 1, 2, 3
**Update** per your request in the comments. Here are several options you can
use to return a tuple
return (row[0],) + tuple(int(n) for n in row[1:]) # Python 2 and 3
return (row[0],) + tuple(map(int, row[1:])) # Python 2 and 3
return tuple(chain([row[0]], map(int, row[1:]))) # Python 2 and 3; requires importing `chain` from `itertools`
Though I prefer Python 3 here:
return (row[0], *map(int, row[1:])) # Python 3 only
If you want to return a list of tuples, take any of that constructs (I'll pick
the third one for brevity) and do:
return [(row[0], *map(int, row[1:])) for row in reader]
**NOTE** You can only return from a function, so you'll have to wrap it all
into a function.
|
Python Turtle mainloop() usage
Question: I have the following code from an [online tutorial](http://openbookproject.net/thinkcs/python/english3e/events.html#an-example-state-machines) to learn event-based programming by making a stop
light that changes state when the mouse is clicked. Here is the entirety of my
code:
import turtle
turtle.setup(400,500)
wn = turtle.Screen()
wn.title("Tess becomes a traffic light!")
wn.bgcolor("lightgreen")
tess = turtle.Turtle()
def draw_housing():
tess.pensize(3)
tess.color("black","darkgrey")
tess.begin_fill()
tess.forward(80)
tess.left(90)
tess.forward(200)
tess.circle(40, 180)
tess.forward(200)
tess.left(90)
tess.end_fill()
draw_housing()
tess.penup()
tess.forward(40)
tess.left(90)
tess.forward(40)
tess.shape("circle")
tess.shapesize(3)
tess.fillcolor("green")
state_num = 0
def nextFSMstate():
global state_num
if state_num == 0:
tess.forward(70)
tess.fillcolor("orange")
state_num = 1
elif state_num == 1:
tess.forward(70)
tess.fillcolor("red")
state_num = 2
else:
tess.back(140)
tess.fillcolor("green")
state_num = 0
wn.onkey(nextFSMstate, "space")
wn.listen()
turtle.mainloop()
# example says wn.mainloop() but I get error. This works though
In the tutorial, they use:
wn.mainloop()
But I get the error:
File "stopLights.py", line 51, in <module>
wn.mainloop()
AttributeError: '_Screen' object has no attribute 'mainloop'
and have to use
turtle.mainloop()
Why the difference? I am using Python 2.7 in Ubuntu; the example is in
PyScripter. Thanks in advance.
Answer: It appears to be an error in the tutorial.
On line 4, they define `wn = turtle.Screen()`, which means that the later call
to `wn.mainloop()` is equivalent to calling `turtle.Screen().mainloop()`.
This doesn't make any sense; as the error message states there is no
`.mainloop()` method of `turtle.Screen()`. There _is_ , however a
`.mainloop()` method of the base `turtle` object, which is why calling that
works.
|
Scraping Edgar with Python regular expressions
Question: I am working on a personal project's initial stage of downloading 10-Q
statements from EDGAR. Quick disclaimer, I am very new to programming and
python so the code that I wrote is very basic, not even using custom functions
and classes, just a very long script that I'm more comfortable editing. As a
result, some solutions are quite rough (i.e. concatenating urls using CIKs and
other search options instead of doing requests with "browser" headers)
I keep running into a problem that those who have scraped EDGAR might be
familiar with. Every now and then my script just stops running. It doesn't
raise any exceptions (I created some that append txt reports with links that
can't be opened and so forth). I suspect that either the SEC servers have a
certain limit of requests from an IP per some unit of time (if I wait some
time after CTRL-C'ing the script and run it again, it generates more output
compared to rapid re-activation); alternatively, it could be TWC that
identifies me as a bot and limits such requests.
If it's SEC, what could potentially work? I tried learning how to work with
TOR and potentially get a new IP every now and then but I can't really find
some basic tutorial that would work for my level of expertise. Maybe someone
can recommend something good on the topic?
Maybe the timers would work? Like force the script to sleep every hour or so
(still trying to figure out how to make such timers and reset them if an event
occurs). The main challenge with this particular problem is that I can't let
it run at night.
Thank you in advance for any advice, I keep fighting with it for days and at
this stage it could take me more than a month to get what I want (before I
even start tackling 10-Ks)
Answer: It seems like delays are pretty useful - sitting at 3.5k downloads with no
interruptions thanks to a simple:
import random
import time
time.sleep(random.randint(0, 1) + abs(random.normalvariate(0, 0.2)))
|
REGEX in Python only matches exponent
Question: I was reading some lines from a file, which I want to match as floats; here
is a minimal example:
import re
regex="[-+]?[0-9]+\.?[0-9]+([eE][-+]?[0-9]+)?"
string="0.00000000000000000E0 0.00000000000000000E0 0.00000000000000000E0"
print(re.findall(regex,string))
which gives me:
['E0', 'E0', 'E0']
Instead of the expected
['0.00000000000000000E0', '0.00000000000000000E0', '0.00000000000000000E0']
Answer: Change the regex to
regex=r"[-+]?[0-9]+\.?[0-9]+(?:[eE][-+]?[0-9]+)?"
^^
The point is to use a non-capturing group instead of the capturing one so that
`findall` did not have to return only the captured text (when there are no
capturing groups defined in the pattern, `re.findall` will return whole
matched texts).
Also, use a raw string literal always to define regex pattern to avoid any
other misunderstanding.
A [Python demo](http://ideone.com/6UucQT):
import re
regex=r"[-+]?[0-9]+\.?[0-9]+(?:[eE][-+]?[0-9]+)?"
string="0.00000000000000000E0 0.00000000000000000E0 0.00000000000000000E0"
print(re.findall(regex,string))
|
Uploading a file to a form using python requests
Question: Trying to write a script that fills in an online form at this
[website](http://www.formstack.com/forms/?1455656-XG7ryB28LE) and uploads a
zip file. I have looked at [the documentation](http://docs.python-requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file) and
[several](http://stackoverflow.com/questions/17722006/login-and-upload-file-using-python-requests/17722134#17722134)
[other](http://stackoverflow.com/questions/22567306/python-requests-file-upload)
[posts](http://stackoverflow.com/questions/27050399/make-an-http-post-request-to-upload-a-file-using-python-urllib-urllib2) on here, but still can't
get my script to upload the file.
Here is the html source for the file upload:
<input type="file" id="field19567427" name="field19567427"
size="30" class="fsField fsUpload uploadTypes-jpg,jpeg,gif,png,bmp,tif,
doc,docx,xls,xlsx,txt,mp3,mp4,aac,wav,au,wmv,avi,mpg,mpeg,zip,gz,rar,z,tgz,tar,sitx" />
Here is my python code (_forgive all my imports, I have been trying a lot of
different approaches_):
import urllib
import urllib2
import cookielib
import webbrowser
import os
import base64
import requests
from pprint import pprint
walla = "X:\\Test\\Test.html"
my_file = open("X:\\Some_Directory\\Meh.zip", 'rb')
values = {
"field19567029" : "Some Company",
"field20044433" : "Some Email",
"field40168419" : "Some Phone Num",
"field19567035" : "Some Code",
"field19567303" : "Some Distance",
"field19567306" : "Map Projection",
}
zippy = {
"field19567427" : my_file
}
url = "http://www.formstack.com/forms/?1455656-XG7ryB28LE"
url2 = "http://httpbin.org/post"
if os.path.exists(walla):
os.remove(walla)
r = requests.post(url, data=values, files=zippy)
#r.status_code
#pprint(r.json()['headers'])
with open(walla, "w") as f:
f.write(r.content)
Answer: With your specific url, you need to add some data:
url = "http://www.formstack.com/forms/?1455656-XG7ryB28LE"
session = requests.session()
r = session.get(url) # This can be used to determine form and viewkey values
data = {
"form": "1455656", # Added
"viewkey": "XG7ryB28LE", # Added
"_submit": "1", # Added
"field19567029" : "Some Company",
"field20044433" : "Some Email",
"field40168419" : "Some Phone Num",
"field19567035" : "Some Code",
"field19567303" : "Some Distance",
"field19567306" : "Map Projection",
}
files = {"field19567427": open("X:\\Some_Directory\\Meh.zip", 'rb')}
r2 = session.post(url, data=data, files=files)
print r2.content
|
Are str() and int() time consuming in python?
Question: I find myself using this kind of structure a lot:
for node in range(int(self.num_nodes)):
dists[str(node)] = -1
So, my question is: what does Python (3+) do when `str()` or `int()`, for
example, are called? Does it just cast? Does it convert? (Something else?)
How would you advise me to write this kind of code? Keep in mind that the keys
of dists must be `str`, and `num_nodes` is also a `str`.
Answer: How expensive `str` and `int` are depends entirely on the objects that you pass
to them. Ultimately, they just call the object's
[`__str__`](https://docs.python.org/3/reference/datamodel.html#object.__str__)
and
[`__int__`](https://docs.python.org/3/reference/datamodel.html#object.__int__)
hook methods. For most simple objects, these operations should be pretty fast.
e.g. to convert a simple string to an integer takes my computer about .5
microseconds:
$python -mtimeit -s 'x="1"' 'int(x)'
1000000 loops, best of 3: 0.479 usec per loop
and the reverse operation only takes a little over .1 microseconds:
$ python -mtimeit -s 'x=1' 'str(x)'
10000000 loops, best of 3: 0.12 usec per loop
however, it's very easy to create pathological objects for which this would be
_really_ expensive.
import time
class Argv(object):
def __str__(self):
time.sleep(1000)
return 'ha ha ha!'
|
complexity of set of nameduple lookup
Question: In Python I have a namedtuple, because I want to store a few values in the
same object.
A = namedtuple("A", "key1 key2 key3")
I store those A's in a registry class which holds a set()
class ARegistry(object):
def __init__(self):
self._register = set()
def register(self, value1, value2, value3):
self._register.add(A(key1=value1, key2=value2, key3=value3))
def __getitem__(self, value1):
return next((x for x in self._register if x.key1 == value1), None)
def get_by_key2(self, value):
return next((x for x in self._register if x.key2 == value), None)
def get_by_key3(self, value):
return next((x for x in self._register if x.key3 == value), None)
In this way I can easily retrieve those namedtuples by key1, which I need in
most cases (80%), but also by key2 or key3 (the other 20%):
myobj1 = a_register["foo"] # Search on key1
myobj2 = a_register.get_by_key2("bar") # Search on key2
myobj3 = a_register.get_by_key3("bar") # Search on key3
**Question:**
Now, from what I read in the documentation about sets, lookup in sets
is of complexity O(1). But is this still true if I store namedtuples in sets
like in the example above? Or does such a construct increase the lookup time
of objects in my registry, and is another method of looking up values
by multiple keys preferred, time-wise?
Answer: Lookup in a set is only O(1) if you are looking for the item itself in the set.
Here you are scanning each item in the set to see if it matches a particular
criterion, which is completely different (it'll be O(N) complexity on
average).
A more efficient way to store this would be to put the tuple into a dict that
maps the key to the tuple. You'll need 3 dicts to store the data this way (so
there is more memory involved in this approach if that is a concern)
from collections import defaultdict
class ARegistry(object):
def __init__(self):
self._register = [
defaultdict(list), # lookup based on first item in A
defaultdict(list), # lookup based on second item in A
defaultdict(list), # lookup based on third item in A
]
def register(self, value1, value2, value3):
tup = A(key1=value1, key2=value2, key3=value3)
for v, registry in zip(tup, self._register):
registry[v].append(tup)
def __getitem__(self, value1):
return next(iter(self._register[0][value1]), None)
def get_by_key2(self, value):
return next(iter(self._register[1][value]), None)
def get_by_key3(self, value):
return next(iter(self._register[2][value]), None)
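Usage stays the same as with the set-based version (a quick sketch, reusing the `A` namedtuple from the question):
registry = ARegistry()
registry.register('foo', 'bar', 'baz')
print registry['foo']             # average O(1) lookup by key1
print registry.get_by_key2('bar')
print registry.get_by_key3('baz')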
|
Python Zeep Client request throws error in xml exception
Question: When I run the following code, I keep getting the following error: `There is an
error in XML document (113, 25). ---> The string '' is not a valid Boolean
value.` I do not understand why this is happening.
[Here](http://resumeparsing.com/#ParseResume) is the documentation and
according to it the boolean fields are not required.
from zeep import Client
client = Client('http://services.resumeparsing.com/ResumeService.asmx?wsdl')
response = client.service.ParseResume(request={'AccountId': 'XXXXXXX',\
'ServiceKey':'XXXXXXXXX',\
'FileBytes': file_bytes, 'FileText': file_text, \
})
print(response)
Any help will be appreciated!
Answer: Author of zeep here; which version are you using? It seems that zeep generates
XML which is not valid according to the server.
You can see which XML is sent by enabling the debug log level, see
<http://docs.python-zeep.org/en/latest/transport.html#debugging>
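For reference, a minimal sketch of turning that logging on with the standard logging module, following the linked docs:
import logging

logging.basicConfig(level=logging.DEBUG)
# zeep logs the raw request/response XML on this logger.
logging.getLogger('zeep.transports').setLevel(logging.DEBUG)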
|
(GPS & MySQL) No module error in python 2.7 with virtualenv running on Lubuntu
Question: I have installed the gps packages and the mysql packages using:
sudo apt-get install gpsd gpsd-clients
sudo dpkg-reconfigure gpsd
sudo apt-get install python2.7-mysqldb
It shows that the packages have been successfully installed, but when I run my
python code, which has:
import gps
import MySQLdb as mdb
these errors come:
ImportError: No module named gps
and
ImportError: No module named MySQLdb
In python, `sys.path` gives the following:
sys.path
['', '/home/odroid/.virtualenvs/barc/src/django-tastypie-master', '/home/odroid/barc/workspace/devel/lib/python2.7/dist-packages', '/opt/ros/indigo/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/site-packages', '/home/odroid/.virtualenvs/barc/lib/python2.7', '/home/odroid/.virtualenvs/barc/lib/python2.7/plat-arm-linux-gnueabihf', '/home/odroid/.virtualenvs/barc/lib/python2.7/lib-tk', '/home/odroid/.virtualenvs/barc/lib/python2.7/lib-old', '/home/odroid/.virtualenvs/barc/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-arm-linux-gnueabihf', '/usr/lib/python2.7/lib-tk', '/home/odroid/.virtualenvs/barc/local/lib/python2.7/site-packages', '/home/odroid/.virtualenvs/barc/lib/python2.7/site-packages']
Answer: SOLVED. If python is running in a virtualenv, there is an issue with the
module search path. You need to extend it:
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/dist-packages
This command has to be repeated every time the terminal window is closed. To
make the change permanent, open `~/.bashrc` (e.g. `vim ~/.bashrc`), add the line
export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/dist-packages
and then run `source ~/.bashrc`.
|
Finding correct package versions using standalone Python 2.7 and Anaconda/Python 3.5 on same computer (Mac)
Question: I have been using Python 2.7 for some time on this machine; I needed to
install the Anaconda distribution with Python 3.5 for a team project.
I successfully installed Python 3.5, and now `python` points to Python 3.5,
but when I try `import numpy`, my system goes looking for numpy in the place
where my packages for 2.7 are located:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/numpy/__init__.py", line 180, in <module>
from . import add_newdocs
File "/Library/Python/2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/Library/Python/2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/Library/Python/2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/Library/Python/2.7/site-packages/numpy/core/__init__.py", line 14, in <module>
from . import multiarray
ImportError: dlopen(/Library/Python/2.7/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _PyBuffer_Type
Referenced from: /Library/Python/2.7/site-packages/numpy/core/multiarray.so
Expected in: flat namespace
in /Library/Python/2.7/site-packages/numpy/core/multiarray.so
I've tried adding `~/anaconda/pkgs` to my `PYTHONPATH` variable ahead of
`/Library/Python/2.7/site-packages`, but this doesn't seem to help. Though my
focus is on getting things to work in 3.5, I'd like to be able to use both
Python 2.7 and 3.5, so I don't want to uninstall 2.7 or completely remove the
pointer to that version's package location.
Answer: First, ensure you have the proper path for the anaconda installation. Then add
' ~//bin' to your PYTHONPATH instead.
|
Pool Multiprocessing Python
Question: Basically the issue is as follows: I have a bunch of workers that each have a
function prescribed to them (the function is worker(alist)), and I am trying to
process 35 workers at the same time. Each worker reads their line from the
file (the modulo part) and should process the line using the "worker"
function. I've pen-tested and found that the raw manipulation and deletion of
the useless indices is working 100% as intended.
The args part of the "pool.apply_async" function isn't passing the list "raw"
into it and starting the process. Raw is completely correct and functions
normally, and worker by itself functions normally; the pool.apply_async call
is the only place where there seems to be an issue, and I have no idea how to
fix it. Any help please?
The relevant code is here:
NUM_WORKERS=35
f=open("test.csv")
pool=multiprocessing.Pool()
open("final.csv",'w')
for workernumber in range(1, NUM_WORKERS):
for i,line in enumerate(f):
if i==0:
print "Skipping first line" #dont do anything
elif i%workernumber==0:
raw = line.split(',')[0][1:-1].split()
uselessindices=[-2,-3,-4,-5,-6]
counter=0
for ui in uselessindices:
del raw[ui+counter]
counter+=1
print raw
pool.apply_async(worker, args=(raw,))
pool.close()
pool.join()
Answer:
import multiprocessing
def worker(arg):
print 'doing work "%s"' % arg
return
NUM_WORKERS=35
with open('test.csv', 'w') as test:
for i in xrange(100):
if i % 10 == 0:
test.write('\n')
test.write('"%s 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23",' % i)
f=open("test.csv")
pool=multiprocessing.Pool(processes=NUM_WORKERS)
open("final.csv",'w')
for i, line in enumerate(f):
if i == 0:
continue
raw = line.split(',')[0][1:-1].split()
uselessindices=[-2,-3,-4,-5,-6]
counter=0
for ui in uselessindices:
del raw[ui+counter]
counter+=1
pool.apply_async(worker, args=(raw,))
pool.close()
pool.join()
print 'last raw len: %s' % len(raw)
print 'last raw value: %s' % raw
Output:
doing work "['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['10', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['20', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['30', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['40', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['50', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['60', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['70', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['80', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
doing work "['90', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']"
last raw len: 19
last raw value: ['90', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '23']
|
Why won't it expand both tar.gz files?
Question: I have two tar.gz files, 2014_SRS.tar.gz and 2013_SRS.tar.gz. Each of the
files contains a folder called SRS, which is full of text files. I downloaded
these from an ftp server. I want to unzip them automatically in Python. This
is my code:
import re
import ftplib
import os
import time
import tarfile
import sys
print('1')
tar = tarfile.open('2014_SRS.tar.gz')
tar.extractall()
tar.close()
print('2')
tar = tarfile.open('2013_SRS.tar.gz')
tar.extractall()
tar.close()
print('3')
This code only opens the second file. How do I fix it to open both files?
Also, I tried using a for loop to run through the whole directory. The code is
shown below.
for i in os.listdir(os.getcwd()):
if i.endswith(".tar.gz"):
tar = tarfile.open(i, "r:gz")
tar.extractall()
tar.close()
However this gave me an EOFError. In addition, before I ran this bit of code, I
was able to unzip both files manually. However, after I run it, and after the
code gives me an error, I can no longer unzip the 2014_SRS file manually. How do
I fix this?
Answer: While this may not answer your specific question as to why both files could
not be unzipped with your code, the following is one way to unzip a list of
tar.gz files.
import tarfile, glob
srcDir = "/your/src/directory"
dstDir = "/your/dst/directory"
for f in glob.glob(srcDir + "/*.gz"):
t = tarfile.open(f,"r:gz")
for member in t.getmembers():
t.extract(member,dstDir)
t.close()
|
Decoding NumPy int64 binary representation
Question: So I did a stupid thing, and forgot to explicitly type-convert some values I
was putting into an SQLite database (using Python's SQLalchemy). The column
was set up to store an `INT`, whereas the input was actually a `numpy.int64`
dtype.
The values I am getting back out of the database look like:
`b'\x15\x00\x00\x00\x00\x00\x00\x00'`
It seems that SQLite has gone and stored the binary representation for these
values, rather than the integer itself.
Is there a way to decode these values in Python, or am I stuck with loading
all my data again (not a trivial exercise at this point)?
Answer: You can use
[`struct.unpack()`](https://docs.python.org/3/library/struct.html#struct.unpack):
>>> import struct
>>> value = struct.unpack('<q', b'\x15\x00\x00\x00\x00\x00\x00\x00')
>>> value
(21,)
>>> value[0]
21
That assumes that the data was stored little endian as specified by the `<` in
the `unpack()` format string, and that it is a signed "long long" (8 bytes) as
specified by the `q`. If the data is big endian:
>>> struct.unpack('>q', b'\x15\x00\x00\x00\x00\x00\x00\x00')
(1513209474796486656,)
I imagine that little endian is more likely to be correct in this case.
P.S. I have just confirmed that when a `numpy.int64` is inserted into a SQLite
`int` field it can be retrieved using `struct.unpack()` as shown above.
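If you control the inserting side, one way to prevent this in the future is to
register an adapter with the driver - a sketch assuming SQLAlchemy is using the
standard `sqlite3` module underneath:
    import sqlite3
    import numpy as np

    # Store numpy.int64 values as plain Python ints from now on.
    sqlite3.register_adapter(np.int64, int)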
|
Error on Python serial import
Question: When I try to import the serial I get the following error:
Traceback (most recent call last):
File "C:\Documents and Settings\eduardo.pereira\workspace\thgspeak\tst.py", line 7, in <module>
import serial
File "C:\Python27\lib\site-packages\serial\__init__.py", line 27, in <module>
from serial.serialwin32 import Serial
File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 15, in <module>
from serial import win32
File "C:\Python27\lib\site-packages\serial\win32.py", line 182, in <module>
CancelIoEx = _stdcall_libraries['kernel32'].CancelIoEx
File "C:\Python27\lib\ctypes\__init__.py", line 375, in __getattr__
func = self.__getitem__(name)
File "C:\Python27\lib\ctypes\__init__.py", line 380, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'CancelIoEx' not found
I have installed the latest version of pySerial, Python 2.7 runing on a WinXP
laptop. Tried everywhere and found no similar problem. Is there any solution
for that? Thanks in advance...
Answer: The version of pySerial that you're using is trying to call a
[function](https://github.com/pyserial/pyserial/commit/5a39b8897bbadb4b4e6da38a0cb557522bac3e1a)
that's only available in Windows Vista, whereas you're running Windows XP.
It might be worth experimenting with using an older version of pySerial.
The code in question was [added to pySerial on 3 May
2016](https://github.com/pyserial/pyserial/commit/5a39b8897bbadb4b4e6da38a0cb557522bac3e1a),
so a version just prior to that might be a good start.
|
Get tweets from local host, python, pymongo
Question: I am trying this code:
import pymongo
import json
import numpy as np
client = pymongo.MongoClient('localhost', 27017)
db = client.test
collection = db['tweets']
print ("Tweets Capturados: ", collection.count())
But, I get this error:
ServerSelectionTimeoutError: localhost:27017: [WinError 10061] No connection could be made because the target machine actively refused it.
Is there a generic localhost from Twitter API that I could use?
Answer: Are you sure MongoDB is running on your local machine? Please check whether it
is up and running; there is nothing wrong with your code. It would also be
useful to know which version of pymongo you're using.
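One quick way to check from Python itself - a sketch assuming pymongo 3.x
(which your `ServerSelectionTimeoutError` suggests):
    import pymongo
    from pymongo.errors import ServerSelectionTimeoutError

    client = pymongo.MongoClient('localhost', 27017, serverSelectionTimeoutMS=2000)
    try:
        client.admin.command('ping')  # forces a round trip to the server
        print("MongoDB is up")
    except ServerSelectionTimeoutError:
        print("MongoDB is not reachable on localhost:27017")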
> Is there a generic localhost from Twitter API that I could use?
Can you please elaborate this more?
|
Python - Let pip only search locally for extra packages
Question: I'm trying to build an NSIS distributable, and this contains several packages.
One of them is `pyVISA-1.8`, which needs the package `enum34` to work.
Now, I usually bundle all the wheels I need for the packages in the NSIS
script, but when I do this for `pyVISA` (i.e. tell pip to `pip install
enum34-1.X.X.whl`, then `pip install pyVisa-1.8.tar.gz`), I can't `import visa`
without failures (pointing to enum34). (This might actually be a bug.)
I found out that if I let pip find the package on its own, the install works.
This is not an option, however, because this distro should be able to run on
offline systems, so I _need_ to have all the source code in the NSIS installer.
How do I tell pip where the locally cached enum34.whl is located?
Regards
EDIT: Here is the error:
C:\Users\Administrator>pip list
ecdsa (0.13)
enum (0.4.6)
matplotlib (1.4.3)
numpy (1.9.2)
paramiko (1.15.2)
Pillow (3.1.0)
pip (7.1.2)
pycrypto (2.6.1)
pyparsing (2.0.7)
python-dateutil (2.4.2)
python-nmap (0.6.0)
pytz (2015.4)
requests (2.7.0)
setuptools (18.2)
six (1.10.0)
C:\Users\Administrator>pip install C:\python27\Dependencies\enum34-1.1.6-py2-non
e-any.whl
Processing c:\python27\dependencies\enum34-1.1.6-py2-none-any.whl
Installing collected packages: enum34
Successfully installed enum34-1.1.6
C:\Users\Administrator>pip install C:\python27\Dependencies\PyVISA-1.8.tar.gz
Processing c:\python27\dependencies\pyvisa-1.8.tar.gz
Requirement already satisfied (use --upgrade to upgrade): enum34 in c:\python27\
lib\site-packages (from PyVISA==1.8)
Installing collected packages: PyVISA
Running setup.py install for PyVISA
Successfully installed PyVISA-1.8
C:\Users\Administrator>python
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:32:19) [MSC v.1500 32 bit (
Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import visa
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\Python27\lib\site-packages\visa.py", line 16, in <module>
from pyvisa import logger, __version__, log_to_screen, constants
File "c:\Python27\lib\site-packages\pyvisa\__init__.py", line 45, in <module>
from .highlevel import ResourceManager
File "c:\Python27\lib\site-packages\pyvisa\highlevel.py", line 22, in <module>
from . import constants
File "c:\Python27\lib\site-packages\pyvisa\constants.py", line 599, in <module
>
class AccessModes(enum.IntEnum):
AttributeError: 'module' object has no attribute 'IntEnum'
>>>
Answer: The problem was that enum-0.4.6 was also installed and preceded enum34 on the
path (on a brand new install with both packages installed):
C:\Users\Administrator>python -c "import enum; print enum.__path__"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute '__path__'
As we can see from the following statement:
C:\Users\Administrator>python -c "import sys; print sys.path"
['', 'c:\\Python27\\lib\\site-packages\\enum-0.4.6-py2.7.egg', ..... ]
enum-0.4.6 is the first path to be checked, which causes the problem, since we
wanted enum34. I did not know their functionalities were not mutually
exclusive. Anyway, uninstalling the enum-0.4.6 module solves all my problems,
because enum34 is backported to Python 2.7 and apparently has all the
functionality of enum-0.4.6:
C:\Users\Administrator>pip uninstall enum
Uninstalling enum-0.4.6:
c:\python27\lib\site-packages\enum-0.4.6-py2.7.egg
Now we can check the path of the module:
C:\Users\Administrator>python -c "import enum; print enum.__path__"
['c:\\Python27\\lib\\site-packages\\enum']
|
How to find an exact sequence of words in lists using Python 3?
Question: I am coding in Python 3 on a Windows platform.
I am making a function that will pass in a user's inputted sentence which my
function will then `.split()` and make it a list of each word that was in
their original sentence.
My function will also pass in a predefined list of word patterns that my
function will be watching for as a match on the exact sequence of words
appearing in the user's sentence.
Now just so this is clear, I already can use `.intersection()` to find where
the exact words are matches **but I am looking for an exact sequence of
words**.
For instance if my user inputs: `"I love hairy cats"`, and the predefined list
of key words is something like this: `["I love", "hairy cats", "I love cats",
"love hair"]`, my function should only indicate `"I love"` and `"hairy cats"`
as these two matched the specified sequence of words as they appeared in the
predefined lists.
Here is my code thus far:
def parse_text(message, keywords):
newList = []
Message = message.split()
Keywords = keywords # Keywords need to be a list type
setMessage = set(word for word in Message)
setKeywords = set(word for word in Keywords)
newList = setMessage.intersection(setKeywords)
return newList
This works perfectly so far only if my keywords list contains only single
words. My issue is when I try to make my list with multiple words to denote
the sequence.
If my user's original message is:
message = "Hello world, yes and no"
keywords = ["help", "Hello", "yes", "so"] # this works, intersec "Hello" and "yes"
keywords = ["help me", "Hello mom", "yes and no", "so"] # this does not work, just returns empty "set()"
Any ideas of how I can make adjustments to my function to check my user's
original sentence for a specific sequence of words as they appear my keyword
list?
Answer: Why use sets at all? This is a pretty straightforward string operation:
def parse_text(message, keywords):
newList = []
for keyword in keywords:
if keyword in message:
newList.append(keyword)
return newList
or, using list comprehensions for more succinctness:
def parse_text(message, keywords):
return [keyword for keyword in keywords if keyword in message]
Finally, one additional form using regular expressions that enforces complete
words:
from re import search
def parse_text(message, keywords):
newList = []
for keyword in keywords:
if search(r'\b{}\b'.format(keyword), message):
newList.append(keyword)
return newList
|
Datetime format problems
Question: I'm having problems converting a dateTime from one format to another.
Mon 13 Jun 2016 10:00
should become
13/06/2016 10:00:00
However, I'm having problems with the hours minutes & seconds (Yes I realise
the seconds are not supplied - so that might be causing formatting problems)
Here's what I've got so far:
#!/usr/bin/env python
import datetime
def convertDateString(s):
d = datetime.datetime.strptime(s, "%a %d %b %Y")
# return "{dt.day}/{dt:%m}/{dt.year}".format(dt = datetime.date(d.year, d.month, d.day)) # no time, but works fine
return "{dt.day}/{dt:%m}/{dt.year} {dt:%H}:{dt:%M}:{dt:%S}".format(dt = datetime.date(d.year, d.month, d.day))
# print convertDateString("Mon 13 Jun 2016") # works fine, but ignores time
print convertDateString("Mon 13 Jun 2016 10:00:00")
which gives ValueError: unconverted data remains: 10:00:00
Can someone please point out where am I going wrong?
Answer: I figured it out! The format string passed to `strptime` has to describe the
whole input, including the time part:
    # Mon 13 Jun 2016 10:00  # in
    # 13/06/2016 10:00:00    # out
    def convertDateString(s):
        d = datetime.datetime.strptime(s, "%a %d %b %Y %H:%M")
        return d.strftime("%d/%m/%Y %H:%M:%S")
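Quick check (the input has no seconds, so `%S` comes out as `00`):
    print convertDateString("Mon 13 Jun 2016 10:00")  # 13/06/2016 10:00:00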
|
Can't transform image into polar. Python, OpenCV
Question: I'm trying to implement ring artefact reduction algorithm using python. The
first step is to transform image from cartesian to polar. I suppose that I can
use opencv to do that. In this topic [fast Cartesian to Polar to Cartesian in
Python](http://stackoverflow.com/questions/9924135/fast-cartesian-to-polar-to-
cartesian-in-python) author managed to use LinearPolar function but it doesn't
work on my computer.
I tried the following way:
import cv
import cv2
img = cv2.imread('artifact.png', 0)
img1 = cv2.imread('artifact.png', 0)
cv.LinearPolar(cv.fromarray(img),cv.fromarray(img1),(130,110), 1 ,cv.CV_WARP_FILL_OUTLIERS)
But it returned an error: AttributeError: 'module' object has no attribute
'LinearPolar'
So is there any way to use this function?
Thanks
Answer: Did you try using `cv2.linearPolar` (note the lower-case `l` in the `cv2` API)?
<https://fossies.org/dox/opencv-3.1.0/logpolar_8py_source.html>
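If the old `cv` module is unavailable, OpenCV 3.x exposes the transform
directly on `cv2` - a sketch assuming OpenCV 3.x, with the centre from your
call and an illustrative `maxRadius`:
    import cv2

    img = cv2.imread('artifact.png', 0)
    # linearPolar(src, center, maxRadius, flags) -> dst
    polar = cv2.linearPolar(img, (130, 110), 110, cv2.WARP_FILL_OUTLIERS)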
|
Python Adding one hour to time.time()
Question: Hi, I want to add one hour to Python time.time().
My current way of doing it is:
    t = int(time.time())
    expiration_time = t + 3600
Is this considered bad for any reason? If so, is there a better way of doing
this easily?
Answer: It's not considered bad for any reason; I do it this way many times. Here is
an example:
import time
t0 = time.time()
print time.strftime("%I %M %p",time.localtime(t0))
03 31 PM
t1 = t0 + 60*60
print time.strftime("%I %M %p",time.localtime(t1))
04 31 PM
Here are other ways of doing it using 'datetime'
import datetime
t1 = datetime.datetime.now() + datetime.timedelta(hours=1)
t2 = datetime.datetime.now() + datetime.timedelta(minutes=60)
|
Python Ctypes register callback functions
Question: I ran into something very strange using Python and ctypes. I'm using Python
3.4.3. First, some background into the project:
I have compiled a custom dll from C code. I'm using ctypes to interface with
the dll. The C library is interfacing with some custom hardware. Sometimes the
hardware generates an interrupt and passes it along to the C library on the
computer. In the C API, there is a function with the prototype `void
register_callback(int addr, void (*callback)(void))`. I have an array of
callback function pointers, which are initialized to NULL. When this function
is called, the callback function pointer at index addr is set to callback,
like this: `callbacks[addr] = callback;`.
When the user programs in Python, they instantiate objects from classes that
model different hardware parts (such as a button or an RGB LED). They can then
write a custom callback function and call `button.register_callback(func)`
(assuming they have a Button object named button, of course), which calls the
register_callback function in the C library. Now, when the button is pressed
and the interrupt is generated, the C library will call the appropriate
callback function (i.e. `callbacks[addr]();`).
Now, the weirdness:
In Python, my first attempt at the register_callback method in Python looked
like this:
class Obj:
def __init__(self, name):
# Initialize stuff
def register_callback(self, func):
CB_T = ctypes.CFUNCTYPE(None)
cb_ptr = CB_T(func)
host_api.register_callback(self.addr, cb_ptr) # host_api is the loaded dll
And in main:
def cb1():
print("cb1")
def cb2():
print("cb2")
def main(argv):
# Initialization stuff
# Now create the objects and register the callbacks:
obj = Obj_module.Obj()
obj2 = Obj_module.Obj()
obj.register_callback(cb1)
obj2.register_callback(cb2)
while True:
pass
When I ran this, only "cb2" was being printed, regardless of which button I
was pressing. The REALLY weird thing is that when I switched the order in
which I registered the callbacks :
obj2.register_callback(cb2)
obj.register_callback(cb1)
only "cb1" was being printed, regardless of the button I pressed! In the C
library, I verified (by printf) that different callback function pointers were
being set and called, depending on the button, but that the same function
pointer was being passed to the C register_callback function.
I was able to fix the problem by adding a line to the register_callback
method:
def register_callback(self, func):
CB_T = ctypes.CFUNCTYPE(None)
cb_ptr = CB_T(func)
(ctypes.cast(cb_ptr, ctypes.POINTER(ctypes.c_int)))
host_api.register_callback(self.addr, cb_ptr)
Apparently, converting cb_ptr to a ctypes POINTER fixed the problem -
different function pointers were being passed in, and I successfully saw "cb1"
or "cb2" printed, depending on the button that I pressed.
My question is, WHY? Why was the same function pointer being passed in the
original code, why was it changing depending on the order that I registered
the callbacks, and why does converting cb_ptr to a ctypes POINTER ensure that
the function pointers are different?
I'm kind of a beginner at Python, but I'm much more experienced at C. Thanks
in advance for your responses.
Answer: Your `cb_ptr` is being garbage collected. From the
[documentation](https://docs.python.org/2/library/ctypes.html#callback-
functions):
> Make sure you keep references to CFUNCTYPE() objects as long as they are
> used from C code. ctypes doesn’t, and if you don’t, they may be garbage
> collected, crashing your program when a callback is made.
In this code sample if the line `ptrs.append(cb_ptr)` is commented out the
location of `cb_ptr` is the same for both `Obj` instances (on my computer).
Uncommenting the line results in two memory locations.
import ctypes
ptrs = []
class Obj:
def __init__(self):
pass
def register_callback(self, func):
CB_T = ctypes.CFUNCTYPE(None)
cb_ptr = CB_T(func)
ptrs.append(cb_ptr)
print(cb_ptr)
def cb1(): print("cb1")
def cb2(): print("cb2")
def main(argv):
obj = Obj()
obj2 = Obj()
obj.register_callback(cb1)
obj2.register_callback(cb2)
main(None)
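A tidier variant than a module-level list is to keep the reference on the
instance itself, so the callback lives exactly as long as the object that
registered it (a sketch based on the class from the question):
    def register_callback(self, func):
        CB_T = ctypes.CFUNCTYPE(None)
        self._cb_ptr = CB_T(func)  # reference survives as long as self does
        host_api.register_callback(self.addr, self._cb_ptr)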
|
Python SQLite avoid overcrowding by deleting the last item
Question: I use SQLite with Python. When I insert a new row, I want to delete the
oldest one:
conn.execute("INSERT INTO ORDERS (ORD_ID, TYPE) VALUES (?, ?)", [ord_id, type_n]);
conn.commit()
> ID ID_ORD TYPE
>
> * * *
>
> 3 136984714 0 **<-- (-1) oldest row, to be deleted**
>
> 4 136982197 1
>
> 5 136983730 1
>
> 6 136984717 0 **<-- (+1) newly inserted row**
How can this be done?
Answer: If your `ord_id` is always guaranteed to auto-increment you could do:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('create table orders (ord_id, type);')
conn.execute('insert into orders (ord_id, type) values (?,?);',(3,136984714))
conn.execute('insert into orders (ord_id, type) values (?,?);',(4,136982197))
conn.execute('insert into orders (ord_id, type) values (?,?);',(5,136983730))
conn.execute('insert into orders (ord_id, type) values (?,?);',(6,136984717))
conn.execute('delete from orders where ord_id = (select min(ord_id) from orders);')
>>> conn.execute('select * from orders').fetchall()
[(4, 136982197), (5, 136983730), (6, 136984717)]
As is mentioned in the comments you could alternately implement this as a
trigger:
conn.execute('''CREATE TRIGGER delete_from_orders
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
DELETE FROM orders WHERE ord_id =
(SELECT MIN(ORD_ID) FROM orders);
END''')
|
Python tkinter won't display diagonal lines
Question: I recently started using Arch Linux, and after transferring a python file from
my mac to the Linux, and running it, it did not work. This is pretty common,
but, the way in which it didn't work was very strange. The program is one that
graphs equations of lines, but on Linux, the tkinter Canvas object's
create_line method no longer displays diagonal lines. For example, graphing
y=x wouldn't show anything but it would say it successfully graphed the line.
I tried drawing a diagonal line (from (0, 0) to (20, 20)) outside of my
program, and it doesn't work there either. I tried adjusting the width, which
didn't change anything. I'm using i3wm, and tried lxde, which didn't change
anything. I have the latest version of python3 installed with pacman, and I
had to install tk separately. Is there any way to fix this?
Update: It doesn't work with python3 installed from the website either
Update: Works in a virtual machine running arch linux, so it may have
something to do with the drivers or hardware
Edit: here is an example
from tkinter import *
root = Tk()
canvas = Canvas(master=root)
canvas.pack()
canvas.create_line(0, 0, 20, 20)
returns 1, doesn't do anything else. Meanwhile:
canvas.create_line(0, 20, 20, 20) # horizontal line
returns 1, 2, 3... (depends on how many things you have drawn) and draws the
line.
Answer: I was able to fix it by installing the correct driver, in my case xf86-video-
intel, and rebooting. I think it was just a newbie mistake, but its still sort
of interesting that the missing driver only affected diagonal lines in
tkinter.
|
How to get pyPdf to work with os or glob
Question: My goal is to read a directory with several PDF files and return the number of
pages in each file using Python. I'm trying to use the pyPdf library but it
fails.
If I do this:
from pyPdf import PdfFileReader
testFile = "C:\\path\\file.pdf"
pdfFile = PdfFileReader(file(testFile, 'rb'))
print pdfFile.getNumPages()
I'll get a result
If I do this, it fails:
pdfList = []
for root, dirs, files in os.walk("C:\\path"):
for file in files:
pdfList.append(os.path.join(root, file))
for item in pdfList:
targetPdf = PdfFileReader(file(item,'rb'))
numPages = targetPdf.getNumPages()
print item, numPages
This always results in:
TypeError: 'str' object is not callable
If I try to recreate a pyPdf object manually, I get the same thing.
What am I doing wrong?
Answer: The issue is due to using the name `file` as a variable. In the first `for`
loop you bind `file` to each file name, shadowing the built-in `file` type;
the later statement `targetPdf = PdfFileReader(file(item,'rb'))` then tries to
call that string, hence `TypeError: 'str' object is not callable`.
Try changing the variable name in the first `for` loop from `file` to
`fileName`. Hope that helps.
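A sketch of the fixed loops with the variable renamed, so the built-in `file`
stays callable (pyPdf and Python 2, as in the question):
    pdfList = []
    for root, dirs, files in os.walk("C:\\path"):
        for fileName in files:
            pdfList.append(os.path.join(root, fileName))

    for item in pdfList:
        targetPdf = PdfFileReader(file(item, 'rb'))
        print item, targetPdf.getNumPages()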
|
Removing a row from CSV with python if data wasn't recorded in a column
Question: I'm trying to import a batch of CSV's into PostgreSQL and constantly run into
an issue with missing data:
> psycopg2.DataError: missing data for column "column_name" CONTEXT:
> COPY table_name, line _where ever in the CSV that data wasn't
> recorded, and here are data values up to the missing column_.
There is no way to get the complete set of data written to the row at times,
and I have to deal with the files as is. I am trying to figure a way to remove
the row if data wasn't recorded into any column. Here's what I have:
file_list = glob.glob(path)
for f in file_list:
filename = os.path.basename(f) #get the file name
arc_csv = arc_path + filename #path for revised copy of CSV
with open(f, 'r') as inp, open(arc_csv, 'wb') as out:
writer = csv.writer(out)
for line in csv.reader(inp):
if "" not in line: #if the row doesn't have any empty fields
writer.writerow(line)
cursor.execute("COPY table_name FROM %s WITH CSV HEADER DELIMITER ','",(arc_csv,))
Answer: Unfortunately, you _cannot parameterize identifiers or the COPY file name_. Use
string formatting, but make sure to validate/escape the value properly, and
note that the file name must appear quoted in the SQL:
    cursor.execute("COPY table_name FROM '{path}' WITH CSV HEADER DELIMITER ','".format(path=arc_csv))
|
Error while using w, h = template.shape[::-1]
Question: I am getting an error:
w, h = template.shape[::-1]
AttributeError: 'NoneType' object has no attribute 'shape'
My code:
import cv2
import numpy as np
img_rgb = cv2.imread('opencv-template-matching-python-tutorial.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('opencv-template-for-matching.jpg',0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where( res >= threshold)
for pt in zip(*loc[::-1]):
cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,255,255), 2)
cv2.imshow('Detected',img_rgb)
How can I fix this problem?
Answer: I'm not too familiar with `opencv`, but that error means that
`cv2.imread('opencv-template-for-matching.jpg',0)` fails to read that file and
thus returns `None`.
Make sure that this file exists and in the supported format. From `imread`'s
[documentation](http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=imread#cv2.imread):
> The function imread loads an image from the specified file and returns it.
> **If the image cannot be read (because of missing file, improper
> permissions, unsupported or invalid format), the function returns an empty
> matrix ( Mat::data==NULL)**. Currently, the following file formats are
> supported: Windows bitmaps - *.bmp, *.dib (always supported) JPEG files -
> *.jpeg, *.jpg, *.jpe (see the Notes section) JPEG 2000 files - *.jp2 (see
> the Notes section) Portable Network Graphics - *.png (see the Notes section)
> Portable image format - *.pbm, *.pgm, *.ppm (always supported) Sun rasters -
> *.sr, *.ras (always supported) TIFF files - *.tiff, *.tif (see the Notes
> section)
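A quick guard right after loading makes the failure explicit instead of
surfacing later as an `AttributeError` (a sketch using the file name from the
question):
    template = cv2.imread('opencv-template-for-matching.jpg', 0)
    if template is None:
        raise IOError('could not read template image - check path and format')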
|
Python flask : No module named requests
Question: I'm having trouble using `requests` module in my flask app. I have two files
`rest_server.py` and `independent.py` at same directory level. The
`independent.py` uses `requests` module and it executes correctly if I
directly run it. But when I import `independent.py` in `rest_server.py` it
shows the following error:
    import independent
      File "/home/satwik/Desktop/angelhack/independent.py", line 5, in <module>
        import requests
    ImportError: No module named requests
I've tried `pip install requests` and it shows requirement already satisfied.
Also I've tried to import `requests` in `rest_server.py` and found it to
execute correctly too. Here's my code
**independent.py**
    import json
    import os
    import sys
    import requests
    sys.path.append('/home/satwik/Desktop/angelhack/comprehensive_search')
**rest_server.py**
    #!flask/bin/python
    import six
    from flask import Flask, jsonify, abort, request, make_response, url_for
    from flask.ext.httpauth import HTTPBasicAuth
    import independent
    app = Flask(__name__, static_url_path="")
How should I fix this?
Answer: # Why you get the "no module named ..." error
Your two files have one big difference: rest_server.py includes a _shebang_
line, while independent.py doesn't.
When you say you _directly execute_ the file `independent.py`, you type
`python independent.py` (I'm assuming here, because you didn't specify that).
That means you are executing with the system python interpreter, which will
look for modules installed at system level. Systemwide you have the _requests_
module installed, via `pip install requests`, so python finds it, imports the
thing and happily runs your script.
When you execute the file `rest_server.py`, instead, you can do so calling the
script's name: `./rest_server.py` (assuming correct permissions settings). In
this case, the first line `#!flask/bin/python` (the so called _shebang line_)
instructs to use a different python interpreter, the one contained in the
`flask` folder, which I assume contains a virtual environment.
You get the `no module named requests` because **that module is not installed
inside the _flask_ virtual environment**.
# How you can fix the error
To fix the problem, just **install the _requests_ module inside the virtual
environment**.
You first activate the virtual environment and then install the module you
need:
$ source flask/bin/activate
$ pip install requests
Then you can try execute `./rest_server.py` again and the _requests_ module
should be properly imported.
For more on the shebang line: <https://en.wikipedia.org/wiki/Shebang_(Unix)>
For more on virtual environments: <https://pypi.python.org/pypi/virtualenv>
|
Tensorflow feed_dict with tensorflow.python.framework.errors.InvalidArgumentError
Question: my example is like the following:
import tensorflow as tf
import numpy as np
batch_size = 10
real_data = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
for i in range(batch_size):
real_data[i] = i
print np.shape(real_data)
holder = tf.placeholder(tf.int32, shape=[None, 1])
with tf.Session() as sess:
feed_dict = {
holder: real_data
}
sess.run([], feed_dict=feed_dict)
The out put is like following:
/home/att/anaconda2/bin/python /home/att/文档/code/justtest/ates.py
(10, 1)
Traceback (most recent call last):
File "/home/att/文档/code/justtest/ates.py", line 17, in <module>
sess.run([], feed_dict=feed_dict)
File "/home/att/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/ session.py", line 340, in run
run_metadata_ptr)
File "/home/att/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/ session.py", line 564, in _run
feed_dict_string, options, run_metadata)
File "/home/att/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/ session.py", line 637, in _do_run
target_list, options, run_metadata)
File "/home/att/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/ session.py", line 659, in _do_call
e.code)
tensorflow.python.framework.errors.InvalidArgumentError
Process finished with exit code 1
What confuses me is that the data shape is exactly the same as the
placeholder's, both (10, 1), so why is this error still raised? A further
question: when I feed data to the placeholder, what should the data look like
(data type and shape)?
Any help will be appreciated :)
Answer: You cannot use `sess.run([])`, you need to provide a graph node inside like:
sess.run([some_node], feed_dict=feed_dict)
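For example, a minimal sketch that builds an actual node from the placeholder
and evaluates it (the `doubled` node here is illustrative):
    holder = tf.placeholder(tf.int32, shape=[None, 1])
    doubled = holder * 2  # any graph node will do
    with tf.Session() as sess:
        result = sess.run([doubled], feed_dict={holder: real_data})
        print result[0].shape  # (10, 1)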
|
Python loop through list and return "out of sequence" values
Question: Consider this list:
dates = [
('2015-02-03', 'name1'),
('2015-02-04', 'nameg'),
('2015-02-04', 'name5'),
('2015-02-05', 'nameh'),
('1929-03-12', 'name4'),
('2023-07-01', 'name7'),
('2015-02-07', 'name0'),
('2015-02-08', 'nameh'),
('2015-02-15', 'namex'),
('2015-02-09', 'namew'),
('1980-12-23', 'name2'),
('2015-02-12', 'namen'),
('2015-02-13', 'named'),
]
How can I identify those dates that are out of sequence. I don't care if they
repeat, or skip, I just need the ones way out of line. Ie, I should get back:
('1929-03-12', 'name4'),
('2023-07-01', 'name7'),
('2015-02-15', 'namex'),
('1980-12-23', 'name2'),
Namex is less obvious, but it's not in the general order of the list.
My simplistic start (which I have deleted to simplify the question) is
obviously woefully incomplete.
* * *
**Update** : Based on the comments, it seems an implementation of the [Longest
Increase
Subsequence](https://en.wikipedia.org/wiki/Longest_increasing_subsequence)
(LIS) will get me started, a python implementation found here:
* <http://stackoverflow.com/a/9832414/1061836>
* [How to determine the longest increasing subsequence using dynamic programming?](http://stackoverflow.com/questions/2631726/how-to-determine-the-longest-increasing-subsequence-using-dynamic-programming)
* <https://rosettacode.org/wiki/Longest_increasing_subsequence#Python>
* <http://codereview.stackexchange.com/questions/10230/python-implementation-of-the-longest-increasing-subsequence>
Seems once I get the LIS, I can compare it to the original list and see where
the gaps are... Fascinating. SO is the hive-mind of awesomeness.
Answer: ## Short answer, general solution
Using my [answer to the "Longest increasing subsequence"
question](http://stackoverflow.com/a/38337443/1916449), this could be
implemented simply as:
def out_of_sequence(seq):
indices = set(longest_subsequence(seq, 'weak', key=lambda x: x[0], index=True))
return [e for i, e in enumerate(seq) if i not in indices]
## Longer answer, specific solution
Based on the [question at Code
Review](http://codereview.stackexchange.com/q/10230/110989) and a [question
about non-decreasing sequences](http://stackoverflow.com/a/21718032/1916449)
(since that's what you're after), here's a solution to your problem:
from bisect import bisect_right
from operator import itemgetter
def out_of_sequence(seq, key = None):
if key is None: key = lambda x: x
lastoflength = [0] # end position of subsequence with given length
predecessor = [None] # penultimate element of l.i.s. ending at given position
for i in range(1, len(seq)):
# find length j of subsequence that seq[i] can extend
j = bisect_right([key(seq[k]) for k in lastoflength], key(seq[i]))
# update old subsequence or extend the longest
try: lastoflength[j] = i
except: lastoflength.append(i)
# record element preceding seq[i] in the subsequence for backtracking
predecessor.append(lastoflength[j-1] if j > 0 else None)
indices = set()
i = lastoflength[-1]
while i is not None:
indices.add(i)
i = predecessor[i]
return [e for i, e in enumerate(seq) if i not in indices]
print(*out_of_sequence(dates, itemgetter(0)), sep='\n')
Outputs:
('1929-03-12', 'name4')
('2023-07-01', 'name7')
('2015-02-15', 'namex')
('1980-12-23', 'name2')
* * *
The `key` parameter (inspired by
[`sorted`](https://docs.python.org/3.5/library/functions.html#sorted) builtin)
specifies a function of one argument that is used to extract a comparison key
from each list element. The default value is `None` so the caller has a
convenient way of saying "I want to compare the elements directly". If it is
set to `None` we use `lambda x: x` as an [identity
function](https://en.wikipedia.org/wiki/Identity_function), so the elements
are not changed in any way before the comparison.
In your case, you want to use the dates as keys for comparison, so we use
[`itemgetter(0)`](https://docs.python.org/3/library/operator.html#operator.itemgetter)
as `key`. And `itemgetter(1)` would use the names as `key`, see:
>>> print(*map(itemgetter(1), dates))
name1 nameg name5 nameh name4 name7 name0 nameh namex namew name2 namen named
Using `itemgetter(k)` is equivalent to `lambda x: x[k]`:
>>> print(*map(lambda x: x[1], dates))
name1 nameg name5 nameh name4 name7 name0 nameh namex namew name2 namen named
Using it with `map` is equivalent to a generator expression:
>>> print(*(x[1] for x in dates))
name1 nameg name5 nameh name4 name7 name0 nameh namex namew name2 namen named
But if we used a similar list comprehension to pass the sequence to
`out_of_sequence` we would get a different result from expected:
>>> print(*out_of_sequence([x[0] for x in dates]), sep='\n')
1929-03-12
2023-07-01
2015-02-15
1980-12-23
Likewise, if we compare the date-name pairs directly we get wrong results
(because `'nameg'` compares greater to `'name5'`):
>>> print(*out_of_sequence(dates), sep='\n')
('2015-02-04', 'nameg')
('1929-03-12', 'name4')
('2023-07-01', 'name7')
('2015-02-15', 'namex')
('1980-12-23', 'name2')
Because we want to return dates and names, and we want to order by dates only,
we need to pass a function that extracts dates using the `key` parameter.
An alternative would be to get rid of `key` and just write:
j = bisect_right([seq[k][0] for k in lastoflength], seq[i][0])
But since this is stackoverflow, maybe one day another person will come by
this answer and will need some other key extraction, therefore I decided to
post the more general solution here.
|
Custom Python module not importing
Question: I can't seem to get past this and do not quite understand what is happening. I
have a directory with two class files in it. Using the REPL from within that
directory I can import both files and execute their logic. From their parent
directory which main() is ran from however, only one class file is visible,
pagetable.
The project structure is currently,
project/
src/
__init__.py # empty
pagingsimulation.py # main() imports memory
process.py
memory/
__init__.py # imports pagetable.py
pagetable.py # visible
page.py # error
pagingsimulation.py was able to import memory/ and instantiate
pagetable.PageTable, but once I created page.py and had pagetable.py import
page.py, pagingsimulation.py now throws this error upon execution.
Traceback (most recent call last):
File "pagingsimulator.py", line 5, in <module>
import memory
File "src/memory/__init__.py", line 1, in <module>
from .pagetable import PageTable
File "src/memory/pagetable.py", line 1, in <module>
import page
ImportError: No module named 'page'
within memory/__init__.py I currently have,
from .pagetable import PageTable
...but have tried many other variations without success.
I've tried multiple approaches and have researched this for awhile and perhaps
it is something I just cannot see at this point. What is preventing my custom
modules from importing each other when ran from main()?
Answer: The solution, as I suspected, was a pathing issue - more specifically, how the
modules interact once imported into the parent file, pagingsimulation.py.
The issue had nothing to do with `__init__.py`, but rather with how I was
accessing page.py from within pagetable.py.
pagingsimulator.py uses,
    import memory
And within memory, the `__init__.py` file has,
    from .pagetable import PageTable
For PageTable to access Page, the import statement had to be,
    from memory import Page
It seems a bit funky to me and after failing so many times I would like to say
there is a cleaner way to do this, but for the time being I'll take my win and
hope that leaving this question here benefits someone else as I was unable to
find something similar during my search.
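For what it's worth, an explicit relative import inside the package expresses
the same thing and keeps pagetable.py independent of the package's name on
sys.path (a sketch, assuming page.py defines a Page class):
    # inside memory/pagetable.py
    from .page import Page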
|
python logging: multiple loggers error
Question: I have objects called Job, each of which has its own logger (each Job needs
to have a log file, represented by logging.getLogger()).
The problem is I create thousands of Jobs (~4000) and they all want to create
a logger.
Traceback (most recent call last):
File "/u/lib/btool/Job.py", line 151, in __init__
File "/usr/lib/python2.7/logging/__init__.py", line 911, in __init__
File "/usr/lib/python2.7/logging/__init__.py", line 936, in _open
IOError: [Errno 24] Too many open files: '/x/zooland/20160710-032802.log'
Is there way to deal with multiple loggers?
Answer: Here's a custom file handler that stores the log message and then closes the
file.
import logging

class MyFileHandler(logging.Handler):
    def __init__(self, filename):
        self.filename = filename
        super(MyFileHandler, self).__init__()  # Python 2 compatible form

    def emit(self, record):
        log_text = self.format(record)
        try:
            # Open, append and close on every record, so no file
            # descriptor stays open between log calls.
            fh = open(self.filename, "a")
            fh.write(log_text + "\n")
            fh.close()
        except Exception:
            self.handleError(record)

logger = logging.getLogger("job")
handler = MyFileHandler("file-1")
logger.addHandler(handler)
logger.error("hola")
|
Extracting required Variables from Event Log file using Python
Question: [![enter image description
here](http://i.stack.imgur.com/zSGWk.png)](http://i.stack.imgur.com/zSGWk.png)
sample first row of event log file ,here i have successfully extracted
evrything apart from last key value pair which is attribute-
{"event_type":"ActionClicked","event_timestamp":1451583172592,"arrival_timestamp":1451608731845,"event_version":"3.0",
"application":{"app_id":"7ffa58dab3c646cea642e961ff8a8070","cognito_identity_pool_id":"us-east-1:
4d9cf803-0487-44ec-be27-1e160d15df74","package_name":"com.think.vito","sdk":{"name":"aws-sdk-android","version":"2.2.2"}
,"title":"Vito","version_name":"1.0.2.1","version_code":"3"},"client":{"client_id":"438b152e-5b7c-4e99-9216-831fc15b0c07",
"cognito_id":"us-east-1:448efb89-f382-4975-a1a1-dd8a79e1dd0c"},"device":{"locale":{"code":"en_GB","country":"GB",
"language":"en"},"make":"samsung","model":"GT-S5312","platform":{"name":"ANDROID","version":"4.1.2"}},
"session":{"session_id":"c15b0c07-20151231-173052586","start_timestamp":1451583052586},"attributes":{"OfferID":"20186",
"Category":"40000","CustomerID":"304"},"metrics":{}}
Hello everyone. I am trying to extract the content from the event log file
shown above. Per the requirements I have to fetch `CustomerID`, `OfferID` and
`Category`; these are the important variables I need to extract from this
event log file. It is a CSV-formatted file. I tried a regular expression, but
it isn't working, because the format of every column is different: as you can
see, the first row has `Category`, `CustomerID` and `OfferID`, while another
row may be totally blank, in which case a regular expression won't work. Apart
from that, we have to consider all possible conditions - there are 14000
samples in the event log file.
Answer: **Edit**
The data, after your edit, now appears to be JSON data. You can still use
`literal_eval` as below, or you could use the
[`json`](https://docs.python.org/3/library/json.html#module-json) module:
import json
with open('event.log') as events:
for line in events:
event = json.loads(line)
# process event dictionary
To access the `CustomerID`, `OfferID`, `Category` etc. you need to access the
nested dictionary associated with the key `'attributes'` in the `event`
dictionary:
print(event['attributes']['CustomerID'])
print(event['attributes']['OfferID'])
print(event['attributes']['Category'])
If it is the case that some keys could be missing use `dict.get()` instead:
print(event['attributes'].get('CustomerID'))
print(event['attributes'].get('OfferID'))
print(event['attributes'].get('Category'))
Now you will get `None` if the key is missing.
You can extend this principle to access other items with the dictionary.
If I understand your question you also want to create a CSV file containing
the extracted fields. You use the extracted values with `csv.DictWriter` like
this:
import csv
with open('event.log') as events, open('output.csv', 'w') as csv_file:
fields = ['CustomerID', 'OfferID', 'Category']
writer = csv.DictWriter(csv_file, fields)
writer.writeheader()
for line in events:
event = json.loads(line)
writer.writerow(event['attributes'])
`DictWriter` will simply leave fields empty when the dictionary is missing
keys.
* * *
**Original answer** The data is not in CSV format, it appears to contain
Python dictionary strings. These can be parsed into Python dictionaries using
[`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval):
from ast import literal_eval
with open('event.log') as events:
for line in events:
event = literal_eval(line)
# process event dictionary
|
Python - Remove duplicate pandas data frames from dictionary
Question: I have a dictionary containing pandas data frames that have the same column
names, and I'd like to remove duplicate data frames with identical values and
row ids.
Let's assume this is my dictionary of data frames:
>>> dd[0]
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00
>>> dd[1]
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00
>>> dd[2]
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00
>>> dd[3]
Origin Destination Time
1 New York Los Angeles 2016-03-28 04:00:00
2 Los Angeles Boston 2016-03-28 06:00:00
3 Boston New York 2016-03-28 08:00:00
>>> dd[4]
Origin Destination Time
1 New York Los Angeles 2016-03-28 04:00:00
2 Los Angeles Boston 2016-03-28 06:00:00
3 Boston New York 2016-03-28 08:00:00
>>> dd[5]
Origin Destination Time
3 Boston New York 2016-03-28 08:00:00
4 New York Los Angeles 2016-03-28 12:00:00
>>> dd[6]
Origin Destination Time
3 Boston New York 2016-03-28 08:00:00
4 New York Los Angeles 2016-03-28 12:00:00
I want the result to look like this:
>>> dd[0]
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00
>>> dd[3]
Origin Destination Time
1 New York Los Angeles 2016-03-28 04:00:00
2 Los Angeles Boston 2016-03-28 06:00:00
3 Boston New York 2016-03-28 08:00:00
>>> dd[5]
Origin Destination Time
3 Boston New York 2016-03-28 08:00:00
4 New York Los Angeles 2016-03-28 12:00:00
This is my code leading up to this above-mentioned example:
# Load data as pandas data frame
data = pd.read_csv("website.txt", names=["Time", "Origin", "Destination"])
data["Time"] = pd.to_datetime(data["Time"], infer_datetime_format=True)
# Reverse data frame by index to loop backwards
data = data.reindex(index=data.index[::-1])
dd = {}
for i, e in data.iterrows():
    dd[i] = data[(data['Time'] > e['Time'] - pd.Timedelta('4 hours')) & (data['Time'] < e['Time'] + pd.Timedelta('4 hours'))]
Original Text:
{"Time": "2016-03-28T02:00:00Z", "Origin": "New York", "Destination": "Boston"}
{"Time": "2016-03-28T02:00:00Z", "Origin": "New York", "Destination": "Boston"}
{"Time": "2016-03-28T02:00:00Z", "Origin": "New York", "Destination": "Boston"}
{"Time": "2016-03-28T04:00:00Z", "Origin": "New York", "Destination": "Los Angeles"}
{"Time": "2016-03-28T04:00:00Z", "Origin": "New York", "Destination": "Los Angeles"}
{"Time": "2016-03-28T04:00:00Z", "Origin": "New York", "Destination": "Los Angeles"}
{"Time": "2016-03-28T06:00:00Z", "Origin": "Boston", "Destination": "Los Angeles"}
{"Time": "2016-03-28T06:00:00Z", "Origin": "Boston", "Destination": "Los Angeles"}
{"Time": "2016-03-28T06:00:00Z", "Origin": "Boston", "Destination": "Los Angeles"}
{"Time": "2016-03-28T08:00:00Z", "Origin": "Boston", "Destination": "New York"}
{"Time": "2016-03-28T08:00:00Z", "Origin": "Boston", "Destination": "New York"}
{"Time": "2016-03-28T12:00:00Z", "Origin": "New York", "Destination": "Los Angeles"}
{"Time": "2016-03-28T12:00:00Z", "Origin": "New York", "Destination": "Los Angeles"}
Answer: ### One liner
{k: v.unstack() for k, v in pd.DataFrame({k: v.stack() for k, v in dd.iteritems()}).T.drop_duplicates().iterrows()}
### Explained version
# iterate through key, value pairs of dictionary,
# stacking each dataframe into a series so that we
# can pass the resulting dataframe into the pd.DataFrame constructor.
df1 = pd.DataFrame({k: v.stack() for k, v in dd.iteritems()})
# Each column is now one key, value pair from the original dictionary
# Transpose and drop duplicates
df2 = df1.T.drop_duplicates()
# reverse the original stacking and convert back to dictionary
# we could have used df2.T.iteritems() but df2.iterrows() took
# one fewer operations and fewer characters to type.
dd_ = {k: v.unstack() for k, v in df2.iterrows()}
for k, v in dd_.iteritems():
print 'key {}:'.format(k)
print v
print '-' * 10
key 0:
a b
0 1 2
1 3 4
----------
key 2:
a b
0 2 3
1 4 5
----------
* * *
### Setup to get same results as me (copy and paste this)
from StringIO import StringIO
import pandas as pd
text0 = """ Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00"""
text1 = """ Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00"""
text2 = """ Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Los Angeles Boston 2016-03-28 06:00:00"""
dd = {}
dd[0] = pd.read_csv(StringIO(text0), sep='\s{2,}', index_col=0, engine='python')
dd[0].Time = pd.to_datetime(dd[0].Time)
dd[1] = pd.read_csv(StringIO(text1), sep='\s{2,}', index_col=0, engine='python')
dd[1].Time = pd.to_datetime(dd[1].Time)
dd[2] = pd.read_csv(StringIO(text2), sep='\s{2,}', index_col=0, engine='python')
dd[2].Time = pd.to_datetime(dd[2].Time)
# Then run solutions above:
df1 = pd.DataFrame({k: v.stack() for k, v in dd.iteritems()})
df2 = df1.T.drop_duplicates()
dd_ = {k: v.unstack() for k, v in df2.iterrows()}
for k, v in dd_.iteritems():
print 'key {}:'.format(k)
print v
print '-' * 10
You should get this:
key 0:
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Boston Los Angeles 2016-03-28 06:00:00
----------
key 2:
Origin Destination Time
0 New York Boston 2016-03-28 02:00:00
1 New York Los Angeles 2016-03-28 04:00:00
2 Los Angeles Boston 2016-03-28 06:00:00
----------
* * *
### Version
import sys
import pandas as pd
print sys.version
print pd.__version__
2.7.11 |Anaconda custom (x86_64)| (default, Dec 6 2015, 18:57:58)
[GCC 4.2.1 (Apple Inc. build 5577)]
0.18.1
|
OpenCV - VideoCapture(filename) works in Java but not in Python (Windows 7)
Question: I've been trying to open a video file using OpenCV and process its frames. I
have both an avi file and an mp4 file; the mp4 file works well in Java, but in
Python (where I really need it...) it doesn't work (I keep getting None from
videocapture.read()).
Any ideas what this can be? How can it be solved?
EDIT: Here's the code I have:
import cv2
video_capture = cv2.VideoCapture('myfile.mp4')
video_capture.set(propId=cv2.cv.CV_CAP_PROP_FRAME_WIDTH, value=1280.0)
video_capture.set(propId=cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, value=720.0)
ret, frame = video_capture.read()
if frame is not None:
# processing code...never reaches here
Thanks.
Answer: Check [this](http://stackoverflow.com/questions/16374633/opencv-videocapture-
cannot-read-video-in-python-but-able-in-vs11) question and the solution
provided by [this](http://stackoverflow.com/a/11703998/410487) answer.
Maybe it could help.
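One thing worth checking first is whether the capture opened at all - on
Windows this often comes down to the `opencv_ffmpeg*.dll` not being on the
path, which is what the linked answer addresses. A quick sketch:
    video_capture = cv2.VideoCapture('myfile.mp4')
    if not video_capture.isOpened():
        print 'could not open video - check codecs / opencv_ffmpeg DLL'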
|
Python selenium drop down menu click
Question: I want to select an option from a drop-down menu; for this I use:
br.find_element_by_xpath("//*[@id='adyen-encrypted-form']/fieldset/div[3]/div[2]/div/div/div/div/div[2]/div/ul/li[5]/span").click()
To select option month 4 but when i do that pyhton return error message :
> selenium.common.exceptions.ElementNotVisibleException: Message: element not
> visible (Session info: chrome=51.0.2704.103) (Driver info:
> chromedriver=2.22.397929
> (fb72fb249a903a0b1041ea71eb4c8b3fa0d9be5a),platform=Mac OS X 10.11.5 x86_64)
That is the HTML code:
</div>
<div class="form-row exp-date clearfix fancyform">
<div class="formfield expired-label monthcaption">
<label>Date d'expiration <span>*</span></label>
</div>
<div class="formfield month">
<div class="value value-select">
<select class="selectbox required" id="dwfrm_adyenencrypted_expiryMonth" data-missing-error="Veuillez sélectionner le mois d'expiration" data-parse-error="Ce contenu est invalide" data-range-error="Ce contenu est trop long ou trop court" data-value-error="Cette date d'expiration est invalide" pattern="^(:?0[1-9]|1[0-2])$" required="required" >
<option class="selectoption" label="Mois" value="">Mois</option>
<option class="selectoption" label="01" value="01">01</option>
<option class="selectoption" label="02" value="02">02</option>
<option class="selectoption" label="03" value="03">03</option>
<option class="selectoption" label="04" value="04">04</option>
<option class="selectoption" label="05" value="05">05</option>
<option class="selectoption" label="06" value="06">06</option>
<option class="selectoption" label="07" value="07">07</option>
<option class="selectoption" label="08" value="08">08</option>
<option class="selectoption" label="09" value="09">09</option>
<option class="selectoption" label="10" value="10">10</option>
<option class="selectoption" label="11" value="11">11</option>
<option class="selectoption" label="12" value="12">12</option>
</select>
What is wrong? I know Selenium can't find the element, but I don't know why.
Is the XPath wrong? Do I need to use another method to find the element?
Thanks for any answers.
Answer: You should use `Select()` to select an option from drop down as below :-
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
wait = WebDriverWait(driver, 10)
element = wait.until(EC.visibility_of_element_located((By.ID, "dwfrm_adyenencrypted_expiryMonth")))
select = Select(element)
select.select_by_value("04")
**Edited** :- If unfortunately above does not work you can also try using
`.execute_script()` as below :-
wait = WebDriverWait(driver, 10)
element = wait.until(EC.presence_of_element_located((By.ID, "dwfrm_adyenencrypted_expiryMonth")))
driver.execute_script("var select = arguments[0]; for(var i = 0; i < select.options.length; i++){ if(select.options[i].value == arguments[1]){ select.options[i].selected = true; } }", element, "04")
Hope it will work...:)
|
XLS with formula in more than one cells within a column with Python
Question: After a long day playing with lots of variants I was left with this code:
from xlrd import open_workbook
from xlwt import Workbook, Formula
from xlutils.copy import copy
rb = open_workbook("test.xls")
wb = copy(rb)
s = wb.get_sheet(0)
s.write(2,4, Formula('D3-B3') )
wb.save('test.xls')
This works to edit an XLS file and allowed me to enter a formula in a cell.
Now I'm stuck on how to put a formula in more than one cell of a column,
continuing down the column and using the data from the cells in each row -
like D3-B3, but with the row number changing in each cell to match its row.
Answer: With a simple loop over the rows (xlwt's `write` takes 0-based `(row, col)`
indices, while the formula's row references are 1-based):
    s = wb.get_sheet(0)
    last_row = 10  # change to the 0-based index of your last required row
    for i in range(2, last_row + 1):
        s.write(i, 4, Formula('D{row}-B{row}'.format(row=i + 1)))
    wb.save('test.xls')
|
How to count rows not values in python pandas?
Question: I would like to group DataFrame by some field like
student_data.groupby(['passed'])
and then count number of rows inside each group.
I know how to count values like
student_data.groupby(['passed'])['passed'].count()
or
student_data.groupby(['passed']).agg({'passed': 'count'})
but this will [exclude empties by default](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.count.html). How can I count all rows
in each group?
I found I can count rows in entire DataFrame with
len(student_data.index)
but can't find any `index` field in `GroupBy` object or something.
Answer: You need [`value_counts`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.value_counts.html) with parameter
`dropna=False`:
import pandas as pd
import numpy as np
student_data = pd.DataFrame({'passed':[1,1,2,2,2,np.nan,np.nan]})
print(student_data)
passed
0 1.0
1 1.0
2 2.0
3 2.0
4 2.0
5 NaN
6 NaN
print (student_data['passed'].value_counts(dropna=False))
2.0 3
1.0 2
NaN 2
Name: passed, dtype: int64
|
Can't stop the program repeating using "while" loop... Python
Question: Here's what I have so far... if you run the module and choose to play, it
simply repeats the dice throw infinitely. Help?
answer=input("Would you like to play? Answer Y/N: ")
while answer == "Y" or answer == "y" or answer == "yes":
import random
die=random.randint(1,6)
dieTwo=random.randint(1,6)
if die== 1:
print("XXXXXXX")
print("X X")
print("X O X")
print("X X")
print("XXXXXXX")
print("You rolled a one...")
if die== 2:
print("XXXXXXX")
print("X O X")
print("X X")
print("X O X")
print("XXXXXXX")
print("You rolled a two...")
if die== 3:
print("XXXXXXX")
print("X O X")
print("X O X")
print("X O X")
print("XXXXXXX")
print("You rolled a three...")
if die== 4:
print("XXXXXXX")
print("X O O X")
print("X X")
print("X O O X")
print("XXXXXXX")
print("You rolled a four...")
if die== 5:
print("XXXXXXX")
print("X O O X")
print("X O X")
print("X O O X")
print("XXXXXXX")
print("You rolled a five...")
if die== 6:
print("XXXXXXX")
print("X O O X")
print("X O O X")
print("X O O X")
print("XXXXXXX")
print("You rolled a six...")
if dieTwo==1:
print("XXXXXXX")
print("X X")
print("X O X")
print("X X")
print("XXXXXXX")
print("and a one")
if dieTwo==2:
print("XXXXXXX")
print("X O X")
print("X X")
print("X O X")
print("XXXXXXX")
print("and a two")
if dieTwo==3:
print("XXXXXXX")
print("X O X")
print("X O X")
print("X O X")
print("XXXXXXX")
print("and a three")
if dieTwo==4:
print("XXXXXXX")
print("X O O X")
print("X X")
print("X O O X")
print("XXXXXXX")
print("and a four")
if dieTwo==5:
print("XXXXXXX")
print("X O O X")
print("X O X")
print("X O O X")
print("XXXXXXX")
print("and a five")
if dieTwo==6:
print("XXXXXXX")
print("X O O X")
print("X O O X")
print("X O O X")
print("XXXXXXX")
print("and a six")
print("")
if answer=="N" or answer=="n" or answer=="no":
print("Thank you for playing.")
Answer: Why would you expect this to be anything but an infinite loop? You have a
`while` loop that checks your condition, and that condition will always be
true because you never change `answer` inside the loop.
You could also make a function for printing your dice, and instead of chaining
`or` you can use `answer in ("Y", "y", "yes")` if you want to be terser in
places where you have conditions like this.
|
Flask SqlAlchemy MySQL connection timed out due to QueuePool overflow limit
Question: Please, I need help with the following error, which I get on the 16th database
connection. None of the other answers on Stack Overflow seem to work:
QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30
Backend configuration:
* Python 2.6.9
* Flask 0.10.1
* Flask-SQLAlchemy 2.1
* Mysql-connector-python 1.0.12
* Mysql 5.6.27
Database Setup:
connection_str = 'mysql+mysqlconnector://%s:%s@%s:%s/%s' % (config["DATABASE_USER"], config["DATABASE_PASSWORD"], \
config["DATABASE_HOST"], config["DATABASE_PORT"], \
config["DATABASE_SCHEMA1"])
engine = create_engine(connection_str, convert_unicode=True, pool_recycle=config["DATABASE_POOL_RECYCLE"])
db_session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
import application_package.models
Base.metadata.create_all(bind=engine)
@app.teardown_appcontext
def shutdown_session(exception=None):
db_session.remove()
Answer: I've realized the problem is that I created a separate thread pool with threads
that weren't terminating and were keeping all my database connections open
even after the response had been returned to the client. This was a bad hack
and a terrible idea. I intend to get rid of this thread pool and use Celery to
schedule asynchronous tasks instead.
|
Python Multiprocessing outputting entire program
Question: I don't normally ask questions on the internet, nor am I a very good
programmer, but I have been struggling with this problem for a while and I
can't fathom why it doesn't work. I'm trying to do some maths that I thought I
could split across multiple processes; the code below shows my attempt to
output the answer each worker comes to when it is done. I already realise that
it is fairly inefficient, what with workers having to wait for others and
surely many other problems, but I just want to get this version working. Any
help is greatly appreciated.
I am running Windows 10 and Python 3.5, with 4 cores and 8 threads, and I run
the program through the console.
Here is my code:
import math
from multiprocessing import Process, Lock
import time
lowMult = 0
highMult = 0
dist = "ERROR!"
print("Welcome to this maths test program")
print("We will be testing the nature of closest whole multiple pairs")
print("this test will run for all values 900000 to 900100")
print("press enter to begin")
input()
def worker(name, l, num):
l.acquire()
print (name, "Starting")
l.release()
found = False
test = math.sqrt(num)
if (test % 1) == 0:
l.acquire()
print(num, "=", int(test), "*", int(test), "0", "SQRT!")
l.release()
else:
test = int(test)
for lowMult in range(test, 0, -1):
for highMult in range(test, (num +1)):
if (lowMult * highMult) == num:
found = True
dist = highMult - lowMult
break
if found:
break
l.acquire()
if lowMult == 1:
print(num, "=", lowMult, "*", highMult, dist, "PRIME!")
else:
print(num, "=", lowMult, "*", highMult, dist)
print (name, "Exiting")
l.release()
if __name__ == '__main__':
lock = Lock()
jobs = []
num0 = 900000
num1 = 900001
num2 = 900002
num3 = 900003
for num in range(1, 100):
thread1 = Process(name='worker 1', target=worker, args=("worker 1", lock, num0,))
jobs.append(thread1)
thread1.start()
thread2 = Process(name='worker 2', target=worker, args=("worker 2", lock, num1,))
jobs.append(thread2)
thread2.start()
thread3 = Process(name='worker 3', target=worker, args=("worker 3", lock, num2,))
jobs.append(thread3)
thread3.start()
thread4 = Process(name='worker 4', target=worker, args=("worker 4", lock, num3,))
jobs.append(thread4)
thread4.start()
thread1.join()
thread2.join()
thread3.join()
thread4.join()
num0 += 4
num1 += 4
num2 += 4
num3 += 4
while True:
if thread1.is_alive() & thread2.is_alive() & thread3.is_alive() & thread4.is_alive():
time.sleep(2)
else:
input()
break
In IDLE I get nothing, but I read around that this would happen. In the
console, instead of what I am expecting, I get the first few print commands
multiple times:
print("Welcome to this maths test program")
print("We will be testing the nature of closest whole multiple pairs")
print("this test will run for all values 900000 to 900100")
print("press enter to begin")
I get four of these before another input prompt, at which point the first
calculation is done, but it seems not to be in parallel, and the second is
never finished. From my CPU usage it seems it never starts, but the program
doesn't terminate. I have no idea what is going on.
**EDIT:**
It seems as though the actual computing problems lie in my use of
`input()` and the definition of `name`. A few other issues with `.join()` led
me to remove them, and the `.is_alive()` check was fixed to use OR instead of
AND. Nevertheless, when I run this code in Python 3.5 in a Windows console, I
unintentionally get four copies of this output:
Welcome to this maths test program
We will be testing the nature of closest whole multiple pairs
this test will run for all values 900000 to 900100
press enter to begin
This is what I am currently trying to solve.
Answer: If you are using Python 2.7, then `input()` will likely "crash", because it is
"equivalent to `eval(raw_input(prompt))`" according to the documentation:
<https://docs.python.org/2/library/functions.html#input>
You could use `raw_input` for the purpose of reading user inputs (actually I
don't see the point of asking for a keypress here).
Another problem is that the variable `name` is not defined in the `worker`
function.
Also the `thread1.is_alive() & thread2.is_alive() & thread3.is_alive() &
thread4.is_alive()` should always evaluate to `False`.
After removing all the `input()`, the output I get is (with Python 2.7) the
following, which seems to be the expected result as I understand:
Welcome to this maths test program
We will be testing the nature of closest whole multiple pairs
this test will run for all values 900000 to 900100
press enter to begin
(900000, 'Starting')
(900001, 'Starting')
(900002, 'Starting')
(900003, 'Starting')
(900000, '=', 900, '*', 1000, 100)
('worker 1', 'Exiting')
(900003, '=', 611, '*', 1473, 862)
('worker 4', 'Exiting')
(900002, '=', 2, '*', 450001, 449999)
('worker 3', 'Exiting')
(900001, '=', 1, '*', 900001, 900000, 'PRIME!')
('worker 2', 'Exiting')
(900004, 'Starting')
(900005, 'Starting')
(900006, 'Starting')
(900007, 'Starting')
(900006, '=', 6, '*', 150001, 149995)
('worker 3', 'Exiting')
(900004, '=', 28, '*', 32143, 32115)
('worker 1', 'Exiting')
(900007, '=', 1, '*', 900007, 900006, 'PRIME!')
('worker 4', 'Exiting')
(900005, '=', 5, '*', 180001, 179996)
('worker 2', 'Exiting')
...
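Regarding the four copies of the welcome text mentioned in the edit: on
Windows, `multiprocessing` starts each child process by importing the main
module, so any top-level code runs once per child. Keeping the prompts inside
the `if __name__ == '__main__':` guard should print them only once; a minimal
sketch:
    if __name__ == '__main__':
        # children re-import this module but skip this block,
        # so the banner is printed by the parent process only
        print("Welcome to this maths test program")
        print("We will be testing the nature of closest whole multiple pairs")
        print("this test will run for all values 900000 to 900100")
        lock = Lock()
        jobs = []
        # ... the rest of the process-spawning loop goes here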
|
Import tensorflow error on mac
Question: **Environment**:
Mac OSX 10.10
Python: 2.7.10
I got the following error when trying to `import tensorflow`:
Python 2.7.10 (default, Jul 14 2015, 19:46:27)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
RuntimeError: module compiled against API version 0xa but this version of numpy is 0x9
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/Library/Python/2.7/site-packages/tensorflow/python/__init__.py", line 48, in <module>
from tensorflow.python import pywrap_tensorflow
File "/Library/Python/2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/Library/Python/2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: numpy.core.multiarray failed to import
I was following the [official install
page](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md#pip-
installation), and typed these two commands:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/tensorflow-0.9.0-py2-none-any.whl
$ sudo pip install --upgrade $TF_BINARY_URL
I've tried uninstalling `tensorflow` and `protobuf`, and then reinstalling
`tensorflow`, but it threw the same error.
## Update
After I uninstalled `numpy` and `tensorflow`, I reinstalled `numpy`. However, I
couldn't reinstall `tensorflow`.
It threw this:
$sudo pip install --upgrade $TF_BINARY_URL
The directory '/Users/Coda/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/Coda/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
You must give at least one requirement to install (see "pip help install")
Is that because I uninstalled `tensorflow` wrongly? I used `pip uninstall
tensorflow` to uninstall.
Answer: I think the issue is that your `numpy` is one version older than needed (`0x9`
= 9; `0xa` = 10). Maybe upgrade numpy and see if importing `tensorflow` works
after that.
_Edit/Update:_ For your new error, try the pip install with the `--no-cache-
dir` option.
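Concretely, the two suggestions might look like this (assuming `pip` targets
the same Python you run, and that `TF_BINARY_URL` is re-exported if you are in
a fresh shell):
    $ sudo pip install --upgrade numpy
    $ sudo -H pip install --no-cache-dir --upgrade $TF_BINARY_URL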
|
Using requests function in python to submit data to a website and call back a response
Question: I am trying to use the requests library in Python to post the text content of
a text file to a website, submit the text for analysis on said website, and
pull the results back into Python. I have read through a number of responses
here and on other websites, but have not yet figured out how to correctly
adapt the code to a new website.
I'm familiar with Beautiful Soup, so pulling in webpage content and removing
HTML isn't an issue; it's the submitting of the data that I don't understand.
My code currently is:
import requests
fileName = "texttoAnalyze.txt"
    fileHandle = open(fileName, 'rU')
url_text = fileHandle.read()
url = "http://www.webpagefx.com/tools/read-able/"
payload = {'value':url_text}
r = requests.post(url, payload)
print r.text
This code comes back with the HTML of the website, but hasn't recognized the
fact that I'm trying to submit a form.
Any help is appreciated. Thanks so much.
Answer: You need to send the same request the website is sending, usually you can get
these with web debugging tools (like chrome/firefox developer tools).
In this case the url the request is being sent to is:
`http://www.webpagefx.com/tools/read-able/check.php`
With the following params:
`tab=Test+by+Direct+Link&directInput=SOME_RANDOM_TEXT`
So your code should look like this:
url = "http://www.webpagefx.com/tools/read-able/check.php"
payload = {'directInput':url_text, 'tab': 'Test by Direct Link'}
r = requests.post(url, data=payload)
print r.text
Good luck!
|
Creating sequence vector from text in Python
Question: I am now trying to prepare the input data for an LSTM-based NN. I have a large
number of text documents, and what I want is to make a sequence vector for
each document so that I am able to feed them as training data to an LSTM RNN.
My poor approach:
import re
import numpy as np
#raw data
train_docs = ['this is text number one', 'another text that i have']
#put all docs together
train_data = ''
for val in train_docs:
train_data += ' ' + val
tokens = np.unique(re.findall('[a-zа-я0-9]+', train_data.lower()))
voc = {v: k for k, v in dict(enumerate(tokens)).items()}
and then brute-force replace each doc with the "voc" dict.
Is there any libs which can help with this task?
Answer: You could use NLTK to tokenise the training documents. NLTK provides a
standard word tokeniser or allows you to define your own tokeniser (e.g.
RegexpTokenizer). Take a look
[here](http://www.nltk.org/api/nltk.tokenize.html) for more details about the
different tokeniser functions available.
[Here](http://www.nltk.org/book/ch01.html) might also be helpful for pre-
processing the text.
A quick demo using NLTK's pre-trained word tokeniser below:
from nltk import word_tokenize
train_docs = ['this is text number one', 'another text that i have']
train_docs = ' '.join(map(str, train_docs))
tokens = word_tokenize(train_docs)
voc = {v: k for k, v in dict(enumerate(tokens)).items()}
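If you need each distinct word mapped to a single id and a sequence vector per
document (the goal in the question), you could deduplicate before enumerating;
a small sketch building on the variables above:
    docs = ['this is text number one', 'another text that i have']
    # one integer id per distinct token, then one id list per document
    vocab = {tok: idx for idx, tok in enumerate(sorted(set(tokens)))}
    sequences = [[vocab[tok] for tok in word_tokenize(doc)] for doc in docs]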
|
tkinter traceback error on python 3.5.1 windows 8.1
Question: I want to get input data from the user and put it into a text file, but there's
an error as follows:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\dasom\AppData\Local\Programs\Python\Python35-32\Lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/Users/dasom/PycharmProjects/Exercise/4.pyw", line 5, in save_data
filed.write("Depot:\n%s\n" % depot.get())
AttributeError: 'NoneType' object has no attribute 'get'`
It points to line 1549 in Tkinter's `__init__.py` file; I looked it up and
couldn't understand what the problem is.
def __call__(self, *args):
"""Apply first function SUBST to arguments, than FUNC."""
try:
if self.subst:
args = self.subst(*args)
return self.func(*args)
except SystemExit:
raise
except:
self.widget._report_exception()
Here's my whole code
from tkinter import *
def save_data():
filed = open("deliveries.txt", "a")
filed.write("Depot:\n%s\n" % depot.get())
filed.write("Description :\n%s\n" % description.get())
filed.write("Address :\n%s\n" % address.get("1.0", END))
depot.delete(0, END)
description.delete(0, END)
address.delete("1.0", END)
app = Tk()
app.title('Head-Ex Deliveries')
Label(app, text='Depot:').pack()
depot = Entry(app).pack()
Label(app, text="Description:").pack()
description = Entry(app).pack()
Label(app, text='Address:').pack()
address = Text(app).pack()
Button(app, text='save', command=save_data).pack()
app.mainloop()
Actually, I just typed in the code from the textbook. Your help will be greatly
appreciated. Thanks.
Answer: If the textbook has code like that, it's a poor textbook. This line:
depot = Entry(app).pack()
is doing two things. First it creates an `Entry`, and then it places it into
the app. Unfortunately, the `pack()` method acts in-place and returns `None`
instead of a reference to the original `Entry` widget. Split it up:
depot = Entry(app)
depot.pack()
Do this for all similar instances of assigning the `None` return value from an
in-place method to a reference that you expect to point to a useful object.
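Applied to the remaining widgets, the fixed section might look like:
    depot = Entry(app)
    depot.pack()
    description = Entry(app)
    description.pack()
    address = Text(app)
    address.pack()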
|
Tensorflow TypeError on session.run arguments/output
Question: I'm training a CNN quite similar to the one in
[this](http://stackoverflow.com/questions/37901882/tensorflow-reshaping-a-
tensor) example, for image segmentation. The images are 1500x1500x1, and
labels are of the same size.
After defining the CNN structure, I launch the session as in this code sample
(conv_net_test.py):
with tf.Session() as sess:
sess.run(init)
summ = tf.train.SummaryWriter('/tmp/logdir/', sess.graph_def)
step = 1
print ("import data, read from read_data_sets()...")
#Data defined by me, returns a DataSet object with testing and training images and labels for segmentation problem.
data = import_data_test.read_data_sets('Dataset')
# Keep training until reach max iterations
while step * batch_size < training_iters:
batch_x, batch_y = data.train.next_batch(batch_size)
print ("running backprop for step %d" % step)
batch_x = batch_x.reshape(batch_size, n_input, n_input, n_channels)
batch_y = batch_y.reshape(batch_size, n_input, n_input, n_channels)
batch_y = np.int64(batch_y)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
if step % display_step == 0:
# Calculate batch loss and accuracy
#pdb.set_trace()
loss, acc = sess.run([loss, accuracy], feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
step += 1
print "Optimization Finished"
I hit upon the following TypeError (stacktrace below):
conv_net_test.py in <module>()
178 #pdb.set_trace()
--> 179 loss, acc = sess.run([loss, accuracy], feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
180 step += 1
181 print "Optimization Finished!"
tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
370 try:
371 result = self._run(None, fetches, feed_dict, options_ptr,
--> 372 run_metadata_ptr)
373 if run_metadata:
374 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
582
583 # Validate and process fetches.
--> 584 processed_fetches = self._process_fetches(fetches)
585 unique_fetches = processed_fetches[0]
586 target_list = processed_fetches[1]
tensorflow/python/client/session.pyc in _process_fetches(self, fetches)
538 raise TypeError('Fetch argument %r of %r has invalid type %r, '
539 'must be a string or Tensor. (%s)'
--> 540 % (subfetch, fetch, type(subfetch), str(e)))
TypeError: Fetch argument 1.4415792e+2 of 1.4415792e+2 has invalid type <type 'numpy.float32'>, must be a string or Tensor. (Can not convert a float32 into a Tensor or Operation.)
I am stumped at this point. Maybe this is a simple case of converting the
type, but I'm not sure how or where. Also, why does the loss have to be a
string? (I assume the same error would pop up for the accuracy as well, once
this is fixed.)
Any help appreciated!
Answer: Where you use `loss = sess.run(loss)`, you redefine the Python variable
`loss`.
The first time it will run fine. The second time, you will try to do:
sess.run(1.4415792e+2)
Because `loss` is now a float.
* * *
You should use different names like:
loss_val, acc = sess.run([loss, accuracy], feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
|
pywinrm - running New-Mailbox powershell cmdlet remotely
Question: I've been trying to get the [pywinrm](https://pypi.python.org/pypi/pywinrm)
module to run the `New-Mailbox` Powershell cmdlet remotely. This is what I
have so far:
import winrm
ps_text = "$pass = ConvertTo-SecureString -String '%s' -AsPlainText -Force; \
Add-PSSnapIn Microsoft.Exchange.Management.Powershell.E2010; \
New-Mailbox -UserPrincipalName '%s@contoso.lan' \
-Alias '%s' -Name '%s' -Password $pass \
-Firstname '%s' \
-Lastname '%s' \
-DisplayName '%s' \
-PrimarySmtpAddress '%s'" % \
('password123',
'jfd',
'john.doe',
'John Doe',
'John',
'Doe',
'John Doe',
'john.doe@contoso.co.uk')
remote = winrm.Session('https://contoso.co.uk:5986', auth=('ps_remote', 'irrelevant'))
r = remote.run_cmd("powershell", ["-version", "2.0", "-c", ps_text])
This is the output I get:
> New-Mailbox : Value cannot be null. Parameter name: serverSettings At line:1
> char:147 + $pass = ConvertTo-SecureString -String 'password123'
> -AsPlainText -Force; Add-PSSnapIn
> Microsoft.Exchange.Management.Powershell.E2010; New-Mailbox <<<<
> -UserPrincipalName 'jfd@contoso.lan' -Alias 'john.doe' -Name 'John Doe'
> -Password $pass -Firstname 'John' -Lastname 'Doe' -DisplayName 'John Doe'
> -PrimarySmtpAddress 'john.doe@contoso.co.uk' + CategoryInfo : NotSpecified:
> (:) [New-Mailbox], ArgumentNullException + FullyQualifiedErrorId :
> System.ArgumentNullException,Microsoft.Exchange.Management.RecipientTasks.NewMailbox
I figure it doesn't like $pass being unquoted, so I then wrap it in single
quotes and get:
> New-Mailbox : Cannot bind parameter 'Password'. Cannot convert the "$pass"
> value of type "System.String" to type "System.Security.SecureString". At
> line:1 char:231 + $pass = ConvertTo-SecureString -String 'password123'
> -AsPlainText -Force; Add-PSSnapIn
> Microsoft.Exchange.Management.Powershell.E2010; New-Mailbox
> -UserPrincipalName 'jfd@contoso.lan' -Alias 'john.doe' -Name 'John Doe'
> **-Password <<<< '$pass'** -Firstname 'John' -Lastname 'Doe' -DisplayName
> 'John Doe' -PrimarySmtpAddress 'john.doe@contoso.co.uk' + CategoryInfo :
> InvalidArgument: (:) [New-Mailbox], ParameterBindingException +
> FullyQualifiedErrorId :
> CannotConvertArgumentNoMessage,Microsoft.Exchange.Management.RecipientTasks.NewMailbox
Emphasis mine. And now it interprets it literally instead of expanding the
$pass variable. Is there any way I can get this correctly executed?
Some notes:
* I'm not using the documented `run_ps` method of `winrm.Session` as it does not run against the correct version of Powershell on the remote side (version 2.0 is required here as that's what the Exchange Management Snap-in requires).
* I have tried using the low level API of pywinrm as detailed on the package page, but it's no good.
* If I insert `$pass;` after first line of Powershell, the standard output does actually return `System.Security.SecureString`, which makes it even weirder that it would complain about a null argument in the first example.
* I have tried wrapping in triple quotes, and swapping quote styles. No dice.
* I have tried various other methods, including making subprocess calls to a local Powershell to run `New-PSSession` to import the remote session, but this fails to actually get access to the cmdlet I need. I've also tried running script blocks remotely using `Invoke-Command`, but even though that supposedly imports the Snap-in successfully, it doesn't run the actual cmdlet.
* A pure Python solution would be preferable (hence the use of pywinrm), but I'm open to anything at this stage. Examples of running `New-Mailbox` remotely are pretty sparse, or maybe my Google-fu is weak in this instance.
Answer: In PowerShell, single quotes will prevent variable expansion from occurring.
My recommendation would be:
* Remove the single quotes from the `$pass`
* Notice in the error message you posted that the error is coming from `serverSettings`, not `-Password`
* Pass in a value for `-serverSettings` (I couldn't find this parameter in Exchange documentation, but check if it exists or not in your version)
* Validate the PowerShell command on a Windows system, before attempting to issue the command through `pywinrm`
Hope this helps! Cheers
|
Simple python script to get a libreoffice base field and play on vlc
Question: I've banged my head on this one for hours, and I don't understand the
LibreOffice macro API well enough to know how to make this work:
#!/usr/bin/env python3
import subprocess
def play_vlc(path="/path/to/video.avi"):
subprocess.call(['vlc', path])
return None
play_vlc("/path/to/video.avi")
2) I've got python scripts working fine in LibreOffice Base, and this script
is fired on a button press. The video opens (with an error - see below)
Now, _how do I open the path found in a given record's field labeled "path"_?
That is, what is being passed to Python, and how do I pull out that relevant
bit of info?
Further, whenever I fire this, the video plays, but I also get:
com.sun.star.uno.RuntimeExceptionError during invoking function play_vlc in module file:///usr/lib/libreoffice/share/Scripts/python/vlc.py (<class 'TypeError'>: Can't convert 'com.sun.star.lang.EventObject' object to str implicitly
/usr/lib/python3.5/subprocess.py:1480 in function _execute_child() [restore_signals, start_new_session, preexec_fn)]
/usr/lib/python3.5/subprocess.py:947 in function __init__() [restore_signals, start_new_session)]
/usr/lib/python3.5/subprocess.py:557 in function call() [with Popen(*popenargs, **kwargs) as p:]
/usr/lib/libreoffice/share/Scripts/python/vlc.py:8 in function play_vlc() [subprocess.call(['vlc', path])]
/usr/lib/libreoffice/program/pythonscript.py:870 in function invoke() [ret = self.func( *args )]
)
Please help!
Answer: For example, say the form is based on a table containing a column called
`PATH`. Assign the button's `Execute action` event to this function:
def playvlc_button_pressed(oEvent):
oForm = oEvent.Source.getModel().getParent()
lNameCol = oForm.findColumn('PATH')
sPath = oForm.getString(lNameCol)
play_vlc(sPath)
Documentation for Base macros is confusing, but there is some at:
<http://www.pitonyak.org/database/>
|
Reading a serial port in python with unknown data length
Question: Hello, I am trying to read data from a PIC32 microcontroller configured as a
serial port.
The PIC32 sends "binary" data of variable length (14 to 26 bytes long). I want
to read in the data, separate the bits, and then convert them to their decimal
equivalent.
import serial
import csv
#open the configuartion file
with open('config.txt') as configFile:
#save the config parameters to an array called parameters
parameters = configFile.read().split()
#Function to Initialize the Serial Port
def init_serial():
global ser
ser = serial.Serial()
ser.baudrate = 9600
ser.port = 'COM7'
ser.timeout = 10
ser.open()
if ser.isOpen():
print ('Open: ' + ser.portstr)
#call the serial initilization function
init_serial()
#writes the lines from the config file to the serial port
counter = 0
while counter<4:
ser.write(chr(int(parameters[counter])).encode('utf-8') + chr(int(parameters[counter+1])).encode('utf-8'))
counter = counter + 2
#opens the csv file to append to
resultsFile = open('results.csv', 'wt')
#writes the titles of the four columns to the csv file
resultsFile.write("{} {} {} {}\n".format('ChannelAI', 'ChannelAQ', 'ChannelBI', 'ChannelBQ'))
count=0
while count < 10:
#read from serial port
incoming = ser.read(26)
#decodes incoming bytes to a string
incoming = incoming.decode('cp1252')
#will select element 4, 5 & 6 from incoming data
channelAIstr = incoming[4:6]
#converts slected elements to an integer
channelAI=int(channelAIstr, 16)
channelAQstr = incoming[7:10]
#channelAQ=int(channelAQstr, 16)
channelBIstr = incoming[10:13]
#channelBI=int(channelBIstr, 16)
channelBQstr = incoming[13:16]
#channelBQ=int(channelBQstr, 16)
#writes to csv file
resultsFile.write("{} {} {} {}\n".format(str(channelAI), str(channelAQ), str(channelBI), str(channelBQ)))
count = count + 1
#close the file to save memory
resultsFile.close()
I am having some trouble properly reading and converting the bits from the
serial port. Any help on how to do this would be appreciated.
I know I am reading the serial port correctly and am getting data that looks
something like "\x00\x7f\x7f", as an example. I then want to convert this
3-byte string to an integer.
Answer: I am not sure how you determine whether the data is 14 or 26 bytes long or
anything in between.
In all cases, you might want to use a wrapper class which wraps the IO.
On every request for data, the wrapper can either read a certain
number of bytes or all bytes which are available, until you have enough to
decode. Then the wrapper decodes them and returns them as a tuple.
This is a first guess about how you can proceed; I do not see where exactly
you are having trouble in your code.
* * *
But this is only syntactic sugar for a more advanced state of your
program.
For a start, let's look at the more obvious things: you seem to be splitting
the data apart in the wrong way. Your code
    #will select element 4, 5 & 6 from incoming data
    channelAIstr = incoming[4:6]
does not match its own comment: the slice `[4:6]` means from 4 inclusive to 6 exclusive.
If you really want elements 4, 5 and 6, you have to write `[4:7]`.
The next step is to convert this into an integer. If you have `\x00\x7f\x7f`,
this can mean a lot of things:
* If it is a 24-bit integer, it can be little- or big-endian. In the first case, it is `0x7F7F00`; in the second case it is `0x007F7F`.
These cases can be dealt with:
>>> a='\x00\x7f\x7f'
>>> import struct
>>> struct.unpack("<I", a+"\x00")[0]
8355584
>>> 0x7f7f00
8355584
>>> struct.unpack(">I", "\x00"+a)[0]
32639
>>> 0x7f7f
32639
* If it is a different kind of data format (maybe some floating point?), you have to be more explicit about what these data mean.
So, if I am right about the integer solution, just do any of
channelAI = struct.unpack(">I", "\x00" + channelAIstr)[0]
channelAI = struct.unpack("<I", channelAIstr + "\x00")[0]
|
Spark Redshift with Python
Question: I'm trying to connect Spark with Amazon Redshift, but I'm getting this error:
[![enter image description
here](http://i.stack.imgur.com/EV8RD.png)](http://i.stack.imgur.com/EV8RD.png)
My code is as follow :
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", <ACCESSID>)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", <ACCESSKEY>)
df = sql_context.read \
.option("url", "jdbc:redshift://example.coyf2i236wts.eu-central- 1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
.option("dbtable", "table_name") \
.option("tempdir", "bucket") \
.load()
Answer: I think the `s3n://` URL style has been deprecated and/or removed.
Try defining your keys as `"fs.s3.awsAccessKeyId"`.
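That is, mirroring the configuration lines from the question with the renamed
keys:
    sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", <ACCESSID>)
    sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", <ACCESSKEY>)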
|
Unable to Process an image transformed in OpenCV via scikit-image
Question: I want to skeletonize an image using the scikit-image module for
skeletonization. The image is pre-processed with the OpenCV library. Given an
image 'Feb_16-0.jpg', I convert it to grayscale, perform the morphological
transformation of opening, then apply a Gaussian blur and adaptive
thresholding using OpenCV and Python:
import cv2
import numpy as np
from matplotlib import pyplot as plt
from skimage.morphology import skeletonize
from skimage.viewer import ImageViewer
img = cv2.imread('Feb_16-0.jpg',0)
kernel = np.ones((1,1),np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
blur = cv2.GaussianBlur(opening,(1,1),0)
ret3,th4 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
I now want to skeletonize the image using scikit-image's
`skimage.morphology.skeletonize`. I had tried writing code to perform erosion
and dilation to manually skeletonize the image using OpenCV and Python, but
this proved to be a highly inefficient process, so I decided to switch to the
scikit-image library at this point. However, when I pass the numpy array
preprocessed by OpenCV to the scikit-image module using the code:
    skel = skeletonize(th4)
and try to view the results, I end up with the error:
    Image contains values other than 0 and 1
I am unable to interpret the cause. Can anyone kindly help me resolve this
datatype error?
Answer: The input matrix to `skeletonize()` needs to be binary with either 0/1 or
True/False as entries. The output of `cv2.threshold()` is binary, but with
values 0/255. To convert the th4 matrix to 0/1 form you can for example do:
`th4[th4 == 255] = 1`
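Equivalently, you can build a boolean mask without modifying `th4` in place;
`skeletonize()` accepts True/False arrays as well:
    skel = skeletonize(th4 > 0) # a boolean array also satisfies the 0/1 requirement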
|
How to use python requests with a server that has two IP addresses
Question: I have an Ubuntu server that has multiple IP addresses. As an example, how do I
set the correct source IP address for outbound requests in a library like
Python requests?
Answer: By default, this is not handled at the application level but by the operating
system. According to the Linux routing table, the OS will choose the
appropriate interface depending on the destination IP you are trying to reach.
You can edit the Linux routing table with the `ip route` command
([manual](http://linux.die.net/man/8/ip)).
However, you can also tell your application to use a given network interface.
Here is a related question and a proposed solution with a raw socket:
# From http://stackoverflow.com/questions/335607
import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))
s.connect(("321.12.131.432", 80))
With requests, it should look like the following (untested)
# From http://stackoverflow.com/questions/12585317
import socket
real_create_conn = socket.create_connection
def set_src_addr(*args):
address, timeout = args[0], args[1]
source_address = ('IP_ADDR_TO_BIND_TO', 0)
return real_create_conn(address, timeout, source_address)
socket.create_connection = set_src_addr
import requests
r = requests.get('http://www.google.com')
|
What is the best way to save the comments collected from Facebook using Python?
Question: I'm collecting all the comments from some Facebook pages using Python and
Facebook-SDK.
Since I want to do sentiment analysis on these comments, what's the best way
to save these texts so that no changes to the texts are needed?
I'm now saving the comments as a table and then as a CSV file.
    table.to_csv('file-name.csv')
But if I want to read this saved file back, I get the following error:
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position ...
By the way, I'm working with German texts.
Answer: Have you tried this?
Set default encoder at the top of your code
import sys
reload(sys)
sys.setdefaultencoding("ISO-8859-1")
or
pd.read_csv('file-name.csv', encoding = "ISO-8859-1")
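Alternatively, the round trip only fails when the write and read encodings
disagree, so pinning both sides to one encoding (UTF-8 handles German umlauts)
may be the simpler fix; a sketch:
    table.to_csv('file-name.csv', encoding='utf-8')
    table = pd.read_csv('file-name.csv', encoding='utf-8')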
|
grouping rows python pandas
Question: Say I have the following dataframe, where the index represents ages, the column
names are categories, and the values in the frame are frequencies...
Now I would like to group the ages in various ways (2-year bins, 5-year bins
and 10-year bins).
>>> table_w
1 2 3 4
20 1000 80 40 100
21 2000 40 100 100
22 3000 70 70 200
23 3000 100 90 100
24 2000 90 90 200
25 2000 100 80 200
26 2000 90 60 100
27 1000 100 30 200
28 1000 100 90 100
29 1000 60 70 100
30 1000 70 100 100
31 900 40 100 90
32 700 100 30 100
33 700 30 50 90
34 600 10 40 100
I would like to end up with something like...
1 2 3 4
20-21 3000 ... ... ...
22-23 6000 ... ... ...
24-25 4000 ... ... ...
26-27 3000 ... ... ...
28-29 2000 ... ... ...
30-31 1900 ... ... ...
32-33 1400 ... ... ...
34 600 ... ... ...
Is there a simple and efficient way to do this?
Any help is greatly appreciated...
Answer: Use `pd.cut()` to create the age bins and group your dataframe with them.
import io
import numpy as np
import pandas as pd
data = io.StringIO("""\
1 2 3 4
20 1000 80 40 100
21 2000 40 100 100
22 3000 70 70 200
23 3000 100 90 100
24 2000 90 90 200
25 2000 100 80 200
26 2000 90 60 100
27 1000 100 30 200
28 1000 100 90 100
29 1000 60 70 100
30 1000 70 100 100
31 900 40 100 90
32 700 100 30 100
33 700 30 50 90
34 600 10 40 100
""")
df = pd.read_csv(data, delim_whitespace=True)
bins = np.arange(20, 37, 2)
df.groupby(pd.cut(df.index, bins, right=False)).sum()
Output:
1 2 3 4
[20, 22) 3000 120 140 200
[22, 24) 6000 170 160 300
[24, 26) 4000 190 170 400
[26, 28) 3000 190 90 300
[28, 30) 2000 160 160 200
[30, 32) 1900 110 200 190
[32, 34) 1400 130 80 190
[34, 36) 600 10 40 100
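If you prefer labels like `20-21` instead of interval notation, `pd.cut()`
also accepts a `labels` argument; a small sketch building the label strings
from the same bins:
    labels = ['{}-{}'.format(lo, hi - 1) for lo, hi in zip(bins[:-1], bins[1:])]
    df.groupby(pd.cut(df.index, bins, right=False, labels=labels)).sum()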
|
SublimeText3 cannot find Python modules (numpy) installed with MacPorts
Question: I installed Python 3.5 using MacPorts. I am trying to use SublimeText3 as an
editor. (Anything better and more integrated than ST3 for Python development??)
From the Mac OS X terminal, I can 'import numpy' just fine, but SublimeText3
cannot find the package.
Is it because the Python packages are installed as 'Frameworks'? Because the
path for finding those modules is not right? Something else?
Here's what terminal shows:
$ type -a python3
python3 is /opt/local/bin/python3
python3 is /Library/Frameworks/Python.framework/Versions/3.5/bin/python3
python3 is /Library/Frameworks/Python.framework/Versions/3.5/bin/python3
python3 is /usr/local/bin/python3
Here's what ST3 shows:
File "/Users/xxx/Desktop/python_work/array_play.py", line 1, in <module>
import numpy
ImportError: No module named 'numpy'
[Finished in 0.0s with exit code 1]
[cmd: ['/Library/Frameworks/Python.framework/Versions/3.5/bin/python3', '-u', '/Users/xxx/Desktop/python_work/array_play.py']]
[dir: /Users/xxx/Desktop/python_work]
[path: /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/sbin::/Library/Frameworks/Python.framework/Versions/3.5/bin/python3/site-packages]
As you can see, I tried to add
    /opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages
to the path variable, since this is the location of the numpy folder, but this
doesn't work...
Should I just ignore this message in ST3 and use it only as an editor? It
seems rather silly to have an IDE and not be able to build and run programs
from it...
Answer: First, you need to find out which python3 you are using in the terminal. You
can do this by running the command `which python3`. Suppose the output is:
`/Users/username/.anaconda3/bin/python3`.
Second, try to set the PYTHONPATH environment variable in Sublime's settings. Go
to `Preferences -> Settings User`, and add the following lines at the end of
the settings file (but inside the last `}` symbol).
"env":
{
"PYTHONPATH":"/Users/username/.anaconda3/bin"
}
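Alternatively, you could point Sublime at the interpreter that actually has
numpy by creating a custom build system (Tools -> Build System -> New Build
System); a sketch assuming the MacPorts path from the question:
    {
        "cmd": ["/opt/local/bin/python3", "-u", "$file"],
        "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
        "selector": "source.python"
    }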
|
python regex preserve specified special characters only
Question: I've been looking for a way to isolate special characters in a regex
expression, but I only seem to find the exact opposite of what I'm looking
for. So basically, what I want is something along the lines of this:
import re
str = "I only want characters from the pattern below to appear in a list ()[]' including quotations"
pattern = """(){}[]"'-"""
result = re.findall(pattern, str)
What I expect from this is:
print(result)
#["(", ")", "[", "]", "'"]
**Edit:** thank you to whoever answered and then deleted their comment with
this regex that solved my problem:
pattern = r"""[(){}\[\]"'\-]"""
Answer: Why would you need regex for this when it can be done without regex?
>>> str = "I only want characters from the pattern below to appear in a list ()[]' including quotations"
>>> pattern = """(){}[]"'-"""
>>> [x for x in str if x in pattern]
['(', ')', '[', ']', "'"]
|
Exceptions using django standalone with python3
Question: I'm trying to use Django templates in stand-alone mode and get the exceptions
below. I'm new to Python and wondering if anyone would be willing to help out.
Django is used for templating in a script which is not shown here; however,
the exact same exceptions appear when launching it.
>>> from django.template import Template, Context
>>> from django.conf import settings
>>> settings.configure()
>>> t = Template('My name is {{ my_name }}.')
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/template/utils.py", line 86, in __getitem__
return self._engines[alias]
KeyError: 'django'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.4/dist-packages/django/template/base.py", line 182, in __init__
engine = Engine.get_default()
File "/usr/lib/python3.4/functools.py", line 472, in wrapper
result = user_function(*args, **kwds)
File "/usr/local/lib/python3.4/dist-packages/django/template/engine.py", line 88, in get_default
django_engines = [engine for engine in engines.all()
File "/usr/local/lib/python3.4/dist-packages/django/template/utils.py", line 110, in all
return [self[alias] for alias in self]
....
....
File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 137, in get_app_configs
self.check_apps_ready()
File "/usr/local/lib/python3.4/dist-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
My django version:
`python3 -c "import django; print(django.get_version())" ---> 1.9.7`
My python version:
`Python 3.4.3`
Answer: After calling `settings.configure()`, you must call `django.setup()`.
import django
from django.conf import settings
settings.configure()
django.setup()
from django.template import Template, Context
t = Template('My name is {{ my_name }}.')
c=Context({'my_name': 'Mindaugas'})
t.render(c)
See [the docs](https://docs.djangoproject.com/en/1.9/topics/settings/#calling-
django-setup-is-required-for-standalone-django-usage) for more info.
|
Python: Printing data only when number enters or leaves interval
Question: Currently I'm making a script that, given a set of celestial coordinates, will
tell you on which of the coming days that point will be visible to a specific
telescope. The criterion is simple: in the horizontal coordinate system, the
altitude of the object must be between 30 and 65 degrees (the variable "crit"
here represents that, but in radians). So I have a set of parameters for the
telescope called "Ant" and then, using PyEphem:
#imported ephem as ep
obj= ep.FixedBody()
obj._ra= E.ra
obj._dec= E.dec
obj._epoch = E.epoch
Ant.date = ep.now()
for d in range(days):
for i in range(24):
for j in range (60):
Ant.date += ep.minute
obj.compute(Ant)
crit= float(obj.alt)
if crit>=0.523599 and crit <=1.13446:
print "Visible at %s" %Ant.date
This results in printing a lot of "Visible at 2016/7/11 19:41:21" lines, one
for every minute. I just want it to print something like "Enters visibility at
2016/7/11 19:41:21, leaves at 2016/7/11 23:41:00", for example. Any ideas will
be appreciated.
Disclaimer: sorry, not a native English speaker.
Answer: You need to keep track of whether it is already in range. So, for instance, at
the beginning you'd initialize it:
is_visible = False
and your if statement might look like:
if crit>=0.523599 and crit <=1.13446:
if not is_visible:
print "Visible at %s" %Ant.date
is_visible = True
else:
if is_visible:
print "No longer visible at %s" % Ant.date
is_visible = False
|
ImportError: No module named spiders on mac OS using Homebrew installation package
Question: All,
I followed the steps below from scrapy.org to update default system
packages and install Scrapy, the open-source framework for building spiders,
found here: <http://doc.scrapy.org/en/1.1/intro/install.html>
1. I ran the `xcode-select --install` command from terminal
2. ran the command to install the Homebrew package: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
3. I tried to run this command, but don't know if I correctly updated, I copied and pasted this exactly: `echo "export PATH=/usr/local/bin:/usr/local/sbin:$PATH" >> ~/.bashrc`
4. per the installation directions on scrapy, I tried to verify that changes had taken place: `source ~/.bashrc`
5. I ran the command: `brew install python`
6. ran the command : `brew update; brew upgrade python`
7. ran the command: `pip install Scrapy`
I wanted to be very clear about the above commands for updating and installing
packages. I then followed the directions to create a project, define items and
create my first spider.
**Lastly**, when I ran the command `scrapy crawl dmoz`, I got the below error
message
**crawl command with output and error message**
Romans-MBP:tutorial Roman$ scrapy crawl dmoz
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 141, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 238, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 129, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 325, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 33, in from_settings
return cls(settings)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 20, in __init__
self._load_all_spiders()
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 28, in _load_all_spiders
for module in walk_modules(name):
File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 63, in walk_modules
mod = import_module(path)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named spiders
Romans-MBP:tutorial Roman$
Answer: Check in `scrapy/tutorial/tutorial/spiders/[your_spider].py` the name of your
spider, that should be run with `scrapy crawl` command. In the example below,
the name is `dmozdirectory` and the run command is `scrapy crawl
dmozdirectory`
Example:
class DmozSpider(scrapy.Spider):
name = "dmozdirectory"
allowed_domains = ["dmoz.org"]
Also, you should be in the root directory of your project when running that
command, in `scrapy/tutorial/`
|
How to save a dictionary of objects in Python?
Question: I have a Python 3.5 program that creates an inventory of objects. I created a
class of trampolines (color, size, spring, etc.). I constantly create new
instances of the class and then save a dictionary of them. The dictionary
looks like this:
my_dict = {name: instance} and the types are like so {"string": "object"}
MY ISSUE is that I want to know how to save this inventory list so that I can
start where I left off the last time I closed the program.
I don't want to use "pickle" because I'm trying to learn secure ways to do
this for more important versions in the future.
I thought about using SQLite3, so any tips on how to do this easily would be
appreciated.
My preferred solution would state how to do it with the "json" module. I tried
it, but the error I got was
__main__.Trampoline object at 0x00032432... is not JSON serializable
EDIT:
Below is the code I used when I got the error:
out_file = open(input("What do you want to save it as? "), "w")
json.dump(my_dict, out_file, indent=4)
out_file.close()
END of EDIT
I've done a good amount of research and saw that there's also an issue with
many of these save options: you can only do ONE object per 'save file'. The
workaround is to use a dictionary of objects, such as the one I made. Any info
clarifying this would be great, too!
Thanks!
Answer: What you might be able to do is save each instance's attributes to a file (as
JSON below) and then recreate the instances when starting up. This might be a
bit too much code and is possibly not the best way. One obvious problem is
that it doesn't work if you don't have the same number of attributes as
parameters, which should be possible to fix if necessary, I believe. I just
thought I might try posting it and see if it helps :)
import json
class Trampoline:
def __init__(self, color, size, height, spring):
self.color = color
self.size = size
self.height = height
self.spring = spring
def __repr__(self):
return "Attributes: {}, {}, {}, {}".format(self.color, self.size, self.height, self.spring)
my_dict = {
"name1": Trampoline('red', 100, 2.3, True),
"name2": Trampoline('blue', 50, 2.1, False),
"name3": Trampoline('green', 25, 1.8, True),
"name5": Trampoline('white', 10, 2.6, False),
"name6": Trampoline('black', 0, 1.4, True),
"name7": Trampoline('purple', -33, 3.0, True),
"name8": Trampoline('orange', -999, 2.5, False),
}
def save(my_dict):
with open('save_file.txt', 'w') as file:
temp = {}
for name, instance in my_dict.items():
attributes = {}
for attribute_name, attribute_value in instance.__dict__.items():
attributes[attribute_name] = attribute_value
temp[name] = attributes
json.dump(temp, file)
def load():
with open('save_file.txt', 'r') as file:
my_dict = {}
x = json.load(file)
for name, attributes in x.items():
my_dict[name] = Trampoline(**attributes)
return my_dict
# CHECK IF IT WORKS!
save(my_dict)
my_dict = load()
print("\n".join(["{} | {}".format(name, instance) for name, instance in sorted(my_dict.items())]))
|
Python 2.7 - Trying to work convert UTC string to local time taking into account DST
Question: I have a UTC time string like so
supplied_datetime = 20160711230000 -0500
This is the format
yyyyMMddhhmmss +/-hhmm
Now if I apply that offset (-5 hrs) to the original time, it should read
supplied_datetime = 20160711180000
The next part is that I need to correct based on the local time, making sure
to account for DST.
So let's say I'm in the UK, which is UTC +0000 but currently in DST +0100;
then the time that ultimately gets displayed to the user will be
supplied_datetime = 20160711190000
So the formula is `supplied_datetime - (supplied_utc_offset +
local_utc_offset)`
This is as far as I have got before asking here.
local_utc_offset = calendar.timegm(time.localtime()) - calendar.timegm(time.gmtime(time.mktime(time.localtime())))
supplied_utc_offset = parse(programme.get('start')[:20])
Answer: If the format of your datetime strings is always consistent, you can manually
obtain the local datetime given the UTC datetime.
`time.timezone` gives the local offset in seconds.
Then you just need to parse the datetime string and add the local offset in
hours, as well as the offset carried in the datetime string:
from datetime import datetime
import time
dst = time.localtime().tm_isdst # daylight saving
local_offset = (dst-(time.timezone / 3600)) * 10000 # Local offset in hours format
strtime = '20160711230000 -0500' # The original datetime string
dtstr, offset = strtime[:-6], strtime[-5:] # separate the offset from the datetime string
utc_dt = datetime.strptime(dtstr, '%Y%m%d%H%M%S')
local_dtstr = str(int(dtstr) + int(offset) * 100 + local_offset) # get local time string
local_dt = datetime.strptime(local_dtstr, '%Y%m%d%H%M%S')
In [37]: utc_dt
Out[37]: datetime.datetime(2016, 7, 11, 23, 0)
In [38]: local_dt
Out[38]: datetime.datetime(2016, 7, 11, 19, 0)
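One caveat: the string arithmetic above can go wrong when adding the offset
carries across an hour or day boundary (e.g. `20160711233000 -0500`). A sketch
of the same calculation with `timedelta`, which handles the carry for you:
    from datetime import datetime, timedelta
    import time

    strtime = '20160711230000 -0500'
    dtstr, offset = strtime[:-6], strtime[-5:]
    utc_dt = datetime.strptime(dtstr, '%Y%m%d%H%M%S')
    # offset like '-0500' -> signed hours and minutes
    sign = -1 if offset[0] == '-' else 1
    supplied = timedelta(hours=sign * int(offset[1:3]), minutes=sign * int(offset[3:5]))
    dst = time.localtime().tm_isdst # daylight saving
    local = timedelta(hours=dst - time.timezone // 3600)
    local_dt = utc_dt + supplied + local # 2016-07-11 19:00 in the UK example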
|
Python Selenium - What are possible keys in FireFox webdriver profile preferences
Question: I couldn't really find this information anywhere; I am looking for a list of
the possible keys that can be used in the `profile.set_preference()` API.
Here is some context:
from selenium import webdriver
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1024, 768))
display.start()
profile = webdriver.FirefoxProfile()
Now, if I want to, say, specify a client SSL certificate, I need to configure
that as a preference of the Firefox profile. I am trying to find the list of
all the preferences so I can play with this.
Answer: You can look at `profile.DEFAULT_PREFERENCES` which is the `json` at
`python2.7/site-packages/selenium/webdriver/firefox/webdriver_prefs.json`
{u'frozen': {u'app.update.auto': False,
u'app.update.enabled': False,
u'browser.EULA.3.accepted': True,
u'browser.EULA.override': True,
u'browser.displayedE10SNotice': 4,
u'browser.download.manager.showWhenStarting': False,
u'browser.link.open_external': 2,
u'browser.link.open_newwindow': 2,
u'browser.offline': False,
u'browser.reader.detectedFirstArticle': True,
u'browser.safebrowsing.enabled': False,
u'browser.safebrowsing.malware.enabled': False,
u'browser.search.update': False,
u'browser.selfsupport.url': u'',
u'browser.sessionstore.resume_from_crash': False,
u'browser.shell.checkDefaultBrowser': False,
u'browser.tabs.warnOnClose': False,
u'browser.tabs.warnOnOpen': False,
u'datareporting.healthreport.logging.consoleEnabled': False,
u'datareporting.healthreport.service.enabled': False,
u'datareporting.healthreport.service.firstRun': False,
u'datareporting.healthreport.uploadEnabled': False,
u'datareporting.policy.dataSubmissionEnabled': False,
u'datareporting.policy.dataSubmissionPolicyAccepted': False,
u'devtools.errorconsole.enabled': True,
u'dom.disable_open_during_load': False,
u'extensions.autoDisableScopes': 10,
u'extensions.blocklist.enabled': False,
u'extensions.logging.enabled': True,
u'extensions.update.enabled': False,
u'extensions.update.notifyUser': False,
u'javascript.enabled': True,
u'network.http.phishy-userpass-length': 255,
u'network.manage-offline-status': False,
u'offline-apps.allow_by_default': True,
u'prompts.tab_modal.enabled': False,
u'security.csp.enable': False,
u'security.fileuri.origin_policy': 3,
u'security.fileuri.strict_origin_policy': False,
u'security.warn_entering_secure': False,
u'security.warn_entering_secure.show_once': False,
u'security.warn_entering_weak': False,
u'security.warn_entering_weak.show_once': False,
u'security.warn_leaving_secure': False,
u'security.warn_leaving_secure.show_once': False,
u'security.warn_submit_insecure': False,
u'security.warn_viewing_mixed': False,
u'security.warn_viewing_mixed.show_once': False,
u'signon.rememberSignons': False,
u'toolkit.networkmanager.disable': True,
u'toolkit.telemetry.enabled': False,
u'toolkit.telemetry.prompted': 2,
u'toolkit.telemetry.rejected': True},
u'mutable': {u'browser.dom.window.dump.enabled': True,
u'browser.newtab.url': u'about:blank',
u'browser.newtabpage.enabled': False,
u'browser.startup.homepage': u'about:blank',
u'browser.startup.page': 0,
u'dom.max_chrome_script_run_time': 30,
u'dom.max_script_run_time': 30,
u'dom.report_all_js_exceptions': True,
u'javascript.options.showInConsole': True,
u'network.http.max-connections-per-server': 10,
u'startup.homepage_welcome_url': u'about:blank',
u'webdriver_accept_untrusted_certs': True,
u'webdriver_assume_untrusted_issuer': True}}
|
Installing beautifulsoup
Question: I have installed beautifulsoup for Python, but it gives me this error when I
import the library:
Traceback (most recent call last):
File "D:/Playroom/WebScraper_01.py", line 2, in <module>
from bs4 import BeautifulSoup
File "C:\Python\lib\site-packages\bs4\__init__.py", line 29, in <module>
from .builder import builder_registry
File "C:\Python\lib\site-packages\bs4\builder\__init__.py", line 294, in <module>
from . import _htmlparser
File "C:\Python\lib\site-packages\bs4\builder\_htmlparser.py", line 7, in <module>
from html.parser import (
ImportError: cannot import name 'HTMLParseError'
Does anyone know why?
Answer: * If you get the ImportError “No module named HTMLParser”, your problem is that you’re running the Python 2 version of the code under Python 3.
* If you get the ImportError “No module named html.parser”, your problem is that you’re running the Python 3 version of the code under Python 2.
See: <https://www.crummy.com/software/BeautifulSoup/bs4/doc/#problems-after-
installation>
|
How can I make a python3 program not crash if it tries to add a string and a number together
Question: Source code
* * *
import sys
hi = input("Input a number ")
yo = input("Input a second number ")
total = int(hi) + int(yo)
def convertStr(s):
try:
ret = int(s)
print(int(total))
except ValueError:
ret = str(total)
print("There was an error")
convertStr(total)
How can I make it so that Python can add two inputs together but not crash
when I put a string in one of the values?
Answer: You can use your function `convertStr(s)` on the inputs `hi` and `yo`. If an
input is not a number, then ask for it again.
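A minimal sketch of that idea (the `read_int` helper name is just
illustrative): keep asking until `int()` succeeds, then add the two validated
numbers:
    def read_int(prompt):
        while True:
            try:
                return int(input(prompt))
            except ValueError:
                print("That was not a number, please try again.")

    hi = read_int("Input a number ")
    yo = read_int("Input a second number ")
    print("Total:", hi + yo)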
|
Check key, value of nested dictionary in python?
Question: I'm generating a nested dictionary in my program. After generating it, I want
to iterate through that dictionary and check each key and value.
**Program-Code**
This is the dictionary I want to iterate over; its values contain other
dictionaries.
main_dict = {101: {1234: [11111,11111],5678: [44444,44444]},
102: {9100: [55555,55555],1112: [77777,88888]}}
I'm reading a CSV file and storing its contents in this dictionary, like this:
**Input.csv -**
lineno,item,total
101,1234,11111
101,1234,11111
101,5678,44444
101,5678,44444
102,9100,55555
102,9100,55555
102,1112,77777
102,1112,88888
This is the input CSV file. After reading it, I want to know, for each unique
item, how many times each total repeats.
To do that, I'm doing this:
    for line in reader:
        if line[0] in main_dict:
            if line[1] in main_dict[line[0]]:
                main_dict[line[0]][line[1]].append(line[2])
            else:
                main_dict[line[0]].update({line[1]: [line[2]]})
        else:
            main_dict[line[0]] = {line[1]: [line[2]]}
    print main_dict
**Output of above program :**
{101: {1234: [11111,11111],5678: [44444,44444]},
102: {9100: [55555,55555],1112: [77777,88888]}}
**but I'm facing following error in this line-**
if line[1] in main_dict[line[0]]:
IndexError: list index out of range
**Iteration of main_dict-**
    for key, value in main_dict.iteritems():
        f1 = open(outputfile + op_directory + '/' + key + '.csv', 'w')
        writer1 = csv.DictWriter(f1, delimiter=',', fieldnames=fieldname)
        writer1.writeheader()
        if type(value) == type({}):
            for k, v in value.iteritems():
                if type(v) == type([]):
                    set1 = set(v)
                    for se in set1:
                        writer1.writerow({'item': k, 'total': se, 'total_count': v.count(se)})
I want to know the best way to iterate over this type of dictionary.
Sometimes I get the correct result, just like the dictionary above, but many
times I face this error. What am I missing?
Thanks in advance!
Answer: As the comments pointed out, you are not checking if `line` is of length 3:
    for line in reader:
        if not len(line) == 3:
            continue
Concerning your algorithm, I would use nested `defaultdict` to avoid the
if/else lines.
EDIT: I added a new defaultdict and the csv writing part after the question
edit:
    from collections import defaultdict
    import csv

    counter = defaultdict(lambda: defaultdict(list))
    main_dict = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
    fieldnames = ['item', 'total', 'total_count']

    # we suppose reader is a csv.reader object
    with open('input.csv', 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        for line in reader:
            if not len(line) == 3:
                continue
            # Remove unwanted spaces
            lineno, item, total = [el.strip() for el in line]
            # Do not deal with non digit entries (title for example)
            if not lineno.isdigit():
                continue
            counter[lineno][item].append(total)
            csvdict = {'item': item,
                       'total': total,
                       'total_count': counter[lineno][item].count(total)}
            main_dict[lineno][item][total].update(csvdict)

    # The writing part
    for lineno in sorted(main_dict):
        itemdict = main_dict[lineno]
        output = 'output_%s.csv' % lineno
        with open(output, 'wb') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames, delimiter=',')
            writer.writeheader()
            for totaldict in itemdict.values():
                for csvdict in totaldict.values():
                    writer.writerow(csvdict)
You can then use the following function to print a readable representation of
the result:
    def myprint(obj, ntab=0):
        if isinstance(obj, (dict, defaultdict)):
            for k in sorted(obj):
                myprint('%s%s' % (ntab*' ', k), ntab+1)
                myprint(obj[k], ntab+1)
        else:
            print('%s%s' % (ntab*' ', obj))

    myprint(main_dict)
But if you want to count the item totals, I would use another defaultdict with
the total as the key and a tuple (lineno, item) as the value:
    from collections import defaultdict
    import csv

    total_dict = defaultdict(list)

    # we suppose reader is a csv.reader object
    with open('input.csv', 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        for line in reader:
            if not len(line) == 3:
                continue
            # Remove unwanted spaces
            lineno, item, total = [el.strip() for el in line]
            # Do not deal with non digit entries (title for example)
            if not lineno.isdigit():
                continue
            total_dict[total].append((lineno, item))
You can have the number of each total very easily:
>>> print len(total_dict['55555'])
2
|
Error while retrieving data from S3 using boto for python
Question: I'm trying to get data from Amazon S3 using boto for python.
    from boto.s3.connection import S3Connection

    AWS_KEY = 'MY_KEY'
    AWS_SECRET = 'MY_SECRET'
    aws_connection = S3Connection(AWS_KEY, AWS_SECRET)
    bucket = aws_connection.get_bucket('s3://mybucket.buckets.com/')
    for file_key in bucket.list():
        print file_key.name
I'm passing a valid Key, secret_key and Bucketname.
When I try executing the above code I'm getting the following error -
    Traceback (most recent call last):
      File "MyPython_Script.py", line 7, in <module>
        bucket = aws_connection.get_bucket('s3://mybucket.buckets.com/')
      File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 506, in get_bucket
        return self.head_bucket(bucket_name, headers=headers)
      File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 525, in head_bucket
        response = self.make_request('HEAD', bucket_name, headers=headers)
      File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 668, in make_request
        retry_handler=retry_handler
      File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 1071, in make_request
        retry_handler=retry_handler)
      File "/usr/local/lib/python2.7/site-packages/boto/connection.py", line 1030, in _mexe
        raise ex
    socket.gaierror: [Errno 8] nodename nor servname provided, or not known
Any idea how to overcome this error ? Is my format used to pass the bucket
name correct ?
Answer: You just need to pass the name of your bucket, not a full URL (note that the
S3 endpoint would be <http://s3-aws-region.amazonaws.com/bucket>).
If you're using boto2
    from boto.s3.connection import S3Connection

    AWS_KEY = 'MY_KEY'
    AWS_SECRET = 'MY_SECRET'
    aws_connection = S3Connection(AWS_KEY, AWS_SECRET)
    bucket = aws_connection.get_bucket('bucket_name', validate=False)
    for file_key in bucket.list():
        print file_key.name
If you're using boto3
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('name')
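To mirror the listing loop from the question, boto3 iterates over keys like
this (assuming credentials are configured, e.g. in the environment or in
`~/.aws/credentials`):

    for obj in bucket.objects.all():
        print(obj.key)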
|
how to get a full list from a function python
Question: I'm totally new to Python. So here is my issue.
    def visitdir(path):
        result = []
        for root, dirs, files in os.walk(path):
            for filepath in files:
                result = ''.join(os.path.join(root, filepath))
                print result

    if __name__ == '__main__':
        path = raw_input("Please input the folder you want to walk:")
        visitdir(path)
It will give the full list while inputting the folder I want to walk. But if I
modified it as below:
    import os, sys

    def visitdir(path):
        result = []
        for root, dirs, files in os.walk(path):
            for filepath in files:
                result = ''.join(os.path.join(root, filepath))
                return result

    if __name__ == '__main__':
        path = raw_input("Please input the folder you want to walk:")
        final = visitdir(path)
        print final
It will only give one item from the selected path. If I want the same result
as the previous one, how do I invoke the function?
Thanks in advance.
Answer: When you call `return`, you end the function and hand back the value, so the
loop stops on its first iteration. Also, why do you create the list `result`
and then overwrite it with a string?
I think what you meant to do is something like this:
    def visitdir(path):
        result = []
        for root, dirs, files in os.walk(path):
            for filepath in files:
                filename = ''.join(os.path.join(root, filepath))
                result.append(filename)
        return result
i.e. filling up the list with results and returning it at the end.
Another thing you could do is to use
[yield](https://pythontips.com/2013/09/29/the-python-yield-keyword-explained/),
as sketched below.
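A generator version would be a sketch like this (Python 2, to match the
`raw_input` in the question); it produces filenames lazily instead of
building the whole list in memory:

    import os

    def visitdir(path):
        for root, dirs, files in os.walk(path):
            for filepath in files:
                yield os.path.join(root, filepath)

    for filename in visitdir(path):
        print filename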
|
Page doesn't redirect correctly
Question: I'm learning Django on my own, and while following the tutorial [Writing
your first Django app, part
4](https://docs.djangoproject.com/en/1.9/intro/tutorial04/) today, I met this
problem. (I'm using Django 1.9.7, Python 3.5.2 64-bit, and PyCharm.)
When I select a choice and click the Vote button, it is supposed to redirect
to the results page, but instead I always get the error message "You didn't
select a choice", which is supposed to occur only when I don't select any
choice. I have checked my code and figured something must have gone wrong in
the `vote` function, but I don't know the reason.
polls\view.py
    def vote(request, question_id):
        question = get_object_or_404(Question, pk=question_id)
        try:
            selected_choice = question.choice_set.get(pk=request.POST['choice'])
        except (KeyError, Choice.DoesNotExist):
            return render(request, 'polls/detail.html', {
                'question': question,
                'error_message': "You didn't select a choice.",
            })
        else:
            selected_choice.votes += 1
            selected_choice.save()
            return HttpResponseRedirect(reverse('polls:results', args=(question.id,)))
polls\urls.py
    from django.conf.urls import url
    from . import views

    app_name = 'polls'
    urlpatterns = [
        url(r'^$', views.index, name='index'),
        url(r'^(?P<question_id>[0-9]+)/$', views.detail, name='detail'),
        url(r'^(?P<question_id>[0-9]+)/results/$', views.results, name='results'),
        url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
    ]
polls\template\polls\detail.html
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
    <h1>{{ question.question_text }}</h1>
    {% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
    <form action="{% url 'polls:vote' question.id %}" methon="post">
        {% csrf_token %}
        {% for choice in question.choice_set.all %}
            <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" />
            <label for="choice{{ forloop.counter }}">{{ choice.choice_text }}</label><br />
        {% endfor %}
        <input type="submit" value="Vote" />
    </form>
    </body>
    </html>
Answer: Sorry for wasting all your time. It was a typo which caused the problem.
In polls\template\polls\detail.html, `<form action="{% url 'polls:vote'
question.id %}" methon="post">`
`methon` should be `method`. With the invalid attribute the browser falls back
to a GET request, so `request.POST` never contains `'choice'` and the error
branch always runs. Fixing the typo solved the problem.
|
Does Python's logging.config.dictConfig() apply the logger's configuration settings?
Question: I've been trying to implement a basic logger that writes to a file in Python
3.5, loading the settings from a JSON config file. I'll show my code first;
`log_config.json`
    {
        "version": 1,
        "disable_existing_loggers": "false",
        "logging": {
            "formatters": {
                "basic": {
                    "class": "logging.Formatter",
                    "style": "%",
                    "datefmt": "%I:%M:%S",
                    "format": "[%(asctime)] %(levelname:<8s): (name:<4s): %(message)"
                }
            },
            "handlers": {
                "file": {
                    "class": "logging.handlers.FileHandler",
                    "level": "DEBUG",
                    "formatter": "basic",
                    "filename": "test.log",
                    "mode": "a",
                    "encoding": "utf-8"
                }
            },
            "loggers": { },
            "root": {
                "handlers": ["file"],
                "level": "DEBUG"
            }
        }
    }
And `logger.py`
    import json
    import logging
    import logging.config

    logging.basicConfig()

    with open("log_config.json", "r") as fd:
        logging.config.dictConfig(json.load(fd))

    logger = logging.getLogger()  # Returns the "root" logger
    print(logger.getEffectiveLevel())  # Check what level of messages will be shown

    logger.debug("Test debug message")
    logger.info("Test info message")
    logger.warn("Test warning message")
    logger.error("Test error message")
    logger.critical("Test critical message")
When run with `python3 logger.py` produces the output (in the terminal);
30
WARNING:root:Test warning message
ERROR:root:Test error message
CRITICAL:root:Test critical message
First: looking at [Python's logging
levels](https://docs.python.org/3.5/library/logging.html#logging-levels), 30
is the default logging level of 'WARNING'. This contradicts both of the
`level` properties I set in the handler and the root logger. It seems the
JSON is incorrect or I have missed a function call to apply it.
Second: [This thread](http://stackoverflow.com/questions/13805905/python-logging-dictconfig)
makes me think that although I load the config with the call of
`dictConfig()`, I still need to apply it to the logging with further calls in
my `logger.py` file. It seems a bit redundant that you have the config and
then have to verbosely apply each setting anyway.
Additionally: when I tried using the [Configuration file
format](https://docs.python.org/3.5/library/logging.config.html#configuration-file-format)
it worked as I thought it would, namely loading the file with one function
call and being able to make logging calls straight away. This is confusing,
because why would the older `fileConfig()` call used with this format offer
more streamlined functionality than `dictConfig()` with JSON or YAML?
Ultimately, I'm a bit confused and would like to figure this out. I appreciate
your time and help.
EDIT: From Alex.P's comment, I added the following handler to
`log_config.json` and pointed the root logger's handlers at it.
"console": {
"class": "logging.StreamHandler",
"level": "DEBUG",
"formatter": "basic",
"stream": "ext://sys.stdout"
},
Checking the output, it is the same as above.
Answer: Ah, I figured out what was wrong. Turns out it was the JSON. [This
example](https://gist.github.com/glenfant/4358668), which I was basing my work
off of, has an extra `logging` property in the JSON which encapsulates all
the loggers, handlers etc.
Removing that property and making the hierarchy more like a YAML file (which I
also tested and got working correctly), it works as expected. I could even
remove the extra call to `basicConfig` in my `logger.py`.
Final JSON;
    {
        "version": 1,
        "disable_existing_loggers": "false",
        "formatters": {
            "basic": {
                "class": "logging.Formatter",
                "datefmt": "%I:%M:%S",
                "format": "%(asctime)s %(levelname)s %(name)s %(message)s"
            }
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "level": "DEBUG",
                "formatter": "basic",
                "stream": "ext://sys.stdout"
            },
            "file": {
                "class": "logging.FileHandler",
                "level": "DEBUG",
                "formatter": "basic",
                "filename": "test.log",
                "mode": "w",
                "encoding": "utf-8"
            }
        },
        "loggers": { },
        "root": {
            "handlers": ["console", "file"],
            "level": "DEBUG"
        }
    }
|
How to combine columns in a layout (colspan feature)
Question: I have this code:
    #!/usr/bin/env python3

    from PyQt5.QtWidgets import *
    import sys

    class Window(QWidget):
        def __init__(self):
            QWidget.__init__(self)
            layout = QGridLayout()
            self.setLayout(layout)
            label_1 = QLabel("label 1")
            layout.addWidget(label_1, 0, 0)
            label_2 = QLabel("label 2")
            layout.addWidget(label_2, 0, 1)
            label_3 = QLabel("label 3")
            layout.addWidget(label_3, 1, 0)

    app = QApplication(sys.argv)
    screen = Window()
    screen.show()
    sys.exit(app.exec_())
I have this result:
![image example of current code](http://i.stack.imgur.com/4Hv7L.png)
but I need this:
![image example of code which I need](http://i.stack.imgur.com/ApOhK.png)
How can I do it?
Answer: The fourth and fifth arguments of
[addWidget](http://doc.qt.io/qt-5/qgridlayout.html#addWidget-2) allow you to
specify how many rows and columns to span:
label_3 = QLabel("label 3")
layout.addWidget(label_3, 1, 0, 1, 2)
|
Creating an application with wxPython where I could navigate across several pages
Question: I would like to create an application built with wxPython where I could
navigate across several pages with two next and previous buttons.
Can you give me real code examples?
Answer: What you are likely looking for is a wizard. There are several pages on the
wxPython website that talk about how to create one:
* <https://wiki.wxpython.org/wxWizard>
* <https://wxpython.org/Phoenix/docs/html/wx.adv.Wizard.html?highlight=wizard>
I have also written on the topic on my blog. You can see how to roll your own
wizard [here](http://www.blog.pythonlibrary.org/2012/07/12/wxpython-how-to-create-a-generic-wizard/)
or create one using the widget that wxPython provides
[here](http://www.blog.pythonlibrary.org/2011/01/27/wxpython-a-wizard-tutorial/).
Here's a short example:
    import wx
    import wx.wizard as wiz

    ########################################################################
    class TitledPage(wiz.WizardPageSimple):
        """"""

        #----------------------------------------------------------------------
        def __init__(self, parent, title):
            """Constructor"""
            wiz.WizardPageSimple.__init__(self, parent)
            sizer = wx.BoxSizer(wx.VERTICAL)
            self.SetSizer(sizer)

            title = wx.StaticText(self, -1, title)
            title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
            sizer.Add(title, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
            sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)

    #----------------------------------------------------------------------
    def main():
        """"""
        wizard = wx.wizard.Wizard(None, -1, "Simple Wizard")
        page1 = TitledPage(wizard, "Page 1")
        page2 = TitledPage(wizard, "Page 2")
        page3 = TitledPage(wizard, "Page 3")
        page4 = TitledPage(wizard, "Page 4")

        wx.wizard.WizardPageSimple.Chain(page1, page2)
        wx.wizard.WizardPageSimple.Chain(page2, page3)
        wx.wizard.WizardPageSimple.Chain(page3, page4)
        wizard.FitToPage(page1)
        wizard.RunWizard(page1)
        wizard.Destroy()

    #----------------------------------------------------------------------
    if __name__ == "__main__":
        app = wx.App(False)
        main()
        app.MainLoop()
|
Creating new matrix from dataframe and matrix in pandas
Question: I have a dataframe `df` which looks like this:
id1 id2 weights
0 a 2a 144.0
1 a 2b 52.5
2 a 2c 2.0
3 a 2d 1.0
4 a 2e 1.0
5 b 2a 2.0
6 b 2e 1.0
7 b 2f 1.0
8 b 2b 1.0
9 b 2c 0.008
And a similarity matrix `mat` between the elements of the `id2` column:
2a 2b 2c 2d 2e 2f
2a 1 0.5 0.7 0.2 0.1 0.3
2b 0.5 1 0.6 0.4 0.3 0.4
2c 0.7 0.6 1 0.1 0.4 0.2
2d 0.2 0.4 0.1 1 0.8 0.7
2e 0.1 0.3 0.4 0.8 1 0.8
2f 0.3 0.4 0.2 0.7 0.8 1
Now I would like to create a similarity matrix between the elements of `id1`
and the elements of `id2`. For that I consider the elements of `id1` as
barycentres of the corresponding elements of `id2` in my dataframe `df` (with
the corresponding `weights`).
My first attempt to do that is with loops (aouch):
    ids = df.id1.unique()
    output = pd.DataFrame(columns=mat.columns, index=ids)

    for id in ids:
        df_slice = df.loc[df.id1 == id]
        to_normalize = df_slice.weights.sum()
        temp = mat.loc[df_slice.id2]
        for art in df_slice.id2:
            temp.loc[art] *= df_slice.ix[df_slice.id2 == art, 'weights'].values[0]
            temp.loc[art] /= (1.*to_normalize)
        output.loc[id] = temp.sum()
But of course this is not pythonic at all, and takes ages (`timeit` for these
small matrices showed `21.3ms`; it is not computable for a 10k-row `df` and a
3k-by-3k `mat`). What would be a cleaner/more efficient way to do it?
Desired output:
2a 2b 2c 2d 2e 2f
a 0.857606 0.630424 0.672319 0.258354 0.163342 0.329676
b 0.580192 0.540096 0.520767 0.459425 0.459904 0.559425
And is there a way to compute another similarity matrix between the elements
of `id1` (from this data)?
Thank you in advance.
Answer: The following clocks in at 6-7ms (vs. around 30ms that your approach takes on
my machine).
    import io
    import pandas as pd

    raw_df = io.StringIO("""\
    id1  id2  weights
    0  a  2a  144.0
    1  a  2b  52.5
    2  a  2c  2.0
    3  a  2d  1.0
    4  a  2e  1.0
    5  b  2a  2.0
    6  b  2e  1.0
    7  b  2f  1.0
    8  b  2b  1.0
    9  b  2c  0.008
    """)
    df = pd.read_csv(raw_df, delim_whitespace=True)

    raw_mat = io.StringIO("""\
        2a   2b   2c   2d   2e   2f
    2a  1    0.5  0.7  0.2  0.1  0.3
    2b  0.5  1    0.6  0.4  0.3  0.4
    2c  0.7  0.6  1    0.1  0.4  0.2
    2d  0.2  0.4  0.1  1    0.8  0.7
    2e  0.1  0.3  0.4  0.8  1    0.8
    2f  0.3  0.4  0.2  0.7  0.8  1
    """)
    mat = pd.read_csv(raw_mat, delim_whitespace=True)

    df['norm'] = df.groupby('id1')['weights'].transform('sum')
    m = pd.merge(df, mat, left_on='id2', right_index=True)
    m[mat.index] = m[mat.index].multiply(m['weights'] / m['norm'], axis=0)
    output = m.groupby('id1')[mat.index].sum()
    output.columns.name = 'id2'
    print(output)
Output:
id2 2a 2b 2c 2d 2e 2f
id1
a 0.857606 0.630424 0.672319 0.258354 0.163342 0.329676
b 0.580192 0.540096 0.520767 0.459425 0.459904 0.559425
|
Python - Run function with parameters in command line
Question: Is it possible to run a python script with parameters in command line like
this:
./hello(var=True)
or is it mandatory to do like this:
python -c "from hello import *;hello(var=True)"
The first way is shorter and simpler.
Answer: Most shells use parentheses for grouping or sub-shells. So you can't call any
commands like `command(arg)` from a normal shell ...but you can write a python
script (./hello.py) that takes an argument.
    import optparse

    parser = optparse.OptionParser()
    parser.add_option('-f', dest="f", action="store_true", default=False)
    options, remainder = parser.parse_args()

    print("Flag={}".format(options.f))
And then call it with `python hello.py -f`
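For what it's worth, `optparse` has been deprecated since Python 2.7 in favour
of `argparse`; the equivalent there would be:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='f', action='store_true', default=False)
    args = parser.parse_args()

    print("Flag={}".format(args.f))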
|
Embedded Python does not work pointing to Python35.zip with NumPy - how to fix?
Question: Okay here's the basic example from the Python website for a simple `runpy.exe`
to run Python scripts below. It works fine using Visual Studio 2015 on x64
Windows after referencing the Python includes and linking to `python35.lib`
for basic functions (the docs don't mention `pyvenv.cfg` must be in the EXE
directory). However, calling a script that imports `NumPy` leads to this error
`ImportError: No module named 'numpy' Failed to load "eig"` only when using
embedded `python35.zip`, so how does one include `NumPy` in an embedded Python
EXE? I.e. I want to also "embed" NumPy (as a .zip, directory, .dll, or .pyd
etc.). I've tried adding the NumPy includes and also linking to `npymath.lib`
but I get the same import error. I've also dug through some Cython wrapper
code but haven't found a solution. Here is the Python embedded sample code:
    #include <Python.h>
    #include <iostream>

    int main(int argc, char *argv[])
    {
        PyObject *pName, *pModule, *pDict, *pFunc;
        PyObject *pArgs, *pValue;
        int i;

        if (argc < 3) {
            fprintf(stderr, "Usage: call pythonfile funcname [args]\n");
            return 1;
        }

        Py_SetPath(L"python35.zip"); // this is in the current directory
        Py_Initialize();
        pName = PyUnicode_DecodeFSDefault(argv[1]);
        /* Error checking of pName left out */

        pModule = PyImport_Import(pName);
        Py_DECREF(pName);

        if (pModule != NULL) {
            pFunc = PyObject_GetAttrString(pModule, argv[2]);
            /* pFunc is a new reference */

            if (pFunc && PyCallable_Check(pFunc)) {
                pArgs = PyTuple_New(argc - 3);
                for (i = 0; i < argc - 3; ++i) {
                    pValue = PyLong_FromLong(atoi(argv[i + 3]));
                    if (!pValue) {
                        Py_DECREF(pArgs);
                        Py_DECREF(pModule);
                        fprintf(stderr, "Cannot convert argument\n");
                        return 1;
                    }
                    /* pValue reference stolen here: */
                    PyTuple_SetItem(pArgs, i, pValue);
                }
                pValue = PyObject_CallObject(pFunc, pArgs);
                Py_DECREF(pArgs);
                if (pValue != NULL) {
                    printf("Result of call: %ld\n", PyLong_AsLong(pValue));
                    Py_DECREF(pValue);
                }
                else {
                    Py_DECREF(pFunc);
                    Py_DECREF(pModule);
                    PyErr_Print();
                    fprintf(stderr, "Call failed\n");
                    return 1;
                }
            }
            else {
                if (PyErr_Occurred())
                    PyErr_Print();
                fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]);
            }
            Py_XDECREF(pFunc);
            Py_DECREF(pModule);
        }
        else {
            PyErr_Print();
            fprintf(stderr, "Failed to load \"%s\"\n", argv[1]);
            return 1;
        }
        Py_Finalize();
        return 0;
    }
Embed file is here:
<https://www.python.org/ftp/python/3.5.2/python-3.5.2-embed-amd64.zip>,
`python35.zip` inside the archive. Here is the simple test script (`runpy eig
eig 10` to test - note if you don't embed `Python35.zip` and have `NumPy` /
`SciPy` installed it WILL run):
eig.py
    import numpy as np
    from scipy import linalg

    def eig(a):
        c = np.random.rand(a, a)*100
        c = np.corrcoef(c)
        print('You are taking the eigsh of a ', a, '^2 matrix')
        e, f = linalg.eig(c)
        return print('Eigvals are: ', np.diag(f))
Anyone know how to fix this issue? Much appreciated.
Answer: This does not work because numpy is not in the zipfile `python35.zip`. The
runpy program sets the path to `python35.zip`, which thus becomes the only
entry on the Pythonpath for this program. You also have to add the parent
folder of your local numpy folder to the Pythonpath to make it work.
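A sketch of that fix in the embedding code above - the `Lib\site-packages`
location is an assumption, point it at whatever directory actually contains
your `numpy` folder, and note that the path-list separator for `Py_SetPath` is
`;` on Windows:

    /* Search both the embedded stdlib zip and the folder holding numpy. */
    Py_SetPath(L"python35.zip;Lib\\site-packages");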
|
RemovedInDjango110Warning: The context_instance argument of render_to_string is deprecated
Question: For one of the apps, I'm overloading the "delete selected objects" method in a
Django 1.9.x project which uses the Admin panel. For that, I have a code
similar to this:
    from django.contrib.admin import helpers
    from django.http import HttpResponseRedirect
    from django.shortcuts import render_to_response
    from django.template import RequestContext

    class MAdmin(admin.ModelAdmin):
        actions = ['delete_selected']

        def delete_selected(self, request, queryset):
            if 'apply' in request.POST:
                # User has confirmed deletion of items
                return HttpResponseRedirect(request.get_full_path())
            else:
                # User must confirm if they wish to delete selected items
                return render_to_response('admin/confirm_delete.html',
                                          {'queryset': queryset,
                                           'action_checkbox_name': helpers.ACTION_CHECKBOX_NAME},
                                          context_instance=RequestContext(request))
The last line produces the following warning in the console:
> .../virtualenv/lib/python2.7/site-packages/django/shortcuts.py:45:
> RemovedInDjango110Warning: The context_instance argument of render_to_string
> is deprecated. using=using)
I've tried to find some resource that explains how to "update" syntax
according to version 1.10.x but I've been unable to.
A similar question has already been asked
[here](http://stackoverflow.com/questions/36488024/the-context-instance-argument-of-render-to-string-is-deprecated)
but it's slightly different since the OP is calling `render_to_string`
explicitly and I'm not.
How should I update the call above to make the warning disappear?
Answer: Yes, the "right" way to fix it would be to update it. According to the
[documentation, they recommend using
`render()`](https://docs.djangoproject.com/en/1.9/topics/http/shortcuts/#id1)
> Deprecated since version 1.8: The context_instance argument is deprecated.
> Use the render() function instead which always makes RequestContext
> available.
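Applied to the view above, a sketch of the `render()` version (same template
and context, just no `context_instance`):

    from django.shortcuts import render

    def delete_selected(self, request, queryset):
        if 'apply' in request.POST:
            return HttpResponseRedirect(request.get_full_path())
        return render(request, 'admin/confirm_delete.html',
                      {'queryset': queryset,
                       'action_checkbox_name': helpers.ACTION_CHECKBOX_NAME})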
If you do want to suppress this warning (highly unrecommended), you can use
the `SILENCED_SYSTEM_CHECKS` setting.
[More on the system check can be found
here](https://docs.djangoproject.com/en/1.8/ref/checks/#core-system-checks)
|
Capitalization of filenames storing Python classes
Question: **C++**
I use a rigorous rule of capitalizing class names.
Over many years I tried to use the somewhat inconsistent rule of using
lowercase names for the files—when writing in C++.
For example, `class Stopwatch` would be in the files `stopwatch.hpp` and
`stopwatch.cpp`.
I am not sure at this point how or why I found this awkward, but I'm
reasonably sure that it turned out to be. Now I use exactly the same case for
the files as for the class. One benefit is that it helps avoid [annoying
issues](http://stackoverflow.com/q/38309106/704972) in version control on OS
X.
**Python**
[PEP 8](https://www.python.org/dev/peps/pep-0008/) recommends lowercase names
for modules and packages. It makes no recommendations regarding filenames
holding classes.
Is there such a recommendation or some best practices?
Answer: In Python, each file is a module, so to follow PEP 8 your import should read
as follows:

    from stopwatch import Stopwatch

Therefore the file should be named `stopwatch.py`.
|
Why np.load() couldn't read my ndarray data in pickled file?
Question: I am trying to analyze tensor data, but I could not read the data in a
pickled file using np.load(). My Python code is as follows:
    import pickle
    import numpy as np
    import sktensor as skt
    import numpy.random as rn

    data = np.ones((10, 8, 3), dtype='int32')  # 3-mode count tensor of size 10 x 8 x 3
    ##data = skt.dtensor(data)

    with open('data.dat', 'w+') as f:  # can be stored as a .dat using pickle
        pickle.dump(data, f)

    with open('data.dat', 'r+') as f:  # can be loaded back in using pickle.load
        tmp = pickle.load(f)

    assert np.allclose(tmp, data)
But when I attempted to use np.load() to load the data in data.dat as follows:

    np.load('G:\data.dat')

the following error appears:
    Traceback (most recent call last):
      File "<pyshell#34>", line 1, in <module>
        np.load('D:/GDELT_Tensor/data.dat', mmap_mode = 'r')
      File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 416, in load
        "Failed to interpret file %s as a pickle" % repr(file))
    IOError: Failed to interpret file 'D:/data.dat' as a pickle.
Anyone can help me?
Answer: Don't use the pickle module to save NumPy arrays. Instead, use one of the
methods here: <http://docs.scipy.org/doc/numpy/reference/routines.io.html>
There's even one that uses pickle under the hood, for example:
np.save('data.dat', data)
tmp = np.load('data.dat')
Another format like CSV or HDF5 might be more suitable for most applications--
especially where you might want to interoperate with non-Python systems.
|
Something strange happens with Python multiprocessing
Question: I've just tested Python multiprocessing for reading a file or a global
variable, but something strange happens. For example:
    import multiprocessing

    a = 0

    def test(lock, name):
        global a
        with lock:
            for i in range(10):
                a = a + 1
                print "in process %d : %d" % (name, a)

    def main():
        lock = multiprocessing.Lock()
        p1 = multiprocessing.Process(target=test, args=(lock, 1))
        p2 = multiprocessing.Process(target=test, args=(lock, 2))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
        print "in main process : %d" % a

    if __name__ == '__main__':
        main()
The program reads a global variable, but the output is:
in process 1 : 10
in process 2 : 10
in main process : 0
It seems that the sub-processes cannot get and edit the global variable
properly. Also, if I change the program to read a file, each sub-process
reads the file completely, ignoring the lock.
How does this happen? And how can I solve this problem?
Answer: Global variables are not shared between processes. When you create and start a
new `Process()`, that process runs inside a separate "cloned" copy of the
current Python interpreter. Updating the variable from within a `Process()`
will only update the variable locally to the particular process it is updated
in.
To share data between Python processes, we need a
[`multiprocessing.Pipe()`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Pipe),
a [`multiprocessing.Queue()`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue),
a [`multiprocessing.Value()`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Value),
a [`multiprocessing.Array()`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Array)
or one of the other multiprocessing-safe containers.
Here's an example based on your code:
    import multiprocessing

    def worker(lock, counter, name):
        with lock:
            for i in range(10):
                counter.value += 1
                print "In process {}: {}".format(name, counter.value)

    def main():
        lock = multiprocessing.Lock()
        counter = multiprocessing.Value('i', 0)
        p1 = multiprocessing.Process(target=worker, args=(lock, counter, 1))
        p2 = multiprocessing.Process(target=worker, args=(lock, counter, 2))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
        print "In main process: {}".format(counter.value)

    if __name__ == '__main__':
        main()
This gives me:
In process 1: 10
In process 2: 20
In main process: 20
Now, if you really want to use a global variable, you can use a
[`multiprocessing.Manager()`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Manager),
but I think the first method is preferable, as the Manager is a "heavier"
solution. Here's an example:
Here's an example:
    import multiprocessing

    manager = multiprocessing.Manager()
    counter = manager.Value('i', 0)

    def worker(lock, name):
        global counter
        with lock:
            for i in range(10):
                counter.value += 1
                print "In process {}: {}".format(name, counter.value)

    def main():
        global counter
        lock = multiprocessing.Lock()
        p1 = multiprocessing.Process(target=worker, args=(lock, 1))
        p2 = multiprocessing.Process(target=worker, args=(lock, 2))
        p1.start()
        p2.start()
        p1.join()
        p2.join()
        print "In main process: {}".format(counter.value)

    if __name__ == '__main__':
        main()
|
If two variable values are identical, are they said to be sharing the same memory?
Question: If two variable values are identical, then they are said to share the same
memory... so does Python follow a shared-memory concept? And if I change one
value, will it change the other?
Answer: See Python data model described
[here](https://docs.python.org/3/reference/datamodel.html)
> Types affect almost all aspects of object behavior. Even the importance of
> object identity is affected in some sense: for immutable types, operations
> that compute new values may actually return a reference to any existing
> object with the same type and value, while for mutable objects this is not
> allowed. E.g., after a = 1; b = 1, a and b may or may not refer to the same
> object with the value one, depending on the implementation, but after c =
> []; d = [], c and d are guaranteed to refer to two different, unique, newly
> created empty lists. (Note that c = d = [] assigns the same object to both c
> and d.)
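A quick illustration in CPython (whether small integers are shared is an
implementation detail, hence the "may"):

    a = 1
    b = 1
    print(a is b)   # may be True: CPython caches small integers

    c = []
    d = []
    print(c is d)   # False: two distinct, newly created lists

    c.append(42)
    print(d)        # [] - changing c does not affect d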
|
"-bash: python2: command not found" on OS X
Question: I'm trying to use
[this](https://github.com/Tamriel/quod_libet_import_itunes_ratings) script to
import my iTunes library to another program.
At the step where I enter `python2 export_to_quod_libet.py`, I'm getting an
error message that says that the `python2` command can't be found. I figured
out through `python -v` that I definitely have Python 2.7 installed, so I'm
really confused about this.
I did find a [similar
question](http://stackoverflow.com/questions/37411607/command-python2-not-found)
being asked here, but the original poster was using Windows (whereas I'm using
OS X El Capitan), so a lot of what was said at least didn't _seem_ applicable
to my situation.
Could someone tell me what I'm doing wrong, please?
Answer: Maybe you could try to define an alias. It seems that `python2` is hardcoded
somewhere in the script.
You could try (just an example):
alias python2="python2.7"
and then run the script -- hope that helps.
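Note that aliases only apply to interactive shells; a symlink (assuming
`python2.7` is on your `PATH`) also covers scripts that invoke `python2`
directly:

    ln -s "$(which python2.7)" /usr/local/bin/python2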
Kind regards, Julian
|
Component not appearing in Tkinter Python interface
Question: I just started developing in Python to do some interfaces with Tkinter. There
are so many ways to build an interface, so I would like to know if the
structure of my code is correct. Also, I can run my script without errors,
but it doesn't show me the label "Hello, world".
Can you explain what is wrong?
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-

    # import
    import tkinter
    from tkinter import *
    from tkinter.filedialog import *
    from tkinter.messagebox import showerror

    class MyFrame(Frame):

        def __init__(self):
            # Definition of the main window
            Frame.__init__(self)
            self.master.geometry("800x800")
            self.master.title("Test application")

            label = Label(self, text="Hello, world")
            label.pack()

        def quit(self):
            sys.exit()

        # ACTION
        def load_file(self):
            filename = askopenfilename(filetypes=(("Input files", "*.igs")))
            if filename:
                try:
                    print("""here it comes: self.settings["template"].set(filename)""")
                except:  # <- naked except is a bad idea
                    showerror("Open Source File", "Failed to read file\n'%s'" % fname)
                return

    if __name__ == "__main__":
        MyFrame().mainloop()
Answer: Your Frame (`class MyFrame`) is never packed. Use `self.pack()` inside your
init to display it or pack it inside your main before calling mainloop on it.
The rest looks okay so far.
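A sketch of the first option, applied to the `__init__` from the question:

        def __init__(self):
            # Definition of the main window
            Frame.__init__(self)
            self.master.geometry("800x800")
            self.master.title("Test application")
            label = Label(self, text="Hello, world")
            label.pack()
            self.pack()  # without this, the frame itself is never displayed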
As you are using tkinter (so Python 3), I would personally consider using
`"some text {}".format(variable)`
over `"some text %s" % variable` whenever possible.
|
Cannot find plot function in GPy library (python)
Question: I am using the [GPy](https://github.com/SheffieldML/GPy "GPy") library in
Python 2.7 to perform Gaussian Process regressions. I started by following the
tutorial notebooks provided in the GitHub page.
Sample code :
    import numpy as np
    import matplotlib.pyplot as plt
    import GPy  # missing in the original snippet

    f = lambda x: np.sin(x**2)
    kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
    X = np.random.rand(2, 1)
    Y = f(X)

    m = GPy.models.GPRegression(X, Y, kernel)
    m.optimize_restarts(num_restarts=10, verbose=False)

    fig = m.plot()
    plt.show()
The weird thing I witnessed is that there is no **plot** function implemented
in the **GPRegression** class (ok, it's just a small sub-class of **GP**),
nor in its super-class (**GP**), nor in its super-super-class (**Model**)...
all located in **GPy.core**.
The plot function that is executed when I call **m.plot()** is in
**GPy.plotting.gpy_plot** (which does not contain any class, but still uses
the "self" keyword as a function argument - but maybe it's just a "bad" name
for a function argument?).
I cannot see how a **GPy.core.GP** object can access this plot function (at
first sight, there is no link whatsoever between the two Python files - Ctrl+F
"plot" in **GPy/core/gp.py** gives nothing, for example).
When I call

    vars(GPy.models.gp_regression.GP).keys()

the plot function is indeed there, although not directly implemented in
**GPy.core.GP**. Same thing for the following (minimal reproducible example):
    import GPy.core.gp
    import GPy.likelihoods
    import GPy.kern
    import matplotlib.pyplot as plt

    GPy.core.gp.GP.__dict__.keys()
Any idea of _how_ **GP** calls the plot function in **gpy_plot** , and _why_
it is coded this way ?
Answer: The plotting library gets "injected" in GPy/GPy/plotting/__init__.py's
[`inject_plotting()`](https://github.com/SheffieldML/GPy/blob/v1.0.9/GPy/plotting/__init__.py#L36).
Here the line for `plot()`:
from ..core import GP
...
GP.plot = gpy_plot.gp_plots.plot
I assume the reason for this design was that it allows easily changing the
plotting library _on-the-fly_ via
[`change_plotting_library()`](https://github.com/SheffieldML/GPy/blob/v1.0.9/GPy/plotting/__init__.py#L8).
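The underlying Python mechanism is plain attribute assignment on a class: a
free function whose first parameter happens to be called `self` becomes an
ordinary method once it is attached. A tiny illustration of the pattern
(hypothetical names):

    class GP(object):
        pass

    def plot(self):            # 'self' is just a parameter name here
        print("plotting", self)

    GP.plot = plot             # "inject" the function as a method
    GP().plot()                # now every GP instance has .plot()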
|
convert em-dash to hyphen in python
Question: I'm converting csv files into a pandas DataFrame. In the original file, one
of the columns contains em-dash characters, which I want replaced by a
hyphen "-".
NoDemande NoUsager Sens IdVehicule NoConduteur HeureDebutTrajet HeureArriveeSurSite HeureEffective'
42192001801 42192002715 — 157Véh 42192000153 ...
42192000003 42192002021 + 157Véh 42192000002 ...
42192001833 42192000485 — 324My3FVéh 42192000157 ...
My code:
    #coding=latin-1
    import pandas as pd
    import glob

    pd.set_option('expand_frame_repr', False)

    path = r'D:\Python27\mypfe\data_test'
    allFiles = glob.glob(path + "/*.csv")
    frame = pd.DataFrame()
    list_ = []

    for file_ in allFiles:
        df = pd.read_csv(file_, index_col=None, header=0, sep=';',
                         parse_dates=['HeureDebutTrajet', 'HeureArriveeSurSite', 'HeureEffective'],
                         dayfirst=True)
        df['Sens'].replace(u"\u2014", "-", inplace=True, regex=True)
        list_.append(df)
And it doesn't work at all; every time it only converts them into `?`, which
looks like this:
42191001122 42191002244 ? 181Véh 42191000114 ...
42191001293 42191001203 ? 319M9pVéh 42191000125 ...
42191000700 42191000272 ? 183Véh 42191000072 ...
And because I have french characters in the file, I'm using `latin-1` instead
of `utf-8`. If I delete the first line and write like this:
    df = pd.read_csv(file_, index_col=None, header=0, sep=';', encoding='windows-1252',
                     parse_dates=['HeureDebutTrajet', 'HeureArriveeSurSite', 'HeureEffective'],
                     dayfirst=True)
The result will be:
42191001122 42191002244 â?? 181Véh 42191000114 ...
42191001293 42191001203 â?? 319M9pVéh 42191000125 ...
42191000700 42191000272 â?? 183Véh 42191000072 ...
How can I make all the em-dashes `—` be replaced by `-`?
I added the `repr` output:
    for line in open(file_):
        print repr(line)
And the result turns out:
'"42191002384";"42191000118";"\xe2\x80\x94";"";"42191000182";...
'"42191002464";"42191001671";"+";"";"42191000182";...
'"42191000045";"42191000176";"\xe2\x80\x94";"620M9pV\xc3\xa9h";"42191000003";...
'"42191001305";"42191000823";"\xe2\x80\x94";"310V7pV\xc3\xa9h";"42191000126";...
Answer: `u'\u2014'` (EM DASH) can not be encoded in latin1/iso-8859-1, so that value
can not appear in a properly encoded latin1 file.
Possibly the files are encoded as windows-1252 for which `u'\u2014'` can be
encoded as `'\x97'`.
Another problem is that the CSV file apparently uses whitespace as the column
separator, but your code uses semicolons. You can specify whitespace as the
separator using `delim_whitespace=True`:
df = pd.read_csv(file_, delim_whitespace=True)
You can also specify the file's encoding using the `encoding` parameter.
`read_csv()` will convert the incoming data to unicode:
df = pd.read_csv(file_, encoding='windows-1252', delim_whitespace=True)
In Python 2 (I think that you're using that), if you do not specify the
encoding, the data remains in the original encoding, and this is probably the
reason that your replacements are not working.
Once you have properly loaded the file, you can replace characters as you have
been doing:
df = pd.read_csv(file_, encoding='windows-1252', delim_whitespace=True)
df['Sens'].replace(u'\u2014', '-', inplace=True)
* * *
**EDIT**
Following your update where you show the `repr()` output, your file would
appear to be UTF8 encoded, not latin1, and not Windows-1252. Since you are
using Python 2 you need to specify the encoding when loading the CSV file:
df = pd.read_csv(file_, sep=';', encoding='utf8')
df['Sens'].replace(u'\u2014', '-', inplace=True)
Because you specified an encoding, `read_csv()` will convert the incoming data
to unicode, so `replace()` should now work as shown above. It should be that
easy.
|
Python, Postgres, and integers with blank values?
Question: So I have some fairly sparse data columns where most of the values are blank
but sometimes have some integer value. In Python, if there is a blank then
that column is interpreted as a float and there is a .0 at the end of each
number.
I tried two things:
* Changed all of the columns to text and then stripped the .0 from everything
* Filled blanks with 0 and made each column an integer
Stripping the .0 is kind of time consuming on about 2mil+ rows per day and
then the data is in text format which means I can't do quick sums and stuff.
Filling blanks seems somewhat wasteful because some columns literally have
just a few actual values out of millions. My table for just one month is
already over 80gigs (200 columns, but many of the columns after about 30 or so
are pretty sparse).
What postgres datatype is best for this? There are NO decimals because the
columns contain the number of seconds and it must be pre-rounded by the
application.
Edit - here is what I am doing currently (but this bloats up the size and
seems wasteful):
    def create_int(df, col):
        df[col].fillna(0, inplace=True)
        df[col] = df[col].astype(int)
If I try to create the column astype(int) without filling in the 0s I get the
error:
error: Cannot convert NA to integer
Here is the link to the Gotcha about this:
<http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na>
So it makes each int a float. Should I change the datatypes in postgres to
numeric or something? I do not need high precision because there are no values
after the decimal.
Answer: You could take advantage of the fact you are using PostgreSQL (9.3 or above)
and implement a "poor man's sparse row" by converting your data into Python
dictionaries and then using a JSON datatype (JSONB is better).
The following Python snippets generate random data in the format you said you
have, convert it to appropriate JSON, and upload it into a PostgreSQL table
with a JSONB column.
    import psycopg2
    import json
    import random

    def row_factory(n=200, sparcity=0.1):
        return [random.randint(0, 2000) if random.random() < sparcity else None for i in range(n)]

    def to_row(data):
        result = {}
        for i, element in enumerate(data):
            if element is not None:
                result[i] = element
        return result

    def from_row(row, lenght=200):
        result = [None] * lenght
        for index, value in row.items():
            result[int(index)] = value
        return result

    con = psycopg2.connect("postgresql://...")
    cursor = con.cursor()
    cursor.execute("CREATE TABLE numbers (values JSONB)")

    def upload_data(rows=100):
        for i in range(rows):
            cursor.execute("INSERT INTO numbers VALUES(%s)",
                           (json.dumps(to_row(row_factory(sparcity=0.5))),))

    upload_data()

    # To retrieve the sum of all columns:
    cursor.execute("""SELECT {} from numbers limit 10""".format(
        ", ".join("sum(CAST(values->>'{}' as int))".format(i) for i in range(200))))
    result = cursor.fetchall()
It took me a while to find out how to perform numeric operations on the JSONB
data inside PostgreSQL (if you will be using the data from Python you can just
use the `from_row` function in the snippet above). But the last two lines
contain a SELECT that performs a SUM over all columns - the select statement
itself is assembled with Python string formatting - and the key to using a
JSON value as a number is to select it with the `->>` operator and then cast
it to a number (the `sum(CAST(values->>'0' as int))` part).
|
Can't access dropdown select using Selenium in Python
Question: I'm new to using Selenium in Python and I'm trying to access index data on
Barclays Live's website. Once I login and the page loads, I'm trying to select
'Custom1' from a dropdown in the page. The select object in the HTML code
associated with the list looks like this:
<select name="customViewId" class="formtext" onchange="submitFromSelect('username');return false;">
<option value=""> </option>
<option value="Favorite Indices">Favorite Indices</option>
<option value="Custom1">Custom1</option>
<option value="CB group">CB group</option>
<option value="Kevin Favorites">Kevin Favorites</option>
<option value="LB Gov/Cdt intermediate">LB Gov/Cdt intermediate</option>
</select>
This is my code up until I try to access this object:
    from selenium import webdriver
    from selenium.webdriver.support.select import Select

    # Get chrome driver and connect to Barclays live site
    browser = webdriver.Chrome("C:\Program Files (x86)\Google\Chrome\chromedriver.exe")
    browser.get('https://live.barcap.com/')

    # Locate username box and enter username
    username = browser.find_element_by_name("user")
    username.send_keys("username")

    # Locate password box and send password
    password = browser.find_element_by_name("password")
    password.send_keys("password")

    # Click login button
    login = browser.find_element_by_id("submit")
    login.click()

    # Open page where you can select indices
    browser.get("https://live.barcap.com/BC/barcaplive?menuCode=MENU_IDX_1061")
I've tried a number of proposed solutions that I've found, usually ending with
the error "Unable to locate element:" followed by whatever method I tried to
access the select object with. I can't seem to access it by name, by xpath,
or by using the Select() function. I've tried putting wait time in the code
in case the element hadn't loaded yet, with no luck. Some examples of things
I would expect to work, but don't, are:
    select_box = browser.find_element_by_name("customViewId")
    select_box = browser.find_element_by_xpath("//select[option[@value='Custom1']]")
My background isn't in programming, so go easy on me if this is a stupid
question. Thanks in advance for the help.
Answer: > The select element is indeed located in an iframe.
This means that you should
_[switch](http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.switch_to_frame)
into the context of the frame_ and only then find the element:
browser.switch_to.frame("frame_name_or_id")
select_box = browser.find_element_by_name("customViewId")
If you need to get back from the context of the frame, use:
browser.switch_to.default_content()
As for the manipulating-the-select-box part, there is a better way - use the
[`Select` class](http://selenium-python.readthedocs.io/api.html#selenium.webdriver.support.select.Select):
from selenium.webdriver.support.select import Select
select_box = Select(browser.find_element_by_name("customViewId"))
select_box.select_by_visible_text("CB group")
|