Receiving an empty list when trying to make a webscraper to parse websites for links
Question: I was reading [this](http://docs.python-guide.org/en/latest/scenarios/scrape/)
website and learning how to make a webscraper with `lxml` and `requests`. This
is the webscraper code:
from lxml import html
import requests
web_page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
tree = html.fromstring(web_page.content)
buyers = tree.xpath('//div[@title="buyer-name"]/text()')
prices = tree.xpath('//span[@class="item-price"]/text()')
print "These are the buyers: ", buyers
print "And these are the prices: ", prices
It works as intended, but when I try to scrape
<https://www.reddit.com/r/cringe/> for all the links I'm simply getting `[]`
as a result:
#this code will scrape a Reddit page
from lxml import html
import requests
web_page = requests.get("https://www.reddit.com/r/cringe/")
tree = html.fromstring(web_page.content)
links = tree.xpath('//div[@class="data-url"]/text()')
print links
What's the problem with the xpath I'm using? I can't figure out what to put in
the square brackets in the xpath
Answer: First off, your xpath is wrong: there is no element with the class _data-url_ , it is
an _attribute_ , so you want `div[@data-url]`, and to extract the attribute itself
you use `/@data-url`:
from lxml import html
import requests
headers = {"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36"}
web_page = requests.get("https://www.reddit.com/r/cringe/", headers=headers)
tree = html.fromstring(web_page.content)
links = tree.xpath('//div[@data-url]/@data-url')
print links
Also you may see html like the following returned if you query too often or
don't use a user-agent so respect what they recommend:
<p>we're sorry, but you appear to be a bot and we've seen too many requests
from you lately. we enforce a hard speed limit on requests that appear to come
from bots to prevent abuse.</p>
<p>if you are not a bot but are spoofing one via your browser's user agent
string: please change your user agent string to avoid seeing this message
again.</p>
<p>please wait 6 second(s) and try again.</p>
<p>as a reminder to developers, we recommend that clients make no
more than <a href="http://github.com/reddit/reddit/wiki/API">one
request every two seconds</a> to avoid seeing this message.</p>
</body>
</html>
If you plan on scraping a lot of reddit, you may want to look at
[PRAW](https://praw.readthedocs.io/en/stable/) and
[w3schools](http://www.w3schools.com/xsl/xpath_syntax.asp) has a nice
introduction to _xpath_ expressions.
To break it down:
//div[@data-url]
searches the doc for _div's_ that have a `data-url` attribute; we don't care
what the attribute value is, we just want the div.
That alone just finds the _div's_ ; if you removed the `/@data-url` you would end up
with a list of elements like:
[<Element div at 0x7fbb27a9a940>, <Element div at 0x7fbb27a9a8e8>,..
`/@data-url` actually extracts the _attribute value_ , i.e. the _hrefs_.
Also, if you only wanted specific links, e.g. the _youtube_ links, you could filter
using _contains_ :
'//div[contains(@data-url, "www.youtube.com")]/@data-url'
`contains(@data-url, "www.youtube.com")` will check if the _data-url_
attribute values contain _www.youtube.com_ so the output will be a list of the
_youtube_ links.
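Plugged back into the earlier snippet (reusing the `tree` object built above), that looks like:
youtube_links = tree.xpath('//div[contains(@data-url, "www.youtube.com")]/@data-url')
print youtube_links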
|
activating virtualenv in windows which was created in ubuntu
Question: I created a `virtualenv` in ubuntu for one of my projects. Later I wanted to
use the same `virtualenv` in Windows and tried activating it using only the
`activate` command.
But the environment it activated had the name `root` instead of the original one.
Also I **could not import python libraries** which were installed in the same
environment in Ubuntu.
**Things to note:** I wanted to use Python 3 for this project, so I initialized
it with Python 3 in Ubuntu, whereas on Windows I have only Python 2. Does this
have anything to do with the issue?
Answer: You'll not be able to use a virtual environment created in Linux on Windows or
vice versa. The installation files for different packages and libraries would
be different for both the platforms, and you will not be able to use the raw
Linux binaries on Windows anyway.
If you want to maintain parity between the environments, I suggest you write a
script for the setup process of the virtual environment, and use it to create
two different virtual environments, one for Windows and one for Linux.
Also, you'll need suitable Python versions on both systems, unless your codebase is
compatible with both Python 2 and Python 3.
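As a rough sketch of such a setup script (assuming the project pins its dependencies in a `requirements.txt`, which is not shown in the question), you could run the same Python code on each platform:
import os
import subprocess
import sys

def create_env(path):
    # create a fresh environment with the interpreter running this script
    subprocess.check_call([sys.executable, "-m", "venv", path])
    # pip lives in Scripts/ on Windows and bin/ on Linux
    bindir = "Scripts" if os.name == "nt" else "bin"
    pip = os.path.join(path, bindir, "pip")
    subprocess.check_call([pip, "install", "-r", "requirements.txt"])

create_env("env-" + sys.platform)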
|
Use module as class instance in Python
Question: ## TL; DR
Basically the question is about hiding from the user the fact that my modules
have class implementations so that the user can use the module as if it has
direct function definitions like `my_module.func()`
## Details
Suppose I have a module `my_module` and a class `MyThing` that lives in it.
For example:
# my_module.py
class MyThing(object):
    def say(self):
        print("Hello!")
In another module, I might do something like this:
# another_module.py
from my_module import MyThing
thing = MyThing()
thing.say()
But suppose that I don't want to do all that. What I really want is for
`my_module` to create an instance of MyThing automatically on `import` such
that I can just do something like the following:
# yet_another_module.py
import my_module
my_module.say()
In other words, whatever method I call on the module, I want it to be
forwarded directly to a default instance of the class contained in it. So, to
the user of the module, it might seem that there is no class in it, just
direct function definitions in the module itself (where the functions are
actually methods of a class contained therein). Does that make sense? Is there
a short way of doing this?
I know I could do the following in `my_module`:
class MyThing(object):
    def say(self):
        print("Hello!")

default_thing = MyThing()

def say():
    default_thing.say()
But then suppose `MyThing` has many "public" methods that I want to use, then
I'd have to explicitly define a "forwarding" function for every method, which
I don't want to do.
As an extension to my question above, is there a way to achieve what I want
above, but also be able to use code like `from my_module import *` and be able
to use methods of `MyThing` directly in another module, like `say()`?
Answer: In module `my_module` do the following:
class MyThing(object):
    ...
_inst = MyThing()
say = _inst.say
move = _inst.move
This is _exactly_ the pattern used by the [`random`
module](https://github.com/python/cpython/blob/master/Lib/random.py#L736).
Doing this automatically is somewhat contrived. First, one needs to find out
_which_ of the instance/class attributes are the methods to export... perhaps
export only names which do not start with `_`, something like
import inspect

for name, member in inspect.getmembers(MyThing(), inspect.ismethod):
    if not name.startswith('_'):
        globals()[name] = member
However in this case I'd say that explicit is better than implicit.
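If you also want `from my_module import *` to work, a minimal sketch of `my_module.py` could look like the following (the `__all__` list is an addition for illustration, not something from the question):
# my_module.py
class MyThing(object):
    def say(self):
        print("Hello!")

_inst = MyThing()
say = _inst.say

# controls what `from my_module import *` exports
__all__ = ['MyThing', 'say']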
|
Only one usage of each socket address is normally permitted Python
Question: I wrote a basic program to create a socket with a server and a client. But
the problem is that when I run the code, it gives me an error saying that only
one usage of each socket address is normally permitted. So I think the problem
is due to the port; I changed the port and it still doesn't work. How do I get
this to work?
This is my code :
Server
import socket
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.bind(('localhost',3200))
sock.listen(1)
print "Server is ready to receive data..."
client, address = sock.accept()
msg = client.recv(1024)
print msg
Client
import socket
connection_to_server = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
connection_to_server.bind(('localhost',3200))
msg = raw_input("Please enter a content :")
connection_to_server.send(msg)
Thanks for your help !
Answer: I think there's a fundamental misunderstanding of how sockets work here.
The
[`socket.bind()`](https://docs.python.org/2/library/socket.html#socket.socket.bind)
call is used to bind to a particular port on a particular interface, the pair
specified using a network address (bind to port `8080` on `127.0.0.1`). You
need to do this on the server side before you can start reading incoming data
i.e "listening" on a particular socket. Only the server needs to do this. The
client will then use
[`socket.connect`](https://docs.python.org/2/library/socket.html#socket.socket.connect)
to connect to this socket.
As spectras pointed out in the comments, a bind is necessary when you need to
communicate through a particular interface/port combination, which is almost
always necessary for the server, but not always for the client. The client and
server can't _both_ have access/bind to the same port on the same interface,
it makes little sense to do so.
Your client and server both try to start listening on the same socket, which
is as the error message suggests, not allowed.
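A minimal sketch of what the client side could look like instead, assuming the server above is listening on `localhost:3200`:
import socket

connection_to_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
connection_to_server.connect(('localhost', 3200))   # connect to the listening server, don't bind
msg = raw_input("Please enter a content :")
connection_to_server.send(msg)
connection_to_server.close()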
You should go through the [Socket Programming
HOWTO](https://docs.python.org/2/howto/sockets.html) before proceeding
further.
|
Jupyter notebook and QT Console are calling different version of pandas
Question: QTConsole is running the latest version of pandas (i.e. 0.18). However, when I
import pandas in Jupyter notebook, it can only import 0.15. How can I resolve
this?
**QT Console:**
Jupyter QtConsole 4.2.0
Python 2.7.11 |Anaconda 4.0.0 (x86_64)| (default, Dec 6 2015, 18:57:58)
Type "copyright", "credits" or "license" for more information.
IPython 4.1.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
import pandas
print pandas.__version__
0.18.0
**Jupyter**
import pandas
print pandas.__version__
0.15.0
Answer: You probably have different versions of Python installed via different
distributions. If you are using Windows, I recommend uninstalling all Python
versions/distributions, rebooting and then only installing one.
If you are using Mac, ensure that you have only one version of Anaconda
installed and that it is the version first in your `PATH` if you are using a
terminal. It may be that a different version has been installed by, for
instance, Homebrew. To check your path, run `!echo $PATH` from both of the
environments. You should see your anaconda directory early in the path (before
`/usr/local/bin` and `/usr/bin`). You can also do `!which python` from both of
the environments to see which Python binary is being used.
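A quick way to check, run from both the QtConsole and the notebook, is to print which interpreter and which pandas installation each front-end actually uses:
import sys
import pandas
print(sys.executable)       # which Python binary this front-end runs
print(pandas.__version__)   # which pandas version it imports
print(pandas.__file__)      # where that pandas installation lives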
|
Posting Request Data
Question: I am trying to post requests with Python to register an account.
It is not creating the account.
Any help would be great!
It has to accept the user's email and password and confirmation of their
password.
import requests

with requests.Session() as c:
    url = "http://icebithosting.com/register.php"
    EMAIL = "charliep1551@gmail.com"
    PASSWORD = "test"
    c.get(url)
    login_data = dict(username=EMAIL, password=PASSWORD, confirm=PASSWORD)
    c.post(url, data=login_data)
    page = c.get("http://icebithosting.com/")
    print(page.content)
Answer: Your form field names are incorrect; they should be:
email:'foo@bar.com'
password:'bar'
confirm_password:'bar' # confirm_password
Which you can see if you monitor the request in chrome tools:
[![enter image description
here](http://i.stack.imgur.com/ISTEe.png)](http://i.stack.imgur.com/ISTEe.png)
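So the posted data would become something like the following (field names taken from the form above; whether the site also requires hidden fields is not known):
login_data = dict(email=EMAIL, password=PASSWORD, confirm_password=PASSWORD)
c.post(url, data=login_data)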
|
matplotlib.pyplot errorbar ValueError depends on array length?
Question: Good afternoon.
I've been struggling with this for a while now, and although I can find
similar problems online, nothing I found could really help me resolve it.
Starting with a standard data file (.csv or .txt, I tried both) containing
three columns (x, y and the error of y), I want to read in the data and
generate a line plot including error bars.
I can plot the x and y values without a problem, but if I want to add
errorbars using the matplotlib.pyplot errorbar utility, I get the following
error message:
`ValueError: yerr must be a scalar, the same dimensions as y, or 2xN.`
The code below works if I use some arbitrary arrays (numpy or plain python),
but not for data read from the file. I've tried converting the tuples which I
obtain from my input code to numpy arrays using asarray, but to no avail.
import numpy as np
import matplotlib.pyplot as plt
row = []
with open("data.csv") as data:
for line in data:
row.append(line.split(','))
column = zip(*row)
x = column[0]
y = column[1]
yer = column[2]
plt.figure()
plt.errorbar(x,y,yerr = yer)
fig = plt.gcf()
fig.set_size_inches(18.5, 10.5)
fig.savefig('example.png', dpi=300)
It must be that I am overlooking something. I would be very grateful for any
thoughts on the matter.
Answer: `yerr` is the error added to and subtracted from the `y` value. In your case
both the added and the subtracted error equal half of the third column. Note that
`np.loadtxt` below also parses the columns into floats, whereas splitting the lines
manually leaves you with strings.
import numpy as np
import matplotlib.pyplot as plt
data = np.loadtxt('data.csv', delimiter=',')
plt.figure()
yerr_ = np.tile(data[:, 2]/2, (2, 1))
plt.errorbar(data[:, 0], data[:, 1], yerr=yerr_)
plt.xlim([-1, 3])
plt.show()
data.csv
0,2,0.3
1,4,0.4
2,3,0.15
|
"FailedParse: [...] Expecting end of text" when trying to parse parenthesized expressions in grako
Question: In `search_query.ebnf`, I have the following grammar definition for `grako`
3.14.0:
@@grammar :: SearchQuery
start = search_query $;
search_query = parenthesized_query | combined_query | search_term;
parenthesized_query = '(' search_query ')';
combined_query = search_query binary_operator search_query;
binary_operator = '&' | '|';
search_term = /\w+/;
I generate the parser with
grako search_query.ebnf --outfile search_query_parser.py
The result works as I expected for these inputs:
import search_query_parser
parser = search_query_parser.SearchQueryParser()
parser.parse('a') # -> 'a'
parser.parse('(a)') # -> ['(', 'a', ')']
parser.parse('a & b') # -> ['a', '&', 'b']
parser.parse('a | b') # -> ['a', '|', 'b']
parser.parse('(a|b)&c') # -> ['(', ['a', '|', 'b'], ')', '&', 'c']
but if I have a parenthesized expression at the right hand side of an
operator, the parser gives me an error message:
parser.parse('c&(a|b)')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/search_query_parser.py", line 82, in parse
return super(SearchQueryParser, self).parse(text, *args, **kwargs)
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 227, in parse
result = rule()
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 86, in wrapper
return self._call(rule, name, params, kwparams)
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 475, in _call
node, newpos, newstate = self._invoke_rule(rule, name, params, kwparams)
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 511, in _invoke_rule
rule(self)
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/search_query_parser.py", line 87, in _start_
self._check_eof()
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 650, in _check_eof
self._error('Expecting end of text.')
File "/home/das-g/.virtualenvs/tmp-d0fd5a9428f7612a/lib/python3.5/site-packages/grako/contexts.py", line 450, in _error
item
grako.exceptions.FailedParse: (1:2) Expecting end of text. :
c&(a|b)
^
start
Am I doing something wrong?
Answer: > Am I doing something wrong?
I don't think so.
This looks like a [known
bug](https://bitbucket.org/apalala/grako/issues/81/left-recursion) in `grako`
concerning "left recursion".
The workaround mentioned in the bug seems to work for your case, too:
@@grammar :: SearchQuery
start = search_query $;
search_query = parenthesized_query | combined_query | search_term;
parenthesized_query = '(' search_query | search_term ')'; ## Workaround
combined_query = search_query binary_operator search_query;
binary_operator = '&' | '|';
search_term = /\w+/;
i.e. mention `search_term` explicitly inside the parentheses, even though the
`search_query` rule should be able to produce it, too.
|
phantomjs not loading instagram and pintersest webpages
Question: I'm using PhantomJS 2.1.1 in python 2.7.12 under Ubuntu Server 16.04.1, with
Display from pyvirtualdisplay
PhantomJS is unable to load instagram interactive dom pages
(<https://www.instagram.com/accounts/login/>). The page code should be within
<span id="react-root"></span>
but it remains empty.
Instagram pages are correctly loaded with PhantomJS 2.1.1 in python 2.7.10
under Mac OS X 10.11.6; PhantomJS under Ubuntu Server can correctly load many
other websites (twitter, tumblr etc.), so I guess that there's some missing
module in Ubuntu Server but I can't understand which one.
It can't load <https://www.pinterest.com/login/> either, but this page is
loaded correctly using simply curl.
Could someone help? Thank you.
Here's the python code:
from selenium import webdriver
from pyvirtualdisplay import Display
display = Display(visible=0,size=(800,600))
display.start()
browser = webdriver.PhantomJS()
browser.set_window_size(800, 600)
browser.get('https://www.instagram.com/accounts/login/')
or
browser.get('https://www.pinterest.com/login/')
the ghostdriver.log
[INFO - 2016-09-12T16:08:37.057Z] GhostDriver - Main - running on port 49739
[INFO - 2016-09-12T16:08:37.933Z] Session [2a14fc60-7903-11e6-a755-53e4799f55f3] - page.settings - {"XSSAuditingEnabled":false,"javascriptCanCloseWindows":true,"javascriptCanOpenWindows":true,"javascriptEnabled":true,"loadImages":true,"localToRemoteUrlAccessEnabled":false,"userAgent":"Mozilla/5.0 (Unknown; Linux x86_64) AppleWebKit/538.1 (KHTML, like Gecko) PhantomJS/2.1.1 Safari/538.1","webSecurityEnabled":true}
[INFO - 2016-09-12T16:08:37.933Z] Session [2a14fc60-7903-11e6-a755-53e4799f55f3] - page.customHeaders: - {}
[INFO - 2016-09-12T16:08:37.933Z] Session [2a14fc60-7903-11e6-a755-53e4799f55f3] - Session.negotiatedCapabilities - {"browserName":"phantomjs","version":"2.1.1","driverName":"ghostdriver","driverVersion":"1.2.0","platform":"linux-unknown-64bit","javascriptEnabled":true,"takesScreenshot":true,"handlesAlerts":false,"databaseEnabled":false,"locationContextEnabled":false,"applicationCacheEnabled":false,"browserConnectionEnabled":false,"cssSelectorsEnabled":true,"webStorageEnabled":false,"rotatable":false,"acceptSslCerts":false,"nativeEvents":true,"proxy":{"proxyType":"direct"}}
[INFO - 2016-09-12T16:08:37.934Z] SessionManagerReqHand - _postNewSessionCommand - New Session Created: 2a14fc60-7903-11e6-a755-53e4799f55f3
* * *
Update: installing phantomjs with
sudo apt-get install phantomjs
it correctly loads the entire page. But this package is missing some important
third-party dependencies (such as find_element Atom).
installing phantomjs with
npm install phantomjs-prebuilt
it doesn't correctly load the page (even if it has got all third-party Atoms).
Is there a way to use the executable installed with apt-get and third-party
Atoms installed by npm?
Answer: SOLVED.
I solved it by compiling PhantomJS on Ubuntu Server directly from the git repository,
so maybe the pre-compiled binaries are not complete.
Details here: <http://phantomjs.org/build.html>
|
Importing Tensorflow Session Bundle in Python
Question: How do you import a TensorFlow session bundle from inside Python? The docs
explain [exporting from
Python](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/session_bundle#exporting-
python-code) and [importing in
C++](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/session_bundle#importing-
c-code).
UPDATE: I found the following:
1. [`load_session_bundle_from_path`](https://github.com/tensorflow/tensorflow/blob/c856366b739850a9f4b0bf1469de7f052619042b/tensorflow/contrib/session_bundle/session_bundle.py#L35)
2. [`python.saved_model.loader.load`](https://github.com/tensorflow/tensorflow/blob/c856366b739850a9f4b0bf1469de7f052619042b/tensorflow/python/saved_model/loader.py#L119)
Answer: SessionBundle consists of a checkpoint and a MetaGraph definition that's
needed for serving (see
[here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/session_bundle)).
Since TensorFlow Serving is in C++, I don't think you will find any Python
examples.
However, if you are using Python, you don't actually need this MetaGraph
definition, you can just start a new session and restore from the checkpoint
file, and subsequently do the inferences from this new session. You can find
some good examples
[here](http://stackoverflow.com/questions/38935428/tensorflow-rest-frontend-
but-not-tensorflow-serving).
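A rough sketch of that checkpoint-restore approach, assuming TensorFlow 1.x-style graphs; the paths `'model.meta'`/`'model'` and the tensor names `'output:0'`/`'input:0'` are placeholders, not anything defined in the question:
import tensorflow as tf

with tf.Session() as sess:
    # rebuild the graph from the exported MetaGraph, then load the variables
    saver = tf.train.import_meta_graph('model.meta')
    saver.restore(sess, 'model')
    # look up the tensors you need by name and run inference
    output = sess.graph.get_tensor_by_name('output:0')
    print(sess.run(output, feed_dict={'input:0': [[1.0, 2.0]]}))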
|
Openshift python requests proxy permission denied
Question: I'm trying to use a proxy with the python 'requests' package on an Openshift
server. I am getting a permission denied error. See below.
Is Openshift blocking the connection or am I not configuring it correctly?
Something else? It seems Openshift doesn't want to let me connect to a proxy,
because the same code works fine locally and on Heroku.
**Code**
import requests
from ssl import PROTOCOL_TLSv1
import ssladapter
proxies = {'https': 'http://{}:{}@96.44.147.34:6060'.format(CFG.proxy_username, CFG.proxy_password)}
url1 = 'https://reservaciones.volaris.com/Flight/DeepLinkSearch'
session = requests.Session()
session.mount('https://', ssladapter.SSLAdapter(ssl_version=PROTOCOL_TLSv1))
request1 = session.get(url1, proxies=proxies)
**Traceback**
requests.exceptions.ProxyError: HTTPSConnectionPool(host='reservaciones.volaris.com', port=443): Max retries exceeded with url: /Flight/DeepLinkSearch (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f4e78386ad0>: Failed to establish a new connection: [Errno 13] Permission denied',)))
Answer: Most probably OpenShift blocks uncommon outgoing ports for [security
reasons](http://security.stackexchange.com/questions/24310/why-block-outgoing-
network-traffic-with-a-firewall). Your proxy is listening on port 6060. You should
ssh into your gear and test the connection with `telnet`.
In my gear, port 6060 is blocked. See the attached screenshot.
[portquiz](http://portquiz.net/) listens on all TCP ports.
[![enter image description
here](http://i.stack.imgur.com/tdBWP.png)](http://i.stack.imgur.com/tdBWP.png)
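If `telnet` is not available on the gear, an equivalent quick check from Python, using the proxy address from the question, would be:
import socket

try:
    socket.create_connection(('96.44.147.34', 6060), timeout=5).close()
    print('proxy port reachable')
except socket.error as exc:
    print('blocked or unreachable: %s' % exc)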
|
error using Python Elasticserarch-py package
Question: So I am trying to create a connection to AWS ES. I have successfully connected
to my S3 bucket in the same zone. However, when I try to connect to ES, I get
this message every time.
Please install requests to use RequestsHttpConnection.
I have imported the correct module but nothing seems to fix this issue. Here
is my code
import elasticsearch
from elasticsearch import Elasticsearch, RequestsHttpConnection
from boto3 import client, logging, s3, Session
host = 'search-esdomain-t3rfr4trerdgfdh6t4t43ef.us-east-1.es.amazonaws.com'
es = Elasticsearch(
    hosts = host,
    connection_class = RequestsHttpConnection,
    http_auth = ('user', 'password'),
    use_ssl = True,
    verify_certs = False)
This looks the same as every example I can find but for some reason it will
not connect.
This is with Python 3.5 and my dev environment is VS 2015.
Answer: As per the documentation for [elasticsearch-py](http://elasticsearch-
py.readthedocs.io/en/master/transports.html).
> Note that the RequestsHttpConnection requires requests to be installed.
You need to explicitly install the [requests](http://docs.python-requests.org/en/master/)
module if it does not already exist on your `PYTHONPATH`.
|
How to integrate a python program into a kivy app
Question: I'm working on an app written in python with the kivy modules to develop a
cross-platform app. Within this app I have a form which takes some numerical
values. I would like these numerical values to be passed to another python
program I've written, used to calculate some other values, and passed back to
the app and returned to the user. The outside program is currently not
recognizing that the values I'm trying to pass to it exist. Below is sample
code from the 3 files I'm using, 2 for the app and 1 for the outside program.
I apologize about the abundance of seemingly unused kivy modules being
imported, I use them all in the full app.
main.py
import kivy
import flowcalc
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.dropdown import DropDown
from kivy.uix.spinner import Spinner
from kivy.uix.button import Button
from kivy.base import runTouchApp
from kivy.uix.textinput import TextInput
from kivy.properties import NumericProperty, ReferenceListProperty, ObjectProperty, ListProperty
from kivy.uix.gridlayout import GridLayout
from kivy.uix.scrollview import ScrollView
from kivy.core.window import Window
from kivy.uix.slider import Slider
from kivy.uix.scatter import Scatter
from kivy.uix.image import AsyncImage
from kivy.uix.carousel import Carousel
Builder.load_file('main.kv')
#Declare Screens
class FormScreen(Screen):
    pass

class ResultsScreen(Screen):
    pass

#Create the screen manager
sm = ScreenManager()
sm.add_widget(FormScreen(name = 'form'))
sm.add_widget(ResultsScreen(name = 'results'))

class TestApp(App):
    def build(self):
        return sm

if __name__ == '__main__':
    TestApp().run()
main.kv
<FormScreen>:
    BoxLayout:
        orientation: 'vertical'
        AsyncImage:
            source: 'sample.png'
            size_hint: 1, None
            height: 50
        GridLayout:
            cols: 2
            Label:
                text: 'Company Industry'
            Label:
                text: 'Sample'
            Label:
                text: 'Company Name'
            TextInput:
                id: companyname
            Label:
                text: 'Company Location'
            TextInput:
                id: companylocation
            Label:
                text: 'Data1'
            TextInput:
                id: data1
            Label:
                text: 'Data2'
            TextInput:
                id: data2
            Label:
                text: 'Data3'
            TextInput:
                id: data3
        Button:
            text: 'Submit'
            size_hint: 1, .1
            on_press: root.manager.current = 'results'

<ResultsScreen>:
    BoxLayout:
        orientation: 'vertical'
        AsyncImage:
            source: 'sample.png'
            size_hint: 1, None
            height: 50
        Label:
            text: 'Results'
            size_hint: 1, .1
        GridLayout:
            cols: 2
            Label:
                text: 'Results 1'
            Label:
                text: results1
            Label:
                text: 'Results 2'
            Label:
                text: results2
            Label:
                text: 'Results 3'
            Label:
                text: results3
            Label:
                text: 'Results 4'
            Label:
                text: results4
otherprogram.py
data1float = float(data1.text)
data2float = float(data2.text)
data3float = float(data3.text)
results1 = data1float + data2float
results2 = data1float - data3float
results3 = data2float * data3float
results4 = 10 * data2float
Answer: As far as I understood, you want the labels in the GridLayout in the last
section of your code to get their texts from your Python code. You could do
something like this:
from otherprogram import results1, results2, results3, results4

class ResultsScreen(Screen):
    label1_text = results1
    label2_text = results2
    label3_text = results3
    label4_text = results4
then in your .kv file you could access these values through the root widget's
attributes:
    Label:
        text: root.label1_text
and so on.
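An alternative to importing module-level results, not what the answer above does, is to wrap the calculation in a function so it only runs once the form values exist; a rough sketch (the function name and result keys are made up for illustration):
# otherprogram.py -- a sketch: expose a function instead of module-level results
def compute(data1, data2, data3):
    d1, d2, d3 = float(data1), float(data2), float(data3)
    return {
        'results1': d1 + d2,
        'results2': d1 - d3,
        'results3': d2 * d3,
        'results4': 10 * d2,
    }
The `FormScreen` could then define a submit method that reads `self.ids.data1.text`, `self.ids.data2.text` and `self.ids.data3.text`, calls `compute()`, and stores the result where `ResultsScreen` can read it.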
|
Connecting to Azure SQL with Python
Question: I am trying to connect to a SQL Database hosted in Windows Azure through
MySQLdb with Python.
I keep getting an error mysql_exceptions.OperationalError: (2001, 'Bad
connection string.')
This information works when connecting through .NET (vb, C#) but I am
definitely not having any luck here.
For the code below I used my server's name from Azure followed by .database.windows.net.
Is this the correct way to go about this?
Here is my code:
#!/usr/bin/python
import MySQLdb
conn = MySQLdb.connect(host="<servername>.database.windows.net", user="myUsername", passwd="myPassword", db="db_name")
cursor = conn.cursor()
I have also tried using pyodbc with FreeTDS with no luck.
Answer: @Kyle Moffat, what OS are you on? Here is how you can use pyodbc on Linux and
Windows: <https://msdn.microsoft.com/en-us/library/mt763261(v=sql.1).aspx>
**Windows:**
* Download and install Python
* Install the Microsoft ODBC Driver 11 or 13:
* v13: <https://www.microsoft.com/en-us/download/details.aspx?id=50420>
* v11: <https://www.microsoft.com/en-us/download/details.aspx?id=36434>
* Open cmd.exe as an administrator
* Install pyodbc using pip - Python package manager
cd C:\Python27\Scripts>
pip install pyodbc
**Linux:**
* Open a terminal. Install the Microsoft ODBC Driver 13 for Linux. For Ubuntu 15.04+:
sudo su
wget https://gallery.technet.microsoft.com/ODBC-Driver-13-for-Ubuntu-b87369f0/file/154097/2/installodbc.sh
sh installodbc.sh
* For RedHat 6,7
sudo su
wget https://gallery.technet.microsoft.com/ODBC-Driver-13-for-SQL-8d067754/file/153653/4/install.sh
sh install.sh
* Install pyodbc
sudo -H pip install pyodbc
Once you install the ODBC driver and pyodbc you can use this Python sample to
connect to Azure SQL DB
import pyodbc
server = 'tcp:myserver.database.windows.net'
database = 'mydb'
username = 'myusername'
password = 'mypassword'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 13 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()
cursor.execute("SELECT @@version;")
row = cursor.fetchone()
while row:
    print row[0]
    row = cursor.fetchone()
If you are not able to install the ODBC Driver you can also try pymssql +
FreeTDS
sudo apt-get install python
sudo apt-get --assume-yes install freetds-dev freetds-bin
sudo apt-get --assume-yes install python-dev python-pip
sudo pip install pymssql==2.1.1
Once you follow these steps, you can use the following code sample to connect:
<https://msdn.microsoft.com/en-us/library/mt715796(v=sql.1).aspx>
|
simple SNTP python script
Question: I need help to complete the following script:
import socket
import struct
import sys
import time
NTP_SERVER = '0.uk.pool.ntp.org'
TIME1970 = 2208988800L
def sntp_client():
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = str.encode('\x1b' + 47 * '\0')
    client.sendto(data, (NTP_SERVER, 123))
    data, addr = client.recvfrom(1024)
    if data:
        print('Response received from:', addr)
        t = struct.unpack('!12I', data)[10]
        t -= TIME1970
        print('\tTime: %s' % time.ctime(t))

if __name__ == '__main__':
    sntp_client()
Expected output:
Response received from: ('80.82.244.120', 123)
Time: Tue Sep 13 14:49:38 2016
The problem is that the program gives no output. It looks like it gets stuck at:
data, addr = client.recvfrom(1024)
I hope someone can help me with this.
Answer: There is nothing wrong with your script as written; you need to look for
another reason why the server might not be responding to you, such as firewall
settings. My own Python SNTP script is almost exactly the same:
#!/bin/env python
import socket
import struct
import sys
import time

TIME1970 = 2208988800L # Thanks to F.Lundh
pow2_31 = pow(2,31)
pow2_32 = pow(2,32)
pow2_16 = pow(2,16)

if len(sys.argv) < 2:
    sys.stderr.write("Usage : " + sys.argv[0] + " <SNTP server>")
    exit(1)

server = sys.argv[1]
client = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
data = '\x1b' + 47 * '\0'
time_start = time.time()

try:
    client.sendto( data, ( server, 123 ))
    client.settimeout(2)
except:
    print "server <%s> not recognized" % (server)
    exit(2)

try:
    data, address = client.recvfrom( 1024 )
except socket.timeout:
    print "timed out"
    exit(3)

if data:
    time_reply = (time.time() - time_start) * 1000
    print 'received %d bytes from %s in %d ms :' % (len(data), address, time_reply)
    upacket = struct.unpack( '!48B', data )
    print upacket
usage: $ ./sntp_client.py 0.uk.pool.ntp.org
received 48 bytes from ('83.170.75.28', 123) in 154 ms : (28, 3, 3, 236, 0, 0,
1, 171, 0, 0, 3, 0, 20, 139, 208, 232, 219, 177, 86, 148, 230, 192, 1, 15, 0,
0, 0, 0, 0, 0, 0, 0, 219, 177, 88, 27, 60, 214, 85, 212, 219, 177, 88, 27, 60,
238, 157, 39)
|
how to print json data
Question: I have the following JSON file and Python code, and I need the output shown in the example below...
**json file**
{"b": [{"1": "add"},{"2": "act"}],
"p": [{"add": "added"},{"act": "acted"}],
"pp": [{"add": "added"},{"act": "acted"}],
"s": [{"add": "adds"},{"act": "acts"}],
"ing": [{"add": "adding"},{"act": "acting"}]}
**python**
import json
data = json.load(open('jsonfile.json'))
#print data
**output example**
>> b
>> p
>> pp
>> s
>> ing
Any ideas how to do that?
Answer: This doesn't have anything to do with JSON. You have a dictionary, and you
want to print the keys, which you can do with `data.keys()`.
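A minimal sketch (note that a plain dict does not guarantee the key order shown in the expected output):
import json

data = json.load(open('jsonfile.json'))
for key in data:      # iterating over a dict yields its keys
    print(key)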
|
Easiest way to parallelise a call to map?
Question: Hey I have some code in Python which is basically a World Object with Player
objects. At one point the Players all get the state of the world and need to
return an action. The calculations the players do are independent and only use
the instance variables of the respective player instance.
while True:
#do stuff, calculate state with the actions array of last iteration
for i, player in enumerate(players):
actions[i] = player.get_action(state)
What is the easiest way to run the inner `for` loop parallel? Or is this a
bigger task than I am assuming?
Answer: The most straightforward way is to use
[multiprocessing.Pool.map](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.map)
(which works just like `map`):
import multiprocessing

def do_stuff(player):
    ... # whatever you do here is executed in another process

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    while True:
        pool.map(do_stuff, players)
Note however that this uses multiple processes. Due to the
[GIL](https://wiki.python.org/moin/GlobalInterpreterLock), threads in CPython cannot
run Python bytecode in parallel, so multithreading will not speed up CPU-bound work like this.
Usually parallelization is done with threads, which can access the same data
inside your program (because they run in the same process). To share data
between processes one needs to use IPC (inter-process communication)
mechanisms like pipes, sockets, files etc. Which costs more resources. Also,
spawning processes is much slower than spawning threads.
Other solutions include:
* vectorization: rewrite your algorithm as computations on vectors and matrices and use hardware accelerated libraries to execute it
* using another Python distribution that doesn't have a GIL
* implementing your piece of parallel code in another language and calling it from Python
A big issue comes when your have to share data between the processes/threads.
For example in your code, each task will access `actions`. If you _have_ to
share state, welcome to [concurrent
programming](https://en.wikipedia.org/wiki/Concurrent_computing), a much
bigger task, and one of the hardest thing to do right in software.
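As a hedged sketch of how the original loop might be adapted, assuming `players`, `state` and the values returned by `get_action` are all picklable (which is not guaranteed), the per-player work can simply return the action instead of writing into a shared list:
import multiprocessing
from functools import partial

def compute_action(state, player):
    # runs in a worker process; the player object is pickled and sent over
    return player.get_action(state)

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    while True:
        # state is computed from the previous iteration's actions, as before
        actions = pool.map(partial(compute_action, state), players)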
|
unable to execute Celery beat the second time
Question: I am using Celery beat to fetch the site data every 10 seconds, so I updated
the settings in my Django project. I am using RabbitMQ with Celery.
**settings.py**
# This is the settings file
# Rabbitmq configuration
BROKER_URL = "amqp://abcd:abcd@localhost:5672/abcd"
# Celery configuration
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/Kolkata'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERYBEAT_SCHEDULE = {
    # Executes every 10 seconds
    'update-app-data': {
        'task': 'myapp.tasks.fetch_data_task',
        'schedule': timedelta(seconds=10),
    },
}
**celery.py**
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# Indicate Celery to use the default Django settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
app = Celery('myapp')
app.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# playstore folders
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app_keywords = Celery('keywords')
app_keywords.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# keywords folders
app_keywords.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app1 = Celery('myapp1')
app1.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# your app folders
app1.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
**tasks.py**
@task(bind=True)
def fetch_data_task(self, data):
    logger.info("Start task")
    import pdb; pdb.set_trace()
    # post the data to view
    headers, cookies = utils.get_csrf_token()
    requests.post(settings.SITE_VARIABLES['site_url'] + "/site/general_data/",
                  data=json.dumps(data), headers=headers, cookies=cookies
                  )
    if data['reviews']:
        reviews_data = {'app_id': data['app_data']['app_id'],
                        'reviews': data['reviews'][0]}
        requests.post(settings.SITE_VARIABLES['site_url'] + "/site/blog/reviews/",
                      data=json.dumps(reviews_data), headers=headers, cookies=cookies
                      )
    logger.info("Task fetch data finished")
Now, once I call `fetch_data_task` in my API after logging in to the site, the task
is queued in RabbitMQ and then it should call the function along with the
arguments.
Here is the line where I am calling the task for the very first time:
`tasks.fetch_data_task.apply_async((data,))`
This queues the task and the task executes each time, but it gives me the
following error
> [2016-09-13 18:57:43,044: ERROR/MainProcess] Task
> playstore.tasks.fetch_data_task[3b88c6d0-48db-49c1-b7d1-0b8469775d53]
> raised unexpected: TypeError("fetch_data_task() missing 1 required
> positional argument: 'data'",)
>
> Traceback (most recent call last):
>   File "/Users/chitrankdixit/.virtualenvs/hashgrowth-dev/lib/python3.5/site-packages/celery/app/trace.py", line 240, in trace_task
>     R = retval = fun(*args, **kwargs)
>   File "/Users/chitrankdixit/.virtualenvs/hashgrowth-dev/lib/python3.5/site-packages/celery/app/trace.py", line 438, in __protected_call__
>     return self.run(*args, **kwargs)
> TypeError: fetch_data_task() missing 1 required positional argument: 'data'
If anyone has worked with celery and rabbitmq and also worked with periodic
task using celery please suggest me to execute the tasks properly.
Answer: The exception tells you what the error is: your task expects a positional
argument, but you do not provide any arguments in your schedule definition.
CELERYBEAT_SCHEDULE = {
    # Executes every 10 seconds
    'update-app-data': {
        'task': 'myapp.tasks.fetch_data_task',
        'schedule': timedelta(seconds=10),
        'args': ({
            # whatever goes into 'data'
        },)  # tuple with one entry, don't omit the comma
    },
}
Calling the task from any other place in your code does not have any effect on
the schedule.
|
python : get list all *.txt files in a directory
Question: I'm a beginner in Python.
How do I get a list of all `.txt` files in a directory in Python?
For example, a list like:
['1.txt','2.txt','3.txt','4.txt','5.txt','6.txt']
Answer: you can use the `os`, `subprocess` and `glob` libraries.
`os` library example:
import os
os.system("ls *.txt")
this prints all `.txt` files to stdout, but it only returns the exit status, not a list
`subprocess` library example:
my_result_command = subprocess.Popen('ls *.txt', stdout=subprocess.PIPE, shell=True)
output, _ = my_result_command.communicate()
you can then split `output` to get the `.txt` file names
`glob` library example:
import glob
glob.glob('*.txt')
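Another option that avoids shelling out entirely is a plain `os.listdir` comprehension:
import os

# no shell involved; returns a real Python list
txt_files = [f for f in os.listdir('.') if f.endswith('.txt')]
print(txt_files)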
|
gooey module not installing correctly
Question:
C:\Python34\Scripts>pip install Gooey
Collecting Gooey
Using cached Gooey-0.9.2.3.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Haeshan\AppData\Local\Temp\pip-build- 5waer38m\Gooey\setup.py", line 9, in <module>
version = __import__('gooey').__version__
File "C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\gooey\__init__.py", line 2, in <module>
from gooey.python_bindings.gooey_decorator import Gooey
File "C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\gooey\python_bindings\gooey_decorator.py", line 54
except Exception, e:
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in
C:\Users\Haeshan\AppData\Local\Temp\pip-build-5waer38m\Gooey\
This error appears when I try to install the Gooey module for Python. Any
ideas why?
Answer: Looks like you're using Python 3.4 but Gooey only supports Python 2:
<https://github.com/chriskiehl/Gooey/issues/65>
<http://python3porting.com/differences.html#except>
|
Read data from binary file python
Question: I have a binary file with this format:
[![enter image description
here](http://i.stack.imgur.com/qHVBs.jpg)](http://i.stack.imgur.com/qHVBs.jpg)
and i use this code to open it:
import numpy as np
f = open("author_1", "r")
dt = np.dtype({'names': ['au_id','len_au_name','au_name','nu_of_publ', 'pub_id', 'len_of_pub_id','pub_title','num_auth','len_au_name_1', 'au_name1','len_au_name_2', 'au_name2','len_au_name_3', 'au_name3','year_publ','num_of_cit','citid','len_cit_tit','cit_tit', 'num_of_au_cit','len_cit_au_name_1','au_cit_name_1', len_cit_au_name_2',
'au_cit_name_2','len_cit_au_name_3','au_cit_name_3','len_cit_au_name_4',
'au_cit_name_4', 'len_cit_au_name_5','au_cit_name_5','year_cit'],
'formats': [int,int,'S13',int,int,int,'S61', int,int,'S8',int,'S7',int,'S12',int,int,int,int,'S50',int,int,
'S7',int,'S7',int,'S9',int,'S8',int,'S1',int]})
a = np.fromfile(f, dtype=dt, count=-1, sep="")
And I get this:
array([ (1, 13, b'Scott Shenker', 200, 1, 61, b'Integrated services in the internet architecture: an overview', 3, 8, b'R Braden', 7, b'D Clark', 12, b'S Shenker\xe2\x80\xa6', 1994, 1000, 401, 50, b'[HTML] An architecture for differentiated services', 5, 7, b'D Black', 7, b'S Blake', 9, b'M Carlson', 8, b'E Davies', 1, b'Z', 1998),
(402, 72, b'Resource rese', 1952544370, 544108393, 1953460848, b'ocol (RSVP)--Version 1 functional specification\x05\x00\x00\x00\x08\x00\x00\x00R Brad', 487013, 541851648, b'Zhang\x08', 1109414656, b'erson\x08', 542310400, b'Herzog\x07\x00\x00\x00S ', 1768776010, 511342, 103168, 22016, b'\x00A reliable multicast framework for light-weight s', 1769173861, 544435823, b'and app', 1633905004, b'tion le', 543974774, b'framing\x04', 458752, b'\x00\x00S Floy', 2660, b'', 1632247894),
Any idea how can open the whole file?
Answer: The data structure stored in this file is hierarchical, rather than "flat":
child arrays of different length are stored within each parent element. It is
not possible to represent such a data structure using numpy arrays (even
recarrays), and therefore it is not possible to read the file with
`np.fromfile()`.
What do you mean by "open the whole file"? What sort of python data structure
would you like to end up with?
It would be straightforward, but still not trivial, to write a function to
parse the file into a list of dictionaries.
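For illustration only, such a parser would read the fields sequentially; the sketch below assumes 4-byte little-endian integers and length-prefixed UTF-8 strings, which may or may not match the actual layout in the screenshot:
import struct

def read_int(f):
    # assumption: integer fields are 4-byte little-endian
    return struct.unpack('<i', f.read(4))[0]

def read_string(f):
    # assumption: strings are stored as a length field followed by that many bytes
    length = read_int(f)
    return f.read(length).decode('utf-8', errors='replace')

def parse_author(f):
    record = {}
    record['au_id'] = read_int(f)
    record['au_name'] = read_string(f)   # covers len_au_name + au_name
    record['nu_of_publ'] = read_int(f)
    record['publications'] = []
    # each publication would in turn contain its own variable-length lists
    # (authors, citations), so the parsing continues recursively from here
    return record

with open("author_1", "rb") as f:
    print(parse_author(f))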
|
How to find the source of global(ish) variable?
Question: I inherited some large and unwieldy Python code. In one file it's using a list
of commands imported from another file. Looking at it with pdb, this commands
variable ends up in the global namespace. However, there's another file that
doesn't look like it's even being used that also has a commands variable in it,
and for some reason on certain machines that variable is used instead.
My question is, is there a way in pdb or just code to show the source of the
commands variable? I'm hoping for some concrete evidence that shows it's
pointing to that file for some reason.
It's a nice demonstration on the dangers of global variables I guess, and I
can clean up the code but I'd like to fully understand it first.
Answer: To get the module of the `commands` object, you could try:
import inspect
inspect.getmodule(commands)
|
Python: Create a user and send email with account details to the user
Question: Here is a script I have written which will create a new user account. I am
trying to get help in adding a bit more to it.
I want to have it also send an email to the new user that is created. Ideally,
the program will ask the user creating the new account, what their email is,
and then it will use the user and password variables and send an email to that
new user so they will know how to log in. What would be the best way to do
this? Thanks for any advice.
#! /usr/bin/python
import commands, os, string
import sys
import fileinput
def print_menu():    ## Your menu design here
    print 20 * "-" , "Perform Below Steps to Create a New TSM Account." , 20 * "-"
    print "1. Create User Account"
    print 67 * "-"

loop=True

while loop:    ## While loop which will keep going until loop = False
    print_menu()    ## Displays menu
    choice = input("Enter your choice [1-5]: ")
    if choice==1:
        user = raw_input("Enter the Username to be created: " )
        password = raw_input( "Enter the password for the user: " )
        SRnumber = raw_input( "Enter the Service Request Number: ")
        user = user + " "
        output = os.system('create user' + user)
        output = os.system('set password' + password)
Answer: You can easily send mail with Gmail and `smtplib`, which is part of the standard
library. This way you can send any message you want.
import smtplib

toaddrs = raw_input('what is your e mail?')
fromaddr = 'youremail@email.com'
msg = 'the message you want to send'

server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)
server.starttls()
server.login(fromaddr, "your gmail password")
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
You will have to allow less secure apps in your gmail settings.
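To tie this into the script above, one option is to wrap the mail sending in a function and call it from the `if choice==1:` branch after the account is created; a sketch (the Gmail address and subject line are placeholders):
import smtplib

def send_account_email(to_address, username, password):
    msg = ("Subject: Your new TSM account\n\n"
           "Username: %s\nPassword: %s\n" % (username, password))
    server = smtplib.SMTP('smtp.gmail.com', 587)   # assumes a Gmail account for sending
    server.starttls()
    server.login('youremail@email.com', 'your gmail password')
    server.sendmail('youremail@email.com', [to_address], msg)
    server.quit()

# inside the `if choice==1:` branch, after the account has been created:
# email = raw_input("Enter the new user's email address: ")
# send_account_email(email, user, password)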
|
python scikit-learn TfidfVectorizer: why ValueError when input is 2 single-character strings?
Question: I am trying to run something like this:
from sklearn.feature_extraction.text import TfidfVectorizer
test_text = ["q", "r"]
vect = TfidfVectorizer(min_df=1,
                       stop_words=None,
                       lowercase=False)
tfidf = vect.fit_transform(test_text)
print vect.get_feature_names()
But get a ValueError:
`ValueError: empty vocabulary; perhaps the documents only contain stop words`
Does guidance exist on what the limitations or constraints for the input are? I
was not able to find anything on the [TfidfVectorizer doc page](http://scikit-
learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html).
I tried to trace it and got to the `_count_vocab` function, but I have
trouble reading it. Also, when I change the strings to length 2 or more, the code
runs fine.
Answer: The error is because of the `min_df` parameter. When you set `min_df=0`,
it will work fine, as it will not be bounded by the 'minimum threshold', which is
currently set to 1, while each of your words appears only once.
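Worth noting as well: the default `token_pattern` of `TfidfVectorizer` is `r"(?u)\b\w\w+\b"`, which only matches tokens of two or more word characters; that lines up with the observation that strings of length 2 or more work. A sketch that keeps single-character tokens:
from sklearn.feature_extraction.text import TfidfVectorizer

vect = TfidfVectorizer(min_df=1,
                       stop_words=None,
                       lowercase=False,
                       token_pattern=r"(?u)\b\w+\b")   # also accept single-character tokens
tfidf = vect.fit_transform(["q", "r"])
print(vect.get_feature_names())   # ['q', 'r']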
|
click on button to send adb commmand python
Question: I would like to build a program that sends an adb command to a mobile device when
I click the button. I tried the following code, but the command is not sent to the
device. I'm new to Python. Please can someone help me solve this problem?
from Tkinter import *
import os
import subprocess
root = Tk()
root.title("MUT Tester")
root.geometry("500x500")
def button():
    cmd = os.system("adb devices")
b = Button(root, text="Enter", width=30, height=2, command = lambda:(button))
b.pack()
root.mainloop()
Answer: In this line:
b = Button(root, text="Enter", width=30, height=2, command = lambda:(button))
the `button` function is not being called when you click; the lambda only evaluates
the name `button` without calling it (replace it with a print statement to test).
Remove the lambda and pass just `command=button`.
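So the corrected line would be:
b = Button(root, text="Enter", width=30, height=2, command=button)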
|
parse table using beautifulsoup in python
Question: I want to traverse through each row and capture the values of td.text. However,
the problem here is that the table has no class, and all the td elements have the same class name.
I want to traverse through each row and produce the following output:
1st row)"AMERICANS SOCCER CLUB","B11EB - AMERICANS-B11EB-WARZALA","Cameron
Coya","Player 228004","2016-09-10","player persistently infringes the laws of
the game","C" (new line)
2nd row) "AVIATORS SOCCER CLUB","G12DB - AVIATORS-G12DB-REYNGOUDT","Saskia
Reyes","Player 224463","2016-09-11","player/sub guilty of unsporting
behavior"," C" (new line)
<div style="overflow:auto; border:1px #cccccc solid;">
<table cellspacing="0" cellpadding="3" align="left" border="0" width="100%">
<tbody>
<tr class="tblHeading">
<td colspan="7">AMERICANS SOCCER CLUB</td>
</tr>
<tr bgcolor="#CCE4F1">
<td colspan="7">B11EB - AMERICANS-B11EB-WARZALA</td>
</tr>
<tr bgcolor="#FFFFFF">
<td width="19%" class="tdUnderLine"> Cameron Coya </td>
<td width="19%" class="tdUnderLine">
Rozel, Max
</td>
<td width="06%" class="tdUnderLine">
09-11-2016
</td>
<td width="05%" class="tdUnderLine" align="center">
<a href="http://www.ncsanj.com/gameRefReportPrint.cfm?gid=228004" target="_blank">228004</a>
</td>
<td width="16%" class="tdUnderLine" align="center">
09/10/16 02:15 PM
</td>
<td width="30%" class="tdUnderLine"> player persistently infringes the laws of the game </td>
<td class="tdUnderLine"> Cautioned </td>
</tr>
<tr class="tblHeading">
<td colspan="7">AVIATORS SOCCER CLUB</td>
</tr>
<tr bgcolor="#CCE4F1">
<td colspan="7">G12DB - AVIATORS-G12DB-REYNGOUDT</td>
</tr>
<tr bgcolor="#FBFBFB">
<td width="19%" class="tdUnderLine"> Saskia Reyes </td>
<td width="19%" class="tdUnderLine">
HollaenderNardelli, Eric
</td>
<td width="06%" class="tdUnderLine">
09-11-2016
</td>
<td width="05%" class="tdUnderLine" align="center">
<a href="http://www.ncsanj.com/gameRefReportPrint.cfm?gid=224463" target="_blank">224463</a>
</td>
<td width="16%" class="tdUnderLine" align="center">
09/11/16 06:45 PM
</td>
<td width="30%" class="tdUnderLine"> player/sub guilty of unsporting behavior </td>
<td class="tdUnderLine"> Cautioned </td>
</tr>
<tr class="tblHeading">
<td colspan="7">BERGENFIELD SOCCER CLUB</td>
</tr>
<tr bgcolor="#CCE4F1">
<td colspan="7">B11CW - BERGENFIELD-B11CW-NARVAEZ</td>
</tr>
<tr bgcolor="#FFFFFF">
<td width="19%" class="tdUnderLine"> Christian Latorre </td>
<td width="19%" class="tdUnderLine">
Coyle, Kevin
</td>
<td width="06%" class="tdUnderLine">
09-10-2016
</td>
<td width="05%" class="tdUnderLine" align="center">
<a href="http://www.ncsanj.com/gameRefReportPrint.cfm?gid=226294" target="_blank">226294</a>
</td>
<td width="16%" class="tdUnderLine" align="center">
09/10/16 11:00 AM
</td>
<td width="30%" class="tdUnderLine"> player persistently infringes the laws of the game </td>
<td class="tdUnderLine"> Cautioned </td>
</tr>
I tried with following code.
import requests
from bs4 import BeautifulSoup
import re

try:
    import urllib.request as urllib2
except ImportError:
    import urllib2

url = r"G:\Freelancer\NC Soccer\Northern Counties Soccer Association ©.html"
page = open(url, encoding="utf8")
soup = BeautifulSoup(page.read(), "html.parser")
#tableList = soup.findAll("table")

for tr in soup.find_all("tr"):
    for td in tr.find_all("td"):
        print(td.text.strip())
but it is obvious that this will return the text from all td elements, and I will not
be able to identify a particular column name or determine the start of a new record.
I want to know:
1) how to identify each column (the class name is the same, and there are heading rows
as well; I would appreciate code for that), and
2) how to identify a new record in such a structure.
Answer: If the data is really structured like a table, there's a good chance you can
read it into pandas directly with `pd.read_table()`. Note that it accepts URLs
in the `filepath_or_buffer` argument:
<http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_table.html>
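If you prefer to stay with BeautifulSoup, here is a rough sketch that uses the markers visible in the snippet: a `tblHeading` row starts a new club, the single-cell row after it names the team, and the seven-cell rows are the detail records (the filename is a placeholder):
from bs4 import BeautifulSoup

# "report.html" is a placeholder for the saved page from the question
with open("report.html", encoding="utf8") as page:
    soup = BeautifulSoup(page.read(), "html.parser")

club = team = None
records = []
for tr in soup.find_all("tr"):
    tds = tr.find_all("td")
    if "tblHeading" in (tr.get("class") or []):
        club = tds[0].get_text(strip=True)        # a new club starts a new group
    elif len(tds) == 1:
        team = tds[0].get_text(strip=True)        # the single-cell row under the club is the team
    elif len(tds) >= 7:
        values = [td.get_text(" ", strip=True) for td in tds]
        records.append([club, team] + values)     # one record per seven-cell row

for record in records:
    print(record)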
|
why output list is empty in my code in Python 2.7
Question: Using Python 2.7 and trying to do simple tokenization on UTF-8 encoded files.
The output of `a` seems to be byte strings, which is expected, since
`tk[0].encode('utf-8')` converts from the Python `unicode` type to `str`/bytes.
My major confusion is why the output of `b` is an empty list. I think without
encoding (I mean without calling `.encode('utf-8')`), it should be raw unicode
characters (e.g. I expect some Chinese characters to be printed, as `1.txt` is a
UTF-8 encoded Chinese text file).
**Source code:**
import jieba

if __name__ == "__main__":
    with open('1.txt', 'r') as content_file:
        content = content_file.read()
    segment_list = jieba.tokenize(content.decode('utf-8'), mode='search')
    if segment_list is None:
        print 'segment is None'
    else:
        a = [tk[0].encode('utf-8') for tk in segment_list]
        b = [tk[0] for tk in segment_list]
        print a
        print b
**Output:**
['\xe4\xb8\x8a\xe6\xb5\xb7', '\xe6\xb5\xb7\xe5\xb8\x82', '\xe4\xb8\x8a\xe6\xb5\xb7\xe5\xb8\x82', '\xe6\xb7\xb1\xe5\x9c\xb3', '\xe6\xb7\xb1\xe5\x9c\xb3\xe5\xb8\x82', '\xe7\xa6\x8f\xe7\x94\xb0', '\xe7\xa6\x8f\xe7\x94\xb0\xe5\x8c\xba', '\xe6\xa2\x85\xe6\x9e\x97', '\xe6\x9e\x97\xe8\xb7\xaf', '\xe6\xa2\x85\xe6\x9e\x97\xe8\xb7\xaf', '\xe4\xb8\x8a\xe6\xb5\xb7', '\xe6\xb5\xb7\xe5\xb8\x82', '\xe6\xb5\xa6\xe4\xb8\x9c', '\xe6\x96\xb0\xe5\x8c\xba', '\xe4\xb8\x8a\xe6\xb5\xb7\xe5\xb8\x82', '\xe4\xb8\x8a\xe6\xb5\xb7\xe5\xb8\x82\xe6\xb5\xa6\xe4\xb8\x9c\xe6\x96\xb0\xe5\x8c\xba', '\xe8\x80\x80\xe5\x8d\x8e', '\xe8\xb7\xaf', '\r\n']
[]
Answer: It appears that `jieba.tokenize()` returns a generator. A generator can be
iterated over only once. Better do
b = [tk[0] for tk in segment_list]
a = [tk.encode('utf-8') for tk in b]
|
Python CGI Script "Cannot allocate memory" Import Error
Question: I have a simple CGI script on a shared 64bit Ubuntu hosting environment.
#!/kunden/homepages/14/d156645139/htdocs/htdocs/anaconda2/bin/python
# -*- coding: UTF-8 -*-
import sys
import cgi
import cgitb
cgitb.enable()
import numpy
from pandas_datareader.yahoo.daily import YahooDailyReader
When I attempt to run the script I receive the following error:
/kunden/homepages/14/d156645139/htdocs/finance/bin/py/test.py in ()
6 import cgitb
7 cgitb.enable()
=> 8 from pandas_datareader.yahoo.daily import YahooDailyReader
9 import datetime as dt
10 import numpy as np
pandas_datareader undefined, YahooDailyReader undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas_datareader/yahoo/daily.py in ()
2
3
4 class YahooDailyReader(_DailyBaseReader):
5
6 """
pandas_datareader undefined, _DailyBaseReader undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas_datareader/base.py in ()
7 from requests_file import FileAdapter
8
=> 9 from pandas import to_datetime
10 import pandas.compat as compat
11 from pandas.core.common import PandasError, is_number
pandas undefined, to_datetime undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas/__init__.py in ()
35
36 # let init-time option registration happen
=> 37 import pandas.core.config_init
38
39 from pandas.core.api import *
pandas undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas/core/config_init.py in ()
16 is_one_of_factory, get_default_val,
17 is_callable)
=> 18 from pandas.formats.format import detect_console_encoding
19
20 #
pandas undefined, detect_console_encoding undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas/formats/format.py in ()
19 import pandas.lib as lib
20 from pandas.tslib import iNaT, Timestamp, Timedelta, format_array_from_datetime
=> 21 from pandas.tseries.index import DatetimeIndex
22 from pandas.tseries.period import PeriodIndex
23 import pandas as pd
pandas undefined, DatetimeIndex undefined
/kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas/tseries/index.py in ()
<type 'exceptions.ImportError'>: /kunden/homepages/14/d156645139/htdocs/anaconda2/lib/python2.7/site-packages/pandas/_period.so: failed to map segment from shared object: Cannot allocate memory
args = ('/kunden/homepages/14/d156645139/htdocs/anaconda2...egment from shared object: Cannot allocate memory',)
message = '/kunden/homepages/14/d156645139/htdocs/anaconda2...egment from shared object: Cannot allocate memory'
How can I trace the source of the memory error? For example, is there a way to
see the memory limits or even increase them?
Answer: I was able to increase the RAM of the machine to a guaranteed 2GB which solved
the problem.
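For the second part of the question, a quick way to see the process limits from inside the CGI script itself (Unix only) is the `resource` module:
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)   # address-space (virtual memory) limit in bytes
print("Content-Type: text/plain\n")
print("address space limit (soft, hard): %s %s" % (soft, hard))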
|
Calling from the same class, why is one treated as bound method while the other plain function?
Question: I have the following code snippet in Python 3:
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy import Column, Integer, String, Unicode, UnicodeText
from sqlalchemy.ext.hybrid import hybrid_property, hybrid_method
import arrow
import datetime
import logging
import pytz

datetimeString_format = {
    "UTC": "%Y-%m-%d %H:%M:%S+00:00",
    "local_with_timezoneMarker": "%Y-%m-%d %H:%M:%S %Z",
    "local_without_timezoneMarker": "%Y-%m-%d %H:%M:%S"
}
dateString_format = "%Y-%m-%d"

class My_TimePoint_Mixin:

    # define output formats:
    datetimeString_inUTC_format = "%Y-%m-%d %H:%M:%S+00:00"
    datetimeString_naive_format = "%Y-%m-%d %H:%M:%S"

    # instrumented fields:
    _TimePoint_in_database = Column('timepoint', String, nullable=False)
    _TimePoint_in_database_suffix = Column(
        'timepoint_suffix', String, nullable=False)

    @hybrid_property
    def timepoint(self):
        twoPossibleType_handlers = [
            self._report_ACCRT_DATE,
            self._report_ACCRT_DATETIME
        ]
        for handler in twoPossibleType_handlers:
            print("handler: ", handler)
            try:
                return handler(self)
            except (AssertionError, ValueError) as e:
                logging.warning("Try next handler!")

    @timepoint.setter
    def timepoint(self, datetimepointOBJ):
        handlers_lookup = {
            datetime.datetime: self._set_ACCRT_DATETIME,
            datetime.date: self._set_ACCRT_DATE
        }
        this_time = type(datetimepointOBJ)
        this_handler = handlers_lookup[this_time]
        print("handler: ", this_handler)
        this_handler(datetimepointOBJ)

    def _report_ACCRT_DATE(self):
        """Accurate Date"""
        assert self._TimePoint_in_database_suffix == "ACCRT_DATE"
        date_string = self._TimePoint_in_database
        dateString_format = "%Y-%m-%d"
        # return a datetime.date
        return datetime.datetime.strptime(date_string, dateString_format).date()

    def _report_ACCRT_DATETIME(self):
        """Accurate DateTime"""
        assert self._TimePoint_in_database_suffix in pytz.all_timezones_set
        datetimeString_inUTC = self._TimePoint_in_database
        utc_naive = datetime.datetime.strptime(
            datetimeString_inUTC, self.datetimeString_inUTC_format)
        utc_timepoint = arrow.get(utc_naive, "utc")
        # localize
        local_timepoint = utc_timepoint.to(self._TimePoint_in_database_suffix)
        # return a datetime.datetime
        return local_timepoint.datetime

    def _set_ACCRT_DATETIME(self, datetimeOBJ_aware):
        assert isinstance(datetimeOBJ_aware, datetime.datetime), "Must be a valid datetime.datetime!"
        assert datetimeOBJ_aware.tzinfo is not None, "Must contain tzinfo!"
        utctime_aware_arrow = arrow.get(datetimeOBJ_aware).to('utc')
        utctime_aware_datetime = utctime_aware_arrow.datetime
        store_datetime_string = utctime_aware_datetime.strftime(
            self.datetimeString_inUTC_format)
        self._TimePoint_in_database = store_datetime_string

    def _set_ACCRT_DATE(self, dateOBJ):
        store_date_string = dateOBJ.isoformat()
        self._TimePoint_in_database = store_date_string
For some reason, the getter's handler is treated as a plain function rather
than a method, hence the need to explicitly provide 'self' as its argument.
Is it because of the looping? Or because of the `try...except` structure? Why
is it that, within the same class, handlers are treated differently?
(The setter's handlers are treated as bound methods, as expected.)
Answer: What you have is not a regular `property` here, you have a [SQLAlchemy
`@hybrid_property`
object](http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html).
Quoting the documentation there:
> “hybrid” means the attribute has distinct behaviors defined at the class
> level and at the instance level.
and
> When dealing with the `Interval` class itself, the `hybrid_property`
> descriptor evaluates the function body given the `Interval` class as the
> argument, which when evaluated with SQLAlchemy expression mechanics returns
> a new SQL expression:
>
>
> >>> print Interval.length
> interval."end" - interval.start
>
So the property is used in a _dual capacity_ , both on instances, and on the
class.
In the case of the property being used on the class itself, `self` is bound to
(a subclass of) `My_TimePoint_Mixin` and the methods are not bound. There is
nothing to bind _to_ in that case as there is no instance.
You'll have to take this into account when coding a `hybrid_property` getter
(the setter only applies to the _on an instance_ case). Your assertions at the
start of `_report_ACCRT_DATE` and `_report_ACCRT_DATETIME` won't hold, for
example.
You can distinguish between the _instance_ case and the _expression_ (on the
class) case, by declaring a separate getter for the latter with the
`hybrid_property.expression` decorator:
@hybrid_property
def timepoint(self):
twoPossibleType_handlers = [
self._report_ACCRT_DATE,
self._report_ACCRT_DATETIME
]
for handler in twoPossibleType_handlers:
print("handler: ", handler)
try:
return handler(self)
except (AssertionError, ValueError) as e:
logging.warning("Try next handler!")
@timepoint.expression
def timepoint(cls):
# return a SQLAlchemy expression for this virtual column
SQLAlchemy will then use the `@timepoint.expression` class method for the
`My_TimePoint_Mixin.timepoint` use, and use the original getter only on
`My_TimePoint_Mixin().timepoint` instance access. See the [_Defining
Expression Behavior Distinct from Attribute Behavior_
section](http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html#defining-
expression-behavior-distinct-from-attribute-behavior).
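For reference, here is a minimal, self-contained sketch of that dual behavior, modeled on the `Interval` example from the quoted docs rather than on the question's mixin (the table and column names are illustrative):
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property

Base = declarative_base()

class Interval(Base):
    __tablename__ = 'interval'
    id = Column(Integer, primary_key=True)
    start = Column(Integer, nullable=False)
    end = Column(Integer, nullable=False)

    @hybrid_property
    def length(self):
        # instance access: self is an Interval object, plain Python runs
        return self.end - self.start

    @length.expression
    def length(cls):
        # class access: cls is the Interval class, a SQL expression is returned
        return cls.end - cls.start
Here `Interval(start=5, end=12).length` evaluates to `7`, while `print(Interval.length)` renders the corresponding SQL expression.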
|
Kivy - My ScrollView doesn't scroll
Question: I'm having problems in my Python application with the Kivy library. In particular
I'm trying to create a scrollable list of elements in a TabbedPanelItem, but I
don't know why my list doesn't scroll.
Here is my kv file:
#:import sm kivy.uix.screenmanager
ScreenManagement:
transition: sm.FadeTransition()
SecondScreen:
<SecondScreen>:
tabba: tabba
name: 'second'
FloatLayout:
background_color: (255, 255, 255, 1.0)
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.10
pos_hint: {'top': 1.0}
canvas:
Color:
rgba: (0.98, 0.4, 0, 1.0)
Rectangle:
pos: self.pos
size: self.size
Label:
text: 'MyApp'
font_size: 30
size: self.texture_size
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.90
Tabba:
id: tabba
BoxLayout:
orientation: 'vertical'
size_hint: 1, 0.10
pos_hint: {'bottom': 1.0}
Button:
background_color: (80, 1, 0, 1.0)
text: 'Do nop'
font_size: 25
<Tabba>:
do_default_tab: False
background_color: (255, 255, 255, 1.0)
TabbedPanelItem:
text: 'First_Tab'
Tabs:
TabbedPanelItem:
text: 'Second_Tab'
Tabs:
TabbedPanelItem:
text: 'Third_Tab'
Tabs:
<Tabs>:
grid: grid
ScrollView:
scroll_timeout: 250
scroll_distance: 20
do_scroll_y: True
do_scroll_x: False
GridLayout:
id: grid
cols: 1
spacing: 10
padding: 10
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
Label:
text:'scroll'
color: (0, 0, 0, 1.0)
And here my .py code:
__author__ = 'drakenden'
__version__ = '0.1'
import kivy
kivy.require('1.9.0') # replace with your current kivy version !
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
from kivy.properties import StringProperty, ObjectProperty,NumericProperty
from kivy.uix.tabbedpanel import TabbedPanel
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.utils import platform
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.scrollview import ScrollView
class Tabs(ScrollView):
def __init__(self, **kwargs):
super(Tabs, self).__init__(**kwargs)
class Tabba(TabbedPanel):
pass
class SecondScreen(Screen):
pass
class ScreenManagement(ScreenManager):
pass
presentation = Builder.load_file("layout2.kv")
class MyApp(App):
def build(self):
return presentation
MyApp().run()
Where/What am I doing wrong?
(Comments and suggests for UI improvements are also accepted)
Answer: I myself haven't used Kivy for a while, but if I remember correctly: the layout
inside the ScrollView has to be BIGGER than the ScrollView itself, e.g. a ScrollView
1000px tall with a GridLayout 1100px tall. Only then can it be scrolled, by 100px in this example.
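For reference, a minimal, hedged fix in kv: only the `<Tabs>` rule from the question changes, giving the `GridLayout` an explicit height bound to its `minimum_height` so it can grow taller than the `ScrollView` (the 40px row height is an arbitrary example value):
<Tabs>:
    grid: grid
    ScrollView:
        do_scroll_y: True
        do_scroll_x: False
        GridLayout:
            id: grid
            cols: 1
            spacing: 10
            padding: 10
            size_hint_y: None
            height: self.minimum_height
            row_default_height: 40
            row_force_default: True
            # ... the Label children stay exactly as in the original rule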
|
How to reuse an open file in several functions?
Question: I am a beginner in Python and not completely bilingual, so I hope you
understand me. I'm trying to develop a program where anyone can open a file in
order to display its contents in a matplotlib graph. To do this I use a function
called `read_file()`, with which I get the data and insert it into a `Listbox`
without any problems. I accomplished the functionality, but my concern arises
when I want to use the information contained in the file from another function
called `show_graph()`. In that part I need the file loaded in `read_file()`, and
the only way I have achieved this is by adding:
f = open('example1.las')
log = LASReader(f, null_subs=np.nan)
with which I can plot, but this is not practical. In other words, how can I
reuse an already opened file in several functions?
Could someone give me their support to solve this, please?
Here is the complete code:
from Tkinter import *
from las import LASReader
from pprint import pprint
import tkFileDialog
import matplotlib, sys
matplotlib.use('TkAgg')
import numpy as np
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
root = Tk()
root.geometry("900x700+10+10")
def read_file():
filename = tkFileDialog.askopenfilename()
f = open(filename)
log = LASReader(f, null_subs=np.nan)
for curve in log.curves.names:
parent.insert(END,curve)
def add_name():
it = parent.get(ACTIVE)
child.insert(END, it)
def show_graph():
child = Listbox(root, selectmode=MULTIPLE)
try:
s = child.selection_get()
if s == "GR":
print 'selected:', s
f = open('example1.las')
log = LASReader(f, null_subs=np.nan)
fig = plt.figure(figsize=(6, 7.5))
plt.plot(log.data['GR'], log.data['DEPT'])
plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units)
plt.xlabel(log.curves.GR.descr + " (%s)" % log.curves.GR.units)
plt.ylim(log.stop, log.start)
plt.title(log.well.WELL.data + ', ' + log.well.DATE.data)
plt.grid()
dataPlot = FigureCanvasTkAgg(fig, master=root)
dataPlot.show()
dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2,
sticky=W+E+N+S, padx=380, pady=52)
elif s == "NPHI":
print 'selected:', s
f = open('Shamar-1.las')
log = LASReader(f, null_subs=np.nan)
fig = plt.figure(figsize=(6, 7.5))
plt.plot(log.data['NPHI'], log.data['DEPT'])
plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units)
plt.xlabel(log.curves.NPHI.descr + " (%s)" % log.curves.NPHI.units)
plt.ylim(log.stop, log.start)
plt.title(log.well.WELL.data + ', ' + log.well.DATE.data)
plt.grid()
dataPlot = FigureCanvasTkAgg(fig, master=root)
dataPlot.show()
dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2,
sticky=W+E+N+S, padx=380, pady=52)
elif s == "DPHI":
print 'selected:', s
f = open('Shamar-1.las')
log = LASReader(f, null_subs=np.nan)
fig = plt.figure(figsize=(6, 7.5))
plt.plot(log.data['DPHI'], log.data['DEPT'])
plt.ylabel(log.curves.DEPT.descr + " (%s)" % log.curves.DEPT.units)
plt.xlabel(log.curves.DPHI.descr + " (%s)" % log.curves.DPHI.units)
plt.ylim(log.stop, log.start)
plt.title(log.well.WELL.data + ', ' + log.well.DATE.data)
plt.grid()
dataPlot = FigureCanvasTkAgg(fig, master=root)
dataPlot.show()
dataPlot.get_tk_widget().grid(row=0, column=2, columnspan=2, rowspan=2,
sticky=W+E+N+S, padx=380, pady=52)
except:
print 'no selection'
def remove_name():
child.delete(ACTIVE)
def btnClick():
pass
e = Entry(root)
e.pack(padx=5)
b = Button(root, text="OK", command=btnClick)
b.pack(pady=5)
# create the canvas, size in pixels
canvas = Canvas(width = 490, height = 600, bg = 'grey')
# pack the canvas into a frame/form
canvas.place(x=340, y=50)
etiqueta = Label(root, text='Nemonics:')
etiqueta.place(x=10, y=30)
parent = Listbox(root)
root.title("Viewer")
parent.place(x=5, y=50)
selec_button = Button(root, text='Graph',
command=show_graph)
selec_button.place(x=340, y=20)
remove_button = Button(root, text='<<delete',
command=remove_name)
remove_button.place(x=138, y=150)
add_button = Button(root, text='Add>>',
command=add_name)
add_button.place(x=138, y=75)
child = Listbox(root)
child.place(x=210, y=50)
butt = Button(root, text="load file", command=read_file)
butt.place(x=10, y=5)
root.mainloop()
Answer: You can use a global variable to keep it, declared with the `global` statement
inside the function that assigns it (`f`, or better the parsed `log`, in your case).
However, I don't recommend it if you are modifying the file.
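As a hedged sketch of that approach, adapted to the question's own functions (keeping the asker's `LASReader` and widget names), the parsed log can be stored once and reused:
log = None  # parsed LAS file, shared between the functions

def read_file():
    global log
    filename = tkFileDialog.askopenfilename()
    f = open(filename)
    log = LASReader(f, null_subs=np.nan)
    for curve in log.curves.names:
        parent.insert(END, curve)

def show_graph():
    if log is None:
        print 'load a file first'
        return
    # reuse the already-parsed log instead of reopening 'example1.las'
    plt.plot(log.data['GR'], log.data['DEPT'])
A cleaner alternative is to wrap the widgets and the loaded `log` in a small class, so the state is passed around via `self` instead of a global.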
|
Python debuggers not stepping into a coroutine?
Question: In the example below:
import asyncio
import ipdb
class EchoServerProtocol:
def connection_made(self, transport):
self.transport = transport
def datagram_received(self, data, addr):
message = data.decode()
print('Received %r from %s' % (message, addr))
print('Send %r to %s' % (message, addr))
self.transport.sendto(data, addr)
loop = asyncio.get_event_loop()
ipdb.set_trace(context=21)
print("Starting UDP server")
# One protocol instance will be created to serve all client requests
listen = loop.create_datagram_endpoint( EchoServerProtocol, local_addr=('127.0.0.1', 9999))
transport, protocol = loop.run_until_complete(listen)
try:
loop.run_forever()
except KeyboardInterrupt:
pass
transport.close()
loop.close()
I'm trying to step into the
`loop.create_datagram_endpoint( EchoServerProtocol, local_addr=('127.0.0.1',
9999))`
to understand how it behaves internally. However when I try to step into the
coroutine, the debugger just jumps over it as if `n` has been pressed instead
of `s`.
> ../async_test.py(18)<module>()
17 # One protocol instance will be created to serve all client requests
---> 18 listen = loop.create_datagram_endpoint( EchoServerProtocol, local_addr=('127.0.0.1', 9999))
19 transport, protocol = loop.run_until_complete(listen)
ipdb> s
> ../async_test.py(19)<module>()
18 listen = loop.create_datagram_endpoint( EchoServerProtocol, local_addr=('127.0.0.1', 9999))
---> 19 transport, protocol = loop.run_until_complete(listen)
20
ipdb>
The behavior is experienced with PyCharm (2016 2.3 Community) IDE.
I would expect to end
[here](https://github.com/python/asyncio/blob/f060dff83b3e9505091fc88e80b7be3bc1671e40/asyncio/base_events.py#L751)
and be able to step additionally through the code.
Answer: It works if you call `await` or `yield from` for your coroutine like
listen = await loop.create_datagram_endpoint(EchoServerProtocol,
local_addr=('127.0.0.1', 9999))
In your example `listen` is not the result of coroutine execution but the
**coroutine instance** itself. The actual execution is performed by the next line:
`loop.run_until_complete()`.
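For example, a hedged rewrite of the endpoint setup as a coroutine, so there is an actual execution for the debugger to step into:
import asyncio

async def main(loop):
    # stepping into this awaited call happens while the loop runs it
    transport, protocol = await loop.create_datagram_endpoint(
        EchoServerProtocol, local_addr=('127.0.0.1', 9999))
    return transport, protocol

loop = asyncio.get_event_loop()
transport, protocol = loop.run_until_complete(main(loop))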
|
remove duplicate values from items in a dictionary in Python
Question: How can I check and remove duplicate values from items in a dictionary? I have
a large data set so I'm looking for an efficient method. The following is an
example of values in a dictionary that contains a duplicate:
'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]
needs to become
'word': [('769817', [6]), ('769819', [4, 10])]
Answer: This problem essentially boils down to removing duplicates from a list of
**unhashable** types, for which converting to a set is not possible.
One possible method is to check for membership in the current value while
building up a new list value.
d = {'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]}
for k, v in d.items():
new_list = []
for item in v:
if item not in new_list:
new_list.append(item)
d[k] = new_list
_Alternatively_ , use
[`groupby()`](https://docs.python.org/2/library/itertools.html#itertools.groupby)
for a more concise answer, although **potentially** slower (_the list must be
sorted first; if it already is, this is faster than doing a membership check_).
import itertools
d = {'word': [('769817', [6]), ('769819', [4, 10]), ('769819', [4, 10])]}
for k, v in d.items():
v.sort()
d[k] = [item for item, _ in itertools.groupby(v)]
**Output** -> `{'word': [('769817', [6]), ('769819', [4, 10])]}`
|
Python - Function Calls involving Object Inheritance
Question: Suppose I have a parent class `foo` and an inheriting class `bar` defined as
such:
class foo(object):
def __init__(self, args):
for key in args.keys():
setattr(self, key, args[key])
self.subinit()
def subinit(self):
pass
...
* * *
import math
class bar(foo):
def __init__(self, arg1, arg2, ...):
args = locals()
del args['self']
super(bar, self).__init__(args)
def subinit(self):
super(bar, self).subinit()
self.arg1 = math.radians(self.arg1)
self.arg2 = math.radians(self.arg2)
...
...
I have `bar` overriding the function `subinit` as it was defined by the parent
class `foo`. However, since I am executing the line `self.subinit()` from
inside the superclass constructor. I'm concerned that the `subinit` definition
for `foo` will be used instead of the overridden `subinit` for `bar`. So my
question, then, is this: What is the scope of execution here? If I call
`subinit` from the superclass constructor, will it work in the scope of the
totality of the instance and call `bar.subinit()` or will it work in the scope
of the function and call `foo.subinit()`
Answer: To answer my own question, I ran a test after adjusting the example a little:
class foo(object):
def __init__(self, args):
for key in args.keys():
setattr(self, key, args[key])
self.subinit()
def subinit(self):
pass
* * *
class bar(foo):
def __init__(self, arg1, arg2, arg3):
args = locals()
del args['self']
super(self.__class__, self).__init__(args)
def subinit(self):
print self.arg1, self.arg2, self.arg3
The scope appears to be in the totality of the object, not just within the
superclass. Therefore, `bar.subinit()` is getting called. Good to know.
|
'int' object has no attribute '__getitem__' on a non-integer object
Question: In looking at other answers to this issue I found that the object was usually
an integer, so I constructed a simple example showing it is not an integer (or
so I think). **This code:**
import numpy as np
a=np.arange(2,10)
print '1: ', a
print '2: ', a.size
print '3: ', a[3:] #this shows this is not an integer
print '3a: ', len(a[3:]) #len works
print '4: ', a.size[3:] #but yet size does not work
**yields:** ============
1: [2 3 4 5 6 7 8 9]
2: 8
3: [5 6 7 8 9]
4:
------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-5c4b06ceceba> in <module>()
4 print '2: ', a.size
5 print '3: ', a[3:] *#this shows this is not an integer*
----> 6 print '4: ', a.size[3:] *#but yet size does not work*
TypeError: 'int' object has no attribute '__getitem__'
======================
As you can see a[3:] is not an integer - what am I doing wrong?
Answer: If you want the size of `a[3:]` then try:
>>> a[3:].size
5
By writing `a.size[3:]` you are trying to index into an `integer`, since
`a.size` is an `integer`.
|
How to automatically input using python Popen and return control to command line
Question: I have a question regarding `subprocess.Popen`. I'm calling a shell script and
provide a few inputs. After a few inputs, I want the user running the Python
script to provide the input.
Is it possible to transfer control from the Popen process to the command line?
I've added sample scripts
sample.sh
echo "hi"
read input_var
echo "hello" $input_var
read input_var1
echo "Whats up" $input_var1
read input_var2
echo "Tell something else" $input_var2
test.py
import os
from subprocess import Popen, PIPE
p=Popen([path to sample.sh],stdin=PIPE)
#sample.sh prints asks for input
p.stdin.write("a\n")
#Prompts for input
p.stdin.write("nothing much\n")
#After the above statement ,User should be able to prompt input
#Is it possible to transfer control from Popen to command line
Output of the program python test.py
hi
hello a
Whats up b
Tell something else
Please suggest any alternative methods available to solve this.
Answer: As soon as your last `write` is executed in your python script, it will exit,
and the child shell script will be terminated with it. Your request seems to
indicate that you would want your child shell script to keep running and keep
getting input from the user. If that's the case, then `subprocess` might not
be the right choice, at least not in this way. On the other hand, if having
the python wrapper still running and feeding the input to the shell script is
enough, then you can look at something like this:
import os
from subprocess import Popen, PIPE
p=Popen(["./sample.sh"],stdin=PIPE)
#sample.sh prints asks for input
p.stdin.write("a\n")
#Prompts for input
p.stdin.write("nothing much\n")
# read a line from stdin of the python process
# and feed it into the subprocess stdin.
# repeat or loop as needed
line = raw_input()
p.stdin.write(line+'\n')
# p.stdin.flush() # maybe not needed
Many might cringe at this, so take it as a starting point. As others have
pointed out, stdin/stdout interaction can be challenging with subprocesses, so
keep researching.
|
Python Turtle - Is it possible to prevent the crash at the end
Question: This is my code. I am using the turtle module to just write some text on the
screen for a project for school. But whenever I do this, the program
crashes/stops responding and I was wondering if it is possible to prevent this
from happening.
import turtle
screen = turtle.Screen()
screen.screensize(500, 500, "pink")
drawingpen = turtle.Turtle()
drawingpen.color("black")
drawingpen.penup()
drawingpen.setposition(-300, -300)
drawingpen.pendown()
drawingpen.pensize(3)
for side in range(4):
drawingpen.forward(600)
drawingpen.left(90)
drawingpen.hideturtle()
y = 243
for x in range(10):
drawingpen.penup()
drawingpen.color("black")
drawingpen.setposition(0, y)
drawingpen.pendown()
drawingpen.write("Test", False, align="center", font=("Arial", 18, "normal"))
drawingpen.hideturtle()
y = y - 57
Answer: Your code hasn't crashed, it just ran out of code to process. The code that is
there is working fine and as expected.
To see what I mean add:
print("END") #Python 3
print "END" #Python 2
to the end of your code. You will see the console prints the word "END" after
your text is finished printing. But a nicer way might be to add:
screen.exitonclick()
to the end. This will close the window when you click on it.
|
Edit list of entries using Python
Question: My script so far:
#DogReg v1.0
import time
Students = ['Mary', 'Matthew', 'Mark', 'Lianne', 'Spencer',
'John', 'Logan', 'Sam', 'Judy', 'Jc', 'Aj']
print("1. Add Student")
print("2. Delete Student")
print("3. Edit Student")
print("4. Show All Students")
useMenu = input("What shall you do? ")
if(useMenu != "1" and useMenu != "2" and useMenu != "3" and useMenu != "4"):
print("Invalid request please choose 1, 2, 3, or 4.")
elif(useMenu == "1"):
newName = input("What is the students name? ")
Students.append(newName)
time.sleep(1)
print(str(newName) + " added.")
time.sleep(1)
print(Students)
elif(useMenu == "2"):
remStudent = input("What student would you like to remove? ")
Students.remove(remStudent)
time.sleep(1)
print(str(remStudent) + " has been removed.")
time.sleep(1)
print(Students)
elif(useMenu == "3"):
So I'm trying to be able to let the user input a name they want to edit and
change it.
I tried looking up the function for editing list entries and I haven't found
the one I'm looking for.
Answer: Suppose the user wants to change `'Mary'` to `'Maria'`:
student_old = 'Mary'
student_new = 'Maria'
change_index = Students.index(student_old)
Students[change_index] = student_new
Note that you will have to add proper error handling - for example, if the
user asks for modifying `'Xavier'` who is not in your list, you will get a
`ValueError`.
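A short, hedged sketch of how the `elif(useMenu == "3"):` branch could look with that error handling (the prompt texts are illustrative):
elif(useMenu == "3"):
    student_old = input("Which student would you like to edit? ")
    student_new = input("What should the new name be? ")
    try:
        change_index = Students.index(student_old)
        Students[change_index] = student_new
        print(Students)
    except ValueError:
        print(str(student_old) + " is not in the list.")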
|
Load Spark RDD to Neo4j in Python
Question: I am working on a project where I am using **Spark** for Data processing. My
data is now processed and I need to load the data into **Neo4j**. After
loading into Neo4j, I will be using that to showcase the results.
I wanted all the implementation to be done in **Python**. But I
couldn't find any library or example on the net. Can you please help with links,
libraries, or an example?
My RDD is a PairedRDD. And in every tuple, I have to create a relationship.
**PairedRDD**
Key Value
Jack [a,b,c]
For simplicity purpose, I transformed the RDD to
Key value
Jack a
Jack b
Jack c
Then I have to create relationships between
Jack->a
Jack->b
Jack->c
Based on William's answer, I am able to load a list directly. But this data is
throwing the Cypher error shown below.
I tried like this:
def writeBatch(b):
print("writing batch of " + str(len(b)))
session = driver.session()
session.run('UNWIND {batch} AS elt MERGE (n:user1 {user: elt[0]})', {'batch': b})
session.close()
def write2neo(v):
batch_d.append(v)
for hobby in v[1]:
batch_d.append([v[0],hobby])
global processed
processed += 1
if len(batch) >= 500 or processed >= max:
writeBatch(batch)
batch[:] = []
max = userhobbies.count()
userhobbies.foreach(write2neo)
Here `b` is a list of lists. Each unwound `elt` is a list of two elements,
`elt[0]` and `elt[1]`, as key and value.
**Error**
ValueError: Structure signature must be a single byte value
Thanks in advance.
Answer: You can do a `foreach` on your RDD, example :
from neo4j.v1 import GraphDatabase, basic_auth
driver = GraphDatabase.driver("bolt://localhost", auth=basic_auth("",""), encrypted=False)
from pyspark import SparkContext
sc = SparkContext()
dt = sc.parallelize(range(1, 5))
def write2neo(v):
session = driver.session()
session.run("CREATE (n:Node {value: {v} })", {'v': v})
session.close()
dt.foreach(write2neo)
I would however improve the function to batch the writes, but this simple
snippet works for a basic implementation.
**UPDATE WITH EXAMPLE OF BATCHING WRITES**
sc = SparkContext()
batch = []
max = None
processed = 0
def writeBatch(b):
print("writing batch of " + str(len(b)))
session = driver.session()
session.run('UNWIND {batch} AS elt CREATE (n:Node {v: elt})', {'batch': b})
session.close()
def write2neo(v):
batch.append(v)
global processed
processed += 1
if len(batch) >= 500 or processed >= max:
writeBatch(batch)
batch[:] = []
dt = sc.parallelize(range(1, 2136))
max = dt.count()
dt.foreach(write2neo)
\- Which results with
16/09/15 12:25:47 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
writing batch of 500
writing batch of 500
writing batch of 500
writing batch of 500
writing batch of 135
16/09/15 12:25:47 INFO PythonRunner: Times: total = 279, boot = -103, init = 245, finish = 137
16/09/15 12:25:47 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1301 bytes result sent to driver
16/09/15 12:25:47 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 294 ms on localhost (1/1)
16/09/15 12:25:47 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/09/15 12:25:47 INFO DAGScheduler: ResultStage 1 (foreach at /Users/ikwattro/dev/graphaware/untitled/writeback.py:36) finished in 0.295 s
16/09/15 12:25:47 INFO DAGScheduler: Job 1 finished: foreach at /Users/ikwattro/dev/graphaware/untitled/writeback.py:36, took 0.308263 s
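Regarding the `Structure signature must be a single byte value` error at the end of the question: that message usually appears when a tuple ends up in the parameters, because older versions of the Python driver try to pack tuples as Bolt Structures; plain lists, dicts and scalars are safe parameter types. Below is a hedged sketch of the same batching pattern adapted to the (key, hobbies) pairs, using dicts instead of tuples (the `User`/`Hobby` labels and the `LIKES` relationship are illustrative assumptions, not from the question):
def writeBatch(b):
    print("writing batch of " + str(len(b)))
    session = driver.session()
    session.run('UNWIND {batch} AS row '
                'MERGE (u:User {name: row.user}) '
                'MERGE (h:Hobby {name: row.hobby}) '
                'MERGE (u)-[:LIKES]->(h)', {'batch': b})
    session.close()

def write2neo(kv):
    key, hobbies = kv
    # one dict per (user, hobby) pair instead of appending tuples
    for hobby in hobbies:
        batch.append({'user': key, 'hobby': hobby})
    global processed
    processed += 1
    if len(batch) >= 500 or processed >= max:
        writeBatch(batch)
        batch[:] = []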
|
Scraping issues on a specific website
Question: This is my first question on stack overflow so bear with me, please.
I am trying to download automatically (i.e. scrape) the text of some Italian
laws from the website: [http://www.normattiva.it/](http://www.normattiva.it)
I am using this code below (and similar permutations):
import requests, sys
debug = {'verbose': sys.stderr}
user_agent = {'User-agent': 'Mozilla/5.0', 'Connection':'keep-alive'}
url = 'http://www.normattiva.it/atto/caricaArticolo?art.progressivo=0&art.idArticolo=1&art.versione=1&art.codiceRedazionale=047U0001&art.dataPubblicazioneGazzetta=1947-12-27&atto.tipoProvvedimento=COSTITUZIONE&art.idGruppo=1&art.idSottoArticolo1=10&art.idSottoArticolo=1&art.flagTipoArticolo=0#art'
r = requests.session()
s = r.get(url, headers=user_agent)
#print(s.text)
print(s.url)
print(s.headers)
print(s.request.headers)
As you can see I am trying to load the "**caricaArticolo** " query.
However, the output is a page saying that my search is invalid (**_"session is
not valid or expired"_**)
It seems that the page recognizes that I am not using a browser and loads a
"breakout" javascript function.
<body onload="javascript:breakout();">
I tried to use "browser" simulator python scripts such as **selenium** and
**robobrowser** but the result is the same.
Is there anyone who is willing to spend 10 minutes looking at the page output
and give help?
Answer: Once you click any link on the page with dev tools open, under the doc tab
under Network:
[![enter image description
here](http://i.stack.imgur.com/orZHr.png)](http://i.stack.imgur.com/orZHr.png)
You can see three links, the first is what we click on, the second returns the
html that allows you to jump to a specific _Article_ and the last contains the
article text.
In the source returned from the firstlink, you can see two _iframe_ tags:
<div id="alberoTesto">
<iframe
src="/atto/caricaAlberoArticoli?atto.dataPubblicazioneGazzetta=2016-08-31&atto.codiceRedazionale=16G00182&atto.tipoProvvedimento=DECRETO LEGISLATIVO"
name="leftFrame" scrolling="auto" id="leftFrame" title="leftFrame" height="100%" style="width: 285px; float:left;" frameborder="0">
</iframe>
<iframe
src="/atto/caricaArticoloDefault?atto.dataPubblicazioneGazzetta=2016-08-31&atto.codiceRedazionale=16G00182&atto.tipoProvvedimento=DECRETO LEGISLATIVO"
name="mainFrame" id="mainFrame" title="mainFrame" height="100%" style="width: 800px; float:left;" scrolling="auto" frameborder="0">
</iframe>
The first is for the Article tree; the latter, with _/caricaArticoloDefault_ and the
_id_ _mainFrame_, is what we want.
You need to use the cookies from the initial requests so you can do it with
the _Session_ object and by parsing the pages using
[bs4](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-
and-find-next):
import requests, sys
import os
from urlparse import urljoin
import io
from bs4 import BeautifulSoup
user_agent = {'User-agent': 'Mozilla/5.0', 'Connection': 'keep-alive'}
url = 'http://www.normattiva.it/atto/caricaArticolo?art.progressivo=0&art.idArticolo=1&art.versione=1&art.codiceRedazionale=047U0001&art.dataPubblicazioneGazzetta=1947-12-27&atto.tipoProvvedimento=COSTITUZIONE&art.idGruppo=1&art.idSottoArticolo1=10&art.idSottoArticolo=1&art.flagTipoArticolo=0#art'
with requests.session() as s:
s.headers.update(user_agent)
r = s.get("http://www.normattiva.it/")
soup = BeautifulSoup(r.content, "lxml")
# get all the links from the initial page
for a in soup.select("div.testo p a[href^=http]"):
soup = BeautifulSoup(s.get(a["href"]).content)
# The link to the text is in a iframe tag retuened from the previous get.
text_src_link = soup.select_one("#mainFrame")["src"]
# Pick something to make the names unique
with io.open(os.path.basename(text_src_link), "w", encoding="utf-8") as f:
# The text is in pre tag that is in the div with the pre class
text = BeautifulSoup(s.get(urljoin("http://www.normattiva.it", text_src_link)).content, "html.parser")\
.select_one("div.wrapper_pre pre").text
f.write(text)
A snippet of the first text file:
IL PRESIDENTE DELLA REPUBBLICA
Visti gli articoli 76, 87 e 117, secondo comma, lettera d), della
Costituzione;
Vistala legge 28 novembre 2005, n. 246 e, in particolare,
l'articolo 14:
comma 14, cosi' come sostituito dall'articolo 4, comma 1, lettera
a), della legge 18 giugno 2009, n. 69, con il quale e' stata
conferita al Governo la delega ad adottare, con le modalita' di cui
all'articolo 20 della legge 15 marzo 1997, n. 59, decreti legislativi
che individuano le disposizioni legislative statali, pubblicate
anteriormente al 1° gennaio 1970, anche se modificate con
provvedimenti successivi, delle quali si ritiene indispensabile la
permanenza in vigore, secondo i principi e criteri direttivi fissati
nello stesso comma 14, dalla lettera a) alla lettera h);
comma 15, con cui si stabilisce che i decreti legislativi di cui
al citato comma 14, provvedono, altresi', alla semplificazione o al
riassetto della materia che ne e' oggetto, nel rispetto dei principi
e criteri direttivi di cui all'articolo 20 della legge 15 marzo 1997,
n. 59, anche al fine di armonizzare le disposizioni mantenute in
vigore con quelle pubblicate successivamente alla data del 1° gennaio
1970;
comma 22, con cui si stabiliscono i termini per l'acquisizione del
prescritto parere da parte della Commissione parlamentare per la
semplificazione;
Visto il decreto legislativo 30 luglio 1999, n. 300, recante
riforma dell'organizzazione del Governo, a norma dell'articolo 11
della legge 15 marzo 1997, n. 59 e, in particolare, gli articoli da
20 a 22;
|
Finding a sub string and deleting it using regex, python
Question: I have a data set which looks like thus,
"See the new #Gucci 5th Ave NY windows customized by @troubleandrew for the debut of the #GucciGhost collection."
"Before the #GucciGhost collection debuts tomorrow, read about the artist @troubleandrew"
So I am trying to get rid of all the `@` signs AND the words attached to them. My
dataset should look something like this.
"See the new #Gucci 5th Ave NY windows customized by for the debut of the #GucciGhost collection."
"Before the #GucciGhost collection debuts tomorrow, read about the artist"
So I can use a simple replace statement to get rid of the `@`, but the
adjacent word is a problem.
I am using `re` to search/find the occurrences, but I am not able to delete the
word.
P.S. -- If it were a single word, it would not have been a problem. But there
are multiple words here and there in my data set attached to `@`.
Answer: You can use regex
import re
a = [
"See the new #Gucci 5th Ave NY windows customized by @troubleandrew for the debut of the #GucciGhost collection.",
"Before the #GucciGhost collection debuts tomorrow, read about the artist @troubleandrew"
]
pat = re.compile(r"@\S+") # \S+ all non-space characters
for i in range(len(a)):
a[i] = re.sub(pat, "", a[i]) # replace it with empty string
print a
This will give you what you want.
|
ImportError: No module named 'Crypto.HASH' but pycryto installed
Question: I am trying to load pycrypto module. When I do
import Crypto
I get no error, but when I do `from Crypto.HASH import SHA256`, I am getting an
`ImportError`:
>>> import Crypto
>>> hash = SHA256.new()
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
hash = SHA256.new()
NameError: name 'SHA256' is not defined
>>> from Crypto.HASH import SHA256
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
from Crypto.HASH import SHA256
ImportError: No module named 'Crypto.HASH'
>>>
OS : Windows 8 Python : 3.5 32 Bit
Thank you.
Answer: You are misspelling it, the correct module name is `Crypto.Hash`:
>>> from Crypto.Hash import SHA256
>>> h=SHA256.new()
>>> h.update(b"Hello")
>>> h.hexdigest()
'185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969'
|
extract data from website using python
Question: I recently started learning python and one of the first projects I did was to
scrap updates from my son's classroom web page and send me notifications that
they updated the site. This turned out to be an easy project so I wanted to
expand on this and create a script that would automatically check if any of
our lotto numbers hit. Unfortunately I haven't been able to figure out how to
get the data from the website. Here is one of my attempts from last night.
from bs4 import BeautifulSoup
import urllib.request
webpage = "http://www.masslottery.com/games/lottery/large-winningnumbers.html"
websource = urllib.request.urlopen(webpage)
soup = BeautifulSoup(websource.read(), "html.parser")
span = soup.find("span", {"id": "winning_num_0"})
print (span)
Output is here...
<span id="winning_num_0"></span>
The output listed above is also what I see if I "view source" with a web
browser. When I "inspect Element" with the web browser I can see the winning
numbers in the inspect element panel. Unfortunately I'm not even sure
how/where the web browser is getting the data. is it loading from another page
or a script in the background? I thought the following tutorial was going to
help me but I wasn't able to get the data using similar commands.
<http://zevross.com/blog/2014/05/16/using-the-python-library-beautifulsoup-to-
extract-data-from-a-webpage-applied-to-world-cup-rankings/>
Any help is appreciated. Thanks
Answer: If you look closely at the source of the page (I just used `curl`) you can see
this block
<script type="text/javascript">
// <![CDATA[
var dataPath = '../../';
var json_filename = 'data/json/games/lottery/recent.json';
var games = new Array();
var sessions = new Array();
// ]]>
</script>
That `recent.json` stuck out like a sore thumb (I actually missed the
`dataPath` part at first).
After giving that a try, I came up with this:
curl http://www.masslottery.com/data/json/games/lottery/recent.json
Which, as lari points out in the comments, is way easier than scraping HTML.
This easy, in fact:
import json
import urllib.request
from pprint import pprint
websource = urllib.request.urlopen('http://www.masslottery.com/data/json/games/lottery/recent.json')
data = json.loads(websource.read().decode())
pprint(data)
`data` is now a dict, and you can do whatever kind of dict-like things you'd
like to do with it. And good luck ;)
|
ImportError: No module named durationfield.db.models.fields.duration (Python, Django 1.9)
Question: I'm trying to put a duration field in my models and I'm following the
instructions [here](https://django-durationfield.readthedocs.io/en/latest/).
The first problem I run into is that I can't seem to import the module. Doesn't
this come standard with Django?
from durationfield.db.models.fields.duration import DurationField
ImportError: No module named durationfield.db.models.fields.duration
Following Daniel Roseman's suggestion, I changed this to:
from django.db.models.field.duration
Now I'm getting:
ImportError: No module named duration
Answer: It's here:
from django.db.models import DurationField
And yes, it comes with Django 1.8+ so you don't need to install it.
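For reference, a minimal usage sketch (the model and field names are illustrative):
from django.db import models

class Task(models.Model):
    time_spent = models.DurationField()
The field stores `datetime.timedelta` values, so you can assign e.g. `datetime.timedelta(minutes=90)` to it.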
|
Python: Get Gmail server with smtplib never ends
Question: I simply tried:
>>> import smtplib
>>> server = smtplib.SMTP('smtp.gmail.com:587')
in my Python interpreter but the second statement never ends.
Can someone help?
Answer: You might find that you need a login and password as a prerequisite to a
successful log-in.
Try something like this:
import smtplib

ServerConnect = False
try:
    smtp_server = smtplib.SMTP('smtp.gmail.com', '587')
    smtp_server.starttls()  # port 587 expects STARTTLS before login
    smtp_server.login('your_login', 'password')
    ServerConnect = True
except smtplib.SMTPHeloError as e:
    print "Server did not reply"
except smtplib.SMTPAuthenticationError as e:
    print "Incorrect username/password combination"
except smtplib.SMTPException as e:
    print "Authentication failed"
If you get "connection unexpected closed" try changing the server line to:
smtp_server = smtplib.SMTP_SSL('smtp.gmail.com','465')
Be aware: Google may block sign-in attempts from some apps or devices that do
not use modern security standards. Since these apps and devices are easier to
break into, blocking them helps keep your account safe.
See:<https://support.google.com/accounts/answer/6010255?hl=en>
Gmail settings:
SMTP Server (Outgoing Messages): smtp.gmail.com (SSL 465, StartTLS 587)
IMAP Server (Incoming Messages): imap.gmail.com (SSL 993)
Please make sure, that IMAP access is enabled in the account settings.
Login to your account and enable IMAP.
You also need to enable "less secure apps" (third party apps) in the Gmail settings:
https://support.google.com/accounts/answer/6010255?hl=en
See also: How to enable IMAP/POP3/SMTP for Gmail account
If all else fails trying to `ping gmail.com` from the command line.
|
How to tell that string is a json?
Question: I have a string that I pull from a REST API that is actually a JSON.
I can't use `req.json()` as python doesn't format json correctly i.e. it is
using single quotes and not double quotes, plus it puts a unicode symbol where
there shouldn't be one. This means I can't use it to respond back to REST as
the JSON is not formatted correctly.
However `r.text` prints json that I could use, if I could just tell python:
"this is a json and not a string, take it just as it is and use it as a json".
Is there anyway I could do this? Or is there anyway to tell Python to properly
format json object as per json spec (i.e. not have unicode characters, and use
double quotes).
EDIT:
Apparently this wasn't clear, I apologize.
The issue is that I have to send back a proper JSON formatted object and NOT
python object. Here is what I get:
r.text: {"domain":"example.com", "Link":null, "monitor":"true"}
r.json(): {u'domain': u'example.com', u'Link': None, u'monitor': True}
This is NOT proper JSON formatting. You can't have the unicode prefix, it
isn't `None` it is `null`, and it isn't `True` it is `true`. You should also have
double and not single quotes (not as big a deal, I think).
Hope this clarifies my issue.
Answer: You can check if a string is valid json by catching the error.
import json
def is_json(myjson):
try:
json_object = json.loads(myjson)
except ValueError, e:
return False
return True
Test cases:
print is_json("{}") #prints True
print is_json("{asdf}") #prints False
print is_json('{ "age":100}') #prints True
print is_json("{'age':100 }") #prints False
print is_json("{\"age\":100 }") #prints True
print is_json('{"age":100 }') #prints True
print is_json('{"foo":[5,6.8],"foo":"bar"}') #prints True
|
conversion of np.array(dtype='str') in an np.array(dtype='datetime')
Question: I have a very simple python question. I need to transform the string values
within an np.array into datetime values. The string values contain the
following format: ('%Y%m%d'). Does anyone know how to do this? Here is my test
data:
date_str = np.array([['20121002', '20121002', '20121002'],
['20121003', '20121003', '20121003'],
['20121004', '20121004', '20121004']])
I try to convert this array with the pandas library. Here is my code:
import pandas as pd
pd.to_datetime(date_str, format="%d%m%Y")
Please help me there should be a very simple way to convert this and note that
I'm a python beginner.
Answer: You can create a DataFrame, then [`apply`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.apply.html) to it
[`pd.to_datetime`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.to_datetime.html):
In [68]: pd.DataFrame(date_str).apply(pd.to_datetime)
Out[68]:
0 1 2
0 2012-10-02 2012-10-02 2012-10-02
1 2012-10-03 2012-10-03 2012-10-03
2 2012-10-04 2012-10-04 2012-10-04
In order to verify the type of the result, here's the type of the first
column, for example:
In [73]: pd.DataFrame(date_str).apply(pd.to_datetime).iloc[:, 0].dtype
Out[73]: dtype('<M8[ns]')
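If you would rather keep a plain numpy array of `datetime64` values instead of a DataFrame, a hedged alternative sketch is to flatten, convert, and reshape:
import numpy as np
import pandas as pd

date_str = np.array([['20121002', '20121002', '20121002'],
                     ['20121003', '20121003', '20121003'],
                     ['20121004', '20121004', '20121004']])

# convert the flattened strings, then restore the original shape
dates = pd.to_datetime(date_str.ravel(), format='%Y%m%d').values.reshape(date_str.shape)
print(dates.dtype)  # datetime64[ns]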
|
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TAB LESPACE, but settings are not configured
Question: I’m using Django 1.9.1 with Python 3.5.2 and I'm having a problem running a
Python script that uses Django models.
C:\Users\admin\trailers>python load_from_api.py
Traceback (most recent call last):
File "load_from_api.py", line 6, in <module>
from movies.models import Movie
File "C:\Users\admin\trailers\movies\models.py", line 5, in <module>
class Genre(models.Model):
File "C:\Users\admin\trailers\movies\models.py", line 6, in Genre
id = models.CharField(max_length=10, primary_key=True)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\models\fi
elds\__init__.py", line 1072, in __init__
super(CharField, self).__init__(*args, **kwargs)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\db\models\fi
elds\__init__.py", line 166, in __init__
self.db_tablespace = db_tablespace or settings.DEFAULT_INDEX_TABLESPACE
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\conf\__init_
_.py", line 55, in __getattr__
self._setup(name)
File "C:\Program Files (x86)\Python35-32\lib\site-packages\django\conf\__init_
_.py", line 41, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DEFAULT_INDEX_TAB
LESPACE, but settings are not configured. You must either define the environment
variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing s
ettings.
here's the script:
#!/usr/bin/env python
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "trailers.settings")
os.environ["DJANGO_SETTINGS_MODULE"] = "trailers.settings"
import django
django.setup()
import tmdbsimple as tmdb
from movies.models import Movie
#some code...
I can't really figure out what's wrong. Any help is appreciated!
Answer: I would recommend using [Django Custom Management
Commands](https://docs.djangoproject.com/en/1.10/howto/custom-management-
commands/) \- they are really simple to use, they use your settings, your
environment, you can pass parameters and also you can write help strings so
you can use `--help`
Then you just call it with `./manage.py my_custom_command`
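For reference, a minimal command sketch, assuming the script's logic moves into the `movies` app (the path `movies/management/commands/load_from_api.py` and the command name are illustrative; remember the empty `__init__.py` files in the `management` and `commands` directories):
from django.core.management.base import BaseCommand

from movies.models import Movie

class Command(BaseCommand):
    help = "Load movies from the API into the database"

    def handle(self, *args, **options):
        # move the body of load_from_api.py here; settings are already configured
        self.stdout.write("done")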
Or if you just want to run your script, add this to the top of your script:
import os
import sys

project_path = '/home/to/your/trailers/project/src'
if project_path not in sys.path:
    sys.path.append(project_path)

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "trailers.settings")

import django
django.setup()
|
Special characters/kanji problems using Python unicode
Question: I want to use `VideoFileClip()`, but a `UnicodeDecodeError` occurs. The video files
include Japanese kanji or special characters.
My example code:
#-*- coding: utf-8 -*-
import sys
from moviepy.editor import VideoFileClip
reload(sys)
sys.setdefaultencoding('utf-8')
a='H:\\kittens.mkv'
clip1=VideoFileClip(a)
b='H:\\“ēī①”.mp4'
clip2=VideoFileClip(b)
if clip1.fps >= clip2.fps:
os.remove(b)
else:
os.remove(a)
'a' works fine:
>>> a='H:\\kittens.mkv'
>>> clip=VideoFileClip(a)
>>>
but 'b' doesn't work:
>>> b='H:\\“ēī①”.mp4'
>>> clip=VideoFileClip(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 5
5, in __init__
reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt)
File "C:\Python27\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 3
2, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration)
File "C:\Python27\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 2
70, in ffmpeg_parse_infos
filename, infos))
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa1 in position 54: invalid
start byte
>>> b
'H:\\\xa1\xb0??\xa8\xe7\xa1\xb1.mp4'
>>> print b
H:\“??①”.mp4
>>> print b.decode('cp949')
H:\“??①”.mp4
>>>
I've tried this, but it also doesn't work.
b=b.decode('cp949')
b=b.decode('cp949').encode('utf-8')
b=unicode(b.decode('cp949'))
I think that Windows 7 supports Unicode file names (in Japanese kanji or
special characters), but the character set of Python (2.x) (cp949) does not
support special characters. What can I do about this problem?
Answer: Here's a workaround using the
[pywin32](https://sourceforge.net/projects/pywin32) extensions. Basically, you
use the
[`GetShortPathName`](http://timgolden.me.uk/pywin32-docs/win32api__GetShortPathName_meth.html)
function to generate a legacy [8.3
filename](https://en.wikipedia.org/wiki/8.3_filename) from a unicode path.
# -*- coding: utf-8 -*-
import os
import win32api
from moviepy.editor import VideoFileClip
def short_path(unicode_path):
return win32api.GetShortPathName(unicode_path)
v1 = '“ēī①”.mp4'
print os.path.isfile(v1) # False
v2 = u'“ēī①”.mp4'
print os.path.isfile(v2) # True
# clip = VideoFileClip(v1) # IOError
# clip = VideoFileClip(v2) # UnicodeEncodeError
clip = VideoFileClip(short_path(v2)) # OK
print clip.duration
|
Can't call curl from python3
Question: I am trying to call this `curl` from `python3`. This, from `bash`, is working
fine.
curl -LH "Accept: text/bibliography; style=bibtex" http://dx.doi.org/10.1103/PhysRevLett.117.126802
yielding the expected result:
@article{Chang_2016, title={Observation of the Quantum Anomalous Hall Insulator to Anderson Insulator Quantum Phase Transition and its Scaling Behavior}, volume={117}, ISSN={1079-7114}, url={http://dx.doi.org/10.1103/PhysRevLett.117.126802}, DOI={10.1103/physrevlett.117.126802}, number={12}, journal={Physical Review Letters}, publisher={American Physical Society (APS)}, author={Chang, Cui-Zu and Zhao, Weiwei and Li, Jian and Jain, J. K. and Liu, Chaoxing and Moodera, Jagadeesh S. and Chan, Moses H. W.}, year={2016}, month={Sep}}
in python3, I am doing:
import subprocess
doi = "http://dx.doi.org/10.1103/PhysRevLett.117.126802"
try:
subprocess.call(["curl", "-LH", '"Accept: text/bibliography; style=bibtex"', doi])
except ExplicitException:
print("DOI is not available")
self.Messages.on_warn_clicked("DOI is not given",
"Search google instead")
which is giving error:
<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
whats going wrong here?
Answer: You have 3 problems here:
1. don't quote your arguments in `subprocess`, it already does that for you when necessary, since you pass the arguments and not the unsplit command line (good practice, keep it on, but drop the unnecessary quoting).
2. then, `subprocess.call` does not allow you to parse/store the output in Python, which is problematic for number 3:
3. and last: your site randomly answers with rubbish HTML (a Java stacktrace). This explains why you're getting different output in Python, but you can get it in bash as well.
### Problem #1
subprocess.call(["curl", "-LH", '"Accept: text/bibliography; style=bibtex"', doi])
should be
subprocess.call(["curl", "-LH", 'Accept: text/bibliography; style=bibtex', doi])
Else, quotes are applied twice and your `Accept: xxx` argument has quotes
around it, which is unexpected by `curl`
demo of the non-working quote part:
import subprocess,os
doi = "http://dx.doi.org/10.1103/PhysRevLett.117.126802"
#### this is wrong because of the quoting ####
p = subprocess.Popen(["curl", "-LH", '"Accept: text/bibliography; style=bibtex"', doi],stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
[output,error] = p.communicate()
print(output)
result:
b' some stats then ... <html><body><h1>400 Bad request</h1>\nYour browser sent an invalid request.\n</body></html>\n\r\n'
### Problems #2 and #3
I have implemented a retry mechanism which parses the output and retries until
correct output is found:
import subprocess,os,sys
doi = "http://dx.doi.org/10.1103/PhysRevLett.117.126802"
while True:
p = subprocess.Popen(["curl", "-LH", 'Accept: text/bibliography; style=bibtex', doi],stdout=subprocess.PIPE)
[output,error] = p.communicate()
output = output.decode("latin-1")
if "java.util.concurrent.FutureTask.run" in output:
# site crashed when responding: junk HTML output: retry
sys.stderr.write("Wrong answer: retrying\n")
else:
print(output)
break
result:
Wrong answer: retrying <==== here the site threw a big HTML exception output
@article{Chang_2016, title={Observation of the Quantum Anomalous Hall Insulator to Anderson Insulator Quantum Phase Transition and its Scaling Behavior}, volume={117}, ISSN={1079-7114}, url={http://dx.doi.org/10.1103/PhysRevLett.117.126802}, DOI={10.1103/physrevlett.117.126802}, number={12}, journal={Physical Review Letters}, publisher={American Physical Society (APS)}, author={Chang, Cui-Zu and Zhao, Weiwei and Li, Jian and Jain, J.âK. and Liu, Chaoxing and Moodera, Jagadeesh S. and Chan, Moses H.âW.}, year={2016}, month={Sep}}
So it works, it's just a site problem, but with my python wrapper you are able
to re-submit the request until it yields the proper answer.
|
Python: how to cyclically fetch a fixed number of elements from an array
Question: I'm trying to make a function that will always return a fixed number of
elements from an array that is larger than that fixed number:
def getElements(i,arr,size=10):
return cyclic array return
where `i` stands for index of array to fetch and `arr` represent the array of
all elements:
## Example:
a = [0,1,2,3,4,5,6,7,8,9,10,11]
b = getElements(9,a)
>> b
>> [9,10,11,0,1,2,3,4,5,6]
b = getElements(1,a)
>> b
>> [1,2,3,4,5,6,7,8,9,10]
where for `i = 9` the function returns `[9:11] + [0:7]` to complete **10
elements**; with `i = 1` there is no need to cycle the array, it just returns `[1:11]`.
Thanks for the help.
## Initial code (not working):
def getElements(i,arr,size=10):
total = len(arr)
start = i%total
end = start+size
return arr[start:end]
#not working because it is not cyclic
## EDIT:
I can't make any `import` for this script
Answer: You could return
array[i: i + size] + array[: max(0, i + size - len(array))]
For example
In [144]: array = list(range(10))
In [145]: array
Out[145]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [146]: i, size = 1, 10
In [147]: array[i: i + size] + array[: max(0, i + size - len(array))]
Out[147]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
In [148]: i, size = 2, 3
In [149]: array[i: i + size] + array[: max(0, i + size - len(array))]
Out[149]: [2, 3, 4]
In [150]: i, size = 5, 9
In [151]: array[i: i + size] + array[: max(0, i + size - len(array))]
Out[151]: [5, 6, 7, 8, 9, 0, 1, 2, 3]
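Wrapped back into the question's function, here is a hedged sketch that assumes `size` is at most `len(arr)`, as in the examples:
def getElements(i, arr, size=10):
    i = i % len(arr)  # normalize the start index
    return arr[i: i + size] + arr[: max(0, i + size - len(arr))]

a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(getElements(9, a))  # [9, 10, 11, 0, 1, 2, 3, 4, 5, 6]
print(getElements(1, a))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]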
|
How to get rid of row numbers, pd.read_excel?
Question: I am a complete beginner with Python. I am working on an assignment and I can't
seem to figure out how to get rid of the _row numbers_ from my excel
spreadsheet, while using `import pandas`.
This is what I get when I run the code:
0 $20,000,000 $159,000,000
1 $9,900,000 $35,600,000
2 $35,000,000 $45,000,000
3 $9,900,000 $35,600,000
4 $12,000,000 $9,400,000
But instead I just want:
$20,000,000 $159,000,000
$9,900,000 $35,600,000
$35,000,000 $45,000,000
$9,900,000 $35,600,000
$12,000,000 $9,400,000
This is inside of my main block for formatting:
if __name__ == "__main__":
file_name = "movie_theme.xlsx"
# Formatting numbers (e.g. $1,000,000)
pd.options.display.float_format = '${:,.0f}'.format
# Reading Excel file
df = pd.read_excel(file_name, convert_float = False)
Any suggestions on how to go about doing this?
Answer: Internally your dataframe always needs an index. If you get rid of the integer
index another column has to be your index and you should only use a data
column as your index if you need to for some special purpose.
When you write your dataframe to a file, e.g. with the `to_csv()` method, you
can always specify the keyword `index=False` and you won't get that index
written to your output.
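A hedged sketch of both options, reusing the `df` from the question (the output file name is illustrative):
# display the frame without the integer index
print(df.to_string(index=False))

# or write it out without the index
df.to_csv("movie_theme_formatted.csv", index=False)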
|
Camera calibration for Structure from Motion with OpenCV (Python)
Question: I want to calibrate a car video recorder and use it for 3D reconstruction with
Structure from Motion (SfM). The original size of the pictures I have taken
with this camera is 1920x1080. Basically, I have been using the source code
from the [OpenCV tutorial](http://opencv-python-
tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html)
for the calibration.
But there are some problems and I would really appreciate any help.
So, as usual (at least in the above source code), here is the pipeline:
1. Find the chessboard corner with `findChessboardCorners`
2. Get its subpixel value with `cornerSubPix`
3. Draw it for visualisation with `drawhessboardCorners`
4. Then, we calibrate the camera with a call to `calibrateCamera`
5. Call the `getOptimalNewCameraMatrix` and the `undistort` function to undistort the image
In my case, since the image is too big (1920x1080), I have resized it to
640x320 (during SfM, I will also use this image size, so I don't think it
will be any problem). I have also used 9x6 chessboard corners for the
calibration.
Here, the problem arose. After a call to the `getOptimalNewCameraMatrix`, the
distortion correction comes out totally wrong. Even the returned ROI is `[0,0,0,0]`. Below
is the original image and its undistorted version:
[![Original
image](http://i.stack.imgur.com/elJx1.jpg)](http://i.stack.imgur.com/elJx1.jpg)
[![Undistorted
image](http://i.stack.imgur.com/nEbeN.jpg)](http://i.stack.imgur.com/nEbeN.jpg)
You can see the image in the undistorted image is at the bottom left.
But, if I didn't call the `getOptimalNewCameraMatrix` and just straight
`undistort` it, I got a quite good image. [![Undistorted
image](http://i.stack.imgur.com/L08QS.jpg)](http://i.stack.imgur.com/L08QS.jpg)
So, I have three questions.
1. Why is this? I have tried with another dataset taken with the same camera, and also with my iPhone 6 Plus, but the results are the same as above.
2. Another question is, what does `getOptimalNewCameraMatrix` do? I have read the documentation several times but still cannot understand it. From what I have observed, if I don't call `getOptimalNewCameraMatrix`, my image retains its size but it is zoomed and blurred. Can anybody explain this function in more detail for me?
3. For SfM, I guess the call to `getOptimalNewCameraMatrix` is important? Because if not, the undistorted image would be zoomed and blurred, making the keypoint detection harder (in my case, I will be using the optical flow)?
I have tested the code with the opencv sample pictures and the results are
just fine.
Below is my source code:
from sys import argv
import numpy as np
import imutils # To use the imutils.resize function.
# Resizing while preserving the image's ratio.
# In this case, resizing 1920x1080 into 640x360.
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob(argv[1] + '*.jpg')
width = 640
for fname in images:
img = cv2.imread(fname)
if width:
img = imutils.resize(img, width=width)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chess board corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points (after refining them)
if ret == True:
objpoints.append(objp)
corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
imgpoints.append(corners2)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners2,ret)
cv2.imshow('img',img)
cv2.waitKey(500)
cv2.destroyAllWindows()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)
for fname in images:
img = cv2.imread(fname)
if width:
img = imutils.resize(img, width=width)
h, w = img.shape[:2]
newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))
# undistort
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imshow("undistorted", dst)
cv2.waitKey(500)
mean_error = 0
for i in xrange(len(objpoints)):
imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
mean_error += error
print "total error: ", mean_error/len(objpoints)
I have already asked someone on answers.opencv.org and he tried my code and my dataset
with success. I wonder what is actually wrong.
Answer: **Question #2:**
With `cv::getOptimalNewCameraMatrix(...)` you can compute a new camera matrix
according to the free scaling parameter `alpha`.
If `alpha` is set to `1`, then all the source image pixels are retained in the
undistorted image, that is, you'll see a black and curved border along the
undistorted image (like a pincushion). This scenario is unfortunate for several
computer vision algorithms, because new edges appear in the undistorted
image, for example.
By default `cv::undistort(...)` regulates the subset of the source image that
will be visible in the corrected image, and that's why only the sensible pixels
are shown there: no pincushion around the corrected image, but data loss.
Anyway, you are allowed to control the subset of the source image that will be
visible in the corrected image:
cv::Mat image, cameraMatrix, distCoeffs;
// ...
cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, image.size(), 1.0);
cv::Mat correctedImage;
cv::undistort(image, correctedImage, cameraMatrix, distCoeffs, newCameraMatrix);
**Question #1:**
It is just my feeling, but you should also take care, if you resize your image
after the calibration then the camera matrix must be also "scaled" as well,
for example:
cv::Mat cameraMatrix;
cv::Size calibSize; // Image during the calibration, e.g. 1920x1080
cv::Size imageSize; // Your current image size, e.g. 640x320
// ...
cv::Matx31d t(0.0, 0.0, 1.0);
t(0) = (double)imageSize.width / (double)calibSize.width;
t(1) = (double)imageSize.height / (double)calibSize.height;
    cv::Mat cameraMatrixScaled = cv::Mat::diag(cv::Mat(t)) * cameraMatrix;
This must be done only for the camera matrix, because the distortion
coefficients do not depend on the resolution.
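Since your script is in Python, here is a rough equivalent of the snippet above as a
sketch (the names `calib_size` and `image_size` are just illustrative, and `mtx` is
assumed to be the camera matrix returned by `calibrateCamera`):

    import numpy as np

    calib_size = (1920, 1080)   # (width, height) used during calibration
    image_size = (640, 360)     # (width, height) of the images you undistort

    scale = np.diag([float(image_size[0]) / calib_size[0],
                     float(image_size[1]) / calib_size[1],
                     1.0])
    mtx_scaled = scale.dot(mtx)   # scales fx, cx and fy, cy accordingly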
**Question #3:**
Anyway, I think `cv::getOptimalNewCameraMatrix(...)` is not important in your
case. The undistorted image can be zoomed and blurred because you remove the
effect of a non-linear transformation. If I were you, I would try the
optical flow without calling `cv::undistort(...)`. I think that even a
distorted image can contain a lot of good features for tracking.
|
Can I run Numpy (or other Python packages) on Android?
Question: I have implemented a python script, which imports Numpy and Pandas and I would
like to run this script on Android. To be more precise, I would like to embed
this script into an application.
I would like to know whether it is possible? If so, what are the best-
practices to implement it?
I would greatly appreciate any help!
Answer: If you do not want to build a website or app with Python/Pandas running as
a backend, you can use [Kivy](https://kivy.org/planet/2015/04/python-
on%C2%A0android/) as a [packager to run
Python](https://github.com/kivy/python-for-android) on Android. Further, if
you check out [the answer to this
question](http://stackoverflow.com/questions/33398723/kivy-numpy-android-
error) it points to the documentation for using numpy too - which is to use a
["recipe" for compilation](https://github.com/kivy/python-for-
android/tree/master/pythonforandroid/recipes/numpy).
If you use Kivy with a library that is not pure Python, these recipes need to
be used or created [if they do not
exist](https://github.com/kivy/buildozer/issues/343#issuecomment-218593658).
So with Pandas, you would need to build this recipe yourself. Even if you do
build it, the size of loading Pandas (not to mention the resources it can
require when performing analysis on dataframes) might be a bottleneck if you
try to include it directly in the app, and it still might be better to do this
in a backend setup.
|
Unable to Install Python Package
Question: In trying to install a python package via pip I get the error:
Failed building wheel for atari-py
Running setup.py clean for atari-py
Failed to build atari-py
Installing collected packages: atari-py, PyOpenGL
Running setup.py install for atari-py ... error
Complete output from command C:\Users\xxxxxx\AppData\Local\Continuum\Anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\xxxxxx\\appdata\\local\\temp\\pip-build-qhuh1q\\atari-py\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\xxxxxx\appdata\local\temp\pip-z8wnzs-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
Unable to execute 'make build -C atari_py/ale_interface -j 3'. HINT: are you sure `make` is installed?
error: [Error 2] The system cannot find the file specified
On my system, when I type make:
C:\Users\xxxxxx>make
'make' is not recognized as an internal or external command,
operable program or batch file.
So, clearly make is missing. But I installed make using conda:
C:\Users\xxxxxx>conda install mingw
Fetching package metadata .........
Solving package specifications: ..........
# All requested packages already installed.
# packages in environment at C:\Users\xxxxxx\AppData\Local\Continuum\Anaconda2:
#
mingw 4.7
So I have mingw 4.7 already installed.
How could I remove the error and get the package?
Many thanks for the help.
Answer: `make` is not in your PATH. Do `echo %PATH%` and check if the path to your
msys utilities is in there. Otherwise you can edit this variable by following
the instructions here: [Adding directory to PATH Environment Variable in
Windows](https://stackoverflow.com/questions/9546324/adding-directory-to-path-
environment-variable-in-windows)
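If you want to check from Python which `make` executable (if any) gets picked up, a
quick sketch:

    # Prints the full path of the make found on PATH, or None if there is none.
    from distutils.spawn import find_executable
    print(find_executable("make"))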
|
How to download this GIF(dynamic) by Python?
Question: I give an url as example:
http://ww4.sinaimg.cn/large/a7bf601fjw1f7jsbj34a1g20kc0bdnph.gif
You can see it in your browser.
Now I want to download it. I **have tried** :
1. `urllib.urlretrieve(imgurl, filepath)`: failed, got an "error" picture.
2. `wget.download(imgurl)`: failed, got an "error" picture.
3. `r = requests.get(imgurl, stream=True)`, then `img = PIL.Image.open(StringIO(r.content))` and `img.save(filepath)`: failed, got a static picture, I mean, just one frame.
**So what should I do?**
Answer: This works just fine for me to get the animated gif:
>>> import requests
>>> uri = 'http://ww4.sinaimg.cn/large/a7bf601fjw1f7jsbj34a1g20kc0bdnph.gif'
>>> with open('/tmp/pr0n.gif', 'wb') as f:
... f.write(requests.get(uri).content)
...
Happy fapping!
|
Substitute Function call with sympy
Question: I want to receive input from a user, parse it, then perform some substitutions
on the resulting expression. I know that I can use
`sympy.parsing.sympy_parser.parse_expr` to parse arbitrary input from the
user. However, I am having trouble substituting in function definitions. Is it
possible to make substitutions in this manner, and if so, how would I do so?
The overall goal is to allow a user to provide a function of `x`, which is
then used to fit data. `parse_expr` gets me 95% of the way there, but I would
like to provide some convenient expansions, such as shown below.
import sympy
from sympy.parsing.sympy_parser import parse_expr
x,height,mean,sigma = sympy.symbols('x height mean sigma')
gaus = height*sympy.exp(-((x-mean)/sigma)**2 / 2)
expr = parse_expr('gaus(100, 5, 0.2) + 5')
print expr.subs('gaus',gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus'),gaus) # prints 'gaus(100, 5, 0.2) + 5'
print expr.subs(sympy.Symbol('gaus')(height,mean,sigma),gaus) # prints 'gaus(100, 5, 0.2) + 5'
# Desired output: '100 * exp(-((x-5)/0.2)**2 / 2) + 5'
This is done using python 2.7.9, sympy 0.7.5.
Answer: After some experimentation, while I did not find a built-in solution, it was
not difficult to build one that satisfies simple cases. I am not a sympy
expert, and so there may be edge cases that I haven't considered.
import sympy
from sympy.core.function import AppliedUndef
def func_sub_single(expr, func_def, func_body):
"""
Given an expression and a function definition,
find/expand an instance of that function.
Ex:
linear, m, x, b = sympy.symbols('linear m x b')
func_sub_single(linear(2, 1), linear(m, b), m*x+b) # returns 2*x+1
"""
# Find the expression to be replaced, return if not there
for unknown_func in expr.atoms(AppliedUndef):
if unknown_func.func == func_def.func:
replacing_func = unknown_func
break
else:
return expr
# Map of argument name to argument passed in
arg_sub = {from_arg:to_arg for from_arg,to_arg in
zip(func_def.args, replacing_func.args)}
# The function body, now with the arguments included
func_body_subst = func_body.subs(arg_sub)
# Finally, replace the function call in the original expression.
return expr.subs(replacing_func, func_body_subst)
def func_sub(expr, func_def, func_body):
"""
Given an expression and a function definition,
find/expand all instances of that function.
Ex:
linear, m, x, b = sympy.symbols('linear m x b')
func_sub(linear(linear(2,1), linear(3,4)),
linear(m, b), m*x+b) # returns x*(2*x+1) + 3*x + 4
"""
if any(func_def.func==body_func.func for body_func in func_body.atoms(AppliedUndef)):
raise ValueError('Function may not be recursively defined')
while True:
prev = expr
expr = func_sub_single(expr, func_def, func_body)
if prev == expr:
return expr
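Applied to the original example, a quick usage sketch (using `sympy.Function` for the
undefined function, which should match what `parse_expr` produces; the printed form is
approximate):

    from sympy.parsing.sympy_parser import parse_expr

    x, height, mean, sigma = sympy.symbols('x height mean sigma')
    gaus_body = height * sympy.exp(-((x - mean) / sigma)**2 / 2)
    gaus_def = sympy.Function('gaus')(height, mean, sigma)

    expr = parse_expr('gaus(100, 5, 0.2) + 5')
    print func_sub(expr, gaus_def, gaus_body)
    # roughly: 100*exp(-12.5*(x - 5)**2) + 5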
|
How to use libraries, running at docker
Question: Can anybody please explain how to use a library whose image is running
in Docker? And how is the process constructed in general: how does Python access
the image, or vice versa (I mean, it's not in Python's "lib" folder,
right)? And simply, what should I do to be able to do `import library`, so
it is ready to use? For example, this one:
<https://hub.docker.com/r/kaixhin/caffe/> Using Ubuntu 16. Thanks.
Answer: Use Python's amazing [VirtualEnv](https://virtualenv.pypa.io/en/stable/)
module to bundle imports, essentially making them available at the image's
build time.
Then, in your application, use the Python "binary" of that virtualenv and
enjoy hassle-free imports :)
Here's a [link](https://www.theodo.fr/blog/2015/04/docker-and-virtualenv-a-
clean-way-to-locally-install-python-dependencies-with-pip-in-docker/) I've
found about how some developer achieved the same thing, while bundling
everything inside a docker image.
|
Need a way to test SSH with a timeout
Question: This is my current code to test if a host is SSH-able. It works just fine when
the host is up, with or without the SSH service running. However, it seems to just
hang when the host crashes, which is exactly the case where I need
it to give me a quick True/False response. Due to OS and other dependencies,
we need to keep the Python version at 2.6 for now. So I need a way to get this
function to work with a timeout of 1-2 s.
import commands
def test_ssh(host):
output = commands.getstatusoutput("ssh " + host + " hostname")
if output[0] == 0:
return True
else:
print(host + " not accessible via SSH!")
return False
Answer: You need to determine whether it is actually a connect timeout, or whether it can
connect but the server accepts the connection and doesn't send anything.
To test manually, you can run `telnet <host> 22` and see if you get a
response from the server at all; you should see something like this if it
connects:
$ telnet localhost 22
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.2
If it connects and you don't get any response, then I think you will have to
try a test using sockets in python.
You can find info here: <https://docs.python.org/3/library/socket.html>
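For example, a minimal sketch of such a socket check with a hard timeout (Python 2.6
compatible; the function name is just illustrative):

    import socket

    def ssh_reachable(host, port=22, timeout=2.0):
        # True if the host accepts a TCP connection on the SSH port and
        # sends an SSH banner within `timeout` seconds.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            banner = sock.recv(64)          # e.g. "SSH-2.0-OpenSSH_7.2"
            return banner.startswith("SSH-")
        except (socket.timeout, socket.error):
            return False
        finally:
            sock.close()

Note this only checks that an SSH daemon answers; it does not verify that you can
actually log in.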
|
Why isn't this element visible (Selenium + Python/Django 1.9)
Question: I am using webdriver to fill out a form in Django. The first field, name, is
found and filled out. But the second field is somehow not being found. Here's
the script I'm using...
name = browser.find_element_by_id("name")
value = browser.find_element_by_id("value")
submit = browser.find_element_by_id("offer-submit")
name.send_keys(address)
name.send_keys(Keys.TAB)
# I tried having the browser press tab to see if it becomes visible. no luck.
value.send_keys(random.randrange(1, 100, 2))
Here's the error traceback:
Traceback (most recent call last):
File "populate_map.py", line 71, in <module>
value.send_keys(random.randrange(1, 100, 2))
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 320, in send_keys
self._execute(Command.SEND_KEYS_TO_ELEMENT, {'value': keys_to_typing(value)})
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 461, in _execute
return self._parent.execute(command, params)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with
Stacktrace:
at fxdriver.preconditions.visible (file:///c:/users/owner/appdata/local/temp/tmprd4j_t/extensions/fxdriver@googlecode.com/components/command-processor.js:10092)
at DelayedCommand.prototype.checkPreconditions_ (file:///c:/users/owner/appdata/local/temp/tmprd4j_t/extensions/fxdriver@googlecode.com/components/command-processor.js:12644)
at DelayedCommand.prototype.executeInternal_/h (file:///c:/users/owner/appdata/local/temp/tmprd4j_t/extensions/fxdriver@googlecode.com/components/command-processor.js:12661)
at fxdriver.Timer.prototype.setTimeout/<.notify (file:///c:/users/owner/appdata/local/temp/tmprd4j_t/extensions/fxdriver@googlecode.com/components/command-processor.js:625)
The fields are being created with this form:
class OfferForm(forms.ModelForm):
service = forms.BooleanField()
class Meta:
model = Offer
fields = [
"name",
"value",
"description",
"tags",
"location",
"code",
"service",
# "duration"
"icon",
]
widgets = {
'name': forms.TextInput(
attrs={'id': 'name', 'class': 'data', 'style': 'font-family: VT323; font-size: 60%', 'required': True, 'placeholder': 'name'}
),
'value': forms.TextInput(
attrs={'id': 'value', 'class': 'data', 'style': 'font-family: VT323; font-size: 60%', 'required': True, 'placeholder': 'value'}
),
}
Reading [this question](http://stackoverflow.com/questions/6101461/how-to-
force-selenium-webdriver-to-click-on-element-which-is-not-currently-visib), I
see that there should be a good reason that the value isn't visible -- i.e. it
is being made invisible with a style attribute. But when I bring up firebug, I
can't see anything to indicate that it's invisible.
Here's my template code: (I should mention that #offer is clicked earlier in
the script, which activates the display -- it's not none by the time the
webdriver is looking for it.)
<script>
$("#offer").click(function(){
$("#find-offer").css("display", "none");
$("#make-offer").css("display", "block");
$("#popular-offers").css("display", "block")
$(".welcome").css("display", "none");
});
</script>
<div id="make-offer" style="display: none">
<p>Make an offer</p>
<form name="offer-form" action="" method="post" enctype="multipart/form-data">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" id="offer-submit" class="data" style="font-family: Fira Mono; font-size: 70%; padding: 10px; position: absolute" value="Submit" />
</form>
Any other ideas?
Answer: In my experience, [`ActionChains`](http://selenium-
python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains)
are often the answer when I have an issue like this in selenium. It is worth a
try in this case:
from selenium.webdriver.common.action_chains import ActionChains
ActionChains(browser).move_to_element(value).click().send_keys(random.randrange(1, 100, 2)).perform()
This will first move to the element, click to focus the input, and then send
the keys. If the element is not being seen as visible this might raise the
same exception, but it is worth a shot.
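If the field only becomes visible after the click handler has run, another common
pattern (not from the original answer, just a sketch) is to wait explicitly for
visibility before sending keys:

    import random
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Wait up to 10 seconds for the #value input to become visible.
    value = WebDriverWait(browser, 10).until(
        EC.visibility_of_element_located((By.ID, "value")))
    value.send_keys(str(random.randrange(1, 100, 2)))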
|
Python 3 Regex and Unicode Emotes
Question: Using Python 3, a simple script like the following should run as intended, but
appears to choke on unicode emote strings:
import re
phrase = "(╯°□°)╯ ︵ ┻━┻"
pattern = r'\b{0}\b'.format(phrase)
text = "The quick brown fox got tired of jumping over dogs and flipped a table: (╯°□°)╯ ︵ ┻━┻"
if re.search(pattern, text, re.IGNORECASE) != None:
print("Matched!")
If I substitute the word "fox" for the contents of the phrase variable, the
pattern does indeed match. I've been puzzled as to why it doesn't like this
particular string though, and my expeditions into the manual and Stack
Overflow haven't illuminated the issue. From all I can tell, Python 3 should
handle this without issue.
Am I missing something painfully obvious?
Edit: Also, dropping the boundaries (\b) doesn't affect the ability to match
the string either.
Answer:
(╯°□°)╯ ︵ ┻━┻
This expression has brackets in it; you need to escape them. Otherwise they
are interpreted as a group.
In [24]: re.search(r'\(╯°□°\)╯ ︵ ┻━┻', text, re.IGNORECASE)
Out[24]: <_sre.SRE_Match object; span=(72, 85), match='(╯°□°)╯ ︵ ┻━┻'>
In [25]: re.findall(r'\(╯°□°\)╯ ︵ ┻━┻', text, re.IGNORECASE)
Out[25]: ['(╯°□°)╯ ︵ ┻━┻']
[Escape the regex string](https://docs.python.org/3/library/re.html#re.escape)
properly and change your code to:
import re
phrase = "(╯°□°)╯ ︵ ┻━┻"
pattern = re.escape(phrase)
text = "The quick brown fox got tired of jumping over dogs and flipped a table: (╯°□°)╯ ︵ ┻━┻"
if re.search(pattern, text, re.IGNORECASE) != None:
print("Matched!")
And then it will work as expected:
$ python3 a.py
Matched!
|
Python setting global variables in different ways in 2.7
Question: I was trying to practice a concept related to setting global variables using
different methods, but the following example is not working as I
expected.
#Scope.py
import os
x = 'mod'
def f1() :
global x
x = 'in f1'
def f2() :
import scope
scope.x = 'in f2'
def print_x() :
print x
def f3() :
import sys
sc = sys.modules['scope']
sc.x = 'in f3'
if __name__ == "__main__" :
f1()
print_x()
f2()
print_x()
f3()
print_x()
It gives the following result
in f1
in f1
in f1
While, as per my understanding, it should result in
in f1
in f2
in f3
Can someone help me understand what I am doing wrong?
Answer: Check out this modified piece of code.
x = 'mod'
def f1():
global x
x = 'in f1'
def f2():
import scope
scope.x = 'in f2'
return scope
def print_x():
print(x)
def f3():
import sys
sc = sys.modules['__main__']
sc.x = 'in f3'
return sc
if __name__ == "__main__":
f1()
print_x()
sc = f2()
print_x()
print(sc.x)
sc = f3()
print_x()
print(sc.x)
The thing is that in the original `f2()` you actually import your `scope`
module under the name `scope` and modify that module's variable, while the
following `print_x()` refers to the unchanged `x` in `__main__`. In `f3()`, you
reference your module by the wrong name: to modify it, you should use `__main__`
here. With `scope`, you're actually referencing the module that was imported
in `f2()` (try removing the `f2()` call), which is clearly not what you want.
|
Can variables be stored in a function for later use?
Question: Can Python store variables in a function for later use?
This is a statistics calculator (unfinished):
#Statistics Calculator
import random
def main(mod):
print ''
if (mod == '1'):
print 'Mode 1 activated'
dat_entry = dat()
elif (mod == '2'):
print 'Mode 2 activated'
array = rndom(dat_entry)
elif (mod == '3'):
print 'Mode 3 activated'
array = user_input(dat_entry)
elif (mod == '4'):
disp(array)
elif (mod == '5'):
mean = mean(array)
elif (mod == '6'):
var = var(array)
elif (mod == '7'):
sd = sd(array, var)
elif (mod == '8'):
rang(array)
elif (mod == '9'):
median(array)
elif (mod == '10'):
mode(array)
elif (mod == '11'):
trim(array)
print ''
def dat():
dat = input('Please enter the number of data entries. ')
return dat
def rndom(dat_entry):
print 'This mode allows the computer to generate the data entries.'
print 'It ranges from 1 to 100.'
cntr = 0
for cntr in range(cntr):
array[cntr] = random.randint(1,100)
print 'Generating Data Entry', cntr + 1
def rndom(dat_entry):
print 'This mode allows you to enter the data.'
cntr = 0
for cntr in range(cntr):
array[cntr] = input('Please input the value of Data Entry ',
cntr + 1, ': ')
run = 0 #Number of runs
mod = '' #Mode
cont = 'T'
while (cont == 'T'):
print 'Statistics Calculator'
print 'This app can:'
print '1. Set the number of data entries.'
print '2. Randomly generate numbers from 1 to 100.'
print '3. Ask input from you, the user.'
print '4. Display array.'
print '5. Compute mean.'
print '6. Compute variance.'
print '7. Compute standard deviation.'
print '8. Compute range.'
print '9. Compute median.'
print '10. Compute mode.'
print '11. Compute trimmed mean.'
print ''
if (run == 0):
print 'You need to use Mode 1 first.'
mod = '1'
elif (run == 1):
while (mod != '2' or mod != '3'):
print 'Please enter Mode 2 or 3 only.'
mod = raw_input('Please enter the mode to use (2 or 3): ')
if (mod == '2' or mod == '3'):
break
elif (run > 1):
mod = raw_input('Please enter the mode to use (1-11): ')
# Error line
main(mod)
cont = raw_input("Please enter 'T' if and only if you want to continue"
" using this app. ")
run += 1
print ''
Here is the output when I run it (trimmed):

    Mode 2 activated
    Traceback (most recent call last):
      File "F:\Com Sci Activities\Statistics.py", line 81, in <module>
        main(mod)
      File "F:\Com Sci Activities\Statistics.py", line 10, in main
        array = rndom(dat_entry)
    UnboundLocalError: local variable 'dat_entry' referenced before assignment
Please tell me the reason why...
Answer: There's a problem with the logic. Going straight to mode 2 is what
causes this error, because `dat_entry` is undefined at that point.
You've selected mode 2, but at this point `main` doesn't know what `dat_entry` is:
elif (mod == '2'):
print 'Mode 2 activated'
array = rndom(dat_entry)
You should define `dat_entry` somewhere in your main loop, or around here,
once the user has selected option 2:
if (mod == '2' or mod == '3'):
break
|
python django run bash script in server
Question: I would like to create a website/app to run a bash script located on a server.
Basically I want this website for:
* Upload a file
* select some parameters
* Run a bash script taking the input file and the parameters
* Download the results
I know you can do this with PHP, JavaScript... but I have never programmed in
these languages. However, I can program in Python. I have used the PyQt library
in Python for similar purposes.
Can this be done with Django, or should I start learning PHP and JavaScript? I
cannot find any tutorial for this specific task in Django.
Answer: This can be done in Python using the Django framework.
First create a form including a `FileField` and the fields for the other
parameters:
from django import forms
class UploadFileForm(forms.Form):
my_parameter = forms.CharField(max_length=50)
file = forms.FileField()
Include the `UploadFileForm` in your view and call your function for handling
the uploaded file:
from django.http import HttpResponseRedirect
from django.shortcuts import render
from .forms import UploadFileForm
# Imaginary function to handle an uploaded file.
from somewhere import handle_uploaded_file
def upload_file(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
my_parameter = form.cleaned_data['my_parameter']
# Handle the uploaded file
                results = handle_uploaded_file(request.FILES['file'], my_parameter)
# Clear the form and parse the results
form = UploadFileForm()
return render(request, 'upload.html', {'form': form, 'results': results})
else:
form = UploadFileForm()
return render(request, 'upload.html', {'form': form})
Create the function to handle the uploaded file and call your bash script:
import subprocess
import os
def handle_uploaded_file(f, my_parameter):
file_path = os.path.join('/path/to/destination/', f.name)
# Save the file
with open(file_path, 'wb+') as destination:
for chunk in f.chunks():
destination.write(chunk)
        # Call your bash script with the file path and parameter as arguments
        output = subprocess.check_output(['./my_script.sh', str(file_path), str(my_parameter)])
return output
Check out <https://docs.djangoproject.com/en/1.10/topics/http/file-uploads/>
for more examples and instructions on how the handle file uploads in Django.
|
python - import namespace
Question: If I have a library like:
MyPackage:
* `__init__.py`
* SubPackage1
* `__init__.py`
* moduleA.py
* moduleB.py
* SubPackage2
* `__init__.py`
* moduleC.py
* moduleD.py
But I want users to be able to import moduleA directly, like `import MyPackage.moduleA`.
Can I implement this by writing some rules in `MyPackage/__init__.py`?
Answer: In `MyPackage/__init__.py`, import the modules you want available from the
subpackages:
from __future__ import absolute_import # Python 3 import behaviour
from .SubPackage1 import moduleA
from .SubPackage2 import moduleD
This makes both `moduleA` and `moduleD` globals in `MyPackage`. You can then
use:
from MyPackage import moduleA
and that'll bind to the same module, or do
import MyPackage
    MyPackage.moduleA
to directly access that module.
However, you _can't_ use
from MyPackage.moduleA import somename
as that requires `moduleA` to live directly in MyPackage; a global in the
`__init__` won't cut it there.
|
How to perform input redirection in python like the bash >?
Question: I want to feed text files to a C program. With bash I can do `./prog < file`;
how would you do the same in Python?
Answer: You can do that via
[`subprocess.check_call`](https://docs.python.org/3/library/subprocess.html#subprocess.check_call):
import subprocess
subprocess.check_call(["prog"], stdin=open("/path/to/file"))
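If you also want the file handle closed deterministically, a small variant of the
same call:

    import subprocess

    with open("/path/to/file") as f:
        subprocess.check_call(["prog"], stdin=f)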
|
Python - download video from indirect url
Question: I have a link like this
https://r9---sn-4g57knle.googlevideo.com/videoplayback?id=10bc30daeba89d81&itag=22&source=picasa&begin=0&requiressl=yes&mm=30&mn=sn-4g57knle&ms=nxu&mv=m&nh=IgpwcjA0LmZyYTE2KgkxMjcuMC4wLjE&pl=19&sc=yes&mime=video/mp4&lmt=1439597374686662&mt=1474140191&ip=84.56.35.53&ipbits=8&expire=1474169270&sparams=ip,ipbits,expire,id,itag,source,requiressl,mm,mn,ms,mv,nh,pl,sc,mime,lmt&signature=6EF8ABF841EA789F5314FC52C3C3EA8698A587C9.9297433E91BB6CBCBAE29548978D35CDE30C19EC&key=ck2
which is a temporarily generated redirect from (a link like) this link:
<https://2.bp.blogspot.com/bO0q678cHRVZqTDclb33qGUXve_X1CRTgHMVz9NUgA=m22>
(so the first link won't work in a couple of hours)
How can I download the video from the _googlevideo_ site with Python? I
already tried youtube-dl because of
[this](http://stackoverflow.com/a/33818090/5635812) answer, but it isn't
working for me.
The direct URL would already help me a lot!
Answer: You can use [pycurl](http://pycurl.io/docs/latest/quickstart.html)
    #!/usr/bin/env python
import sys
import pycurl
c = pycurl.Curl()
c.setopt(c.FOLLOWLOCATION, 1)
c.setopt(c.URL, sys.argv[1])
    with open(sys.argv[2], 'wb') as f:
c.setopt(c.WRITEFUNCTION, f.write)
c.perform()
Usage:
$ chmod +x a.py
$ ./a.py "https://2.bp.blogspot.com/bO0q678cHRVZqTDclb33qGUXve_X1CRTgHMVz9NUgA=m22" output.mp4
$ file output.mp4
output.mp4: ISO Media, MP4 v2 [ISO 14496-14]
|
why two points can't show in the figure (matplotlib)?
Question: Figure 1 shows the data points: [figure](http://i.stack.imgur.com/j7b9r.png)
I drew the figure with matplotlib in Python, but the data points are not
fully displayed: two points are missing from the lower-right corner of
the figure. Their coordinates are (-0.6731984257692413, 6.0) and
(-0.7105983383119769, 7.0). I don't know why. Could anyone help?
import matplotlib.pyplot as plt
theta = [0.8975979010256552, 2.6927937030769655, 0, -0.6731984257692413, 0.0, -1.7951958020513104, -0.8975979010256552, -1.7951958020513104, -2.6927937030769655, -0.5235987755982988, -0.59839860068377, -0.8975979010256552, -1.1967972013675403, 1.7951958020513104, -0.5609986881410344, -0.59839860068377, -0.6357985132265057, -0.7105983383119769]
r = [1.0, 0.5, 0, 6.0, 1.0, 1.5, 1.0, 1.0, 1.0, 4.5, 4.0, 4.0, 4.0, 0.5, 4.5, 4.5, 4.5, 7.0]
colors = [1.13290242331, 0.81108163706000003, 0, 0.94180655750400011, 0.90356396220000001, 0.946707749135, 1.09650064153, 1.2068422679700002, 1.1150923324999999, 2.4619798379700004, 0.83030335877799999, 0.87957520389799992, 0.872155341769, 0.92537488526299994, 2.70431872671, 1.10024483211, 0.89817718522000012, 1.1547139643100002]
plt.subplot(111,polar=True)
cc=plt.scatter(theta,r,c=colors,cmap=plt.cm.hsv)
cc.set_alpha(0.75)
plt.grid(color='y', alpha=0.8, linestyle='dashed', linewidth=1)
plt.colorbar()
plt.thetagrids([30])
plt.show()
Answer: With the help of Andras Deak, I used `plt.ylim([0, max(r)+1])` to solve this
problem. Thanks.
|
Python: Counting words from a given file starting with 'L'
Question: I am new to Python. I want to know how to count the number of words **starting
with a particular letter, say 'L'**, in a text file.
Answer: [str.startswith(prefix[, start[,
end]])](https://docs.python.org/2/library/stdtypes.html)
Give this a shot, but read the words from your file instead of a hard-coded list;
there are also a few other ways.

    words = ["apple", "banana", "custard", "shoe", "ant", "police", "python"]
    newList = []
    for word in words:
        if word.startswith('a'):
            newList.append(word)
    print newList
    ['apple', 'ant']
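Adapted to the original question, counting words in a text file that start with 'L', a
minimal sketch (the file name is just a placeholder):

    count = 0
    with open("input.txt") as f:
        for line in f:
            for word in line.split():
                if word.startswith(('L', 'l')):   # drop 'l' if case matters
                    count += 1
    print count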
|
np arrays being immutable - "assignment destination is read-only"
Question: **FD** - I am a Python newb as well as a Stack Overflow newb, as you can tell. I
have edited the question based on comments.
My goal is to read a set of PNG files, create Image(s) with
Image.open('filename') and convert them to simple 2D arrays with only 1s and
0s. The PNG is of the format RGBA with mostly only 255 and 0 as values. Quite
often in the images, the edges are grey scale values, which I would like to
avoid in the 2D array.
I created the 2D array from the image using np.asarray(Image), taking only the
'Red' channel. In each 2D image array, I would like to set the cell
value to 1 if the current value is non-zero.
So I loop over the 2D array, check each cell value and try to set it to
1.
It gives me an error indicating that the array is read-only. I read through
several Stack Overflow threads discussing that np arrays are immutable, and it
is still a bit unclear. I use PIL and numpy.
Thanks @user2314737, I will attempt to set that flag. @Eric, thanks for your
comments as well.
from PIL import Image
import numpy as np
The relevant code:
prArray = [np.asarray(img)[:, :, 0] for img in problem_images]
for img in prArray:
for x in range(184):
for y in range(184):
if img[x][y] != 0:
img[x][y] = 1
The error "assignment destination is read-only" is in the last line.
Thank you everyone for help.
Answer: Check if the array is writable with
>>> img.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : False
ALIGNED : True
UPDATEIFCOPY : False
If `WRITEABLE`is false, change it with
img.setflags(write=1)
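As a side note, the nested loops can be avoided entirely: building a fresh (and
therefore writable) array also sidesteps the read-only view returned by `np.asarray`.
A sketch under the same assumptions as the code in the question:

    # 1 where the red channel is non-zero, 0 elsewhere; the result is a new,
    # writable array, so no flag changes are needed.
    prArray = [(np.asarray(img)[:, :, 0] != 0).astype(np.uint8)
               for img in problem_images]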
|
Python: Does 'kron' create sparse matrix when I use ' from scipy.sparse import * '?
Question: For the code below, Mat is an array-type matrix,
a = kron(Mat,ones((8,1)))
b = a.flatten()
If I don't import the scipy.sparse package, `a` is an **array-type matrix** and `b`
can be executed. If I use 'from scipy.sparse import *', `a` is a
**sparse-type matrix** and `b` **cannot** be executed. Can someone tell me why
`kron` gives different results? And can flatten() be applied to a
sparse-type matrix?
Answer: `from module import *` is generally considered bad form in application code,
for the reason you're seeing: it makes it very hard to tell which module a
function comes from, especially if you do this for more than one module.
Right now, you have:
from numpy import *
# from scipy.sparse import *
a = kron(Mat,ones((8,1)))
b = a.flatten()
Uncommenting the second line might affect where `ones` and `kron` come from.
But unless you look up whether sparse redefines these, you won't know. Better
to write it like this:
import numpy as np
from scipy import sparse
a = np.kron(Mat, np.ones((8,1)))
b = a.flatten()
And then you can swap `np` for `sparse` where you want to use the sparse
version, and the reader will immediately know which one you're using. And
you'll get an error if you try to use a sparse version when in fact there
isn't one.
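To the second part of the question: `scipy.sparse.kron` returns a sparse matrix, which
does not have a `flatten()` method, so you have to convert it to a dense array first.
A small sketch:

    import numpy as np
    from scipy import sparse

    Mat = np.arange(6).reshape(2, 3)

    a_dense = np.kron(Mat, np.ones((8, 1)))        # ndarray, has .flatten()
    b_dense = a_dense.flatten()

    a_sparse = sparse.kron(Mat, np.ones((8, 1)))   # sparse matrix
    b_sparse = a_sparse.toarray().flatten()        # densify first, then flatten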
|
Anaconda install pyipopt: libipopt.so.1
Question: I'm completely new to Python and most aspects of compiling C.
My default python interpreter is the anaconda interpreter for python 2.7. I'm
trying to install pyipopt following these instructions:
<https://github.com/xuy/pyipopt>. Pyipopt installed to
`/usr/local/lib/python2.7/dist-packages/pyipopt`, but when I try `import
pyipopt` I get an error saying that pyipopt wasn't found.
I then tried copying the installed folder into Anaconda's pkgs folder. At
first it said `Error: import pyipopt ImportError: can not find libipopt.so.1`,
but then it went back to saying that pyipopt wasn't found after I logged out
and back in.
I then tried copying the installed folder into
`{anaconda_dir}/lib/python2.7/site-packages`, but it again said `Error: import
pyipopt ImportError: can not find libipopt.so.1`. The troubleshooting section
on the github page says to copy `libipopt.so.1` into a folder accessible to
ld, but I'm not really sure which folder would fit the bill.
Could someone give a brief explanation or link on how python finds C libraries
or other .so libraries? Thanks.
Answer: The guide you've provided instructs the user to install using `sudo`. When one
does that, the package is installed into the system Python. And since you are using
Python from Anaconda and not from the system, Anaconda cannot find `pyipopt`,
since it is not on its path.
I suggest that you try installing using:
$ python setup.py build
$ python setup.py install
Note that I removed the `sudo`.
Regarding the `libipopt.so.1` library, maybe [this
answer](http://stackoverflow.com/a/37975815/2029132) from @alk can help you.
|
Python program using class programs to simulate the roll of two dice
Question: My program is supposed to simulate both the roll of a single die
and the roll of two dice, but I am having issues. Here is what my code looks
like:
import random
#Dice class simulates both a single and two dice being rolled
#sideup data attribute with 'one'
class Dice:
#sideup data attribute with 'one'
def __init__(self):
self.sideup='one'
def __init__(self):
self.twosides='one and two'
#the toss method generates a random number
#in the range of 1 through 6.
def toss(self):
if random.randint(1,6)==1:
self.sideup='one'
elif random.randint(1,6)==2:
self.sideup='two'
elif random.randint(1,6)==3:
self.sideup='three'
elif random.randint(1,6)==4:
self.sideup='four'
elif random.randint(1,6)==5:
self.sideup='five'
else:
self.sideup='six'
def get_sideup(self):
return self.sideup
def doubletoss(self):
if random.randint(1,6)==1 and random.randint(1,6)==2:
self.twosides='one and two'
elif random.randint(1,6)==1 and random.randint(1,6)==3:
self.twosides='one and three'
elif random.randint(1,6)==1 and random.randint(1,6)==4:
self.twosides='one and four'
elif random.randint(1,6)==1 and random.randint(1,6)==5:
self.twosides='one and five'
elif random.randint(1,6)==1 and random.randint(1,6)==6:
self.twosides='one and six'
elif random.randint(1,6)==1 and random.randint(1,6)==1:
self.twosides='one and one'
def get_twosides(self):
return self.twosides
#main function
def main():
#create an object from the Dice class
my_dice=Dice()
#Display the siide of the dice is factory
print('This side is up',my_dice.get_sideup())
#toss the dice
print('I am tossing the dice')
my_dice.toss()
#toss two dice
print('I am tossing two die')
my_dice.doubletoss()
#Display the side of the dice that is facing up
print('this side is up:',my_dice.get_sideup())
#display both dices with the sides of the dice up
print('the sides of the two dice face up are:',my_dice.get_twosides())
main()
Here is the output of my program when I run it:
> Traceback (most recent call last):
>   File "C:/Users/Pentazoid/Desktop/PythonPrograms/DiceClass.py", line 79, in <module>
>     main()
>   File "C:/Users/Pentazoid/Desktop/PythonPrograms/DiceClass.py", line 61, in main
>     print('This side is up',my_dice.get_sideup())
>   File "C:/Users/Pentazoid/Desktop/PythonPrograms/DiceClass.py", line 32, in get_sideup
>     return self.sideup
>
> AttributeError: 'Dice' object has no attribute 'sideup'
What am I doing wrong?
Answer: You have two `__init__` methods. The second replaces the first, which discards
your definition of `sideup`.
Change to:
def __init__(self):
self.sideup='one'
self.twosides='one and two'
|
Comments not showing in post_detail view
Question: I am doing a project in django 1.9.9/python 3.5, for exercise reasons I have a
blog app, an articles app and a comments app. Comments app has to be genericly
related to blog and articles. My problem is that the templates are not showing
my comments. Comments are being created and related to their post/article
because I can see it in admin, so it is not a comment creation problem. They
are simply not showing in my template.
My comments/models.py:
from django.db import models
class Comment(models.Model):
post = models.ForeignKey('blog.Entry',related_name='post_comments', blank=True, null=True)
article = models.ForeignKey('articles.Article', related_name='article_comments', blank=True, null=True)
body = models.TextField()
created_date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.body
My commments/views.py:
from django.utils import timezone
from django.shortcuts import render, get_object_or_404
from django.shortcuts import redirect
from .forms import CommentForm
from .models import Comment
from blog.models import Entry
from articles.models import Article
from blog.views import post_detail
from articles.views import article_detail
def article_new_comment(request, pk):
article = get_object_or_404(Article, pk=pk)
if request.method == "POST":
form = CommentForm(request.POST)
if form.is_valid():
comment = form.save(commit=False)
comment.article = article
comment.created_date=timezone.now()
comment.save()
return redirect(article_detail, pk=article.pk)
else:
form=CommentForm()
return render(request, 'comments/add_new_comment.html', {'form': form})
def blog_new_comment(request, pk):
post = get_object_or_404(Entry, pk=pk)
if request.method == "POST":
form = CommentForm(request.POST)
if form.is_valid():
comment = form.save(commit=False)
comment.post = post
comment.created_date = timezone.now()
comment.save()
return redirect(post_detail, pk=post.pk)
else:
form=CommentForm()
return render(request, 'comments/add_new_comment.html', {'form': form})
And here is my post_detail.html, where comments should be. I will not post
article_detail.html because they are exactly the same:
{% extends 'blog/base.html' %}
{% block content %}
<div class="post">
{% if post.modified_date %}
<div class="date">
{{ post.modified_date }}
</div>
{% else %}
<div class="date">
{{ post.published_date }}
</div>
{% endif %}
<a class="btn btn-default" href="{% url 'post_edit' pk=post.pk %}"><span class="glyphicon glyphicon-pencil"> Edit Post </span></a>
<h1>{{ post.title }}</h1>
<p>{{ post.text|linebreaksbr }}</p>
<hr>
<a class="btn btn-default" href="{% url 'new_blog_comment' pk=post.pk %}"><span class="glyphicon glyphicon-pencil"> Add Comment </span></a>
{% for comment in post.comments.all %}
<div class="comment">
<div class="date">{{ comment.created_date }}</div>
<p>{{ comment.body|linebreaksbr }}</p>
</div>
{% empty %}
<p>No comments here yet</p>
{% endfor %}
</div>
{% endblock %}
Let me know if any other file would help you to help me, like blog/models,
views, although I don't think the problem is there.
Answer: You've explicitly set the related name of comments on your post to
`post_comments`. So you would have to access them like:
{% for comment in post.post_comments.all %}
This is assuming `post` in your template refers to an instance of the `Entry`
model.
|
Python Class instance variables printing out as tuples instead of string
Question: I am creating the following `class` in Python, but when I create an
instance of the `class` and print out the `imdb_id` value, it prints as a
_tuple_.
What am I doing wrong? I would like it to simply print out the _string_.
class Movie(object):
""" Class provides a structure to store Movie information """
def __init__(self, imdb_id, title = None, release_year = None, rating = None, run_time = None, genre = None, director = None, actors = None, plot = None, awards = None, poster_image = None, imdb_votes = None, youtube_trailer = None):
self.imdb_id = imdb_id,
self.title = title,
self.release_year = release_year
self.rating = rating,
self.run_time = run_time,
self.genre = genre,
self.director = director,
self.actors = actors,
self.plot = plot,
self.awards = awards,
self.poster_image = poster_image,
self.imdb_votes = imdb_votes,
self.youtube_trailer = youtube_trailer
Here is how I am instantiating the class:
import media
toy_story=media.Movie("trtreter")
toy_story.imdb_id
Answer: Why are you adding a comma at the end of most of the assignments? A trailing
comma creates a tuple. Remove the trailing commas.
Really, why are you doing that?
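For example, the start of the corrected `__init__` (only a few of the parameters shown):

    def __init__(self, imdb_id, title=None, release_year=None, rating=None):
        self.imdb_id = imdb_id          # no trailing comma -> stays a string
        self.title = title
        self.release_year = release_year
        self.rating = rating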
|
Usefulness of one-line statements in Python
Question: Is using one-line loops, even nested loops, always good practice in Python? I
see a lot of people who just love "one-liners", but to me they're hard to read
sometimes, especially if we're talking about nested loops.
Moreover, most of the nested one-liners I've seen so far exceed the recommended 79
characters per line.
So I'd like to know whether "one-liners" offer anything extra apart from being compact.
Do they use less memory, maybe?
Answer: Yes, they can easily be faster, since more of the work runs in C (in an explicit
loop every intermediate step has to go through the interpreter; in a one-liner
list comprehension it does not). There is also the overhead of the `.append`
method call, method look-up, etc. In a list comprehension all of that is avoided:
import timeit
def f1():
result = []
for x in range(100):
for y in range(100):
result.append(x * y)
return result
def f2():
return [x * y for y in range(100) for x in range(100)]
print('loop: ', timeit.timeit(f1, number=1000))
print('oneliner:', timeit.timeit(f2, number=1000))
Results (Python 3):
loop: 1.2545137699926272
oneliner: 0.6745600730064325
|
How to deal with Python long import
Question: This is about long Python imports like this:
from aaa.bbb.ccc.ddd.eee.fff.ggg.hhh.iii.jjj.kkk.lll.mmm.nnn.ooo import xxx
The part between 'from' and 'import' is already longer than 80 characters; is
there a better, more Pythonic way to deal with it?
Answer: You can always wrap lines using the `\` character at the end of the line.
from a.very.long.and.unconventional.structure.\
and.name import foo
For multiple names to import after the `from x import` statement, you can
use parentheses and wrap inside these parentheses without a newline escape:

    from foo.bar import (test,
                         another,
                         others)
|
DES in python can't get the correct encoded data using pycrypto
Question: I have an algorithm to encrypt data in Java and I want to rewrite it in Python,
but the two algorithms don't produce the same encoded data. The Java code is:
Key key = new SecretKeySpec(strDefaultKey.getBytes("UTF-8"), "DES");
encryptCipher = Cipher.getInstance(DES_ECB);
encryptCipher.init(Cipher.ENCRYPT_MODE, key);
String seed = "2016-09-19 05:11";
MessageDigest md5 = MessageDigest.getInstance("MD5");
md5.update(seed.getBytes());
byte[] m = md5.digest();
encryptCipher.doFinal(m);
byte[] encodeUrl = Base64.encodeBase64(sEncription.encrypt(m));
String finalUrl = new String(encodeUrl);
finalResult = finalUrl.substring(2, 8) + finalUrl.substring(10, 13);
my python code is:
m = 'QabC-+50'
text = '2016-09-19 05:11'
md5 = MD5.new()
md5.update(text)
text = md5.hexdigest()
cipher = DES.new(m, DES.MODE_ECB)
text_temp = cipher.encrypt(text)
final_str = base64.b64encode(text_temp)
print final_str
print final_str[2:8] + final_str[10:13]
print type(text_temp)
The two versions don't produce the same final string. Does anybody know why?
Answer: Your crypto logic is OK; the difference between the two methods is their `MD5`
result.
Without the `MD5` step:
Java code (I don't know what your `sEncription` is, so I removed it):
import java.security.Key;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
public class H {
public static void main(String args[]){
try{
String strDefaultKey = "QabC-+50";
Key key = new SecretKeySpec(strDefaultKey.getBytes("UTF-8"), "DES");
Cipher encryptCipher = Cipher.getInstance("DES/ECB/NoPadding");
encryptCipher.init(Cipher.ENCRYPT_MODE, key);
String seed = "2016-09-19 05:11";
byte[] a = encryptCipher.doFinal(seed.getBytes());
byte[] encodeUrl = Base64.getEncoder().encode(a);
// byte[] encodeUrl = Base64.encodeBase64(sEncription.encrypt(m));
String finalUrl = new String(encodeUrl);
String finalResult = finalUrl.substring(2, 8) + finalUrl.substring(10, 13);
System.out.println(finalUrl);
System.out.println(finalResult);
}catch(Exception e){
e.printStackTrace();
}
}
}
OUTPUT:
Wm+DLy8m9G2BJnH2wvtKvA==
+DLy8m2BJ
Python Code:
from Crypto.Hash import MD5
from Crypto.Cipher import DES
import base64
m = 'QabC-+50'
text = '2016-09-19 05:11'
md5 = MD5.new()
md5.update(text)
# text = md5.hexdigest()
cipher = DES.new(m, DES.MODE_ECB)
text_temp = cipher.encrypt(text)
print 'text_temp is ', text_temp
final_str = base64.b64encode(text_temp)
print final_str
print final_str[2:8] + final_str[10:13]
OUTPUT:
Wm+DLy8m9G2BJnH2wvtKvA==
+DLy8m2BJ
So without the `MD5` step, the Java code and the Python code have the same
output.
What matters is the MD5 method in the Java code; it is not the right way to
get the MD5 value of a string.
Code below contains the right way to get the string's MD5 value:
import java.security.Key;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
public class H {
public static void main(String args[]){
try{
String strDefaultKey = "QabC-+50";
Key key = new SecretKeySpec(strDefaultKey.getBytes("UTF-8"), "DES");
Cipher encryptCipher = Cipher.getInstance("DES/ECB/NoPadding");
encryptCipher.init(Cipher.ENCRYPT_MODE, key);
String seed = "2016-09-19 05:11";
String seedMd5 = MD5(seed);
byte[] a = encryptCipher.doFinal(seedMd5.getBytes());
byte[] encodeUrl = Base64.getEncoder().encode(a);
String finalUrl = new String(encodeUrl);
String finalResult = finalUrl.substring(2, 8) + finalUrl.substring(10, 13);
System.out.println(finalUrl);
System.out.println(finalResult);
}catch(Exception e){
e.printStackTrace();
}
}
static String MD5(String src) {
MessageDigest md;
try {
md = MessageDigest.getInstance("MD5");
StringBuffer deviceIDString = new StringBuffer(src);
src = convertToHex(md.digest(deviceIDString.toString().getBytes()));
} catch (Exception e) {
src = "00000000000000000000000000000000";
}
return src;
}
private static String convertToHex(byte[] data) {
StringBuffer buf = new StringBuffer();
for (int i = 0; i < data.length; i++) {
int halfbyte = (data[i] >>> 4) & 0x0F;
int two_halfs = 0;
do {
if ((0 <= halfbyte) && (halfbyte <= 9))
buf.append((char) ('0' + halfbyte));
else
buf.append((char) ('a' + (halfbyte - 10)));
halfbyte = data[i] & 0x0F;
} while (two_halfs++ < 1);
}
return buf.toString();
}
}
OUTPUT:
c/C16RAE1fADZXNi2H0YlevNhuucGYYHGVQ7v0Eoo9w=
C16RAEADZ
Python Code:
from Crypto.Hash import MD5
from Crypto.Cipher import DES
import base64
m = 'QabC-+50'
text = '2016-09-19 05:11'
md5 = MD5.new()
md5.update(text)
text = md5.hexdigest()
cipher = DES.new(m, DES.MODE_ECB)
text_temp = cipher.encrypt(text)
final_str = base64.b64encode(text_temp)
print final_str
print final_str[2:8] + final_str[10:13]
OUTPUT:
c/C16RAE1fADZXNi2H0YlevNhuucGYYHGVQ7v0Eoo9w=
C16RAEADZ
Now everything is ok! :)
|
Simple Python web crawler
Question: I'm following a python tutorial on youtube and got up to where we make a basic
web crawler. I tried making my own to do a very simple task. Go to my cities
car section on craigslist and print the title/link of every entry, and jump to
the next page and repeat if needed. It works for the first page, but won't
continue to change pages and get the data. Can someone help explain what's
wrong?
import requests
from bs4 import BeautifulSoup
def widow(max_pages):
page = 0 # craigslist starts at page 0
while page <= max_pages:
url = 'http://orlando.craigslist.org/search/cto?s=' + str(page) # craigslist search url + current page number
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, 'lxml') # my computer yelled at me if 'lxml' wasn't included. your mileage may vary
for link in soup.findAll('a', {'class':'hdrlnk'}):
href = 'http://orlando.craigslist.org' + link.get('href') # href = /cto/'number'.html
title = link.string
print(title)
print(href)
page += 100 # craigslist pages go 0, 100, 200, etc
widow(0) # 0 gets the first page, replace with multiples of 100 for extra pages
Answer: Looks like you have a problem with your indentation, you need to do `page +=
100` in the main while block and **not** inside the for loop.
def widow(max_pages):
page = 0 # craigslist starts at page 0
while page <= max_pages:
url = 'http://orlando.craigslist.org/search/cto?s=' + str(page) # craigslist search url + current page number
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text, 'lxml') # my computer yelled at me if 'lxml' wasn't included. your mileage may vary
for link in soup.findAll('a', {'class':'hdrlnk'}):
href = 'http://orlando.craigslist.org' + link.get('href') # href = /cto/'number'.html
title = link.string
print(title)
print(href)
page += 100 # craigslist pages go 0, 100, 200, etc
|
Error when trying to install PyCrypto
Question: I'm using a Mac with the latest OS X update. I've been trying to install PyCrypto
from the Terminal but I'm getting the error shown in the screenshot below. The command I
used is `sudo pip install pycrypto`. Can you please help me with this issue? How do I
resolve it? Thanks for your answers.
[screenshot](http://i.stack.imgur.com/W77XY.png)
Here is the error:
macfive:Desktop admin$ sudo pip install pycrypto
The directory '/Users/admin/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/admin/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting pycrypto
Downloading pycrypto-2.6.1.tar.gz (446kB)
100% |████████████████████████████████| 450kB 2.4MB/s
Installing collected packages: pycrypto
Running setup.py install for pycrypto ... error
Complete output from command /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-CYttJL/pycrypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-mWAGUD-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
.
.
.
src/hash_template.c:291: warning: return from incompatible pointer type
src/hash_template.c: At top level:
src/hash_template.c:306: error: initializer element is not constant
src/hash_template.c:306: error: (near initialization for ‘ALG_functions[1].ml_name’)
src/hash_template.c:306: error: initializer element is not constant
src/hash_template.c:306: error: (near initialization for ‘ALG_functions[1].ml_meth’)
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: can't figure out the architecture type of: /var/tmp//ccCeO0Zf.out
error: command 'gcc-4.2' failed with exit status 1
----------------------------------------
Command "/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-CYttJL/pycrypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-mWAGUD-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-CYttJL/pycrypto/
The error is too big to copy it all, so I just copied the beginning and the end.
Answer: You need to install the Python development files; I think that will make it
work. Try
Or
sudo apt-get install python-dev
Run either one of the above, and then this:
pip install pycrypto
|
Preventing fedora from installing mariadb
Question: I'm running Fedora 24, with kde plasma, having recently decided to try it
after mostly being on Ubuntu.
This morning while trying to update, I ran into a conflict between mariadb and
percona. I had installed percona from rpms (since I couldn't install 5.7 from
repos), but mariadb isn't installed, so I'm a little surprised.
According to the update note it relates to this
bug:<https://bugzilla.redhat.com/show_bug.cgi?id=1352946>
Which is all well and good, but now I'm getting:
Sep 19 09:57:48 SUBDEBUG
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/dnf/cli/main.py", line 60, in main
return _main(base, args)
File "/usr/lib/python3.5/site-packages/dnf/cli/main.py", line 120, in _main
ret = resolving(cli, base)
File "/usr/lib/python3.5/site-packages/dnf/cli/main.py", line 149, in resolving
base.do_transaction(display=displays)
File "/usr/lib/python3.5/site-packages/dnf/cli/cli.py", line 228, in do_transaction
super(BaseCli, self).do_transaction(display)
File "/usr/lib/python3.5/site-packages/dnf/base.py", line 591, in do_transaction
self._trans_error_summary(errstring))
dnf.exceptions.Error: Transaction check error:
file /usr/lib64/mysql/plugin/dialog.so from install of mariadb-common-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-server-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysql from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqladmin from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqlbinlog from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqlcheck from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqldump from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqlimport from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqlshow from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
file /usr/bin/mysqlslap from install of mariadb-3:10.1.17-1.fc24.x86_64 conflicts with file from package Percona-Server-client-57-5.7.10-3.1.el7.x86_64
.
.<SNIP>
.
.
Maria is in the normal fedora repos - is there any way to tell Fedora to NOT
install mariadb via update?
The puzzling thing for me is why it's trying to install it. Have I done
something stupid?
Any help appreciated.
Answer: I had to go back to GNOME 3 and uninstall KDE, then the problem disappeared. I guess the issue was KDE.
|
how to make logging.logger to behave like print
Question: Let's say I got this
[logging.logger](https://docs.python.org/2/library/logging.html) instance:
import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)
Problem comes when I try to use it like the builtin print with a dynamic
number of arguments:
>>> logger.__class__
<class 'logging.Logger'>
>>> logger.debug("hello")
[<stdin>:1 - <module>() ] hello
>>> logger.debug("hello","world")
Traceback (most recent call last):
File "c:\Python2711\Lib\logging\__init__.py", line 853, in emit
msg = self.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 726, in format
return fmt.format(record)
File "c:\Python2711\Lib\logging\__init__.py", line 465, in format
record.message = record.getMessage()
File "c:\Python2711\Lib\logging\__init__.py", line 329, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Logged from file <stdin>, line 1
How could i emulate the print behaviour still using logging.Logger?
Answer: Alternatively, define a function that accepts `*args` and then `join`s them in your call to `logger`. Keyword-only arguments after `*args` are Python 3 syntax, and your traceback shows Python 2.7, so the options are pulled out of `**kwargs` instead:

    def log(*args, **kwargs):
        logtype = kwargs.pop('logtype', 'debug')
        sep = kwargs.pop('sep', ' ')
        getattr(logger, logtype)(sep.join(str(a) for a in args))
I added a `logtype` for flexibility here but you could remove it if not
required.
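With that helper in place, calls look a lot like `print` (a quick illustrative usage, given the logger configured in the question):

    log("hello", "world", 42)                # message "hello world 42" at DEBUG level
    log("something broke", logtype='error')  # same message handling, logged at ERROR level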
|
Using arg parser in python in another class
Question: I'm trying to write a test in Selenium using Python. I managed to run the test and it passed, but now I want to add an argument parser so I can give the test a different URL as an argument.
The thing is that my test is inside a class, so when I'm passing the argument I get an error:
app_url= (args['base_url'])
NameError: global name 'args' is not defined
How can I get args to be defined inside the Selenium class?
This is my code:
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
from selenium import webdriver
import unittest, time, re
import os
import string
import random
import argparse
def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))
agmuser = id_generator()
class Selenium(unittest.TestCase):
def setUp(self):
chromedriver = "c:\chromedriver.exe"
os.environ["webdriver.chrome.driver"] = chromedriver
self.driver = webdriver.Chrome(chromedriver)
app_url = (args['base_url'])
#app_url= "http://myd-vm16635.fufu.net:8080/"
print "this is the APP URL:" + ' ' + app_url
self.base_url = app_url
self.verificationErrors = []
self.accept_next_alert = True
def test_selenium(self):
#id_generator.user = id_generator()
driver = self.driver
driver.get(self.base_url + "portal/")
driver.find_element_by_css_selector("span").click()
driver.find_element_by_id("j_loginName").clear()
driver.find_element_by_id("j_loginName").send_keys(agmuser)
driver.find_element_by_id("btnSubmit").click()
driver.find_element_by_link_text("Login as" + ' ' + agmuser).click()
driver.find_element_by_css_selector("#mock-portal-Horizon > span").click()
# driver.find_element_by_id("gwt-debug-new-features-cancel-button").click()
# driver.find_element_by_xpath("//table[@id='gwt-debug-module-dropdown']/tbody/tr[2]/td[2]").click()
# driver.find_element_by_id("gwt-debug-menu-item-release-management").click()
def is_element_present(self, how, what):
try: self.driver.find_element(by=how, value=what)
except NoSuchElementException as e: return False
return True
def is_alert_present(self):
try: self.driver.switch_to_alert()
except NoAlertPresentException as e: return False
return True
def close_alert_and_get_its_text(self):
try:
alert = self.driver.switch_to_alert()
alert_text = alert.text
if self.accept_next_alert:
alert.accept()
else:
alert.dismiss()
return alert_text
finally: self.accept_next_alert = True
def tearDown(self):
self.driver.quit()
self.assertEqual([], self.verificationErrors)
if __name__ == "__main__":
#####################******SCRIPT PARAMS****;**###################################
# these values can be changed type 'python selenium_job.py --help' for assistance
##################################################################################
parser = argparse.ArgumentParser(description='DevOps team - Sanity test')
parser.add_argument('-b', '--base_url', help='base_url', default="http://myd-vm16635.fufu.net:8080/")
args = vars(parser.parse_args())
unittest.main()
Answer: Put the `parser = argparse.ArgumentParser(...)` and `parser.add_argument()` calls outside `if __name__ == "__main__":` so that the parser is always created, but the arguments are only parsed when the script is run directly. Keep `args = vars(parser.parse_args())` inside `__main__`.
That way you can import it from the file like `from selenium_tests import
parser` and then in your other script, do `parser.parse_args()`.
And a cleaner way to do it is to create a function which parses the arguments and returns them, like:

    def get_parsed_args():
        parser = argparse.ArgumentParser(...)
        parser.add_argument(...)
        # etc.
        return parser.parse_args()

and then call that in the main program:

    if __name__ == '__main__':
        args = get_parsed_args()
        # etc.
And in other scripts which you want to import it into, do
from selenium_tests import get_parsed_args
if __name__ == '__main__':
args = get_parsed_args()
# etc.
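If the goal is simply to make the parsed value visible inside the test class, another sketch (separate from the import route above, and only an illustration) is to stash it on the class before handing control to unittest; `parse_known_args()` is used so that `unittest.main()` still gets its own flags:

    import sys

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description='DevOps team - Sanity test')
        parser.add_argument('-b', '--base_url', help='base_url',
                            default="http://myd-vm16635.fufu.net:8080/")
        args, remaining = parser.parse_known_args()
        Selenium.base_url_arg = args.base_url          # setUp() can then read self.base_url_arg
        unittest.main(argv=[sys.argv[0]] + remaining)  # hand leftover options to unittest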
|
python def creation within a .py
Question: I am trying to create a def file within a py file that is external eg.
`calls.py`:
def printbluewhale():
whale = animalia.whale("Chordata",
"",
"Mammalia",
"Certariodactyla",
"Balaenopteridae",
"Balaenoptera",
"B. musculus",
"Balaenoptera musculus",
"Blue whale")
print("Phylum - " + whale.getPhylum())
print("Clade - " + whale.getClade())
print("Class - " + whale.getClas())
print("Order - " + whale.getOrder())
print("Family - " + whale.getFamily())
print("Genus - " + whale.getGenus())
print("Species - " + whale.getSpecies())
print("Latin Name - "+ whale.getLatinName())
print("Name - " + whale.getName())
`mainwindow.py`:
import calls
import animalist
#import defs
keepgoing = 1
print("Entering main window")
while True:
question = input("Which animal would you like to know about?" #The question it self
+ animalist.lst) #Animal Listing
if question == "1":
print(calls.printlion())#Calls the animal definition and prints the characteristics
if question == "2":
print(calls.printdog())
if question == "3":
print(calls.printbluewhale())
'''if question == "new":
def new_animal():
question_2=input("Enter the name of the new animal :")'''
What I am trying to do is that `question == new` would create a new def in the
`calls.py` and that I would be able to add a name to the `def` and the
attributes as well.
I was hoping you could lead me to a way of how to do this, and if it is not
possible please just say and I will rethink my project :)
Answer: What you're trying to do here seems a bit of a workaround, at least in the way
you're trying to handle it.
If I understood the question correctly, you're trying to make a Python script that takes input from the user and, if that input is equal to "new", lets the user define a new animal name.
You're currently handling this using a whole lot of manual work, and this is
going to be extremely hard to expand, especially considering the size of the
data set you're presumably working with (the whole animal kingdom?).
You could try handling it like this:
define a data set using a dictionary:
birds = dict()
fish = dict()
whales = dict()
whales["Blue Whale"] = animalia.whale("Chordata",
"",
"Mammalia",
"Certariodactyla",
"Balaenopteridae",
"Balaenoptera",
"B. musculus",
"Balaenoptera musculus",
"Blue whale")
whales["Killer Whale"] = ... # just as an example, keep doing this to define more whale species.
animals = {"birds": birds, "fish": fish, "whales": whales} # using a dict for this makes you independent from indices, which is much less messy.
This will build your data set. Presuming every `whale` class instance (if
there is one) inherits properties from a presumptive `Animal` class that
performs all the printing, say:
class Animal():
# do some init
def print_data(self):
print("Phylum - " + self.getPhylum())
print("Clade - " + self.getClade())
print("Class - " + self.getClas())
print("Order - " + self.getOrder())
print("Family - " + self.getFamily())
print("Genus - " + self.getGenus())
print("Species - " + self.getSpecies())
print("Latin Name - "+ self.getLatinName())
print("Name - " + self.getName())
You can then have a Whale class:
class Whale(Animal):
    pass
Which now has the print_data method.
for whale in whales:
whales[whale].print_data()
With that out of the way, you can move on to adding input: In your main.py:
while True:
question = input("Which animal would you like to know about?" #The question it self
+ animalist.lst) #Animal Listing
try:
    id = int(question)
    # if the input can be converted to an integer, we assume the user has entered an index
    print(calls.animals[list(calls.animals.keys())[id]])
except ValueError:
    if str(question).lower() == "new": # makes this case insensitive
        new_species = input("Please input a new species")
        calls.animals[str(new_species)] = new_species
        # here you should process the input to determine what new species you want
Beyond this it's worth mentioning that if you use dicts and arrays, you can
put things in a database, and pull your data from there.
Hope this helps :)
|
How to partially remove content from cell in a dataframe using Python
Question: I have the following dataframe:
import pandas as pd
df = pd.DataFrame([
['\nSOVAT\n', 'DVR', 'MEA', '\n195\n'],
['PINCO\nGALLO ', 'DVR', 'MEA\n', '195'],
])
which looks like this:
[![enter image description
here](http://i.stack.imgur.com/ldKxo.png)](http://i.stack.imgur.com/ldKxo.png)
My goal is to analyze every single cell of the dataframe so that:
* if the substring `\n` appears only once, then I delete it along with all the characters that come before it;
* if the substring `\n` appears more than once in a specific cell, then I remove all the `\n` contained along with what comes before and after them (except for what is in between)
The output of the code should be this:
[![enter image description
here](http://i.stack.imgur.com/Zws8B.png)](http://i.stack.imgur.com/Zws8B.png)
Notice: so far I only know how to remove the what comes before or after the
substring by using the following command:
df = df.astype(str).stack().str.split('\n').str[-1].unstack()
df = df.astype(str).stack().str.split('\n').str[0].unstack()
However this line of code does not lead me to the desired results since the
output is:
[![enter image description
here](http://i.stack.imgur.com/WMgyN.png)](http://i.stack.imgur.com/WMgyN.png)
Answer: [`df.replace`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html) with a regex does it. Since `.` does not match a newline by default, the pattern `.*\n(.*)\n?.*` captures only what sits between the first `\n` and an optional following `\n`, drops everything else, and leaves cells without any `\n` untouched.
In [1]: import pandas as pd
...: df = pd.DataFrame([
...: ['\nSOVAT\n', 'DVR', 'MEA', '\n195\n'],
...: ['PINCO\nGALLO ', 'DVR', 'MEA\n', '195'],
...: ])
...:
In [2]: df.replace(r'.*\n(.*)\n?.*', r'\1', regex=True)
Out[3]:
0 1 2 3
0 SOVAT DVR MEA 195
1 GALLO DVR 195
|
Acessing a variable as a string in a module
Question: Following other posts here, I have a function that prints out information
about a variable based on its name. I would like to move it into a module.
#python 2.7
import numpy as np
def oshape(name):
#output the name, type and shape/length of the input variable(s)
#for array or list
x=globals()[name]
if type(x) is np.array or type(x) is np.ndarray:
print('{:20} {:25} {}'.format(name, repr(type(x)), x.shape))
elif type(x) is list:
print('{:20} {:25} {}'.format(name, repr(type(x)), len(x)))
else:
print('{:20} {:25} X'.format(name, type(x)))
a=np.array([1,2,3])
b=[4,5,6]
oshape('a')
oshape('b')
Output:
a <type 'numpy.ndarray'> (3,)
b <type 'list'> 3
I would like to put this function oshape() into a module so that it can be
reused. However, placing in a module does not allow access to the globals from
the main module. I have tried things like 'import __main__ ' and even storing
the function globals() and passing it into the submodule. The problem is that
globals() is one function, which specifically returns the globals of the
module it is called from, not a different function for each module.
import numpy as np
import olib
a=np.array([1,2,3])
b=[4,5,6]
olib.oshape('a')
olib.oshape('b')
Gives me:
KeyError: 'a'
Extra information: The goal is to reduce redundant typing. With a slight
modification (I took it out to make it simpler for the question), oshape could
report on a list of variables, so I could use it like:
oshape('a', 'b', 'other_variables_i_care_about')
So solutions that require typing the variable names in twice are not really
what I am looking for. Also, just passing in the variable does not allow the
name to be printed. Think about using this in a long log file to show the
results of computations & checking variable sizes.
Answer: The actual problem you have here is a namespace problem.
You could write your method this way:
def oshape(name, x):
# output the name, type and shape/length of the input variable(s)
# for array or list
if type(x) in (np.array, np.ndarray):
print('{:20} {:25} {}'.format(name, repr(type(x)), x.shape))
elif type(x) is list:
print('{:20} {:25} {}'.format(name, repr(type(x)), len(x)))
else:
print('{:20} {:25} X'.format(name, type(x)))
and use it like this:
import numpy as np
import olib
a=np.array([1,2,3])
b=[4,5,6]
olib.oshape('a', a)
olib.oshape('b', b)
but it looks very redundant to have both the variable and its name in the arguments.
Another solution would be to give the `globals()` dict to the method and keep
your code.
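A minimal sketch of that second approach (illustrative only; it keeps the lookup-by-name but makes the caller hand over its own namespace explicitly):

    # olib.py
    import numpy as np

    def oshape(name, namespace):
        # report the name, type and shape/length of the variable found in `namespace`
        x = namespace[name]
        if type(x) in (np.array, np.ndarray):
            print('{:20} {:25} {}'.format(name, repr(type(x)), x.shape))
        elif type(x) is list:
            print('{:20} {:25} {}'.format(name, repr(type(x)), len(x)))
        else:
            print('{:20} {:25} X'.format(name, type(x)))

and in the main module:

    import numpy as np
    import olib

    a = np.array([1, 2, 3])
    b = [4, 5, 6]
    olib.oshape('a', globals())
    olib.oshape('b', globals())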
Have a look at [this
answer](http://stackoverflow.com/questions/15959534/python-visibility-of-
global-variables-in-imported-modules) about the visibility of the global
variables through modules.
|
Replace newline in python when reading line for line
Question: I am trying to do some simple parsing on a text file in Python, which I have no issues with in bash using tr '\n' ' ', basically getting all of the lines onto a single line. In Python, print behaves a bit differently from what I understand: re.sub cannot find my newline because it doesn't exist, even though when I print to an output it does. Can someone explain how I can work around this issue in Python?
Here is my code so far:
# -*- iso-8859-1 -*-
import re
def proc():
f= open('out.txt', 'r')
lines=f.readlines()
for line in lines:
line = line.strip()
if '[' in line:
line_1 = line
line_1_split = line_1.split(' ')[0]
line_2 = re.sub(r'\n',r' ', line_1_split)
print line_2
proc()
Edit: I know that "print line," will print without the newline. The issue is
that I need to handle these lines both before and after doing operations line
by line. My code in shell uses sed, awk and tr to do this.
Answer: You can write directly to stdout to avoid the automatic newline of `print`:
from sys import stdout
stdout.write("foo")
stdout.write("bar\n")
This will print `foobar` on a single line.
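If the goal is really just the `tr '\n' ' '` behaviour, joining the stripped lines also works (a sketch against the question's `out.txt`, in Python 2 syntax to match the question):

    with open('out.txt') as f:
        print ' '.join(line.strip() for line in f)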
|
Raspberry LCD IP display format
Question: I'm working on a little project with a Raspberry Pi, and I need to display the
IP adress of the PI on an LCD screen.
I followed this tutorial : <https://learn.adafruit.com/drive-a-16x2-lcd-
directly-with-a-raspberry-pi/python-code>
It seems to work fine, however there is a problem displaying the IP. Instead
of displaying "192.168.0.68", it shows "fe80::779b:a7a1:9282:f4d5". It shows
the time just fine ("Sep 19 18:20:41").
Being new to programming, I couldn't find the problem, so here I am asking for
help
Thanks in advance !
Answer: The `fe80::...` address you are seeing is the interface's IPv6 link-local address rather than its IPv4 address. I found the `netifaces` package useful for obtaining the IPv4 address; the link below explains its basic usage well:
<https://pypi.python.org/pypi/netifaces>
Below is an example to obtain the ip address in the python interpreter.
>>> import netifaces
>>> addr = netifaces.ifaddresses('en1')
>>> addr
{18: [{'addr': 'e4:ce:8f:30:98:0c'}], 2: [{'broadcast': '192.168.1.255', 'addr': '192.168.1.22', 'netmask': '255.255.255.0'}], 30: [{'addr': 'fe80::e6ce:8fff:fe30:980c%en1', 'netmask': 'ffff:ffff:ffff:ffff::'}]}
>>> addr[netifaces.AF_INET][0]['addr']
'192.168.1.22'
Note: I use `'en1'` because I'm on a Mac. On the Pi this would typically be `'eth0'`.
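Putting that together for the Pi, the lookup might look like this (a sketch; it assumes the wired interface is called `eth0` and currently holds an IPv4 address):

    import netifaces

    # 'eth0' is an assumption; check netifaces.interfaces() for the real name
    addrs = netifaces.ifaddresses('eth0')
    ip = addrs[netifaces.AF_INET][0]['addr']
    print(ip)  # e.g. 192.168.0.68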
|
python ImportError: No module named cy_unity graphlab
Question: I am new to python and I am trying to work on a project with deep learning and
want to use graphlab library. I use sublime text for coding on windows 10. My
code is only this line:
`import graphlab`
I get this error msg:
    Traceback (most recent call last):
      File "test.py", line 1, in <module>
        import graphlab
      File "graphlab\__init__.py", line 59, in <module>
        from graphlab.data_structures.sgraph import Vertex, Edge
      File "graphlab\data_structures\__init__.py", line 25, in <module>
        from . import sframe
      File "graphlab\data_structures\sframe.py", line 19, in <module>
        from ..connect import main as glconnect
      File "graphlab\connect\main.py", line 26, in <module>
        from ..cython.cy_unity import UnityGlobalProxy
    ImportError: No module named cy_unity
Answer: Try using `graphlab.get_dependencies()` in your interpreter.
|
Align ListBox in Frame wxpython
Question: I'm trying to figure out how to align a ListBox properly. As soon as I insert the ListBox lines, the layout turns into a mess.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
oplist=[]
with open("options.txt","r") as f:
for line in f:
oplist.append(line.rstrip('\n'))
print(oplist)
class Example(wx.Frame):
def __init__(self, parent, title):
super(Example, self).__init__(parent, title = title, size=(200,300))
self.InitUI()
self.Centre()
self.Show()
def InitUI(self):
p = wx.Panel(self)
vbox= wx.BoxSizer(wx.VERTICAL)
self.l1 = wx.StaticText(p, label="Enter number", style=wx.ALIGN_CENTRE)
vbox.Add(self.l1, -1, wx.ALIGN_CENTER_HORIZONTAL, 200)
self.b1 = wx.Button(p, label="Buton 1")
vbox.Add(self.b1, -1, wx.ALIGN_CENTER_HORIZONTAL,100)
self.flistbox= wx.ListBox(self,choices=oplist, size=(100,100), name="Field", wx.ALIGN_CENTER_HORIZONTAL)
vbox.Add(self.flistbox, -1, wx.CENTER, 10)
p.SetSizer(vbox)
app = wx.App()
Example(None, title="BoxSizer")
app.MainLoop()
Here the outputs with and without: [![With
ListBox](http://i.stack.imgur.com/52j71.png)](http://i.stack.imgur.com/52j71.png)
[![Without](http://i.stack.imgur.com/15VkA.png)](http://i.stack.imgur.com/15VkA.png)
Answer: The listbox is being parented to the frame by using `self`:

    self.flistbox= wx.ListBox(
        self,choices=oplist, size=(100,100), name="Field", wx.ALIGN_CENTER_HORIZONTAL)

It should be parented to the panel by using `p`, like the other controls (the alignment flag also has to be passed via the `style` keyword, otherwise a positional argument after keyword arguments is a syntax error):

    self.flistbox = wx.ListBox(
        p, choices=oplist, size=(100,100), name="Field", style=wx.ALIGN_CENTER_HORIZONTAL)
|
Python ImageIO Gif Set Delay Between Frames
Question: I am using ImageIO: <https://imageio.readthedocs.io/en/latest/userapi.html> ,
and I want to know how to set delay between frames in a gif.
Here are the relevant parts of my code.
import imageio
. . .
imageio.mimsave(args.output + '.gif', ARR_ARR)
where `ARR_ARR` is an array of `numpy uint8` 2d array of couplets.
To be clear, I have no problem writing the gif. I cannot, however, find any
clarification on being able to write the amount of delay between frames.
So, for example, I have frames 0 ... 9. They always play at the same rate. I would like to be able to control the
number of milliseconds or whatever unit between frames being played.
Answer: Found it using `imageio.help("GIF")`: you would pass in something like
`imageio.mimsave(args.output + '.gif', ARR_ARR, fps=$FRAMESPERSECOND)`
And that seems to work.
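For instance, a concrete call could look like this (illustrative frame data; `fps` is the option that `imageio.help("GIF")` reports):

    import numpy as np
    import imageio

    # ten random 64x64 grayscale frames, purely for illustration
    frames = [np.random.randint(0, 256, (64, 64)).astype(np.uint8) for _ in range(10)]
    imageio.mimsave('output.gif', frames, fps=10)  # 10 frames per second, roughly 100 ms per frame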
|
Insert python variable value into SQL table
Question: I have a password system that stores the password for a python program in an
SQL table. I want the user to be able to change the password in a tkinter
window but I am not sure how to use the value of a python variable as the
value for the SQL table. Here is a sample code:
import tkinter
from tkinter import *
import sqlite3
conn = sqlite3.connect('testDataBase')
c = conn.cursor()
c.execute("INSERT INTO info Values('test')")
c.execute("SELECT password FROM info")
password = (c.fetchone()[0])
#Window setup
admin = tkinter.Tk()
admin.minsize(width=800, height = 600)
admin.maxsize(width=800, height = 600)
#GUI
passwordChangeLabel = Label(admin, text="It is recommended to change your password after first login!", font=("Arial", 14))
passwordChangeLabel.pack()
passwordChangeCurrentPasswordLabel = Label(admin, text="Current Password: ", font=("Arial", 11))
passwordChangeCurrentPasswordLabel.place(x=275, y=30)
passwordChangeCurrentPasswordEntry = Entry(admin)
passwordChangeCurrentPasswordEntry.place(x=405, y=32.5)
passwordChangeNewPasswordLabel = Label(admin, text="New Password: ", font=("Arial", 11))
passwordChangeNewPasswordLabel.place(x=295, y=50)
passwordChangeNewPasswordEntry = Entry(admin)
passwordChangeNewPasswordEntry.place(x=405, y=52.5)
passwordChangeButton = Button(admin, text="Submit", width=20)
passwordChangeButton.place(x=350, y=80)
def changePasswordFunction(event):
newPassword = passwordChangeNewPasswordEntry.get()
enteredPassword = passwordChangeCurrentPasswordEntry.get()
if enteredPassword == password:
c.execute("REPLACE INTO info(password) VALUES(newPassword)")
else:
wrong = tkinter.Toplevel()
wrong.minsize(width=200, height = 100)
wrong.maxsize(width=200, height = 100)
Label(wrong, text="Sorry that password is incorrect!", font=("Arial", 24), anchor=W, wraplength=180, fg="red").pack()
admin.bind('<Return>', changePasswordFunction)
passwordChangeButton.bind('<Button-1>', changePasswordFunction)
This code will bring up an error:
sqlite3.OperationalError: no such column: newPassword
How can I properly replace the previous value in the password column with the
new entered password?
Answer: There are two valid ways to use `VALUES()`: with a label or a string literal.
A string literal is a string in single or double quotes.
Since you didn't put `newPassword` in quotes, Sqlite assumes `newPassword` is
a label, i.e. a column name. It goes looking for the value of the
`newPassword` field in the current record, and throws an error since that
field doesn't exist.
What you want to do is, take the value of the `newPassword` python variable
and put it in quotes. Here's a corrected query string (not tested):
"REPLACE INTO info(password) VALUES('" + newPassword + "')"
As someone mentions in the comments, updating your database directly with user
inputs is highly insecure. Database interfaces generally provide easy ways to
sanitize all inputs, and you should make it a point to follow those practices.
A great resource with code examples from many popular languages is [Bobby
Tables](http://bobby-tables.com/). (Also note that failing to include
information about SQL security in your answers at SO can result in down-
votes.)
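With sqlite3 that sanitizing boils down to a question-mark placeholder; a minimal sketch using the cursor and connection from the question:

    # let sqlite3 bind the value instead of building the SQL string yourself
    c.execute("REPLACE INTO info(password) VALUES (?)", (newPassword,))
    conn.commit()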
|
Checksum for a list of numbers
Question: I have a large number of lists of integers. I want to check if any of the
lists are duplicates. I was thinking a good way of doing this would be to
calculate a basic checksum, then only doing an element by element check if the
checksums coincide. But I can't find a checksum algorithm with good
properties, namely:
* Verifies order effectively;
* Quick to calculate;
* Returns a small result, eg short integer;
* Has a fairly uniform distribution, giving a low probability of different lists coinciding.
For example, a function check_sum which returned different numbers in the
range [0,65536] for the following 5 calls would be ideal.
check_sum([1,2,3,4,5])
check_sum([1,2,3,5,4])
check_sum([5,4,3,2,1])
check_sum([1,2,3,4,4])
I looked at the IPv4 header checksum algorithm which returns a result of about
the right size but doesn't check order so isn't what I'm looking for.
I'm going to implement it in python, but any format will do for algorithm, or
pointer at a good reference material.
Answer: Calculate the checksums with `hash()`, which is order-sensitive and fast:

    checksums = [hash(tuple(l)) for l in list_of_lists]

If you really need a result that fits in [0, 65536], you can take `hash(tuple(l)) % 65536`, at the cost of more collisions.
To know how many duplicates you have:
from collections import Counter
counts = Counter(checksums)
To compile a unique list:
unique_list = list(dict(zip(checksums, list_of_lists)).values())
|
How many FLOPs are there in calculating a factorial using math.factorial(n) in python
Question: I am trying to understand how many FLOPs there are if I use a certain algorithm to find the truncated exponential sum, especially if I use math.factorial(n) in Python. I understand FLOPs for binary operations, so is factorial also a binary operation here within a function? Not being a computer science major, I have some difficulties with these. My code looks like this:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import math
x = input ("please enter a number for which you want to run the exponential summation e^{x}")
N = input ("Please enter an integer before which term you want to turncate your summation")
function= math.exp(x)
exp_sum = 0.0
abs_err = 0.0
rel_err = 0.0
for n in range (0, N):
factorial = math.factorial(n) #How many FLOPs here?
power = x**n # calculates N-1 times
nth_term = power/factorial #calculates N-1 times
exp_sum = exp_sum + nth_term #calculates N-1 times
abs_err = abs(function - exp_sum)
rel_err = abs(abs_err)/abs(function)
Please help me understand this. I might also be wrong about the other FLOPs!
Answer: According to that [SO answer](http://stackoverflow.com/a/9815339/5847976) and
to the [C source
code](https://hg.python.org/cpython/file/2145593d108d/Modules/mathmodule.c#l1121),
in python2.7 `math.factorial(n)` uses a naive algorithm to compute the factorial, so **it takes about n operations** as factorial(n)=1*2*3*4*...*n. Over the whole loop, the factorial calls alone therefore cost roughly 0+1+...+(N-1) = N(N-1)/2 multiplications.
A small mistake regarding the rest is that **`for n in range(0,N)` will loop
`N` times** , not `N-1` (from `n=0` to `n=N-1`).
A final note is that counting FLOPs may not be representative of the algorithm's actual real-world performance, especially in Python, which is an interpreted language and tends to hide most of its inner workings behind clever syntax that links to compiled C code (e.g. `exp_sum + nth_term` is actually `exp_sum.__add__(nth_term)`).
|
django celery unit tests with pycharm 'No module named celery'
Question: my tests work fine when my target is a single function (see 'Target' field in
the image):
questionator.test_mturk_views.TestReport.submit
However, when I specify my target to include all tests within my questionator
app:
questionator
I get this error:
    Error
    ImportError: Failed to import test module: src.questionator.test_mturk_views
    Traceback (most recent call last):
      File "C:\Python27\Lib\unittest\loader.py", line 254, in _find_tests
        module = self._get_module_from_name(name)
      File "C:\Python27\Lib\unittest\loader.py", line 232, in _get_module_from_name
        __import__(name)
      File "C:\Users\Andy\questionator_app\src\__init__.py", line 5, in <module>
        from .celery import app as celery_app  # noqa
    ImportError: No module named celery
Note that my tests include my settings via 'Environment variables' (see this
in the pic too):
DJANGO_SETTINGS_MODULE=questionator_app.settings.development;PYTHONUNBUFFERED=1
The celery [documentation](http://docs.celeryproject.org/projects/django-
celery/en/2.4/cookbook/unit-testing.html) mentions a "Using a custom test
runner to test with celery" section, but this is in the now defunct djcelery package. I did, though, copy/paste/tweak [the mentioned test runner](https://github.com/celery/django-celery/blob/master/djcelery/contrib/test_runner.py) and used it as described, but I get the same error.
Unfortunately using CELERY_ALWAYS_EAGER also does not work
<http://docs.celeryproject.org/en/latest/configuration.html#celery-always-
eager>
I would appreciate some guidance. With best wishes, Andy.
[![enter image description
here](http://i.stack.imgur.com/KQTtS.png)](http://i.stack.imgur.com/KQTtS.png)
Answer: For anyone with the same problem (most likely me):
I had followed the official
[tutorial](http://docs.celeryproject.org/en/latest/django/first-steps-with-
django.html) for getting celery working in my project. They advised the below:
[![enter image description
here](http://i.stack.imgur.com/gaOE3.png)](http://i.stack.imgur.com/gaOE3.png)
Just making the last import explicit solved my problem:
from taskapp.celery import app as celery_app # noqa
I'll see if I can nudge Celery's creators to update their tutorial ([pull
request](https://github.com/celery/celery/pull/3463)).
|
Should I notify while holding the lock on a condition or after releasing it?
Question: The [Python `threading`
documentation](https://docs.python.org/3/library/threading.html) lists the
following example of a producer:
from threading import Condition
cv = Condition()
# Produce one item
with cv:
make_an_item_available()
cv.notify()
I had to review threading and I looked at [the C++ documentation, which
states](http://en.cppreference.com/w/cpp/thread/condition_variable/notify_all):
> The notifying thread does not need to hold the lock on the same mutex as the
> one held by the waiting thread(s); in fact doing so is a pessimization,
> since the notified thread would immediately block again, waiting for the
> notifying thread to release the lock.
That would suggest doing something like this:
    # Produce one item
    with cv:
        make_an_item_available()
    cv.notify()
Answer: Don't read C++ documentation to understand Python APIs. Per [the actual Python
docs](https://docs.python.org/3/library/threading.html#threading.Condition.notify):
> If the calling thread has not acquired the lock when this method is called,
> a `RuntimeError` is raised.
Python explicitly requires that the lock be held while `notify`ing.
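A quick way to see that requirement in action (a tiny sketch, not taken from the docs):

    import threading

    cv = threading.Condition()
    try:
        cv.notify()      # called without holding cv
    except RuntimeError as e:
        print(e)         # prints the "cannot notify on un-acquired lock" message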
|
importing ecoinvent 3.2 with brightway
Question: I am having some trouble importing Ecoinvent 3.2 with Brightway2. I was
following the [example
notebook](http://nbviewer.jupyter.org/urls/bitbucket.org/cmutel/brightway2/raw/default/notebooks/IO%20-%20importing%20Ecoinvent.ipynb):
from brightway2 import *
fp = 'D:\LCAdb\e_3.2_cutoff_lci\datasets'
projects.set_current("myproject")
bw2setup()
ei = SingleOutputEcospold2Importer(fp, "ecoinvent 3.2 cutoff")
I get a warning message that seems to block the import process
> C:\Users\@@@\Anaconda3\envs\bw3\lib\site-packages\bw2data\project.py:157:
> UserWarning: **_Read only project_**
>
> This project is being used by another process and no writes can be made
> until: 1\. You close the other program, or switch to a different project,
> _and_ 2\. You call `projects.enable_writes` _and_ get the response `True`.
>
>
> If you are **sure** that this warning is incorrect, call
> `projects.enable_writes(force=True)` to enable writes.
>
>
> warnings.warn(READ_ONLY_PROJECT)
if I run
projects.enable_writes(force=True)
I get another a persmission error
    PermissionError                           Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 projects.enable_writes(force=True)

    C:\Users\@@@\Anaconda3\envs\bw3\lib\site-packages\bw2data\project.py in enable_writes(self, force)
        234         """Enable writing for the current project."""
        235         if force:
    --> 236             os.remove(os.path.join(self.dir, "write-lock"))
        237         self.read_only = not self._lock.acquire(timeout = 0.05)
        238         if not self.read_only:

    PermissionError: [WinError 32] El proceso no tiene acceso al archivo porque está siendo utilizado por otro proceso: 'C:\Users\@@@\AppData\Local\pylca\Brightway3\myproject.4da39212894ad06eb7c95810f8a2a6b0\write-lock'
The WinError translated would be something like "the process does not have access to the file because the file is being used by another process".
I do not have other Brightway environments running at the same time and I have
recently updated Brightway2 so I do not know where the problem may be. Any
ideas?
thanks!
UPDATE 1: I have installed brightway2 on a different computer and I have found the same warning message. Despite the message, the import seems to be correct: once the database is loaded and written, if I open the project again the database is still there.
On the previous laptop the import process seems to be too much for the machine (an ASUS S56CB with Windows 10 and 6 GB RAM). After 40 min waiting for the result I usually despair and kill it. I will give reinstalling Python a try...
Answer: I have run into this in the past, surely because of the reasons @Chris evoked.
You can use `projects.read_only = False` to force-write data. Please make sure
that this is really what you want to do. You will _not_ want to do this, for
example, if you are accessing the same project through two different kernels
that may try to write data at the same time.
|
Package version difference between pip and OS?
Question: I have a Debian OS with Python version 2.7 installed on it, but I have a strange issue with the `six` package: I want to use version 1.10.
I have installed six 1.10 via pip:
$ pip list
...
six (1.10.0)
But when I run the following script
python -c "import six; print(six.__version__)"
it says `1.8.0`
The reason is that the version installed by the OS is different:
$ sudo apt-cache policy python-six
python-six:
Installed: 1.8.0-1
Candidate: 1.8.0-1
Version table:
1.9.0-3~bpo8+1 0
100 http://172.24.70.103:9999/jessie-backports/ jessie-backports/main amd64 Packages
*** 1.8.0-1 0
500 ftp://172.24.70.103/mirror/jessie-debian/ jessie/main amd64 Packages
500 http://172.24.70.103:9999/jessie-debian/ jessie/main amd64 Packages
100 /var/lib/dpkg/status
**How to force python to use package installed via pip?**
Answer: You can use `virtualenv` for this.
pip install virtualenv
cd project_folder
virtualenv venv
`virtualenv venv` will create a folder in the current directory which will
contain the Python executable files, and a copy of the pip library which you
can use to install other packages. The name of the virtual environment (in
this case, it was venv) can be anything; omitting the name will place the
files in the current directory instead.
Set the desired Python interpreter:
virtualenv -p /usr/bin/python2.7 venv
Activate the environment
source venv/bin/activate
From now on, any package that you install using pip will be placed in the
`venv` folder, isolated from the **global** Python installation.
pip install six
Now you run code. When you have finished simpliy deactivate `venv`
deactivate
See also [the original resources](http://docs.python-
guide.org/en/latest/dev/virtualenvs/).
|
Python CSV: Can I do this with one 'with open' instead of two?
Question: I am a noobie.
I have written a couple of scripts to modify CSV files I work with.
The scripts:
1.) Change the headers of a CSV file, then save that to a new CSV file.
2.) Load that CSV File, and change the order of select columns using
DictWriter.
from tkinter import *
from tkinter import filedialog
import os
import csv
root = Tk()
fileName = filedialog.askopenfilename(filetypes=(("Nimble CSV files", "*.csv"),("All files", "*.*")))
outputFileName = os.path.splitext(fileName)[0] + "_deleteme.csv" #my temp file
forUpload = os.path.splitext(fileName)[0] + "_forupload.csv"
#Open the file - change the header then save the file
with open(fileName, 'r', newline='') as infile, open(outputFileName, 'w', newline='') as outfile:
reader = csv.reader(infile)
writer = csv.writer(outfile, delimiter=',', lineterminator='\n')
row1 = next(reader)
#new header names
row1[0] = 'firstname'
row1[1] = 'lastname'
row1[4] = 'phone'
row1[5] = 'email'
row1[11] = 'address'
row1[21] = 'website'
#write the temporary CSV file
writer.writerow(row1)
for row in reader:
writer.writerow(row)
#Open the temporary CSV file - rearrange some columns
with open(outputFileName, 'r', newline='') as dInFile, open(forUpload, 'w', newline='') as dOutFile:
fieldnames = ['email', 'title', 'firstname', 'lastname', 'company', 'phone', 'website', 'address', 'twitter']
dWriter = csv.DictWriter(dOutFile, restval='', extrasaction='ignore', fieldnames=fieldnames, lineterminator='\n')
dWriter.writeheader()
for row in csv.DictReader(dInFile):
dWriter.writerow(row)
My question is: Is there a more efficient way to do this?
It seems like I shouldn't have to make a temporary CSV file ("_deleteme.csv")
I then delete.
I assume making the temporary CSV file is a rookie move -- is there a way to
do this all with one 'With open' statement?
Thanks for any help, it is greatly appreciated.
\--Luke
Answer: `csvfile` can be any object with a `write()` method. You could craft a custom object, or use [StringIO](https://docs.python.org/2/library/stringio.html) (in Python 3 it lives in the `io` module). You'd have to verify efficiency yourself.
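As a sketch of that suggestion (untested, reusing `fileName` and `forUpload` from the script above, and Python 3's `io.StringIO`), the temporary file can become an in-memory buffer so only one `with open` is needed:

    import csv
    from io import StringIO

    buf = StringIO()  # in-memory "file" that replaces _deleteme.csv on disk

    with open(fileName, 'r', newline='') as infile, open(forUpload, 'w', newline='') as outfile:
        reader = csv.reader(infile)
        tmp_writer = csv.writer(buf, delimiter=',', lineterminator='\n')

        row1 = next(reader)
        row1[0], row1[1] = 'firstname', 'lastname'
        row1[4], row1[5] = 'phone', 'email'
        row1[11], row1[21] = 'address', 'website'
        tmp_writer.writerow(row1)
        for row in reader:
            tmp_writer.writerow(row)

        buf.seek(0)  # rewind so DictReader starts at the renamed header
        fieldnames = ['email', 'title', 'firstname', 'lastname', 'company',
                      'phone', 'website', 'address', 'twitter']
        dWriter = csv.DictWriter(outfile, restval='', extrasaction='ignore',
                                 fieldnames=fieldnames, lineterminator='\n')
        dWriter.writeheader()
        for row in csv.DictReader(buf):
            dWriter.writerow(row)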
|
Not getting required output using findall in python
Question: Earlier, I could not put the exact question. My apologies. Below is what I am looking for:
I am reading a string from a file as below, and there can be multiple such strings in the file.
" VEGETABLE 1
POTATOE_PRODUCE 1.1 1SIMLA(INDIA)
BANANA 1.2 A_BRAZIL(OR INDIA)
CARROT_PRODUCE 1.3 A_BRAZIL/AFRICA"
I want to capture the entire string as output using findall only.
**My script:**
import re
import string
f=open('log.txt')
contents = f.read()
output=re.findall('(VEGETABLE.*)(\s+\w+\s+.*)+',contents)
print output
The above script gives this output:

    [('VEGETABLE 1', '\n CARROT_PRODUCE 1.3 A_BRAZIL/AFRICA')]

But the contents in between are missing.
Answer: The solution is in the last snippet of this answer.
>>> import re
>>> str2='d1 talk walk joke'
>>> re.findall('(\d\s+)(\w+\s)+',str2)
[('1 ', 'walk ')]
The output is a list with only one occurrence of the given pattern. The tuple in the list contains the two strings that matched the two corresponding groups given within () in the pattern.
# Experiment 1
Removing the last '+' makes the pattern select the first match instead of the greedy last match:
>>> re.findall('(\d\s+)(\w+\s)',str2)
[('1 ', 'talk ')]
# Experiment 2
Adding one more group finds the third word followed by one or more spaces. But if the string has more than 3 words followed by spaces, this will still find only three words.
>>> re.findall('(\d\s+)(\w+\s)(\w+\s)',str2)
[('1 ', 'talk ', 'walk ')] #
# Experiment 3
Using '|' matches the pattern multiple times. Note the tuple has disappeared. Also note that the first match does not contain only the number; this may be because \w is a superset of \d.
>>> re.findall('\d\s+|\w+\s+',str2)
['d1 ', 'talk ', 'walk ']
# Final Experiment
>>> re.findall('\d\s+|[a-z]+\s+',str2)
['1 ', 'talk ', 'walk ']
Hope this helps.
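Applied to the original multi-line data, the same idea can be taken a step further with a lazy match and `re.DOTALL` (a sketch that goes beyond the experiments above; the sample text is copied from the question):

    import re

    contents = """ VEGETABLE 1
    POTATOE_PRODUCE 1.1 1SIMLA(INDIA)
    BANANA 1.2 A_BRAZIL(OR INDIA)
    CARROT_PRODUCE 1.3 A_BRAZIL/AFRICA"""

    # each block runs from a VEGETABLE header up to the next header (or the end of the file)
    blocks = re.findall(r'VEGETABLE.*?(?=\n\s*VEGETABLE|\Z)', contents, re.DOTALL)
    print(blocks)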
|