Upload file to MS SharePoint using Python OneDrive SDK
Question: Is it possible to upload a file to the **Shared Documents** library of a
**Microsoft SharePoint** site with the **[Python OneDrive
SDK](https://github.com/OneDrive/onedrive-sdk-python)**?
**[This documentation](https://dev.onedrive.com/readme.htm)** says it should
be (in the first sentence), but I can't make it work.
I'm able to authenticate (with Azure AD) and upload to a **OneDrive** folder,
but when trying to upload to a **SharePoint** folder, I keep getting this
error:
> "Exception of type
> 'Microsoft.IdentityModel.Tokens.**AudienceUriValidationFailedException** '
> was thrown."
The code I'm using that returns an object with the error:
(...authentication...)
client = onedrivesdk.OneDriveClient('https://{tenant}.sharepoint.com/{site}/_api/v2.0/', auth, http)
client.item(path='/drive/special/documents').children['test.xlsx'].upload('test.xlsx')
[![where I'd like to upload on the
web](http://i.stack.imgur.com/ZQEj4.png)](http://i.stack.imgur.com/ZQEj4.png)
I can successfully upload to `https://{tenant}-my.sharepoint.com/_api/v2.0/`
(notice the "**-my** " after the `{tenant}`) with the following code:
client = onedrivesdk.OneDriveClient('https://{tenant}-my.sharepoint.com/_api/v2.0/', auth, http)
returned_item = client.item(drive='me', id='root').children['test.xlsx'].upload('test.xlsx')
How could I upload the same file to a **SharePoint** site?
_(Answers to similar questions
([1](http://stackoverflow.com/questions/37451835/onedrive-api-refer-to-
sharepoint-file-to-upload-or-download-invalid-
audience),[2](http://stackoverflow.com/questions/37233669/onedrive-api-python-
sdk-points-to-login-live-com-not-mydomain-sharepoint-
com),[3](http://stackoverflow.com/questions/29635758/onedrive-sharepoint-
oauth-invalid-audience-
error),[4](http://stackoverflow.com/questions/39822092/which-sdk-or-api-
should-i-use-to-list-and-upload-files-into-office-365-sharepoin)) on Stack
Overflow are either too vague or suggest using a different API. My question is
if it's possible using the OneDrive Python SDK, and if so, how to do it.)_
* * *
**Update** : Here is my full code and output. (_Sensitive original data
replaced with similarly formatted gibberish._)
import re
import onedrivesdk
from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest
# our domain (not the original)
redirect_uri = 'https://example.ourdomain.net/'
# our client id (not the original)
client_id = "a1234567-1ab2-1234-a123-ab1234abc123"
# our client secret (not the original)
client_secret = 'ABCaDEFGbHcd0e1I2fghJijkL3mn4M5NO67P8Qopq+r='
resource = 'https://api.office.com/discovery/'
auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize'
auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token'
http = onedrivesdk.HttpProvider()
auth = onedrivesdk.AuthProvider(http_provider=http, client_id=client_id,
auth_server_url=auth_server_url,
auth_token_url=auth_token_url)
should_authenticate_via_browser = False
try:
    # Look for a saved session. If not found, we'll have to
    # authenticate by opening the browser.
    auth.load_session()
    auth.refresh_token()
except FileNotFoundError as e:
    should_authenticate_via_browser = True
    pass

if should_authenticate_via_browser:
    auth_url = auth.get_auth_url(redirect_uri)
    code = ''
    while not re.match(r'[a-zA-Z0-9_-]+', code):
        # Ask for the code
        print('Paste this URL into your browser, approve the app\'s access.')
        print('Copy the resulting URL and paste it below.')
        print(auth_url)
        code = input('Paste code here: ')
        # Parse code from URL if necessary
        if re.match(r'.*?code=([a-zA-Z0-9_-]+).*', code):
            code = re.sub(r'.*?code=([a-zA-Z0-9_-]*).*', r'\1', code)
    auth.authenticate(code, redirect_uri, client_secret, resource=resource)

# If you have access to more than one service, you'll need to decide
# which ServiceInfo to use instead of just using the first one, as below.
service_info = ResourceDiscoveryRequest().get_service_info(auth.access_token)[0]
auth.redeem_refresh_token(service_info.service_resource_id)
auth.save_session()  # Save session into a local file.

# Doesn't work
client = onedrivesdk.OneDriveClient(
    'https://{tenant}.sharepoint.com/sites/{site}/_api/v2.0/', auth, http)
returned_item = client.item(path='/drive/special/documents') \
                      .children['test.xlsx'] \
                      .upload('test.xlsx')
print(returned_item._prop_dict['error_description'])

# Works, uploads to OneDrive instead of SharePoint site
client2 = onedrivesdk.OneDriveClient(
    'https://{tenant}-my.sharepoint.com/_api/v2.0/', auth, http)
returned_item2 = client2.item(drive='me', id='root') \
                        .children['test.xlsx'] \
                        .upload('test.xlsx')
print(returned_item2.web_url)
Output:
Exception of type 'Microsoft.IdentityModel.Tokens.AudienceUriValidationFailedException' was thrown.
https://{tenant}-my.sharepoint.com/personal/user_domain_net/_layouts/15/WopiFrame.aspx?sourcedoc=%1ABCDE2345-67F8-9012-3G45-6H78IJKL9M01%2N&file=test.xlsx&action=default
Answer: I finally found a solution, with the help of (_SO user_) sytech.
The answer to my original question is that using the original **[Python
OneDrive SDK](https://github.com/OneDrive/onedrive-sdk-python)** , it's **not
possible** to upload a file to the `Shared Documents` folder of a `SharePoint
Online` site (at the moment of writing this): when the SDK queries the
[**resource discovery
service**](https://dev.onedrive.com/auth/aad_oauth.htm#step-3-discover-the-
onedrive-for-business-resource-uri), it drops all services whose
`service_api_version` is not `v2.0`. However, I get the SharePoint service
with `v1.0`, so it's dropped, although it could be accessed using API v2.0
too.
**However** , by extending the `ResourceDiscoveryRequest` class (in the
OneDrive SDK), we can create a workaround for this. I managed to **upload a
file** this way:
import json
import re
import onedrivesdk
import requests
from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest, \
ServiceInfo
# our domain (not the original)
redirect_uri = 'https://example.ourdomain.net/'
# our client id (not the original)
client_id = "a1234567-1ab2-1234-a123-ab1234abc123"
# our client secret (not the original)
client_secret = 'ABCaDEFGbHcd0e1I2fghJijkL3mn4M5NO67P8Qopq+r='
resource = 'https://api.office.com/discovery/'
auth_server_url = 'https://login.microsoftonline.com/common/oauth2/authorize'
auth_token_url = 'https://login.microsoftonline.com/common/oauth2/token'
# our sharepoint URL (not the original)
sharepoint_base_url = 'https://{tenant}.sharepoint.com/'
# our site URL (not the original)
sharepoint_site_url = sharepoint_base_url + 'sites/{site}'
file_to_upload = 'C:/test.xlsx'
target_filename = 'test.xlsx'
class AnyVersionResourceDiscoveryRequest(ResourceDiscoveryRequest):

    def get_all_service_info(self, access_token, sharepoint_base_url):
        headers = {'Authorization': 'Bearer ' + access_token}
        response = json.loads(requests.get(self._discovery_service_url,
                                           headers=headers).text)
        service_info_list = [ServiceInfo(x) for x in response['value']]
        # Get all services, not just the ones with service_api_version 'v2.0'
        # Filter only on service_resource_id
        sharepoint_services = \
            [si for si in service_info_list
             if si.service_resource_id == sharepoint_base_url]
        return sharepoint_services
http = onedrivesdk.HttpProvider()
auth = onedrivesdk.AuthProvider(http_provider=http, client_id=client_id,
auth_server_url=auth_server_url,
auth_token_url=auth_token_url)
should_authenticate_via_browser = False
try:
    # Look for a saved session. If not found, we'll have to
    # authenticate by opening the browser.
    auth.load_session()
    auth.refresh_token()
except FileNotFoundError as e:
    should_authenticate_via_browser = True
    pass

if should_authenticate_via_browser:
    auth_url = auth.get_auth_url(redirect_uri)
    code = ''
    while not re.match(r'[a-zA-Z0-9_-]+', code):
        # Ask for the code
        print('Paste this URL into your browser, approve the app\'s access.')
        print('Copy the resulting URL and paste it below.')
        print(auth_url)
        code = input('Paste code here: ')
        # Parse code from URL if necessary
        if re.match(r'.*?code=([a-zA-Z0-9_-]+).*', code):
            code = re.sub(r'.*?code=([a-zA-Z0-9_-]*).*', r'\1', code)
    auth.authenticate(code, redirect_uri, client_secret, resource=resource)
service_info = AnyVersionResourceDiscoveryRequest().\
get_all_service_info(auth.access_token, sharepoint_base_url)[0]
auth.redeem_refresh_token(service_info.service_resource_id)
auth.save_session()
client = onedrivesdk.OneDriveClient(sharepoint_site_url + '/_api/v2.0/',
auth, http)
# Get the drive ID of the Documents folder.
documents_drive_id = [x['id']
for x
in client.drives.get()._prop_list
if x['name'] == 'Documents'][0]
items = client.item(drive=documents_drive_id, id='root')
# Upload file
uploaded_file_info = items.children[target_filename].upload(file_to_upload)
Authenticating for a different service gives you a different token.
|
"Extra data" error trying to load a JSON file with Python
Question: I'm trying to load the following JSON file, named `archived_sensor_data.json`,
into Python:
[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}][{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}, {"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475816130.812}, "id": "2f896308-884d-4a5f-a8d2-ee68fc4c625a", "name": "Susan Wagner"}]
The script I'm trying to run (from the same directory) is as follows:
import json
reconstructed_data = json.load(open("archived_sensor_data.json"))
However, I get the following error:
ValueError: Extra data: line 1 column 164 - line 1 column 324 (char 163 - 323)
I'm not sure where this is going wrong, because from
[www.json.org](http://www.json.org/) it seems like valid JSON syntax for an
array of dictionaries. Any ideas what is causing the error?
Answer: It is not valid JSON; there are two lists in here. One is
[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}]
and the other one
[{"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475899932.677}, "id": "40898785-6e82-40a2-a36a-70bd0c772056", "name": "Elizabeth Woods"}, {"timestamp": {"timezone": "+00:00", "$reql_type$": "TIME", "epoch_time": 1475816130.812}, "id": "2f896308-884d-4a5f-a8d2-ee68fc4c625a", "name": "Susan Wagner"}]
You can see the validation error here:
<http://www.jsoneditoronline.org/?id=569644c48d5753ceb21daf66483d80cd>
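If you cannot regenerate the file as a single valid JSON document, one way to read it anyway is to decode the concatenated documents one after another with `json.JSONDecoder.raw_decode`. A minimal sketch (the function name is just an example):
import json

def load_concatenated_json(path):
    """Parse a file holding several JSON documents written back to back."""
    decoder = json.JSONDecoder()
    with open(path) as f:
        text = f.read()
    objects, idx = [], 0
    while idx < len(text):
        # skip any whitespace between (or around) the concatenated documents
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)
        objects.append(obj)
    return objects

lists = load_concatenated_json("archived_sensor_data.json")
# lists[0] and lists[1] are the two arrays shown above; lists[-1] holds all the records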
|
Converting a nested array into a pandas dataframe in python
Question: I'm attempting to convert several dictionaries contained in an array to a
pandas dataframe. The dicts are saved as such:
[[{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.309886',
u'longitude': u'0.496902'},u'month': u'2015-01'},{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.306209',
u'longitude': u'0.490475'},u'month': u'2015-02'}]]
I'm trying to format my data to the format below:
Category Latitude Longitude
0 anti-social 524498.597 175181.644
1 anti-social 524498.597 175181.644
2 anti-social 524498.597 175181.644
. ... ...
. ... ...
. ... ...
I've tried to force the data into a dataframe with the below code but it
doesn't produce the intended output.
for i in crimes:
for x in i:
print pd.DataFrame([x['category'], x['location']['latitude'], x['location']['longitude']])
I'm very new to Python so any links/tips to help me build this dataframe would
be highly appreciated!
Answer: You are on the right track, but you are creating a new dataframe for each row
and not giving the proper `columns`. The following snippet should work:
import pandas as pd
import numpy as np
crimes = [[{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.309886',
u'longitude': u'0.496902'},u'month': u'2015-01'},{u'category': u'anti-social-behaviour',u'location': {u'latitude': u'52.306209',
u'longitude': u'0.490475'},u'month': u'2015-02'}]]
# format into a flat list
formatted_crimes = [[x['category'], x['location']['latitude'], x['location']['longitude']] for i in crimes for x in i]
# now pass the formatted list to DataFrame and label the columns
df = pd.DataFrame(formatted_crimes, columns=['Category', 'Latitude', 'Longitude'])
The result is:
Category Latitude Longitude
0 anti-social-behaviour 52.309886 0.496902
1 anti-social-behaviour 52.306209 0.490475
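If you would rather not build the flat list by hand, an alternative worth knowing is pandas' `json_normalize` (exposed as `pandas.io.json.json_normalize` in older versions), which flattens the nested `location` dict into dotted column names. A small sketch, reusing the `crimes` variable from above:
import pandas as pd

records = [x for sub in crimes for x in sub]   # flatten the outer list
df = pd.json_normalize(records)                # columns: category, month, location.latitude, location.longitude
df = df.rename(columns={'category': 'Category',
                        'location.latitude': 'Latitude',
                        'location.longitude': 'Longitude'})[['Category', 'Latitude', 'Longitude']]
print(df)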
|
Exchanging out specific lines of a file
Question: I don't know if this should be obvious to the more tech savvy among us but is
there a specific way to read a line out of a text file, then edit it and
insert it back into the file in the original location? I have looked on the
site but all the solutions I find seem to be for python 2.7.
Below is an example of what I am looking for:
with open example.txt as file:
for line in file:
if myline in file:
file.edit("foo","fah")
Answer: In 95% of cases, replacing data (e.g. text) in a file means:
1. Read the file in chunks, e.g. line-by-line
2. Edit the chunk
3. Write the edited chunk to a new file
4. Replace the old file with a new file.
So, a simple version of the code will be:
import os

with open(in_path, 'r') as fin:
    with open(temp_path, 'w') as fout:
        for line in fin:
            # str.replace returns a new string, so write its result
            fout.write(line.replace('foo', 'fah'))
os.rename(temp_path, in_path)
Why not in-place replacement? Well, a file is a fixed sequence of bytes, and
the only way to grow it is to append to the end of the file. If you want to
replace data with data of the same length, there is no problem. However, if the
original and new sequences' lengths differ, there is trouble: the new sequence
will overwrite the following characters. E.g.
original: abc hello abc world
replace abc -> 12345
result: 12345ello 12345orld
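For a slightly fuller sketch of the same read-edit-write-replace cycle, you can let the `tempfile` module pick the temporary name and swap it in afterwards (Python 3; the paths and names here are just examples):
import os
import tempfile

def replace_in_file(path, old, new):
    # Write the edited lines to a temporary file in the same directory,
    # then swap it in place of the original.
    dir_name = os.path.dirname(os.path.abspath(path))
    with open(path, 'r') as fin, \
         tempfile.NamedTemporaryFile('w', dir=dir_name, delete=False) as fout:
        for line in fin:
            fout.write(line.replace(old, new))
        temp_name = fout.name
    os.replace(temp_name, path)   # os.rename also works on POSIX

replace_in_file('example.txt', 'foo', 'fah')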
|
RotatingFileHandler does not continue logging after error encountered
Question:
Traceback (most recent call last):
File "/usr/lib64/python2.6/logging/handlers.py", line 76, in emit
if self.shouldRollover(record):
File "/usr/lib64/python2.6/logging/handlers.py", line 150, in shouldRollover
self.stream.seek(0, 2) #due to non-posix-compliant Windows feature
ValueError: I/O operation on closed file
I have a line in my script:
handler = logging.handlers.RotatingFileHandler(cfg_obj.log_file,maxBytes = maxlog_size, backupCount = 10)
It runs fine when there are no error messages. But when there's an error log,
the logs after the error are not written to the file unless the process is
restarted. We do not want to restart the process every time there is an error.
Thanks for your help in advance!
Answer: I highly recommend you to use a configuration file. The configuration code
below "logging.conf" has different handlers and formatters just as example:
[loggers]
keys=root
[handlers]
keys=consoleHandler, rotatingFileHandler
[formatters]
keys=simpleFormatter, extendedFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler, rotatingFileHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[handler_rotatingFileHandler]
class=handlers.RotatingFileHandler
level=INFO
formatter=extendedFormatter
args=('path/logs_file.log', 'a', 2000000, 1000)
[formatter_simpleFormatter]
format=%(asctime)s - %(levelname)s - %(message)s
datefmt=
[formatter_extendedFormatter]
format= %(asctime)s - %(levelname)s - %(filename)s:%(lineno)s - %(funcName)s() %(message)s
datefmt=
Now how to use it "main.py":
import logging.config
# LOGGER
logging.config.fileConfig('path_to_conf_file/logging.conf')
LOGGER = logging.getLogger('root')
try:
LOGGER.debug("Debug message...")
LOGGER.info("Info message...")
except Exception as e:
LOGGER.exception(e)
Let me know if you need more help.
|
How to pass data from one page to another page in python tkinter?
Question: my program is this.. import tkinter as tk from tkinter import *
TITLE_FONT = ("Helvetica", 18, "bold")
class SampleApp(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
self.title(" Allocation")
width, height = self.winfo_screenwidth(), self.winfo_screenheight()
self.geometry('%dx%d+0+0' % (width,height))
self.state('zoomed')
self.wm_iconbitmap('icon.ico')
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (StartPage, PageOne, PageTwo):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, c):
frame = self.frames[c]
frame.tkraise()
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
logo = tk.PhotoImage(file="backesh.ppm")
BGlabel = tk.Label(self,image=logo)
BGlabel.image = logo
BGlabel.place(x=0,y=0,width=592,height=450)
label = tk.Label(self, text="This is the start page", font=TITLE_FONT)
label.place(x=0,y=0,width=592,height=44)
frame1 = Frame(self)
Label(frame1, bd=5,bg="black",text=" Enter text : ",font=("Helvetica", 14),fg="light green",width=21).pack(side=LEFT)
emp=Entry(frame1, bd =5,font=("Times New Roman", 14),width=25)
emp.pack(side=LEFT)
frame1.place(x=400,y=160)
button1 = tk.Button(self, text="Go to Page One",
command=lambda: controller.show_frame(PageOne))
button2 = tk.Button(self, text="Go to Page two",
command=lambda: controller.show_frame(PageTwo))
button3 = tk.Button(self, text="Exit",
command=self.quit)
button1.place(x=100,y=406,width=200,height=44)
button2.place(x=300,y=406,width=200,height=44)
button3.place(x=500,y=406,width=80,height=44)
class PageOne(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
logo = tk.PhotoImage(file="backesh.ppm")
BGlabel = tk.Label(self,image=logo)
BGlabel.image = logo
BGlabel.place(x=0,y=0,width=592,height=450)
label = tk.Label(self, text="This is page one", font=TITLE_FONT)
label.place(x=0,y=0,width=592,height=44)
button1 = tk.Button(self, text="Go to Start Page",
command=lambda: controller.show_frame(StartPage))
#button2 = tk.Button(self, text="Go to Page two",
# command=lambda: controller.show_frame(PageTwo))
button3 = tk.Button(self, text="Exit",
command=self.quit)
button1.place(x=100,y=406,width=200,height=44)
button3.place(x=300,y=406,width=200,height=44)
class PageTwo(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
logo = tk.PhotoImage(file="backesh.ppm")
BGlabel = tk.Label(self,image=logo)
BGlabel.image = logo
BGlabel.place(x=0,y=0,width=592,height=450)
label = tk.Label(self, text="This is page two", font=TITLE_FONT)
label.place(x=0,y=0,width=592,height=44)
button1 = tk.Button(self, text="Go to Start Page",
command=lambda: controller.show_frame(StartPage))
#button2 = tk.Button(self, text="Go to Page two",
# command=lambda: controller.show_frame(PageTwo))
button3 = tk.Button(self, text="Exit",
command=self.quit)
button1.place(x=100,y=406,width=200,height=44)
button3.place(x=300,y=406,width=200,height=44)
if __name__ == "__main__":
app = SampleApp()
app.mainloop()
i want to take entry text data from StartPage and display it as label in
PageOne. How to do it? I am new to this. Please type the code. Thanks in
advance.
Answer: Firstly, correct the code of the classes **PageOne** and **StartPage** and add
_self.controller = controller_ to the __init__ function:
class PageOne(tk.Frame):
    def __init__(self, parent, controller):
        # your code
        self.controller = controller
Add self before entry field in StartPage and before label in PageOne:
#entry in StartPage
self.emp=tk.Entry(frame1, bd =5,font=("Times New Roman", 14),width=25)
self.emp.pack(side=tk.LEFT)
#label in PageOne
self.label = tk.Label(self, text="This is page one", font=TITLE_FONT)
self.label.place(x=0,y=0,width=592,height=44)
Then add a function go_to_page_one to the StartPage class:
def go_to_page_one(self):
    self.controller.SomeVar = self.emp.get()          # save text from entry to some var
    self.controller.frames[PageOne].correct_label()   # call correct_label function
    self.controller.show_frame(PageOne)               # show page one
On button1 in the StartPage class, change the command to `lambda: self.go_to_page_one()`:
button1 = tk.Button(self, text="Go to Page One",
command=lambda: self.go_to_page_one())
At last, add a function correct_label to the class PageOne:
def correct_label(self):
    self.label.config(text=self.controller.SomeVar)   # correct the label
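Putting the pieces together, a stripped-down, runnable sketch of the idea (without the images and fixed geometry of your original) could look like this:
import tkinter as tk

class SampleApp(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        container = tk.Frame(self)
        container.pack(side="top", fill="both", expand=True)
        self.frames = {}
        for F in (StartPage, PageOne):
            frame = F(container, self)
            self.frames[F] = frame
            frame.grid(row=0, column=0, sticky="nsew")
        self.show_frame(StartPage)

    def show_frame(self, c):
        self.frames[c].tkraise()

class StartPage(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller
        self.emp = tk.Entry(self)
        self.emp.pack()
        tk.Button(self, text="Go to Page One",
                  command=self.go_to_page_one).pack()

    def go_to_page_one(self):
        # Save the entry text on the controller, update PageOne, then raise it.
        self.controller.SomeVar = self.emp.get()
        self.controller.frames[PageOne].correct_label()
        self.controller.show_frame(PageOne)

class PageOne(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller
        self.label = tk.Label(self, text="This is page one")
        self.label.pack()
        tk.Button(self, text="Go to Start Page",
                  command=lambda: controller.show_frame(StartPage)).pack()

    def correct_label(self):
        self.label.config(text=self.controller.SomeVar)

if __name__ == "__main__":
    SampleApp().mainloop()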
|
Remote build doesn't install dependencies using python 3.2 standard runtime
Question: I'm uploading a worker to IronWorker running Python 3.2 in the standard
environment, using my own HTTP client directly (not the Ruby or Go CLI)
according to the REST API. However, despite having a .worker file along with
my Python script in a zip file, and despite successfully uploading my worker,
dependencies are not installed prior to the worker execution, so I get an
error like this:
Traceback (most recent call last):
File "/mnt/task/pakt.py", line 3, in <module>
import requests
ImportError: No module named requests
The requests module is declared in my .worker file this way:
pip "requests"
How can I fix this? Thanks.
Answer: You should use the new Docker-based workflow; then you can be sure you have
the correct dependencies, and that everything is working, before uploading.
<https://github.com/iron-io/dockerworker/tree/master/python>
|
Where is my below python code failing?
Question: If the function call is backwardsPrime(9900, 10000), then the output should
be [9923, 9931, 9941, 9967]. Backwards Read Primes are primes that when read
backwards in base 10 (from right to left) are a different prime. It is one of
the katas on Codewars, and on submitting the solution below, I am getting the
following error:
Traceback:
  File "./frameworks/python/cw-2.py", line 28, in assert_equals
    expect(actual == expected, message, allow_raise)
  File "./frameworks/python/cw-2.py", line 18, in expect
    raise AssertException(message)
cw-2.AssertException: [1095047, 1095209, 1095319] should equal [1095047, 1095209, 1095319, 1095403]
import math
def isPrime(num):
    #flag = True
    rt = math.floor(math.sqrt(num))
    for i in range(2, int(rt)+1):
        if num % i == 0:
            return False
    else:
        return True

def Reverse_Integer(Number):
    Reverse = 0
    while(Number > 0):
        Reminder = Number % 10
        Reverse = (Reverse * 10) + Reminder
        Number = Number // 10
    return Reverse

def backwardsPrime(start, stop):
    s = list()
    for i in range(start, stop):
        if i > 9 and i != Reverse_Integer(i):
            if isPrime(i) and isPrime(Reverse_Integer(i)):
                s.append(i)
            else:
                continue
    return s
Codewars has its own test functions. Sample test case below:
a = [9923, 9931, 9941, 9967]
test.assert_equals(backwardsPrime(9900, 10000), a)
Answer: It looks like your code passes the test when run manually. Maybe the range to
scan over is set wrong in the test, causing it to miss the last one?
backwardsPrime(1095000, 1095405)
[1095047, 1095209, 1095319, 1095403]
e.g. the second parameter is set to `1095400` or something.
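Another thing worth double-checking on your side: Python's `range()` excludes the stop value, so if the kata treats both bounds as inclusive, the last candidate is never tested. A one-line change covers that case (a sketch, keeping your isPrime and Reverse_Integer as they are):
def backwardsPrime(start, stop):
    s = []
    for i in range(start, stop + 1):   # include the upper bound if the kata expects it
        if i > 9 and i != Reverse_Integer(i):
            if isPrime(i) and isPrime(Reverse_Integer(i)):
                s.append(i)
    return s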
|
python preserving output csv file column order
Question: The issue is common: when I import a CSV file, process it, and finally
output it, the order of the columns in the output CSV file may be different from
the original one. For instance:
import pandas as pd

dct={}
dct['a']=[1,2,3,4]
dct['b']=[5,6,7,8]
dct['c']=[9,10,11,12]
header = dct.keys()
rows=pd.DataFrame(dct).to_dict('records')
with open('outTest.csv', 'wb') as f:
    f.write(','.join(header))
    f.write('\n')
    for data in rows:
        f.write(",".join(str(data[h]) for h in header))
        f.write('\n')
the original csv file is like:
a,c,b
1,9,5
2,10,6
3,11,7
4,12,8
while I'd like to fix the order of the columns, like this output:
a,b,c
1,5,9
2,6,10
3,7,11
4,8,12
The answers I found are mostly related to `pandas`; I wonder if we can
solve this in another way.
Any help is appreciated, thank you.
Answer: Instead of `dct={}` just do this:
from collections import OrderedDict
dct = OrderedDict()
The keys will be ordered in the same order you define them.
Comparative test:
from collections import OrderedDict
dct = OrderedDict()
dct['a']=[1,2,3,4]
dct['b']=[5,6,7,8]
dct['c']=[9,10,11,12]
stddct = dict(dct) # create a standard dictionary
print(stddct.keys()) # "wrong" order
print(dct.keys()) # deterministic order
result:
['a', 'c', 'b']
['a', 'b', 'c']
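If you want to be explicit about the column order when writing the file regardless of the dictionary type, the csv module can also fix it via fieldnames. A small sketch (Python 3 style open, names taken from your example):
import csv

dct = {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8], 'c': [9, 10, 11, 12]}
header = ['a', 'b', 'c']   # the order you want in the output file

with open('outTest.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=header)
    writer.writeheader()
    for row in zip(*(dct[h] for h in header)):
        writer.writerow(dict(zip(header, row)))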
|
How to include __build_class__ when creating a module in the python C API
Question: I am trying to use the [Python 3.5 C API](https://docs.python.org/3/c-api/) to
execute some code that includes constructing a class. Specifically this:
class MyClass:
def test(self):
print('test')
MyClass().test()
The problem I have is that it errors like this:
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: __build_class__ not found
So somehow I need my module to include `__build_class__`, but I am not sure
how (I guess that I would also miss other things you get by default when using
[Python](https://www.python.org/) too) - is there a way to include all this
built-in stuff in my module?
Here is my code so far:
#include <Python.h>
int main(void)
{
int ret = 0;
PyObject *pValue, *pModule, *pGlobal, *pLocal;
Py_Initialize();
pGlobal = PyDict_New();
pModule = PyModule_New("mymod");
pLocal = PyModule_GetDict(pModule);
pValue = PyRun_String(
"class MyClass:\n\tdef test(self):\n\t\tprint('test')\n\nMyClass().test()",
Py_file_input,
pGlobal,
pLocal);
if (pValue == NULL) {
if (PyErr_Occurred()) {
PyErr_Print();
}
ret = 1;
} else {
Py_DECREF(pValue);
}
Py_Finalize();
return ret;
}
so `pValue` is `NULL` and it is calling `PyErr_Print`.
Answer: There are (at least) two ways to solve this, it seems...
## Way 1
Instead of:
pGlobal = PyDict_New();
You can import the `__main__` module and get its globals dictionary like
this:
pGlobal = PyModule_GetDict(PyImport_AddModule("__main__"));
This way is described like so:
> BUT PyEval_GetGlobals will return null it it is not called from within
> Python. This will never be the case when extending Python, but when Python
> is embedded, it may happen. This is because PyRun_* define the global scope,
> so if you're not somehow inside a PyRun_ thing (e.g. module called from
> python called from embedder), there are no globals.
>
> In an embedded-python situation, if you decide that all of your PyRun_*
> calls are going to use `__main__` as the global namespace,
> PyModule_GetDict(PyImport_AddModule("`__main__`")) will do it.
Which I got from the question [embedding](http://www.gossamer-
threads.com/lists/python/python/8946#8946) I found over on this [Python
list](http://www.gossamer-threads.com/lists/python/python/).
## Way 2
Or as an alternative, which I personally prefer to importing the main module
(and found [here](http://stackoverflow.com/a/10684099/1039947)), you can do
this to populate the new dictionary you created with the built-in stuff which
includes `__build_class__`:
pGlobal = PyDict_New();
PyDict_SetItemString(pGlobal, "__builtins__", PyEval_GetBuiltins());
|
C++ uses twice the memory when moving elements from one dequeue to another
Question: In my project, I use [pybind11](https://github.com/pybind/pybind11) to bind
C++ code to Python. Recently I have had to deal with very large data sets
(70GB+) and encountered the need to split data from one `std::deque` between
multiple `std::deque`'s. Since my dataset is so large, I expect the split not
to have much memory overhead. Therefore I went for a one pop, one push
strategy, which in general should ensure that my requirements are met.
That is all in theory. In practice, my process got killed. So I struggled for
the past two days and eventually came up with the following minimal example
demonstrating the problem.
Generally, the minimal example creates a bunch of data in a `deque` (~11GB),
returns it to Python, then calls into `C++` again to move the elements. Simple
as that. The moving part is done in an executor.
The interesting thing is that if I don't use the executor, memory usage is as
expected, and when limits on virtual memory are imposed with ulimit, the
program really respects these limits and doesn't crash.
**test.py**
from test import _test
import asyncio
import concurrent
async def test_main(loop, executor):
    numbers = _test.generate()
    # moved_numbers = _test.move(numbers) # This works!
    moved_numbers = await loop.run_in_executor(executor, _test.move, numbers) # This doesn't!

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    executor = concurrent.futures.ThreadPoolExecutor(1)
    task = loop.create_task(test_main(loop, executor))
    loop.run_until_complete(task)
    executor.shutdown()
    loop.close()
**test.cpp**
#include <deque>
#include <iostream>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
namespace py = pybind11;
PYBIND11_MAKE_OPAQUE(std::deque<uint64_t>);
PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>);
template<class T>
void py_bind_opaque_deque(py::module& m, const char* type_name) {
py::class_<std::deque<T>, std::shared_ptr<std::deque<T>>>(m, type_name)
.def(py::init<>())
.def(py::init<size_t, T>());
}
PYBIND11_PLUGIN(_test) {
namespace py = pybind11;
pybind11::module m("_test");
py_bind_opaque_deque<uint64_t>(m, "NumbersDequeue");
// Generate ~11Gb of data.
m.def("generate", []() {
std::deque<uint64_t> numbers;
for (uint64_t i = 0; i < 1500 * 1000000; ++i) {
numbers.push_back(i);
}
return numbers;
});
// Move data from one dequeue to another.
m.def("move", [](std::deque<uint64_t>& numbers) {
std::deque<uint64_t> numbers_moved;
while (!numbers.empty()) {
numbers_moved.push_back(std::move(numbers.back()));
numbers.pop_back();
}
std::cout << "Done!\n";
return numbers_moved;
});
return m.ptr();
}
**test/__init__.py**
import warnings
warnings.simplefilter("default")
**Compilation** :
g++ -std=c++14 -O2 -march=native -fPIC -Iextern/pybind11 `python3.5-config --includes` `python3.5-config --ldflags` `python3.5-config --libs` -shared -o test/_test.so test.cpp
**Observations:**
* When the moving part is not done by the executor, so we just call `moved_numbers = _test.move(numbers)`, all works as expected; memory usage shown by htop stays around `11Gb`. Great!
* When the moving part is done in the executor, the program takes double the memory and crashes.
* When limits on virtual memory are introduced (~15Gb), all works fine, which is probably the most interesting part.
`ulimit -Sv 15000000 && python3.5 test.py` >> `Done!`.
* When we increase the limit, the program crashes (150Gb > my RAM).
`ulimit -Sv 150000000 && python3.5 test.py` >> `[1] 2573 killed python3.5 test.py`
* Usage of the deque method `shrink_to_fit` doesn't help (and nor should it)
**Used software**
Ubuntu 14.04
gcc version 5.4.1 20160904 (Ubuntu 5.4.1-2ubuntu1~14.04)
Python 3.5.2
pybind11 latest release - v1.8.1
**Note**
Please note that this example was made merely to demonstrate the problem.
Usage of `asyncio` and `pybind` is necessary for the problem to occur.
Any ideas on what might be going on are most welcome.
Answer: The problem turned out to be caused by data being created in one thread and
then deallocated in another one. This is because of malloc arenas in glibc
[(for reference see this)](https://siddhesh.in/posts/malloc-per-thread-arenas-
in-glibc.html). It can be nicely demonstrated by doing:
executor1 = concurrent.futures.ThreadPoolExecutor(1)
executor2 = concurrent.futures.ThreadPoolExecutor(1)
numbers = await loop.run_in_executor(executor1, _test.generate)
moved_numbers = await loop.run_in_executor(executor2, _test.move, numbers)
which would take twice the memory allocated by `_test.generate` and
executor = concurrent.futures.ThreadPoolExecutor(1)
numbers = await loop.run_in_executor(executor, _test.generate)
moved_numbers = await loop.run_in_executor(executor, _test.move, numbers)
which wouldn't.
This issue can be solved either by rewriting the code so it doesn't move the
elements from one container to another (my case) or by setting the environment
variable `export MALLOC_ARENA_MAX=1`, which will limit the number of malloc arenas
to 1. This might, however, have some performance implications (there is
a good reason for having multiple arenas).
|
How to run a zeppelin notebook using REST api and return results in python?
Question: I am running a zeppelin notebook using the following REST call from python:
`import requests
requests.post('http://x.y.z.x:8080/api/notebook/job/2BZ3VJZ4G').json()`
The output is {u'status': u'OK'}
But I want to return some results/exception(if any) from few blocks in the
zeppelin notebook to the python script.
I also tried to run only a paragraph in the notebook using
requests.post('http://x.y.z.x:8080/api/notebook/job/2BZ3VJZ4G/20160922-140926_526498241').json()
and received the same output {u'status': u'OK'}.
Can somebody help me to retrieve the results from zeppelin in python?
Answer: Zeppelin has introduced a synchronous API to run a paragraph in its latest,
yet-to-be-released 0.7.0 version. You can clone the latest code from their repo
and build a snapshot yourself. The URL for the API is <http://[zeppelin-
server]:[zeppelin-port]/api/notebook/run/[notebookId]/[paragraphId]>. This
will return the output of the paragraph after it has run completely.
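A minimal sketch of calling that endpoint from Python; the host, port, and IDs below are the placeholders from your question, and the exact layout of the returned JSON can vary between Zeppelin versions, so inspect it once before relying on it:
import requests

zeppelin = 'http://x.y.z.x:8080'            # your Zeppelin server
notebook_id = '2BZ3VJZ4G'                   # from the question
paragraph_id = '20160922-140926_526498241'  # from the question

# Synchronous run: blocks until the paragraph finishes, then returns its output.
resp = requests.post('{}/api/notebook/run/{}/{}'.format(zeppelin, notebook_id, paragraph_id))
result = resp.json()
print(result.get('status'))
# The paragraph output (text, tables, errors) is typically nested under result['body'].
print(result.get('body'))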
|
selenium.common.exceptions.WebDriverException: Message: Service
Question: I had trouble when I used Selenium to control my Chrome. Here is my code:
from selenium import webdriver
driver = webdriver.Chrome()
When I tried to run it, it worked at first: Chrome popped up on
the screen. However, it shut down after a few seconds. [![Here is the Traceback
information](https://i.stack.imgur.com/EbQE7.jpg)](https://i.stack.imgur.com/EbQE7.jpg)
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
driver = webdriver.Chrome('C:\Program Files (x86)\Google\Chrome\chrome.exe')
File "C:\Users\35273\AppData\Local\Programs\Python\Python35\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 62, in __init__
self.service.start()
File "C:\Users\35273\AppData\Local\Programs\Python\Python35\lib\site-packages\selenium\webdriver\common\service.py", line 86, in start
self.assert_process_still_running()
File "C:\Users\35273\AppData\Local\Programs\Python\Python35\lib\site-packages\selenium\webdriver\common\service.py", line 99, in assert_process_still_running
% (self.path, return_code)
selenium.common.exceptions.WebDriverException: Message: Service C:\Program Files (x86)\Google\Chrome\chrome.exe unexpectedly exited. Status code was: 0
Answer: You need to provide the path of chromedriver. Download it from
<http://chromedriver.storage.googleapis.com/index.html?path=2.24/>, unzip it,
and provide the path to it in `webdriver.Chrome("path to chromedriver")`.
Let me explain:
from selenium import webdriver
driver = webdriver.Chrome("C:\Program Files (x86)\Google\Chrome\Application\chrome.exe")
This is the error if i run the above code:
C:\Python27\python.exe C:/Users/Gaurav.Gaurav-PC/PycharmProjects/Learning/StackOverflow/SeleniumQuestion/test123.py
Traceback (most recent call last):
File "C:/Users/Gaurav.Gaurav-PC/PycharmProjects/Learning/StackOverflow/SeleniumQuestion/test123.py", line 4, in <module>
driver = webdriver.Chrome("C:\Program Files (x86)\Google\Chrome\Application\chrome.exe")
File "C:\Python27\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 62, in __init__
self.service.start()
File "C:\Python27\lib\site-packages\selenium\webdriver\common\service.py", line 86, in start
self.assert_process_still_running()
File "C:\Python27\lib\site-packages\selenium\webdriver\common\service.py", line 99, in assert_process_still_running
% (self.path, return_code)
selenium.common.exceptions.WebDriverException: Message: Service C:\Program Files (x86)\Google
\Chrome\Application\chrome.exe unexpectedly exited. Status code was: 0
This is the same as mentioned by @Weiziyoung in the original problem.
The solution, as I mentioned, is that you need to provide the path to chromedriver in
place of the Chrome browser, like:
driver = webdriver.Chrome("E:\Jars\chromedriver.exe")
It will resolve the problem.
|
can we call python script in node js and run node js to get call?
Question:
python.py
from pymongo import MongoClient
from flask import Flask

app = Flask(__name__)
host = "10.0.0.10"
port = 8085

@app.route('/name/<string:name>',methods=['GET','POST'])
def GetNoteText(name):
    print name
    return "Data Received"

@app.route('/', methods=['POST'])
def abc():
    print "Hii"
    return ('Welcome')

users=[]

@app.route('/getNames')
def getName():
    client = MongoClient('mongodb://localhost:27017/')
    db = client.bridgeUserInformationTable
    cursor = db.bridgeUsersInfo.find()
    for document in cursor:
        #print "Name : ",document['name']
        users.append(document['name'])
        print document['name']
    #print (users)
    return "<html><body><h1>"+str(users)+"</h1></body></html>"

if __name__ == '__main__':
    app.run(
        host=host, port=port
    )
node.j
var PythonShell = require('python-shell');
PythonShell.run('pass.py', function (err) {
    if (err) throw err;
    console.log('finished');
});
As I tried above, can we call a Python script from Node.js after the Node.js script
receives input from the Android device? I am a little bit confused about how this
should be solved, and about how both languages should communicate with each other,
i.e. Python to Node.js.
Answer: [ZERORPC](http://www.zerorpc.io/) is a really nifty library built on top of
ZeroMQ. This is probably the easiest way to call Python code from Node.
For a really simple, non-robust approach, you could use a tmp file
to write the Python commands from Node. With an event loop running inside
Python, read the tmp file for any changes and execute the commands therein.
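For the ZeroRPC route, the Python side could look like this minimal sketch (the service class, method, and port are placeholder assumptions; the Node side would use the zerorpc npm package to connect to the same endpoint and call these methods by name):
import zerorpc

class NameService(object):
    """Methods exposed over ZeroRPC; a Node client can call them by name."""
    def get_names(self):
        # Placeholder for the MongoDB lookup done in the Flask example above.
        return ['alice', 'bob']

server = zerorpc.Server(NameService())
server.bind('tcp://0.0.0.0:4242')
server.run()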
|
Python handle 'NoneType' object has no attribute 'find_all' error with if else statement
Question: I am using beautifulsoup4 to grab stock data and send it to a spreadsheet in
Python. The problem I am having is that I cannot get my loop to skip over
attributes that return None. So what I need is code that adds null
values to rows where the attribute would return None.
import requests
import bs4

# my dictionary for storing data
data = {
    'Fiscal Quarter End' : [],
    'Date Reported' : [],
    'Earnings Per Share' : [],
    'Consensus EPS* Forecast' : [],
    '% Surprise' : []
}
url = ""
html = requests.get(url)
data = html.text
soup = bs4.BeautifulSoup(data)
table = soup.find("div", class_="genTable")
for row in table.find_all('tr')[1:]:
    if row.has_attr('tr'):
        cols = row.find_all("td")
        data['Fiscal Quarter End'].append( cols[0].get_text() )
        data['Date Reported'].append( cols[1].get_text() )
        data['Earnings Per Share'].append( cols[2].get_text() )
        data['Consensus EPS* Forecast'].append( cols[3].get_text() )
        data['% Surprise'].append( cols[4].get_text() )
    else:
        # where I need to add in the empty 'n/a' values
        data['Fiscal Quarter End'].append()
        data['Date Reported'].append()
        data['Earnings Per Share'].append()
        data['Consensus EPS* Forecast'].append()
        data['% Surprise'].append()
Answer: You have used the `data` variable for two different things. The second usage
overwrote your dictionary. It is simpler to just pass `html.text` to
`BeautifulSoup()`. Try the following:
import requests
import bs4
# My dictionary for storing data
data = {
'Fiscal Quarter End' : [],
'Date Reported' : [],
'Earnings Per Share' : [],
'Consensus EPS* Forecast' : [],
'% Surprise' : []
}
empty = 'n/a'
url = ""
html = requests.get(url)
soup = bs4.BeautifulSoup(html.text, "html.parser")
table = soup.find("div", class_="genTable")
rows = []
if table:
    rows = table.find_all('tr')[1:]
for row in rows:
    cols = row.find_all("td")
    data['Fiscal Quarter End'].append(cols[0].get_text())
    data['Date Reported'].append(cols[1].get_text())
    data['Earnings Per Share'].append(cols[2].get_text())
    data['Consensus EPS* Forecast'].append(cols[3].get_text())
    data['% Surprise'].append(cols[4].get_text())
if len(rows) == 0:
    # Add in the empty 'n/a' values if no columns found
    data['Fiscal Quarter End'].append(empty)
    data['Date Reported'].append(empty)
    data['Earnings Per Share'].append(empty)
    data['Consensus EPS* Forecast'].append(empty)
    data['% Surprise'].append(empty)
In the event the `table` or `rows` was empty, `data` would hold the following:
{'Date Reported': ['n/a'], 'Earnings Per Share': ['n/a'], '% Surprise': ['n/a'], 'Consensus EPS* Forecast': ['n/a'], 'Fiscal Quarter End': ['n/a']}
|
Remove newline characters that appear between curly brackets
Question: I'm currently writing a text processing script which contains static text and
variable values (surrounded by curly brackets). I need to be able to strip out
newline characters but only if they appear between the curly brackets:
`Some text\nwith a {variable\n} value"`
to:
`Some text\nwith a {variable} value"`
Further down in the processing I'm already doing this:
`re.sub(r'\{.*?\}', '(.*)', text, flags=re.MULTILINE|re.DOTALL)`
But I'm not sure how to target just the newline character and not the entirely
of the curly bracket pair. There is also the possibility of multiple newlines:
`Some text\nwith a {variable\n\n\n} value"`
* * *
Using Python 3.x
Answer: You may pass the match object to a lambda in a `re.sub` and replace all
newlines inside `{...}`:
import re
text = 'Some text\nwith a {variable\n} value"'
print(re.sub(r'{.*?}', lambda m: m.group().replace("\n", ""), text, flags=re.DOTALL))
See [online Python 3 demo](http://ideone.com/ElV2ph)
Note that you do not need `re.MULTILINE` flag with this regex as it has no
`^`/`$` anchors to redefine the behavior of, and you do not need to escape `{`
and `}` in the current expression (without excessive backslashes, regexps look
cleaner).
|
How to remove 'selected' attribute from item in ModelChoiceField?
Question: ModelChoiceField adds the `selected` attribute in HTML if an object from the choices
list has an FK to the parent object.
How/where can I remove this 'selected' attribute in order to get just the list of
choices? I want to mention that I need to remove just the 'selected' attribute, i.e.
the value itself should not be removed from the list of choices. I need to
hook it somehow from the Python side, not from HTML. I tried to find the needed
attribute in different places inside `form`, but no luck.
Does anyone know the part of the Django code where there is a check whether an object
from the choices list has an FK to the parent model?
Answer: I don't know whether it would work or not, but the idea should be clear.
So, I found the source of the `Select` widget that sets your `selected` property in
HTML. It's
[here](https://docs.djangoproject.com/en/1.10/_modules/django/forms/widgets/#Select),
just search for `selected_html`.
You can try to subclass `Select` widget:
from django.forms.widgets import Select
from django.utils.encoding import force_text
from django.utils.html import format_html

class CustomSelect(Select):
    def render_option(self, selected_choices, option_value, option_label):
        if option_value is None:
            option_value = ''
        option_value = force_text(option_value)
        if option_value in selected_choices:
            selected_html = ''  # make it an empty string like in the else branch, or refactor the whole method
            if not self.allow_multiple_selected:
                # Only allow for a single selection.
                selected_choices.remove(option_value)
        else:
            selected_html = ''
        return format_html('<option value="{}"{}>{}</option>', option_value, selected_html, force_text(option_label))
And then in forms
class YourForm(forms.Form):
    your_field = forms.ModelChoiceField(widget=CustomSelect())
    ...
It's just a solution that I came up with, and I know that it is not so
elegant, but it seems that there is no simple way to disable that `selected`
thing.
|
convert coordinates into binary image python
Question: I made a GUI in which I can load an image from disk and convert the mouse
motion ( drawing the contours ) into coordinates. Now I need to convert the
coordinates into a binary image and I don't know how. Here is my code:
from Tkinter import *
from tkFileDialog import askopenfilename
from PIL import Image, ImageTk
import numpy as np
import cv2
class Browse_image :
def __init__ (self,master) :
frame = Frame(master)
frame.grid(sticky=W+E+N+S)
self.browse = Button(frame, text="Browse", command = lambda: browseim(self))
self.browse.grid(row=13, columnspan=1)
self.photo = PhotoImage(file="browse.png")
self.label = Label(frame, image=self.photo)
self.label.grid(row=1,rowspan=10)
self.label.bind('<B1-Motion>', self.mouseevent)
def mouseevent(self,event):
w=self.photo.width()
h=self.photo.height()
a = np.zeros(shape=(h,w))
#print event.x, event.y
a[event.x,event.y]=1
plt.imsave('binary.png', a, cmap=cm.gray)
def browseim(self):
path = askopenfilename(filetypes=(("png files","*.png"),("jpeg files",
"*.jpeg")) )
if path:
img = Image.open(path)
self.photo = ImageTk.PhotoImage(img)
self.label.configure(image = self.photo)
#self.label.image = self.photo
root= Tk()
b= Browse_image(root)
root.mainloop()
Answer: Your issue is that you are creating a new empty array at each mouse event with
the line `a = np.zeros(shape=(h,w))` in `mouseevent`. To fix that, you should
declare `a` in `__init__` as an attribute (i.e. `self.a =
np.zeros(shape=(h,w))`) so that you can access and update it without
overwriting it in your `mouseevent` function.
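For illustration, a sketch of that change as it would sit inside your existing class (the matplotlib imports are assumptions, since the original snippet uses plt and cm without importing them):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# inside Browse_image.__init__, after self.photo is created:
#     self.a = np.zeros(shape=(self.photo.height(), self.photo.width()))

def mouseevent(self, event):
    # self.a persists between events, so points accumulate instead of
    # being wiped out by a fresh np.zeros() on every motion.
    self.a[event.y, event.x] = 1   # rows correspond to y, columns to x
    plt.imsave('binary.png', self.a, cmap=cm.gray)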
|
(Python) How to find the index of the maximum value of a list?
Question: The instructions for the program: To create an empty list and fill it with 20
random integers between 1-500 inclusive, print the list on a single line, and
print the maximum value of that list on a separate line. The max value
function cannot be used.
But then it asks to print the index of the maximum value in the list. Here's
what I have so far:
import random
print()
nums = []
for num in range(20):
    nums.append(random.randint(1, 500)); sep=''
max = nums[0]
for i in nums:
    if i > max:
        max = i
print(nums)
print("The maximum value that appears in the list is " + str(max) + ".")
So I have the max value problem solved, but I'm having a hard time determining
how exactly to find the index of the maximum value from the list.
Any help appreciated.
Answer: Lists have a `.index` method. That should fit the bill.
>>> ["a","b","c"].index("b")
1
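Applied to your code, you can combine `.index` with the maximum you already computed (renaming `max` so it doesn't shadow the built-in), for example:
max_value = nums[0]
for i in nums:
    if i > max_value:
        max_value = i

max_index = nums.index(max_value)   # first position where the maximum appears
print("The maximum value is " + str(max_value) + " at index " + str(max_index) + ".")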
|
connecting to a remote server and downloading and extracting binary tar file using python
Question: I would like to connect to a remote server and download and extract a binary tar
file into a specific directory on that host. I am using Python 2.6.8.
1. What would be a simple way to ssh to that server?
2. I see the errors below in my script to download the tar file and extract it:
Traceback (most recent call last):
  File "./wgetscript.py", line 16, in <module>
    tar = tarfile.open(file_tmp)
  File "/usr/lib64/python2.6/tarfile.py", line 1653, in open
    return func(name, "r", fileobj, **kwargs)
  File "/usr/lib64/python2.6/tarfile.py", line 1715, in gzopen
    fileobj = bltn_open(name, mode + "b")
TypeError: coercing to Unicode: need string or buffer, tuple found
#!/usr/bin/env python
import os
import tarfile
import urllib
url = 'http://**************/Lintel/Mongodb/mongodb-linux-x86_64-enterprise-suse12-3.2.6.tgz'
fullfilename = os.path.join('/tmp/demo1','file.tgz')
file_tmp = urllib.urlretrieve(url,fullfilename)
print file_tmp
base_name = os.path.basename(url)
print base_name
file_name, file_extension = os.path.splitext(base_name)
print file_name, file_extension
tar = tarfile.open(file_tmp)
nameoffile = os.path.join('/tmp/demo1','file')
tar.extractall(file_name,nameoffile)
tar.close()
Answer: There are 2 errors here:
* `urllib.urlretrieve` (or `urllib.request.urlretrieve` in Python 3) returns a `tuple`: (filename, HTTP headers). You have to unpack the result into 2 values (or into the original `fullfilename`).
* The download is OK, but the `tarfile` module doesn't work the way you think it works: `tar.extractall` takes 2 optional arguments: the destination path and an optional `list` of members (which you can get with `tar.getmembers()` in your case). For this example, I propose that we drop that filter and extract all the contents into the temp directory.
Fixed code:
url = 'http://**************/Lintel/Mongodb/mongodb-linux-x86_64-enterprise-suse12-3.2.6.tgz'
temp_dir = '/tmp/demo1'
fullfilename = os.path.join(temp_dir,'file.tgz')
# urllib.request in Python 3
file_tmp,http_message = urllib.urlretrieve(url,fullfilename)
base_name = os.path.basename(url)
file_name, file_extension = os.path.splitext(base_name)
tar = tarfile.open(file_tmp)
#print(tar.getmembers()[0:5]) # would print 5 first members of the archive
tar.extractall(os.path.join(temp_dir,file_name))
tar.close()
|
Timing blocks of code - Python
Question: I'm trying to measure the time it takes to run a block of instructions in
Python, but I don't want to write things like:
start = time.clock()
...
<lines of code>
...
time_elapsed = time.clock() - start
Instead, I want to know if there is a way I can send the block of instructions
as a parameter to a function that returns the elapsed time, like
time_elapsed = time_it_takes(<lines of code>)
The implementation of this method could be something like
def time_it_takes(<lines of code>):
    start = time.clock()
    result = <lines of code>
    return (result, time.clock() - start)
Does anybody know if there is some way I can do this? Thanks in advance.
Answer: This would be a good use of a decorator. You could write a decorator that does
that like this
import time
def timer(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        func(*args, **kwargs)
        print('The function ran for', time.time() - start)
    return wrapper

@timer
def just_sleep():
    time.sleep(5)

just_sleep()
Output
The function ran for 5.0050904750823975
and then you can decorate any function you want to time with `@timer`. You
can also do some other fancy things inside the decorator, like: if the function
ran for more than 15 seconds, do something... else do another thing.
**Note: This is not the most accurate way to measure the execution time of a
function in Python.**
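If you also want the wrapped call's return value back, as in your desired `time_it_takes`, a small variant of the same idea (a sketch) is:
import time

def timer(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        # hand back both the original result and the elapsed time
        return result, time.time() - start
    return wrapper

@timer
def add(a, b):
    return a + b

result, elapsed = add(2, 3)
print(result, elapsed)   # e.g. 5 and a very small number of seconds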
|
OSError: .pynative/... : file too short
Question: When I try to apply OCRopus (a python-based OCR tool) to a TIFF image, I get
the following python error:
Traceback (most recent call last):
File "/usr/local/bin/ocropus-nlbin", line 10, in <module>
import ocrolib
File "/usr/local/lib/python2.7/dist-packages/ocrolib/__init__.py", line 12, in <module>
from common import *
File "/usr/local/lib/python2.7/dist-packages/ocrolib/common.py", line 18, in <module>
import lstm
File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 32, in <module>
import nutils
File "/usr/local/lib/python2.7/dist-packages/ocrolib/nutils.py", line 25, in <module>
lstm_native = compile_and_load(lstm_utils)
File "/usr/local/lib/python2.7/dist-packages/ocrolib/native.py", line 68, in compile_and_load
return ctypes.CDLL(path)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: .pynative/cccd32009099f8dade0fe6cd205bf188.so: file too short
Traceback (most recent call last):
File "/usr/local/bin/ocropus-gpageseg", line 22, in <module>
import ocrolib
File "/usr/local/lib/python2.7/dist-packages/ocrolib/__init__.py", line 12, in <module>
from common import *
File "/usr/local/lib/python2.7/dist-packages/ocrolib/common.py", line 18, in <module>
import lstm
File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 32, in <module>
import nutils
File "/usr/local/lib/python2.7/dist-packages/ocrolib/nutils.py", line 25, in <module>
lstm_native = compile_and_load(lstm_utils)
File "/usr/local/lib/python2.7/dist-packages/ocrolib/native.py", line 68, in compile_and_load
return ctypes.CDLL(path)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: .pynative/cccd32009099f8dade0fe6cd205bf188.so: file too short
Traceback (most recent call last):
File "/usr/local/bin/ocropus-rpred", line 7, in <module>
import ocrolib
File "/usr/local/lib/python2.7/dist-packages/ocrolib/__init__.py", line 12, in <module>
from common import *
File "/usr/local/lib/python2.7/dist-packages/ocrolib/common.py", line 18, in <module>
import lstm
File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 32, in <module>
import nutils
File "/usr/local/lib/python2.7/dist-packages/ocrolib/nutils.py", line 25, in <module>
lstm_native = compile_and_load(lstm_utils)
File "/usr/local/lib/python2.7/dist-packages/ocrolib/native.py", line 68, in compile_and_load
return ctypes.CDLL(path)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: .pynative/cccd32009099f8dade0fe6cd205bf188.so: file too short
Traceback (most recent call last):
File "/usr/local/bin/ocropus-hocr", line 8, in <module>
import ocrolib
File "/usr/local/lib/python2.7/dist-packages/ocrolib/__init__.py", line 12, in <module>
from common import *
File "/usr/local/lib/python2.7/dist-packages/ocrolib/common.py", line 18, in <module>
import lstm
File "/usr/local/lib/python2.7/dist-packages/ocrolib/lstm.py", line 32, in <module>
import nutils
File "/usr/local/lib/python2.7/dist-packages/ocrolib/nutils.py", line 25, in <module>
lstm_native = compile_and_load(lstm_utils)
File "/usr/local/lib/python2.7/dist-packages/ocrolib/native.py", line 68, in compile_and_load
return ctypes.CDLL(path)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: .pynative/cccd32009099f8dade0fe6cd205bf188.so: file too short
Since this is a Python issue, I haven't tagged OCRopus; should I tag it as
well? Could it be a Python installation matter? If so, how can I solve it?
Answer: Problem solved. I saw other people having trouble (on diverse matters) with:
> OSError:**[X]**... : file too short
My suggestion is: whatever you are doing, check for hidden directories named
[X] in the current directory and delete them.
|
elementTree is not opening and parsing my xml file
Question: It seems like `vehicles.write(cars_file)` and `vehicles =
cars_Etree.parse(cars_file)` are having a problem with the file name:
import argparse
import xml.etree.ElementTree as cars_Etree

# elementTree not reading xml file properly
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        dest='file_name',
        action='store',
        help='File name',
        metavar='FILE'
    )
    parser.add_argument(
        'car_make', help='car name')
    args = parser.parse_args()
    with open(args.file_name, 'r') as cars_file:
        vehicles = cars_Etree.parse(cars_file)
    cars = vehicles.getroot()
    for make in cars.findall(args.car_make):
        name = make.get('name')
        if name != args.car_make:
            cars.remove(make)
    with open(args.file_name, 'w') as cars_file:
        vehicles.write(cars_file)
Error:
Traceback (most recent call last):
File "/Users/benbitdiddle/PycharmProjects/VehicleFilter/FilterTest.py", line 23, in <module>
vehicles = cars_Etree.parse(cars_file)
File "/Applications/anaconda/lib/python3.5/xml/etree/ElementTree.py", line 1184, in parse
tree.parse(source, parser)
File "/Applications/anaconda/lib/python3.5/xml/etree/ElementTree.py", line 596, in parse
self._root = parser._parse_whole(source)
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
The XML file, which I am trying to filter, is in the same project folder. I tried
supplying the path with the file name, and it still didn't work.
<?xml version="1.0" encoding="UTF-8"?>
<cars>
<make name="toyota">
<model name="tacoma" />
<model name="tundra" />
</make>
<make name="ford">
<model name="escort" />
<model name="taurus" />
</make>
<make name="chevy">
<model name="silverado" />
<model name="volt" />
</make>
</cars>
Answer: It works fine now, but I'm still improving it. Thank you.
I made this modification in main.py:
path = "/Users/benbitdiddle/PycharmProjects/VehicleFilter/"
CF = CarFilter(path+args.file_name, args.car_make)
CF.filterCar()
And changed 'w' to 'wb' in the CarFilter.py class file:
with open(self.file_name, 'wb') as car_file:
    self.vehicles.write(car_file)
|
DoesNotExist at /admin/login/
Question:
DoesNotExist at /admin/login/
Site matching query does not exist.
Request Method: GET
Request URL: https://tehb123.pythonanywhere.com/admin/login/?next=/admin/
Django Version: 1.9.3
Exception Type: DoesNotExist
Exception Value:
Site matching query does not exist.
Exception Location: /usr/local/lib/python3.5/dist-packages/django/db/models/query.py in get, line 387
Python Executable: /usr/local/bin/uwsgi
Python Version: 3.5.1
Python Path:
['/var/www',
'.',
'',
'/home/tehb123/.local/lib/python3.5/site-packages',
'/var/www',
'/usr/lib/python3.5',
'/usr/lib/python3.5/plat-x86_64-linux-gnu',
'/usr/lib/python3.5/lib-dynload',
'/usr/local/lib/python3.5/dist-packages',
'/usr/lib/python3/dist-packages',
'/home/tehb123/mysite']
Server time: Thu, 13 Oct 2016 05:34:55 +0000
* urls
from django.conf.urls import url, patterns, include
from django.contrib import admin
from django.contrib.flatpages import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^', include('Mysitez.urls')),
    # url(r'^pages/', include('django.contrib.flatpages.urls')),
]
urlpatterns += [
    url(r'^(?P.*/)$', views.flatpage),
]
* settings
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.templatetags',
    'django.apps',
    'django.contrib.sites',
    'django.contrib.flatpages',
    'Mysitez',
]
Answer: # The admin site
Once you create a project with Django version > 1.6, the admin site is enabled
by Django itself, but make sure you have followed
<https://docs.djangoproject.com/en/1.10/ref/contrib/admin/>. Once you have
created the project, run `python manage.py migrate`, then create the admin user
with `python manage.py createsuperuser`. Run the development server, open a URL
like <http://127.0.0.1:8000/admin/>, and you will see the login page.
|
selenium python webscrape fails after first iteration
Question: I'm iterating through TripAdvisor to save comments (non-translated, original)
and translated comments (from Portuguese to English). The scraper first
selects Portuguese comments to be displayed, then converts them
into English one by one and saves the translated comments in com_, and the
expanded non-translated comments in expanded_comments.
The code works fine on the first page, but from the second page onward it fails to
save translated comments. Strangely, it translates only the first comment on
each page and doesn't even save it.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
com_=[]
expanded_comments=[]
date_=[]
driver = webdriver.Chrome("C:\Users\shalini\Downloads\chromedriver_win32\chromedriver.exe")
driver.maximize_window()
from bs4 import BeautifulSoup
def expand_reviews(driver):
# TRYING TO EXPAND REVIEWS (& CLOSE A POPUP)
try:
driver.find_element_by_class_name("moreLink").click()
except:
print "err"
try:
driver.find_element_by_class_name("ui_close_x").click()
except:
print "err2"
try:
driver.find_element_by_class_name("moreLink").click()
except:
print "err3"
def save_comments(driver):
expand_reviews(driver)
# SELECTING ALL EXPANDED COMMENTS
#xpanded_com_elements=driver.find_elements_by_class_name("entry")
time.sleep(3)
#or i in expanded_com_elements:
# expanded_comments.append(i.text)
spi=driver.page_source
sp=BeautifulSoup(spi)
for t in sp.findAll("div",{"class":"entry"}):
if not t.findAll("p",{"class":"partial_entry"}):
#print t
expanded_comments.append(t.getText())
# Saving review date
for d in sp.findAll("span",{"class":"recommend-titleInline"}) :
date=d.text
date_.append(date_)
# SELECTING ALL GOOGLE-TRANSLATOR links
gt= driver.find_elements(By.CSS_SELECTOR,".googleTranslation>.link")
# NOW PRINTING TRANSLATED COMMENTS
for i in gt:
try:
driver.execute_script("arguments[0].click()",i)
#com=driver.find_element_by_class_name("ui_overlay").text
com= driver.find_element_by_xpath(".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']")
com_.append(com.text)
time.sleep(5)
driver.find_element_by_class_name("ui_close_x").click().perform()
time.sleep(5)
except Exception as e:
pass
# ITERATING THROIGH ALL 200 tripadvisor webpages and saving comments & translated comments
for i in range(200):
page=i*10
url="https://www.tripadvisor.com/Airline_Review-d8729164-Reviews-Cheap-Flights-or"+str(page)+"-TAP-Portugal#REVIEWS"
driver.get(url)
wait = WebDriverWait(driver, 10)
if i==0:
# SELECTING PORTUGUESE COMMENTS ONLY # Run for one time then iterate over pages
try:
langselction = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "span.sprite-date_picker-triangle")))
langselction.click()
driver.find_element_by_xpath("//div[@class='languageList']//li[normalize-space(.)='Portuguese first']").click()
time.sleep(5)
except Exception as e:
print e
save_comments(driver)
Answer: There are 3 problems in your code
1. Inside the method `save_comments()`, at `driver.find_element_by_class_name("ui_close_x").click().perform()`: the `click()` method of a WebElement is not an ActionChain, so you cannot call `perform()` on it. Therefore, that line should be like this:
driver.find_element_by_class_name("ui_close_x").click()
2. Inside the method `save_comments()`, at `com= driver.find_element_by_xpath(".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']")`: you look for the element before it has appeared. So you have to add a wait before this line. Your code should be like this:
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, ".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']")))
com= driver.find_element_by_xpath(".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']")
3. There are 2 buttons which can open the review, one is displayed and one is hidden. So you have to skip the hidden button.
if not i.is_displayed():
continue
driver.execute_script("arguments[0].click()",i)
|
Initialize variable depending on another variable's type
Question: In Python 2.7 I want to initialize a variable's type depending on another
variable.
For example, I want to do something like:
var_1 = type(var_2)
Is there a simple/fast way to do that?
Answer: Just create another instance
var_1 = type(var_2)()
Note that if you're not sure whether the object has a non-default constructor,
you cannot rely on the above, but you can use `copy` or `deepcopy` (you get a
"non-empty" object):
import copy
var_1 = copy.copy(var_2) # or copy.deepcopy
You could combine both, with the latter as a fallback mechanism; a sketch follows below.
Note: `deepcopy` will ensure that your second object is completely independent
from the first (If there are lists of lists, for instance)
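A minimal sketch of that fallback idea (the helper name `fresh_like` is just an illustration, not something from the question):
import copy

def fresh_like(obj):
    # Try the default constructor first: type([1, 2])() -> []
    try:
        return type(obj)()
    except TypeError:
        # Non-default constructor: fall back to copying the object
        return copy.deepcopy(obj)

var_2 = [1, 2, 3]
var_1 = fresh_like(var_2)  # -> [] (a new, empty list)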
|
Parse dates and create time series from .csv
Question: I am using a simple csv file which contains data on calorie intake. It has 4
columns: `cal`, `day`, `month`, `year`. It looks like this:
cal month year day
3668.4333 1 2002 10
3652.2498 1 2002 11
3647.8662 1 2002 12
3646.6843 1 2002 13
...
3661.9414 2 2003 14
# data types
cal float64
month int64
year int64
day int64
I am trying to do some simple time series analysis. I would hence like to
parse `month`, `year`, and `day` into a single column. I tried the following
using `pandas`:
import pandas as pd
from pandas import Series, DataFrame, Panel
data = pd.read_csv('time_series_calories.csv', header=0, pars_dates=['day', 'month', 'year']], date_parser=True, infer_datetime_format=True)
My questions are: (1) How do I parse the data and (2) define the data type of
the new column? I know there are quite a few other similar questions and
answers (see e.g. [here](http://stackoverflow.com/questions/23797491/parse-
dates-in-pandas), [here](http://stackoverflow.com/questions/34146679/python-
pandas-csv-parsing) and
[here](http://stackoverflow.com/questions/30864676/pandas-parse-dates-from-
csv)) - but I can't make it work so far.
Answer: You can use the `parse_dates` parameter of
[`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html),
where you define the column names in a `list`:
import pandas as pd
import numpy as np
import io
temp=u"""cal,month,year,day
3668.4333,1,2002,10
3652.2498,1,2002,11
3647.8662,1,2002,12
3646.6843,1,2002,13
3661.9414,2,2003,14"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), parse_dates=[['year','month','day']])
print (df)
year_month_day cal
0 2002-01-10 3668.4333
1 2002-01-11 3652.2498
2 2002-01-12 3647.8662
3 2002-01-13 3646.6843
4 2003-02-14 3661.9414
print (df.dtypes)
year_month_day datetime64[ns]
cal float64
dtype: object
Then you can rename column:
df.rename(columns={'year_month_day':'date'}, inplace=True)
print (df)
date cal
0 2002-01-10 3668.4333
1 2002-01-11 3652.2498
2 2002-01-12 3647.8662
3 2002-01-13 3646.6843
4 2003-02-14 3661.9414
Or better, pass a `dictionary` with the new column name to `parse_dates`:
df = pd.read_csv(io.StringIO(temp), parse_dates={'dates': ['year','month','day']})
print (df)
dates cal
0 2002-01-10 3668.4333
1 2002-01-11 3652.2498
2 2002-01-12 3647.8662
3 2002-01-13 3646.6843
4 2003-02-14 3661.9414
|
Get Pixel count using RGB values in Python
Question: I have to find the count of pixels that match a given RGB value, so I need
the logic for that.
Example:
I have an image called image.jpg. The image has width = 100 and height =
100, and the target red value is 128, green is 0 and blue is 128. I need to find
how many pixels of that RGB value the image has. Can anyone help?
Answer: As already mentioned in [this](http://stackoverflow.com/questions/138250/how-
can-i-read-the-rgb-value-of-a-given-pixel-in-python) question, by using
`Pillow` you can do the following:
from PIL import Image
im = Image.open('image.jpg', 'r')
If successful, this function returns an Image object. You can now use instance
attributes to examine the file contents:
width, height = im.size
pixel_values = list(im.getdata())
print(im.format, im.size, im.mode)
The `format` attribute identifies the source of an image. If the image was not
read from a file, it is set to None. The `mode` attribute defines the number
and names of the bands in the image, and also the pixel type and depth. Common
modes are “L” (luminance) for greyscale images, “RGB” for true color images,
and “CMYK” for pre-press images.
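Building on that, a minimal sketch of the counting step (the target colour (128, 0, 128) is taken from the example in the question):
from PIL import Image

im = Image.open('image.jpg').convert('RGB')  # force 3-tuple (R, G, B) pixels
target = (128, 0, 128)                       # red=128, green=0, blue=128

# Count how many pixels exactly match the target colour
count = sum(1 for pixel in im.getdata() if pixel == target)
print(count)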
|
python append() and remove html tags
Question: I need some help. My output seems wrong. How can I correctly append the values
of dept, job_title and job_location? There are also HTML tags around the values of
dept; how can I remove those tags?
My code:
response = requests.get("http://hortonworks.com/careers/open-positions/")
soup = BeautifulSoup(response.text, "html.parser")
jobs = []
div_main = soup.select("div#careers_list")
for div in div_main:
dept = div.find_all("h4", class_="department_title")
div_career = div. find_all("div", class_="career")
title = []
location = []
for dv in div_career:
job_title = dv.find("div", class_="title").get_text().strip()
title.append(job_title)
job_location = dv.find("div", class_="location").get_text().strip()
location.append(job_location)
job = {
"job_location": location,
"job_title": title,
"job_dept": dept
}
jobs.append(job)
pprint(jobs)
It should look like
{'job_dept': 'Consulting',
 'job_location': 'Chicago, IL',
 'job_title': 'Sr. Consultant - Central'}
with one value for each variable.
Answer: The structure of your html is sequential, not hierarchical, so you have to
iterate through your job list and update department title as you go:
import requests
from bs4 import BeautifulSoup, Tag
from pprint import pprint
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20130331 Firefox/21.0'}
response = requests.get("http://hortonworks.com/careers/open-positions/", headers=headers)
soup = BeautifulSoup(response.text, "html.parser")
jobs = []
div_main = soup.select("div#careers_list")
for div in div_main:
department_title = ""
for element in div:
if isinstance(element, Tag) and "class" in element.attrs:
if "department_title" in element.attrs["class"]:
department_title = element.get_text().strip()
elif "career" in element.attrs["class"]:
location = element.select("div.location")[0].get_text().strip()
title = element.select("div.title")[0].get_text().strip()
job = {
"job_location": location,
"job_title": title,
"job_dept": department_title
}
jobs.append(job)
pprint(jobs)
|
How to modify the XML file using Python?
Question: I have got an XML string and I parse it to get the attributes
from it. Now I want to change the XML file based on those attributes; for example, I want
to change the color of the stroke. Is there any way to do this? How do I make the change and
then save the file again?
import requests
from xml.dom import minidom
response = requests.get('http://localhost:8080/geoserver/rest/styles/pakistan.sld',
auth=('admin', 'geoserver'))
fo=open("/home/adeel/Desktop/untitled1/yes.xml", "wb")
fo.write(response.text)
fo.close()
xmldoc = minidom.parse('yes.xml')
itemlist = xmldoc.getElementsByTagName('CssParameter')
print "Len : ", len(itemlist)
#print "Attribute Name : ", \
itemlist[0].attributes['name'].value
print "Text : ", itemlist[0].firstChild.nodeValue
for s in itemlist :
print "Attribute Name : ", s.attributes['name'].value
print "Text : ", s.firstChild.nodeValue
Answer: You should probably read through the [SLD Cook
book](http://docs.geoserver.org/latest/en/user/styling/sld/cookbook/index.html)
to get hints on how to change things like the colour of the lines in your SLD.
Once you've changed it you need to make a [`PUT`
request](http://docs.geoserver.org/latest/en/user/rest/api/styles.html) to
place the file back on the server.
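A minimal sketch of both steps, assuming the parameter you want is the one named "stroke"; the new colour and the exact content type expected by your GeoServer version are assumptions to check against the REST docs:
import requests
from xml.dom import minidom

xmldoc = minidom.parse('yes.xml')
for param in xmldoc.getElementsByTagName('CssParameter'):
    if param.attributes['name'].value == 'stroke':
        param.firstChild.nodeValue = '#FF0000'  # new stroke colour

# PUT the modified SLD back onto the GeoServer instance
requests.put('http://localhost:8080/geoserver/rest/styles/pakistan.sld',
             data=xmldoc.toxml(),
             headers={'Content-type': 'application/vnd.ogc.sld+xml'},
             auth=('admin', 'geoserver'))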
|
Attempted relative import beyond toplevel package
Question: Here is my folder structure:
Mopy/ # no init.py !
bash/
__init__.py
bash.py # <--- Edit: yep there is such a module too
bass.py
bosh/
__init__.py # contains from .. import bass
bsa_files.py
...
test_bash\
__init__.py # code below
test_bosh\
__init__.py
test_bsa_files.py
In `test_bash\__init__.py` I have:
import sys
from os.path import dirname, abspath, join, sep
mopy = dirname(dirname(abspath(__file__)))
assert mopy.split(sep)[-1].lower() == 'mopy'
sys.path.append(mopy)
print 'Mopy folder appended to path: ', mopy
while in `test_bsa_files.py`:
import unittest
from unittest import TestCase
import bosh
class TestBSAHeader(TestCase):
def test_read_header(self):
bosh.bsa_files.Header.read_header()
if __name__ == '__main__':
unittest.main()
Now when I issue:
python.exe "C:\_\JetBrains\PyCharm 2016.2.2\helpers\pycharm\utrunner.py" C:\path\to\Mopy\test_bash\test_bosh\test_bar.py true
I get:
Traceback (most recent call last):
File "C:\_\JetBrains\PyCharm 2016.2.2\helpers\pycharm\utrunner.py", line 124, in <module>
modules = [loadSource(a[0])]
File "C:\_\JetBrains\PyCharm 2016.2.2\helpers\pycharm\utrunner.py", line 43, in loadSource
module = imp.load_source(moduleName, fileName)
File "C:\Dropbox\eclipse_workspaces\python\wrye-bash\Mopy\test_bash\test_bosh\test_bsa_files.py", line 4, in <module>
import bosh
File "C:\Dropbox\eclipse_workspaces\python\wrye-bash\Mopy\bash\bosh\__init__.py", line 50, in <module>
from .. import bass
ValueError: Attempted relative import beyond toplevel package
Since 'Mopy" is in the sys.path and `bosh\__init__.py` is correctly resolved
why it complains about relative import above the top level package ? _Which is
the top level package_ ?
Incidentally this is my attempt to add tests to a legacy project - had asked
in [Python test package layout](http://stackoverflow.com/q/34694437/281545)
but was closed as a duplicate of [Where do the Python unit tests
go?](http://stackoverflow.com/q/61151/281545). Any comments on my current test
package layout is much appreciated !
* * *
Well the [answer below](http://stackoverflow.com/a/40022629/281545) does not
work in my case:
The _module_ bash.py is the entry point to the application containing:
if __name__ == '__main__':
main()
When I use `import bash.bosh` or `from Bash import bosh` I get:
C:\_\Python27\python.exe "C:\_\JetBrains\PyCharm 2016.2.2\helpers\pycharm\utrunner.py" C:\Dropbox\eclipse_workspaces\python\wrye-bash\Mopy\test_bash\test_bosh\test_bsa_files.py true
Testing started at 3:45 PM ...
usage: utrunner.py [-h] [-o OBLIVIONPATH] [-p PERSONALPATH] [-u USERPATH]
[-l LOCALAPPDATAPATH] [-b] [-r] [-f FILENAME] [-q] [-i]
[-I] [-g GAMENAME] [-d] [-C] [-P] [--no-uac] [--uac]
[--bashmon] [-L LANGUAGE]
utrunner.py: error: unrecognized arguments: C:\Dropbox\eclipse_workspaces\python\wrye-bash\Mopy\test_bash\test_bosh\test_bsa_files.py true
Process finished with exit code 2
This usage message is from the main() in bash. No comments...
Answer: TLDR: Do
import bash.bosh
or
from bash import bosh
If you also just happen to have a construct like `bash.bash`, you have to make
sure your package takes precedence over its contents. Instead of appending,
add it to the front of the search order:
# test_bash\__init__.py
sys.path.insert(0, mopy)
* * *
When you do
import bosh
it will import the _module_ `bosh`. This means `Mopy/bash` is in your
`sys.path`, python finds the file `bosh` there, and imports it. The module is
now globally known by the name `bosh`. Whether `bosh` is itself a module or
package doesn't matter for this, it only changes whether `bosh.py` or
`bosh/__init__.py` is used.
Now, when `bosh` tries to do
from .. import bass
this is _not_ a file system operation ("one directory up, file bass") but a
module name operation. It means "one package level up, module bass". `bosh`
wasn't imported from its package, but on its own, though. So going up one
package is not possible - you end up at the package `''`, which is not valid.
Let's look at what happens when you do
import bash.bosh
instead. First, the _package_ `bash` is imported. Then, `bosh` is imported as
a module _of that package_ - it is globally known as `bash.bosh`, even if you
used `from bash import bosh`.
When `bosh` does
from .. import bass
that one works now: going one level up from `bash.bosh` gets you to `bash`.
From there, `bass` is imported as `bash.bass`.
See also [this related answer](http://stackoverflow.com/a/39932711/5349916).
|
Python SQLite removes/escapes part of regular expression pattern in user defined function
Question: I did a simple replace implementation for regular expressions in python
sqlite3:
import sqlite3, re
db = sqlite3.connect(':memory:')
c = db.cursor()
c.executescript("""
create table t2 (n REAL, v TEXT, t TEXT);
insert into t2 VALUES (6, "F", "ef");
insert into t2 VALUES (1, "A", "aa");
insert into t2 VALUES (2, "B", "be");
insert into t2 VALUES (4, "D", "de");
insert into t2 VALUES (5, "E", "ee");
insert into t2 VALUES (3, "C", "ze");
""");
db.commit()
def preg_replace(string, pattern, replace):
return re.sub(pattern, replace, string)
db.create_function('replace',1,preg_replace)
c = db.cursor()
# This does not do anything
c.execute("UPDATE t2 SET t=replace(t,?,?)",('e$','ee'))
db.commit()
c = db.cursor()
c.execute("select * from t2")
print c.fetchall()
# This makes 'be' into 'bee'
print preg_replace("be","e$","ee")
My problem is now that my UPDATE command does not replace 'e' at the end of
the table entries.
If I use just 'e' as a pattern it works fine ('be' ends up as 'bee').
If I manually change 'be' in the table to 'be$', it gets modified to 'bee'.
However, I can replace the 'e' at the end of the string if I use the
preg_replace function directly.
I do not understand why. Is there some string escaping going on when the input
for my user-defined function goes to SQLite? Thanks a lot in advance.
PS: Running Python 2.7.3
Answer: You must tell
[create_function()](https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.create_function)
how many parameters the function has.
You registered it with 1, so your `preg_replace` would only be used if the SQL
called `replace()` with one parameter; your three-argument call falls through to
SQLite's built-in `replace()`, which does plain (non-regex) substring replacement.
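A minimal sketch of the fix (registering under a distinct name such as `preg_replace` is an illustrative choice that also avoids clashing with the built-in):
# Register with the number of arguments the Python function actually takes
db.create_function('preg_replace', 3, preg_replace)

c = db.cursor()
c.execute("UPDATE t2 SET t=preg_replace(t,?,?)", ('e$', 'ee'))
db.commit()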
|
Python "list index out of range" other rule works
Question: I have tried to make a script to read out a csv file and determine some
information.
Now I receive an error:
Traceback (most recent call last):
File "/home/pi/vullijst/vullijst.py", line 26, in <module>
startdate = datetime.datetime.strptime (row[0],"%d-%m-%Y")
IndexError: list index out of range
Part of Script:
import csv
import datetime
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
#Variabelen
smtpserver = ''
smtplogin = ''
smtppassword = ''
sender = ''
csvfile = '/home/pi/vullijst/vullijst.csv'
#Inlezen CSV File
f = open(csvfile)
csv_f = csv.reader(f, delimiter=';')
today = datetime.datetime.now()
#Tijd bepalen en opstellen E-mail
for row in csv_f:
startdate = datetime.datetime.strptime (row[0],"%d-%m-%Y")
enddate = datetime.datetime.strptime (row[1],"%d-%m-%Y")
if today >= startdate and today <= enddate:
receiver = row[3]
The csv file has the following structure:
1-10-2016;12-10-2016;Test 1;test0@email.com;06-123456789
12-10-2016;13-10-2016;Test 2;test1@email.com;06-123456789
13-10-2016;14-10-2016;Test 3;test2@email.com;06-123456789
14-10-2016;15-10-2016;Test 4;test3@email.com;06-123456790
15-10-2016;16-10-2016;Test 5;test4@email.com;06-123456791
16-10-2016;17-10-2016;Test 6;test5@email.com;06-123456792
17-10-2016;18-10-2016;Test 7;test6@email.com;06-123456793
If I comment out this line then I don't receive the error on the line below.
Does somebody know what is wrong?
Answer: Your csv file appears to have an empty line at the end, after the last row
with real data. So your script reads all the real lines and processes them,
but it breaks when the last line is parsed into an empty list. The `row[0]`
you're trying to parse into a date isn't valid in that situation.
To avoid this issue, put a check at the top of your loop that skips the rest
of the loop body if `row` is empty.
for row in csv_f:
if not row: # skip empty rows
continue
startdate = datetime.datetime.strptime (row[0],"%d-%m-%Y")
# ...
|
Python: Connecting list values with array values
Question: I have created a tornado plot taking inspiration from
[here](http://stackoverflow.com/questions/32132773/a-tornado-chart-
and-p10-p90-in-python-matplotlib). It has input variables labelled on the
y-axis (a1,b1,c1...) and their respective correlation coefficients plotted
next to them. See pic below:
[![enter image description
here](https://i.stack.imgur.com/4NE3f.jpg)](https://i.stack.imgur.com/4NE3f.jpg)
I then sorted the correlation coefficients in a way that the highest absolute
value, without losing its sign, gets plotted first, then the next highest and
so on, using `sorted(values,key=abs, reverse=True)`. See the result below
[![enter image description
here](https://i.stack.imgur.com/xamYL.jpg)](https://i.stack.imgur.com/xamYL.jpg)
If you notice, in the second pic, even though the bars were sorted in
absolute descending order, the y-axis labels still stay the same.
Question: How do I make the y-axis label (variable) stay connected to its correlation
coefficient so that each label always corresponds to its own coefficient?
Below is my code:
import numpy as np
from matplotlib import pyplot as plt
#####Importing Data from csv file#####
dataset1 = np.genfromtxt('dataSet1.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
dataset2 = np.genfromtxt('dataSet2.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
dataset3 = np.genfromtxt('dataSet3.csv', dtype = float, delimiter = ',', skip_header = 1, names = ['a', 'b', 'c', 'x0'])
corr1 = np.corrcoef(dataset1['a'],dataset1['x0'])
corr2 = np.corrcoef(dataset1['b'],dataset1['x0'])
corr3 = np.corrcoef(dataset1['c'],dataset1['x0'])
corr4 = np.corrcoef(dataset2['a'],dataset2['x0'])
corr5 = np.corrcoef(dataset2['b'],dataset2['x0'])
corr6 = np.corrcoef(dataset2['c'],dataset2['x0'])
corr7 = np.corrcoef(dataset3['a'],dataset3['x0'])
corr8 = np.corrcoef(dataset3['b'],dataset3['x0'])
corr9 = np.corrcoef(dataset3['c'],dataset3['x0'])
np.set_printoptions(precision=4)
variables = ['a1','b1','c1','a2','b2','c2','a3','b3','c3']
base = 0
values = np.array([corr1[0,1],corr2[0,1],corr3[0,1],
corr4[0,1],corr5[0,1],corr6[0,1],
corr7[0,1],corr8[0,1],corr9[0,1]])
values = sorted(values,key=abs, reverse=True)
# The y position for each variable
ys = range(len(values))[::-1] # top to bottom
# Plot the bars, one by one
for y, value in zip(ys, values):
high_width = base + value
#print high_width
# Each bar is a "broken" horizontal bar chart
plt.broken_barh(
[(base, high_width)],
(y - 0.4, 0.8),
facecolors=['red', 'red'], # Try different colors if you like
edgecolors=['black', 'black'],
linewidth=1)
# Draw a vertical line down the middle
plt.axvline(base, color='black')
# Position the x-axis on the top/bottom, hide all the other spines (=axis lines)
axes = plt.gca() # (gca = get current axes)
axes.spines['left'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
axes.xaxis.set_ticks_position('bottom')
# Make the y-axis display the variables
plt.yticks(ys, variables)
plt.ylim(-2, len(variables))
plt.show()
Many thanks in advance
Answer: Use the built-in zip function - it returns a list of tuples, where the i-th tuple
contains the i-th element from each of the argument sequences or iterables.
But be aware that the returned list is truncated in length to the length of the
shortest argument sequence.
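A minimal sketch of how that applies here: sort the (value, label) pairs together, then unzip them so each label stays attached to its coefficient.
# Pair each coefficient with its label before sorting
pairs = sorted(zip(values, variables), key=lambda p: abs(p[0]), reverse=True)

# Unzip back into two aligned sequences
values, variables = zip(*pairs)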
|
subprocess.Popen process stdout returning empty?
Question: I have this python code
input()
print('spam')
saved as `ex1.py`
in interactive shell
>>>from subprocess import Popen ,PIPE
>>>a=Popen(['python.exe','ex1.py'],stdout=PIPE,stdin=PIPE)
>>> a.communicate()
(b'', None)
>>>
Why is it not printing `spam`?
Answer: Input expects a whole line, but your input is empty. So there is only an
exception written to `stderr` and nothing to `stdout`. At least provide a
newline as input:
>>> a = Popen(['python3', 'ex1.py'], stdout=PIPE, stdin=PIPE)
>>> a.communicate(b'\n')
(b'spam\n', None)
>>>
|
Sublime 3 Plugin Storing Quick Panel Return Val
Question: I'm trying to write a simple plugin that generates a quick panel based on some
list, waits for the user to select an item, and then performs an action based
on the value the user selected. Basically, I'd like to do the following:
class ExampleCommand(sublime_plugin.TextCommand):
def __init__(self):
self._return_val = None
self._list = ['a', 'b', 'c']
def callback(self, idx)
self._return_val = self._list[idx]
def run(self):
sublime.active_window().show_quick_panel(
options, self.callback)
if self._return_val == 'a'
// do something
However, show_quick_panel returns before anything is selected and therefore
self._return_val won't be assigned to the index selected until after the if
statement runs.
How can I solve this problem? With an event listener? I'm very new to Python
and Sublime plugin development.
Answer: Showing the quick panel does not block program execution. I
recommend creating and passing a continuation (callback):
import sublime
import sublime_plugin
class ExampleQuickpanelCommand(sublime_plugin.WindowCommand):
def run(self):
# create your items
items = ["a", "b", "c"]
def on_select(index):
if index == -1: # canceled
return
item = items[index]
# process the selected item...
sublime.error_message("You selected '{0}'".format(item))
self.window.show_quick_panel(items, on_select)
|
Generate X random integers between Y and Z?
Question: Is there a function in Python (I'm working with SageMath) to get 10 random
integers between 0 and 30, for instance, without repeating them?
Answer:
import random
random.sample(range(31), 10)
|
Random comma inserted at character 8192 in python "json" result called from node.js
Question: I'm a JS developer just learning python. This is my first time trying to use
node (v6.7.0) and python (v2.7.1) together. I'm using restify with python-
runner as a bridge to my python virtualenv. My python script uses a RAKE NLP
keyword-extraction package.
I can't figure out for the life of me why my return data in **server.js**
inserts a random comma at character 8192 and roughly multiples of. There's no
pattern except the location; Sometimes it's in the middle of the object key
string other times in the value, othertimes after the comma separating the
object pairs. This completely breaks the JSON.parse() on the return data.
Example outputs below. When I run the script from a python shell, this doesn't
happen.
I seriously can't figure out why this is happening, any experienced devs have
any ideas?
_Sample output in browser_
[..., {...ate': 1.0, 'intended recipient': 4.,0, 'correc...}, ...]
_Sample output in python shell_
[..., {...ate': 1.0, 'intended recipient': 4.0, 'correc...}, ...]
**DISREGARD ANY DISCREPANCIES REGARDING OBJECT CONVERSION AND HANDLING IN THE
FILES BELOW. THE CODE HAS BEEN SIMPLIFIED TO SHOWCASE THE ISSUE**
**server.js**
var restify = require('restify');
var py = require('python-runner');
var server = restify.createServer({...});
server.get('/keyword-extraction', function( req, res, next ) {
py.execScript(__dirname + '/keyword-extraction.py', {
bin: '.py/bin/python'
})
.then( function( data ) {
fData = JSON.parse(data); <---- ERROR
res.json(fData);
})
.catch( function( err ) {...});
return next();
});
server.listen(8001, 'localhost', function() {...});
**keyword-extraction.py**
import csv
import json
import RAKE
f = open( 'emails.csv', 'rb' )
f.readline() # skip line containing col names
outputData = []
try:
reader = csv.reader(f)
for row in reader:
email = {}
emailBody = row[7]
Rake = RAKE.Rake('SmartStoplist.txt')
rakeOutput = Rake.run(emailBody)
for tuple in rakeOutput:
email[tuple[0]] = tuple[1]
outputData.append(email)
finally:
file.close()
print( json.dumps(outputData))
Answer: This looks suspiciously like a bug related to size of some buffer, since 8192
is a power of two.
The main thing here is to isolate exactly where the failure is occurring. If I
were debugging this, I would
1. Take a closer look at the output from `json.dumps`, by printing several characters on either side of position 8191, ideally with the integer character code (Unicode, ASCII, or whatever); see the sketch after this list.
2. If that looks OK, I would try capturing the output from the python script as a file and read that directly in the node server (i.e. don't run a python script).
3. If that works, then create a python script that takes that file and outputs it without manipulation and have your node server execute that python script instead of the one it is using now.
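A minimal sketch of the first step, added at the end of the Python script (the exact position and window size are just illustrative):
out = json.dumps(outputData)

# Inspect the characters around position 8191 before the data leaves Python
for i in range(8187, 8197):
    print('%d %r %d' % (i, out[i], ord(out[i])))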
That should help you figure out where the problem is occurring. From comments,
I suspect that this is essentially a bug that you cannot control, unless you
can increase the python buffer size enough to guarantee your data will never
blow the buffer. 8K is pretty small, so that might be a realistic solution.
If that is inadequate, then you might consider processing the data on the
node server, to remove every character at `n * 8192`, if you can consistently
rely on that. Good luck.
|
Selecting all elements that meet a criteria using selenium (python)
Question: Using selenium, is there a way to have the script pick out elements that meet
certain criteria?
What I'm exactly trying to do is have selenium select all Twitch channels that
have more than X viewers. If you inspect element, you find this:
<p class="info"
562
viewers on
<a class="js-profile-link" href="/hey_jase/profile"
data-tt_content="live_channel" data-tt_content_index="1"
data-tt_medium="twitch_directory" data-ember-action="1471">
Hey_Jase
</a>
</p>
Answer: First of all, you can find all twitch channel links. Then, filter them based
on the view count.
Something along these lines:
import re
from selenium import webdriver
THRESHOLD = 100
driver = webdriver.Firefox()
driver.get("url")
pattern = re.compile(r"(\d+)\s+viewers on")
for link in driver.find_elements_by_css_selector("p.info a[data-tt_content=live_channel]"):
text = link.find_element_by_xpath("..").text # get to the p parent element
match = pattern.search(text) # extract viewers count
if match:
viewers_count = int(match.group(1))
if viewers_count >= THRESHOLD:
print(link.text, viewers_count)
|
Import os doesn't work in Linux
Question: I just want to install the [suds library](https://pypi.python.org/pypi/suds),
but it wasn't proceeding because it could not find the `os` library,
so I tried to open my Python interpreter and it gives me an error. When I import the
`os` library there are some errors during the opening as well. Please see
the snapshot of my error when importing the library mentioned.
[![enter image description
here](https://i.stack.imgur.com/Q4S7e.png)](https://i.stack.imgur.com/Q4S7e.png)
Is there any solution for this? Or do I need to reinstall python?
Answer: You need to [install a more recent version of Python - version 2.7.12 or
later](https://www.python.org/downloads/). [All versions of Python 2.6 have
been end-of-lifed and any use of Python 2.6 is actively
discouraged](http://www.snarky.ca/stop-using-python-2-6).
|
Django Allauth and urls
Question: I have a dev environment for a test of Django. I am running Python 3.5.2 out
of a "local" `pyenv` install. I have Django 1.10.2. I discovered the `allauth`
registration plugin yesterday and have been playing with it but have hit a
snag.
My site is "dev.my.domain.com". The intent is that there will not be any
"public" information on the production version of this site. The production
version will be called something like: "members.my.domain.com". So, I wonder
if it is possible for the "allauth" plugin to have all non-/adomn inbound
requests check for auth?
So, requests to:
* dev.my.domain.com
* dev.my.domain.com/foo
* dev.my.domain.com/foo/../bar/...
should all be checked for auth. If the user is not authenticated, then I assume "allauth" will
redirect to a login/signup page.
I have tried setting the `Members/urls.py` file as:
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^$', include('allauth.urls')),
url(r'^admin/', admin.site.urls),
]
but that bombs with a Page Not Found error and the `DEBUG` message:
Using the URLconf defined in Members.urls, Django tried these URL patterns, in this order:
^$ ^ ^signup/$ [name='account_signup']
^$ ^ ^login/$ [name='account_login']
^$ ^ ^logout/$ [name='account_logout']
^$ ^ ^password/change/$ [name='account_change_password']
^$ ^ ^password/set/$ [name='account_set_password']
^$ ^ ^inactive/$ [name='account_inactive']
^$ ^ ^email/$ [name='account_email']
^$ ^ ^confirm-email/$ [name='account_email_verification_sent']
^$ ^ ^confirm-email/(?P<key>[-:\w]+)/$ [name='account_confirm_email']
^$ ^ ^password/reset/$ [name='account_reset_password']
^$ ^ ^password/reset/done/$ [name='account_reset_password_done']
^$ ^ ^password/reset/key/(?P<uidb36>[0-9A-Za-z]+)-(?P<key>.+)/$ [name='account_reset_password_from_key']
^$ ^ ^password/reset/key/done/$ [name='account_reset_password_from_key_done']
^$ ^social/
^$ ^google/
^$ ^facebook/
^$ ^facebook/login/token/$ [name='facebook_login_by_token']
^admin/
The current URL, , didn't match any of these.
I bowed to my ignorance and went back to the allauth docs and used their
default `urls` setting:
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^accounts/', include('allauth.urls')),
url(r'^admin/', admin.site.urls),
]
but that also bombs with a Page Not Found and a different message:
Using the URLconf defined in Members.urls, Django tried these URL patterns, in this order:
^accounts/
^admin/
The current URL, , didn't match any of these.
I think the rest of the "allauth" install was done correctly but I am missing
something.
Ideas?
Answer: In your _views.py_ file, you just need to do a little "filter" before serving
the page, to check whether the user is authenticated.
An example of this would be:
def myview(request):
    if request.user.is_authenticated():
        # do something if the user is authenticated, like showing a page
        pass
    else:
        # do something else
        pass
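A minimal sketch filling in the two branches (the template name is an assumption; 'account_login' is the allauth login URL name visible in the debug output above):
from django.shortcuts import render, redirect

def myview(request):
    if request.user.is_authenticated():
        # Authenticated: show the members-only page
        return render(request, 'members/home.html')
    else:
        # Not authenticated: send the visitor to the allauth login page
        return redirect('account_login')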
Regarding the urls structure - just try to add /accounts/ to the url and the
404 page will show you all the endpoints if you are in debug mode (DEBUG =
True). You can also find all the endpoint urls in the documentation.
Hope I understood your problem correctly :)
|
Requiring tensorflow with Python 2.7.11 occurs ImportError
Question: I tried `pip install tensorflow` on OS X El Capitan and it succeeded. However,
when I try to import tensorflow, an ImportError occurs. Please tell me if you
know the cause.
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): no suitable image found. Did find:
/usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x03
>>>
* * *
Answer: I had the same problem. The "first eight bytes: 0x7F 0x45 0x4C 0x46" in your
error is an ELF header, i.e. a Linux build of `_pywrap_tensorflow.so` was installed
on OS X, so you need the Mac binary instead. Try to follow the official
instructions to install tensorflow:
<https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#pip-
installation>
# Mac OS X
$ sudo easy_install pip
$ sudo easy_install --upgrade six
Then, select the correct binary to install:
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp27-none-linux_x86_64.whl
# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Mac OS X, GPU enabled, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py2-none-any.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp34-cp34m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, CPU only, Python 3.5
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc0-cp35-cp35m-linux_x86_64.whl
# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc0-py3-none-any.whl
Install TensorFlow:
# Python 2
$ sudo pip install --upgrade $TF_BINARY_URL
# Python 3
$ sudo pip3 install --upgrade $TF_BINARY_URL
|
How to make replacement in python's dict?
Question: The goal I want to achieve is to replace all items of the form
`#item_name#` with the form `(item_value)` in the dict. I use two `dict`s named
`test1` and `test2` to test my function. Here is the code:
test1={'integer_set': '{#integer_list#?}', 'integer_list': '#integer_range#(?,#integer_range#)*', 'integer_range': '#integer#(..#integer#)?', 'integer': '[+-]?\\d+'}
test2={'b': '#a#', 'f': '#e#', 'c': '#b#', 'e': '#d#', 'd': '#c#', 'g': '#f#', 'a': 'correct'}
def change(pat_dict:{str:str}):
print('Expanding: ',pat_dict)
num=0
while num<len(pat_dict):
inv_pat_dict = {v: k for k, v in pat_dict.items()}
for value in pat_dict.values():
for key in pat_dict.keys():
if key in value:
repl='#'+key+'#'
repl2='('+pat_dict[key]+')'
value0=value.replace(repl,repl2)
pat_dict[inv_pat_dict[value]]=value0
num+=1
print('Result: ',pat_dict)
change(test1)
change(test2)
Sometimes I get a correct result like:
Expanding: {'integer': '[+-]?\\d+', 'integer_list': '#integer_range#(?,#integer_range#)*', 'integer_set': '{#integer_list#?}', 'integer_range': '#integer#(..#integer#)?'}
Result: {'integer': '[+-]?\\d+', 'integer_list': '(([+-]?\\d+)(..([+-]?\\d+))?)(?,(([+-]?\\d+)(..([+-]?\\d+))?))*', 'integer_set': '{((([+-]?\\d+)(..([+-]?\\d+))?)(?,(([+-]?\\d+)(..([+-]?\\d+))?))*)?}', 'integer_range': '([+-]?\\d+)(..([+-]?\\d+))?'}
Expanding: {'c': '#b#', 'f': '#e#', 'e': '#d#', 'b': '#a#', 'g': '#f#', 'd': '#c#', 'a': 'correct'}
Result: {'c': '((correct))', 'f': '(((((correct)))))', 'e': '((((correct))))', 'b': '(correct)', 'g': '((((((correct))))))', 'd': '(((correct)))', 'a': 'correct'}
But most of the time I get wrong results like this:
Expanding: {'integer_range': '#integer#(..#integer#)?', 'integer': '[+-]?\\d+', 'integer_set': '{#integer_list#?}', 'integer_list': '#integer_range#(?,#integer_range#)*'}
Result: {'integer_range': '([+-]?\\d+)(..([+-]?\\d+))?', 'integer': '[+-]?\\d+', 'integer_set': '{(#integer_range#(?,#integer_range#)*)?}', 'integer_list': '#integer_range#(?,#integer_range#)*'}
Expanding: {'f': '#e#', 'a': 'correct', 'd': '#c#', 'g': '#f#', 'b': '#a#', 'c': '#b#', 'e': '#d#'}
Result: {'f': '(((((correct)))))', 'a': 'correct', 'd': '(((correct)))', 'g': '((((((correct))))))', 'b': '(correct)', 'c': '((correct))', 'e': '((((correct))))'}
How could I update my code to achieve my goal?
Answer: Your problem is caused by the fact that python dictionaries are unordered. Try
using a
[OrderedDict](https://docs.python.org/2/library/collections.html#collections.OrderedDict)
instead of `dict` and you should be fine. The OrderedDict works just like a
normal `dict` but with ordering retained, at a small performance cost.
Note that while you could create an OrderedDict from a dict literal (like I
did here at first), that dict would be unordered, so the ordering might not be
guaranteed. Using a list of `(key, value)` pairs preserves the ordering in all
cases.
from collections import OrderedDict
test1=OrderedDict([('integer_set', '{#integer_list#?}'), ('integer_list', '#integer_range#(?,#integer_range#)*'), ('integer_range', '#integer#(..#integer#)?'), ('integer', '[+-]?\\d+')])
test2=OrderedDict([('b', '#a#'), ('f', '#e#'), ('c', '#b#'), ('e', '#d#'), ('d', '#c#'), ('g', '#f#'), ('a', 'correct')])
def change(pat_dict:{str:str}):
print('Expanding: ',pat_dict)
num=0
while num<len(pat_dict):
inv_pat_dict = {v: k for k, v in pat_dict.items()}
for value in pat_dict.values():
for key in pat_dict.keys():
if key in value:
repl='#'+key+'#'
repl2='('+pat_dict[key]+')'
value0=value.replace(repl,repl2)
pat_dict[inv_pat_dict[value]]=value0
num+=1
print('Result: ',pat_dict)
change(test1)
change(test2)
|
broken dropdown menu code
Question: I was following <https://www.youtube.com/watch?v=PSm-tq5M-Dc>, a tutorial for
making a drop-down menu in a GUI. In the video the code works, but I can't get
mine to work. I think it may have something to do with different Python versions.
from tkinter import *
def doNothing():
print ("ok ok i won't...")
root = Tk()
menu = Menu(root)
roo.config(menu=menu)
subMenu = Menu(menu)
menu.add_cascade(label="File", menu=subMenu)
subMenu.add_command(label="New Project..."), comand=doNothing
subMenu.add_command(label="New"), comand=doNothing
subMenu.add_separator()
subMenu.add_command(label="Exit", command=doNothing)
editMenu = Menu(menu)
menu.add_cascade(label="Edit", menu=editMenu)
editMenu.add_command(label="Redo", comand=doNothing)
root.mainloop()
This is the error
C:\Users\TheSheep\Desktop\pygui>python dropdown.py
File "dropdown.py", line 14
subMenu.add_command(label="New Project..."), comand=doNothing
^
SyntaxError: can't assign to function call
Answer: You have few "typos"
* it has to be `root` instead `roo` in `roo.config()`
* `)` has to be at the end of line in both
subMenu.add_command(label="New Project..."), comand=doNothing #
subMenu.add_command(label="New"), comand=doNothing
* it has to be `command=` instead of `comand=` (see: `mm`)
.
from tkinter import *
def doNothing():
print ("ok ok i won't...")
root = Tk()
menu = Menu(root)
root.config(menu=menu)
subMenu = Menu(menu)
menu.add_cascade(label="File", menu=subMenu)
subMenu.add_command(label="New Project...", command=doNothing)
subMenu.add_command(label="New", command=doNothing)
subMenu.add_separator()
subMenu.add_command(label="Exit", command=doNothing)
editMenu = Menu(menu)
menu.add_cascade(label="Edit", menu=editMenu)
editMenu.add_command(label="Redo", command=doNothing)
root.mainloop()
|
Spark - PySpark sql error
Question: I have a simple PySpark script but I can't run it. I am trying to run it on an Ubuntu
system and I use the PyCharm IDE. I would like to connect to an Oracle XE database
and print my test table.
Here is my Spark Python code:
from pyspark import SparkContext
from pyspark.sql import SQLContext
sc = SparkContext()
sqlContext = SQLContext(sc)
demoDf = sqlContext.read.format("jdbc").options(
url="jdbc:oracle:thin:@10.10.10.10:1521:XE",
driver="oracle.jdbc.driver.OracleDriver",
table="tst_table",
user="xxx",
password="xxx").load()
demoDf.show()
And this is my trace:
Traceback (most recent call last):
File "/home/kebodev/PycharmProjects/spark_tst/cucc_spark.py", line 13, in <module>
password="xxx").load()
File "/home/kebodev/spark-2.0.1/python/pyspark/sql/readwriter.py", line 153, in load
return self._df(self._jreader.load())
File "/home/kebodev/spark-2.0.1/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/home/kebodev/spark-2.0.1/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/home/kebodev/spark-2.0.1/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o27.load.
: java.lang.RuntimeException: Option 'dbtable' not specified
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$2.apply(JDBCOptions.scala:30)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$2.apply(JDBCOptions.scala:30)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at org.apache.spark.sql.execution.datasources.CaseInsensitiveMap.getOrElse(ddl.scala:117)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:30)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:33)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Process finished with exit code 1
Can Anybody help me?
Answer: Change to `dbtable` from `table` like this,
demoDf = sqlContext.read.format("jdbc").options(
url="jdbc:oracle:thin:@10.10.10.10:1521:XE",
driver="oracle.jdbc.driver.OracleDriver",
dbtable="tst_table",
user="xxx",
password="xxx").load()
|
How to create angular2 jwt token in angular2
Question: I need to create a JWT in Angular 2. How do I do that? I am using this for an
API service.
jwt_token = ?
I can create one in Python with no issue.
Answer: **Normally tokens are generated on the server side!! If you are creating them
on the client side, it's easy to see your password!!**
Anyway, to answer your question:
Best way is to use an existing library, like this one:
<https://www.npmjs.com/package/jsonwebtoken>
`npm install jsonwebtoken --save`
install its typings (if you use TypeScript)
`npm install @types/jsonwebtoken --save`
and use it like this:
import * as jwt from 'jsonwebtoken';
let token = jwt.sign({ anyObject: 'here..' }, 'your super secret password!!');
|
XML Processing not working
Question: I am trying to extract data from a sensor (it communicates with "XML type"
strings) and convert it to CSV. With my current code I already write XML files,
but the data comes in single rows (each one running from root to /root).
I don't know if this is the reason, but I get an **elementtree.parse error: junk after
document element**. In everything I've read so far, the problem was in the XML
construction (more than one root, no root, etc.), so I'm a bit at a loss with my
case.
Logged in xml file:
<li820><data><celltemp>5.1120729e1</celltemp><cellpres>9.7705745e1</cellpres><co2>7.7808494e2</co2><co2abs>5.0983281e-2</co2abs><ivolt>1.1380004e1</ivolt><raw>2726238,1977386</raw></data></li820>
<li820><data><celltemp>5.1120729e1</celltemp><cellpres>9.7684698e1</cellpres><co2>7.7823929e2</co2><co2abs>5.0991268e-2</co2abs><ivolt>1.1380004e1</ivolt><raw>2725850,1976922</raw></data></li820>
<li820><data><celltemp>5.1120729e1</celltemp><cellpres>9.7705745e1</cellpres><co2>7.7797288e2</co2><co2abs>5.0977463e-2</co2abs><ivolt>1.1373291e1</ivolt><raw>2726166,1977001</raw></data></li820>
Content of one of the previous row (in tree view) :
<li820>
<data>
<celltemp>1.9523970e1</celltemp>
<cellpres>9.8993663e1</cellpres>
<co2>3.5942180e4</co2>
<co2abs>4.0364418e-1</co2abs>
<ivolt>1.1802978e1</ivolt>
<raw>2789123,1884335</raw>
</data>
</li820>
Error :
Traceback (most recent call last):
File "licor_read.py", line 96, in <module>
tree = et.parse(file_xml) # Set XML Parser
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1182, in parse
tree.parse(source, parser)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 656, in parse
parser.feed(data)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
xml.etree.ElementTree.ParseError: junk after document element: line 2, column 0
My code :
import os, sys, subprocess
import time, datetime
import serial
import string
import glob
import csv
import xml.etree.ElementTree as et
from xml.etree.ElementTree import XMLParser, XML, fromstring, tostring
from os import path
from bs4 import BeautifulSoup as bs
#-------------------------------------------------------------
#------------------ Open configurations ----------------------
#-------------------------------------------------------------
############
# Settings #
############
DEBUG = True
LOG = True
FREQ = 1
PORT = '/dev/ttyUSB0'
BAUD = 9600
PARITY = 'N'
STOPBIT = 1
BYTE_SZ = 8
TIMEOUT = 5.0
log_dir = 'logs/'
out_dir = 'export/'
fname_xml = 'licor820-data-{}.xml'.format(datetime.datetime.now()) # DO NOT touch the {} brackets
fname_csv = 'licor820-data-{}.csv'.format(datetime.datetime.now()) #
isLooping = 20 # Nr of data extractions
isHeader = True # Do not touch if data headers are required
isBegin = False
#-------------------------------------------------------------
#----- Better know what you are doing from this point --------
#-------------------------------------------------------------
##################
# Initialisation #
##################
file_xml = os.path.join(log_dir, fname_xml) # Define path and file name
file_csv = os.path.join(out_dir, fname_csv) #
fp_xml = open(file_xml, 'w') # Open writing streams
fp_csv = open(file_csv, 'w') #
try:
buff = serial.Serial(PORT, BAUD, BYTE_SZ, PARITY, STOPBIT, TIMEOUT) # Open Serial connection
except Exception as e:
if DEBUG:
print ("ERROR: {}".format(e))
sys.exit("Could not connect to the Licor")
csv_writer = csv.writer(fp_csv) # Define CSV writer
instruct_head = [] # ''
################
# Main program #
################
while isLooping : # Define nr of refreshed data extracted
#os.system('clear')
print('RAW/XML in progress... ' + str(isLooping)) # Debug this loop
if(isBegin is False) : # Verify presence of the <licor> tag
while(buff.readline()[0] is not '<' and buff.readline()[1] is not 'l') :
raw_output = buff.readline() # Jump the lines readed until <licor>
isBegin = True
raw_output = buff.readline()
xml_output = raw_output
print(xml_output)
fp_xml.write(xml_output) # Write from serial port to xml
isLooping -= 1
fp_xml.close()
tree = et.parse(file_xml) # Set XML Parser
root = tree.getroot() # ''
for instruct_row in root.findall('li820'): # XML to CSV buffer
instruct = []
if isHeader is True: # Buffering header
celltemp = instruct_row.find('celltemp').tag
instruct_head.append(celltemp)
cellpres = instruct_row.find('cellpres').tag
instruct_head.append(cellpres)
co2 = instruct_row.find('co2').tag
instruct_head.append(co2)
co2abs = instruct_row.find('co2abs').tag
instruct_head.append(co2abs)
ivolt = instruct_row.find('ivolt').tag
instruct_head.append(ivolt)
raw = instruct_row.find('raw').tag
instruct_head.append(raw)
csv_writer.writerow(instruct_head) # Write header
isHeader = False
celltemp = instruct_row.find('celltemp').text # Buffering data
instruct.append(celltemp)
cellpres = instruct_row.find('cellpres').text
instruct.append(cellpres)
co2 = instruct_row.find('co2').text
instruct.append(co2)
co2abs = instruct_row.find('co2abs').text
instruct.append(co2abs)
ivolt = instruct_row.find('ivolt').text
instruct.append(ivolt)
raw = instruct_row.find('raw').text
instruct.append(raw)
csv_writer.writerow(instruct) # Write data'''
csv_writer.close()
fp_csv.close()
os.system('clear')
print('Job done. \nSaved at : ./' + file_xml + '\nAnd at ./' + file_csv + '\n')
Answer: You should open your input file with 'r' (read) instead of 'w' (write), or you will
empty the file when you run your code.
fp_xml = open(file_xml, 'r')
Besides, I have a better way to get all the elements. You don't need to know
the names of all the tags ahead of time.
header = []
isHeader = True
for instruct_row in root.getchildren():        # XML to CSV buffer
    instruct = []
    for item in instruct_row.getchildren():
        if isHeader is True:
            header.append(item.tag)
        instruct.append(item.text)
    if isHeader is True:
        csv_writer.writerow(header)            # write the header once
        isHeader = False
    csv_writer.writerow(instruct)              # write the data row
fp_csv.close()
My input XML is below. Note that it has a single `<li820>` root element wrapping all the `<data>` entries; a file with one complete `<li820>...</li820>` document per line, like your log, is not well-formed XML, and that is what causes the "junk after document element" error:
<li820>
<data><celltemp>5.1120729e1</celltemp><cellpres>9.7705745e1</cellpres><co2>7.7808494e2</co2><co2abs>5.0983281e-2</co2abs><ivolt>1.1380004e1</ivolt><raw>2726238,1977386</raw></data>
<data><celltemp>5.1120729e1</celltemp><cellpres>9.7705745e1</cellpres><co2>7.7808494e2</co2><co2abs>5.0983281e-2</co2abs><ivolt>1.1380004e1</ivolt><raw>2726238,1977386</raw></data>
</li820>
Finally you can see the data in your csv file.
|
How to stop or interrupt a function in python 3 with Tkinter
Question: I started programming in Python just a few months ago and I really love
it. It's so intuitive and fun to start with.
Starting point: I have a Linux machine which runs Python 3.2.3. I
have three buttons on the GUI to start a function with, and one to stop that
process or those processes (that's the idea).
def printName1(event):
    while button5 != True:
        print('Button 1 is pressed')
        time.sleep(3)  # just for simulation purposes to get reaction time for stopping
    return
    print('STOP button is pressed')

def StopButton():
    button5 = True
I have tried while loops, and try with except, but the main problem is that the GUI
(tkinter) is not responding while the process is running. It
stores the input and runs it after the first function (printName1) is
finished. I also looked here at Stack Overflow, but the solutions didn't work
properly for me and had the same issues with interrupting. I apologize
for this (maybe) basic question, but I am very new to Python and have spent a few
days searching and trying.
Is there a way to do this? Can the solution maybe be made with threading? But
how? Any advice/help is really appreciated.
Thanks a lot!
Answer: Use `threading.Event`
import threading
import time
class ButtonHandler(threading.Thread):
def __init__(self, event):
threading.Thread.__init__(self)
self.event = event
def run (self):
while not self.event.is_set():
print("Button 1 is pressed!")
time.sleep(3)
print("Button stop")
myEvent = threading.Event()
#The start button
Button(root,text="start",command=lambda: ButtonHandler(myEvent).start()).pack()
#Say this is the exit button
Button(root, text="stop",command=lambda: myEvent.set()).pack()
|
How do I call an Excel VBA script using xlwings v0.10
Question: I used to use the info in this question to run a VBA script that does some
basic formatting after I run my python code.
[How do I call an Excel macro from Python using
xlwings?](http://stackoverflow.com/questions/30308455/how-do-i-call-an-excel-
macro-from-python-using-xlwings)
Specifically I used the first update.
from xlwings import Workbook, Application
wb = Workbook(...)
Application(wb).xl_app.Run("your_macro")
Now I'm using v0.10.0 of xlwings and this code no longer works.
When I try the suggested new code for v0.10.0:
wb.app.macro('your_macro')
Python returns an object:
<xlwings.main.Macro at 0x92d3198>
and my macro isn't run in Excel.
The documentation
(<http://docs.xlwings.org/en/stable/api.html#xlwings.App.macro>) has an
example that is a custom function but I have a script that does several things
in Excel (formats the data I output from python, adds some formulas in the
sheet, etc.) that I want to run.
I'm sure I'm missing something basic here.
**Update** Based on Felix Zumstein's suggestion, I tried:
import xlwings as xw
xlfile = 'model.xlsm'
wb = xw.Book(xlfile)
wb.macro('your_macro')
This returns the same thing as wb.app.macro('your_macro'):
<xlwings.main.Macro at 0x92d05888>
and no VBA script run inside Excel.
Answer: You need to use `Book.macro`. As your link to the docs says, `App.macro` is
only for macros that are not part of a workbook (i.e. addins). So use:
wb.macro('your_macro')
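One note to add (my own, not from the linked docs): `Book.macro` returns a callable `Macro` object, which is the `<xlwings.main.Macro at ...>` you are seeing, so it still has to be called to actually run the VBA code. A minimal sketch:
    import xlwings as xw

    wb = xw.Book('model.xlsm')
    run_macro = wb.macro('your_macro')  # returns an xlwings Macro object
    run_macro()                         # calling it executes the VBA macro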
|
How do I write a doctest in python 3.5 for a function using random to replace characters in a string?
Question:
def intoxication(text):
"""This function causes each character to have a 1/5 chance of being replaced by a random letter from the string of letters
INSERT DOCTEST HERE
"""
import random
string_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
text = "".join(i if random.randint(0,4) else random.choice(string_letters) for i in text)
return text
Answer: You need to mock the random functions to give something that is predetermined.
def intoxication(text):
"""This function causes each character to have a 1/5 chance of being replaced by a random letter from the string of letters
If there is 0% chance of a random character chosen, result will be the same as input
>>> import random
>>> myrand = lambda x, y: 1
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'Hello World'
If there is 100% chance of a random character chosen, result will be the same as 'TTTTTTTTTTT'
>>> import random
>>> myrand = lambda x, y: 0
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'TTTTTTTTTTT'
If every second character is replaced
>>> import random
>>> thisone = 0
>>> def myrand(x, y): global thisone; thisone+=1; return thisone % 2
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'HTlToTWTrTd'
If every third character is replaced
>>> import random
>>> thisone = 0
>>> def myrand(x, y): global thisone; thisone+=1; return thisone % 3
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'HeTloTWoTld'
"""
import random
string_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
text = "".join(i if random.randint(0,4) else random.choice(string_letters) for i in text)
return text
if __name__ == "__main__":
import doctest
doctest.testmod()
|
geckodriver executable needs to be in path
Question: I have read previous questions asked on this topic and tried to follow the
suggestions but I continue to get errors. On terminal, I ran
export PATH=$PATH:/Users/Conger/Documents/geckodriver-0.8.0-OSX
I also tried
export PATH=$PATH:/Users/Conger/Documents/geckodriver
When I run the following Python code
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
firefox_capabilities = DesiredCapabilities.FIREFOX
firefox_capabilities['marionette'] = True
firefox_capabilities['binary'] = '/Users/Conger/Documents/Firefox.app'
driver = webdriver.Firefox(capabilities=firefox_capabilities)
I still get the following error
Python - testwebscrap.py:8
Traceback (most recent call last):
File "/Users/Conger/Documents/Python/Crash_Course/testwebscrap.py", line 11, in <module>
driver = webdriver.Firefox(capabilities=firefox_capabilities)
File "/Users/Conger/miniconda2/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 135, in __init__
self.service.start()
File "/Users/Conger/miniconda2/lib/python2.7/site-packages/selenium/webdriver/common/service.py", line 71, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.firefox.service.Service object at 0x1006df6d0>> ignored
[Finished in 0.194s]
Answer: Geckodriver is the driver engine for Firefox: `webdriver.Firefox()` opens the
Firefox browser and calls the geckodriver engine, so the geckodriver binary
needs executable permission and must be somewhere Selenium can find it.
Download the latest geckodriver, uncompress the tar package, make it
executable, and put `geckodriver` in `/usr/bin/` (or another directory on your
PATH). That's my answer, and I have tested it.
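A sketch of those steps in a terminal (the archive name depends on the exact release and platform you downloaded, and any directory already on your PATH works instead of `/usr/bin/`):
    # extract the downloaded release archive
    tar -xzf geckodriver-<version>-<platform>.tar.gz
    # give it executable permission and move it onto the PATH
    chmod +x geckodriver
    sudo mv geckodriver /usr/bin/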
|
'helloworld.pyx' doesn't match any files
Question: I am a beginner in python and I have just got familiar with cython as well. I
am using Anaconda on Windows 64-bit. I am trying to run the "helloworld"
example as follows:
1- I build a helloworld.pyx file containing:
print("Hello World")
2- I build a setup.py file containing:
from distutils.core import setup
from Cython.Build import cythonize
setup(name='Hello world app',ext_modules=cythonize("helloworld.pyx"),)
But I get the following error:
'helloworld.pyx' doesn't match any files
Could you please tell me what I should do now? Where should I save these two
files?
Answer: From here: <https://github.com/cython/cython/wiki/enhancements-
distutils_preprocessing>
from distutils.core import setup
from Cython.Build import cythonize
setup(
name = 'MyProject',
ext_modules = cythonize(["*.pyx"]),
)
Looks like cythonize accepts a list of strings, but you're providing a string.
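To answer the "where should I save these two files" part: `helloworld.pyx` and `setup.py` should sit in the same directory, and you build the extension from that directory, for example with:
    python setup.py build_ext --inplace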
|
counter not increasing in python
Question: I have looked at other while loops and am stuck on why this one is not
working.
points = int(input('How many points: '))
while True:
u_cnt, c_cnt = 0, 0
if u_cnt < points or c_cnt < points:
if u < c:
c_cnt += 1
elif u > c:
u_cnt += 1
Is my problems having the `c_cnt += 1` inside of two if statements?
I have put it outside of the `while` loop, yet that doesn't increase the count
either. I have also put the `u_cnt = 0` and `c_cnt = 0` on separate lines.
It's not running into an infinite loop, as it shouldn't; it's just not
incrementing.
Thank you
edit:
import random
u_name = input('What is your name? ')
points = int(input('How many points: '))
u_cnt, c_cnt = 0, 0
while True:
a = ['rock', 'paper', 'scissors']
comp = random.choice(a)
print('Pick from:', str(a).strip('[]'))
user = input()
u = a.index(user)
c = a.index(comp)
line_1 = '{} : {}\t{} : {}'.format(u_name, user, 'Computer', comp)
line_2 = '{} : {}\t{} : {}\n'.format(u_name, u_cnt, 'Computer', c_cnt)
if c_cnt < points or u_cnt < points:
if u > c or u == 0 and c == 2:
u_cnt += 1
print(line_1, '\t', u_name, 'wins')
print(line_2)
elif u < c or u == 2 and c == 0:
c_cnt += 1
print(line_1, '\t', 'Computer wins')
print(line_2)
elif u == c:
print(line_1, '\t', 'Tie')
print(line_2)
else:
break
So when you run this the first time, you get output like:
What is your name? chad
How many points: 3
Pick from: 'rock', 'paper', 'scissors'
rock
chad : rock Computer : scissors chad wins
chad : 0 Computer : 0
Pick from: 'rock', 'paper', 'scissors'
How do I get the count to be 1, 0 on the first iteration through? That might be a
better question.
Answer: The counters have to be initialized before the loop; otherwise they are reset to 0 on every iteration:
    points = int(input('How many points: '))
    u_cnt, c_cnt = 0, 0 # assign the values before the loop
while True:
if u_cnt < points or c_cnt < points:
if u < c:
c_cnt += 1
elif u > c:
u_cnt += 1
|
Cannot import urllib in Python
Question: I would like to import `urllib` to use the function '`request`'. However, I
encountered an error when trying to download via Pycharm:
> "Could not find a version that satisfies the requirement urllib (from
> versions: ) No matching distribution found for urllib"
I tried `pip install urllib` but still had the same error. I am using Python
2.7.11. Really appreciate any help
Answer: A few things:
1. As metioned in the comments, `urllib` is not installed through `pip`, it is part of the standard library, so you can just do `import urllib` without installation.
2. Python 3.x has a [`urllib.request`](https://docs.python.org/3/library/urllib.request.html) module, but Python 2.x does not, as far as I know.
3. The functionality that you are looking for from `urllib.request` is most likely contained in [`urllib2`](https://docs.python.org/2/library/urllib2.html#module-urllib2) (which is also part of the standard library), but you might be even better off using [`requests`](http://docs.python-requests.org/en/master/), which probably does need to be installed through `pip` in your case:
pip install requests
In fact, the `urllib2` documentation itself recommends that you use the
`requests` library "for a higher-level HTTP client interface" - but I am not
sure what you wish to do, so it is hard to say what would be best for your
particular use case.
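If all you need is to fetch a URL, for example, `requests` makes it a one-liner (a sketch with a placeholder URL):
    import requests

    response = requests.get('http://example.com')
    print(response.status_code)
    print(response.text[:200])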
|
root.query_pointer()._data causes high CPU usage
Question: I'm a total noob in Python, programming and Linux. I wrote a simple python
script to track usage time of various apps. I've noticed that after some time
Python is going nuts, utilizing 100% of the CPU. It turns out it's the code
obtaining the mouse position that is causing the issues.
I've tried running this code in an empty python script:
import time
from Xlib import display
while True:
d = display.Display().screen().root.query_pointer()._data
print(d["root_x"], d["root_y"])
time.sleep(0.1)
It works but the CPU usage is increasing over time. With `time.sleep(1)` it
takes some time but sooner or later it reaches crazy values.
I'm on Ubuntu 16.04.1 LTS using Python 3.5 with python3-xlib 0.15
Answer: To keep the CPU usage steady I put `display.Display().screen()` before the
loop, so that it doesn't repeat that setup work on every iteration. The screen
shouldn't change, so neither should that value, so it made sense to set it up
beforehand.
import time
from Xlib import display
disp = display.Display().screen()
while True:
d = disp.root.query_pointer()._data
print(d["root_x"], d["root_y"])
time.sleep(0.1)
I've tested it and it stays at about 0.3% for me.
Hope it this helps :)
|
MiniBatchKMeans OverflowError: cannot convert float infinity to integer?
Question: I am trying to find the right number of clusters, `k`, according to silhouette
scores using `sklearn.cluster.MiniBatchKMeans`.
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.text import HashingVectorizer
docs = ['hello monkey goodbye thank you', 'goodbye thank you hello', 'i am going home goodbye thanks', 'thank you very much sir', 'good golly i am going home finally']
vectorizer = HashingVectorizer()
X = vectorizer.fit_transform(docs)
for k in range(5):
model = MiniBatchKMeans(n_clusters = k)
model.fit(X)
And I receive this error:
Warning (from warnings module):
File "C:\Python34\lib\site-packages\sklearn\cluster\k_means_.py", line 1279
0, n_samples - 1, init_size)
DeprecationWarning: This function is deprecated. Please call randint(0, 4 + 1) instead
Traceback (most recent call last):
File "<pyshell#85>", line 3, in <module>
model.fit(X)
File "C:\Python34\lib\site-packages\sklearn\cluster\k_means_.py", line 1300, in fit
init_size=init_size)
File "C:\Python34\lib\site-packages\sklearn\cluster\k_means_.py", line 640, in _init_centroids
x_squared_norms=x_squared_norms)
File "C:\Python34\lib\site-packages\sklearn\cluster\k_means_.py", line 88, in _k_init
n_local_trials = 2 + int(np.log(n_clusters))
OverflowError: cannot convert float infinity to integer
I know the `type(k)` is `int`, so I don't know where this issue is coming
from. I can run the following just fine, but I can't seem to iterate through
integers in a list, even though the `type(2)` is equal to `k = 2; type(k)`
model = MiniBatchKMeans(n_clusters = 2)
model.fit(X)
Even running a different `model` works:
>>> model = KMeans(n_clusters = 2)
>>> model.fit(X)
KMeans(copy_x=True, init='k-means++', max_iter=300, n_clusters=2, n_init=10,
n_jobs=1, precompute_distances='auto', random_state=None, tol=0.0001,
verbose=0)
Answer: Let's analyze your code:
* `for k in range(5)` returns the following sequence:
* `0, 1, 2, 3, 4`
* `model = MiniBatchKMeans(n_clusters = k)` inits model with `n_clusters=k`
* Let's look at the first iteration:
* `n_clusters=0` is used
* Within the optimization-code (look at the output):
* `int(np.log(n_clusters))`
* = `int(np.log(0))`
* = `int(-inf)`
* ERROR: no infinity definition for integers!
* -> casting floating-point value of -inf to int not possible!
Setting `n_clusters=0` does not make sense!
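A minimal fix is to start the range at 2 (silhouette scores need at least two clusters anyway):
    for k in range(2, 7):
        model = MiniBatchKMeans(n_clusters=k)
        model.fit(X)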
|
Dictionary key and value flipping themselves unexpectedly
Question: I am running python 3.5, and I've defined a function that creates XML
SubElements and adds them under another element. The attributes are in a
dictionary, but for some reason the dictionary keys and values will sometimes
flip when I execute the script.
Here is a snippet of kind of what I have (the code is broken into many
functions so I combined it here)
import xml.etree.ElementTree as ElementTree
def AddSubElement(parent, tag, text='', attributes = None):
XMLelement = ElementTree.SubElement(parent, tag)
XMLelement.text = text
if attributes != None:
for key, value in attributes:
XMLelement.set(key, value)
print("attributes =",attributes)
return XMLelement
descriptionTags = ([('xmlns:g' , 'http://base.google.com/ns/1.0')])
XMLroot = ElementTree.Element('rss')
XMLroot.set('version', '2.0')
XMLchannel = ElementTree.SubElement(XMLroot,'channel')
AddSubElement(XMLchannel,'g:description', 'sporting goods', attributes=descriptionTags )
AddSubElement(XMLchannel,'link', 'http://'+ domain +'/')
XMLitem = AddSubElement(XMLchannel,'item')
AddSubElement(XMLitem, 'g:brand', Product['ProductManufacturer'], attributes=bindingParam)
AddSubElement(XMLitem, 'g:description', Product['ProductDescriptionShort'], attributes=bindingParam)
AddSubElement(XMLitem, 'g:price', Product['ProductPrice'] + ' USD', attributes=bindingParam)
The key and value do get switched! I'll sometimes see this in the console:
attributes = [{'xmlns:g', 'http://base.google.com/ns/1.0'}]
attributes = [{'http://base.google.com/ns/1.0', 'xmlns:g'}]
attributes = [{'http://base.google.com/ns/1.0', 'xmlns:g'}]
...
And here is the xml string that sometimes comes out:
<rss version="2.0">
<channel>
<title>example.com</title>
<g:description xmlns:g="http://base.google.com/ns/1.0">sporting goods</g:description>
<link>http://www.example.com/</link>
<item>
<g:id http://base.google.com/ns/1.0="xmlns:g">8987983</g:id>
<title>Some cool product</title>
<g:brand http://base.google.com/ns/1.0="xmlns:g">Cool</g:brand>
<g:description http://base.google.com/ns/1.0="xmlns:g">Why is this so cool?</g:description>
<g:price http://base.google.com/ns/1.0="xmlns:g">69.00 USD</g:price>
...
What is causing this to flip?
Answer:
attributes = [{'xmlns:g', 'http://base.google.com/ns/1.0'}]
This is a list containing a set, not a dictionary. Neither sets nor
dictionaries are ordered.
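To make the difference concrete, a small sketch (separate from the code above):
    # {'a', 'b'} is a set literal: unordered, so iterating over it can yield the
    # two strings in either order, which is exactly the "flipping" seen above
    bad = [{'xmlns:g', 'http://base.google.com/ns/1.0'}]

    # a list of (key, value) tuples keeps its order and unpacks cleanly
    tags = [('xmlns:g', 'http://base.google.com/ns/1.0')]
    for key, value in tags:
        print(key, '->', value)

    # a plain dict also works, iterated via .items()
    tags = {'xmlns:g': 'http://base.google.com/ns/1.0'}
    for key, value in tags.items():
        print(key, '->', value)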
|
Python Flask: RQ Worker raising KeyError because of environment variable
Question: I'm trying to set up a Redis queue and a worker to process the queue with my
Flask app. I'm implementing this to handle a task that sends emails. I'm a
little confused because it appears that the stack trace is saying that my
'APP_SETTINGS' environment variable is not set when it is in fact set.
Prior to starting up the app, redis or the worker, I set APP_SETTINGS:
export APP_SETTINGS="project.config.DevelopmentConfig"
However, when an item gets added to the queue, here's the stack trace:
17:00:00 *** Listening on default...
17:00:59 default: project.email.sendMailInBG(<flask_mail.Message object at 0x7fc930e1c3d0>) (aacf9546-5558-4db8-9232-5f36c25d521b)
17:01:00 KeyError: 'APP_SETTINGS'
Traceback (most recent call last):
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/worker.py", line 588, in perform_job
rv = job.perform()
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 498, in perform
self._result = self.func(*self.args, **self.kwargs)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 206, in func
return import_attribute(self.func_name)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/utils.py", line 150, in import_attribute
module = importlib.import_module(module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/tony/pyp-launch/project/__init__.py", line 24, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/home/tony/pyp-launch/venv/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'APP_SETTINGS'
Traceback (most recent call last):
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/worker.py", line 588, in perform_job
rv = job.perform()
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 498, in perform
self._result = self.func(*self.args, **self.kwargs)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 206, in func
return import_attribute(self.func_name)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/utils.py", line 150, in import_attribute
module = importlib.import_module(module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/tony/pyp-launch/project/__init__.py", line 24, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/home/tony/pyp-launch/venv/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'APP_SETTINGS'
17:01:00 Moving job to u'failed' queue
17:01:00
17:01:00 *** Listening on default...
**email.py**
from flask.ext.mail import Message
from project import app, mail
from redis import Redis
from rq import use_connection, Queue
q = Queue(connection=Redis())
def send_email(to, subject, template, emailable):
if emailable==True:
msg = Message(
subject,
recipients=[to],
html=template,
sender=app.config['MAIL_DEFAULT_SENDER']
)
q.enqueue(sendMailInBG, msg)
else:
print("no email sent, emailable set to: " + str(emailable))
def sendMailInBG(msgContent):
with app.test_request_context():
mail.send(msgContent)
**worker.py**
import os
import redis
from rq import Worker, Queue, Connection
listen = ['default']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)
if __name__ == '__main__':
with Connection(conn):
worker = Worker(list(map(Queue, listen)))
worker.work()
I'd really appreciate another set of eyes on this. I can't for the life of me
figure out what's going on here.
Answer: Thanks to the prompting of @danidee, I discovered that the environment
variables need to be defined in each terminal. Hence, APP_SETTINGS was defined
for the actual app, but not for the worker.
The solution was to set APP_SETTINGS in the worker terminal.
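For example, in the terminal that runs the worker, before starting it:
    export APP_SETTINGS="project.config.DevelopmentConfig"
    python worker.py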
|
PHP equivalent of Python's `urljoin`
Question: What is the PHP equivalent for building a URL from a base URL and a
potentially-relative path? Python provides
[`urlparse.urljoin`](https://docs.python.org/2/library/urlparse.html#urlparse.urljoin)
but there does not seem to be any standard implementation in PHP.
The closest I've found is people suggesting the use of
[`parse_url`](http://php.net/manual/en/function.parse-url.php) and then
rebuilding the URL from parts, but implementations doing that generally get
things like protocol-relative links wrong (for example, `//example.com/foo`
turning into `http://example.com/foo` or `https://example.com/foo`, inheriting
the base URL's protocol), and it also doesn't make it easy to handle things
like parent directory links. Here are examples of those things working
correctly in `urlparse.urljoin`:
>>> from urlparse import urljoin
>>> urljoin('http://example.com/some/directory/filepart', 'foo.jpg')
'http://example.com/some/directory/foo.jpg'
>>> urljoin('http://example.com/some/directory/', 'foo.jpg')
'http://example.com/some/directory/foo.jpg'
>>> urljoin('http://example.com/some/directory/', '../foo.jpg')
'http://example.com/some/foo.jpg'
>>> urljoin('http://example.com/some/directory/', '/foo.jpg')
'http://example.com/foo.jpg'
>>> urljoin('http://example.com/some/directory/', '//images.example.com/bar.jpg')
'http://images.example.com/bar.jpg'
>>> urljoin('https://example.com/some/directory/', '//images.example.com/bar.jpg')
'https://images.example.com/bar.jpg'
>>> urljoin('ftp://example.com/some/directory/', '//images.example.com/bar.jpg')
'ftp://images.example.com/bar.jpg'
>>> urljoin('http://example.com:8080/some/directory/', '//images.example.com/bar.jpg')
'http://images.example.com/bar.jpg'
Is there an idiomatic way of achieving the same in PHP, or a well-regarded
simple library or implementation that actually gets all of these cases
correct?
Answer: Because there is clearly a need for this functionality and none of the random
scripts out there cover all the bases, I've started a [project on
Github](https://github.com/plaidfluff/php-urljoin) to try to do it right.
The implementation of `urljoin()` is currently as follows:
function urljoin($base, $rel) {
$pbase = parse_url($base);
$prel = parse_url($rel);
$merged = array_merge($pbase, $prel);
if ($prel['path'][0] != '/') {
// Relative path
$dir = preg_replace('@/[^/]*$@', '', $pbase['path']);
$merged['path'] = $dir . '/' . $prel['path'];
}
// Get the path components, and remove the initial empty one
$pathParts = explode('/', $merged['path']);
array_shift($pathParts);
$path = [];
$prevPart = '';
foreach ($pathParts as $part) {
if ($part == '..' && count($path) > 0) {
// Cancel out the parent directory (if there's a parent to cancel)
$parent = array_pop($path);
// But if it was also a parent directory, leave it in
if ($parent == '..') {
array_push($path, $parent);
array_push($path, $part);
}
} else if ($prevPart != '' || ($part != '.' && $part != '')) {
// Don't include empty or current-directory components
if ($part == '.') {
$part = '';
}
array_push($path, $part);
}
$prevPart = $part;
}
$merged['path'] = '/' . implode('/', $path);
$ret = '';
if (isset($merged['scheme'])) {
$ret .= $merged['scheme'] . ':';
}
if (isset($merged['scheme']) || isset($merged['host'])) {
$ret .= '//';
}
if (isset($prel['host'])) {
$hostSource = $prel;
} else {
$hostSource = $pbase;
}
// username, password, and port are associated with the hostname, not merged
if (isset($hostSource['host'])) {
if (isset($hostSource['user'])) {
$ret .= $hostSource['user'];
if (isset($hostSource['pass'])) {
$ret .= ':' . $hostSource['pass'];
}
$ret .= '@';
}
$ret .= $hostSource['host'];
if (isset($hostSource['port'])) {
$ret .= ':' . $hostSource['port'];
}
}
if (isset($merged['path'])) {
$ret .= $merged['path'];
}
if (isset($prel['query'])) {
$ret .= '?' . $prel['query'];
}
if (isset($prel['fragment'])) {
$ret .= '#' . $prel['fragment'];
}
return $ret;
}
This function will correctly handle users, passwords, port numbers, query
strings, anchors, and even `file:///` URLs (which seems to be a common defect
in existing functions of this type).
|
python calling variables from another script into current script
Question: I'm trying to call a variable from another script into my current script but
running into "variable not defined" issues.
botoGetTags.py
20 def findLargestIP():
21 for i in tagList:
22 #remove all the spacing in the tags
23 ec2Tags = i.strip()
24 #seperate any multiple tags
25 ec2SingleTag = ec2Tags.split(',')
26 #find the last octect of the ip address
27 fullIPTag = ec2SingleTag[1].split('.')
28 #remove the CIDR from ip to get the last octect
29 lastIPsTag = fullIPTag[3].split('/')
30 lastOctect = lastIPsTag[0]
31 ipList.append(lastOctect)
32 largestIP = int(ipList[0])
33 for latestIP in ipList:
34 if int(latestIP) > largestIP:
35 largestIP = latestIP
36 return largestIP
37 #print largestIP
38
39 if __name__ == '__main__':
40 getec2Tags()
41 largestIP = findLargestIP()
42 print largestIP
So this script ^ correctly returns the value of `largestIP` but in my other
script
terraform.py
1 import botoGetTags
8 largestIP = findLargestIP()
before I execute any of the functions in my script, terraTFgen.py, I get:
Traceback (most recent call last):
File "terraTFgen.py", line 8, in <module>
largestIP = findLargestIP()
NameError: name 'findLargestIP' is not defined
I thought that if I import another script I could use those variables in my
current script. Is there another step I should take?
Thanks
Answer: You imported the module, not the function. So you need to refer to the
function via the module:
import botoGetTags
largestIP = botoGetTags.findLargestIP()
Alternatively you could import the function directly:
from botoGetTags import findLargestIP
largestIP = findLargestIP()
|
Django, viewing models from different app
Question: I am super new to Python programming and Django and I got the basics out of
the way. I created a project with two apps, home and video. In my video
models.py I have the following data:
class Video(models.Model):
name = models.CharField(max_length=200)
description = models.TextField(blank=True, null=True)
I want to do something with this in my home app's views.py, such as display
the data in an HTML page. Currently it is set up as follows:
from video.models import Video
def display_video(request):
video_list = Video.objects.all()
context = {'video_list': video_list}
return render(request, 'home/home.html', context)
in my home.html
{% if video_list %}
{% for video in video_list %}
<p>{{ video.name }}</p>
<p>{{ video.description }}</p>
{% endfor %}
{% else %}
<p>no videos to display</p>
{% endif %}
My home.html always returns "no videos to display".
But when I query Video.objects.all() in my video app, it finds 2 objects. Any
help is appreciated.
Answer: In settings, check that the following is there:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
|
create file containing '/' in file name in python
Question: How can I create a file in Python if the filename contains '/'?
url='https://www.udacity.com/cs101x/index.html'
f=open(url,'w')
f.write('123')
f.close()
The above code produces this error:
Traceback (most recent call last):
File "9.py", line 2, in <module>
f=open(url,'w')
IOError: [Errno 22] invalid mode ('w') or filename:https://www.udacity.com/cs101x/index.html'
Answer: Use os.path.basename() to isolate the filename.
import os
url='https://www.udacity.com/cs101x/index.html'
filename = os.path.basename(url)
f=open(filename,'w')
f.write('123')
f.close()
This will create a file called index.html
|
What is axis in Python with Numpy module?
Question: When I use np.stack, I sometimes have to use axis, like axis=1. I don't
understand what the axis means here. For example,
c1 = np.ones((2, 3))
c2 = np.zeros((2, 3))
c = np.stack([c1, c2], axis = 1)
This shows:
array([[[1., 1., 1.],
[0., 0., 0.]],
[[1., 1., 1.],
[0., 0., 0.]]])
What rules produce this result?
Answer: Axis means the dimension. For a simple example consider
[numpy.sum](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.sum.html)
    import numpy as np
    a = np.array([[1, 2, 3], [2, 3, 1]])
    sum1 = np.sum(a, axis=0)
    sum2 = np.sum(a, axis=1)
    print sum1, sum2
This will give `sum1=[3, 5, 4]` (summing down each column) and `sum2=[6, 6]`
(summing across each row).
My array has two dimensions/axes. The first one is of length 2 and the second
one is of length 3. So by specifying an axis you simply tell your code along which
dimension you want to do your job.
[numpy.ndarray.ndim](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ndim.html)
can tell you how many axes do you have
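Applied to `np.stack` from the question: the `axis` argument says where the new dimension is inserted into the result. A quick sketch:
    import numpy as np

    c1 = np.ones((2, 3))
    c2 = np.zeros((2, 3))

    print(np.stack([c1, c2], axis=0).shape)  # (2, 2, 3): result[i] is the i-th input
    print(np.stack([c1, c2], axis=1).shape)  # (2, 2, 3): result[:, i] is the i-th input
    print(np.stack([c1, c2], axis=2).shape)  # (2, 3, 2): result[..., i] is the i-th input
With `axis=1` the inputs are interleaved along the second dimension, which is why the output in the question alternates a row of ones with a row of zeros.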
|
Dictionary removing duplicate along with subtraction and addition of values
Question: New to Python here. I would like to collapse duplicate dictionary keys into
just one, while performing arithmetic such as adding/subtracting the values
when duplicates are found.
**Current Code Output**
> {('GRILLED AUSTRALIA ANGU',): (('1',), ('29.00',)), ('Beer', 'Carrot Cake',
> 'Chocolate Cake'): (('10', '1', '1'), ('30.00', '2.50', '3.50')), ('**Beer**
> ', '**Beer** '): (('**1** ', '**1** '), ('**3.00** ', '**3.00** ')),
> ('Carrot Cake', 'Chocolate Cake'): (('1', '1'), ('2.50', '3.50')), ('Carrot
> Cake',): (('1',), ('2.50',)), ('BRAISED BEANCURD WITH',): (('1',),
> ('10.00',)), ('SAUSAGE WRAPPED WITH B', 'ESCARGOT WITH GARLIC H', 'PAN
> SEARED FOIE GRAS', 'SAUTE FIELD MUSHROOM W', 'CRISPY CHICKEN WINGS', 'ONION
> RINGS'): (('1', '1', '1', '1', '1', '1'), ('10.00', '12.00', '15.00',
> '9.00', '7.00', '6.00')), ('**Beer** ', '**Beer** ', '**Carrot Cake** ',
> '**Chocolate Cake** '): (('**-1** ', '**10** ', '**1** ', '**1** '),
> ('**-3.00** ', '**30.00** ', '**2.50** ', '**3.50** ')), ('Beer',):
> (('10',), ('30.00',))}
What i want: example:
**SUBTRACTION FOR DUPLICATE**
> {'Beer': [9, 27]} , {'carrot cake': [1, 2.5]} , {'Chocolate Cake': [1, 3.5]}
Notice that for the duplicate item entry I collapsed Beer into one, with
(10-1=9) for the quantity and (30-3=27) for the cost. How do I automate
this process?
**ADDITION FOR DUPLICATE**
> {'Beer': [2, 6]}
Notice that I combined beer and beer into one entry, with quantity
(1+1=2) and cost (3+3=6).
**My code:**
import csv
from itertools import groupby
from operator import itemgetter
import re
d = {}
#open directory and saving directory
with open("rofl.csv", "rb") as f, open("out.csv", "wb") as out:
reader = csv.reader(f)
next(reader)
writer = csv.writer(out)
#the first column header
writer.writerow(["item","quantity","amount"])
groups = groupby(csv.reader(f), key=itemgetter(0))
for k, v in groups:
v = list(v)
sales= [ x[1] for x in v[8:] ]
salesstring= str(sales)
#using re.findall instead of re.search to return all via regex for items
itemoutput= re.findall(r"(?<=\s\s)\w+(?:\s\w+)*(?=\s\s)",textwordfortransaction)
#using re.findall instead of re.search to return all via regex for amount aka quantity
amountoutput= re.findall(r"'(-?\d+)\s+(?:[A-Za-z ]*)",textwordfortransaction)
#using re.findall instead of re.search to return all via regex for cost
costoutput= re.findall(r"(?:'-?\d+[A-Za-z ]*)(-?\d+[.]?\d*)",textwordfortransaction)
d[tuple(itemoutput)] = tuple(amountoutput),tuple(costoutput)
#writing the DATA to output CSV
writer.writerow([d])
#to remove the last entry else it would keep on stacking the previous
d.clear()
link to csv file if needed
<https://drive.google.com/open?id=0B1kSBxOGO4uJOFVZSWh2NWx6dHc>
Answer: Working with your current output as posted in the question, you can just
[`zip`](https://docs.python.org/3/library/functions.html#zip) the different
lists of tuples of items and quantities and prices to align the items with
each other, add them up in two `defaultdicts`, and finally combine those to
the result.
output = {('GRILLED AUSTRALIA ANGU',): (('1',), ('29.00',)), ...}
from collections import defaultdict
prices, quantities = defaultdict(int), defaultdict(int)
for key, val in output.items():
for item, quant, price in zip(key, *val):
quantities[item] += int(quant)
prices[item] += float(price)
result = {item: (quantities[item], prices[item]) for item in prices}
Afterwards, `result` is the following. Note that you do _not_ need a special case for
subtracting duplicates when the quantity and/or price are negative; just add
the negative number.
{'ESCARGOT WITH GARLIC H': (1, 12.0),
'BRAISED BEANCURD WITH': (1, 10.0),
'CRISPY CHICKEN WINGS': (1, 7.0),
'SAUSAGE WRAPPED WITH B': (1, 10.0),
'ONION RINGS': (1, 6.0),
'PAN SEARED FOIE GRAS': (1, 15.0),
'Beer': (31, 93.0),
'Chocolate Cake': (3, 10.5),
'SAUTE FIELD MUSHROOM W': (1, 9.0),
'Carrot Cake': (4, 10.0),
'GRILLED AUSTRALIA ANGU': (1, 29.0)}
* * *
If you want to keep the individual items separate, just move the declaration
of `prices`, `quantities`, and `result` _inside_ the outer loop:
for key, val in output.items():
prices, quantities = defaultdict(int), defaultdict(int)
for item, quant, price in zip(key, *val):
quantities[item] += int(quant)
prices[item] += float(price)
result = {item: (quantities[item], prices[item]) for item in prices}
# do something with result or collect in a list
Example result for the two-beer line:
('Beer', 'Beer', 'Carrot Cake', 'Chocolate Cake') (('-1', '10', '1', '1'), ('-3.00', '30.00', '2.50', '3.50'))
{'Chocolate Cake': (1, 3.5), 'Beer': (9, 27.0), 'Carrot Cake': (1, 2.5)}
If you prefer the `result` to group the items, quantities and prices together,
use this:
items = list(prices)
result = (items, [quantities[x] for x in items], [prices[x] for x in items])
Result is this like this:
(['Carrot Cake', 'Beer', 'Chocolate Cake'], [1, 9, 1], [2.5, 27.0, 3.5])
|
Python Indention Block? Why?
Question: I have tried untabifying the region and did not mix spaces and tabs. What could be
wrong here? When I run the module, it points to `if result["geo"]:` and says
"There's an error in your program: expected an indention block"
from twitter import *
import sys
import csv
latitude = 8.8015
longitude = 125.7407
max_range = 1000
num_results = 500
outfile = "nimal.csv"
config = {}
execfile("config.py", config)
twitter = Twitter(
auth = OAuth(config["access_key"], config["access_secret"], config["consumer_key"], config["consumer_secret"]))
csvfile = file(outfile, "w")
csvwriter = csv.writer(csvfile)
row = [ "date", "user", "text", "latitude", "longitude" ]
csvwriter.writerow(row)
result_count = 0
last_id = None
while result_count < num_results:
query = twitter.search.tweets(q = "urios", geocode = "%f,%f,%dkm" % (latitude, longitude, max_range), count = 100, since_id = 2016-10-8, max_id = last_id)
for result in query["statuses"]:
if result["geo"]:
date = result["created_at"]
user = result["user"]["screen_name"]
text = text.encode('ascii', 'replace')
latitude = result["geo"]["coordinates"][0]
longitude = result["geo"]["coordinates"][1]
row = [ date, user, text, latitude, longitude ]
csvwriter.writerow(row)
result_count += 1
last_id = result["id"]
print "got %d results" % result_count
csvfile.close()
print "written to %s" % outfile
Answer: Here is the problem:
for result in query["statuses"]:
if result["geo"]:
date = result["created_at"]
Python requires consistent indentation: the body of each block must be indented one level further than the statement that introduces it.
You have to change it to:
for result in query["statuses"]:
if result["geo"]:
date = result["created_at"]
|
digits to words from sys.stdin
Question: I'm trying to convert digits to words from standard input (a text file). If an
input line is, for example, 1234, I want the output to be "one two three four",
and the words for the next line of the file should go on a new line in the
shell/terminal, e.g. for the lines 1234 and 56 the output should be "one two
three four" and then "five six". The problem is that I can't get the words for
one input line to print on the same output line. Code so far:
#!/usr/bin/python3
import sys
import math
def main():
number_list = ["zero","one","two","three","four","five","six","seven","eight","nine"]
for line in sys.stdin:
number = line.split()
for i in number:
number_string = "".join(i)
number2 = int(number_string)
print(number_list[number2])
main()
Answer: Put the words in a list, join them, and print the line.
#!/usr/bin/python3
import sys
import math
def main():
number_list = ["zero","one","two","three","four","five","six","seven","eight","nine"]
for line in sys.stdin:
digits = list(line.strip())
words = [number_list[int(digit)] for digit in digits]
words_line = ' '.join(words)
print(words_line)
main()
|
Convert QueryDict into list of arguments
Question: I'm receiving the following payload via a POST request through the view below:
class CustomView(APIView):
"""
POST data
"""
def post(self, request):
extr= externalAPI()
return Response(extr.addData(request.data))
And in the `externalAPI` class I have the `addData()` function where I want to
convert _QueryDict_ to a simple list of arguments:
def addData(self, params):
return self.addToOtherPlace(**params)
In other words, what I get in params is something like:
<QueryDict: {u'data': [u'{"object":"a","reg":"1"}'], u'record': [u'DAFASDH']}>
And I need to pass it to the addToOtherPlace() function like:
addToOtherPlace(data={'object':'a', 'reg': 1}, record='DAFASDH')
I have tried with different approaches but I have to say I'm not very familiar
with dictionaries in python.
Any help would be really appreciated.
Thanks!
Answer: You can write a helper function that walks through the _QueryDict_ object,
converts valid _JSON_ strings to Python objects, converts digit strings to
integers, and unwraps single-item lists to their first element:
import json
def restruct(d):
for k in d:
# convert value if it's valid json
if isinstance(d[k], list):
v = d[k]
try:
d[k] = json.loads(v[0])
except ValueError:
d[k] = v[0]
# step into dictionary objects to convert string digits to integer
if isinstance(d[k], dict):
restruct(d[k])
elif d[k].isdigit():
d[k] = int(d[k])
params = {u'data': [u'{"object":"a","reg":"1"}'], u'record': [u'DAFASDH']}
restruct(params)
print(params)
# {'record': 'DAFASDH', 'data': {'object': 'a', 'reg': 1}}
Note that this approach modifies the initial object _in-place_. You can make a
`deepcopy`, and modify the copy instead if you're going to keep the original
object intact:
import copy
def addData(self, params):
params_copy = copy.deepcopy(params)
restruct(params_copy)
return self.addToOtherPlace(**params_copy)
|
How to detect fast moving soccer ball with OpenCV, Python, and Raspberry Pi?
Question: This is the code. I am trying to detect different types of soccer ball using
OpenCV and Python. The soccer ball could be of different colors. I can detect
the ball if it is not moving, but the code does not work if the ball is moving
fast.
    from picamera.array import PiRGBArray
    from picamera import PiCamera
    import time, cv2, sys, imutils, cv
    import cv2.cv as cv
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (400, 300)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=(400, 300))
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
image = frame.array
img = cv2.medianBlur(image, 3)
imgg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#imgg = cv2.blur(imgg, (3,3))
#imgg = cv2.dilate(imgg, np.ones((5, 5)))
#imgg = cv2.GaussianBlur(imgg,(5,5),0)
circles = cv2.HoughCircles(imgg, cv.CV_HOUGH_GRADIENT, 1, 20, param1=100, param2=40, minRadius=5, maxRadius=90)
cv2.imshow("Frame", imgg)
# clear the stream in preparation for the next frame
rawCapture.truncate(0)
key = cv2.waitKey(1) & 0xFF
# if the `q` key was pressed, break from the loop
if key == ord("q"):
break
if circles is None:
continue
for i in circles[0,:]:
cv2.circle(imgg,(i[0],i[1]),i[2],(0,255,0),1) # draw the outer circle
cv2.circle(imgg,(i[0],i[1]),2,(0,0,255),3) # draw the center of the circle
cv2.imshow("Framed", imgg)
Answer: May I suggest you read this post?
<http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/>
There are also a few comments below indicating how to detect multiple balls
rather than one.
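For reference, here is a condensed sketch of the colour-masking approach that post walks through (my own condensation, not the post's exact code). The HSV bounds are placeholders that must be tuned for your ball and lighting, and a generic `cv2.VideoCapture` stands in for the PiCamera capture loop:
    import cv2
    import numpy as np

    # placeholder HSV range: tune these for the colour of your ball
    lower = np.array([29, 86, 6])
    upper = np.array([64, 255, 255])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blurred = cv2.GaussianBlur(frame, (11, 11), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        # keep only pixels inside the colour range, then clean up the mask
        mask = cv2.inRange(hsv, lower, upper)
        mask = cv2.erode(mask, None, iterations=2)
        mask = cv2.dilate(mask, None, iterations=2)
        # [-2] picks the contour list on both OpenCV 2.x and 3.x return signatures
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if cnts:
            c = max(cnts, key=cv2.contourArea)
            (x, y), radius = cv2.minEnclosingCircle(c)
            if radius > 10:
                cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
        cv2.imshow("Frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
Colour masking plus `minEnclosingCircle` tolerates motion blur far better than `HoughCircles`, which needs reasonably sharp edges to vote on.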
|
Nonewline python
Question: I am trying to print random numbers using random, but when I try to print the
output on one line using `end=" "`, the output doesn't show anything until I
break the program.
import random
import time
while True:
x = random.randint(1,6)
print(x, end=" ")
time.sleep(1)
The output looks like this after I interrupt:
C1 2 3 5 5 4 5 4 1 ---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
Answer: You can disable buffering by passing `flush=True` to `print` function (in
python3)
print(x, end=" ", flush=True)
|
python csv new line
Question: I am using this code to convert a DB table to a CSV file. It converts to CSV,
but instead of a new line / line break between rows it wraps each row in double
quotes. Can someone help me?
import MySQLdb
import csv
conn = MySQLdb.connect(user='root', db='users', host='localhost')
cursor = conn.cursor()
cursor.execute('SELECT * FROM users.newusers')
ver = cursor.fetchall()
ver1 = []
for ve in ver:
ve = str(ve).replace("', '","|").replace(", '","|").replace(","," ").replace(" "," ").replace("|",",")
ver1.append(ve)
print ver1
csv_file = open('filename', 'wb')
writer = csv.writer(csv_file)
    writer.writerow(ver1)
    csv_file.close()
current output
"(114L,New,9180971675,Ravi Raju,RAJRAVI,National,#N.A,No,No,No,#N.A,OS40,005056BB0803,192.168.0.1,no,yes')","(115L,New,9180971676,Rajendran Mohan,rajemoh,National,#N.A,No,No,No,#N.A,OS40,005056BB0803,192.168.10.10,no,yes')"
expected out
114L,New,9180971675,Ravi Raju,RAJRAVI,National,#N.A,No,No,No,#N.A,OS40,005056BB0803,192.168.0.1,no,yes
115L,New,9180971676,Rajendran Mohan,rajemoh,National,#N.A,No,No,No,#N.A,OS40,005056BB0803,192.168.10.10,no,yes
Answer: SQL queries will return results to you in a list of tuples from `fetchall()`.
In your current approach, you iterate through this list but call `str()` on
each tuple, thereby converting the whole tuple to its string representation.
Instead, you could use a list comprehension on each tuple both to get it into
a nested list format and apply your `replace()` operations.
for ve in ver:
ver1.append([str(item).replace(<something>) for item in ve])
In the final step you use `csv.writerow()` which is designed to be used within
a `for` loop. If your list is nested in the form `[[row1], [row2]]` then you
can just change this to writerow**s**. Also, it's best practice to use the
context manager `with` to handle files as this will automatically close the
file once the operation is completed.
with open('filename.csv', 'wb') as csv_file:
writer = csv.writer(csv_file)
writer.writerows(ver1)
|
Trying to Solve Numerical Diff Eq Using Euler's Method, Invalid Value Error
Question: I am trying to learn it from this website:
<http://nbviewer.jupyter.org/github/numerical-mooc/numerical-
mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb>
I was trying to code it with as little help as possible, but I kept getting
this error:
C:\Users\"My Real Name"\Anaconda2\lib\site-packages\ipykernel__main__.py:29:
RuntimeWarning: invalid value encountered in double_scalars
There are no data points on my plot. So I literally pasted all the code in directly
from the website and I still get the error! I give up; can someone help a
Python newbie?
import numpy as np
from matplotlib import pyplot
from math import sin, cos, log, ceil
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
# model parameters:
g = 9.8 # gravity in m s^{-2}
v_t = 30.0 # trim velocity in m s^{-1}
C_D = 1/40 # drag coefficient --- or D/L if C_L=1
C_L = 1 # for convenience, use C_L = 1
### set initial conditions ###
v0 = v_t # start at the trim velocity (or add a delta)
theta0 = 0 # initial angle of trajectory
x0 = 0 # horizotal position is arbitrary
y0 = 1000 # initial altitude
def f(u):
v = u[0]
theta = u[1]
x = u[2]
y = u[3]
return np.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2, -g*cos(theta)/v + g/v_t**2*v, v*cos(theta), v*sin(theta)])
def euler_step(u, f, dt):
u + dt * f(u)
T = 100 # final time
dt = 0.1 # time increment
N = int(T/dt) + 1 # number of time-steps
t = np.linspace(0, T, N) # time discretization
# initialize the array containing the solution for each time-step
u = np.empty((N, 4))
u[0] = np.array([v0, theta0, x0, y0])# fill 1st element with initial values
# time loop - Euler method
for n in range(N-1):
u[n+1] = euler_step(u[n], f, dt)
x = u[:,2]
y = u[:,3]
pyplot.figure(figsize=(8,6))
pyplot.grid(True)
pyplot.xlabel(r'x', fontsize=18)
pyplot.ylabel(r'y', fontsize=18)
pyplot.title('Glider trajectory, flight time = %.2f' % T, fontsize=18)
pyplot.plot(x,y, 'k-', lw=2);
Answer: The solution is very simple. You forgot the return statement in euler_step.
Change
def euler_step(u, f, dt):
u + dt * f(u)
to
def euler_step(u, f, dt):
return u + dt * f(u)
and it will work
|
Python - First and last character in string must be alpha numeric, else delete
Question: I am wondering how I can implement a string check where I want to make sure
that the first (and last) character of the string is alphanumeric. I am aware of
`isalnum`, but how do I use it to implement this check/substitution?
So, I have a string like so:
st="-jkkujkl-ghjkjhkj*"
and I would want back:
st="jkkujkl-ghjkjhkj"
Thanks..
Answer: This is not exactly what you asked for, but `str.strip` should serve your
purpose:
import string
st.strip(string.punctuation)
Out[174]: 'jkkujkl-ghjkjhkj'
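If the ends can also contain whitespace or other characters outside `string.punctuation`, trimming anything that is not alphanumeric from both ends with a regex is one alternative (a sketch):
    import re

    st = "-jkkujkl-ghjkjhkj*"
    st = re.sub(r'^[^a-zA-Z0-9]+|[^a-zA-Z0-9]+$', '', st)
    print(st)  # jkkujkl-ghjkjhkj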
|
Building a function of a random variable dynamically in python
Question: I have some random variables using `scipy.stats` as follows:
import scipy.stats as st
x1 = st.uniform()
x2 = st.uniform()
Now I would like to make another random variable based on the previous random
variables and do some calculations, like `var`, on the new random variable.
Assume that I want the new random variable to be something like `max(2, x1) +
x2`. How can I define this dynamically?
Answer: Not directly, I think. However, this approach might be of use to you.
Assume to begin with that you know either the pdf or the cdf of the function
of the random variables of interest. Then you can use rv_continuous in
scipy.stats to calculate the variance and other moments of that function using
the recipe offered at [SO
doc](http://stackoverflow.com/documentation/scipy/6873/rv-continuous-for-
distribution-with-parameters#t=201610161945055670237). (Incidentally, someone
doesn't like it for some reason. If you think of improvements, please
comment.)
Obviously the 'fun' begins here. Usually you would attempt to define the cdf.
For any given value of the random variable this is the probability that an
expression such as the one you gave is not more than the given value. Thus
determining the cdf reduces to solving a (an infinite) collection of
inequalities in two variables. Of course there is often a strong pattern that
greatly reduces the complexity and difficulty of performing this task.
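As a sketch of that recipe, suppose the new variable were simply `Z = x1 + x2` (two independent standard uniforms), whose cdf can be written down by hand; the class name below is my own:
    import numpy as np
    import scipy.stats as st

    class SumOfUniforms(st.rv_continuous):
        """Z = X1 + X2 with X1, X2 ~ U(0, 1); cdf derived by hand."""
        def _cdf(self, z):
            z = np.asarray(z)
            return np.where(z < 1, 0.5 * z ** 2, 1 - 0.5 * (2 - z) ** 2)

    z_dist = SumOfUniforms(a=0.0, b=2.0, name='sum_of_uniforms')
    print(z_dist.mean(), z_dist.var())  # roughly 1.0 and 1/6
`rv_continuous` numerically differentiates the cdf to get a pdf and integrates to get the moments, so only the cdf has to be supplied. (For the exact example in the question, note that `max(2, x1)` is always 2 when `x1` is a standard uniform, so that particular variable is just `2 + x2`.)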
|
How to call a variable inside main function from another program in python?
Question: I have two python files first.py and second.py
first.py looks like
def main():
#some computation
first_variable=computation_result
second.py looks like
import first
def main():
b=getattr(first, first_variable)
#computation
but I am getting a "No Attribute" error. Is there any way to access a variable
inside the main() method of first.py from second.py?
Answer: You should use function calls and return values instead of this.
Return the computation_result from the function in the first file, and then
store the result in the b variable in the second file.
first.py
def main():
# computation
return computation_result
second.py
import first
def main():
b = first.main()
Another option is to use a global variable in the first file, where you store
the value and reference it later.
|
Interpolate without having negative values in python
Question: I've been trying to create a smooth line from these values but I can't have
negative values in my result. So far all the methods I tried do give negative
values. Would love some help.
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline
import numpy as np
y = np.asarray([0,5,80,10,1,10,40,30,80,5,0])
x = np.arange(len(y))
plt.plot(x, y, 'r', ms=5)
spl = UnivariateSpline(x, y)
xs = np.linspace(0,len(y)-1, 1000)
spl.set_smoothing_factor(2)
plt.plot(xs, spl(xs), 'g', lw=3)
plt.show()
[![enter image description
here](https://i.stack.imgur.com/RY5hW.png)](https://i.stack.imgur.com/RY5hW.png)
Answer: Spline fitting is known to overshoot. You seem to be looking for one of the
so-called _monotonic_ interpolators. For instance,
In [10]: from scipy.interpolate import pchip
In [11]: pch = pchip(x, y)
produces
In [12]: xx = np.linspace(x[0], x[-1], 101)
In [13]: plt.plot(x, y, 'ro', label='points')
Out[13]: [<matplotlib.lines.Line2D at 0x7fce0a7fe390>]
In [14]: plt.plot(xx, pch(xx), 'g-', label='pchip')
Out[14]: [<matplotlib.lines.Line2D at 0x7fce0a834b10>]
[![enter image description
here](https://i.stack.imgur.com/xuYMR.png)](https://i.stack.imgur.com/xuYMR.png)
|
Python program asks for input twice, doesn't return value the first time
Question: Here is my code. In the `get_response()` function, if you enter 'y' or 'n', it
says invalid the first time but then works the second time.
How do I fix this?
import random
MIN = 1
MAX = 6
def main():
userValue = 0
compValue = 0
again = get_response()
while again == 'y':
userRoll, compRoll = rollDice()
userValue += userRoll
compValue += compRoll
if userValue > 21:
print("User's points: ", userValue)
print("Computer's points: ", compValue)
print("Computer wins")
else:
print('Points: ', userValue, sep='')
again = get_response()
if again == 'n':
print("User's points: ", userValue)
print("Computer's points: ", compValue)
if userValue > compValue:
print('User wins')
elif userValue == compValue:
print('Tie Game!')
else:
print('Computer wins')
def rollDice():
userRoll = random.randint(MIN, MAX)
compRoll = random.randint(MIN, MAX)
return userRoll, compRoll
def get_response():
answer = input('Do you want to roll? ')
if answer != 'y' or answer != 'n':
print("Invalid response. Please enter 'y' or 'n'.")
answer = input('Do you want to roll? ')
main()
Answer: `answer != 'y' or answer != 'n':` is always true; `or` should be `and`.
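A sketch of a corrected `get_response` (note that it also needs to `return answer`, and a loop so it keeps asking until the input is valid):
    def get_response():
        answer = input('Do you want to roll? ')
        while answer != 'y' and answer != 'n':
            print("Invalid response. Please enter 'y' or 'n'.")
            answer = input('Do you want to roll? ')
        return answer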
|
GAE (Python) Best practice: Load config from JSON file or Datastore?
Question: I wrote a platform in GAE Python with a Datastore database (using NDB). My
platform allows for a theme to be chosen by the user. Before _every_ page
load, I load in a JSON file (using `urllib.urlopen(FILEPATH).read()`). Should
I save the JSON to the Datastore and load it through NDB instead?
Here's an example of my JSON config file. These can range in size but not by
much. They're generally very small.
{
"TITLE": "Test Theme",
"VERSION": "1.0",
"AUTHOR": "ThePloki",
"DESCRIPTION": "A test theme for my platform",
"FONTS": ["Arial", "Times New Roman"],
"TOOLBAR": [
{"left":[
{"template":"logo"}
]},
{"center":[
{"template":"breadcrumbs"}
]},
{"right":[
{"template":"link", "url":"account", "msg":"Account"},
{"template":"link", "url":"logout", "msg":"Log Out"}
]}
],
"NAV_LEFT": true,
"SHOW_PAGE_TITLE": false
}
I don't currently notice any delays, but I'm working locally. During
production would the `urllib.urlopen().read()` cause problems if there is high
traffic?
Answer: Do you expect the configuration to change without the application code being
re-deployed? That is the scenario where it would make sense to store the
configuration in the Datastore.
If changing the configuration involves re-deploying the code anyway, a
local file is probably fine - you might even consider making it a Python file
rather than JSON, so that it'd simply be a matter of importing it rather than
messing around with file handles.
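For example (hypothetical module name), the theme could live in a plain Python file deployed with the app:
    # theme_config.py
    TITLE = "Test Theme"
    VERSION = "1.0"
    NAV_LEFT = True
    SHOW_PAGE_TITLE = False

    # elsewhere in the app
    import theme_config
    print(theme_config.TITLE)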
|
Unable to access modified value of imported variable
Question: I am new to Python and have some problems understanding scope here.
I have a python module A with three global variables :
XYZ = "val1"
ABC = {"k1" : "v1", "k2" : "v2"}
PQR = 1
class Cls_A() :
def sm_fn_A(self) :
global XYZ
global ABC
global PQR
XYZ = "val2"
ABC["k1"] = "z1"
ABC["k3"] = "v3"
PQR += 1
And another module B :
from A import Cls_A, XYZ, ABC, PQR
class Cls_B():
def sm_fn_B(self) :
Cls_A().sm_fn_A()
print XYZ
print ABC
print PQR
Cls_B().sm_fn_B()
This gives me the following output :
val1
{'k3': 'v3', 'k2': 'v2', 'k1': 'z1'}
1
Since these are all global variables, why do I not get updated values of all
the global variables printed ?
Answer: # Explanation
Three global variables are defined in module `A`, in this code:
XYZ = "val1"
ABC = {"k1" : "v1", "k2" : "v2"}
PQR = 1
Then new global variables `XYZ`, `ABC`, `PQR` are defined in module `B`, in
this code:
from A import Cls_A, XYZ, ABC, PQR
This line of code creates new variables, just as if the following was written:
import A
XYZ = A.XYZ
ABC = A.ABC
PQR = A.PQR
It is important to understand that `A.XYZ` and `B.XYZ` are two variables which
point to the same object. They are not the same variable.
Then a new object is assigned to `A.XYZ`:
XYZ = "val2"
This modified `A.XYZ`, but did not modify `B.XYZ`. The two used to be two
variables which pointed to the same object, but now `A.XYZ` points to a
different object.
On the other hand, `A.ABC` is not associated with a different object. Instead,
the object itself is modified. When the object is modified, both `A.ABC` and
`B.ABC` still point to the same object:
ABC["k1"] = "z1"
ABC["k3"] = "v3"
The third case is also not a case of object modification, but rather
reassignment:
PQR += 1
The value was incremented. That created a new object, and then that new object
was assigned to `A.PQR`. `B.PQR` is unchanged. This is equivalent to:
PQR = PQR + 1
A thing which may not be obvious is that both strings and integers are
_immutable_ objects in Python (there is no way to change the number `2` to
become `3` - one can only assign a different int object to a variable, not
change the existing one). Because of that, there is actually no way to change
`A.XYZ` in a way that affects `B.XYZ`.
## The dictionary could behave the same way
The reason why it _"worked"_ with the dictionary is that the object itself was
modified. If a new dictionary were assigned to `A.ABC`, that would not work.
E.g.
ABC = {'k3': 'v3', 'k2': 'v2', 'k1': 'z1'}
Now it would not affect `B.ABC`, because the object in `A.ABC` was not
changed. Another object was assigned to `A.ABC` instead.
# Not related to modules
The same behaviour can be seen without any modules:
A_XYZ = "val1"
A_ABC = {"k1" : "v1", "k2" : "v2"}
A_PQR = 1
B_XYZ = A_XYZ
B_ABC = A_ABC
B_PQR = A_PQR
A_XYZ = "val2"
A_ABC["k1"] = "z1"
A_ABC["k3"] = "v3"
A_PQR += 1
print B_XYZ
print B_ABC
print B_PQR
Prints:
val1
{'k3': 'v3', 'k2': 'v2', 'k1': 'z1'}
1
# Solution
Well, don't keep reference to the temporary object. Use the variable which has
the correct value.
For example, in module `B`:
import A
class Cls_B():
def sm_fn_B(self) :
A.Cls_A().sm_fn_A()
print A.XYZ
print A.ABC
print A.PQR
Cls_B().sm_fn_B()
Now there is actually no `B.XYZ` variable that could go stale; `A.XYZ` is
always used.
|
Catch anything and save it into a variable
Question: I'm wondering if there is a keyword for "all" in a Python `except`. I've run
into this seemingly simple problem:
try:
#do stuff
except any as error:
print('error: {err}'.format(err=error))
I know that you can do `except:` to catch all errors, but I don't know how to
add an `as` keyword to get a `print`able object. I want to catch any error and
be able to get an object for use in printing or something else.
Answer: You can catch almost anything this way:
try:
#do stuff
except Exception as error:
print('error: {err}'.format(err=error))
But to catch really everything, you can do this:
import sys
try:
#do stuff
except:
err_type, error, traceback = sys.exc_info()
print('error: {err}'.format(err=error))
|
Remote command does not return python
Question: I am rebooting a remote machine through Python. Since it is a reboot, the
current SSH session is killed and the request never returns. I am not
interested in the return value, though.
    os.system('sshpass -p password ssh user@host reboot')
The code is performing the reboot, however the current session is killed and
the script never returns.
I can do an async and just ignore the thread, any other easy options?
Answer: I'm surprised that the script doesn't return. The connection should be reset
by the remote host before it reboots. You can run the process async; the one
problem is that subprocesses not cleaned up by their parents become zombies
(they still take up space in the process table). You can add a Timer to give
the script time to do its dirty work and then clean it up in the background.
Notice that I switched that command to a list of parameters and skipped
setting `shell=True`. It just means that no intermediate shell process is
executed.
    import sys
    import time
    import subprocess as subp
    import threading

    def kill_process(proc):
        # kill the process if it is still alive
        if proc.poll() is None:
            proc.kill()
        time.sleep(.1)
        if proc.poll() is None:
            sys.stderr.write("Proc won't die\n")
            return
        # clean up the dead process
        proc.wait()

    # password comes from the question's context
    proc = subp.Popen(['sshpass', '-p', password, 'ssh', 'user@host', 'reboot'])
    threading.Timer(5, kill_process, args=(proc,)).start()
    # continue on your busy day...
|
python receive image over socket
Question: I'm trying to send an image over a socket - I'm capturing an image on my
raspberry pi using pycam, sending to a remote machine for processing, and
sending a response back.
On the server (the pi), I'm capturing the image, transforming to an array,
rearranging to a 1D array and using the tostring() method.
On the client, the received string is not the same length. Any thoughts on
what is going wrong here? Attached is the code I'm running, as well as the
output on both the server and the client.
SERVER CODE:
from picamera.array import PiRGBArray
from picamera import PiCamera
import socket
import numpy as np
from time import sleep
import sys
camera = PiCamera()
camera.resolution = (640,480)
rawCapture = PiRGBArray(camera)
s = socket.socket()
host = 'myHost'
port = 12345
s.bind((host,port))
s.listen(1)
while True:
c,addr = s.accept()
signal = c.recv(1024)
print 'received signal: ' + signal
if signal == '1':
camera.start_preview()
sleep(2)
camera.capture(rawCapture, format = 'bgr')
image = rawCapture.array
print np.shape(image)
out = np.reshape(image,640*480*3)
print out.dtype
print 'sending file length: ' + str(len(out))
c.send(str(len(out)))
print 'sending file'
c.send(out.tostring())
print 'sent'
c.close()
break
CLIENT CODE:
import socket, pickle
import cv2
import numpy as np
host = '192.168.1.87'
port = 12345
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send('1')
#while true:
x = long(s.recv(1024))
rawPic = s.recv(x)
print 'Received'
print x
print len(rawPic)
type(rawPic)
#EDITED TO INCLUDE DTYPE
image = np.fromstring(rawPic,np.uint8)
s.close()
SERVER OUTPUT:
received signal: 1
(480, 640, 3)
uint8
sending file length: 921600
sending file
CLIENT OUTPUT:
Received
921600
27740
str
ValueError Traceback (most recent call last)
<ipython-input-15-9c39eaa92454> in <module>()
----> 1 image = np.fromstring(rawPic)
ValueError: string size must be a multiple of element size
I'm wondering if the issue is that I'm calling tostring() on a uint8, and
fromstring() is assuming it's a uint32? I can't figure out why the received
string is so much smaller than what is sent.
**EDIT** It seems for some reason the server is not fully sending the file. It
never prints 'sent', which it should do at completion. If I change the send
line to:
c.send(str(len(out[0:100])))
print 'sending file'
c.send(out[0:100].tostring())
Everything works fine. Thoughts on what could be cutting off my sent file
midway through?
Answer: # Decoding to the Proper Type
When you call `tostring()`, datatype (and shape) information is lost. You must
supply numpy with the datatype you expect.
Ex:
import numpy as np
image = np.random.random((50, 50)).astype(np.uint8)
image_str = image.tostring()
# Works
image_decoded = np.fromstring(image_str, np.uint8)
# Error (dtype defaults to float)
image_decoded = np.fromstring(image_str)
# Recovering Shape
If shape is always fixed, you can do
image_with_proper_shape = np.reshape(image_decoded, (480, 640, 3))
In the client.
Otherwise, you'll have to include shape information in your message, to be
decoded in the client.
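For example, a small fixed-size header is enough. A minimal sketch (not the
poster's code; in the real socket program you would also need to `recv` in a
loop until all the announced bytes have arrived):
    import struct
    import numpy as np
    # Sender side: prepend height/width/channels as 3 unsigned ints (12 bytes)
    image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for rawCapture.array
    message = struct.pack('!III', *image.shape) + image.tostring()
    # Receiver side: read the header first, then rebuild the array
    h, w, c = struct.unpack('!III', message[:12])
    decoded = np.fromstring(message[12:], np.uint8).reshape((h, w, c))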
|
How do I print a variable within a class' if statement?
Question: I'm trying to print a variable in Python using the following code:
from time import sleep
import random
class Hero:
def __init__(self,name):
self.name = name
if name == "rock":
self.health = 50
self.attack = 10
elif name == "paper":
self.health = 70
self.attack = 7
elif name == "scissors":
self.health = 100
self.attack = 5
def dmg(self, other):
other.ehealth -= self.attack
start = 1
while start == 1:
name = input("Pick a class [rock/paper/scissors]")
if name != "rock" or "paper" or "scissors":
print ("That player does not exist. Try again.")
start = 1
else:
start = 1
player = Hero(name)
enemyName = ["erock", "epaper", "escissors"]
ename = random.choice(enemyName)
print ("Your character is", name, "which comes with", self.health, "health, and", self.attack, "attack.")
print("")
sleep(1)
print ("Your enemy is", ename, "which comes with", ehealth, "health, and", eattack, "attack.")
I can't figure out a way to access and print the variable "self.health" under
the "Hero" class. I can access "name" just fine. Maybe it's the fact that it's
under an if statement? Can someone help me out?
Answer: `self` is just a parameter name used inside the methods. Don't use it outside
the method.
To access the variables refer to them using the object name (`player`) like
this
player.health
The `name` variable you are printing "works" because it's not from the object.
You should use the same notation to access that too:
player.name
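So the first print line from the question could become something like this (a
sketch; the enemy stats would need their own `Hero`-style object, which the
posted code does not create yet):
    print("Your character is", player.name, "which comes with",
          player.health, "health, and", player.attack, "attack.")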
|
Saving image in python
Question: I'm new to Python. What I want to do is to read in an image, convert it to
grayscale and save it.
This is what I have so far:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
#Greyvalue image
im = Image.open(args["image"])
im_grey = im.convert('LA') # convert to grayscale
Now my problem is how to save it. As far as I understood there are a lot of
different modules (using Python 2.7). Could someone give me an example?
Thanks
Answer: **Method 1: save method**
im_grey.save('greyscale.png')
Use Image_object.save() method
**Method 2: imsave method**
import matplotlib.image as mpimg
mpimg.imsave("greyscale.png", im_grey)
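Putting it together with the question's snippet, a minimal end-to-end sketch
(assuming Pillow/PIL is installed and `-i` points at a readable image):
    import argparse
    from PIL import Image
    # parse the path, open the image, convert to grayscale and save it
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", required=True, help="Path to the image")
    args = vars(ap.parse_args())
    im_grey = Image.open(args["image"]).convert('LA')
    im_grey.save('greyscale.png')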
|
Django 1.10.2 error "NoReverseMatch at " ,"Reverse for 'django.contrib.auth.views.login' with arguments '()' and keyword arguments '{}' not found."
Question: I am new to python & Django. I am getting one error and have absolutely no
idea how to solve it. Any help will be appreciated. [![enter image description
here](https://i.stack.imgur.com/4Cw94.png)](https://i.stack.imgur.com/4Cw94.png)
from django.shortcuts import render
# Create your views here.
#log/views.py
from django.shortcuts import render
from django.contrib.auth.decorators import login_required
# Create your views here.
# this login required decorator is to not allow to any
# view without authenticating
@login_required(login_url="login/")
def home(request):
return render(request,"home.html")
The code in urls.py is,
from django.conf.urls import include,url
from django.contrib import admin
from django.contrib.auth import views as auth_views
from log.forms import LoginForm
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'', include('log.urls')),
url(r'^login/$', auth_views.login ,{'template_name': 'login.html','authentication_form': LoginForm}),
url(r'^logout/$', auth_views.logout, {'next_page': '/login'}),
]
The code in login.html is,
{% extends 'base.html' %}
{% block content %}
{% if form.errors %}
<p>Your username and password didn't match. Please try again.</p>
{% endif %}
{% if next %}
{% if user.is_authenticated %}
<p>Your account doesn't have access to this page. To proceed,
please login with an account that has access.</p>
{% else %}
<p>Please login to see this page.</p>
{% endif %}
{% endif %}
<div class="container">
<div class="row">
<div class="col-md-4 col-md-offset-4">
<div class="login-panel panel panel-default">
<div class="panel-heading">
<h3 class="panel-title">Please Sign In</h3>
</div>
<div class="panel-body">
<form method="post" action="{% url 'django.contrib.auth.views.login' %}">
{% csrf_token %}
<p class="bs-component">
<table>
<tr>
<td>{{ form.username.label_tag }}</td>
<td>{{ form.username }}</td>
</tr>
<tr>
<td>{{ form.password.label_tag }}</td>
<td>{{ form.password }}</td>
</tr>
</table>
</p>
<p class="bs-component">
<center>
<input class="btn btn-success btn-sm" type="submit" value="login" />
</center>
</p>
<input type="hidden" name="next" value="{{ next }}" />
</form>
</div>
</div>
</div>
</div>
</div>
{% endblock %}
{% block javascript %}
<script>
{% if not user.is_authenticated %}
$("ul.nav.navbar-nav.navbar-right").css("display","none");
{% endif %}
</script>
{% endblock %}
Hope this much info will do....
Answer: Add a `name` to the login _url pattern_ :
kwargs = {'template_name': 'login.html','authentication_form': LoginForm}
...
url(r'^login/$', auth_views.login, kwargs=kwargs, name='login'),
# ^^^^
and then use that `name` in your template:
<form method="post" action="{% url 'login' %}">
|
Word document to python-docx
Question: **Objective:** I use a word template to which I want to pass paragraph values
from python.
**Pipeline:** The pipeline involves using python-docx and sending the output
paragraphs to python-docx, thus creating a docx file.
from docx import Document
from docx.shared import Inches
document = Document()
document.add_heading('Document Title', 0)
r = """sample paragraph"""
p = document.add_paragraph(r)
document.add_page_break()
document.save('Test.docx')
**Question:**
I already have a sample template that I want to use. Is it possible to create a
blueprint of the template using python-docx and continue with the content
blocking? By blueprint, I mean that the section header, footer, margins and
spacing, to name a few, must be preserved or coded in python-docx format
automatically, so that I can send sample paragraphs to the relevant section.
If I need to create a template using another one as a base, I believe I need to
hard-code the sections, margins and styles again in python-docx. Is there a way
to avoid this?
Answer: I think the approach you'll find most useful is to define **paragraph styles**
in your template document that embody the paragraph and character formatting
for the different types of paragraphs you want (heading, body paragraph,
etc.), and then apply the correct style to each paragraph as you add it.
> <http://python-docx.readthedocs.io/en/latest/user/styles-understanding.html>
> <http://python-docx.readthedocs.io/en/latest/user/styles-using.html>
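A minimal sketch of that approach (the file name 'template.docx' and the style
names are assumptions; they must exist in your template):
    from docx import Document
    # Opening the template inherits its sections, margins, headers/footers and styles
    document = Document('template.docx')
    document.add_paragraph('Document Title', style='Heading 1')
    document.add_paragraph('sample paragraph', style='Body Text')
    document.save('Test.docx')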
You'll still need to write the document from "top" to "bottom". If the items
don't arrive in sequence you'll probably want to keep them organized in a
memory data structure until you have them all and then write the document from
that data structure.
There are ways around this, but there's no notion of a "cursor" in python-docx
(yet) where you can insert a paragraph at an arbitrary location.
|
How to smooth lines in a figure in python?
Question: So with the code below I can plot a figure with 3 lines, but they are angular.
Is it possible to smooth the lines?
import matplotlib.pyplot as plt
import pandas as pd
    # Dataframe consists of 3 columns
    df = pd.DataFrame()
    df['year'] = [2005, 2005, 2005, 2015, 2015, 2015, 2030, 2030, 2030]
df['name'] = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
df['weight'] = [80, 65, 88, 65, 60, 70, 60, 55, 65]
fig,ax = plt.subplots()
# plot figure to see how the weight develops through the years
for name in ['A','B','C']:
ax.plot(df[df.name==name].year,df[df.name==name].weight,label=name)
ax.set_xlabel("year")
ax.set_ylabel("weight")
ax.legend(loc='best')
Answer: You should apply interpolation on your data and it shouldn't be "linear". Here
I applied the "cubic" interpolation using scipy's `interp1d`. Also, note that
for using cubic interpolation your data should have at least 4 points. So I
added another year, 2031, and another value to all weights (I got the new
weight value by subtracting 1 from the last value of the weights):
Here's the code:
import matplotlib.pyplot as plt
import pandas as pd
from scipy.interpolate import interp1d
import numpy as np
# df['year'] = ['2005, 2005, 2005, 2015, 2015, 2015, 2030, 2030, 2030']
# df['name'] = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
# df['weight'] = [80, 65, 88, 65, 60, 70, 60, 55, 65]
df1 = pd.DataFrame()
df1['Weight_A'] = [80, 65, 60 ,59]
df1['Weight_B'] = [65, 60, 55 ,54]
df1['Weight_C'] = [88, 70, 65 ,64]
df1.index = [2005,2015,2030,2031]
ax = df1.plot.line()
ax.set_title('Before interpolation')
ax.set_xlabel("year")
ax.set_ylabel("weight")
f1 = interp1d(df1.index, df1['Weight_A'],kind='cubic')
f2 = interp1d(df1.index, df1['Weight_B'],kind='cubic')
f3 = interp1d(df1.index, df1['Weight_C'],kind='cubic')
df2 = pd.DataFrame()
new_index = np.arange(2005,2031)
df2['Weight_A'] = f1(new_index)
df2['Weight_B'] = f2(new_index)
df2['Weight_C'] = f3(new_index)
df2.index = new_index
ax2 = df2.plot.line()
ax2.set_title('After interpolation')
ax2.set_xlabel("year")
ax2.set_ylabel("weight")
plt.show()
And the results:
[![Before
interpolation](https://i.stack.imgur.com/QnDvl.png)](https://i.stack.imgur.com/QnDvl.png)
[![After
interpolation](https://i.stack.imgur.com/aUJGs.png)](https://i.stack.imgur.com/aUJGs.png)
|
Why z3c.RML ignores the pageSize Attribute of <template>
Question: I am trying to get an A4 landscape output file. The document I am modifying is
A4 portrait, so I thought a simple switch from **_pageSize="(21cm, 29.7cm)"_** to
**_pageSize="(29.7cm, 21cm)"_** would do it, but nothing happened.
I then found an attribute: **_rotation="90"_**. The page on the screen is
still A4 portrait, but the content is turned around by 90 degrees. On paper that
would be fine, but on screen I have to turn my head by 90 degrees, which is not
very comfortable.
After this I tried **_pageSize="(10cm, 10cm)"_**, thinking this should look
terrible, but nothing changed.
Could it be possible that the size of the generated PDF file is set in
the Python code and not by the RML file?
This is the Python Code:
#!venv/bin/python
# -*- coding: utf-8 -*-
from z3c.rml import pagetemplate
rmlPageTemplate = pagetemplate.RMLPageTemplateFile("test.rml")
open('test.pdf', 'wb').write(rmlPageTemplate())
My RML file looks like:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE document SYSTEM "rml_1_0.dtd">
    <document filename="test.pdf">
<docinit>
...
</docinit>
<template pageSize="(10cm, 10cm)"
rotation="90"
leftMargin="2.5cm"
rightMargin="2.5cm"
topMargin="2.5cm"
bottomMargin="2.5cm"
showBoundary="1"
>
<pageTemplate id="main">
<frame id="first" x1="2.5cm" y1="2.5cm" width="24.7cm" height="16cm" showBoundary="1"/>
</pageTemplate>
</template>
<stylesheet>
...
</stylesheet>
<story>
...
</story>
</document>
Thank you very much.
Answer: <https://github.com/zopefoundation/z3c.rml/blob/master/RML-DIFFERENCES.rst>
**RML2PDF and z3c.rml Implementation Differences**
This document outlines the differences between ReportLab Inc.'s RML2PDF
library and z3c.rml.
**Incompatibilities**
_pageSize_: This is called _pagesize_ in this implementation to match
the API.
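So in the RML above, renaming the attribute should be enough; a sketch of the
corrected `<template>` element (dropping `rotation` and keeping the other
attributes as they are):
    <template pagesize="(29.7cm, 21cm)"
              leftMargin="2.5cm"
              rightMargin="2.5cm"
              topMargin="2.5cm"
              bottomMargin="2.5cm"
              showBoundary="1"
              >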
|
Matplotlib axis not displayed
Question: The python code (python 2.7) running on windows 7 shown below results in the
following inconsistent behaviour with respect to the display of axis which I
do not understand:
1 - a window is opened and a plot without an axis is displayed showing a point
2 - on closing the window, another window is opened and a plot is displayed
showing the same point but this time with an axis.
from osgeo import ogr
import pylab
from ospybook.vectorplotter import VectorPlotter
vp = VectorPlotter(False)
myLoc = ogr.Geometry(ogr.wkbPoint)
myLoc.AddPoint(59.5,13)
vp.plot(myLoc,'rs')
pylab.show() ## the plot is displayed --without-- axes displayed
myLoc.AddPoint(59.5,13)
vp.plot(myLoc,'rs')
pylab.show() ## the plot is displayed with axes displayed
Please note that in my environment, if the vector plotter interactive mode is
set to True, pylab.show() opens a window but no plot is displayed.
Answer: try
vp = VectorPlotter(interactive=False, ticks=True)
|
Using external class methods inside the imported module
Question: My python application consists of various _separate_ processing
algorithms/modules combined within a single (Py)Qt GUI for ease of access.
Every processing algorithm sits within its own module and all the
communication with GUI elements is implemented within a single class in the
main module. In particular, this GUI class has a progressbar
([QProgressBar](http://pyqt.sourceforge.net/Docs/PyQt4/qprogressbar.html))
object designed to represent the current processing progress of a chosen
algorithm. This object has `.setValue()` method
(`self.dlg.progressBar.setValue(int)`).
The problem is that since `self.dlg.progressBar.setValue()` is a class method
I cannot use it inside my imported processing modules _to report their
progress state within their own code_.
The only workaround I found is to add `progressbar` variable to definition of
each processing module, pass there `self.dlg.progressBar` inside the main
module and then blindly call `progressbar.setValue(%some_processing_var%)`
inside the processing module.
Is this the only way to use outer class methods inside imported modules or are
there better ways?
Answer: No. I think this approach somewhat breaks software engineering principles
(e.g. [single
responsibility](https://en.wikipedia.org/wiki/Single_responsibility_principle)).
Under the single responsibility principle, each module is only in charge of its
assigned task and nothing else. If we consider the UI a separate layer, your
processing modules shouldn't have anything to do with the UI layer.
In this case, your modules should have a method called
`publish_progress(callback)`, where `callback` is a function to be called for
each progress step ([more
info](https://en.wikipedia.org/wiki/Callback_\(computer_programming\))). Then,
in your UI layer define a function which is given an integer (between 0 and
100) and updates the progress bar. Once you've defined it, register
it with the `publish_progress` method of your modules.
def progress_callback(prg):
self.dlg.progressBar.setValue(prg)
Registering it:
my_module.publish_progress(progress_callback)
Calling the callback in your module:
progress_callback(0)
...
# do something
...
progress_callback(20)
...
# do something
...
progress_callback(100)
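Putting the module side together, a minimal sketch (the module and function
names other than `publish_progress` are made up for illustration):
    # processing_module.py -- sketch of the callback registration
    _progress_callback = None
    def publish_progress(callback):
        """Register a function that will be called with an int between 0 and 100."""
        global _progress_callback
        _progress_callback = callback
    def run_algorithm(items):
        total = len(items)
        for i, item in enumerate(items, start=1):
            # ... process item ...
            if _progress_callback is not None:
                _progress_callback(int(100 * i / total))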
|
Python read lines from a file and write from a specific line number to another file
Question: I want to read the lines from a file and write from a specific line number to
another file. I have this script, which writes all the read lines. I need to
skip the first four lines and write the rest to another file. Any ideas?
for k in range (0,16):
print 'k =',k
from abaqus import session
k=k+1
print k
f1 = open('VY_NM_VR_lin_o1_bonded_results_{k}.txt'.format(k=k))
#with open('VY_{k}'.format(k=k), 'a') as f1:
lines = f1.readlines()
for i, line in enumerate(lines):
#print i
print(repr(line))
#if line.startswith(searchquery):
f2.write(line)
#f2.write('%s'%listc + "\n")
i = i+1
#else :
# i = i+1
#os.close(f1)
f1.close()
f2.close()
Answer: [`itertools.islice` is designed for
this](https://docs.python.org/3/library/itertools.html#itertools.islice):
import itertools
with open('VY_NM_VR_lin_o1_bonded_results_{k}.txt'.format(k=k)) as f1:
# islice w/4 & None skips first four lines of f1, then generates the rest,
# and writelines can take that iterator directly to write them all out
f2.writelines(itertools.islice(f1, 4, None))
If you need to process the lines as you go, then skip `writelines` and go back
to:
for line in itertools.islice(f1, 4, None):
... do stuff with line ...
f2.write(line)
Either way, you never even see the first four lines (Python is reading them
and discarding them for you seamlessly).
|
Python type checking not working as expected
Question: I'm sure I'm missing something obvious here, but why does the following script
actually work?
import enum
import typing
class States(enum.Enum):
a = 1
b = 2
states = typing.NewType('states', States)
def f(x: states) -> states:
return x
print(
f(States.b),
f(3)
)
As far I understand it, it should fail on the call `f(3)`, however it doesn't.
Can someone shed some light on this behaviour?
Answer: No checking is performed by Python itself. This is specified in the ["Non-
Goals" section](https://www.python.org/dev/peps/pep-0484/#non-goals) of PEP
484. When executed (i.e during run-time), Python completely ignores the
annotations you provided and evaluates your statements as it usually does,
dynamically.
If you need type checking, you should perform it yourself. This can currently
be performed by static type checking tools like
[`mypy`](http://mypy.readthedocs.io/en/latest/).
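If you want the check to happen at run time as well, you have to code it
explicitly yourself; a minimal sketch using the `States` enum directly:
    def f(x: States) -> States:
        # explicit run-time check; typing.NewType alone does nothing when the code runs
        if not isinstance(x, States):
            raise TypeError('expected a States member, got {!r}'.format(x))
        return x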
|
Python HTML source code
Question: I would like to write a script that picks a specific piece of the source code
and returns (prints) it.
import urllib.request
Webseite = "http://myip.is/"
html_code = urllib.request.urlopen(Webseite)
print(html_code.read().decode('ISO-8859-1'))
This is my current code. I would like to print only the IP address that the
website shows, i.e. the value of the element with title="copy ip address".
Answer: You could use [jsonip](http://jsonip.com), which returns a JSON object that you
can easily parse using the standard Python library:
import json
from urllib2 import urlopen
my_ip = json.load(urlopen('http://jsonip.com'))['ip']
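If you are on Python 3 (as the `urllib.request` import in the question
suggests), the equivalent would be something like this sketch:
    import json
    from urllib.request import urlopen
    # jsonip.com returns a small JSON document such as {"ip": "1.2.3.4", ...}
    my_ip = json.loads(urlopen('http://jsonip.com').read().decode('utf-8'))['ip']
    print(my_ip)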
|
python os.walk and unicode error
Question: two questions: 1. why does
In [21]:
....: for root, dir, file in os.walk(spath):
....: print(root)
print the whole tree but
In [6]: for dirs in os.walk(spath):
...: print(dirs)
chokes on this unicode error?
UnicodeEncodeError: 'charmap' codec can't encode character '\u2122' in position 1477: character maps to <undefined>
[NOTE: this is the TM symbol]
2. I looked at these answers
[Scraping works well until I get this error: 'ascii' codec can't encode
character u'\u2122' in
position](http://stackoverflow.com/questions/22184178/scraping-works-well-
until-i-get-this-error-ascii-codec-cant-encode-character)
[What's the deal with Python 3.4, Unicode, different languages and
Windows?](http://stackoverflow.com/questions/30539882/whats-the-deal-with-
python-3-4-unicode-different-languages-and-windows/30551552#30551552)
[python 3.2 UnicodeEncodeError: 'charmap' codec can't encode character
'\u2013' in position 9629: character maps to
<undefined>](http://stackoverflow.com/questions/16346914/python-3-2-unicodeencodeerror-
charmap-codec-cant-encode-character-u2013-i?noredirect=1&lq=1)
<https://github.com/Drekin/win-unicode-console>
[https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default](https://docs.python.org/3/search.html?q=IncrementalDecoder&check_keywords=yes&area=default)
and tried these variations
----> 1 print(dirs, encoding='utf-8')
TypeError: 'encoding' is an invalid keyword argument for this function
In [11]: >>> u'\u2122'.encode('ascii', 'ignore')
Out[11]: b''
print(dirs).encode(‘utf=8’)
all to no effect.
This was done with python 3.4.3 and visual studio code 1.6.1 on Windows 10.
The default settings in Visual Studio Code include:
> // The default character set encoding to use when reading and writing files.
> "files.encoding": "utf8",
python 3.4.3 visual studio code 1.6.1 ipython 3.0.0
**UPDATE EDIT** I tried this again in the Sublime Text REPL, running a script.
Here's what I got:
# -*- coding: utf-8 -*-
import os
spath = 'C:/Users/Semantic/Documents/Align'
with open('os_walk4_align.txt', 'w') as f:
for path, dirs, filenames in os.walk(spath):
print(path, dirs, filenames, file=f)
Traceback (most recent call last):
File "listdir_test1.py", line 8, in <module>
print(path, dirs, filenames, file=f)
File "C:\Python34\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2605' in position 300: character maps to <undefined>
This code is only 217 characters long, so where does ‘position 300’ come from?
Answer: Here's a test case:
C:\TEST
├───dir1
│ file1™
│
└───dir2
file2
Here's a script (Python 3.x):
import os
spath = r'c:\test'
for root,dirs,files in os.walk(spath):
print(root)
for dirs in os.walk(spath):
print(dirs)
Here's the output, on an IDE that supports UTF-8 (PythonWin, in this case):
c:\test
c:\test\dir1
c:\test\dir2
('c:\\test', ['dir1', 'dir2'], [])
('c:\\test\\dir1', [], ['file1™'])
('c:\\test\\dir2', [], ['file2'])
Here's the output, on my Windows console, which defaults to `cp437`:
c:\test
c:\test\dir1
c:\test\dir2
('c:\\test', ['dir1', 'dir2'], [])
Traceback (most recent call last):
File "C:\test.py", line 9, in <module>
print(dirs)
File "C:\Python33\lib\encodings\cp437.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u2122' in position 47: character maps to <undefined>
For Question 1, the reason `print(root)` works is that no directory had a
character that wasn't supported by the output encoding, but `print(dirs)` is
now printing a tuple containing `(root,dirs,files)` and one of the files has
an unsupported character in the Windows console.
For Question 2, the first example misspelled `utf-8` as `utf=8`, and the
second example didn't declare an encoding for the file the output was written
to, so it used a default that didn't support the character.
Try this:
import os
spath = r'c:\test'
with open('os_walk4_align.txt', 'w', encoding='utf8') as f:
for path, dirs, filenames in os.walk(spath):
print(path, dirs, filenames, file=f)
Content of `os_walk4_align.txt`, encoded in UTF-8:
c:\test ['dir1', 'dir2'] []
c:\test\dir1 [] ['file1™']
c:\test\dir2 [] ['file2']
|
Python: [Errno 2] No such file or directory - weird issue
Question: I'm learning with a tutorial [Create your own shell in
Python](https://hackercollider.com/articles/2016/07/05/create-your-own-shell-
in-python-part-1/) and I have some weird issue. I wrote following code:
import sys
import shlex
import os
SHELL_STATUS_RUN = 1
SHELL_STATUS_STOP = 0
def shell_loop():
status = SHELL_STATUS_RUN
while status == SHELL_STATUS_RUN:
sys.stdout.write('> ') #display a command prompt
sys.stdout.flush()
cmd = sys.stdin.readline() #read command input
cmd_tokens = tokenize(cmd) #tokenize the command input
status = execute(cmd_tokens) #execute the command and retrieve new status
def main():
shell_loop()
def tokenize(string):
return shlex.split(string)
def execute(cmd_tokens): #execute command
os.execvp(cmd_tokens[0], cmd_tokens) #return status indicating to wait for the next command in shell_loop
return SHELL_STATUS_RUN
if __name__ == "__main__":
main()
And now when I type a "mkdir folder" command it returns the error: `[Errno 2]
No such file or directory`. BUT if I first run the "help" command, which
works correctly (it displays all available commands), the mkdir command works
correctly and creates a folder. Please guide me on what's wrong with my
code. I'm writing in Notepad++ on Windows 8.1 64-bit.
Answer: Copy-paste from the comments in my link (thanks to [Ari
Gold](http://stackoverflow.com/users/3009212/ari-gold)):
Hi tyh, it seems like you tried it on Windows. (I forgot to note that it works
on Linux and Mac or Unix-like emulator like Cygwin only)
For the first problem, it seems like it cannot find `mkdir` command on your
system environment. You might find the directory that the `mkdir` binary
resides in and use execvpe() to explicitly specify environment instead
For the second problem, the `os` module on Windows has no fork() function.
However, I suggest you to use Cygwin on Windows to emulate Unix like
environment and two problems above should be gone.
In Windows 10, there is Linux-Bash, which might work, but I never try.
|
Python - insert lines on txt following a sequence without overwriting
Question: I want to insert the name of a file before and after each file name obtained through
glob.glob, so I can concatenate them through FFMPEG by sorting them into
INTRO+VIDEO+OUTRO. The files have to follow this order:
INSERTED FILE NAME
FILE
INSERTED FILE NAME
INSERTED FILE NAME
FILE
INSERTED FILE NAME
INSERTED FILE NAME
FILE
INSERTED FILE NAME
This is this code I'm using:
import glob
open("Lista.txt", 'w').close()
file = open("Lista.txt", "a")
list =glob.glob("*.mp4")
for item in list:
file.write("%s\n" % item)
file.close()
f = open("Lista.txt", "r")
contents = f.readlines()
f.close()
for x,item in enumerate(list):
contents.insert(x, "CTA\n")
f = open("Lista.txt", "w")
contents = "".join(contents)
f.write(contents)
print f
f.close()
But I obtain the values in the wrong order:
INSERTED FILE NAME
INSERTED FILE NAME
INSERTED FILE NAME
FILE
FILE
FILE
How could I solve this? EDIT: As pointed out, maybe the issue is being caused
by modifying a list that I'm currently using.
Answer: You are trying to modify the `contents` list in place. I think that if a new
list is used to build the final output, it will be simpler and more readable,
as below. And as the **Zen of Python** states:
**Simple is better than complex.**
1. Consider you got `file_list` as below after doing `glob.glob`.
> file_list = ["a.mp4","b.mp4","c.mp4"]
2. Now you want to add "`INSERTFILE`" before and after every element of `file_list` so that final_list will look like
> final_list = ['INSERTFILE', 'a.mp4', 'INSERTFILE', 'INSERTFILE', 'b.mp4',
> 'INSERTFILE', 'INSERTFILE', 'c.mp4', 'INSERTFILE']
3. This final_list you will write to file.
**_Problem summary is how to achieve step 2._**
Below code will get step 2.
**Code (with comments inline):**
#File List from glob.glob
file_list = ["a.mp4","b.mp4","c.mp4"]
#File to be added
i = "INSERTFILE"
final_list = []
for x in file_list:
#Create temp_list
temp_list = [i,x,i]
#Extend (not append) to final_list
final_list.extend(temp_list)
#Clean temp_list for next iteration
temp_list = []
print (final_list)
**Output:**
C:\Users\dinesh_pundkar\Desktop>python c.py
['INSERTFILE', 'a.mp4', 'INSERTFILE', 'INSERTFILE', 'b.mp4', 'INSERTFILE', 'INSERTFILE', 'c.mp4', 'INSERTFILE']
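Step 3 is then just a plain write (a sketch, using the same `Lista.txt` name as
the question):
    with open("Lista.txt", "w") as f:
        f.write("\n".join(final_list) + "\n")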
|
I am using Kivy in Python and only the last button has it's embedded objects appearing
Question: I apologize in advance if my question is stupid or obvious, but I have been
researching this over and over and am coming up with nothing. I am currently
using Kivy and have multiple buttons in a gridlayout, which is in a
scrollview. Withing these buttons I have a label and an image. Only the last
of my buttons shows the label and image whenever I run my code. I think it has
something to do with the positions, but I can't figure out what it is. Here is
my code:
Window.clearcolor = (0.937, 0.698, 0.176, 1)
class Solis(App):
def build(self):
b1 = Button(text= "",
background_color=(237, 157, 59, 1),
size_hint_y= None,
background_normal = '',
valign = 'middle',
font_size=20)
lab1 = Label(text= Content1,
color= (0.937, 0.698, 0.176, 1),
valign= 'middle',
pos=(b1.x , b1.height / 2))
b1I = AsyncImage(source = lead1image)
def callback1(self):
webbrowser.open(lead1)
b1.bind(on_press=callback1)
b2 = Button(text= "",
background_color=(237, 157, 59, 1),
size_hint_y= None,
background_normal = '',
valign = 'middle',
font_size=20)
lab2 = Label(text= Content2,
color= (0.937, 0.698, 0.176, 1),
valign= 'middle',
halign= 'center',
pos=(b2.x + 800, b2.height))
b2I = AsyncImage(source = lead2image)
def callback2(self):
webbrowser.open(lead2)
b2.bind(on_press=callback2)
b3 = Button(text= "",
background_color=(237, 157, 59, 1),
size_hint_y= None,
background_normal = '',
valign = 'middle',
font_size=20)
lab3 = Label(text= Content3,
color= (0.937, 0.698, 0.176, 1),
valign= 'middle',
pos=(b3.x + 800, b3.height / 4))
b3I = AsyncImage(source = lead3image)
def callback3(self):
webbrowser.open(lead3)
b3.bind(on_press=callback3)
l = GridLayout(cols=1, spacing=10, size_hint_y=None, orientation = 'vertical')
l.bind(minimum_height=l.setter('height'))
s = ScrollView(size_hint=(1, None), size=(Window.width, Window.height))
s.add_widget(l)
l.add_widget(b1)
l.add_widget(b2)
l.add_widget(b3)
b1.add_widget(b1I)
b1.add_widget(lab1)
b2.add_widget(b2I)
b2.add_widget(lab2)
b3.add_widget(b3I)
b3.add_widget(lab3)
return s
if __name__ == "__main__":
Solis().run()
And here is my result [here](https://i.stack.imgur.com/CMkTi.png)
Answer: Here is something that should help you get what you want:
A main.py like this:
from kivy.app import App
import webbrowser
class Solis(App):
def __init__(self, **kwargs):
super(Solis, self).__init__(**kwargs)
self.lead1image='https://pbs.twimg.com/profile_images/562300519008333825/6WcGRXLU.png'
self.lead2image='https://pbs.twimg.com/profile_images/439154912719413248/pUBY5pVj.png'
self.lead3image='https://pbs.twimg.com/profile_images/424495004/GuidoAvatar.jpg'
def callback(self,url):
webbrowser.open(url)
if __name__ == '__main__':
Solis().run()
And a solis.kv like this:
ScrollView:
GridLayout:
cols:1
size_hint_y:None
height: self.minimum_height
Button:
size_hint_y: None
height:'50dp'
on_press: app.callback(app.lead1image)
AsyncImage:
id:im1
size_hint: None, None
height: self.parent.height
width: '48dp'
source: app.lead1image
pos: (self.parent.x, self.parent.y)
Label:
size_hint: None, None
height:self.parent.height
width: self.parent.width - im1.width
text: 'Text for the 1st Label'
pos: (self.parent.x, self.parent.y)
Button:
size_hint_y: None
height:'50dp'
on_press: app.callback(app.lead2image)
AsyncImage:
size_hint: None, None
height: self.parent.height
width: '48dp'
source: app.lead2image
pos: (self.parent.x, self.parent.y)
Label:
size_hint: None, None
height:self.parent.height
width: self.parent.width - im1.width
                    text: 'Text for the 2nd Label'
pos: (self.parent.x, self.parent.y)
Button:
size_hint_y: None
height:'50dp'
on_press: app.callback(app.lead3image)
AsyncImage:
size_hint: None, None
height: self.parent.height
width: '48dp'
source: app.lead3image
pos: (self.parent.x, self.parent.y)
Label:
size_hint: None, None
height:self.parent.height
width: self.parent.width - im1.width
                    text: 'Text for the 3rd Label'
pos: (self.parent.x, self.parent.y)
Thus, there are now two files (a ".py" and a ".kv"): this will help you
separate the logic of your application from its user interface. Note that it
is possible to reduce the length of the code with some customized widgets.
Also, if you want to keep just one .py file, I would add a "for" loop.
Finally, regarding your issue: I guess your labels and images were
superimposed. I avoided this by setting "pos:" explicitly.
|
Python code to return total count of no. of positions in which items are differing at same index
Question: A = [1, 2, 3, 4, 5, 6, 7, 8, 9], B = [1, 2, 3, 7, 4, 6, 5, 8, 9]
I have to compare these two lists and return the number of positions at which
the items differ, using one line of Python code.
For example, the output should be 4 for the given lists because the items
differ at indexes (3, 4, 5, 6), so the program should return 4.
My way of doing this is comparing each and every position using a for loop:
count=0
for i in range(0,len(A)):
if(A[i]==B[i]):
continue
else:
count+=1
print(count)
Please help me in writing one line python code for this.
Answer:
count = sum(a != b for a, b in zip(A, B))
print(count)
or just `print(sum(a != b for a, b in zip(A, B)))`.
You can read about [zip/lambda/map
here](https://bradmontgomery.net/blog/pythons-zip-map-and-lambda/); those
tools are very powerful and important in Python.
[Here](http://stackoverflow.com/questions/14050824/add-sum-of-values-of-two-
lists-into-new-list) you can also check other ways to use those
tools.
Have fun!!
|
How do I find the location of Python module sources while I can not import it?
Question: The answer in [How do I find the location of Python module
sources?](http://stackoverflow.com/questions/269795/how-do-i-find-the-
location-of-python-module-sources) says just import it and print its
`__file__`. But my problem is that I cannot import the library `cv2`; it
raises `ImportError: libcudart.so.7.5: cannot open shared object file: No
such file or directory`, so I cannot get its `__file__` either. I want to find
out where Python imports this library from, so that I can check what is wrong
with the library.
Answer: Try:
    import imp
    # find_module returns (file, pathname, description); the pathname is the location
    print(imp.find_module('cv2')[1])
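On Python 3.4+ you can get the same information through `importlib` (a sketch;
`find_spec` locates the module file without executing it, so the broken
`libcudart` dependency does not get in the way):
    import importlib.util
    spec = importlib.util.find_spec('cv2')
    print(spec.origin if spec is not None else 'cv2 not found on sys.path')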
|
Not Found: /media/app/images/my_image.png
Question: The image is not found, but the image exists at exactly that location.
print(profile.image.url)
#/media/app/images/my_image.png
prints the image location, but when I use it in my template the other columns
are correct while the image is not found
profile = same_table.object.get(pk = 1)
setting:
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(MY_BASE_DIR , 'media')
My media location has no problem, because when I upload an image to the server
it works fine.
Template:
<img src = "{{profile.image.url}}">
Any idea what I missed?
Using Django 1.10 with Python 3.4 on Windows 10
Answer: Add to end of your root urls config `urls.py` this:
from django.conf.urls.static import static
from django.conf import settings
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
|
finding a special path string in HTML text in python
Question: I'm trying to extract a path in an HTML file that I read. In this case the
path that I'm looking for is a logo from google's main site.
I'm pretty sure that the regular expression I defined is right, but I guess
I'm missing something.
The code is:
import re
import urllib
a=urllib.urlopen ('https://www.google.co.il/')
Text = a.read(250)
print Text
print '\n\n'
b= re.search (r'\"\/[a-z0-9 ]*',Text)
print format(b.group(0))
The actual text that I want to get is:
**/images/branding/googleg/1x/googleg_standard_color_128dp.png**
I'd really appreciate it if someone could point me in the right direction
Answer: this can help you:
re.search(r'\"\/.+\"',Text).group(0)
result:
>>> re.search(r'\"\/.+\"',Text).group(0)
'"/images/branding/googleg/1x/googleg_standard_color_128dp.png"'
|