Execute Python scripts with Selenium via Crontab
Question: I have several python scripts that use selenium webdriver on a Debian server.
If I run them manually from the terminal (usually as root) everything is OK,
but every time I try to run them via crontab I get an exception like this:
WebDriverException: Message: Can't load the profile. Profile Dir: /tmp/tmpQ4vStP If you specified a log_file in the FirefoxBinary constructor, check it for details.
Try for example this script:
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from pyvirtualdisplay import Display
from selenium import webdriver
import datetime
import logging

FIREFOX_PATH = '/usr/bin/firefox'

if __name__ == '__main__':
    cur_date = datetime.datetime.now().strftime('%Y-%m-%d')
    logging.basicConfig(filename="./logs/download_{0}.log".format(cur_date),
                        filemode='w',
                        level=logging.DEBUG,
                        format='%(asctime)s - %(levelname)s - %(message)s')
    try:
        display = Display(visible=0, size=(800, 600))
        display.start()
        print 'start'
        logging.info('start')
        binary = FirefoxBinary(FIREFOX_PATH,
                               log_file='/home/egor/dev/test/logs/firefox_binary_log.log')
        driver = webdriver.Firefox()
        driver.get("http://google.com")
        logging.info('title: ' + driver.title)
        driver.quit()
        display.stop()
    except:
        logging.exception('')
    logging.info('finish')
    print 'finish'
The crontab command for it:
0 13 * * * cd "/home/egor/dev/test" && python test.py
The log file for this script looks like this:
2016-09-27 16:30:01,742 - DEBUG - param: "['Xvfb', '-help']"
2016-09-27 16:30:01,743 - DEBUG - command: ['Xvfb', '-help']
2016-09-27 16:30:01,743 - DEBUG - joined command: Xvfb -help
2016-09-27 16:30:01,745 - DEBUG - process was started (pid=23042)
2016-09-27 16:30:01,747 - DEBUG - process has ended
2016-09-27 16:30:01,748 - DEBUG - return code=0
2016-09-27 16:30:01,748 - DEBUG - stdout=
2016-09-27 16:30:01,751 - DEBUG - param: "['Xvfb', '-br', '-nolisten', 'tcp', '-screen', '0', '800x600x24', ':1724']"
2016-09-27 16:30:01,751 - DEBUG - command: ['Xvfb', '-br', '-nolisten', 'tcp', '-screen', '0', '800x600x24', ':1724']
2016-09-27 16:30:01,751 - DEBUG - joined command: Xvfb -br -nolisten tcp -screen 0 800x600x24 :1724
2016-09-27 16:30:01,753 - DEBUG - param: "['Xvfb', '-br', '-nolisten', 'tcp', '-screen', '0', '800x600x24', ':1725']"
2016-09-27 16:30:01,753 - DEBUG - command: ['Xvfb', '-br', '-nolisten', 'tcp', '-screen', '0', '800x600x24', ':1725']
2016-09-27 16:30:01,753 - DEBUG - joined command: Xvfb -br -nolisten tcp -screen 0 800x600x24 :1725
2016-09-27 16:30:01,755 - DEBUG - process was started (pid=23043)
2016-09-27 16:30:01,755 - DEBUG - DISPLAY=:1725
2016-09-27 16:30:01,855 - INFO - start
2016-09-27 16:30:31,965 - ERROR -
Traceback (most recent call last):
File "test.py", line 31, in <module>
driver = webdriver.Firefox()
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 103, in __init__
self.binary, timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
self.binary.launch_browser(self.profile, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser
self._wait_until_connectable(timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 106, in _wait_until_connectable
% (self.profile.path))
WebDriverException: Message: Can't load the profile. Profile Dir: /tmp/tmpQ4vStP If you specified a log_file in the FirefoxBinary constructor, check it for details.
2016-09-27 16:30:31,966 - INFO - finish
What I have tried:
1. Ensured that the script file is owned by root
2. Used `export DISPLAY=:0;` or `export DISPLAY=:99;` in the crontab command
3. Set the HOME variable in the crontab to the home directory of the user that the cronjob runs as
I'm really stuck with this problem.
I have Python 2.7.10, Selenium 2.53.6 with Xvfb, and Firefox 47.0.1 on Debian 7.7.
Answer: Try a hardcoded Firefox binary
<https://seleniumhq.github.io/selenium/docs/api/py/webdriver_firefox/selenium.webdriver.firefox.firefox_binary.html>
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

binary = FirefoxBinary("/your/binary/location/firefox")
driver = webdriver.Firefox(firefox_binary=binary)
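Note that the script in the question already constructs a `FirefoxBinary` (with `FIREFOX_PATH`) but then calls `webdriver.Firefox()` without it, so the hardcoded binary is never used; passing it via `firefox_binary=binary` as shown above is the missing step.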
|
Can't access blob storage via azure-storage package in Python WebJob
Question: I am trying to read/write from blob storage using a Python WebJob on an Azure
App Service. My App Service's requirements.txt file includes the azure-storage
package name: the package is successfully installed via pip during App Service
deployment. However, when I include the following in my WebJob's run.py file:
import sys
sys.path.append('D:\\home\\site\\wwwroot\\env\\Lib\\site-packages')
from azure.storage.blob import BlockBlobService
...I get the following error message at runtime:
[09/27/2016 17:51:09 > 775106: SYS INFO] Status changed to Initializing
[09/27/2016 17:51:09 > 775106: SYS INFO] Run script 'run.py' with script host - 'PythonScriptHost'
[09/27/2016 17:51:09 > 775106: SYS INFO] Status changed to Running
[09/27/2016 17:51:10 > 775106: ERR ] Traceback (most recent call last):
[09/27/2016 17:51:10 > 775106: ERR ] File "run.py", line 11, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from azure.storage.blob import BlockBlobService
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\blob\__init__.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .models import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\blob\models.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .._common_conversion import _to_str
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\_common_conversion.py", line 22, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .models import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\models.py", line 23, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from ._error import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\_error.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from ._common_conversion import _to_str
[09/27/2016 17:51:10 > 775106: ERR ] ImportError: cannot import name '_to_str'
[09/27/2016 17:51:10 > 775106: SYS INFO] Status changed to Failed
[09/27/2016 17:51:10 > 775106: SYS ERR ] Job failed due to exit code 1
FWIW, several other packages were loaded properly using the same approach. Can
anyone suggest a method to get the azure-storage package working in Python
Azure WebJobs?
Answer: Looks like the `six` module is missing. This issue is also tracked in this thread:
<https://github.com/Azure/azure-storage-python/issues/22>. You can fix the issue
by adding `six` to requirements.txt or by installing it manually with `pip install six`.
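As a minimal illustration (hypothetical, adapted from the question's run.py), the dependency can be checked explicitly so the job fails with a clear message when `six` is absent:

import sys
sys.path.append('D:\\home\\site\\wwwroot\\env\\Lib\\site-packages')

try:
    import six  # azure-storage pulls this in at import time
except ImportError:
    sys.exit("six is missing: add it to requirements.txt or run 'pip install six'")

from azure.storage.blob import BlockBlobService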
|
Python parse string into Python dictionary of list
Question: There are two parts to this question:
**I. I'd like to parse a Python string into a list of dictionaries.**
**Here is the Python string:**
../Data.py:92 final computing result as shown below: [historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]
**Expected Python output:**
{
    "data": [
        {
            "id": "A(long) 11A",
            "startdate": "42521",
            "numvaluelist": "0.1065599566767107"
        },
        {
            "id": "A(short) 11B",
            "startdate": "42521",
            "numvaluelist": "0.0038113334533441123"
        },
        {
            "id": "B(long) 11C",
            "startdate": "42521",
            "numvaluelist": "20.061623176440904"
        }
    ]
}
**II. I need to further parse the key values of id and numvaluelist. I am not sure
if there is a better way to do it, so I am converting the string to a Python
dictionary, looping through it, and parsing further. Please guide me if I am
overthinking the solution.**
**Update: Code**
text = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
data = text.strip("../Data.py:92 final computing result as shown below: ")
print data
Answer: Your input raw text looks pretty predictable, try this:
>>> import re
>>> raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
>>> line_re = re.compile(r'\{[^\}]+\}')
>>> records = line_re.findall(raw)
>>> record_re = re.compile(
... r"""
... id:\s*\'(?P<id>[^']+)\'\s*
... startdate:\s*(?P<startdate>\d+)\s*
... numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
... datelist:\s*(?P<datelist>\d+)\s*
... """,
... re.X
... )
>>> record_parsed = record_re.search(line_re.findall(raw)[0])
>>> record_parsed.groupdict()
{'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'datelist': '42521', 'id': 'A(long) 11A'}
>>> for record in records:
... record_parsed = record_re.search(record)
... # Here is where you would do whatever you need with the fields.
To parse the subelements of the id, e.g.:
>>> record_re2 = re.compile(
... r"""
... id:\s*\'
... (?P<id_letter>[A-Z]+)
... \(
... (?P<id_type>[^\)]+)
... \)\s*
... (?P<id_codenum>\d+)
... (?P<id_codeletter>[A-Z]+)
... \'\s*
... startdate:\s*(?P<startdate>\d+)\s*
... numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
... datelist:\s*(?P<datelist>\d+)\s*
... """,
... re.X
... )
>>> record2_parsed = record_re2.search(line_re.findall(raw)[0])
>>> record2_parsed.groupdict()
{'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'id_letter': 'A', 'id_codeletter': 'A', 'datelist': '42521', 'id_type': 'long', 'id_codenum': '11'}
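Putting it together, a sketch of building the asker's expected `{"data": [...]}` structure from the parsed records (keeping only the three fields asked for) might be:

parsed = []
for record in records:
    match = record_re.search(record)
    if match:  # skips the time_statistics and UrlPairList blocks
        fields = match.groupdict()
        parsed.append({k: fields[k] for k in ('id', 'startdate', 'numvaluelist')})
result = {'data': parsed}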
|
Python Selenium 'module' object is not callable in python selenium script
Question: Learning Selenium driven by Python and in my practice I keep getting the
following error. I am stuck and could use some guidance
> Traceback (most recent call last): File "test_login.py", line 14, in
> test_Login loginpage = homePage(self.driver) TypeError: 'module' object is
> not callable
Here is my code
> test_login.py
import unittest
import homePage
from selenium import webdriver

class Login(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.get("https://hub.docker.com/login/")

    def test_Login(self):
        loginpage = homePage(self.driver)
        loginpage.login(email, password)

    def tearDown(self):
        self.driver.close()

if __name__ == '__main__':
    unittest.main()
> homePage.py
from selenium.webdriver.common.by import By

class BasePage(object):

    def __init__(self, driver):
        self.driver = drive

class LoginPage(BasePage):

    locator_dictionary = {
        "userID": (By.XPATH, '//input[@placeholder="Username"]'),
        "passWord": (By.XPATH, '//input[@placeholder="Password"]'),
        "submittButton": (By.XPATH, '//button[text()="Log In"]'),
    }

    def set_userID(self, id):
        userIdElement = self.driver.find_element(*LoginPage.userID)
        userIdElement.send_keys(id)

    def login_error_displayed(self):
        notifcationElement = self.driver.find_element(*LoginPage.loginError)
        return notifcationElement.is_displayed()

    def set_password(self, password):
        pwordElement = self.driver.find_element(*LoginPage.passWord)
        pwordElement.send_keys(password)

    def click_submit(self):
        submitBttn = self.driver.find_element(*LoginPage.submitButton)
        submitBttn.click()

    def login(self, id, password):
        self.set_password(password)
        self.set_email(id)
        self.click_submit()
Any help is appreciated
Answer: I think here:
loginpage = homePage(self.driver)
you meant to instantiate the `LoginPage` class:
loginpage = homePage.LoginPage(self.driver)
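Equivalently, you could import the class directly and keep the call site unchanged:

from homePage import LoginPage

loginpage = LoginPage(self.driver)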
|
Save file before running custom command in Sublime3
Question: This question is similar to this one [Is it possible to chain key binding
commands in sublime text 2?](http://stackoverflow.com/q/9646552/1391441) Some
years have passed since that question (and the answers given), and I'm using
Sublime Text 3 (not 2), so I believe this new question is justified.
I've setup a custom keybind:
{
    "keys": ["f5"],
    "command": "project_venv_repl"
}
to run the `project_venv_repl.py` script:
import sublime_plugin

class ProjectVenvReplCommand(sublime_plugin.TextCommand):
    """
    Starts a SublimeREPL, attempting to use project's specified
    python interpreter.

    Instructions to make this file work taken from:
    http://stackoverflow.com/a/25002696/1391441
    """

    def run(self, edit, open_file='$file'):
        """Called on project_venv_repl command"""
        cmd_list = [self.get_project_interpreter(), '-i', '-u']
        if open_file:
            cmd_list.append(open_file)
        self.repl_open(cmd_list=cmd_list)

    def get_project_interpreter(self):
        """Return the project's specified python interpreter, if any"""
        settings = self.view.settings()
        return settings.get('python_interpreter', '/usr/bin/python')

    def repl_open(self, cmd_list):
        """Open a SublimeREPL using provided commands"""
        self.view.window().run_command(
            'repl_open', {
                'encoding': 'utf8',
                'type': 'subprocess',
                'cmd': cmd_list,
                'cwd': '$file_path',
                'syntax': 'Packages/Python/Python.tmLanguage'
            }
        )
This runs the opened file in a SublimeREPL when the `f5` key is pressed.
What I need is a way to mimic the "Build" shortcut (`Ctrl+B`). This is: when
the `f5` key is pressed, the current (opened) file should be _saved_ before
running the `project_venv_repl` command.
Can this instruction be added to the `project_venv_repl.py` script, or to the
`command` line in the keybind definition?
Answer: There's no need to do anything fancy. If you just want to save the current
view before running the REPL, edit your `ProjectVenvReplCommand` class's
`run()` method (which is called when the `project_venv_repl` command is
executed) and add the following line at the beginning:
self.view.run_command("save")
This will silently save the current view unless it has not been saved before,
in which case a Save As... dialog will open just like usual.
If you want to save all open files in the window, you can use this code:
for open_view in self.view.window().views():
    open_view.run_command("save")
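Applied to the class above, the modified `run()` method would start like this:

def run(self, edit, open_file='$file'):
    """Called on project_venv_repl command"""
    self.view.run_command("save")  # save the current view before launching the REPL
    cmd_list = [self.get_project_interpreter(), '-i', '-u']
    if open_file:
        cmd_list.append(open_file)
    self.repl_open(cmd_list=cmd_list)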
|
Drawing a polygon in Python with a set of random colors
Question: I am working on a simple Python program which prompts the user to enter the
length of the side of a polygon, and the program (using turtle) will draw the
polygon with a random color chosen via random.randint. My code so far is:
import turtle

polygonSideLength = int(input('Enter length of polygon side: \n'))
numberOfSides = 5 + (7 / 4)
turnAngle = 360 / numberOfSides

import random
randomColor = random.randint(0,5)

if randomColor == 0:
    fillcolor="red"
elif randomColor == 1:
    fillcolor="green"
elif randomColor == 2:
    fillcolor="blue"
elif randomColor == 3:
    fillcolor="cyan"
elif randomColor == 4:
    fillcolor="magenta"
elif randomColor == 5:
    fillcolor="yellow"

turtle.begin_fill()
turtle.pen(pensize = 5, pencolor="black", fillcolor = randomColor)
for i in range(numberOfSides):
    turtle.forward(polygonSideLength)
    turtle.right(turnAngle)
turtle.end_fill()
turtle.done()
I have found that the problem in the code is with `fillcolor = randomColor`;
the error I receive is "unknown color name for: 5". I know the randint is
working because the error sometimes reports 1, 2, 3, 4, or 5.
So to sum it up, how do I get the fill color to match the set colors from
random.randint?
Answer: You are indeed choosing a random color with

randomColor = random.randint(0,5)

but when you set that random color on your polygon, you pass an integer (the
value of the `randomColor` variable) instead of a string (the value of the
`fillcolor` variable). The `fillcolor` argument expects a string holding a
color name ("blue", "white", "red", etc.), never an integer.
So please change the following line:

turtle.pen(pensize = 5, pencolor="black", fillcolor = randomColor)

to

turtle.pen(pensize = 5, pencolor="black", fillcolor = fillcolor)
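As an aside, a common simplification (a sketch, not required for the fix) is to pick the color name directly from a list, which removes the if/elif chain entirely:

import random

colors = ["red", "green", "blue", "cyan", "magenta", "yellow"]
fillcolor = random.choice(colors)
turtle.pen(pensize=5, pencolor="black", fillcolor=fillcolor)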
|
Exception: Cannot find PyQt5 plugin directories when using Pyinstaller despite PyQt5 not even being used
Question: A month ago I solved my application freezing issues for Python 2.7, as you can
see [here](http://stackoverflow.com/questions/39135408/using-pyinstaller-on-parmap-causes-a-tkinter-matplotlib-import-error-why).
I have since adapted my code to Python 3.5 (using Anaconda) and it appears to be working.
I couldn't get pyinstaller working with Anaconda, so I switched to trying to generate
an .exe using the standard Python 3.5 compiler. I am using the same settings as in the
link above (`pyinstaller --additional-hooks-dir=. --clean --win-private-assemblies pipegui.py`),
except I get the following error message instead:
`Exception: Cannot find PyQt5 plugin directories`
[This](http://stackoverflow.com/questions/19207746/using-cx-freeze-in-pyqt5-cant-find-pyqt5)
may be related? Except I'm using Pyinstaller and I don't have a setup.py, so I
don't know how I can make use of the solution there, if at all.
I find this error message bizarre because I am not using PyQt5, but PyQt4.
Here is the full output:
C:\Users\Cornelis Dirk Haupt\PycharmProjects\Mesoscale-Brain-Explorer\src>pyinstaller --additional-hooks-dir=. --clean --win-private-assemblies pipegui.py
62 INFO: PyInstaller: 3.2
62 INFO: Python: 3.5.0
62 INFO: Platform: Windows-10.0.14393
62 INFO: wrote C:\Users\Cornelis Dirk Haupt\PycharmProjects\Mesoscale-Brain-Explorer\src\pipegui.spec
62 INFO: UPX is not available.
62 INFO: Removing temporary files and cleaning cache in C:\Users\Cornelis Dirk Haupt\AppData\Roaming\pyinstaller
62 INFO: Extending PYTHONPATH with paths
['C:\\Users\\Cornelis Dirk Haupt\\PycharmProjects\\Mesoscale-Brain-Explorer',
'C:\\Users\\Cornelis Dirk '
'Haupt\\PycharmProjects\\Mesoscale-Brain-Explorer\\src']
62 INFO: checking Analysis
62 INFO: Building Analysis because out00-Analysis.toc is non existent
62 INFO: Initializing module dependency graph...
62 INFO: Initializing module graph hooks...
62 INFO: Analyzing base_library.zip ...
1430 INFO: running Analysis out00-Analysis.toc
1727 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-math-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1742 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-runtime-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1742 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-locale-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1758 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-stdio-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1758 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-heap-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1774 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-string-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1774 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-environment-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1774 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-time-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1789 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-filesystem-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1789 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-conio-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1789 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-process-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1805 WARNING: Can not get binary dependencies for file: C:\Anaconda3\api-ms-win-crt-convert-l1-1-0.dll
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
1805 INFO: Caching module hooks...
1805 INFO: Analyzing C:\Users\Cornelis Dirk Haupt\PycharmProjects\Mesoscale-Brain-Explorer\src\pipegui.py
1992 INFO: Processing pre-find module path hook distutils
2055 INFO: Processing pre-safe import module hook six.moves
3181 INFO: Processing pre-find module path hook site
3181 INFO: site: retargeting to fake-dir 'c:\\users\\cornelis dirk haupt\\appdata\\local\\programs\\python\\python35\\lib\\site-packages\\PyInstaller\\fake-modules'
4298 INFO: Processing pre-safe import module hook win32com
9975 INFO: Loading module hooks...
9975 INFO: Loading module hook "hook-_tkinter.py"...
10121 INFO: checking Tree
10121 INFO: Building Tree because out00-Tree.toc is non existent
10122 INFO: Building Tree out00-Tree.toc
10184 INFO: checking Tree
10184 INFO: Building Tree because out01-Tree.toc is non existent
10185 INFO: Building Tree out01-Tree.toc
10198 INFO: Loading module hook "hook-matplotlib.py"...
10404 INFO: Loading module hook "hook-pywintypes.py"...
10526 INFO: Loading module hook "hook-xml.py"...
10526 INFO: Loading module hook "hook-pydoc.py"...
10527 INFO: Loading module hook "hook-scipy.linalg.py"...
10527 INFO: Loading module hook "hook-scipy.sparse.csgraph.py"...
10529 INFO: Loading module hook "hook-plugins.py"...
10721 INFO: Processing pre-find module path hook PyQt4.uic.port_v3
10726 INFO: Processing pre-find module path hook PyQt4.uic.port_v2
12402 INFO: Loading module hook "hook-OpenGL.py"...
12583 INFO: Loading module hook "hook-PyQt4.QtGui.py"...
12802 INFO: Loading module hook "hook-encodings.py"...
12807 INFO: Loading module hook "hook-PyQt4.uic.py"...
12812 INFO: Loading module hook "hook-PyQt5.QtWidgets.py"...
12813 INFO: Loading module hook "hook-xml.etree.cElementTree.py"...
12813 INFO: Loading module hook "hook-setuptools.py"...
12814 INFO: Loading module hook "hook-scipy.special._ufuncs.py"...
12814 INFO: Loading module hook "hook-PyQt5.QtCore.py"...
Traceback (most recent call last):
File "<string>", line 2, in <module>
ImportError: DLL load failed: The specified procedure could not be found.
Traceback (most recent call last):
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\runpy.py", line 170, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Cornelis Dirk Haupt\AppData\Local\Programs\Python\Python35\Scripts\pyinstaller.exe\__main__.py", line 9, in <module>
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\__main__.py", line 90, in run
run_build(pyi_config, spec_file, **vars(args))
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\__main__.py", line 46, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\build_main.py", line 788, in main
build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\build_main.py", line 734, in build
exec(text, spec_namespace)
File "<string>", line 16, in <module>
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\build_main.py", line 212, in __init__
self.__postinit__()
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\datastruct.py", line 178, in __postinit__
self.assemble()
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\build_main.py", line 470, in assemble
module_hook.post_graph()
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\imphook.py", line 409, in post_graph
self._load_hook_module()
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\building\imphook.py", line 376, in _load_hook_module
self.hook_module_name, self.hook_filename)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\compat.py", line 725, in importlib_load_source
return mod_loader.load_module()
File "<frozen importlib._bootstrap_external>", line 385, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 806, in load_module
File "<frozen importlib._bootstrap_external>", line 665, in load_module
File "<frozen importlib._bootstrap>", line 268, in _load_module_shim
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\hooks\hook-PyQt5.QtCore.py", line 15, in <module>
binaries = qt_plugins_binaries('codecs', namespace='PyQt5')
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\utils\hooks\qt.py", line 64, in qt_plugins_binaries
pdir = qt_plugins_dir(namespace=namespace)
File "c:\users\cornelis dirk haupt\appdata\local\programs\python\python35\lib\site-packages\PyInstaller\utils\hooks\qt.py", line 38, in qt_plugins_dir
raise Exception('Cannot find {0} plugin directories'.format(namespace))
Exception: Cannot find PyQt5 plugin directories
I will say I also have no clue what to make of the `TypeError: a bytes-like
object is required, not 'str'`.
[This](http://stackoverflow.com/questions/33054527/python-3-5-typeerror-a-bytes-like-object-is-required-not-str)
may be related? I only use binary mode with pickle a handful of times; as far
as I can tell this is my only usage:
pickle.dump( roiState, open( fileName, "wb" ) )
roiState = pickle.load(open(fileName, "rb"))
I don't have any errors when I run the application, only getting these errors
when trying to generate an .exe using pyinstaller. Why?
Note also that Anaconda3 does pop up in the traceback above (why is it looking
for binaries there?) but I:
1. Uninstalled pyinstaller from Anaconda
2. Am using the standard Python 3.5 (64-bit) compiler
The only thing I can think of that may be the culprit is that I'm no longer using
the developer version of Pyinstaller (it simply does not run in Python 3.5). I
had to use the developer version to solve my freezing issue
[here](http://stackoverflow.com/questions/39135408/using-pyinstaller-on-parmap-causes-a-tkinter-matplotlib-import-error-why)
when my code was written for Python 2.7.
Answer: Uninstall Anaconda and everything works... I conclude that you simply cannot
have Anaconda installed and use the standard Python 3.5 compiler at the same
time if you're using Pyinstaller. Maybe
[this](http://stackoverflow.com/questions/39728108/running-pyinstaller-after-anaconda-install-results-in-importerror-no-module-nam)
is related.
This is not the [first](http://stackoverflow.com/a/37398710/2734863) time that
uninstalling Anaconda appears to solve my issues... If I should report this
issue somewhere, please comment below. I don't know where.
|
Python : Extract one string from 100 lines of text
Question: I need to extract a particular string from 100 lines of log data. I tried split and then tried to get the needed string, but couldn't succeed. Any suggestions/help appreciated. Thanks!
In the log below, I would like to extract the highlighted part, that is
**zqn.2005- 04.com.sanblaze:virtualun.init74-2.initiator-00000000-0000** (it's
in line 57)
1 Out[19]:
2 {'portstatus': {'errors': {'busy_errors': '0',
3 'checkcondition_errors': '0',
4 'compare_errors': '0',
5 'ioc_errors': '0',
6 'notready_errors': '0',
7 'read_errors': '0',
8 'read_retries': '0',
9 'scsi_errors': '0',
10 'test_errors': '0',
11 'timeout_errors': '0',
12 'write_errors': '0',
13 'write_retries': '0'},
14 'net_counters': {'RxBytes': '148476788547060',
15 'RxCompressed': '0',
16 'RxDrop': '96188',
17 'RxErrs': '0',
18 'RxFIFO': '0',
19 'RxFrame': '0',
20 'RxMulticast': '259513',
21 'RxPFCPause': '0',
22 'RxPackets': '77165581759',
23 'RxStandardPause': '0',
24 'TxBytes': '20440169002909',
25 'TxCompressed': '0',
26 'TxDrop': '0',
27 'TxErrs': '0',
28 'TxFIFO': '0',
29 'TxFrame': '0',
30 'TxMulticast': '0',
31 'TxPFCPause': '0',
32 'TxPackets': '55075507366',
33 'TxStandardPause': '5349727',
34 'net_avgriops': '0',
35 'net_avgrrate': '0.00',
36 'net_avgtiops': '0',
37 'net_avgtrate': '0.00',
38 'net_riops': '0',
39 'net_rrate': '0.00',
40 'net_tiops': '0',
41 'net_trate': '0.00'},
42 'perfdata': {'avg_oiocnt': '0',
43 'avgiops': '0',
44 'avgriops': '0',
45 'avgrrate': '0.00',
46 'avgtrate': '0.00',
47 'avgwiops': '0',
48 'avgwrate': '0.00',
49 'iops': '0',
50 'max_oiocnt': '509',
51 'oiocnt': '0',
52 'riops': '0',
53 'rrate': '0.00',
54 'trate': '0.00',
55 'wiops': '0',
56 'wrate': '0.00'},
57 'status': {'initiator0_iqn': 'zqn.2005- 04.com.sanblaze:virtualun.init74-2.initiator-00000000-0000',
58 'initiator10_iqn': 'zqn.2005-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0010',
59 'initiator11_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0011',
60 'initiator12_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0012',
61 'initiator13_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0013',
62 'initiator14_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0014',
63 'initiator15_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0015',
64 'initiator16_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0016',
65 'initiator17_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0017',
66 'initiator18_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0018',
67 'initiator19_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0019',
68 'initiator1_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0001',
69 'initiator20_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0020',
70 'initiator21_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0021',
71 'initiator22_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0022',
72 'initiator23_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0023',
73 'initiator24_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0024',
74 'initiator25_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0025',
75 'initiator26_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0026',
76 'initiator27_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0027',
77 'initiator28_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0028',
78 'initiator29_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0029',
79 'initiator2_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0002',
80 'initiator30_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0030',
81 'initiator31_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0031',
82 'initiator32_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0032',
83 'initiator33_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0033',
84 'initiator34_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0034',
85 'initiator35_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0035',
86 'initiator36_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0036',
87 'initiator37_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0037',
88 'initiator38_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0038',
89 'initiator39_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0039',
90 'initiator3_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0003',
91 'initiator40_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0040',
92 'initiator41_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0041',
93 'initiator42_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0042',
94 'initiator43_iqn': 'iqn.2003-04.com.sanblaze:virtualun.init74-2.initiator-00000000-0043',
95 'mode': 'Init',
96 'numinitiators': '50',
97 'numtargets': '0',
98 'port': '0',
99 'portwwnn': '90:e2:ba:82:92:1c',
100 'portwwpn': '90:e2:ba:82:92:1c',
101 'speed': '10G',
102 'state': 'Online',
103 'topo': 'iSCSI'},
104 'sympn': '-',
Answer: Looks like the string is in dictionary format (correct me if I'm wrong), so
you could try converting it to one. Then you won't need regex.
import ast
your_dict = ast.literal_eval(your_string)
Then what you want would be:
your_dict['status']['initiator0_iqn']
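For example, assuming `log_text` holds the dictionary literal with the line-number prefixes and `Out[19]:` marker already stripped (`ast.literal_eval` needs clean Python syntax):

import ast

log_text = "{'status': {'initiator0_iqn': 'zqn.2005- 04.com.sanblaze:virtualun.init74-2.initiator-00000000-0000'}}"
parsed = ast.literal_eval(log_text)
print(parsed['status']['initiator0_iqn'])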
|
Load JSON object including escaped json string
Question: I'm trying to load a JSON object from a string (via Python). This object has a
single key mapped to an array. The array includes a single value which is
another serialized JSON object. I have tried a few online JSON parsers /
validators, but can't seem to identify what the issue with loading this object
is.
JSON Data:
{
    "parent": [
        "{\"key\":\"value\"}"
    ]
}
Trying to load from Python:
>>> import json
>>> test_string = '{"parent":["{\"key\":\"value\"}"]}'
>>> json.loads(test_string)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 1 column 15 (char 14)
Answer: If you try out your string in the REPL, you'll see pretty quickly why it
doesn't work:
>>> '{"parent":["{\"key\":\"value\"}"]}'
'{"parent":["{"key":"value"}"]}'
Notice the `\` have gone away because python is treating them as escape
sequences ...
One easy fix is to use a raw string:
>>> r'{"parent":["{\"key\":\"value\"}"]}'
'{"parent":["{\\"key\\":\\"value\\"}"]}'
e.g.
>>> import json
>>> test_string = r'{"parent":["{\"key\":\"value\"}"]}'
>>> json.loads(test_string)
{u'parent': [u'{"key":"value"}']}
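Equivalently, without a raw string, the backslashes can be doubled so they survive Python's escape processing:

>>> test_string = '{"parent":["{\\"key\\":\\"value\\"}"]}'
>>> json.loads(test_string)
{u'parent': [u'{"key":"value"}']}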
|
Return string that is not a substring of other strings - is it possible in time less than O(n^2)?
Question: You are given an array of strings. you have to return only those strings that
are not sub strings of other strings in the array. Input -
`['abc','abcd','ab','def','efgd']`. Output should be - `'abcd'` and `'efgd'` I
have come up with a solution in python that has time complexity O(n^2). Is
there a possible solution that gives a lesser time complexity? My solution:
def sub(l, s):
    l1 = l
    for i in range(len(l)):
        l1[i] = ''.join(sorted(l1[i]))
    for i in l1:
        if s in i:
            return True
    return False

def main(l):
    for i in range(len(l)):
        if sub(l[0:i-1] + l[i+1:], l[i]) == False:
            print l[i]

main(['abc','abcd','ab','def','efgd'])
Answer: Is memory an issue? You could turn to the tried and true...TRIE!
Build a suffix tree!
Given your input `['abc','abcd','ab','def','efgd']`
We would have a tree of
_
/ | \
a e d
/ | \
b* f e
/ | \
c* g f*
/ |
d* d*
Utilizing a DFS (Depth-First Search) of said tree, you would locate the
deepest leaves `abcd`, `efgd`, and `def`.
Tree traversal is pretty straightforward, and your time complexity is
`O(n*m)`, a big improvement over the `O(n^2)` time you had previously.
With this approach it becomes simple to add new keys and still make it easy to
find the unique keys.
Consider adding the key `deg`; your new tree would be approximately
_
/ | \
a e d
/ | \
b* f e
/ | / \
c* g g* f*
/ |
d* d*
With this new tree it is still a simple matter of performing a DFS search to
obtain the unique keys that are not prefixes of others.
from typing import List


class Trie(object):
    class Leaf(object):
        def __init__(self, data, is_key):
            self.data = data
            self.is_key = is_key
            self.children = []

        def __str__(self):
            return "{}{}".format(self.data, "*" if self.is_key else "")

    def __init__(self, keys):
        self.root = Trie.Leaf('', False)
        for key in keys:
            self.add_key(key)

    def add_key(self, key):
        self._add(key, self.root.children)

    def has_suffix(self, suffix):
        leaf = self._find(suffix, self.root.children)
        if not leaf:
            return False
        # This is only a suffix if the returned leaf has children and itself is not a key
        if not leaf.is_key and leaf.children:
            return True
        return False

    def includes_key(self, key):
        leaf = self._find(key, self.root.children)
        if not leaf:
            return False
        return leaf.is_key

    def delete(self, key):
        """
        If the key is present as a unique key, i.e. it has no children and none of its
        nodes belong to another key, we should delete all of the nodes up to the root.

        If the key is a prefix of another longer key in the trie, unmark the leaf node.

        If the key is present in the trie and contains no children but contains nodes
        that are keys, we should delete all nodes up to the first encountered key.
        :param key:
        :return:
        """
        if not key:
            raise KeyError
        self._delete(key, self.root.children, None)

    def _delete(self, key, children: List[Leaf], parents: (List[Leaf], None), key_idx=0, parent_key=False):
        if not parents:
            parents = [self.root]
        if key_idx >= len(key):
            return
        key_end = True if len(key) == key_idx + 1 else False
        suffix = key[key_idx]
        for leaf in children:
            if leaf.data == suffix:
                # we have encountered a leaf node that is a key we can't delete these
                # this means our key shares a common branch
                if leaf.is_key:
                    parent_key = True
                if key_end and leaf.children:
                    # We've encountered another key along the way
                    if parent_key:
                        leaf.is_key = False
                    else:
                        # delete all nodes recursively up to the top of the first node that has multiple children
                        self._clean_parents(key, key_idx, parents)
                elif key_end and not leaf.children:
                    # delete all nodes recursively up to the top of the first node that has multiple children
                    self._clean_parents(key, key_idx, parents)
                # Not at the key end so we need to keep traversing the tree down
                parents.append(leaf)
                self._delete(key, leaf.children, parents, key_idx + 1, key_end)

    def _clean_parents(self, key, key_idx, parents):
        stop = False
        while parents and not stop:
            p = parents.pop()
            # Need to stop processing a removal at a branch
            if len(p.children) > 1:
                stop = True
            # Locate our branch and kill its children
            for i in range(len(p.children)):
                if p.children[i].data == key[key_idx]:
                    p.children.pop(i)
                    break
            key_idx -= 1

    def _find(self, key, children: List[Leaf]):
        if not key:
            raise KeyError
        match = False
        if len(key) == 1:
            match = True
        suffix = key[0]
        for leaf in children:
            if leaf.data == suffix and not match:
                return self._find(key[1:], leaf.children)
            elif leaf.data == suffix and match:
                return leaf
        return None

    def _add(self, key, children: List[Leaf]):
        if not key:
            return
        is_key = False
        if len(key) == 1:
            is_key = True
        suffix = key[0]
        for leaf in children:
            if leaf.data == suffix:
                self._add(key[1:], leaf.children)
                break
        else:
            children.append(Trie.Leaf(suffix, is_key))
            self._add(key[1:], children[-1].children)
        return

    @staticmethod
    def _has_children(leaf):
        return bool(leaf.children)


def main():
    keys = ['ba', 'bag', 'a', 'abc', 'abcd', 'abd', 'xyz']
    trie = Trie(keys)
    print(trie.includes_key('ba'))   # True
    print(trie.includes_key('b'))    # False
    print(trie.includes_key('dog'))  # False
    print(trie.has_suffix('b'))      # True
    print(trie.has_suffix('ab'))     # True
    print(trie.has_suffix('abd'))    # False
    trie.delete('abd')  # Should only remove the d
    trie.delete('a')    # should unmark a as a key
    trie.delete('ba')   # should remove the ba trie
    trie.delete('xyz')  # Should remove the entire branch
    trie.delete('bag')  # should only remove the g
    print(trie)


if __name__ == "__main__":
    main()
Please note the above trie implementation does not have a DFS search
implemented; however, it provides you with the legwork to get started.
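As a sketch of that missing piece (assuming the `Trie` class above), a DFS that collects only the keys no other key extends could be:

def unique_keys(trie):
    """Depth-first search collecting keys that are not prefixes of other keys."""
    results = []

    def dfs(leaf, path):
        for child in leaf.children:
            dfs(child, path + child.data)
        if leaf.is_key and not leaf.children:
            results.append(path)

    dfs(trie.root, '')
    return results

# unique_keys(Trie(['abc', 'abcd', 'ab', 'def', 'efgd']))
# -> ['abcd', 'def', 'efgd'], the deepest leaves from the answer's tree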
|
SQLalchemy find id and use it to lookup other information
Question: I'm making a simple lookup application for Japanese characters (Kanji), where
the user can search the database using any of the information available.
## My database structure
**Kanji** :
* id
* character (A kanji like 頑)
* heisig6 (a number indicating the order of showing Kanji)
* kanjiorigin (a number indicating the order of showing Kanji)
**MeaningEN** (1 kanji_id can have multiple entries with different meanings):
* kanji_id (FOREIGN KEY(kanji_id) REFERENCES "Kanji" (id)
* meaning
## User handling
The user can choose to search by 'id', 'character', 'heisig6', 'kanjiorigin'
or 'meaning' and it should then return all information in all those fields.
(All fields return only 1 result, except meanings, which can return multiple
results)
## Code, EDIT 4+5: my code, with thanks to @ApolloFortyNine and #sqlalchemy on IRC; EDIT 6: `join` --> `outerjoin` (otherwise it won't find information that has no Origins)
import sqlalchemy as sqla
import sqlalchemy.orm as sqlo
from tableclass import TableKanji, TableMeaningEN, TableMisc, TableOriginKanji  # See tableclass.py


# Searches database with argument search method
class SearchDatabase():
    def __init__(self):
        #self.db_name = "sqlite:///Kanji_story.db"
        self.engine = sqla.create_engine("sqlite:///Kanji.db", echo=True)

        # Bind the engine to the metadata of the Base class so that the
        # declaratives can be accessed through a DBSession instance
        tc.sqla_base.metadata.bind = self.engine

        # For making sessions to connect to db
        self.db_session = sqlo.sessionmaker(bind=self.engine)

    def retrieve(self, s_input, s_method):
        # s_input: search input
        # s_method: search method
        print("\nRetrieving results with input: {} and method: {}".format(s_input, s_method))
        data = []  # Data to return

        # User searches on non-empty string
        if s_input:
            session = self.db_session()

            # Find id in other table than Kanji
            if s_method == 'meaning':
                s_table = TableMeaningEN  # 'MeaningEN'
            elif s_method == 'okanji':
                s_table = TableOriginKanji  # 'OriginKanji'
            else:
                s_table = TableKanji  # 'Kanji'

            result = session.query(TableKanji).outerjoin(TableMeaningEN).outerjoin(
                (TableOriginKanji, TableKanji.origin_kanji)
            ).filter(getattr(s_table, s_method) == s_input).all()

            print("result: {}".format(result))
            for r in result:
                print("r: {}".format(r))
                meanings = [m.meaning for m in r.meaning_en]
                print(meanings)
                # TODO transform into origin kanji's
                origins = [str(o.okanji_id) for o in r.okanji_id]
                print(origins)
                data.append({'character': r.character, 'meanings': meanings,
                             'indexes': [r.id, r.heisig6, r.kanjiorigin], 'origins': origins})
            session.close()

        if not data:
            data = [{'character': 'X', 'meanings': ['invalid', 'search', 'result']}]
        return(data)
## Question EDIT 4+5

* Is this an efficient query: `result = session.query(TableKanji).join(TableMeaningEN).filter(getattr(s_table, s_method) == s_input).all()`? (The .join statement is necessary, because otherwise e.g. `session.query(TableKanji).filter(TableMeaningEN.meaning == 'love').all()` returns all the meanings in my database for some reason.) So is this the right query, or is my `relationship()` in my tableclass.py not properly defined?
* **fixed** (see `lambda:` in tableclass.py) `kanji = relationship("TableKanji", foreign_keys=[kanji_id], back_populates="OriginKanji")` <-- **what is wrong** about this? It gives the error:

File "/_path_/python3.5/site-packages/sqlalchemy/orm/mapper.py", line 1805,
in get_property "Mapper '%s' has no property '%s'" % (self, key))
sqlalchemy.exc.InvalidRequestError: Mapper 'Mapper|TableKanji|Kanji' has no
property 'OriginKanji'
## Edit 2: tableclass.py (EDIT 3+4+5: updated)
import sqlalchemy as sqla
from sqlalchemy.orm import relationship
import sqlalchemy.ext.declarative as sqld

sqla_base = sqld.declarative_base()


class TableKanji(sqla_base):
    __tablename__ = 'Kanji'

    id = sqla.Column(sqla.Integer, primary_key=True)
    character = sqla.Column(sqla.String, nullable=False)
    radical = sqla.Column(sqla.Integer)  # Can be defined as Boolean
    heisig6 = sqla.Column(sqla.Integer, unique=True, nullable=True)
    kanjiorigin = sqla.Column(sqla.Integer, unique=True, nullable=True)
    cjk = sqla.Column(sqla.String, unique=True, nullable=True)

    meaning_en = relationship("TableMeaningEN", back_populates="kanji")  # backref="Kanji")
    okanji_id = relationship("TableOriginKanji", foreign_keys=lambda: TableOriginKanji.kanji_id, back_populates="kanji")


class TableMeaningEN(sqla_base):
    __tablename__ = 'MeaningEN'

    kanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
    meaning = sqla.Column(sqla.String, primary_key=True)

    kanji = relationship("TableKanji", back_populates="meaning_en")


class TableOriginKanji(sqla_base):
    __tablename__ = 'OriginKanji'

    kanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
    okanji_id = sqla.Column(sqla.Integer, sqla.ForeignKey('Kanji.id'), primary_key=True)
    order = sqla.Column(sqla.Integer)

    #okanji = relationship("TableKanji", foreign_keys=[kanji_id], backref="okanji")
    kanji = relationship("TableKanji", foreign_keys=[kanji_id], back_populates="okanji_id")
Answer: We would really have to be able to see your database schema to give real
critique, but assuming no foreign keys, what you said is basically the best
you can do.
SQLAlchemy really begins to shine when you have complicated relations going on
however. For example, if you properly had foreign keys set, you could do
something like the following.
# Assuming kanji is a tc.tableMeaningEN.kanji_id object
kanji_meaning = kanji.meanings
And that would return the meanings for the kanji as an array, without any
further queries.
You can go quite deep with relationships, so I'm linking the documentation
here. <http://docs.sqlalchemy.org/en/latest/orm/relationships.html>
EDIT: Actually, you don't need to manually join at all, SQLAlchemy will do it
for you.
The case is wrong on your classes, but I'm not sure if SQLAlchemy is case
sensitive there or not. If it works, then just move on.
If you query a table (`self.session.query(User).filter(User.username ==
self.name).first()`) you should get back an object of the table type (`User` here).
So in your case, querying the TableKanji table alone will return an object of
that type.
kanji_obj = session.query(TableKanji).filter(TableKanji.id == id).first()
# This will return an array of all meaning_ens that match the foreign key
meaning_arr = kanji_obj.meaning_en
# This will return a single meaning, just to show each member of the arr is of type TableMeaningEN
meaning_arr[0].meaning
I have a project made use of some of these features, hope it helps:
<https://github.com/ApolloFortyNine/SongSense> Database declaration (with
relationships):
<https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/database.py>
Automatic joins:
<https://github.com/ApolloFortyNine/SongSense/blob/master/songsense/getfriend.py#L134>
I really like my database structure, but as for the rest it's pretty awful.
Hope it still helps though.
|
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
Question: **This is not a duplicate of [this question](http://stackoverflow.com/questions/35403605/ssl-certificate-verify-failed-ssl-c600)**
I checked [this](http://stackoverflow.com/questions/38522939/requests-exceptions-sslerror-ssl-certificate-verify-failed-certificate-verif),
but going the insecure way doesn't look good to me.
I am working on an image size fetcher in Python, which would fetch the size of
images on a web page. Before doing that I need to get the web page's status
code. I tried doing it this way:
import requests

hdrs = {'User-Agent': 'Mozilla / 5.0 (X11 Linux x86_64) AppleWebKit / 537.36 (KHTML, like Gecko) Chrome / 52.0.2743.116 Safari / 537.36'}
urlResponse = requests.get(
    'http://aucoe.info/', verify=True, headers=hdrs)
print(urlResponse.status_code)
This gives error:
> ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
> (_ssl.c:600)
I tried changing `verify=True` to
`verify='/etc/ssl/certs/ca-certificates.crt'`
and
`verify='/etc/ssl/certs'`
But it still gives the same error. I need to get status code for more than
5000 urls. Kindly help me. Thanks in advance.
**Python Version :** 3.4
**Requests version :** requests==2.11.1
**O.S :** Ubuntu 14.04
**pyOpenSSL :** 0.13
**openssl version :** OpenSSL 1.0.1f 6 Jan 2014
Answer: You need to download the GoDaddy root certificates, available at [this
site](https://certs.godaddy.com/repository) and then pass it in as a parameter
to `verify`, like this:
>>> r = requests.get('https://aucoe.info', verify='/path/to/gd_bundle-g2-g1.crt')
>>> r.status_code
200
If you'll be doing multiple requests, you may want to configure the SSL
verification as part of the session, as highlighted in the
[documentation](http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification).
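A minimal sketch of that session-based setup (same certificate bundle; `urls` is a hypothetical list of the 5000+ URLs to check):

import requests

session = requests.Session()
session.verify = '/path/to/gd_bundle-g2-g1.crt'

for url in urls:
    print(url, session.get(url).status_code)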
|
Download specific file in url using PHP/Python
Question: I previously used to use `wget -r` on the linux terminal for downloading files
with certain extensions:
wget -r -A Ext URL
But now I was assigned by my lecturer to do the same thing using PHP or
Python. Who can help?
Answer: I guess urllib works pretty well for you:

import urllib
urllib.urlretrieve(URL, file)
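If the assignment really requires the `-A`-style extension filtering, a rough single-page sketch (Python 2, to match `urlretrieve`; `url` and `ext` are placeholders) might be:

import re
import urllib
import urlparse

url = 'http://example.com/'  # hypothetical starting URL
ext = '.pdf'                 # hypothetical extension filter

html = urllib.urlopen(url).read()
# Crude href extraction; a real crawler would use HTMLParser or BeautifulSoup.
for link in re.findall(r'href="([^"]+)"', html):
    link = urlparse.urljoin(url, link)  # resolve relative links
    if link.endswith(ext):
        urllib.urlretrieve(link, link.split('/')[-1])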
|
Using curl within a Databricks+Spark notebook
Question: I'm running a Spark cluster using Databricks. I'd like to transfer data from a
server using curl. For example,
curl -H "Content-Type: application/json" -H "auth:xxxx" -X GET "https://websites.net/Automation/Offline?startTimeInclusive=201609240100&endTimeExclusive=201609240200&dataFormat=json" -k > automation.json
How does one do this within a Databricks notebook (preferably in python, but
Scala is also okay)?
Answer: In Scala, you can do something like:
import sys.process._
val command = """curl -H "Content-Type: application/json" -H "auth:xxxx" -X GET "http://google.com" -k > /home/user/automation.json"""
Seq("/bin/bash", "-c", command).!!
|
Sympy to numpy causes the AttributeError: 'Symbol' object has no attribute 'cos'
Question: I am trying to do partial derivatives using sympy and I want to convert it to
a function so that I can substitute values and estimate the derivatives at
some values of t_1, t_2. The code I am using is as follows:
import sympy as sp
import numpy as np
from sympy import init_printing
init_printing()
t_1,t_2,X_1,X_2,Y_1,Y_2,X_c1,X_c2,Y_c1,Y_c2,a_1,a_2,psi_1,psi_2,b_1,b_2= sp.symbols('t_1 t_2 X_1 X_2 Y_1 Y_2 X_c1 X_c2 Y_c1 Y_c2 a_1 a_2 psi_1 psi_2 b_1 b_2')
X_1=X_c1 + (a_1 * sp.cos(t_1) * sp.cos(psi_1)) - ((b_1) * sp.sin(t_1)* sp.sin(psi_1))
X_2=X_c2 + (a_2 * sp.cos(t_2) * sp.cos(psi_2)) - ((b_2) * sp.sin(t_2)* sp.sin(psi_2))
Y_1=Y_c1 + (a_1 * sp.cos(t_1) * sp.sin(psi_1)) + ((b_1) * sp.sin(t_1)* sp.cos(psi_1))
Y_2=Y_c2 + (a_2 * sp.cos(t_2) * sp.sin(psi_2)) + ((b_2) * sp.sin(t_2)* sp.sin(psi_2))
D=(((X_2-X_1)**2) + ((Y_2-Y_1)**2))**0.5
y_1=sp.diff(D,t_1)
y_2=sp.diff(D,t_2)
f=sp.lambdify(t_1, y_1, "numpy")
g=sp.lambdify(t_2, y_2, "numpy")
When I try to substitute a value for t_1 using,
f(np.pi/2)
I get the following error:
AttributeError Traceback (most recent call last)
<ipython-input-26-f37892b21c8b> in <module>()
----> 1 f(np.pi/2)
/users/vishnu/anaconda3/lib/python3.5/site-packages/numpy /__init__.py in <lambda>(_Dummy_23)
AttributeError: 'Symbol' object has no attribute 'cos'
I referred to the following links:
[What causes this error (AttributeError: 'Mul' object has no attribute 'cos')
in Python?](http://stackoverflow.com/questions/32640759/what-causes-this-
error-attributeerror-mul-object-has-no-attribute-cos-in)
[Python
AttributeError:cos](http://stackoverflow.com/questions/13336048/python-
attributeerrorcos)
but I think my imports of numpy and sympy are not clashing unlike the cases
mentioned in those links. Any help is appreciated.
Answer: This type of error occurs when you call `np.cos(a_symbol)`, which apparently
translates under-the-hood in numpy to `a_symbol.cos()`.
`lambdify` is for numeric calculations - it replaces all `sp` calls with `np`
calls. But `y_1` still contains free symbols other than `t_1`, so the generated
function ends up calling numpy functions on `Symbol` objects. If symbolic
substitution is all you need, this is enough for your problem:
f1 = lambda t: y_1.subs({t_1: t})
f2 = lambda t: y_2.subs({t_2: t})
|
file writing not working as expected
Question: I have Python code that takes the first column of a sample.csv file
and copy it to temp1.csv file. Now I would like to compare this csv file with
another serialNumber.txt file for any common rows. If any common rows found,
It should write to a result file. My temp1.csv is being created properly but
the problem is that the result file being created is empty.
script.py
import csv
f = open("sample.csv", "r")
reader = csv.reader(f)
data = open("temp1.csv", "wb")
w = csv.writer(data)
for row in reader:
my_row = []
my_row.append(row[0])
w.writerow(my_row)
data.close()
with open('temp1.csv', 'r') as file1:
with open('serialNumber.txt', 'r') as file2:
same = set(file1).intersection(file2)
print same
with open('result.csv', 'w') as file_out:
for line in same:
file_out.write(line)
print line
sample.csv
M11435TDS144,STB#1,Router#1
M11543TH4292,STB#2,Router#1
M11509TD9937,STB#3,Router#1
M11543TH4258,STB#4,Router#1
serialNumber.txt
G1A114042400571
M11543TH4258
M11251TH1230
M11435TDS144
M11543TH4292
M11509TD9937
Answer:
with open('temp1.csv', 'r') as file1:
list1 = file1.readlines()
set1 = set(list1)
with open('serialNumber.txt', 'r') as file2:
list2 = file2.readlines()
set2 = set(list2)
now you can process the contents as sets
Note that the two files may differ in line endings (the csv writer emits
`\r\n` while the text file likely uses `\n`), so strip the newlines before
intersecting, otherwise nothing will match.
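A sketch of the comparison with the newlines stripped (assuming one serial number per line in both files):

    with open('temp1.csv') as file1, open('serialNumber.txt') as file2:
        set1 = set(line.strip() for line in file1)
        set2 = set(line.strip() for line in file2)

    same = set1 & set2
    with open('result.csv', 'w') as file_out:
        for sn in same:
            file_out.write(sn + '\n')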
|
Test Requests for Django Rest Framework aren't parsable by its own Request class
Question: I'm writing an endpoint to receive and parse [GitHub Webhook
payloads](https://developer.github.com/webhooks/#example-delivery) using
Django Rest Framework 3. In order to match the payload specification, I'm
writing a payload request factory and testing that it's generating valid
requests.
However, the problem comes when trying to test the request generated with
DRF's `Request` class. Here's the smallest failing test I could come up with -
the problem is that a request generated with DRF's `APIRequestFactory` seems
to not be parsable by DRF's `Request` class. Is that expected behaviour?
from rest_framework.request import Request
from rest_framework.parsers import JSONParser
from rest_framework.test import APIRequestFactory, APITestCase
class TestRoundtrip(APITestCase):
def test_round_trip(self):
"""
A DRF Request can be loaded into a DRF Request object
"""
request_factory = APIRequestFactory()
request = request_factory.post(
'/',
data={'hello': 'world'},
format='json',
)
result = Request(request, parsers=(JSONParser,))
self.assertEqual(result.data['hello'], 'world')
And the stack trace is:
E
======================================================================
ERROR: A DRF Request can be loaded into a DRF Request object
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 380, in __getattribute__
return getattr(self._request, attr)
AttributeError: 'WSGIRequest' object has no attribute 'data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/james/active/prlint/prlint/github/tests/test_payload_factories/test_roundtrip.py", line 22, in test_round_trip
self.assertEqual(result.data['hello'], 'world')
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 382, in __getattribute__
six.reraise(info[0], info[1], info[2].tb_next)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 186, in data
self._load_data_and_files()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 246, in _load_data_and_files
self._data, self._files = self._parse()
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/request.py", line 312, in _parse
parsed = parser.parse(stream, media_type, self.parser_context)
File "/home/james/active/prlint/venv/lib/python3.4/site-packages/rest_framework/parsers.py", line 64, in parse
data = stream.read().decode(encoding)
AttributeError: 'str' object has no attribute 'read'
----------------------------------------------------------------------
I'm obviously doing something stupid - I've messed around with encodings...
realised that I needed to pass the parsers list to the `Request` to avoid the
`UnsupportedMediaType` error, and now I'm stuck here.
Should I do something different? Maybe avoid using `APIRequestFactory`? Or
test my built GitHub requests a different way?
* * *
## More info
GitHub sends a request out to registered webhooks that has a `X-GitHub-Event`
header and therefore in order to test my webhook DRF code I need to be able to
emulate this header at test time.
My path to succeeding with this has been to build a custom Request and load a
payload using a factory into it. This is my factory code:
def PayloadRequestFactory():
"""
Build a Request, configure it to look like a webhook payload from GitHub.
"""
request_factory = APIRequestFactory()
request = request_factory.post(url, data=PingPayloadFactory())
request.META['HTTP_X_GITHUB_EVENT'] = 'ping'
return request
The issue has arisen because I want to assert that `PayloadRequestFactory` is
generating valid requests for various passed arguments - so I'm trying to
parse them and assert their validity but DRF's `Request` class doesn't seem to
be able to achieve this - hence my question with a failing test.
So really my question is - how should I test this `PayloadRequestFactory` is
generating the kind of request that I need?
Answer: "Yo dawg, I heard you like Request, cos' you put a Request inside a Request"
XD
I'd do it like this:
from rest_framework.test import APIClient
client = APIClient()
response = client.post('/', {'github': 'payload'}, format='json')
self.assertEqual(response.data, {'github': 'payload'})
# ...or assert something was called, etc.
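To emulate the `X-GitHub-Event` header in the same style, note that the test client forwards extra keyword arguments into `request.META`, following Django's `HTTP_` naming convention (a sketch):

    response = client.post(
        '/', {'github': 'payload'}, format='json',
        HTTP_X_GITHUB_EVENT='ping',  # extra kwargs become request.META entries
    )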
Hope this helps
|
Generate Swagger specification from Python code without annotations
Question: I am searching for a way to generate a Swagger specification (the JSON and
Swagger-UI) from the definition of a Python service. I have found many options
(normally Flask-based), but all of them use annotations, which I cannot
properly handle from within the code, e.g. define 10 equivalent services with
different routes in a loop (the handler could be currified in this example,
first function call to obtain it).
Is there any way to do this without annotations? I would like to generate the
method in the API and its documentation by calling a function (or a method, or
a constructor of a class) with the values corresponding to the implementation
and the documentation of that API method.
Answer: I suggest looking into
[apispec](http://apispec.readthedocs.io/en/latest/quickstart.html).
apispec is a pluggable API specification generator.
Currently supports the OpenAPI 2.0 specification (f.k.a. Swagger 2.0)
**_apispec_**
from apispec import APISpec
spec = APISpec(
title='Gisty',
version='1.0.0',
info=dict(description='A minimal gist API')
)
spec.definition('Gist', properties={
    'id': {'type': 'integer', 'format': 'int64'},
    'content': {'type': 'string'},
})
spec.add_path(
path='/gist/{gist_id}',
operations=dict(
get=dict(
responses={
'200': {
'schema': {'$ref': '#/definitions/Gist'}
}
}
)
)
)
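From there, a small usage sketch (assuming the apispec version above exposes `to_dict`) to dump the generated specification as JSON:

    import json
    print(json.dumps(spec.to_dict()))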
|
How to use plt.text() function to type special symbols into a plot produced by python3?
Question: I am trying to type text (which includes some astrophysical symbols like
solar mass and Hubble's Parameter) inside an empty figure in a python script:
import numpy as np
import matplotlib.pyplot as plt
plt.figure(4)
frame = plt.gca()
frame.axes.get_xaxis().set_ticks([])
frame.axes.get_yaxis().set_ticks([])
A = 2
B = 2
C = 3
D = 4
plt.text(0.05, 0.05, r'$R_{200m}$={:.0f} kpc physical \n\n $M_{200m}$={:.3e} $h^{-1} M_{\sun}$ \n\n\n\n x={:.0f} \n\n $M_{DM}$={:.3e} $h^{-1} M_{\odot}$'.format(A, B, C, D), size=20)
plt.show()
I am receiving the following error message after running the script with
`python3 example.py` :
File "exam.py", line 12, in <module>
plt.text(0.05, 0.05, r'$R_{200m}$={:.0f} kpc physical \n\n $M_{200m}$={:.3e} $h^{-1} M_{\sun}$ \n\n\n\n x={:.0f} \n\n
$M_{DM}$={:.3e} $h^{-1} M_{\odot}$'.format(A, B, C, D), size=20)
KeyError: '200m'
How can I make LaTeX typesetting work inside a Python script?
Answer: replace your `plt.text...` line with the following:
plt.text(0.05, 0.05, '$R_{{200m}}$={:.0f} kpc physical \n\n $M_{{200m}}$={:.3e} $h^{{-1}} M_\u2609$ \n\n\n\n x={:.0f} \n\n $M_{{DM}}$={:.3e} $h^{{-1}} M_{{\odot}}$'.format(A, B, C, D), size=20)
I just:
1. replaced `{\sun}` with `\u2609`, see [How to do the astronomical symbol "\sun" in PyX](http://stackoverflow.com/questions/12486778/how-to-do-the-astronomical-symbol-sun-in-pyx).
2. doubled the braces of each LaTeX group, so the `format` method won't treat them as replacement fields.
3. removed the `r` before the string so that the `\n`s take effect.
[![enter image description
here](http://i.stack.imgur.com/SCjTF.png)](http://i.stack.imgur.com/SCjTF.png)
|
Summing the values of one element of a dictionary based upon the values of another element
Question: Using Python, I have a list of two-element dictionaries, and I would like to
sum all the values of one element grouped by the values of the other element,
i.e.:
[{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}]
This is the format (although there are many more entries than this), and for
each different `elev` I would like to have the sum of all the `area` values
that correspond to it. For the `elev` value of `0.0` I would like the sum of
all values, same for `elev` of `0.1` etc
Answer: This is very easily achieved using pandas. Sample code:
import pandas as pd
df = pd.DataFrame([{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}])
which gives the following dataframe:
area elev
0 3.523558 0.0
1 3.523558 0.0
Then group by the elev columns and sum the area's:
desired_output = df.groupby('elev').sum()
which gives:
area
elev
0.0 7.047115
If you want you can then output this dataframe back to a dictionary in a
useful format using:
desired_output.to_dict('index')
which returns
{0.0: {'area': 7.0471151003078063}}
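If you'd rather avoid the pandas dependency, a plain-Python sketch with `collections.defaultdict` gives the same grouping:

    from collections import defaultdict

    data = [{'elev': 0.0, 'area': 3.52355755017894},
            {'elev': 0.0, 'area': 3.5235575501288667}]

    totals = defaultdict(float)
    for entry in data:
        totals[entry['elev']] += entry['area']

    print(dict(totals))  # {0.0: 7.047115...}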
|
Linear Programming with Anaconda
Question: I have installed Anaconda on my windows 10 and I am using it for Python. I
have a class in Mathematical optimization and need a good package for basic
LP. **Is there a "pre-installed" package that is good for LP in Anaconda, that
I can just import to my python file, or do I have to install packages?** In the
latter case, any suggestions on which packages that are good and available for
Anaconda on Windows 10? I have heard that PulP is adequate, but also that it
doesn't come "pre-installed" with the Anaconda.
Answer: If you wish to install PuLP on top of Anaconda on Windows it looks like you
need to run:
> pip install pulp
See pulp
[docs](https://pythonhosted.org/PuLP/main/installing_pulp_at_home.html)
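Once installed, a minimal LP sketch with PuLP (the objective and constraints are made up for illustration):

    from pulp import LpProblem, LpVariable, LpMaximize, LpStatus, value

    prob = LpProblem("example", LpMaximize)
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)
    prob += 3 * x + 2 * y        # objective
    prob += 2 * x + y <= 10      # constraint 1
    prob += x + 3 * y <= 15      # constraint 2
    prob.solve()
    print(LpStatus[prob.status], value(x), value(y))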
|
Adding an extension in Sphinx (Python Documentation Generator) configuration file
Question: I want to use Sphinx as a documentation generator. When I try to run the
**make html** command, I have the following error :
`Extension error: Could not import extension sphinxcontrib.httpdomain
(exception: No module named sphinxcontrib.httpdomain) make: *** [html] Error
1`
I've found this web page explaining that I have to manually add the extension
to the Sphinx configuration file <https://pythonhosted.org/sphinxcontrib-
httpdomain/#module-sphinxcontrib.httpdomain>
But I can't find this configuration file.
Do you have any idea where I could find it? I'm on Mac OS X.
Answer: The configuration is in the `source` folder of your Sphinx project. It is
named `conf.py` and contains an `extensions` option which should look like
this:
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
...
]
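For this specific error you would first install the extension package (`pip install sphinxcontrib-httpdomain`) and then add it to that list, e.g.:

    extensions = [
        'sphinx.ext.autodoc',
        'sphinxcontrib.httpdomain',
    ]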
|
How to use num2date/ date2num with Tkinter mainloop()
Question: I have this code inside a tkinter `mainloop()`:
self.raw_start_date = num2date(date2num(dt.datetime.strptime(self.end_date, "%Y-%m-%d")) - self.period)
self.start_date = self.raw_start_date.strftime("%Y-%m-%d")
I get the following error:
> File "D:\Python35-32\lib\tkinter__init__.py", line 1949, in **getattr**
> return getattr(self.tk, attr) RecursionError: maximum recursion depth
> exceeded
Can someone please help out with this?
Answer: This is an artifact of subclassing `tkinter.Tk` and overriding the `__init__`
method without ever calling `Tk.__init__`:
import tkinter
class Application(tkinter.Tk):
def __init__(self):
"do out stuff, forget to call Tk.__init__(self) !"
pass
app = Application()
app.any_possible_attribute_name #recursion error here
This happens because:
1. `Tk.__init__` initializes the very important `.tk` attribute.
2. Any attribute lookup (that cannot be resolved) is forwarded to the `.tk` attribute.
So normally if you did `app.thing` and `.thing` was not already defined then
it would try to return `app.tk.thing`, but when `app.tk` is not defined then
it tries to look up `app.tk.tk` which requires looking up `app.tk` which
causes the recursion error.
* * *
## To fix this
Just remember to call `Tk.__init__(self)` in your initialize method:
import tkinter
class Application(tkinter.Tk):
def __init__(self):
"do out stuff, just make sure to call Tk.__init__(self) !"
tkinter.Tk.__init__(self)
app = Application()
app.any_possible_attribute_name
#now we just get an AttributeError
|
Converting python tuple, lists, dictionaries containing pandas objects (series/dataframes) to json
Question: I know I can convert pandas object like `Series`, `DataFrame` to json as
follows:
series1 = pd.Series(np.random.randn(5), name='something')
jsonSeries1 = series1.to_json() #{"0":0.0548079371,"1":-0.9072821424,"2":1.3865642993,"3":-1.0609052074,"4":-3.3513341839}
However what should I do when that series is encapsulated inside other
datastructure, say dictionary as follows:
seriesmap = {"key1":pd.Series(np.random.randn(5), name='something')}
How do I convert above map to json like this:
{"key1":{"0":0.0548079371,"1":-0.9072821424,"2":1.3865642993,"3":-1.0609052074,"4":-3.3513341839}}
`simplejson` does not work:
jsonObj = simplejson.dumps(seriesmap)
gives
Traceback (most recent call last):
File "C:\..\py2.py", line 86, in <module>
jsonObj = json.dumps(seriesmap)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\__init__.py", line 380, in dumps
return _default_encoder.encode(obj)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 275, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 357, in iterencode
return _iterencode(o, 0)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\simplejson\encoder.py", line 252, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 0 -0.038824
1 -0.047297
2 -0.887672
3 -1.510238
4 0.900217
Name: something, dtype: float64 is not JSON serializable
To generalize this even further, I want to convert arbitrary object to json.
The arbitrary object may be a simple int or string, or a complex type such as a
tuple, list, or dictionary containing pandas objects along with other types. In
dictionary the pandas object may lie at arbitrary depth as some key's value. I
want to safely convert such structure to valid json. Is it possible?
**Update**
I just tried encapsulating DataFrame as a value of one of the keys of a
dictionary and converting that dictionary to json by encapsulating in another
DataFrame (as suggested in below answer). But seems that it does not work:
import pandas as pd
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
mapDict = {"key1":df}
print(pd.DataFrame(mapDict).to_json())
This gave:
Traceback (most recent call last):
File "C:\Mahesh\repos\JavaPython\JavaPython\bin\py2.py", line 80, in <module>
print(pd.DataFrame(mapDict).to_json())
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 224, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 360, in _init_dict
return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 5231, in _arrays_to_mgr
index = extract_index(arrays)
File "C:\Mahesh\Program Files\WinPython-64bit-3.4.4.4Qt5\python-3.4.4.amd64\lib\site-packages\pandas\core\frame.py", line 5270, in extract_index
raise ValueError('If using all scalar values, you must pass'
ValueError: If using all scalar values, you must pass an index
Answer: call `pd.DataFrame` on `seriesmap` then use `to_json`
pd.DataFrame(seriesmap).to_json()
'{"key1":{"0":0.8513342674,"1":-1.3357052602,"2":0.2102391775,"3":-0.5957492995,"4":0.2356552588}}'
|
Flask return multiple variables?
Question: I am learning Flask with Python. My python skills are okay, but I have no
experience with web apps. I have a form that takes some information and I want
to display it back after it is submitted. I can do that part, however I can
only return one variable from that form even tho there are 3 variables in the
form. I can return each one individually but not all together. If I try all 3,
I get a 500 error. Here is the code I am working with:
from flask import Blueprint
from flask import render_template
from flask import request
simple_page = Blueprint('simple_page', __name__)
@simple_page.route('/testing', methods=['GET', 'POST'])
def my_form():
if request.method=='GET':
return render_template("my-form.html")
elif request.method=='POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
cellphone = request.form['cellphone']
return firstname, lastname, cellphone
If I change the last return line to:
return firstname
it works, or:
return lastname
or:
return cellphone
If I try two variables it will only return the first, once I add the 3rd I get
the 500 error. I am sure I am doing something silly, but even with tons of
googling I could not get it figured out. Any help would be great. Thank you.
Answer: Flask requires either a `str` or a `Response` to be returned; in your case you
are attempting to return a `tuple`.
You can either return your `tuple` as a formatted `str`
return '{} {} {}'.format(firstname, lastname, cellphone)
Or you can pass the values into another `template`
return render_template('my_other_template.html',
firstname=firstname,
lastname=lastname,
cellphone=cellphone)
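If the form is consumed as an API, returning JSON is another common option (a sketch):

    from flask import jsonify

    return jsonify(firstname=firstname, lastname=lastname, cellphone=cellphone)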
|
How to delete a line(input) in the list in Python
Question: I have a text file which has multiple lines.
The line format is [studentnumber, course, specialisation]:
studentnumber as ID, course name, specialisation.
How can I delete the line containing a specific ID given user input?
for example:
ID = 12345678
if the ID 12345678 exists, delete the line: "[12345678, ABC, DE]" in the text
and print "The student was deleted from the file"
and if not print "The student does not exist in the file"?
This is what i have so far:
studentList = []
studentFile = open ("students1.txt", "r")
for line in studentFile:
smallList = (line.rstrip()).split()
studentList.append(smallList)
studentFile.close()
deleteNumber = input("Enter number: ")
if len(deleteNumber) == 0:
print("WARNING: Input can not blank -- Please do again!")
Delete()
while len(deleteNumber) != 8:
print("Student number must have 8 digits")
deleteNumber = input("Please try again: ")
with open("students1.txt") as studentList, \
open("students1.txt", "w") as studentList:
for line in studentList:
if deleteNumber not in line:
studentList.write(line)
print("The student was deleted from the file…!")
else:
print("The student does not exist in the file…!")
It did not work. My lecturer said that I have to delete from the list first and
then copy the list back to the file. However, I do not know how.
Thank you very much for helping!
Answer: Try something like:
import re
id_to_delete = input('Enter ID u want to delete from file: ')
filename = 'test.txt'
get_id = re.compile(r'\d+')
with open(filename, 'r+') as f:
    for idx, line in enumerate(f, 1):
        line_id = get_id.search(line)
        if line_id is not None and line_id.group() == id_to_delete:
            print('{data} at line {line_no}'.format(data=line, line_no=idx))
            break
    else:
        print('given {id} cannot be found in file'.format(id=id_to_delete))
It prints the line number if the given ID exists (as you said); however, it
does not delete it from the file.
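A sketch of the approach your lecturer suggested: read the lines into a list, drop the matching entry, then write the list back (this assumes each line starts with the ID):

    deleteNumber = input("Enter number: ")

    with open("students1.txt") as studentFile:
        lines = studentFile.readlines()

    kept = [line for line in lines if not line.startswith(deleteNumber)]

    if len(kept) < len(lines):
        with open("students1.txt", "w") as studentFile:
            studentFile.writelines(kept)
        print("The student was deleted from the file...!")
    else:
        print("The student does not exist in the file...!")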
|
python pandas/numpy quick way of replacing all values according to a mapping scheme
Question: let's say I have a huge pandas data frame/numpy array where each element is a
list of ordered values:
sequences = np.array([[12431253, 123412531, 12341234, 12431253, 145345],
                      [5463456, 1244562, 23452],
                      [243524, 141234, 12431253, 456367],
                      [456345, 253451],
                      [75635, 14145, 12346, 12431253]])
or,
sequences = pd.DataFrame({'sequence': [[12431253, 123412531, 12341234,12431253, 145345],
[5463456, 1244562, 23452],
[243524, 141234, 456367,12431253],
[456345, 253451],
[75635, 14145, 12346,12431253]]})
and I want to replace them with another set of identifiers that start from 0,
so I design a mapping like this:
from compiler.ast import flatten
from sets import Set
mapping = pd.DataFrame({'v0': list(Set(flatten(sequences['sequence']))), 'v1': range(len(Set(flatten(sequences['sequence']))))})
......
so the result I was looking for:
sequences = np.array([1, 2, 3,1, 4], [5, 6, 7], [8, 9, 10,1], [11, 12], [13, 14, 15,1])
how can I scale this up to a huge data frame/numpy array of sequences?
Thanks so much for any guidance! Greatly appreciated!
Answer: Here's an approach that flattens into a `1D` array, uses `np.unique` to assign
unique IDs to each element and then splits back into list of arrays -
lens = np.array(map(len,sequences))
seq_arr = np.concatenate(sequences)
ids = np.unique(seq_arr,return_inverse=1)[1]
out = np.split(ids,lens[:-1].cumsum())
Sample run -
In [391]: sequences = np.array([[12431253, 123412531, 12341234,12431253, 145345],
...: [5463456, 1244562, 23452],
...: [243524, 141234,12431253, 456367],
...: [456345, 12431253],
...: [75635, 14145, 12346,12431253]])
In [392]: out
Out[392]:
[array([12, 13, 11, 12, 5]),
array([10, 9, 2]),
array([ 6, 4, 12, 8]),
array([ 7, 12]),
array([ 3, 1, 0, 12])]
In [393]: np.array(map(list,out)) # If you need NumPy array as final o/p
Out[393]:
array([[12, 13, 11, 12, 5], [10, 9, 2], [6, 4, 12, 8], [7, 12],
[3, 1, 0, 12]], dtype=object)
|
AJAX + Flask update server request when filling form
Question: On the Flask website there's a tutorial on how to use AJAX, and this is an
example that displays the sum of two numbers.
This is the python app:
from flask import Flask, render_template, request, jsonify
# Initialize the Flask application
app = Flask(__name__)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/_add_numbers')
def add_numbers():
a = request.args.get('a', 0, type=int)
b = request.args.get('b', 0, type=int)
return jsonify(result=a + b)
if __name__ == '__main__':
app.run(
host="0.0.0.0",
port=int("80"),
debug=True
)
This is the HTML file
<!DOCTYPE html>
<html lang="en">
<head>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<link href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"
rel="stylesheet">
<script type=text/javascript>
$(function() {
$('a#calculate').bind('click', function() {
$.getJSON('/_add_numbers', {
a: $('input[name="a"]').val(),
b: $('input[name="b"]').val()
}, function(data) {
$("#result").text(data.result);
});
return false;
});
});
</script>
</head>
<body>
<div class="container">
<div class="header">
<h3 class="text-muted">How To Manage JSON Requests</h3>
</div>
<hr/>
<div>
<p>
<input type="text" size="5" name="a"> +
<input type="text" size="5" name="b"> =
<span id="result">?</span>
<p><a href="javascript:void();" id="calculate">calculate server side</a>
</form>
</div>
</div>
</body>
</html>
I see that with this example one is able to send a request to the server every
time one clicks on the link, however, I would like to know if it's possible to
skip this step and send the request every time the text entered by the user
changes.
Answer: Yes. It's wasteful of resources, but you could change
`$('a#calculate').bind('click', function() {` to
`$('input[name="a"]').change(function() {`
And do the same for input `b`
* * *
# Edit:
And, to test AS YOU TYPE:
`$('input[name="a"]').on('input',function(e){`
or:
`$('input[name="a"]').keyup(function(e){`
|
Parse an XML file to get a full tag by using Python's lxml package
Question: I've got the following XML file:
<root>
<scene name="scene1">
<view ath="0" atv="10"/>
<image url="img1.jgp"/>
<hotspot name="hot1"/>
</scene>
<scene name="scene2">
<view ath="20" atv="10"/>
<image url="img2.jgp"/>
<hotspot name="hot2"/>
</scene>
</root>
I'm writing a Python script using the lxml package, to get the entire `view` tag
within `scene1`. That is:
<view ath="0" atv="10" />
I've read the lxml documentation but all I can find is how to get the tag, its
attributes or its content, but not the entire tag.
Can anybody at least point me in the right direction? Does lxml have a function
or a method to achieve this?
Thanks,
Rafael
Answer: Your given XML source contains some errors; I fixed those, see my source
below:
from lxml import etree
source = """
<root>
<scene name="scene1">
<view ath="0" atv="10" />
<image url="img1.jgp" />
<hotspot name="hot1" />
</scene>
<scene name="scene2">
<view ath="20" atv="10" />
<image url="img2.jgp" />
<hotspot name="hot2" />
</scene>
</root>
"""
To parse this source you will create an etree:
tree = etree.fromstring(source)
(For source coming from a file, use `etree.parse()` instead.)
Now you can browse through the parsed XML by accessing `tree` properly. My
favorite way of doing so is by navigating with XPaths (mastering these is out
of scope of your question):
allViews = tree.xpath('//root/scene/view')
for view in allViews:
print view.attrib
This will print all XML attributes for each view tag found by the XPath:
{'atv': '10', 'ath': '0'}
{'atv': '10', 'ath': '20'}
Of course you can also access the other attributes of the view elements like
their embedded text (which is empty here of course) or their subelements
(children) (of course, in your example they also do not have children).
The wording of your question suggests that you might not have built up an
understanding of the fact that this `view` object is indeed "the entire view
tag". You can ask the `view` object for the tag it is made up of (`view`), for
its attributes (see above), its contents (`view.text`) and even its
subelements (`view.getchildren()`, but there are none).
You can convert the parsed XML structure back to an ASCII representation by
calling `etree.tostring(view)`; this will return a string like `'<view
ath="20" atv="10"/>\n '`. In most cases you will not do this.
You can also access the elements via the children of the elements:
print tree.getchildren()[1].getchildren()[0].attrib
This will print the XML attributes of the 0th child (a `view`) of the first
child (a `scene`) of the `tree` element:
{'atv': '10', 'ath': '20'}
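To grab specifically the `view` inside `scene1`, a predicate on the XPath plus `etree.tostring` gives you the whole tag (a sketch building on the code above):

    view = tree.xpath('//root/scene[@name="scene1"]/view')[0]
    print etree.tostring(view)   # <view ath="0" atv="10"/>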
|
Python: Break down large file, filter based on criteria, and put all data into new csv file
Question: I have a super large csv.gzip file that has 59 million rows. I want to filter
that file for certain rows based on certain criteria and put all those rows in
a new master csv file. As of now, I broke the gzip file into 118 smaller csv
files and saved them on my computer. I did that with the following code:
import pandas as pd
num = 0
df = pd.read_csv('google-us-data.csv.gz', header = None,
compression = 'gzip', chunksize = 500000,
names = ['a','b','c','d','e','f','g','h','i','j','k','l','m'],
error_bad_lines = False, warn_bad_lines = False)
for chunk in df:
num = num + 1
chunk.to_csv('%ggoogle us' % num, sep='\t', encoding='utf-8')
The code above worked perfectly and I now have a folder with my 118 small
files. I then wrote code to go through the 118 files one by one, extract rows
that matched certain conditions, and append them all to a new csv file that
I've created and named 'google final us'. Here is the code:
import pandas as pd
import numpy
for i in range(1, 118):
file = open('google final us.csv','a')
df = pd.read_csv('%ggoogle us'%i, error_bad_lines = False,
warn_bad_lines = False)
df_f = df.loc[(df['a']==7) & (df['b'] == 2016) & (df['c'] =='D') &
(df['d'] =='US')]
file.write(df_f)
Unfortunately, the code above is giving me the below error:
KeyError Traceback (most recent call last)
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1875 try:
-> 1876 return self._engine.get_loc(key)
1877 except KeyError:
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-9-0ace0da2fbc7> in <module>()
3 file = open('google final us.csv','a')
4 df = pd.read_csv('1google us')
----> 5 df_f = df.loc[(df['a']==7) & (df['b'] == 2016) &
(df['c'] =='D') & (df['d'] =='US')]
6 file.write(df_f)
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
__getitem__(self, key)
1990 return self._getitem_multilevel(key)
1991 else:
-> 1992 return self._getitem_column(key)
1993
1994 def _getitem_column(self, key):
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
_getitem_column(self, key)
1997 # get column
1998 if self.columns.is_unique:
-> 1999 return self._get_item_cache(key)
2000
2001 # duplicate columns & possible reduce dimensionality
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\generic.py in
_get_item_cache(self, item)
1343 res = cache.get(item)
1344 if res is None:
-> 1345 values = self._data.get(item)
1346 res = self._box_item_values(item, values)
1347 cache[item] = res
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\internals.py in
get(self, item, fastpath)
3223
3224 if not isnull(item):
-> 3225 loc = self.items.get_loc(item)
3226 else:
3227 indexer = np.arange(len(self.items))
[isnull(self.items)]
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1876 return self._engine.get_loc(key)
1877 except KeyError:
-> 1878 return
self._engine.get_loc(self._maybe_cast_indexer(key))
1879
1880 indexer = self.get_indexer([key], method=method,
tolerance=tolerance)
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
Any ideas what's going wrong? I've read numerous other stackoverflow posts
(eg. [Create dataframes from unique value pairs by filtering across multiple
columns](http://stackoverflow.com/questions/38215009/create-dataframes-from-
unique-value-pairs-by-filtering-across-multiple-columns) or [How can I break
down a large csv file into small files based on common records by
python](http://stackoverflow.com/questions/33839540/how-can-i-break-down-a-
large-csv-file-into-small-files-based-on-common-records-b)), but still not
sure how to do this. Also, if you have a better way to extract data than this
method - please let me know!
Answer: When you use file.write(df_f) you are effectively saving a string
representation of the DataFrame, which is meant for humans to look at. By
default that representation will truncate rows and columns so that large
frames can be displayed on the screen in a sensible manner. As a result column
"a" may get chopped.
with open('google final us.csv','a') as file:
    for i in range(1, 118):
        header = (i == 1)
        ...
        df_f.to_csv(file, header=header)
I did not test the above snippet, but you should get an idea how to get going
now.
There are other issues with this code, which you may want to correct:
1. Open the file to write before the loop, close it after. Best to use context manager.
2. If the entire data fits in memory why go through a trouble to split it into 118 files? Simply filter it and save the resulting DataFrame using df.to_csv() method.
3. Instead of pandas consider using csv.DictReader and filter the lines on the fly.
Lastly, if this is a one-time job, why even write code for something that you
could accomplish with a grep command (on Unix-like systems)?
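Building on point 2, here is a sketch of filtering while reading in chunks, so the 118 intermediate files are never needed (column names and conditions taken from your code):

    import pandas as pd

    names = ['a','b','c','d','e','f','g','h','i','j','k','l','m']
    chunks = pd.read_csv('google-us-data.csv.gz', header=None,
                         compression='gzip', chunksize=500000, names=names,
                         error_bad_lines=False, warn_bad_lines=False)

    first = True
    for chunk in chunks:
        matched = chunk.loc[(chunk['a'] == 7) & (chunk['b'] == 2016) &
                            (chunk['c'] == 'D') & (chunk['d'] == 'US')]
        matched.to_csv('google final us.csv', mode='a',
                       header=first, index=False)
        first = False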
|
traversing folders, several subfolders for the files in python
Question: I've a folder structure similar to what's outlined below.
Path
|
|
+----SubDir1
| |
| +---SubDir1A
| | |
| | |----- FileA.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileA.1001.ext
| | |----- FileB.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileB.1001.ext
| +---SubDir1B
| | |
| | |----- FileA.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileA.1001.ext
| | |----- FileB.0001.ext
| | |----- ...
| | |----- ...
| | |----- FileB.1001.ext
+----SubDir2
| |
| |----- FileA.0001.ext
| |----- ...
| |----- ...
| |----- FileA.1001.ext
| |----- FileB.0001.ext
| |----- ...
| |----- ...
| |----- FileB.1001.ext
I want to be able to list the first FileA and first FileB for each SubDir1 and
SubDir2
I've looked online and seen os.walk in a for loop, similar to:
import os
rootDir = '.'
for dirName, subdirList, fileList in os.walk(rootDir):
print('Found directory: %s' % dirName)
for fname in fileList:
print('\t%s' % fname)
# Remove the first entry in the list of sub-directories
# if there are any sub-directories present
if len(subdirList) > 0:
del subdirList[0]
But that seems to only work if there's a file directly inside a subdirectory.
My problem is that sometimes there's an additional subdirectory inside the
subdirectory(!!)
Does anyone have any ideas how to solve this?
Answer: Your issue is actually those two lines; remove them and you should be fine:
if len(subdirList) > 0:
del subdirList[0]
**Explanation** :
What they do is **they make the first subdirectory inside each directory
disappear before `os.walk` has had time to walk it**. So it is not surprising that
you get weird behaviour regarding subdirectories.
Here's an illustration of that behaviour using the following tree :
test0/
├── test10
│ ├── test20
│ │ └── testA
│ ├── test21
│ │ └── testA
│ └── testA
├── test11
│ ├── test22
│ │ └── testA
│ ├── test23
│ │ └── testA
│ └── testA
└── testA
**Without** the problematic lines:
Found directory: ./test/test0
testA
Found directory: ./test/test0/test10
testA
Found directory: ./test/test0/test10/test21
testA
Found directory: ./test/test0/test10/test20
testA
Found directory: ./test/test0/test11
testA
Found directory: ./test/test0/test11/test22
testA
Found directory: ./test/test0/test11/test23
testA
**With** the problematic lines:
Found directory: ./test/test0
testA
Found directory: ./test/test0/test11
testA
Found directory: ./test/test0/test11/test23
testA
So we clearly see that the two subfolders `test10` and `test22` that were
first in line have been ignored altogether because of the "bad lines".
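With those lines removed, a sketch for the original goal of printing the first `FileA` and first `FileB` found in each directory (the name prefixes are assumptions based on your tree):

    import os

    rootDir = '.'
    for dirName, subdirList, fileList in os.walk(rootDir):
        for prefix in ('FileA', 'FileB'):
            matches = sorted(f for f in fileList if f.startswith(prefix))
            if matches:
                print(os.path.join(dirName, matches[0]))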
|
Factory class with abstractmethod
Question: I've created a factory class called `FitFunction` that adds a whole bunch of
stuff beyond what I've shown. The label method `pretty_string` is supposed to
just return the string as written. When I run this file, it prints a string
that is as useful as the `repr`. Does someone know how I would go about
implementing this?
#!/usr/bin/env python
from __future__ import print_function, absolute_import
import abc
import types
import numpy as np
class FitFunction(object):
def __init__(self, python_function):
assert isinstance(python_function, types.FunctionType)
self._py_function = python_function
@abc.abstractmethod
def pretty_string():
r"""
Return some pretty string.
"""
class Gaussian(FitFunction):
def __init__(self):
def gaussian(x, mu, sigma, A):
coeff = (np.sqrt(2.0 * np.pi) * sigma)**(-1.0)
arg = -.5 * (((x - mu) / sigma)**2.0)
return A * coeff * np.exp(arg)
FitFunction.__init__(self, gaussian)
@staticmethod
def pretty_string():
return "1D Gaussian"
if __name__ == "__main__":
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string() )
I subclass `FitFunction` to create `Gaussian` because I apply `Gaussian` to
many different data sets with the same parameters so that I can compare the
output.
For reference, this is what happens when I execute the file:
me$ ./FitFunction_SO_test.py
Gaussian.pretty_string: <bound method Gaussian.pretty_string of <__main__.Gaussian object at 0x1005e2f90>>
I'm looking for the following result:
me$ ./FitFunction_SO_test.py
Traceback (most recent call last):
File "./FitFunction_SO_test.py", line 43, in <module>
print("Gaussian.pretty_string: %s" % Gaussian().pretty_string())
TypeError: pretty_string() takes no arguments (1 given)
Answer: Use:
print("Gaussian.pretty_string: %s" % Gaussian.pretty_string())
Or else you are printing the `repr` of the _method_ , not the _result of
calling the method_ , which is the string you are looking for.
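As a side note, `@abc.abstractmethod` only has an effect when the class uses `ABCMeta` as its metaclass, which the posted `FitFunction` does not; a sketch of the missing piece (Python 2 syntax, to match the code above):

    import abc

    class FitFunction(object):
        __metaclass__ = abc.ABCMeta  # in Python 3: class FitFunction(abc.ABC)

        @abc.abstractmethod
        def pretty_string(self):
            """Return some pretty string."""

With that in place, instantiating a subclass that does not override `pretty_string` raises a `TypeError`; `Gaussian` still works because it provides an implementation.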
|
wxPython: Redirecting events on other widgets (TextCtrl)
Question: The case study doesn't seem too hard to explain, but I guess TextCtrl in
wxPython aren't often used in this way. So here it is: I have a simple window
with two TextCtrls. One is an input widget (the user is supposed to enter
commands there), the second is an output widget (the system displays the
result of the commands). The output field is a read-only TextCtrl, only the
system can write in it.
So far so good. Now, I would like to intercept events in the output widget: If
users type in this output field (a read-only widget), they should be
redirected to the input field and the text they have begun typing should
appear there. The first part isn't complicated: I intercept the EVT_KEY_DOWN
on the output widget and can do something like self.input.SetFocus(). However,
the key that has been pressed by the user is lost. If he/she began to type
something, she has to start over again. This is supposed to be a shortcut
feature (no matter in what field the user type, it should be written in the
input widget).
A short note on why I do this, since it can seem quite stupid: Sighted users
don't often fool around with read-only widgets; they see them and leave them
alone. This application is mostly designed for users with screen readers, who
have to move around the output field. The cursor is therefore often there, and
a key press doesn't have any effect (since it's a read-only widget). It would
be great if, on typing in the output widget, the user was redirected to the
input field, with the text he was typing already in this widget.
import wx
class MyFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
self.panel = MyPanel(self)
self.Show()
self.Maximize()
class MyPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
sizer = wx.BoxSizer(wx.VERTICAL)
self.SetSizer(sizer)
# Input field
self.input = wx.TextCtrl(self, -1, "", size=(125, -5),
style=wx.TE_PROCESS_ENTER)
# Output
self.output = wx.TextCtrl(self, -1, "",
size=(600, 400), style=wx.TE_MULTILINE|wx.TE_READONLY)
# Add the output fields in the sizer
sizer.Add(self.input)
sizer.Add(self.output, proportion=8)
# Event handler
self.output.Bind(wx.EVT_KEY_DOWN, self.OnKeyDown)
def OnKeyDown(self, e):
"""A key is pressed in the output widget."""
modifiers = e.GetModifiers()
key = e.GetUnicodeKey()
if not key:
key = e.GetKeyCode()
print "From there, we should redirect to the input"
self.input.SetFocus()
# Let's run that
app = wx.App()
MyFrame(None)
app.MainLoop()
Answer: Give `self.input.EmulateKeyPress(e)` a try. If you're on Windows it should
work fine. On other platforms it is not perfect, but basically works there
too.
Other options would be to use `wx.UiActionSimulator`, or simply to append the
new character to the input textctrl in your code.
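A sketch of the first option wired into the question's handler:

    def OnKeyDown(self, e):
        """Forward typing in the read-only output to the input field."""
        self.input.SetFocus()
        self.input.EmulateKeyPress(e)  # replay the pressed key in the input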
|
subprocess.Popen: 'OSError: [Errno 2] No such file or directory' only on Linux
Question: > This is not a duplicate of [subprocess.Popen: 'OSError: [Errno 13]
> Permission denied' only on
> Linux](http://stackoverflow.com/questions/39777345/subprocess-popen-oserror-
> errno-13-permission-denied-only-on-linux) as that problem occurred due to
> wrong permissions. That has been fixed and this is an entirely different
> problem.
When my code (given below) executes on Windows (both my laptop and AppVeyor
CI), it does what it's supposed to do. But on Linux (VM on TravisCI), it
throws me a file not found error.
I am executing in `/home/travis/build/sayak-brm/espeak4py/`.
`ls -l` outputs:
$ ls -l
total 48
-rw-rw-r-- 1 travis travis 500 Sep 29 20:14 appveyor.yml
drwxrwxr-x 3 travis travis 4096 Sep 29 20:14 espeak4py
-rw-rw-r-- 1 travis travis 32400 Sep 29 20:14 LICENSE.md
-rw-rw-r-- 1 travis travis 2298 Sep 29 20:14 README.md
-rw-rw-r-- 1 travis travis 0 Sep 29 20:14 requirements.txt
-rw-rw-r-- 1 travis travis 759 Sep 29 20:14 test.py
$ ls -l espeak4py
total 592
-rwxr-xr-x 1 travis travis 276306 Sep 30 06:42 espeak
drwxrwxr-x 5 travis travis 4096 Sep 29 20:14 espeak-data
-rw-rw-r-- 1 travis travis 319488 Sep 29 20:14 espeak.exe
-rw-rw-r-- 1 travis travis 1125 Sep 29 20:14 __init__.py
$ ls -l /home/travis/build/sayak-brm/espeak4py/espeak4py
total 592
-rwxr-xr-x 1 travis travis 276306 Sep 30 06:42 espeak
drwxrwxr-x 5 travis travis 4096 Sep 30 06:42 espeak-data
-rw-rw-r-- 1 travis travis 319488 Sep 30 06:42 espeak.exe
-rw-rw-r-- 1 travis travis 1216 Sep 30 06:42 __init__.py
which shows that the files are where they are supposed to be.
The `espeak` file is a Linux ELF Binary.
* * *
**Error:**
$ python3 test.py
Testing espeak4py
Testing wait4prev
Traceback (most recent call last):
File "test.py", line 10, in <module>
mySpeaker.say('Hello, World!')
File "/home/travis/build/sayak-brm/espeak4py/espeak4py/__init__.py", line 38, in say
self.prevproc = subprocess.Popen(cmd, executable=self.executable, cwd=os.path.dirname(os.path.abspath(__file__)))
File "/opt/python/3.2.6/lib/python3.2/subprocess.py", line 744, in __init__
restore_signals, start_new_session)
File "/opt/python/3.2.6/lib/python3.2/subprocess.py", line 1394, in _execute_child
raise child_exception_type(errno_num, err_msg)
OSError: [Errno 2] No such file or directory: '/home/travis/build/sayak-brm/espeak4py/espeak4py/espeak'
* * *
**Code:**
`espeak4py/__init__.py`:
#! python3
import subprocess
import os
import platform
class Speaker:
"""
Speaker class for differentiating different speech properties.
"""
def __init__(self, voice="en", wpm=120, pitch=80):
self.prevproc = None
self.voice = voice
self.wpm = wpm
self.pitch = pitch
if platform.system() == 'Windows': self.executable = os.path.dirname(os.path.abspath(__file__)) + "/espeak.exe"
else: self.executable = os.path.dirname(os.path.abspath(__file__)) + "/espeak"
def generateCommand(self, phrase):
cmd = [
self.executable,
"--path=.",
"-v", self.voice,
"-p", self.pitch,
"-s", self.wpm,
phrase
]
cmd = [str(x) for x in cmd]
return cmd
def say(self, phrase, wait4prev=False):
cmd=self.generateCommand(phrase)
if wait4prev:
try: self.prevproc.wait()
except AttributeError: pass
else:
try: self.prevproc.terminate()
except AttributeError: pass
self.prevproc = subprocess.Popen(cmd, executable=self.executable, cwd=os.path.dirname(os.path.abspath(__file__)))
`test.py`:
#! python3
import espeak4py
import time
print('Testing espeak4py\n')
print('Testing wait4prev')
mySpeaker = espeak4py.Speaker()
mySpeaker.say('Hello, World!')
time.sleep(1)
mySpeaker.say('Interrupted!')
time.sleep(3)
mySpeaker.say('Hello, World!')
time.sleep(1)
mySpeaker.say('Not Interrupted.', wait4prev=True)
time.sleep(5)
print('Testing pitch')
myHighPitchedSpeaker = espeak4py.Speaker(pitch=120)
myHighPitchedSpeaker.say('I am a demo of the say function')
time.sleep(5)
print('Testing wpm')
myFastSpeaker = espeak4py.Speaker(wpm=140)
myFastSpeaker.say('I am a demo of the say function')
time.sleep(5)
print('Testing voice')
mySpanishSpeaker = espeak4py.Speaker(voice='es')
mySpanishSpeaker.say('Hola. Como estas?')
print('Testing Completed.')
* * *
I don't understand why it works only on one platform and not the other.
Travis CI Logs: <https://travis-ci.org/sayak-brm/espeak4py>
AppVeyor Logs: <https://ci.appveyor.com/project/sayak-brm/espeak4py>
GitHub: <https://sayak-brm.github.io/espeak4py>
Answer: I've tested your `espeak` Python wrapper on Linux, and it works for me.
Probably it's just an issue with Windows trailing `\r` characters. You could
try the following:
sed -i 's/^M//' espeak4py/__init__.py
To enter the `^M`, type `Ctrl-V` followed by `Ctrl-M`, and see if running that
`sed` command solves the issue.
|
Python compute a specific inner product on vectors
Question: Assume we have two arrays of shapes (m, 6) and (n, 6):
import numpy as np
a = np.random.random((m, 6))
b = np.random.random((n, 6))
using np.inner works as expected and yields
np.inner(a,b).shape
(m,n)
with every element being the scalar product of each combination. I now want to
compute a special inner product (namely Plucker). Right now im using
def pluckerSide(a,b):
a0,a1,a2,a3,a4,a5 = a
b0,b1,b2,b3,b4,b5 = b
return a0*b4+a1*b5+a2*b3+a4*b0+a5*b1+a3*b2
with `a`, `b` sliced row by row in a for loop, which is way too slow. All my
attempts at vectorizing fail, mostly with broadcast errors due to wrong shapes.
I can't get `np.vectorize` to work either. Maybe someone can help here?
Answer: The function `pluckerSide` pairs up elements of the two input arrays according
to a fixed pattern of indices, multiplies them and sums. So, I would list out
those indices, index into the arrays with
those and finally use `matrix-multiplication` with
[`np.dot`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
to perform the sum-reduction.
Thus, one approach would be like this -
a_idx = np.array([0,1,2,4,5,3])
b_idx = np.array([4,5,3,0,1,2])
out = a[a_idx].dot(b[b_idx])
If you are doing this in a loop across all rows of `a` and `b` and thus
generating an output array of shape `(m,n)`, we can vectorize that, like so -
out_all = a[:,a_idx].dot(b[:,b_idx].T)
To make things a bit easier, we can re-arrange `a_idx` such that it becomes
`range(6)` and re-arrange `b_idx` with that pattern. So, we would have :
a_idx = np.array([0,1,2,3,4,5])
b_idx = np.array([4,5,3,2,0,1])
Thus, we can skip indexing into `a` and the solution would be simply -
a.dot(b[:,b_idx].T)
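A quick sanity check of the vectorized form against the original `pluckerSide` loop (array sizes are made up for the test):

    m, n = 4, 3
    a = np.random.random((m, 6))
    b = np.random.random((n, 6))
    b_idx = np.array([4,5,3,2,0,1])

    out = a.dot(b[:, b_idx].T)
    expected = np.array([[pluckerSide(ra, rb) for rb in b] for ra in a])
    assert np.allclose(out, expected)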
|
Speckle ( Lee Filter) in Python
Question: I am trying to do speckle noise removal on a satellite SAR image. I cannot
find any package which does speckle noise removal on SAR images. I have
tried pyradar, but it works with Python 2.7 and I am working on Anaconda with
Python 3.5 on Windows. Also RSGISLib is available, but it is for Linux. Joseph
Meiring has also given Lee filter code on GitHub, but it fails to work:
<https://github.com/reptillicus/LeeFilter>
Kindly, can anyone share the python script for Speckle Filter or how to
proceed for speckle filter design in python.
Answer: This is a fun little problem. Rather than try to find a library for it, why
not write it from the definition?
from scipy.ndimage.filters import uniform_filter
from scipy.ndimage.measurements import variance
def lee_filter(img, size):
img_mean = uniform_filter(img, (size, size))
img_sqr_mean = uniform_filter(img**2, (size, size))
img_variance = img_sqr_mean - img_mean**2
overall_variance = variance(img)
img_weights = img_variance**2 / (img_variance**2 + overall_variance**2)
img_output = img_mean + img_weights * (img - img_mean)
return img_output
If you don't want the window to be a square of size x size, just replace
uniform_filter with something else (convolution with a disk, etc)
This seems rather old-fashioned as a filter, it won't behave well at edges.
You may want to look into modern edge-aware denoisers, like guided filter or
bilateral filter.
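A minimal usage sketch, assuming a 2-D float image (the synthetic gamma noise here just stands in for real SAR data):

    import numpy as np

    img = np.random.gamma(shape=1.0, scale=1.0, size=(256, 256))
    despeckled = lee_filter(img, size=7)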
|
Neural Network Inception v3 doesn't create labels
Question: I am facing an error with testing the Neural Network Inception v3 and
Tensorflow.
I activated and trained the model this way with Python:
source tf_files/tensorflow/bin/activate
python tf_files/tensorflow/examples/image_retraining/retrain.py --bottleneck_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/data
Which gave me the following error:
> CRITICAL:tensorflow:Label kiwi has no images in the category testing.
`Kiwi` is a folder which contains images. The other folder, called `Apples`,
gave me no error. But maybe the error occurs because `Kiwi` contains fewer
than 20 images. The script also doesn't create a file called `retrained_labels.txt`.
So when executing the following command it gives me an error saying it
couldn't find the file, which is mentioned above.
python image_label.py apple.jpg
Everything is in it's folders and the content of `image_label.py` is:
import tensorflow as tf
import sys
# change this as you see fit
image_path = sys.argv[1]
# Read in the image_data
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
# Loads label file, strips off carriage return
label_lines = [line.rstrip() for line
in tf.gfile.GFile("tf_files/retrained_labels.txt")]
# Unpersists graph from file
with tf.gfile.FastGFile("tf_files/retrained_graph.pb", 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
_ = tf.import_graph_def(graph_def, name='')
with tf.Session() as sess:
# Feed the image_data as input to the graph and get first prediction
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, \
{'DecodeJpeg/contents:0': image_data})
# Sort to show labels of first prediction in order of confidence
top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
for node_id in top_k:
human_string = label_lines[node_id]
score = predictions[0][node_id]
print('%s (score = %.5f)' % (human_string, score))
Answer: I solved it. The error occurred **because the folder hadn't got enough images
to train with**. So after increasing the number of the images from 14 to 38 it
gives me the predictions!
|
Python: edit matrix row in parallel
Question: here is my problem:
I would like to define an array of persons and change the entries of this
array in a for loop. Since I also would like to see the asymptotics of the
resulting distribution, I want to repeat this simulation quite a lot, thus I'm
using a matrix to store the several array in each row. I know how to do this
with two for loops:
import random
import numpy as np
nobs = 100
rep = 10**2
steps = 10**2
dmoney = 1
state = np.matrix([[10] * nobs] * rep)
for i in range(steps):
for j in range(rep):
sample = random.sample(range(state.shape[1]),2)
state[j,sample[0]] = state[j,sample[0]] + dmoney
state[j,sample[1]] = state[j,sample[1]] - dmoney
I thought I use the multiprocessing library but I don't know how to do it,
because in my simple mind, the workers manipulate the same global matrix in
parallel, which I read is not a good idea.
So, how can I do this, to speed up calculations?
Thanks in advance.
Answer: OK, so this might not be much use, I haven't profiled it to see if there's a
speed-up, but list comprehensions will be a little faster than normal loops
anyway.
...
y_ix = np.arange(rep) # create once as same for each loop
for i in range(steps):
# presumably the two locations in the population to swap need refreshing each loop
x_ix = np.array([np.random.choice(nobs, 2, replace=False) for j in range(rep)])  # without replacement, like random.sample
state[y_ix, x_ix[:,0]] += dmoney
state[y_ix, x_ix[:,1]] -= dmoney
PS what numpy splits over multiple processors depends on what libraries have
been included when compiled (BLAS etc.). You will be able to find info online
about this.
EDIT I can confirm, after comparing the original with the numpy indexed
version above, that the original method is faster!
|
Multiple Choice Quiz With Randomising Answer Positions PYTHON 3.5.2
Question: I am creating a cue card quiz in which a keyword from a text file is chosen at
random, the program should then show the correct definition along with 2 other
incorrect definitions that are in the text file as well. So far I have the
keyword, the correct definition and the 2 incorrect definitions stored inside
a list. Is there any way that I can randomise the order of the items in the
list so their positions change each time the user answers the question? I
have tried to look this up but I can't find anything.
Example:
"Keyword is frog 1: Frogs are blue 2: Frogs are Green 3: Frogs are purple "
But then the next time the keyword frog comes up they will be in different
orders.
"Keyword is frog 1: Frogs are green 2: Frogs are blue 3: Frogs are purple "
Answer: Here's one possible approach based on `random.shuffle()`. I've use objects to
separate the question from the answers, and followed your convention that the
first answer provided in constructing the question is the correct one. I've
chosen to shuffle the indices rather than the answers themselves, just to
illustrate that it's possible to maintain the original inputs as-is and still
attain a random ordering.
import random
class Question:
def __init__(self, q, a):
self.q = q
self.a = a
self.order = [i for i in range(len(a))]
random.shuffle(self.order)
def query(self):
print(self.q, [self.a[i] for i in self.order])
def correct_answer(self):
print("\tCorrect answer is", self.a[0])
q1 = Question("Frogs are:", ["green", "red", "purple"])
q1.query()
q1.correct_answer()
q2 = Question("Unicorns are:", ["mythological", "silver", "gold"])
q2.query()
q2.correct_answer()
which produces results such as:
Frogs are: ['purple', 'red', 'green']
Correct answer is green
Unicorns are: ['gold', 'mythological', 'silver']
Correct answer is mythological
Obviously this is just the sketch of a solution, input/output formatting and
presentation details would need to be worked out by you to your own
satisfaction.
|
Speed optimisation in Flask
Question: My project (Python 2.7) consists of a screen scraper that collects data once a
day, extracts what is useful and stores that in a couple of pickles. The
pickles are rendered to an HTML-page using Flask/Ninja. All that works, but
when running it on my localhost (Windows 10), it's rather slow. I plan to
deploy it on PythonAnywhere.
The site also has an about page. Content for the about page is a markdown
file that I convert to HTML using `markdown2` after each edit. The about-
template loads the HTML, like so:
{% include 'about_content.html' %}
This loads _much_ faster than letting `Flask-Markdown` render the about-text
(as I had at first):
{% filter markdown %}
{% include 'about_content.md' %}
{% endfilter %}
Now then. I'm a bit worried that the main page will not load fast enough when
I deploy the site. The content gets updated only once a day, there is no need
to re-render anything if you refresh the main page. So I'm wondering if I can
do a similar trick as with the about-content:
Can I let Flask, after rendering the pickles, save the result as html, and
then serve that from the site deployed? Or can I invoke some browser module,
save its output, and serve that? Or is that a bad idea altogether, and
shouldn't I worry because Flask is zoomingly fast on real life servers?
Answer: ## Your Question on Rendering
You can actually do a lot with Jinja. It is possible to run Jinja whenever you
want and save the result as an HTML file. This way, every time you send a request
for that file, it doesn't have to be rendered again; the static file is simply served.
Here is some code. I have a view that doesn't change throughout its lifetime,
so I create a static HTML file once the view is created.
from jinja2 import Environment, FileSystemLoader
def example_function():
'''Jinja templates are used to write to a new file instead of rendering when a request is received. Run this function whenever you need to create a static file'''
# I tell Jinja to use the templates directory
env = Environment(loader=FileSystemLoader('templates'))
# Look for the results template
template = env.get_template('results.html')
# You just render it once. Pass in whatever values you need.
# I'll only be changing the title in this small example.
output_from_parsed_template = template.render(title="Results")
with open("/directory/where/you/want/to/save/index.html", 'w') as f:
f.write(output_from_parsed_template)
# Flask route
@app.route('/directory/where/you/want/to/save/<path:path>')
def serve_static_file(path):
return send_from_directory("directory/where/you/want/to/save/", path)
Now if you go to the above URI,
`localhost:5000/directory/where/you/want/to/save/index.html` is served without
rendering.
**EDIT** Note that `@app.route` takes a URL, so
`/directory/where/you/want/to/save` must start at the root, otherwise you get
`ValueError: urls must start with a leading slash`. Also, you can save the
rendered page alongside the other templates and then route it as below, which
eliminates the need for `send_from_directory` (and is just as fast):
@app.route('/')
def index():
return render_template('index.html')
## Other Ways
If you want to get better performance consider serving your Flask app via
_gunicorn, nginx and the likes._
[Setting up nginx, gunicorn and
Flask](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-
applications-with-gunicorn-and-nginx-on-ubuntu-14-04)
[Don't use Flask in
production](http://stackoverflow.com/questions/33086555/why-shouldnt-flask-be-
deployed-with-the-built-in-server)
Flask also has an option to enable multithreading:
app.run(threaded=True)
|
Scrapy (python) TypeError: unhashable type: 'list'
Question: I have this simple Scrapy code; however, I get this error when I use
`response.urljoin(port_homepage_url)` (this portion of the code):
import re
import scrapy
from vesseltracker.items import VesseltrackerItem
class GetVessel(scrapy.Spider):
name = "getvessel"
allowed_domains = ["marinetraffic.com"]
start_urls = [
'http://www.marinetraffic.com/en/ais/index/ports/all/flag:AE',
]
def parse(self, response):
item = VesseltrackerItem()
for ports in response.xpath('//table/tr[position()>1]'):
item['port_name'] = ports.xpath('td[2]/a/text()').extract()
port_homepage_url = ports.xpath('td[7]/a/@href').extract()
port_homepage_url = response.urljoin(port_homepage_url)
yield scrapy.Request(port_homepage_url, callback=self.parse, meta={'item': item})
What could be wrong?
Here is the error log.
2016-09-30 17:17:13 [scrapy] DEBUG: Crawled (200) <GET http://www.marinetraffic.com/robots.txt> (referer: None)
2016-09-30 17:17:14 [scrapy] DEBUG: Crawled (200) <GET http://www.marinetraffic.com/en/ais/index/ports/all/flag:AE> (referer: None)
2016-09-30 17:17:14 [scrapy] ERROR: Spider error processing <GET http://www.marinetraffic.com/en/ais/index/ports/all/flag:AE> (referer: None)
Traceback (most recent call last):
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
for x in result:
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "/Users/noussh/python/vesseltracker/vesseltracker/spiders/marinetraffic.py", line 19, in parse
port_homepage_url = response.urljoin(port_homepage_url)
File "/Users/noussh/python/env/lib/python2.7/site-packages/scrapy/http/response/text.py", line 78, in urljoin
return urljoin(get_base_url(self), url)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urlparse.py", line 261, in urljoin
urlparse(url, bscheme, allow_fragments)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urlparse.py", line 143, in urlparse
tuple = urlsplit(url, scheme, allow_fragments)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urlparse.py", line 176, in urlsplit
cached = _parse_cache.get(key, None)
TypeError: unhashable type: 'list'
Answer: The `ports.xpath('td[7]/a/@href').extract()` returns a _list_ and when you try
to do the "urljoin" on it, it fails. Use `extract_first()` instead:
port_homepage_url = ports.xpath('td[7]/a/@href').extract_first()
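For context, here is a sketch of the loop with the fix applied (I've used `extract_first()` for the port name too, since it returns a list in the same way):
    def parse(self, response):
        item = VesseltrackerItem()
        for ports in response.xpath('//table/tr[position()>1]'):
            item['port_name'] = ports.xpath('td[2]/a/text()').extract_first()
            port_homepage_url = ports.xpath('td[7]/a/@href').extract_first()
            if port_homepage_url:  # some rows may have no link
                yield scrapy.Request(response.urljoin(port_homepage_url),
                                     callback=self.parse, meta={'item': item})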
|
Reverse redirect does not work but inserts data into db
Question:
from django.db import models
from django.core.urlresolvers import reverse
class Gallery(models.Model):
Title = models.CharField(max_length=250)
Category = models.CharField(max_length=250)
Gallery_logo = models.CharField(max_length=1000)
def get_absolute_url(self):
return reverse('photos:detail', kwargs={'pk', self.pk})
def __str__(self):
return self.Title + '_' + self.Gallery_logo
class Picture (models.Model):
Gallery = models.ForeignKey(Gallery, on_delete=models.CASCADE)
Title = models.CharField(max_length=250)
Artist = models.CharField(max_length=250)
Price = models.CharField(max_length=20)
interested = models.BooleanField(default=False)
def __str__(self):
return self.Title
I am getting this error below:
TypeError at /photos/gallery/add/
_reverse_with_prefix() argument after ** must be a mapping, not set
Request Method: POST
Request URL: http://127.0.0.1:8000/photos/gallery/add/
Django Version: 1.10.1
Exception Type: TypeError
Exception Value:
_reverse_with_prefix() argument after ** must be a mapping, not set
Exception Location: C:\Users\JK\AppData\Local\Programs\Python\Python35\lib\site-packages\django-1.10.1-py3.5.egg\django\urls\base.py in reverse, line 91
Python Executable: C:\Users\JK\AppData\Local\Programs\Python\Python35\python.exe
Python Version: 3.5.2
Python Path:
['C:\\Users\\JK\\PycharmProjects\\catalog',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages\\django-1.10.1-py3.5.egg',
'C:\\Users\\JK\\PycharmProjects\\catalog',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35\\python35.zip',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35\\DLLs',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35\\lib',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35',
'C:\\Users\\JK\\AppData\\Local\\Programs\\Python\\Python35\\lib\\site-packages']
Server time: Fri, 30 Sep 2016 17:15:55 +0300
Again, I am just a newbie.
Answer: You're passing `kwargs` to `reverse` as a _set_ , when it should be a
dictionary:
kwargs={'pk': self.pk}
# ^
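Applied to the model in the question, `get_absolute_url` becomes:
    def get_absolute_url(self):
        return reverse('photos:detail', kwargs={'pk': self.pk})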
|
How to get points coordinate position in the face landmark detection program of dlib?
Question: There is an example Python program in dlib to detect the face landmark
positions.
[face_landmark_detection.py](http://dlib.net/face_landmark_detection.py.html)
This program detects the face features and denotes the landmarks with dots and
lines in the original photo.
I wonder if it is possible to obtain each point's coordinate position, like
a(10, 25), where 'a' denotes a corner of the mouth.
After slightly modifying the program to process one picture at a time, I tried
to print out the value of dets and shape, without success.
>>>print(dets)
<dlib.dlib.rectangles object at 0x7f3eb74bf950>
>>>print(dets[0])
[(1005, 563) (1129, 687)]
The arguments that denote the face landmark points, and their datatype,
still remain unknown. And here is the simplified code:
import dlib
from skimage import io
#shape_predictor_68_face_landmarks.dat is the train dataset in the same directory
predictor_path = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
win = dlib.image_window()
#FDT.jpg is the picture file to be processed in the same directory
img = io.imread("FDT.jpg")
win.set_image(img)
dets = detector(img)
print("Number of faces detected: {}".format(len(dets)))
for k, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
k, d.left(), d.top(), d.right(), d.bottom()))
# Get the landmarks/parts for the face in box d.
shape = predictor(img, d)
#print(shape)
print("Part 0: {}, Part 1: {} ...".format(shape.part(0),
shape.part(1)))
# Draw the face landmarks on the screen.
win.add_overlay(shape)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
--------------------------- update on 3.10.2016 ---------------------------
Today, I remember the help() method in python and have a trial with it.
>>>help(predictor)
Help on shape_predictor in module dlib.dlib object:
class shape_predictor(Boost.Python.instance)
| This object is a tool that takes in an image region containing
some object and outputs a set of point locations that define the pose
of the object. The classic example of this is human face pose
prediction, where you take an image of a human face as input and are
expected to identify the locations of important facial landmarks such
as the corners of the mouth and eyes, tip of the nose, and so forth.
In the original code, variable `shape` is the output of predictor method.
>>>help(shape)
The description of shape
class full_object_detection(Boost.Python.instance)
| This object represents the location of an object in an image along
with the positions of each of its constituent parts.
----------------------------------------------------------------------
| Data descriptors defined here:
|
| num_parts
| The number of parts of the object.
|
| rect
| The bounding box of the parts.
|
| ----------------------------------------------------------------------
It seems that variable `shape` is related with points coordinate position.
>>>print(shape.num_parts)
68
>>>print(shape.rect)
[(1005, 563) (1129, 687)]
I assume that there are 68 denoted face landmark points.
>>> print(shape.part(68))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: Index out of range
>>> print(shape.part(65))
(1072, 645)
>>> print(shape.part(66))
(1065, 647)
>>> print(shape.part(67))
(1059, 646)
If that is true, the remaining problem is which part corresponds to which
face landmark point.
Answer: I slightly modified the code.
import dlib
import numpy as np
from skimage import io
predictor_path = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
img = io.imread("FDT.jpg")
dets = detector(img)
    #output face landmark points inside the rectangle
    #shape is of the points datatype
    #http://dlib.net/python/#dlib.point
for k, d in enumerate(dets):
shape = predictor(img, d)
vec = np.empty([68, 2], dtype = int)
for b in range(68):
vec[b][0] = shape.part(b).x
vec[b][1] = shape.part(b).y
print(vec)
Here is the output
[[1003 575]
[1005 593]
[1009 611]
[1014 627]
[1021 642]
[1030 655]
[1041 667]
[1054 675]
[1069 677]
[1083 673]
[1095 664]
[1105 651]
[1113 636]
[1120 621]
[1123 604]
[1124 585]
[1124 567]
[1010 574]
[1020 570]
[1031 571]
[1042 574]
[1053 578]
[1070 577]
[1081 572]
[1092 568]
[1104 566]
[1114 569]
[1063 589]
[1063 601]
[1063 613]
[1063 624]
[1050 628]
[1056 630]
[1064 632]
[1071 630]
[1077 627]
[1024 587]
[1032 587]
[1040 586]
[1048 588]
[1040 590]
[1031 590]
[1078 587]
[1085 585]
[1093 584]
[1101 584]
[1094 588]
[1086 588]
[1045 644]
[1052 641]
[1058 640]
[1064 641]
[1070 639]
[1078 640]
[1086 641]
[1080 651]
[1073 655]
[1066 656]
[1059 656]
[1052 652]
[1048 645]
[1059 645]
[1065 646]
[1071 644]
[1083 642]
[1072 645]
[1065 647]
[1059 646]]
And there is another open source project, OpenFace, which is based on dlib and
describes each point's corresponding part of the face.
[The url of the describing image](http://openface-api.readthedocs.io/en/latest/_images/dlib-landmark-mean.png)
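As a rough guide: the predictor was trained on the 68-point iBUG 300-W annotation scheme, which (to the best of my knowledge — verify against the image linked above) groups the indices as follows:
    # 0-based, inclusive index ranges in the 68-point annotation
    LANDMARK_GROUPS = {
        "jaw": (0, 16),
        "right_eyebrow": (17, 21),
        "left_eyebrow": (22, 26),
        "nose": (27, 35),
        "right_eye": (36, 41),
        "left_eye": (42, 47),
        "outer_lips": (48, 59),
        "inner_lips": (60, 67),
    }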
|
Looping with while function with Selenium throws error NameError: name 'neadaclick' is not defined
Question: I am trying to automate a task at work. I already have the task working, and every
time I click on the program I can accomplish it; however, I want to be
able to do the task several times with one click, so I want to enter a loop
using while. So I started testing; this is my current code:
from selenium import webdriver
browser = webdriver.Chrome()
def countdown(n):
while (n >= 0):
#Lets get rid of this
# print (n)
browser.get('http://www.simplesite.com/')
needaclick = browser.find_element_by_id('startwizard')
neadaclick.click()
n = n - 1
print ('Sucess!')
#Change from static to user input.
#countdown (10)
countdown (int(input('Enter a real number:')))
#Failed script code, leaving it here for documentation
#int(countdown) = input('Enter a real number:')
As you can see, I have a simple countdown, and in theory (or at least in my
mind) what should happen is that the number I input should be the number of
times the program opens a browser and clicks on the startwizard
element. But I keep getting the error that `needaclick` is not defined,
and I am unsure how to fix this properly.
Error code:
> Traceback (most recent call last): File
> "C:/Users/AMSUser/AppData/Local/Programs/Python/Python35-32/Scripts/Countdown
> Test.py", line 14, in countdown (int(input('Enter a real number:'))) File
> "C:/Users/AMSUser/AppData/Local/Programs/Python/Python35-32/Scripts/Countdown
> Test.py", line 9, in countdown neadaclick.click() NameError: name
> 'neadaclick' is not defined
Answer: @ElmoVanKielmo pointed out a mistake that I failed to notice: my first
declaration is `needaclick`, but on the next line I wrote `neadaclick`. This has
been fixed and it's working.
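For reference, a sketch of the fixed function with consistent naming:
    def countdown(n):
        while n >= 0:
            browser.get('http://www.simplesite.com/')
            needaclick = browser.find_element_by_id('startwizard')
            needaclick.click()  # same spelling as the assignment above
            n = n - 1
        print('Success!')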
|
Playing music on loop until a key is released. Python
Question: I'm making a little GUI with Python, using the cocos2d and pyglet modules. The GUI
should play a sound while the "h" key is pressed and stop when it is released. The
problem is that I can't find a solution to this. After searching this
site I found this question - [How to play music continuously in
pyglet](http://stackoverflow.com/questions/27391240/how-to-play-music-continuously-in-pyglet) -
but the problem with it is that I can't get the
sound to stop after it starts.
EDIT: I found a way to play the sound until keyrelease, but ran into another
problem
Right now the code that is supposed to play the music looks like this:
class Heartbeat (cocos.layer.Layer):
is_event_handler=True
def __init__ (self):
super(Heartbeat, self).__init__()
global loop, music, player
music = pyglet.media.load('long_beep.wav')
loop=pyglet.media.SourceGroup(music.audio_format, None)
player=pyglet.media.Player()
loop.queue(music)
player.queue(loop)
def on_key_press(self, key, modifiers):
if chr(key)=='h':
loop.loop=True
player.play()
def on_key_release (self, key, modifiers):
if chr(key)=="h":
loop.loop=False
This code works when the "h" key is pressed and held down for the first time,
it doesn't work in subsequent attempts. Python doesn't raise an exception, it
just seems to ignore "h" key presses that occur after the first release.
NOTE: The statement `if chr(key)=="h"` may not be the best approach to
keypress handling, but I'm relatively new to the cocos2d and pyglet modules.
Answer: Nevermind, I have figured this out, all I had to do was move the line
`player.queue(loop)` from the initialization function to the function that
handles keypresses. The updated code looks like this:
class Heartbeat (cocos.layer.Layer):
is_event_handler=True
def __init__ (self):
super(Heartbeat, self).__init__()
global loop, music, player
music = pyglet.media.load('long_beep.wav')
loop=pyglet.media.SourceGroup(music.audio_format, None)
player=pyglet.media.Player()
loop.queue(music)
def on_key_press(self, key, modifiers):
if chr(key)=='h':
loop.loop=True
player.queue(loop) #This is the line that had to be moved
player.play()
def on_key_release (self, key, modifiers):
if chr(key)=="h":
loop.loop=False
NOTE: I omitted statements such as imports and other initialization code,
as they are not relevant to this issue.
|
python, multithreading, safe to use pandas "to_csv" on common file?
Question: I've got some code that works pretty nicely. It's a while-loop that goes
through a list of dates, finds files on my HDD that corresponds to those
dates, does some calculations with those files, and then outputs to a
"results.csv" file using the command:
my_df.to_csv("results.csv",mode = 'a')
I'm wondering if it's safe to create a new thread for each date, and call the
stuff in the while loop on several dates at a time?
MY CODE:
import datetime, time, os
import sys
import threading
import helperPY #a python file containing the logic I need
class myThread (threading.Thread):
def __init__(self, threadID, name, counter,sn, m_date):
threading.Thread.__init__(self)
self.threadID = threadID
self.name = name
self.counter = counter
self.sn = sn
self.m_date = m_date
def run(self):
print "Starting " + self.name
m_runThis(sn, m_date)
print "Exiting " + self.name
def m_runThis(sn, m_date):
helperPY.helpFn(sn,m_date) #this is where the "my_df.to_csv()" is called
sn = 'XXXXXX'
today=datetime.datetime(2016,9,22) #
yesterday=datetime.datetime(2016,6,13)
threadList = []
i_threadlist=0
while(today>yesterday):
threadList.append(myThread(i_threadlist, str(today), i_threadlist,sn,today))
threadList[i_threadlist].start()
i_threadlist = i_threadlist +1
today = today-datetime.timedelta(1)
Answer: Writing the file from multiple threads is not safe. But you can create a
[lock](https://docs.python.org/3.6/library/threading.html#threading.Lock) to
protect that one operation while letting the rest run in parallel. The
`to_csv` call lives in `helperPY.helpFn`, which isn't shown, but you could create the lock
csv_output_lock = threading.Lock()
and pass it to `helperPY.helpFn`. When you get to the operation, do
with csv_output_lock:
my_df.to_csv("results.csv",mode = 'a')
You get parallelism for other operations - subject to the GIL of course - but
the file access is protected.
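A minimal sketch of the wiring, assuming `helperPY.helpFn` is extended to accept the lock (a hypothetical signature):
    def m_runThis(sn, m_date, lock):
        # helpFn wraps its to_csv call in `with lock:`
        helperPY.helpFn(sn, m_date, lock)
Each `myThread` would then need to receive `csv_output_lock` in its constructor and pass it along from `run()`.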
|
Set string in 'A' python script from 'B' python script
Question: I have two scripts:
A.py (is TK window)
def function():
string = StringVar()
string.set("Hello I'm A.py")
From B.py I wish to change the string that appears in the Tk window.
def changestring():
string.set("Hello I'm B.py")
Obviously this doesn't work! How can I change the string from another python script?
Answer: Variables have scopes. You cannot access variables which are in the scope of
one function from another function.
There has to be some common point in the code which "knows" about A and about
B. That code should pass the variables from one to the other.
Based on this comment:
> A.py is the graphic and B is the core with a infinite loop that listen the
> USB interrupt
I would say that you need two objects implementing the two functionalities,
let's call them "graphic" and "usb". One of them has to "know" about the
other. Either "graphic" should observe "usb", or "usb" should update
"graphic".
For example:
# possibly in A.py:
class Graphic(object):
def __init__(self):
self.string = StringVar()
def function(self):
self.string.set("Hello I'm A.py")
# possibly in B.py:
class USB(object):
def __init__(self, graphic):
self.graphic = graphic
def changestring(self):
self.graphic.string.set("Hello I'm B.py")
# somewhere:
from A import Graphic
from B import USB
def main():
graphic = Graphic()
usb = USB(graphic)
#...
usb.changestring()
|
How to use the `pos` argument in `networkx` to create a flowchart-style Graph? (Python 3)
Question: **I am trying to create a linear network graph using `Python`** (preferably with
`matplotlib` and `networkx`, although I would be interested in `bokeh`), similar
in concept to the one below.
[![enter image description
here](http://i.stack.imgur.com/qojHg.png)](http://i.stack.imgur.com/qojHg.png)
**How can this graph plot be constructed efficiently (`pos`?) in Python using
`networkx`?** I want to use this for more complicated examples so I feel that
hard coding the positions for this simple example won't be useful :( . Does
`networkx` have a solution to this?
> [pos (dictionary, optional) – A dictionary with nodes as keys and positions
> as values. If not specified a spring layout positioning will be computed.
> See networkx.layout for functions that compute node positions.
> ](https://networkx.github.io/documentation/development/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html#networkx.drawing.nx_pylab.draw_networkx)
I haven't seen any tutorials on how this can be achieved in `networkx` which
is why I believe this question will be a reliable resource for the community.
I've extensively gone through the [`networkx`
tutorials](https://networkx.github.io/documentation/development/tutorial/index.html)
and nothing like this is on there. The layouts for `networkx` would make this
type of network impossible to interpret without careful use of the `pos`
argument... which I believe is my only option. **None of the precomputed
layouts on
the<https://networkx.github.io/documentation/networkx-1.9/reference/drawing.html>
documentation seem to handle this type of network structure well.**
**Simple Example:**
(A) every outer key is the iteration in the graph moving from left to the
right (e.g. iteration 0 represents samples, iteration 1 has groups 1 - 3, same
with iteration 2, iteration 3 has Groups 1 - 2, etc.). (B) The inner
dictionary contains the current grouping at that particular iteration, and the
weights for the previous groups merging that represent the current group (e.g.
`iteration 3` has `Group 1` and `Group 2` and for `iteration 4` all of
`iteration 3's` `Group 2` has gone into `iteration 4's` `Group 2` but
`iteration 3's` `Group 1` has been split up. The weights always sum to 1.
My code for the connections w/ weights for the plot above:
D_iter_current_previous = {
1: {
"Group 1":{"sample_0":0.5, "sample_1":0.5, "sample_2":0, "sample_3":0, "sample_4":0},
"Group 2":{"sample_0":0, "sample_1":0, "sample_2":1, "sample_3":0, "sample_4":0},
"Group 3":{"sample_0":0, "sample_1":0, "sample_2":0, "sample_3":0.5, "sample_4":0.5}
},
2: {
"Group 1":{"Group 1":1, "Group 2":0, "Group 3":0},
"Group 2":{"Group 1":0, "Group 2":1, "Group 3":0},
"Group 3":{"Group 1":0, "Group 2":0, "Group 3":1}
},
3: {
"Group 1":{"Group 1":0.25, "Group 2":0, "Group 3":0.75},
"Group 2":{"Group 1":0.25, "Group 2":0.75, "Group 3":0}
},
4: {
"Group 1":{"Group 1":1, "Group 2":0},
"Group 2":{"Group 1":0.25, "Group 2":0.75}
}
}
**This is what happened when I made the Graph in `networkx`**:
    import networkx as nx
import matplotlib.pyplot as plt
# Create Directed Graph
G = nx.DiGraph()
# Iterate through all connections
for iter_n, D_current_previous in D_iter_current_previous.items():
for current_group, D_previous_weights in D_current_previous.items():
for previous_group, weight in D_previous_weights.items():
if weight > 0:
# Define connections using `|__|` as a delimiter for the names
previous_node = "%d|__|%s"%(iter_n - 1, previous_group)
current_node = "%d|__|%s"%(iter_n, current_group)
connection = (previous_node, current_node)
G.add_edge(*connection, weight=weight)
# Draw Graph with labels and width thickness
nx.draw(G, with_labels=True, width=[G[u][v]['weight'] for u,v in G.edges()])
[![enter image description
here](http://i.stack.imgur.com/aiXnI.png)](http://i.stack.imgur.com/aiXnI.png)
Note: The only other way, I could think of to do this would be in `matplotlib`
creating a scatter plot with every tick representing a iteration (5 including
the initial samples) then connecting the points to each other with different
weights. This would be some pretty messy code especially trying to line up the
edges of the markers w/ the connections...However, I'm not sure if this and
`networkx` is the best way to do it or if there is a tool (e.g. `bokeh` or
`plotly`) that is designed for this type of plotting.
Answer: Networkx has decent plotting facilities for exploratory data analysis, it is
not the tool to make publication quality figures, for various reason that I
don't want to go into here. I hence rewrote that part of the code base from
scratch, and made a stand-alone drawing module called netgraph that can be
found [here](https://github.com/paulbrodersen/netgraph) (like the original
purely based on matplotlib). The API is very, very similar and well
documented, so it should not be too hard to mold to your purposes.
Building on that I get the following result:
[![enter image description
here](http://i.stack.imgur.com/kjwqP.png)](http://i.stack.imgur.com/kjwqP.png)
I chose colour to denote the edge strength as you can
1) indicate negative values, and
2) distinguish small values better.
However, you can also pass an edge width to netgraph instead (see
`netgraph.draw_edges()`).
The different order of the branches is a result of your data structure (a
dict), which indicates no inherent order. You would have to amend your data
structure and the function `_parse_input()` below to fix that issue.
Code:
import itertools
import numpy as np
import matplotlib.pyplot as plt
import netgraph; reload(netgraph)
def plot_layered_network(weight_matrices,
distance_between_layers=2,
distance_between_nodes=1,
layer_labels=None,
**kwargs):
"""
Convenience function to plot layered network.
Arguments:
----------
weight_matrices: [w1, w2, ..., wn]
list of weight matrices defining the connectivity between layers;
each weight matrix is a 2-D ndarray with rows indexing source and columns indexing targets;
the number of sources has to match the number of targets in the last layer
distance_between_layers: int
distance_between_nodes: int
layer_labels: [str1, str2, ..., strn+1]
labels of layers
**kwargs: passed to netgraph.draw()
Returns:
--------
ax: matplotlib axis instance
"""
nodes_per_layer = _get_nodes_per_layer(weight_matrices)
node_positions = _get_node_positions(nodes_per_layer,
distance_between_layers,
distance_between_nodes)
w = _combine_weight_matrices(weight_matrices, nodes_per_layer)
ax = netgraph.draw(w, node_positions, **kwargs)
if not layer_labels is None:
ax.set_xticks(distance_between_layers*np.arange(len(weight_matrices)+1))
ax.set_xticklabels(layer_labels)
ax.xaxis.set_ticks_position('bottom')
return ax
def _get_nodes_per_layer(weight_matrices):
nodes_per_layer = []
for w in weight_matrices:
sources, targets = w.shape
nodes_per_layer.append(sources)
nodes_per_layer.append(targets)
return nodes_per_layer
def _get_node_positions(nodes_per_layer,
distance_between_layers,
distance_between_nodes):
x = []
y = []
for ii, n in enumerate(nodes_per_layer):
x.append(distance_between_nodes * np.arange(0., n))
y.append(ii * distance_between_layers * np.ones((n)))
x = np.concatenate(x)
y = np.concatenate(y)
return np.c_[y,x]
def _combine_weight_matrices(weight_matrices, nodes_per_layer):
total_nodes = np.sum(nodes_per_layer)
w = np.full((total_nodes, total_nodes), np.nan, np.float)
a = 0
b = nodes_per_layer[0]
for ii, ww in enumerate(weight_matrices):
w[a:a+ww.shape[0], b:b+ww.shape[1]] = ww
a += nodes_per_layer[ii]
b += nodes_per_layer[ii+1]
return w
def test():
w1 = np.random.rand(4,5) #< 0.50
w2 = np.random.rand(5,6) #< 0.25
w3 = np.random.rand(6,3) #< 0.75
import string
node_labels = dict(zip(range(18), list(string.ascii_lowercase)))
fig, ax = plt.subplots(1,1)
plot_layered_network([w1,w2,w3],
layer_labels=['start', 'step 1', 'step 2', 'finish'],
ax=ax,
node_size=20,
node_edge_width=2,
node_labels=node_labels,
edge_width=5,
)
plt.show()
return
def test_example(input_dict):
weight_matrices, node_labels = _parse_input(input_dict)
fig, ax = plt.subplots(1,1)
plot_layered_network(weight_matrices,
layer_labels=['', '1', '2', '3', '4'],
distance_between_layers=10,
distance_between_nodes=8,
ax=ax,
node_size=300,
node_edge_width=10,
node_labels=node_labels,
edge_width=50,
)
plt.show()
return
def _parse_input(input_dict):
weight_matrices = []
node_labels = []
# initialise sources
sources = set()
for v in input_dict[1].values():
for s in v.keys():
sources.add(s)
sources = list(sources)
for ii in range(len(input_dict)):
inner_dict = input_dict[ii+1]
targets = inner_dict.keys()
w = np.full((len(sources), len(targets)), np.nan, np.float)
for ii, s in enumerate(sources):
for jj, t in enumerate(targets):
try:
w[ii,jj] = inner_dict[t][s]
except KeyError:
pass
weight_matrices.append(w)
node_labels.append(sources)
sources = targets
node_labels.append(targets)
node_labels = list(itertools.chain.from_iterable(node_labels))
node_labels = dict(enumerate(node_labels))
return weight_matrices, node_labels
# --------------------------------------------------------------------------------
# script
# --------------------------------------------------------------------------------
if __name__ == "__main__":
# test()
input_dict = {
1: {
"Group 1":{"sample_0":0.5, "sample_1":0.5, "sample_2":0, "sample_3":0, "sample_4":0},
"Group 2":{"sample_0":0, "sample_1":0, "sample_2":1, "sample_3":0, "sample_4":0},
"Group 3":{"sample_0":0, "sample_1":0, "sample_2":0, "sample_3":0.5, "sample_4":0.5}
},
2: {
"Group 1":{"Group 1":1, "Group 2":0, "Group 3":0},
"Group 2":{"Group 1":0, "Group 2":1, "Group 3":0},
"Group 3":{"Group 1":0, "Group 2":0, "Group 3":1}
},
3: {
"Group 1":{"Group 1":0.25, "Group 2":0, "Group 3":0.75},
"Group 2":{"Group 1":0.25, "Group 2":0.75, "Group 3":0}
},
4: {
"Group 1":{"Group 1":1, "Group 2":0},
"Group 2":{"Group 1":0.25, "Group 2":0.75}
}
}
test_example(input_dict)
pass
|
Sending email with python script
Question: What I'm trying to do is get my Python code to send an email. The code is
supposed to use the Yahoo SMTP server to send the email. I don't need any attachments
or anything else. The code bugs out where it says `Error: unable to send
email.` Other than the obvious step of putting in correct receiver and sender
email addresses, what can I do to get this thing to work?
#!/usr/bin/env python
from smtplib import SMTP
from smtplib import SMTP_SSL
from smtplib import SMTPException
from email.mime.text import MIMEText
import sys
    #Global variables
EMAIL_SUBJECT = "Email from Python script"
EMAIL_RECEIVERS = ['receiverId@gmail.com']
EMAIL_SENDER = 'senderId@yahoo.com'
TEXT_SUBTYPE = "plain"
YAHOO_SMTP = "smtp.mail.yahoo.com"
YAHOO_SMTP_PORT = 465
def listToStr(lst):
"""This method makes comma separated list item string"""
return ','.join(lst)
def send_email(content, pswd):
"""This method sends an email"""
msg = MIMEText(content, TEXT_SUBTYPE)
msg["Subject"] = EMAIL_SUBJECT
msg["From"] = EMAIL_SENDER
msg["To"] = listToStr(EMAIL_RECEIVERS)
try:
#Yahoo allows SMTP connection over SSL.
smtpObj = SMTP_SSL(YAHOO_SMTP, YAHOO_SMTP_PORT)
#If SMTP_SSL is used then ehlo and starttls call are not required.
smtpObj.login(user=EMAIL_SENDER, password=pswd)
smtpObj.sendmail(EMAIL_SENDER, EMAIL_RECEIVERS, msg.as_string())
smtpObj.quit();
except SMTPException as error:
print "Error: unable to send email : {err}".format(err=error)
def main(pswd):
"""This is a simple main() function which demonstrates sending of email using smtplib."""
send_email("Test email was generated by Python using smtplib and email libraries", pswd);
if __name__ == "__main__":
"""If this script is executed as stand alone then call main() function."""
if len(sys.argv) == 2:
main(sys.argv[1])
else:
print "Please provide password"
sys.exit(0)
Answer: I don't know about Yahoo, but Google blocked logins via their SMTP port. It
would be way too easy to conduct brute force attacks otherwise. So even if
your code is perfectly OK, the login might still fail because of that. I have
tried to do the exact same thing with my Gmail account.
|
Why is `self` not used in this method?
Question: I was under the impression that methods within Python classes _always_ require
the `self` argument (I know that it doesn't actually have to be `self`, just
some keyword). But, this class that I wrote doesn't require it:
    import zipfile
import os
class Zipper:
def make_archive(dir_to_zip):
zf = zipfile.ZipFile(dir_to_zip + '.zip', 'w')
for filename in files:
zf.write(os.path.join(dirname, filename))
zf.close()
See? No `self`. When I include a `self` argument to `make_archive`, I get a
`TypeError: make_archive() missing one positional argument` error. In my
search to figure out why this is happening, I actually copied and tried to run
a similar program from the docs:
class MyClass:
"""A simple example class"""
i = 12345
def f(self):
return 'hello world'
print(MyClass.f()) # I added this statement to have a call line
and I get the same error!
TypeError: f() missing 1 required positional argument: 'self'
In the same module that contains the `Zipper()` class, I have multiple classes
that all make use of `self`. I don't understand the theory here, which makes
it difficult to know when to do what, especially since a program copied
directly from the docs ([this is the docs
page](https://docs.python.org/3/tutorial/classes.html)) failed when I ran it.
I'm using Python 3.5 and 3.4 on Debian Linux. The only thing that I can think
of is that it's a static method (and the `Zipper.make_archive()` as written
above works fine if you include `@staticmethod` above the `make_archive`
method), but I can't find a good explanation to be sure.
Answer: You are trying to use it as a static method. In your example:
class MyClass:
"""A simple example class"""
i = 12345
def f(self):
return 'hello world'
a = MyClass()
a.f() # This should work.
Calling `MyClass.f()` assumes `f` is static for `MyClass`. You can make it
static as:
class MyClass:
@staticmethod
def f(): # No self here
return 'hello world'
MyClass.f()
|
switch variable and write problems,python
Question:
import csv
import sys
def switch():
file1=open('enjoysport.csv','r')
for line in file1:
line.split(",")[0],line.split(",")[-1]=line.split(",")[-1],line.split(",")[0]
file1.close()
origin=sys.stdout
fil2=open("test.csv","w")
sys.stdout=fil2
print(file1)
sys.stdout = origin
fil2.close()
switch()
I want to switch the first column and the last column, but it didn't work. What's
more, the file after switching cannot be written into the new csv file; I get
this instead:
    <_io.TextIOWrapper name='agaricus_lepiota.csv' mode='r' encoding='cp1252'>
What's wrong? Thank you in advance :)
Answer: You need to actually write the updated row to the file, you can also use the
_csv_ lib to read and write your file content:
def switch():
with open('enjoysport.csv') as f, open("test.csv", "w") as out:
wr = csv.writer(out)
for row in csv.reader(f):
row[0], row[-1] = row[-1], row[0]
wr.writerow(row) # actually write it to test.csv
In your own code all you are doing is shifting elements in each row, you never
write the result so it is a pointless exercise.
Why you see `<_io.TextIOWrapper name='agaricus_lepiota.csv' mode='r'
encoding='cp1252'>` is because `sys.stdout = fil2` _redirects stdout_ to
_file2_ , then you print the reference to the `file1` i.e the file object so
that gets written/redirected to your file.
|
Python - multiprocessing while writing to a single result file
Question: I am really new to the multiprocessing package and I am failing to get the
task done.
I have lots of calculations to do on a list of objects.
The results I need to write down are saved in those objects, too.
The results should be written in a single file as soon as the process finished
the calculations (the way I got it at least working, waits until all
calculations are done).
import multiprocessing
import time
import csv
class simpl():
def __init__(self, name, val):
self.name = name
self.val = val
def pot_val(inpt):
print("Process %s\t ..." % (inpt.name))
old_v = inpt.val
inpt.val *= inpt.val
if old_v != 8:
time.sleep(old_v)
print("Process %s\t ... Done" % (inpt.name))
def mp_worker(inpt):
pot_val(inpt)
return inpt
def mp_handler(data_list):
p = multiprocessing.Pool(4)
with open('results.csv', 'a') as f:
res = p.map_async(mp_worker, data_list)
results = (res.get())
for result in results:
print("Writing result for ",result.name)
writer= csv.writer(f, lineterminator = '\n', delimiter=";")
writer.writerow((result.name, result.val))
if __name__=='__main__':
data = []
counter=0
for i in range(10):
data.append(simpl("name"+str(counter),counter))
counter += 1
for d in data:
print(d.name, d.val)
mp_handler(data)
How to write the results from the calculations simultaneously to one single
file, without having to wait for all processes to finish?
Answer: You can use
[imap_unordered](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.imap_unordered)
def mp_handler(data_list):
p = multiprocessing.Pool(4)
with open('results.csv', 'a') as f:
writer= csv.writer(f, lineterminator = '\n', delimiter=";")
for result in p.imap_unordered(mp_worker, data_list):
print("Writing result for ",result.name)
writer.writerow((result.name, result.val))
With Python 3.3+ better do
def mp_handler(data_list):
with multiprocessing.Pool(4) as p:
with open('results.csv', 'a') as f:
writer= csv.writer(f, lineterminator = '\n', delimiter=";")
for result in p.imap_unordered(mp_worker, data_list):
print("Writing result for ",result.name)
writer.writerow((result.name, result.val))
|
No attribute 'HookManager'
Question: I am copying the key logger from this video:
(<https://www.youtube.com/watch?v=8BiOPBsXh0g>) and running the code:
import pyHook, sys, logging, pythoncom
file_log = 'C:\Users\User\Google Drive\Python'
def OnKeyboardEvent(event):
logging.basicConfig(filename = file_log, level = logging.DEBUG, format = '%(message)s')
chr(event.Ascii)
logging.log(10, chr(event.Ascii))
return True
hooks_manager = pyHook.HookManager()
hooks_manager.KeyDown = OnKeyboardEvent
hooks_manager.HookKeyboard()
pythoncom.Pumpmessages()
This returns the error:
Traceback (most recent call last):
File "C:\Users\User\Google Drive\Python\pyHook.py", line 2, in <module>
import pyHook, sys, logging, pythoncom
File "C:\Users\User\Google Drive\Python\pyHook.py", line 12, in <module>
hooks_manager = pyHook.HookManager()
AttributeError: 'module' object has no attribute 'HookManager'
I am running Python 2.7.11 and a windows computer. I don't know what the
problem is; please help. Thank you
Answer: I'm still unsure what the issue is, but I found a solution. If you move the
program you are trying to run into the same folder as the HookManager.py file,
then it works.
For me this file was: C:\Python27\Lib\site-packages\pyHook
(Judging by the traceback, the script itself is named `pyHook.py`, so `import
pyHook` imports the script instead of the installed package; renaming the
script should fix it too.)
|
make bouncing turtle with python
Question: I am a beginner with Python. I wrote this code to make a bouncing ball with
turtle; it works, but has some errors, like the ball disappearing.
import turtle
turtle.shape("circle")
xdir = 1
x = 1
y = 1
ydir = 1
while True:
x = x + 3 * xdir
y = y + 3 * ydir
turtle.goto(x , y)
if x >= turtle.window_width():
xdir = -1
if x <= -turtle.window_width():
xdir = 1
if y >= turtle.window_height():
ydir = -1
if y <= -turtle.window_height():
ydir = 1
turtle.penup()
turtle.mainloop()
Answer: You need `window_width()/2` and `window_height()/2` to keep the ball inside
the window, i.e.:
if x >= turtle.window_width()/2:
xdir = -1
if x <= -turtle.window_width()/2:
xdir = 1
if y >= turtle.window_height()/2:
ydir = -1
if y <= -turtle.window_height()/2:
ydir = 1
|
Find all common N-sized tuples in list of tuples
Question: I have to create an application that does the following (I have to parse
the data only once and store it in a database):
I am given K tuples (with K over 1000000) and each tuple is in the form of
(UUID, (tuple of N integers)).
Let's assume that N equals 20 for every k-tuple and that every 20-sized tuple
is sorted. I have saved all my data in a database in the following two forms
(2 different tables), so that I can process them more easily:
1. _id, UUID, tuple.as_a_string()
2. _id, UUID, 1st_elem, 2nd_elem, 3rd_elem, ... 20th_elem
The goal is to find all 10-sized tuples from the list of tuples, such that
each of those tuples exists in more than one 20-sized tuple.
For example, if we are given the two following 20-sized tuples:
(1, (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,161,17,18,19,20))
(2, (1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39))
the common tuple is: (1,3,5,7,9,11,13,15,17,19)
which is a 10-sized tuple, so the result is something like the following:
(1, 2, (1,3,5,7,9,11,13,15,17,19))
In order to accomplish this, what I am currently doing is (in Python 3):
* Create a set with the elements of the 20-sized tuple of the 1st row in the database.
* Create a set for each row with the elements of the 20-sized tuple of the rest rows in the database.
* For each set in the second list of sets, I do the intersection with the first set.
* Then I create the combination of the intersection with 10 elements _(in Python it is itertools.combinations(new_set, 10) )_ , which gives me the result that I want.
But this procedure is **very slow**. Even using multiprocessing to fully utilize my
8 CPU cores, each computing for a different number, it takes forever. The
program has been running for 2 days now and it is only at 20%.
Do you have any ideas on how to optimize the process? Would NumPy arrays help
with the speed of execution? Is there any way in SQL to calculate what I want
for each row, even one row at a time?
Thanks in advance.
Answer: It appears that you could put the tuples into the rows of a matrix and make a
map from row numbers to UUIDs. Then it's feasible to store all of the tuples
in a numpy array since the elements of the tuples are small. numpy has code
capable of computing the intersections between rows of such an array. This
code generates combinations to process as tuples first, then it makes the
comparisons.
from itertools import combinations
import numpy as np
from time import time
minInt=1
maxInt=100
tupleSize=20
intersectionSize=10
K=100
rows=np.zeros((K,tupleSize),dtype=np.int8)
print ('rows uses', rows.nbytes, 'bytes')
for i,c in enumerate(combinations(range(minInt,maxInt),tupleSize)):
if i>=K:
break
for j,_ in enumerate(c):
rows[i,j]=_
t_begin=time()
for i in range(K-1):
for j in range(i+1,K):
intrsect=np.intersect1d(rows[i],rows[j],True)
if intrsect.shape[0]==intersectionSize:
print (i,j,intrsect)
t_finish=time()
print ('K=', K, t_finish-t_begin, 'seconds')
Here are some sample measurements made on my old two-core P4 clunker at home.
rows uses 200 bytes K= 10 0.0009770393371582031 seconds
rows uses 1000 bytes K= 50 0.0410161018371582 seconds
rows uses 2000 bytes K= 100 0.15625 seconds
rows uses 10000 bytes K= 500 3.610351085662842 seconds
rows uses 20000 bytes K= 1000 14.931640863418579 seconds
rows uses 100000 bytes K= 5000 379.5498049259186 seconds
If you run the code on your machine you can extrapolate. I don't know if it
would make your calculation feasible or not.
Maybe I'll just get a bunch of negative votes!
|
How can I install mpmath as an external library for Blender?
Question: I'm interested in trying out sympy with Blender (v2.76, Python 3.4.2 Console,
Windows 8.1). I followed this
[answer](http://blender.stackexchange.com/questions/15453/error-import-
point3d-of-geometry-module-of-sympy0-7-5-in-blender2-71/15513#15513) from
Blender SE, downloaded sympy as a ZIP from Githib, and moved the sympy folder
to C:\Program Files\Blender Foundation\Blender\2.76\python\lib\site-packages.
However, when I opened Blender and tried to import sympy in the Python
Console, I got the following error:
>>> import sympy
Traceback (most recent call last):
File "<blender_console>", line 1, in <module>
File "C:\Program Files\Blender Foundation\Blender\2.76\python\lib\site-packages\sympy\__init__.py", line 20, in <module>
raise ImportError("SymPy now depends on mpmath as an external library. "
ImportError: SymPy now depends on mpmath as an external library. See http://docs.sympy.org/latest/install.html#mpmath for more information.
I don't know how to install an external library. I tried going to the
[link](http://docs.sympy.org/latest/install.html#mpmath) mentioned in the
ImportError, and I saw `pip install mpmath`. I tried it in cmd, but got this:
>pip install mpmath
Requirement already satisfied (use --upgrade to upgrade): mpmath in c:\anaconda3
\lib\site-packages
I did install Anaconda a while ago, so I guess it makes sense to have this
output. How can I install mpmath as an external library for Blender so I can
import sympy in it?
Answer: You want to install mpmath into Blender's Python folder, the same as you have
done for sympy.
Your example of running pip was done in a system-installed Python that is
set up to find the mpmath you have installed in `c:\anaconda3\lib\site-packages`.
Another option is to use the existing install of mpmath and sympy by adding
your existing path to
[sys.path](https://docs.python.org/3/library/sys.html#sys.path) or adding it
to the `PYTHONPATH` environment variable before you start blender.
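For the second option, a sketch using the anaconda location from your pip output (adjust the path to your machine):
    import sys
    sys.path.append(r'c:\anaconda3\lib\site-packages')
    import sympy  # should now find mpmath there as well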
|
explicitly setting style sheet in python pyqt4?
Question: In PyQt, the standard way of setting a style sheet is like this:
    MainWindow.setStyleSheet(_fromUtf8("/*\n"
    "gridline-color: rgb(85, 170, 255);\n"
    "QToolTip\n"
    "{\n"
    "    border: 1px solid #76797C;\n"
    "    background-color: rgb(90, 102, 117);;\n"
    "    color: white;\n"
    "    padding: 5px;\n"
    "    opacity: 200;\n"
    "}\n"
    "#label_3{\n"
    "    background-color:rgb(90, 102, 117);\n"
    "    color:white;\n"
    "    padding-left:20px;\n"
    "}\n"
    "#label_2{\n"
    "    color:white;\n"
    "    padding-left:20px;\n"
    "}\n"
But in HTML we link the stylesheet like `<link rel="stylesheet"
href="style.css">`. Can't we do the same in PyQt? It helps in organizing
things.
Answer: There are currently only two main ways to set a stylesheet. The first is to
use the `setStyleSheet` method:
widget.setStyleSheet("""
QToolTip {
border: 1px solid #76797C;
background-color: rgb(90, 102, 117);
color: white;
padding: 5px;
opacity: 200;
}
""")
This will only take a string, so an external resource would need to be
explicitly read from a file, or imported from a module.
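For example, an external qss file could be read in explicitly and passed to the first method (assuming a `style.qss` next to the script):
    with open('style.qss') as f:
        widget.setStyleSheet(f.read())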
The second method is to use the [`-stylesheet` command-line
argument](http://doc.qt.io/qt-4.8/qapplication.html#QApplication), which
allows an external qss resource to be specified as a path:
python myapp.py -stylesheet style.qss
This opens up the possibility of a tempting hack, since it is easy enough to
manipulate the args passed to the `QApplication` constructor, and explicitly
insert a default stylesheet:
import sys
args = list(sys.argv)
args[1:1] = ['-stylesheet', 'style.qss']
app = QtGui.QApplication(args)
(Inserting the extra arguments at the beginning of the list ensures that it is
still possible for the user to override the default with their own
stylesheet).
|
KNeighborsClassifier .predict() function doesn't work
Question: I am working with the KNeighborsClassifier algorithm from the scikit-learn library in
Python. I followed the basic instructions, e.g. split my data and labels into
training and test data, then trained my model on the training data. Now I am
trying to predict the accuracy on the testing data but I get an error. Here is my code:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split
from sklearn.metrics import accuracy_score
data_train, data_test, label_train, label_test = train_test_split(df, labels,
test_size=0.2,
random_state=7)
mod = KNeighborsClassifier(n_neighbors=4)
mod.fit(data_train, label_train)
predictions = mod.predict(data_test)
print accuracy_score(label_train, predictions)
The error I get:
ValueError: Found arrays with inconsistent numbers of samples: [140 558]
140 is the portion of training data and 558 is the test data based on the
test_size=0.2 (my data set is 698 samples). I verified that labels and data
sets are of the same size 698. However, I get this error which is basically
trying to compare test data and training data sets.
Does anyone knows what is wrong here? What should I use to train my model
against to and what should I use to predict the score?
Thanks!
Answer: Did you try to solve your issue via the following question?
> [sklearn: Found arrays with inconsistent numbers of samples when calling
> LinearRegression.fit()](http://stackoverflow.com/questions/30813044/sklearn-
> found-arrays-with-inconsistent-numbers-of-samples-when-calling-linearre)
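Separately, note that the score in the question is computed against the wrong split: `predictions` comes from `data_test` (140 samples) while `label_train` has 558 — exactly the mismatch in the error message. Comparing against `label_test` fixes it:
    predictions = mod.predict(data_test)
    print accuracy_score(label_test, predictions)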
|
Python list to string spacing
Question: I have a list such as this
list = ['Hi', ',', 'my', 'name', 'is', 'Bob', '!']
I wanted to convert this to a string, and originally, I found on stackoverflow
that .join() could be used. So i did:
x = ' '.join(list)
print(x)
which prints
"Hi , my name is Bob !"
when what I want printed is:
"Hi, my name is Bob!"
How do I not add spaces before periods and exclamation points? I want a more
general case so that I can for example read in a text file as a list, and
convert it to a string.
Thanks!
Answer: To solve it in a general case, use the `nltk`'s ["moses"
detokenizer](https://github.com/nltk/nltk/pull/1282):
In [1]: l = ["Hi", ",", "my", "name", "is", "Bob", "!"]
In [2]: from nltk.tokenize.moses import MosesDetokenizer
In [3]: detokenizer = MosesDetokenizer()
In [4]: detokenizer.detokenize(l, return_str=True)
Out[4]: u'Hi, my name is Bob!'
The detokenizer is not yet a part of a stable `nltk` package. To be able to
use it now, install `nltk` directly from github.
|
How can jupyter access a new tensorflow module installed in the right path?
Question: Where should I stick the models folder? I'm confused because Python imports
modules from somewhere in anaconda (e.g. import numpy), but I can also import
data (e.g. file.csv) from the folder my jupyter notebook is saved in.
The TF-Slim image models library is not part of the core TF library. So I
checked out the tensorflow/models repository as:
cd $HOME/workspace
git clone https://github.com/tensorflow/models/
I'm not sure what $HOME/workspace is. I'm running an ipython/jupyter notebook
from users/me/workspace/ so I saved it to:
users/me/workspace/models
In jupyter, I'll write:
import tensorflow as tf
from datasets import dataset_utils
# Main slim library
slim = tf.contrib.slim
But I get an error:
ImportError: No module named datasets
Any tips? I understand that my tensorflow code is stored in
'/Users/me/anaconda/lib/python2.7/site-packages/tensorflow/__init__.pyc', so
maybe I should save the new models folder (which contains models/datasets)
there?
Answer: From the error "ImportError: No module named datasets", it seems that no
package named datasets is present. You need to install the datasets package
and then run your script.
Once you install it, you can find the package at
"/Users/me/anaconda/lib/python2.7/site-packages/" or at
"/Users/me/anaconda/lib/python2.7/".
Download the package from <https://pypi.python.org/pypi/dataset> and install
it.
This should work.
|
F test with python, finding the critical value
Question: Using Python, is it possible to calculate the critical value of an F distribution
with x and y degrees of freedom? In other words, I need to calculate the
critical value given x degrees of freedom and a 5% confidence level, but I
do not have the table from statistical books; is it possible to get it with
some function from Python?
For example, I want to find the critical value for an F distribution with 3 and
39 degrees of freedom at the 5% confidence level. The answer should be: 2.85
Answer: IIUC, you can use
[`scipy.stats.f.ppf`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html),
which gives the inverse of the cdf:
>>> import scipy.stats
>>> scipy.stats.f.ppf(q=1-0.05, dfn=3, dfd=39)
2.8450678052793514
>>> crit = _
>>> scipy.stats.f.cdf(crit, dfn=3, dfd=39)
0.95000000000000007
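Equivalently, the inverse survival function saves the `1 - 0.05` step:
    >>> scipy.stats.f.isf(0.05, dfn=3, dfd=39)
    2.8450678052793514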
|
python: convert pandas categorical values to integer when reading csv in chunks
Question: I have a large csv file with 1000 columns; column 0 is an id, the other columns
are categorical. I would like to convert them to integer values in order to
use them for data analysis. The first "dummy" way would work if I had enough
memory:
filename_cat_train = "../input/train_categorical.csv"
df = pd.read_csv(filename_cat_train, dtype=str)
for column in df.columns[1:]:
df[column] = df[column].astype('category')
columns = df.select_dtypes(['category']).columns
df[columns] = df[columns].apply(lambda x: x.cat.codes)
df.to_csv("../input/train_categorical_rawconversion.csv", index=False)
but it takes very long, and is definitely not a smart way to solve the task.
I could just load the data file in chunks and then combine them after converting
to int values using the approach above. However, when loading in chunks (even 100k
rows large), not all categories are present in each chunk. This means that, having
values T10, T11, T13 in the first chunk and T10, T11, T12 in the second,
different int values get assigned to the same categories across chunks.
The optimal way for me would be:
0. Create the list of categorical values and their corresponding int values (there are only around 100, and it is easy to retrieve them all from the data).
1. Load the data in chunks.
2. Substitute the values from the list.
3. Save each chunk, then combine them.
How could I perform these steps efficiently? Maybe a better approach exists?
Thanks in advance.
**Update1:** the categorical data is all of the same 'type'. They are keys like
T12, T45689, A3333, etc. The csv file looks like this:
Answer: In this case, it indeed seems that a two-pass scheme might be effective.
Starting with
import pandas as pd
data=pd.read_csv(my_file_name, chunksize=my_chunk_size)
You could do:
import collections
    uniques = collections.defaultdict(set)  # a set, since list has no .update()
for chunk in data:
for col in chunk:
uniques[col].update(chunk[col].unique())
At this point, uniques should map each column name to the unique items
appearing in it. To translate to a map, you can now use
for col in uniques:
uniques[col] = dict((e[1], e[0]) for e in enumerate(uniques[col]))
Now read the file again, and translate each column using the map corresponding
to it (see [here](http://stackoverflow.com/questions/20250771/remap-values-in-
pandas-column-with-a-dict).)
* * *
If your columns all contain keys from "the same dictionary", you can do the
following:
Starting with the following
import pandas as pd
data=pd.read_csv(my_file_name, chunksize=my_chunk_size)
You could do:
uniques = set([])
for chunk in data:
for col in cols:
uniques.update(chunk[col].unique())
At this point, uniques should contain the unique items appearing in the
DataFrame. To translate to a map, you can now use
uniques = dict((e[1], e[0]) for e in enumerate(uniques))
Now, load the DataFrame again, and use
[pd.DataFrame.replace](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.replace.html).
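A sketch of the final pass over the file, assuming `uniques` holds the per-column maps built above (restricted to the categorical columns) — the output filename is made up:
    first = True
    for chunk in pd.read_csv(my_file_name, chunksize=my_chunk_size):
        for col in uniques:
            chunk[col] = chunk[col].map(uniques[col])  # category -> int
        # write the header only once, then keep appending
        chunk.to_csv("train_converted.csv", mode='a', index=False, header=first)
        first = False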
|
function that select ALL paths in folder containing specificic pieces of string
Question: I would like to use the os.walk method in Python in order to select ALL the
files that contain certain strings in their name. Here the code I wrote
def func(root = root, element = ''):
c = []
for path, subdirs, files in os.walk(root):
c = c + [ os.path.join(path, name) for name in files \
if element in os.path.join(path, name) ]
return c
Now if I type
func(root, 'Raman')
then I select all the files which contain the string 'Raman' in their name. I
would like to have a function in which the second argument is a list of strings:
func(root, ['string 1', 'string 2', ... 'string n'])
and which selects ALL the paths that contain 'string 1', 'string 2' ... 'string
n', but the problem is harder than it looks. Could anyone suggest a
modification of the previous code?
Answer: You may use `any` built-in function:
import os
def func(root, elements):
c = []
for path, subdirs, files in os.walk(root):
c = c + [os.path.join(path, name) for name in files \
if any(element in os.path.join(path, name) for element in elements)]
return c
Unfortunately, `func` in its current form does not read that well. I'd suggest
converting it to a generator:
def func(root, elements):
for root_path, _, files in os.walk(root):
for name in files:
full_path = os.path.join(root_path, name)
if any(element in full_path for element in elements):
yield full_path
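Usage would then be, for example (with a hypothetical root directory and
search strings):
    for full_path in func('/some/root', ['Raman', 'IR']):
        print(full_path)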
|
Python and time / datetime
Question: I've recently began work on a Python program as seen in the fragment below.
# General Variables
running = False
new = True
timeStart = 0.0
timeElapsed = 0.0
def endProg():
curses.nocbreak()
stdscr.keypad(False)
curses.echo()
curses.endwin()
quit()
# Draw
def draw():
stdscr.addstr(1, 1, ">", curses.color_pair(6))
stdscr.border()
if running:
stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.ctime( timeStart - timeElapsed ) ) )
stdscr.redrawwin()
stdscr.refresh()
# Calculate
def calc():
if running:
timeElapsed = t.clock() - timeStart
stdscr.border()
stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) )
# Main Loop
while True:
# Get Input
kInput = stdscr.getch()
# Close the program
if kInput == ord('q'):
endProg()
# Stop the current run
elif kInput == ord('s'):
stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) )
running = False
new = True
# Start a run
elif kInput == ord(' ') and new:
running = not running
new = not new
timeStart = dt.datetime.now()
# Toggle the timer
elif kInput == ord('p') and not new:
timeStart = dt.datetime.now() - timeStart
running = not running
calc()
draw()
**My program is a bit between solutions currently**, sorry if something
doesn't look right. I'll be more than happy to explain.
I've spent the last several hours reading online about the time and datetime
modules for python, trying to figure out how I can use them to accomplish my
goals, but however I've tried to implement them it's been no use.
Essentially, I need my program to measure the elapsed time from when a button
is pressed and be able to display it in a hour:minute.second format. The
subtraction has made it very difficult, having to implement things such as
timedelta. From what I have read online there is no way to do what I'm wanting
without the datetime module, but it's given me nothing but problems.
Is there an easier solution, does my code have any outstanding errors, and how
stupid am I?
Answer: Using `time.time`:
    import time

    start = time.time()
    # run long processing...
    elapsed = time.time() - start
Et voilà !
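To display the elapsed seconds in the hour:minute.second format you mentioned,
one option is `time.strftime` with `time.gmtime` (a sketch, assuming the run
is shorter than 24 hours):
    import time

    start = time.time()
    # ... the part you want to time ...
    elapsed = time.time() - start
    # format the elapsed seconds as H:M.S (the format from your question)
    print(time.strftime("%H:%M.%S", time.gmtime(elapsed)))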
|
call function from list(string) in another py file
Question: I have 3 Python scripts ('testPrint01.py', 'testPrint02.py', 'testPrint03.py')
and I would like to call a function from 'testPrint02':
import sys
sys.path.append(path)
a = ['testPrint01','testPrint02','testPrint03']
import sys.a[1]
a[1].justPrintIt()
thank you
Answer: If you're in the same directory as your target script you can use
from testPrint02 import x
Where x is the function you want to import.
**Edit** : To import a module from a string variable you can use `importlib`
as described here: [import module from string
variable](http://stackoverflow.com/questions/8718885/import-module-from-
string-variable)
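For example, adapted to your list of names (a sketch assuming the scripts'
directory is on sys.path, as in your question):
    import importlib

    a = ['testPrint01', 'testPrint02', 'testPrint03']
    mod = importlib.import_module(a[1])  # imports testPrint02
    mod.justPrintIt()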
|
Error installing MySQL-python: Unable to find vcvarsall.bat
Question: I was trying to install `mysql-python` using **pip**
I'm getting the following error:
error: Unable to find vcvarsall.bat
----------------------------------------
Failed building wheel for mysql-python
...
...
running build_ext
building '_mysql' extension
error: Unable to find vcvarsall.bat
----------------------------------------
Command "c:\python27\python.exe -u -c "import setuptools,
tokenize;__file__='c:\\users\\hp\\appdata\\local\\temp\\pip-build-
jy5yhb\\mysql-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)
(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record
c:\users\hp\appdata\local\temp\pip-jditig-record\install-record.txt --single-
version-externally-managed --compile" failed with error code 1 in
c:\users\hp\appdata\local\temp\pip-build-jy5yhb\mysql-python\
Background works did:
* Installed MySQL from official website([link](https://www.mysql.com/why-mysql/windows/))
* Installed SQLite3
* Installed Visual C++ Redistributable 2008, 2010, 2013, 2015
* Installed C++ Compiler for python (latest)
I have tried all the answers that already exist in Stack Overflow and other
Websites, nothing helped.
Context:
I'm trying to run a _Django_ server in my laptop, for which I need mysql-
python.
(OS: Windows 10)
Answer: First check your Python installation and your Windows 10 installation
[i.e. is it 32-bit or 64-bit?].
The Visual C++ Redistributable comes in x86 as well as x64 versions; make sure
you have installed the compatible package.
|
How to get python tcp server/client to allow multiple clients at the same time
Question: I have started to make my own TCP server and client. I was able to get the
server and the client to connect over my LAN network. But when I try to have
another client connect to make a three way connection, it does not work. What
will happen is only when the first connected client has terminated the
connection between the server and the client, can the other client connect
and start the chat session. I do not understand why this happens. I have tried
threading, loops, and everything else I can think of. I would appreciate any
advice. I feel like there is just one small thing I am missing and I cannot
figure out what it is.
## Here is my server:
import socket
from threading import Thread
def whatBeip():
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('8.8.8.8', 0))
local_ip_address = s.getsockname()[0]
print('Current Local ip: ' + str(local_ip_address))
def clietConnect():
conn, addr = s.accept()
print 'Connection address:', addr
i = True
while i == True:
data = conn.recv(BUFFER_SIZE)
if not data:
break
print('IM Recieved: ' + data)
conn.sendall(data) # echo
whatBeip()
TCP_IP = ''
TCP_PORT = 5005
BUFFER_SIZE = 1024
peopleIn = 4
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(peopleIn)
for client in range(peopleIn):
Thread(target=clietConnect()).start()
conn.close()
## Here is my client
import socket
TCP_IP = '10.255.255.3'
TCP_PORT = 5005
BUFFER_SIZE = 1024
MESSAGE = "Hello, World!"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
i = True
while i == True:
s.sendall(raw_input('Type IM: '))
data = s.recv(BUFFER_SIZE)
s.close()
Answer: This is your main problem: `Thread(target=clietConnect()).start()` calls
the function `clietConnect` immediately and uses its return value as the Thread
target (the return value is None, so the Thread does nothing).
Also have a look at:
1) You should wait for all connections to close instead of `conn.close()` in
the end of the server:
threads = list()
for client in range(peopleIn):
t = Thread(target=clietConnect)
t.start()
threads.append(t)
for t in threads: t.join()
and to close the connection when no data is received:
if not data:
conn.close()
return
2) You probably want to use SO_REUSEADDR (see the sketch after this list) [ [Socket options SO_REUSEADDR and
SO_REUSEPORT, how do they differ? Do they mean the same across all major
operating systems?](http://stackoverflow.com/questions/14388706/) , [Python:
Binding Socket: "Address already in
use"](http://stackoverflow.com/questions/6380057/) ]
3) And have a look at asyncio for python
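For point 2, a minimal sketch of setting that option on the server socket
before binding (using the names from your code):
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # allow quick rebinding of the address after the server restarts
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((TCP_IP, TCP_PORT))
    s.listen(peopleIn)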
|
How do you get data from QTableWidget that user has edited (Python with PyQT)
Question: I asked a similar question before, but the result didn't work, and I don't
know why. Here was the original code:
def click_btn_printouts(self):
self.cur.execute("""SELECT s.FullName, m.PreviouslyMailed, m.nextMail, m.learnersDate, m.RestrictedDate, m.DefensiveDate FROM
StudentProfile s LEFT JOIN Mailouts m ON s.studentID=m.studentID""")
self.all_data = self.cur.fetchall()
self.search_results()
self.table.setRowCount(len(self.all_data))
self.tableFields = ["Check","Full name","Previously mailed?","Next mail","learnersDate","Restricted date","Defensive driving date"]
self.table.setColumnCount(len(self.tableFields))
self.table.setHorizontalHeaderLabels(self.tableFields)
self.checkbox_list = []
for i, self.item in enumerate(self.all_data):
FullName = QtGui.QTableWidgetItem(str(self.item[0]))
PreviouslyMailed = QtGui.QTableWidgetItem(str(self.item[1]))
LearnersDate = QtGui.QTableWidgetItem(str(self.item[2]))
RestrictedDate = QtGui.QTableWidgetItem(str(self.item[3]))
DefensiveDate = QtGui.QTableWidgetItem(str(self.item[4]))
NextMail = QtGui.QTableWidgetItem(str(self.item[5]))
self.table.setItem(i, 1, FullName)
self.table.setItem(i, 2, PreviouslyMailed)
self.table.setItem(i, 3, LearnersDate)
self.table.setItem(i, 4, RestrictedDate)
self.table.setItem(i, 5, DefensiveDate)
self.table.setItem(i, 6, NextMail)
chkBoxItem = QtGui.QTableWidgetItem()
chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
chkBoxItem.setCheckState(QtCore.Qt.Unchecked)
self.checkbox_list.append(chkBoxItem)
self.table.setItem(i, 0, self.checkbox_list[i])
The suggested code to add was this (indentation accurate) to the end of the
function:
self.changed_items = set()
self.table.itemChanged.connect(self.log_change)
And add the following function:
def log_change(self):
self.changed_items.add(self.item)
print(self.item)
The expected print was the edited data, but what I get is the data before it
was edited.
I can't use QTableView and QtSql unless I can find a way to use it with an SQL
query, get every selected record into a list, and stop certain columns from
being edited. If anybody knows how to do these, that would be great, I just
really have no time to go through all the documentation myself at the moment.
All I want to do is have the user be able to change data from the
QTableWidget, and get that changed data as a record.
Basically, my end goal is to have the equivalent of
`setEditStrategy(QSqlTableModel.OnManualSubmit)` for QTableWidget.
I have been trying to figure this out for a while now and I just want it
sorted out, it is the last thing I need to do to finish this program for a
client.
Answer: It is always difficult to answer without a minimal working example, so I
produced one myself and put the suggestion from the [other
post](http://stackoverflow.com/questions/39742199/how-do-i-get-the-
information-that-the-user-has-changed-in-a-table-in-pyqt-with-p) in, modifying
it, such that it outputs the changed item's text and its position inside the
table.
# runs with Python 2.7 and PyQt4
from PyQt4 import QtGui, QtCore
import sys
class App(QtGui.QMainWindow):
def __init__(self, parent=None):
super(App, self).__init__(parent)
self.setMinimumSize(600,200)
self.all_data = [["John", True, "01234", 24],
["Joe", False, "05671", 13],
["Johnna", True, "07145", 44] ]
self.mainbox = QtGui.QWidget(self)
self.layout = QtGui.QVBoxLayout()
self.mainbox.setLayout(self.layout)
self.setCentralWidget(self.mainbox)
self.table = QtGui.QTableWidget(self)
self.layout.addWidget(self.table)
self.button = QtGui.QPushButton('Update',self)
self.layout.addWidget(self.button)
self.click_btn_printouts()
self.button.clicked.connect(self.update)
def click_btn_printouts(self):
self.table.setRowCount(len(self.all_data))
self.tableFields = ["Name", "isSomething", "someProperty", "someNumber"]
self.table.setColumnCount(len(self.tableFields))
self.table.setHorizontalHeaderLabels(self.tableFields)
self.checkbox_list = []
for i, self.item in enumerate(self.all_data):
FullName = QtGui.QTableWidgetItem(str(self.item[0]))
FullName.setFlags(FullName.flags() & ~QtCore.Qt.ItemIsEditable)
PreviouslyMailed = QtGui.QTableWidgetItem(str(self.item[1]))
LearnersDate = QtGui.QTableWidgetItem(str(self.item[2]))
RestrictedDate = QtGui.QTableWidgetItem(str(self.item[3]))
self.table.setItem(i, 0, FullName)
self.table.setItem(i, 1, PreviouslyMailed)
self.table.setItem(i, 2, LearnersDate)
self.table.setItem(i, 3, RestrictedDate)
self.changed_items = []
self.table.itemChanged.connect(self.log_change)
def log_change(self, item):
self.table.blockSignals(True)
item.setBackgroundColor(QtGui.QColor("red"))
self.table.blockSignals(False)
self.changed_items.append(item)
print item.text(), item.column(), item.row()
def update(self):
print "Updating "
for item in self.changed_items:
self.table.blockSignals(True)
item.setBackgroundColor(QtGui.QColor("white"))
self.table.blockSignals(False)
self.writeToDatabase(item)
def writeToDatabase(self, item):
text, col, row = item.text(), item.column(), item.row()
#write those to database with your own code
if __name__=='__main__':
app = QtGui.QApplication(sys.argv)
thisapp = App()
thisapp.show()
sys.exit(app.exec_())
You may use this example now to refer to any further problems.
|
Replacing the existing MainWindow with a new window with Python, PyQt, Qt Designer
Question: I'm new to Python GUI programming and I'm having trouble making a GUI app. I have a
main window with only a button widget on it. What I want to know is how to
replace the existing window with a new window when an event occurs (such as a
button click).
An answer to a similar question here [Replace CentralWidget in
MainWindow](http://stackoverflow.com/questions/13550076/replace-centralwidget-
in-mainwindow), suggests using QStackedWidget, but they did not use Qt
Designer to make their GUI apps, whereas I have two .py files: one is the main
window file and the other is the window that I want to show after a button
press takes place, hence I don't know how to combine these two in my main.py file.
For example, my main window looks like this:
[![Main
Window](http://i.stack.imgur.com/soSsW.png)](http://i.stack.imgur.com/soSsW.png)
And after clicking on the button it should replace the existing window with
this:
[![New
Window](http://i.stack.imgur.com/jmgk4.png)](http://i.stack.imgur.com/jmgk4.png)
I would also like to know if the second window should be of type
QStackedWindow, QDialog or QWidget?
Here is my main.py code
from PyQt4 import QtGui
import sys
import design, design1
import os
class ExampleApp(QtGui.QMainWindow, design.Ui_MainWindow):
def __init__(self, parent=None):
super(ExampleApp, self).__init__(parent)
self.setupUi(self)
self.btnBrowse.clicked.connect(self.doSomething)
def doSomething(self):
# Code to replace the main window with a new window
pass
def main():
app = QtGui.QApplication(sys.argv)
form = ExampleApp()
form.show()
app.exec_()
if __name__ == '__main__':
main()
Answer: You probably don't want to actually create and delete a bunch of windows, but
if you really want to, you could do it like this
def doSomething(self):
# Code to replace the main window with a new window
window = OtherWindow()
window.show()
self.close()
Then in the `OtherWindow` class:
class OtherWindow(...):
...
def doSomething(self):
window = ExampleApp()
window.show()
self.close()
You actually probably don't want to do this. It would be much better if you
simply created 1 main window, with a `QStackedWidget` and put the different
controls and widgets on different tabs of the stacked widget and just switch
between them in the same window.
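A minimal sketch of that approach (the two page widgets here are hypothetical
placeholders for your Qt Designer forms):
    # inside ExampleApp.__init__, after setupUi(self)
    self.stack = QtGui.QStackedWidget(self)
    self.page1 = QtGui.QWidget()  # hypothetical: your first form
    self.page2 = QtGui.QWidget()  # hypothetical: your second form
    self.stack.addWidget(self.page1)
    self.stack.addWidget(self.page2)
    self.setCentralWidget(self.stack)
and in the button handler simply switch pages:
    def doSomething(self):
        self.stack.setCurrentIndex(1)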
|
how to import scripts as modules in ipython?
Question: So, I have two Python files:
the 1st "m12345.py"
def my():
return 'hello world'
the 2nd "1234.py":
from m12345 import *
a = m12345.my()
print(a)
In ipython I try to execute these commands:
exec(open("f:\\temp\\m12345.py").read())
exec(open("f:\\temp\\1234.py").read())
the error for the 2nd command is:
ImportError: No module named 'm12345'
Please, help how to add the 1st file as a module for the 2nd?
Answer: First off, if you use the universal import (`from m12345 import *`) then you
should just call the `my()` function and not `m12345.my()`, or else you will get a
> NameError: name 'm12345' is not defined
Secondly, you should add the following snippet to every script that you want
to be able to either run directly or import:
    if __name__ == "__main__":
        pass
PS. Add this to the 1st script ("m12345.py"). PS2. Avoid using the universal
import method since it can mess up the namespace of your script. (For that
reason, it isn't considered best practice.)
**edit:** Is the m12345.py located in the Python folder (where Python was
installed on your hard drive)? If not, then you should add the directory it is
located in to sys.path with:
import sys
sys.path.append(directory)
where directory is the string of the location where your m12345.py is located.
Note that if you use Windows you should use `/` (or escaped backslashes, `\\`)
and not a bare `\`. However, it would be much easier to just relocate the
script (if possible).
|
Numpy not found in Python3
Question: I am trying to run numpy in Python 3, using the WinPy distribution. I put
#!python3 at the top of the script, because I was told that is something that
Winpy has that allows you to make it run in a certain version. If I run the
script in the shell (Eclipse) it works fine, but when I try to run it from the
console, I get this error:
Traceback (most recent call last):
File "C:\Users\Dax\workspace\Python3\TestofPython3.py", line 9, in <module>
import numpy
ImportError: No module named 'numpy'
If I don't put that at that the top of the script, it runs numpy fine until it
gets to 'input()'. It works in the shell with or without #!python3.
Answer: The "#!python3" is to help the console determine the right version of python.
However you need to make sure the path is correct. Instead of putting
"#!python3", put "#!/usr/bin/" and then your python version, so "python" or
"python3".
Check this article for more information on this. [Article on "#!"
Scripts.](http://stackoverflow.com/questions/2429511/why-do-people-write-usr-
bin-env-python-on-the-first-line-of-a-python-script)
|
How can I replace the vowels of a word with underscores in python?
Question: I'm a beginner learning the Python language and I'm stumped on how to take the
vowels of a word and replace them with underscores.
So far this is what I have come up with, and it just doesn't work:
word = input("Enter a word: ")
new_word = ""
vowels = "aeiouy"
for letter in word:
if letter != vowels:
new_word += word
else:
new_word += "_"
print(new_word)
Answer: You can use `string.translate` and `maketrans`.
from string import maketrans
vowels = "aeiouy"
t = "______"
st = "trying this string"
tran = maketrans(vowels, t)
print st.translate(tran)
# Gives tr__ng th_s str_ng
You may also want to check uppercases.
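Note that your question code is Python 3 (`input()`, `print(...)`); there
`maketrans` is a static method on `str`, so the equivalent is:
    vowels = "aeiouy"
    st = "trying this string"
    # build a translation table mapping each vowel to an underscore
    print(st.translate(str.maketrans(vowels, "_" * len(vowels))))
    # gives tr__ng th_s str_ng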
|
Getting PostgreSQL percent_rank and scipy.stats.percentileofscore results to match
Question: I'm trying to QAQC the results of calculations that are done in a PostgreSQL
database, using a python script to read in the inputs to the calculation and
echo the calculation steps and compare the final results of the python script
against the results from the PostgreSQL calculation.
The calculations in the PostgreSQL database use the [percent_rank
function](https://www.postgresql.org/docs/current/static/functions-
aggregate.html), returning the percentile rank (from 0 to 1) of a single value
in a list of values. In the python script I am using the [Scipy
percentileofscore
function.](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.percentileofscore.html#scipy-
stats-percentileofscore)
So, here's the question: I can't get the results to match, and I am wondering
if anyone knows what settings I should use in the Scipy percentileofscore
function to match the PostgreSQL percent_rank function.
Answer: You can use
[`scipy.stats.rankdata`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html).
The following example reproduces the result shown at
<http://docs.aws.amazon.com/redshift/latest/dg/r_WF_PERCENT_RANK.html>:
In [12]: import numpy as np
In [13]: from scipy.stats import rankdata
In [14]: values = np.array([15, 20, 20, 20, 30, 30, 40])
`rankdata(values, method='min')` gives the desired rank:
In [15]: rank = rankdata(values, method='min')
In [16]: rank
Out[16]: array([1, 2, 2, 2, 5, 5, 7])
Then a basic calculation gives the equivalent of `percent_rank`:
In [17]: (rank - 1) / (len(values) - 1)
Out[17]:
array([ 0. , 0.16666667, 0.16666667, 0.16666667, 0.66666667,
0.66666667, 1. ])
(I'm using Python 3.5. In Python 2, use something like `(rank - 1) /
float(len(values) - 1)`.)
* * *
You can use `percentileofscore`, but:
* You have to use the argument `kind='strict'`.
* You have to scale the result by `n/(n-1)`, where `n` is the number of values.
* You have to divide by 100 to convert from a true percentage to a fraction between 0 and 1.
* `percentileofscore` expects its second argument to be a scalar, so you have to use a loop to compute the result separately for each value.
Here's an example using the same values as above:
In [87]: import numpy as np
In [88]: from scipy.stats import percentileofscore
In [89]: values = np.array([15, 20, 20, 20, 30, 30, 40])
In [90]: n = len(values)
Here I use a list comprehension to generate the result:
In [91]: [n*percentileofscore(values, val, kind='strict')/100/(n-1) for val in values]
Out[91]:
[0.0,
0.16666666666666666,
0.16666666666666666,
0.16666666666666666,
0.66666666666666663,
0.66666666666666663,
1.0]
|
Python how to convert a value with shape (1000L, 1L) to the value of the shape (1000L,)
Question: I have a variable with a shape of (1000L, 1L), but the structure causes some
errors for subsequent analysis. It needs to be converted to one with the
shape (1000L,). Let me be more specific.
import numpy as np
a = np.array([1,2,3])
b = np.array([[1],[2],[3]])
I want to convert b to a. Is there any quick way to do that?
Answer: There are a lot of ways you could do that, such as indexing:
a = b[:, 0]
raveling:
a = numpy.ravel(b)
or reshaping:
a = numpy.reshape(b, (-1,))
|
python: could not broadcast input array from shape (3,1) into shape (3,)
Question:
import numpy as np
def qrhouse(A):
(m,n) = A.shape
R = A
V = np.zeros((m,n))
for k in range(0,min(m-1,n)):
x = R[k:m,k]
x.shape = (m-k,1)
v = x + np.sin(x[0])*np.linalg.norm(x.T)*np.eye(m-k,1)
V[k:m,k] = v
R[k:m,k:n] = R[k:m,k:n]-(2*v)*(np.transpose(v)*R[k:m,k:n])/(np.transpose(v)*v)
R = np.triu(R[0:n,0:n])
return V, R
A = np.array( [[1,1,2],[4,3,1],[1,6,6]] )
print qrhouse(A)
It's QR factorization Python code and I don't know why the error happens. The
ValueError occurs at `V[k:m,k] = v`:
    could not broadcast input array from shape (3,1) into shape (3)
Answer: `V[k:m,k] = v`; `v` has shape (3,1), but the target is (3,). `k:m` is a 3 term
slice; `k` is a scalar.
Try using `v.ravel()`. Or `V[k:m,[k]]`.
But also understand why `v` has its shape.
|
scrapy - spider module def functions not getting invoked
Question: My intention is to invoke the start_requests method to log in to the website
and, after login, scrape the website. Based on the log message, I see that:
1. start_requests is not invoked.
2. The callback function of parse is also not invoked.
What's actually happening is that the spider is only loading the urls in
start_urls.
Question:
1. Why is the spider not crawling through the other pages (say page 2, 3, 4)?
2. Why is logging from the spider not working?
Note:
1. My method to calculate page number and url creation is correct. I verified it.
2. I referred to this link to write this code: [Using loginform with scrapy](http://stackoverflow.com/questions/29809524/using-loginform-with-scrapy)
My code:
zauba.py (spider)
#!/usr/bin/env python
from scrapy.spiders import CrawlSpider
from scrapy.http import FormRequest
from scrapy.http.request import Request
from loginform import fill_login_form
import logging
logger = logging.getLogger('Zauba')
class zauba(CrawlSpider):
name = 'Zauba'
login_url = 'https://www.zauba.com/user'
login_user = 'scrapybot1@gmail.com'
login_password = 'scrapybot1'
logger.info('zauba')
start_urls = ['https://www.zauba.com/import-gold/p-1-hs-code.html']
def start_requests(self):
logger.info('start_request')
# let's start by sending a first request to login page
yield scrapy.Request(self.login_url, callback = self.parse_login)
def parse_login(self, response):
logger.warning('parse_login')
# got the login page, let's fill the login form...
data, url, method = fill_login_form(response.url, response.body,
self.login_user, self.login_password)
# ... and send a request with our login data
return FormRequest(url, formdata=dict(data),
method=method, callback=self.start_crawl)
def start_crawl(self, response):
logger.warning('start_crawl')
# OK, we're in, let's start crawling the protected pages
for url in self.start_urls:
yield scrapy.Request(url, callback=self.parse)
def parse(self, response):
logger.info('parse')
text = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div[@style="width:920px; margin-bottom:12px;"]/span/text()').extract_first()
total_entries = int(text.split()[0].replace(',', ''))
total_pages = int(math.ceil((total_entries*1.0)/30))
logger.warning('*************** : ' + total_pages)
print('*************** : ' + total_pages)
for page in xrange(1, (total_pages + 1)):
url = 'https://www.zauba.com/import-gold/p-' + page +'-hs-code.html'
log.msg('url%d : %s' % (pages,url))
yield scrapy.Request(url, callback=self.extract_entries)
def extract_entries(self, response):
logger.warning('extract_entries')
row_trs = response.xpath('//div[@id="block-system-main"]/div[@class="content"]/div/table/tr')
for row_tr in row_trs[1:]:
row_content = row_tr.xpath('.//td/text()').extract()
if (row_content.__len__() == 9):
print row_content
yield {
'date' : row_content[0].replace(' ', ''),
'hs_code' : int(row_content[1]),
'description' : row_content[2],
'origin_country' : row_content[3],
'port_of_discharge' : row_content[4],
'unit' : row_content[5],
'quantity' : int(row_content[6].replace(',', '')),
'value_inr' : int(row_content[7].replace(',', '')),
'per_unit_inr' : int(row_content[8].replace(',', '')),
}
loginform.py
#!/usr/bin/env python
import sys
from argparse import ArgumentParser
from collections import defaultdict
from lxml import html
__version__ = '1.0' # also update setup.py
def _form_score(form):
score = 0
# In case of user/pass or user/pass/remember-me
if len(form.inputs.keys()) in (2, 3):
score += 10
typecount = defaultdict(int)
for x in form.inputs:
type_ = (x.type if isinstance(x, html.InputElement) else 'other'
)
typecount[type_] += 1
if typecount['text'] > 1:
score += 10
if not typecount['text']:
score -= 10
if typecount['password'] == 1:
score += 10
if not typecount['password']:
score -= 10
if typecount['checkbox'] > 1:
score -= 10
if typecount['radio']:
score -= 10
return score
def _pick_form(forms):
"""Return the form most likely to be a login form"""
return sorted(forms, key=_form_score, reverse=True)[0]
def _pick_fields(form):
"""Return the most likely field names for username and password"""
userfield = passfield = emailfield = None
for x in form.inputs:
if not isinstance(x, html.InputElement):
continue
type_ = x.type
if type_ == 'password' and passfield is None:
passfield = x.name
elif type_ == 'text' and userfield is None:
userfield = x.name
elif type_ == 'email' and emailfield is None:
emailfield = x.name
return (userfield or emailfield, passfield)
def submit_value(form):
"""Returns the value for the submit input, if any"""
for x in form.inputs:
if x.type == 'submit' and x.name:
return [(x.name, x.value)]
else:
return []
def fill_login_form(
url,
body,
username,
password,
):
doc = html.document_fromstring(body, base_url=url)
form = _pick_form(doc.xpath('//form'))
(userfield, passfield) = _pick_fields(form)
form.fields[userfield] = username
form.fields[passfield] = password
form_values = form.form_values() + submit_value(form)
return (form_values, form.action or form.base_url, form.method)
def main():
ap = ArgumentParser()
ap.add_argument('-u', '--username', default='username')
ap.add_argument('-p', '--password', default='secret')
ap.add_argument('url')
args = ap.parse_args()
try:
import requests
except ImportError:
print 'requests library is required to use loginform as a tool'
r = requests.get(args.url)
(values, action, method) = fill_login_form(args.url, r.text,
args.username, args.password)
print '''url: {0}
method: {1}
payload:'''.format(action, method)
for (k, v) in values:
print '- {0}: {1}'.format(k, v)
if __name__ == '__main__':
sys.exit(main())
The Log Message:
2016-10-02 23:31:28 [scrapy] INFO: Scrapy 1.1.3 started (bot: scraptest)
2016-10-02 23:31:28 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scraptest.spiders', 'FEED_URI': 'medic.json', 'SPIDER_MODULES': ['scraptest.spiders'], 'BOT_NAME': 'scraptest', 'ROBOTSTXT_OBEY': True, 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:39.0) Gecko/20100101 Firefox/39.0', 'FEED_FORMAT': 'json', 'AUTOTHROTTLE_ENABLED': True}
2016-10-02 23:31:28 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.throttle.AutoThrottle']
2016-10-02 23:31:28 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-02 23:31:28 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-02 23:31:28 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-02 23:31:28 [scrapy] INFO: Spider opened
2016-10-02 23:31:28 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-02 23:31:28 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-10-02 23:31:29 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/robots.txt> (referer: None)
2016-10-02 23:31:38 [scrapy] DEBUG: Crawled (200) <GET https://www.zauba.com/import-gold/p-1-hs-code.html> (referer: None)
2016-10-02 23:31:38 [scrapy] INFO: Closing spider (finished)
2016-10-02 23:31:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 558,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 136267,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 3, 6, 31, 38, 560012),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 10, 3, 6, 31, 28, 927872)}
2016-10-02 23:31:38 [scrapy] INFO: Spider closed (finished)
Answer: I figured out the silly mistake I made!
**I didn't place the functions inside the class. That's why things didn't work
as expected. Once I indented all the functions into the class, things started
to work fine.**
Thanks @user2989777 and @Granitosaurus for coming forward to debug
|
calculating delta time between records in dataframe
Question: I have an interesting problem, I am trying to calculate the delta time between
records done at different locations.
id x y time
1 x1 y1 10
1 x1 y1 12
1 x2 y2 14
2 x4 y4 8
2 x5 y5 12
I am trying to get something like
id x y time delta
1 x1 y1 10 4
1 x2 y2 14 0
2 x4 y4 8 4
2 x5 y5 12 0
I have done this type of processing with HiveQL using a custom UDTF, but was
wondering how I can achieve this with DataFrames in general (be it in R,
Pandas, or PySpark). Ideally, I am looking for a solution for Python pandas
and pyspark.
Any hint is appreciated, thank you for your time !
Answer: I think you need [`drop_duplicates`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.drop_duplicates.html) with `groupby`
with [`DataFrameGroupBy.diff`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.diff.html),
[`shift`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.Series.shift.html) and
[`fillna`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.fillna.html):
df1 = df.drop_duplicates(subset=['id','x','y']).copy()
df1['delta'] = df1.groupby(['id'])['time'].diff().shift(-1).fillna(0)
Final code:
    import pandas as pd

    df = pd.read_csv("sampleInput.txt",
                     header=None,
                     usecols=[0,1,2,3],
                     names=['id','x','y','time'],
                     sep="\t")
delta = df.groupby(['id','x','y']).first().reset_index()
delta['delta'] = delta.groupby('id')['time'].diff().shift(-1).fillna(0)
**Timings** :
In [111]: %timeit df.groupby(['id','x','y']).first().reset_index()
100 loops, best of 3: 2.42 ms per loop
In [112]: %timeit df.drop_duplicates(subset=['id','x','y']).copy()
1000 loops, best of 3: 658 µs per loop
|
How to call from function to another function
Question: I am making a minesweeper game within python with pygame.
import pygame, math, sys
def bomb_check():
if check in BOMBS:
print("You hit a bomb!")
sys.exit
def handle_mouse(mousepos):
x, y = mousepos
x, y = math.ceil(x / 40), math.ceil(y / 40)
check = print("("+"{0}, {1}".format(x,y)+")")
I want to pass "check" to "bomb_check". Any other solution to this problem is
welcome; I am but a rookie at Python.
Answer: Just use it as an argument:
    import pygame, math, sys

    def bomb_check(check):
        if check in BOMBS:
            print("You hit a bomb!")
            sys.exit()
def handle_mouse(mousepos):
x, y = mousepos
x, y = math.ceil(x / 40), math.ceil(y / 40)
check = x, y
print(check)
bomb_check(check)
That will work only if you are storing a tuples of 2 items (integers) in your
BOMBS. Of course it also requires BOMBS to be in global scope.
|
Converting Tweets to Python dictionary
Question: I want to analyse Twitter data. I have downloaded some tweets and saved them
in a .txt file.
When I tried to extract useful information from the tweets data, I was not
able to make any progress, because for a beginner like me it seems very
difficult to extract tweets, location etc.
While googling I found that if we convert the JSON into a dictionary it would
be easy to extract the info.
Now I want to convert my JSON data to Python dictionaries. I don't know how to
proceed.
Here is the code used to save tweets
import tweepy
import json
import jsonpickle
consumer_key = "*********"
consumer_secret = "*******"
access_token = "************"
access_token_secret = "**********"
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# It make the Tweepy API call auto wait (sleep) when it hits the rate limit and continue upon expiry of the window.
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
if (not api):
print ("Can't Authenticate")
sys.exit(-1)
searchQuery = 'SomeHashtag'
maxTweets = 10000000 # Some arbitrary large number
tweetsPerQry = 100
fName = 'file.txt'
sinceId = None
max_id = "Latest tweet ID"
tweetCount = 0
print("Downloading max {0} tweets".format(maxTweets))
with open(fName, 'a') as f:
while tweetCount < maxTweets:
try:
if (max_id <= 0):
if (not sinceId):
new_tweets = api.search(q=searchQuery, lang ="en", count=tweetsPerQry)
else:
new_tweets = api.search(q=searchQuery, lang ="en", count=tweetsPerQry,
since_id=sinceId)
else:
if (not sinceId):
new_tweets = api.search(q=searchQuery, lang ="en", count=tweetsPerQry,
max_id=str(max_id - 1))
else:
new_tweets = api.search(q=searchQuery, lang ="en", count=tweetsPerQry,
max_id=str(max_id - 1),
since_id=sinceId)
if not new_tweets:
print("No more tweets found")
break
for tweet in new_tweets:
f.write(jsonpickle.encode(tweet._json, unpicklable=False) + '\n')
tweetCount += len(new_tweets)
print("Downloaded {0} tweets".format(tweetCount))
max_id = new_tweets[-1].id
except tweepy.TweepError as e:
# Just exit if any error
print("some error : " + str(e))
break
print ("Downloaded {0} tweets, Saved to {1}".format(tweetCount, fName))
Answer: It seems you can just read your file line by line and unpickle it using
[`jsonpickle.decode`](https://jsonpickle.github.io/api.html#jsonpickle.decode)
method:
tweets = []
with open(filename) as f:
for line in f:
tweets.append(jsonpickle.decode(line))
And I think you can bypass third-party library at all:
import json
with open(filename, 'w') as f:
for tweet in new_tweets:
f.write(json.dumps(tweet) + '\n')
tweets = []
with open(filename) as f:
for line in f:
tweets.append(json.loads(line))
|
issue passing URL from json config file to python script
Question: I'm currently writing a small python script to monitor all URLs within my
team's pool of web apps. I have a python script that basically runs in an
infinite loop and will check the urls every 60 min. My issue lies in pulling
my urls from my json config: for some reason I cannot use a url that has the
address:port and a path thereafter.
My python script or function is as follows (partial) and basically bombs out
when it gets to the connection call `conn = httplib.HTTPConnection(website)`.
The issue lies in reading URLs in this format ("url":
"zardev0201230265:7778/apt/server/login/#")
def monitor():
import httplib
import logging
import json
import os
p = os.getpid()
#Basic config for / displays what will appear in log file
#Open Json config file and use data function to load and read the file
logging.basicConfig(filename='SalesTriggerCHECK.log', format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO)
with open('htppchecklogconfig.json') as json_data_file:
data = json.load(json_data_file)
#Declare i as counter to iterate through json list, range will be set to length of the list
#Assign data for current iteration for server/category and URL to variables
for i in range(len(data["application_details"])):
server = (data["application_details"][i]["server"])
application = (data["application_details"][i]["application"])
category = (data["application_details"][i]["category"])
website = (data["application_details"][i]["url"])
#Connection settings that are required to make a connection request to check URL
#The data function is used in conjunction with the counter(i) to find the URL in the Json file
#Res variable stores response from connection request / if statement checks for status 200 for ok
#We can get the response code and reason using the the get response function when make the connection
conn = httplib.HTTPConnection(website)
conn.request("HEAD", "/index.html")
res = conn.getresponse()
My JSON config file is as follows:
{
"application_details": [
{
"server": "Server120",
"application": "sales application",
"category": "DEV",
"url": "zardev0201230265:7778/apt/server/login/#"
},
{
"server": "Server130",
"application": "Dashboard-Hangfire",
"category": "DEV",
"url": "zardev0201230297:7779"
}
]
}
error details
Traceback (most recent call last):
File "C:\Users\pc\Desktop\app\httpchecklog.py", line 50, in <module>
monitor()
File "C:\Users\pc\Desktop\app\httpchecklog.py", line 33, in monitor
conn = httplib.HTTPConnection(website)
File "C:\Python27\lib\httplib.py", line 751, in __init__
(self.host, self.port) = self._get_hostport(host, port)
File "C:\Python27\lib\httplib.py", line 792, in _get_hostport
raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
InvalidURL: nonnumeric port: '7778/apt/server/login/#'
any advice around this would be highly appreciated
Answer: As it says in the exception: nonnumeric port. The HTTPConnection class
interprets everything after the ':' as the port, in your case:
'7778/apt/server/login/#'. This can only be numeric. If you change it to
'7778', the exception shouldn't occur.
The available parameters can be found in the python docs:
<https://docs.python.org/2/library/httplib.html>
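One way around it, keeping your JSON as it is, would be to split the host:port
part from the path yourself and pass the path to `request` instead of the
hard-coded "/index.html" (a sketch based on your URL format):
    import httplib

    website = "zardev0201230265:7778/apt/server/login/#"
    hostport, _, path = website.partition('/')
    conn = httplib.HTTPConnection(hostport)  # 'zardev0201230265:7778' - numeric port
    conn.request("HEAD", '/' + path)         # '/apt/server/login/#'
    res = conn.getresponse()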
|
Python 3 Pandas Filter/Extract by multiple column values, including <> 0
Question: Working with a publicly available csv file from USASPENDING.gov. Able to
extract data from Navy but do not know the right syntax to add a second filter
to exclude all records with `Dollarsobligated = 0`.
Code is:
import pandas as pd
df = pd.read_csv("2016_DOD_Contracts_Full_20160915.csv")
df.columns = [c.replace(' ','_') for c in df.columns]
new_df = df[(df.mod_agency == '1700: DEPT OF THE NAVY') & (df.dollarsobligated <> 0)]
# Export result to CSV
new_df.to_csv('example15.csv')
I get an error that says `<>` is invalid syntax. No examples of 'does not
equal 0' on the web yet.
Answer: I think you need to replace `<>` with `!=` in [`boolean
indexing`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-
indexing), because [in Python 3, <> was
removed](https://docs.python.org/3.0/whatsnew/3.0.html#removed-syntax) (thanks
[`unutbu`](http://stackoverflow.com/questions/39830890/python-3-pandas-filter-
extract-by-multiple-column-values-including-0#comment66951499_39830890)).
Also you can use [`str.replace`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.Series.str.replace.html):
df.columns = df.columns.str.replace(' ','_')
new_df = df[(df.mod_agency == '1700: DEPT OF THE NAVY') & (df.Dollarsobligated != 0)]
Sample:
df = pd.DataFrame({'mod agency':['1700: DEPT OF THE NAVY',
'1700: DEPT OF THE NAVY',
'1800: DEPT OF THE NAVY'],
'Dollarsobligated':[1,0,0],
'C':[7,8,9]})
print (df)
C Dollarsobligated mod agency
0 7 1 1700: DEPT OF THE NAVY
1 8 0 1700: DEPT OF THE NAVY
2 9 0 1800: DEPT OF THE NAVY
df.columns = df.columns.str.replace(' ','_')
new_df = df[(df.mod_agency == '1700: DEPT OF THE NAVY') & (df.Dollarsobligated != 0)]
print (new_df)
C Dollarsobligated mod_agency
0 7 1 1700: DEPT OF THE NAVY
|
Python time error: mktime overflow
Question: While working with Python's `time` module I got this error:
> `OverflowError: mktime argument out of range`
What I have found concerning this is that the time might be outside of the
epoch and therefore cannot be represented in my Windows environment.
However, the code tested by me is:
import time
s = "20 3 59 3 "
t = time.mktime(time.strptime(s, "%d %H %M %S "))
print(t)
How can I avoid this? My goal is to get the difference between two points of
time on the same day (I won't get any information about month or year).
Answer: Your problem is that the timetuple created by `time.strptime(s, "%d %H %M %S
")` is:
(tm_year=1900, tm_mon=1, tm_mday=20, tm_hour=3, tm_min=59, tm_sec=3, tm_wday=5, tm_yday=20, tm_isdst=-1)
...and the documentation for `time.mktime()` states (emphasis mine):
> **time.mktime(t)** This is the inverse function of localtime(). Its argument
> is the struct_time or full 9-tuple (since the dst flag is needed; use -1 as
> the dst flag if it is unknown) which expresses the time in local time, not
> UTC. It returns a floating point number, for compatibility with time(). If
> the input value cannot be represented as a valid time, either OverflowError
> or ValueError will be raised (which depends on whether the invalid value is
> caught by Python or the underlying C libraries). _The earliest date for
> which it can generate a time is platform-dependent._
So this suggests that `1900` is too early to convert. On my system (Win 7), I
also get an error, but if I change your code to include a recent year:
>>> s = "1970 20 3 59 3 "
>>> t = time.mktime(time.strptime(s, "%Y %d %H %M %S "))
>>> print t
1655943.0
I get no error, but if I change the year to `1950`, I get `OverflowError`.
So the solution is to include a year in your string, that `time.mktime()` can
convert.
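So for your actual goal (the difference between two points of time on the same
day), one sketch is to prepend a dummy year that mktime accepts and subtract:
    import time

    fmt = "%Y %d %H %M %S"
    t1 = time.mktime(time.strptime("1970 20 3 59 3", fmt))
    t2 = time.mktime(time.strptime("1970 20 4 10 0", fmt))
    diff = t2 - t1  # elapsed seconds between the two points in time
    print(diff)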
|
PyQt: How do I load a ui file from a resource?
Question: In general, I load all my ui files via the `loadui()` method, and this works
fine for me. This looks like this:
#!/usr/bin/env python
#-*- coding:utf-8 -*-
'''
The modules for Qt are imported.
PyQt are a set of Python bindings for Qt.
'''
from PyQt4.QtGui import QDialog
from PyQt4.uic import loadUi
from PyQt4.QtCore import Qt, QFile
from PyQt4 import QtCore
class My_Window(QDialog):
def __init__(self, parent):
QDialog.__init__(self, parent)
UI_PATH = QFile(":/ui_file/test.ui")
UI_PATH.open(QFile.ReadOnly)
self.ui_test = loadUi(UI_PATH, self)
UI_PATH.close()
Now I try to load the ui file via `loaduiType()`, but it doesn't work. I tried
with this code:
from PyQt4.uic import loadUiType
UI_PATH = QFile(":/ui_file/test.ui")
Ui_Class, _ = loadUiType(UI_PATH)
class Test_Window(QDialog, UiClass):
def __init__(self, parent):
QDialog.__init__(self, parent)
self.setupUi(self)
What is the correct and best why to load the ui file with the `loadUiType()`
method?
Answer: It's not really much different from what you were already doing:
from PyQt4.QtCore import QFile
from PyQt4.uic import loadUiType
import resources_rc
def loadUiClass(path):
stream = QFile(path)
stream.open(QFile.ReadOnly)
try:
return loadUiType(stream)[0]
finally:
stream.close()
Ui_Class = loadUiClass(':/ui_file/test.ui')
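Your window class can then inherit from the returned class, as you intended:
    from PyQt4.QtGui import QDialog

    class Test_Window(QDialog, Ui_Class):
        def __init__(self, parent=None):
            QDialog.__init__(self, parent)
            self.setupUi(self)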
|
python3 12 digits script, each digit equal to three times the one before it?
Question: Write a program that displays **12 digits**,
each digit equal to three times the digit before it.
I tried code like this:
a , b , c = 1 , 1 , 1
print(c)
while c < 12 : # for looping
c = c + 1 # c for counting
b = a+b
y = 3*b
print(c,y)
Can anyone help me correct the result?
Answer: You can use [power
operator](https://docs.python.org/3/reference/expressions.html#the-power-
operator) for that:
from itertools import islice
def numbers(x, base=3):
n = 0
while True:
yield x * base ** n
n += 1
for n in islice(numbers(1), 12):
print(n)
Or if you really like your way of doing that, here's a fixed version of your
code:
b, c = 1, 0
while c < 12:
print(c, b)
b *= 3
c += 1
|
pip show xml shows Null
Question: I am using Python 2.7.12 and tried
    import xml.etree # successfully imported
then tried
    import lxml.etree # successfully imported
When I tried to get the version of xml through
    pip show xml # result is null
`pip show lxml` shows version 3.4.2.
Why is it not showing the xml version?
Answer: Because [`xml`](https://docs.python.org/2/library/xml.html#module-xml) is a
built-in package in Python 2.7. Built-in modules and packages are tied to the
Python version; they're usually only upgraded whenever you upgrade your Python
version.
`pip show` only works with 3rd-party packages that have been installed by
`pip` or compatible tools.
Furthermore it seems that `pip` doesn't output any diagnostics at all, even
when trying to force with `-v` (verbose) flag; it just exits with exit status
1 when given an invalid package name:
% pip show -v asdfasdfasdfasfd; echo Exit status: $?
Exit status: 1
|
How to save the edited .csv file in python
Question: I have sensor readings stored in csv files and now I am adding some more
values to these files. How can I save these files in new locations in csv
format for future use?
Answer: Take a look at this guide: <http://www.pythonforbeginners.com/systems-
programming/using-the-csv-module-in-python/>
Basically the [csv module](https://docs.python.org/3.3/library/csv.html) does
what you need:
import csv
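For example, a minimal sketch (with hypothetical file paths and an extra
appended value):
    import csv

    # hypothetical paths; newline='' avoids blank rows on Windows in Python 3
    with open('sensor_readings.csv', 'r') as src, \
            open('new_location/sensor_readings.csv', 'w', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            row.append('extra value')  # the additional values you computed
            writer.writerow(row)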
|
IOError: [Errno socket error] using BeautifulSoup
Question: I am trying to get the data from US Census website using beautiful soup with
Python 2.7. This is the code that I use:
import urllib
from bs4 import BeautifulSoup
url = "https://www.census.gov/quickfacts/table/PST045215/01"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
However, this is the error I got:
IOError Traceback (most recent call last)
<ipython-input-5-47941f5ea96a> in <module>()
59
60 url = "https://www.census.gov/quickfacts/table/PST045215/01"
---> 61 html = urllib.urlopen(url).read()
62 soup = BeautifulSoup(html)
63
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.pyc in urlopen(url, data, proxies, context)
85 opener = _urlopener
86 if data is None:
---> 87 return opener.open(url)
88 else:
89 return opener.open(url, data)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.pyc in open(self, fullurl, data)
211 try:
212 if data is None:
--> 213 return getattr(self, name)(url)
214 else:
215 return getattr(self, name)(url, data)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.pyc in open_https(self, url, data)
441 if realhost: h.putheader('Host', realhost)
442 for args in self.addheaders: h.putheader(*args)
--> 443 h.endheaders(data)
444 errcode, errmsg, headers = h.getreply()
445 fp = h.getfile()
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.pyc in endheaders(self, message_body)
1051 else:
1052 raise CannotSendHeader()
-> 1053 self._send_output(message_body)
1054
1055 def request(self, method, url, body=None, headers={}):
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.pyc in _send_output(self, message_body)
895 msg += message_body
896 message_body = None
--> 897 self.send(msg)
898 if message_body is not None:
899 #message_body was not a string (i.e. it is a file) and
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.pyc in send(self, data)
857 if self.sock is None:
858 if self.auto_open:
--> 859 self.connect()
860 else:
861 raise NotConnected()
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.pyc in connect(self)
1276
1277 self.sock = self._context.wrap_socket(self.sock,
-> 1278 server_hostname=server_hostname)
1279
1280 __all__.append("HTTPSConnection")
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.pyc in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname)
351 suppress_ragged_eofs=suppress_ragged_eofs,
352 server_hostname=server_hostname,
--> 353 _context=self)
354
355 def set_npn_protocols(self, npn_protocols):
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.pyc in __init__(self, sock, keyfile, certfile, server_side, cert_reqs, ssl_version, ca_certs, do_handshake_on_connect, family, type, proto, fileno, suppress_ragged_eofs, npn_protocols, ciphers, server_hostname, _context)
599 # non-blocking
600 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
--> 601 self.do_handshake()
602
603 except (OSError, ValueError):
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.pyc in do_handshake(self, block)
828 if timeout == 0.0 and block:
829 self.settimeout(None)
--> 830 self._sslobj.do_handshake()
831 finally:
832 self.settimeout(timeout)
IOError: [Errno socket error] [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:590)
I have looked at two Stack Overflow sources such as
[this](http://stackoverflow.com/questions/30918761/sslv3-alert-handshake-
failure-with-urllib2) and
[this](http://stackoverflow.com/questions/33394517/html-link-parsing-using-
beautifulsoup) for solutions but they do not solve the issue.
Answer: One workaround to this problem would be to switch to
[`requests`](http://docs.python-requests.org/en/master/):
import requests
from bs4 import BeautifulSoup
url = "https://www.census.gov/quickfacts/table/PST045215/01"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
print(soup.title.get_text())
Prints:
Alabama QuickFacts from the US Census Bureau
Note that this might need [`requests\[security\]`
package](http://stackoverflow.com/a/34473533/771848) to be installed as well:
pip install requests[security]
|
Python, QT and matplotlib scatter plots with blitting
Question: I am trying to animate a scatter plot (it needs to be a scatter plot as I want
to vary the circle sizes). I have gotten the matplotlib documentation tutorial
[matplotlib documentation tutorial
](http://matplotlib.org/examples/animation/rain.html) to work in my PyQT
application, but would like to introduce blitting into the equation as my
application will likely run on slower machines where the animation may not be
as smooth.
I have had a look at many examples of animations with blitting, but none ever
use a scatter plot (they use plot or lines) and so I am really struggling to
figure out how to initialise the animation (the bits that don't get re-
rendered every time) and the ones that do. I have tried quite a few things,
and seem to be getting nowhere (and I am sure they would cause more confusion
than help!). I assume that I have missed something fairly fundamental. Has
anyone done this before? Could anyone help me out splitting the figure into
the parts that need to be initiated and the ones that get updates?
The code below works, but does not blit. Appending
blit=True
to the end of the animation call yields the following error:
RuntimeError: The animation function must return a sequence of Artist objects.
Any help would be great.
Regards
FP
import numpy as np
from PyQt4 import QtGui, uic
import sys
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
class MainWindow(QtGui.QMainWindow):
def __init__(self):
super(MainWindow, self).__init__()
self.setupAnim()
self.show()
def setupAnim(self):
self.fig = plt.figure(figsize=(7, 7))
self.ax = self.fig.add_axes([0, 0, 1, 1], frameon=False)
self.ax.set_xlim(0, 1), self.ax.set_xticks([])
self.ax.set_ylim(0, 1), self.ax.set_yticks([])
# Create rain data
self.n_drops = 50
self.rain_drops = np.zeros(self.n_drops, dtype=[('position', float, 2),
('size', float, 1),
('growth', float, 1),
('color', float, 4)])
# Initialize the raindrops in random positions and with
# random growth rates.
self.rain_drops['position'] = np.random.uniform(0, 1, (self.n_drops, 2))
self.rain_drops['growth'] = np.random.uniform(50, 200, self.n_drops)
# Construct the scatter which we will update during animation
# as the raindrops develop.
self.scat = self.ax.scatter(self.rain_drops['position'][:, 0], self.rain_drops['position'][:, 1],
s=self.rain_drops['size'], lw=0.5, edgecolors=self.rain_drops['color'],
facecolors='none')
self.animation = FuncAnimation(self.fig, self.update, interval=10)
plt.show()
def update(self, frame_number):
# Get an index which we can use to re-spawn the oldest raindrop.
self.current_index = frame_number % self.n_drops
# Make all colors more transparent as time progresses.
self.rain_drops['color'][:, 3] -= 1.0/len(self.rain_drops)
self.rain_drops['color'][:, 3] = np.clip(self.rain_drops['color'][:, 3], 0, 1)
# Make all circles bigger.
self.rain_drops['size'] += self.rain_drops['growth']
# Pick a new position for oldest rain drop, resetting its size,
# color and growth factor.
self.rain_drops['position'][self.current_index] = np.random.uniform(0, 1, 2)
self.rain_drops['size'][self.current_index] = 5
self.rain_drops['color'][self.current_index] = (0, 0, 0, 1)
self.rain_drops['growth'][self.current_index] = np.random.uniform(50, 200)
# Update the scatter collection, with the new colors, sizes and positions.
self.scat.set_edgecolors(self.rain_drops['color'])
self.scat.set_sizes(self.rain_drops['size'])
self.scat.set_offsets(self.rain_drops['position'])
if __name__== '__main__':
app = QtGui.QApplication(sys.argv)
window = MainWindow()
sys.exit(app.exec_())
Answer: You need to add `return self.scat,` at the end of the `update` method if you
want to use `FuncAnimation` with `blit=True`. See also this nice
[StackOverflow post](http://stackoverflow.com/a/9416663/4481445) that presents
an example of a scatter plot animation with matplotlib using blit.
As a side-note, if you wish to embed a mpl figure in a Qt application, it is
better to avoid using the pyplot interface and to use instead the Object
Oriented API of mpl as suggested in the [matplotlib
documentation](http://matplotlib.org/examples/user_interfaces/embedding_in_qt4.html).
This could be achieved, for example, as below, where `mplWidget` can be
embedded as any other Qt widget in your main application. Note that I renamed
the `update` method to `update_plot` to avoid conflict with the already
existing method of the `FigureCanvasQTAgg` class.
import numpy as np
from PyQt4 import QtGui
import sys
import matplotlib as mpl
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg
from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
class mplWidget(FigureCanvasQTAgg):
def __init__(self):
super(mplWidget, self).__init__(mpl.figure.Figure(figsize=(7, 7)))
self.setupAnim()
self.show()
def setupAnim(self):
ax = self.figure.add_axes([0, 0, 1, 1], frameon=False)
ax.axis([0, 1, 0, 1])
ax.axis('off')
# Create rain data
self.n_drops = 50
self.rain_drops = np.zeros(self.n_drops, dtype=[('position', float, 2),
('size', float, 1),
('growth', float, 1),
('color', float, 4)
])
# Initialize the raindrops in random positions and with
# random growth rates.
self.rain_drops['position'] = np.random.uniform(0, 1, (self.n_drops, 2))
self.rain_drops['growth'] = np.random.uniform(50, 200, self.n_drops)
# Construct the scatter which we will update during animation
# as the raindrops develop.
self.scat = ax.scatter(self.rain_drops['position'][:, 0],
self.rain_drops['position'][:, 1],
s=self.rain_drops['size'],
lw=0.5, facecolors='none',
edgecolors=self.rain_drops['color'])
self.animation = FuncAnimation(self.figure, self.update_plot,
interval=10, blit=True)
def update_plot(self, frame_number):
# Get an index which we can use to re-spawn the oldest raindrop.
indx = frame_number % self.n_drops
# Make all colors more transparent as time progresses.
self.rain_drops['color'][:, 3] -= 1./len(self.rain_drops)
self.rain_drops['color'][:, 3] = np.clip(self.rain_drops['color'][:, 3], 0, 1)
# Make all circles bigger.
self.rain_drops['size'] += self.rain_drops['growth']
# Pick a new position for oldest rain drop, resetting its size,
# color and growth factor.
self.rain_drops['position'][indx] = np.random.uniform(0, 1, 2)
self.rain_drops['size'][indx] = 5
self.rain_drops['color'][indx] = (0, 0, 0, 1)
self.rain_drops['growth'][indx] = np.random.uniform(50, 200)
# Update the scatter collection, with the new colors,
# sizes and positions.
self.scat.set_edgecolors(self.rain_drops['color'])
self.scat.set_sizes(self.rain_drops['size'])
self.scat.set_offsets(self.rain_drops['position'])
return self.scat,
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
window = mplWidget()
sys.exit(app.exec_())
|
Python 3 pandas directory search for a string in filename
Question: Hello again StackExchange!
Attempting to print all files in a directory but this time I only want to
print all of the .csv files that have the string "AMX_error" somewhere in the
filename. I have the "all .csv" working, but am missing that bit of search
logic.
import glob
import pandas as pd
path = r'C:\Users\Desktop\Experiment\'
#Following command to search for string in the filename
allFiles = glob.glob(path + "/*.csv") & (search filename 'AMX_error' = true)
for filename in allFiles:
print(filename)
#rest of code..
What is the notation to search for a string in a filename? Thanks!
Answer: Unless you have a reason for filtering the files first, you can simply check
that the string of interest is in the filename while you're in the for loop.
import glob
import pandas as pd
path = r'C:\Users\Desktop\Experiment'
#Following command to search for string in the filename
allFiles = glob.glob(path + "/*.csv")
for filename in allFiles:
if 'AMX_error' in filename:
print(filename)
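As a side note, `glob` itself can also do the filtering, since its patterns accept wildcards anywhere in the name, so the `if` test can be folded into the pattern:

    import glob

    # only .csv files whose names contain 'AMX_error'
    for filename in glob.glob(path + "/*AMX_error*.csv"):
        print(filename)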
|
Python writing (xlwt) to an existing Excel Sheet, drops charts and formatting
Question: I am using python to automate some tasks and ultimately write to an existing
spreadsheet. I am using the xlwt, xlrd and xlutils modules.
So the way I set it up is to open the file, make a copy, write to it and then
save it back to the same file. When I do the last step, all excel formatting
such as comments and charts are dropped. Is there a way around that? I think
it has something to do with excel objects.
Thank you
Sample code
import xlwt
import os
import xlrd, xlutils
from xlrd import open_workbook
from xlutils.copy import copy
style1 = xlwt.easyxf('font: name Calibri, color-index black, bold off; alignment : horizontal center', num_format_str ='###0')
script_dir = os.path.dirname(__file__)
Scn1 = os.path.join(script_dir, "\sample\Outlet.OUT")
WSM_1V = []
infile = open (Scn1, "r")
for line in infile.readlines():
WSM_1V.append(line [-10:-1])
infile.close()
Existing_xls = xlrd.open_workbook(r'\test\test2.xls', formatting_info=True, on_demand=True)
wb = xlutils.copy.copy(Existing_xls)
ws = wb.get_sheet(10)
for i,e in enumerate(WSM_1V,1):
ws.write (i,0, float(e),style1)
wb.save('test2.xls')
Answer: Could you do this with win32com? It drives Excel itself (so it requires
Windows with Excel installed), and because the file is saved by Excel rather
than rewritten from scratch, charts and comments should survive. A rough
sketch; note that COM collections are 1-based, so xlrd's `get_sheet(10)`
corresponds to `Worksheets(11)`, and `Cells(row, col)` is 1-based too:
    from win32com import client
    ...
    xl = client.Dispatch("Excel.Application")
    wb = xl.Workbooks.Open(r'\test\test2.xls')
    ws = wb.Worksheets(11)  # COM is 1-based: xlrd sheet index 10 -> 11th sheet
    for i, e in enumerate(WSM_1V, 1):
        ws.Cells(i + 1, 1).Value = float(e)  # 1-based: row i+1, column A
    wb.Save()
    wb.Close()
    xl.Quit()
    xl = None
|
Python user input file path
Question: I am working on an easy project which requires the user to input a path;
the program then goes to that path. Here is what I wrote, on OS X:
from pathlib import Path
def main():
user_input_path = Path(input())
And when I run it and type in the path, I get this:
>>> /Users/akrios/Desktop/123
SyntaxError: invalid syntax
Answer: On Python 2, the `input()` function reads the input and then tries to
evaluate it as a Python expression, which is why a bare path like yours produces
a `SyntaxError`. Use the `raw_input()` function instead; it does not evaluate
anything and returns the input as a string. (On Python 3 this distinction is
gone: there `input()` already returns the raw string.)
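A minimal sketch of the fix (assuming Python 2 with the `pathlib` backport installed):

    from pathlib import Path

    def main():
        # raw_input() returns the typed text verbatim, as a string
        user_input_path = Path(raw_input())
        print(user_input_path)

    if __name__ == '__main__':
        main()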
|
Split timestamp column CSV
Question: I have a CSV file in the following format:
name, lat, lon, alt, time
id1, 40.436047, -74.814883, 33000, 2016-01-21T08:08:00Z
I am trying to use Python to split the time into new columns so it looks like
this:
name, lat, lon, alt, year, month, day, hour, min, sec
id1, 40.436047, -74.814883, 33000, 2016,-01,-21, 08, 08, 00
I also want to set the amount of places in the float columns to always be set
to 5 decimal places.
This is the script I have so far:
import numpy as np
name,lat,lon,alt,time = np.loadtxt(
'test_track.csv',
delimiter=',',
dtype='str',
skiprows=1,
unpack = True
)
year = time[0:3]
print year
Unfortunately, instead of parsing the time into year, it prints out the first
full times instead of just the year.
Answer: You should try importing the data with pandas instead of numpy. The pandas
`read_csv` function handles dates quite nicely.
Try something like this:
    import pandas as pd
    yourData = pd.read_csv(yourData_Path, delimiter=',',
                           skipinitialspace=True,  # header has spaces after the commas
                           parse_dates=['time'], na_values=-9999)
Pandas also allows you to index by the datetimes which is quite nice :)
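To actually split the parsed timestamps into separate columns as the question asks, the `.dt` accessor works well. A sketch, assuming the load above succeeded and the parsed column is named `time` (the `float_format` argument takes care of the 5-decimal-place requirement on write):

    df = yourData
    for part in ('year', 'month', 'day', 'hour', 'minute', 'second'):
        df[part] = getattr(df['time'].dt, part)
    df = df.drop('time', axis=1)
    df.to_csv('test_track_split.csv', index=False, float_format='%.5f')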
|
Access docker bridge using docker exec
Question: First of all, I'm a total n00b in Docker, but I got into a project that is
actually running in Docker, so I've been reading about it.
My problem is that I have to inspect my development environment from a mobile
device (iOS). I tried to access it by my Docker IP, because this is basically
what I do on my computer. After a few failed attempts I noticed that I have
to access it through the _docker network bridge_ instead of the _docker host_
(the default).
I already have my docker bridge defined (I think it's the default), but I have
no idea how to run my server on this network. Can you guys help me?
A few important notes:
* I'm using Mac OS X El Capitan (10.11.1)
* The device and the Mac are on the same Wi-Fi network, and I can access localhost normally outside Docker.
* The steps I follow to run my server are:
1. cd gsat_grupo_5/docker && docker-compose -p gsat_grupo_5 up -d
2. docker exec -it gsatgrupo5_web_1 bash
3. python manage.py runserver 0.0.0.0:8000
* When I run `docker ps` my _output_ is:
[screenshot of the `docker ps` output](http://i.stack.imgur.com/d2Kxs.png)
My docker bridge output:
[
{
"Name": "bridge",
"Id": "1b3ddfda071096b16b92eb82590326fff211815e56344a5127cb0601ab4c1dc8",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {
"565caba7a4397a55471bc6025d38851b1e55ef1618ca7229fcb8f8dfcad68246": {
"Name": "gsatgrupo5_mongo_1",
"EndpointID": "471bcecbef0291d42dc2d7903f64cba6701f81e003165b6a7a17930a17164bd6",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
},
"5e4ce98bb19313272aabd6f56e8253592518d6d5c371d270d2c6331003f6c541": {
"Name": "gsatgrupo5_thumbor_1",
"EndpointID": "67f37d27e86f4a53b05da95225084bf5146261304016809c99c7965fc2414068",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"a0b62a2da367e720d3a55deb7377e517015b06ebf09d153c6355b8ff30cc9977": {
"Name": "gsatgrupo5_web_1",
"EndpointID": "52687cc252ba36825d9e6d8316d878a9aa8b198ba2603b8f1f5d6ebcb1368dad",
"MacAddress": "02:42:ac:11:00:06",
"IPv4Address": "172.17.0.6/16",
"IPv6Address": ""
},
"b3286bbbe9259648f15e363c8968b64473ec0a9dfe1b1a450571639b8fa0ef6f": {
"Name": "gsatgrupo5_mysql_1",
"EndpointID": "53290cb44cf5ed8322801d2dd0c529518f7d414b3c5d71cb6cca527767dd21bd",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
If there's some another smart approach to access my environment in my mobile
device I'm listening.
Answer: > I've to access with the docker network bridge instead of docker host(the
> default).
Unless you have a protocol that does something odd, like connecting back out
to the device from the server, normally accessing `<macip>:8000` from your
device would be enough, provided the web container publishes port 8000 to the
host (e.g. a `ports` mapping in your docker-compose.yml). Can you test the
service from any other computer?
If you do require direct access to the container network, that's a bit harder
when using a Mac...
**Docker for Mac** [doesn't support direct access to the Linux virtual
machine's bridge
networks](https://github.com/docker/docker/issues/22753#issuecomment-222943352)
where your containers run.
**Docker Toolbox** runs a VirtualBox VM with the boot2docker vm image. It
would be possible to use this, but it's a little harder to apply custom network
config to a VM that is set up and run via the `docker-machine` tools.
Plain **Virtualbox** is probably your best option, running your own VM with
Docker installed.
Add two [bridged network
interfaces](https://www.virtualbox.org/manual/ch06.html#network_bridged) to
the VM in Virtualbox. One for the VM and one for the container, so they
can both be available on your main network.
The first interface is for the host. It should pick up an address from DHCP
like normal and Docker will then be available on your normal network.
The second bridged interface [can be attached to your docker bridge and then
the containers on that bridge will be on your home
network](http://stackoverflow.com/a/35799206/1318694).
On pre-v1.10 versions of Docker,
[Pipework](https://github.com/jpetazzo/pipework) can be used to [physically
map an interface into the
container](https://github.com/jpetazzo/pipework#connect-a-container-to-a-local-physical-interface).
There is some [specific VirtualBox interface setup required for both
methods](https://github.com/jpetazzo/pipework#virtualbox) to make sure all
this works.
### Vagrant
[Vagrant](https://www.vagrantup.com/downloads.html) might make the VM setup a
bit easier and repeatable.
$ mkdir dockervm
$ cd dockervm
$ vagrant init debian/jessie64
`Vagrantfile` network config:
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.network "public_network", bridge: "en1: Wi-Fi (AirPort)"
config.vm.provider "virtualbox" do |v|
v.customize ['modifyvm', :id, '--nictype1', 'Am79C973']
v.customize ['modifyvm', :id, '--nicpromisc1', 'allow-all']
v.customize ['modifyvm', :id, '--nictype2', 'Am79C973']
v.customize ['modifyvm', :id, '--nicpromisc2', 'allow-all']
end
Note that this VM will have 3 interfaces. The first interface is for Vagrant
to use as a management address and should be left as is.
Start up
$ vagrant up
$ vagrant ssh
|
Automatically find all latitude and longitude of all my locations
Question: I have a long list of vendors I would like to find the latitude and longitude
for. I already have the addresses.
Is this something I can do using Python, or can I do it with another language?
I just started learning Python.
Answer: The process you are describing is called
"[geocoding](https://developers.google.com/maps/documentation/geocoding/start)".
Python can't do the conversion of addresses to coordinates itself, since it
needs a large database of addresses. However, online services such as Google
or Microsoft Bing can geocode addresses, and you can use Python to access these
services.
For example you can use the standard "requests" module to look up an address
>>> import requests
>>> url = 'https://maps.googleapis.com/maps/api/geocode/json'
>>> params = {'sensor': 'false', 'address': 'Mountain View, CA'}
>>> r = requests.get(url, params=params)
>>> results = r.json()['results']
>>> location = results[0]['geometry']['location']
>>> location['lat'], location['lng']
(37.3860517, -122.0838511)
The process can be simplified with the [geocoder
package](https://pypi.python.org/pypi/geocoder) (which you would have to
install yourself):
>>> import geocoder
>>> g = geocoder.google('Mountain View, CA')
>>> g.latlng
(37.3860517, -122.0838511)
(These examples are taken from the geocoder documentation.)
Google and other geocoding providers do place limits on your access to their
services. For google I believe it is [2500 per day and 50 per
second](https://developers.google.com/maps/documentation/geocoding/usage-
limits).
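Since you have a long list of vendors, here is a minimal sketch of a throttled batch lookup built on the `geocoder` calls shown above (the address list is a made-up placeholder; the sleep keeps the loop under the per-second quota):

    import time
    import geocoder

    addresses = ['Mountain View, CA', 'Palo Alto, CA']  # your vendor addresses
    coords = []
    for addr in addresses:
        g = geocoder.google(addr)
        coords.append(g.latlng)  # None when a lookup fails
        time.sleep(0.05)         # ~20 requests/second, well under the 50/s limit
    print(coords)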
In conclusion: python can't do geocoding itself, but it does help you access
services that can.
|
Parsing floating number from ping output in text file
Question: So I am writing this python program that must extract the round trip time from
a text file that contains numerous pings; what's in the text file is previewed
below:
64 bytes from a104-100-153-112.deploy.static.akamaitechnologies.com (104.100.153.112): icmp_seq=1 ttl=60 time=12.6ms
64 bytes from a104-100-153-112.deploy.static.akamaitechnologies.com (104.100.153.112): icmp_seq=2 ttl=60 time=1864ms
64 bytes from a104-100-153-112.deploy.static.akamaitechnologies.com (104.100.153.112): icmp_seq=3 ttl=60 time=107.8ms
What I want to extract from the text file is the 12.6, 1864, and the 107.8. I
used regex to do this and have the following:
import re
ping = open("pingoutput.txt")
rawping = ping.read()
roundtriptimes = re.findall(r'times=(\d+.\d+)', rawping)
roundtriptimes.sort()
print (roundtriptimes)
The issue I'm having is that I believe the numbers are being read into the
roundtriptimes list as strings so when I go to sort them they do not sort as I
would like them to.
Any idea how to modify my regex findall command to make sure it recognizes
them as numbers would help tremendously! Thanks!
Answer: I don't know of a way to do that in RegEx, but if you add the following line
before the sort, it should take care of it for you:
roundtriptimes[:] = [float(x) for x in roundtriptimes]
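As an aside, two details in the posted pattern are worth double-checking: the sample lines contain `time=`, not `times=`, and `\d+.\d+` will not match a value like `1864` that has no decimal point. A minimal end-to-end sketch with both fixed:

    import re

    with open('pingoutput.txt') as ping:
        rawping = ping.read()

    # \d+(?:\.\d+)? matches '1864' as well as '107.8'; float() converts for sorting
    times = [float(t) for t in re.findall(r'time=(\d+(?:\.\d+)?)', rawping)]
    times.sort()
    print(times)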
|
cannot import name multiarray Django Apache2
Question: I am currently running a django app on an ec2 with apache. The app works fine
when I run it using djangos runserver command. However I receive a 'cannot
import name multiarray' when I run it using apache. I have tried reinstalling
numpy and various packages many times and have still not had any luck. Below
is my .conf file for apache. I am using python3.4 and have recently updated
pip within my venv. I can provide more information if necessary.
Include conf-available/serve-cgi-bin.conf
WSGISCRIPTALIAS /site /home/ubuntu/site/initsite/wsgi.py
WSGIDaemonProcess site_group threads=15 display-name=%{GROUP} python-path='/home/ubuntu/site:/home/ubuntu/site/envsite/lib/python3.4/site-packages'
<Directory /home/ubuntu/site>
<files wsgi.py>
Require all granted
</Files>
</Directory>
<Location /site>
WSGIProcessGroup site_group
</Location>
Alias /site1/ /home/ubuntu/site/fullsite/static/css
<Directory /home/ubuntu/site/fullsite/static/css>
Require all granted
AllowOverride All
</Directory>
Traceback:
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/core/handlers/exception.py" in inner
39. response = get_response(request)
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/core/handlers/base.py" in _get_response
172. resolver_match = resolver.resolve(request.path_info)
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/urls/resolvers.py" in resolve
270. for pattern in self.url_patterns:
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/utils/functional.py" in __get__
35. res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/urls/resolvers.py" in url_patterns
313. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/utils/functional.py" in __get__
35. res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/urls/resolvers.py" in urlconf_module
306. return import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py" in import_module
37. __import__(name)
File "/home/ubuntu/site/initSite/urls.py" in <module>
21. url(r'', include('fullSite.urls')),
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/django/conf/urls/__init__.py" in include
50. urlconf_module = import_module(urlconf_module)
File "/usr/lib/python2.7/importlib/__init__.py" in import_module
37. __import__(name)
File "/home/ubuntu/site/fullSite/urls.py" in <module>
2. from . import views
File "/home/ubuntu/site/fullSite/views.py" in <module>
1. import numpy as np
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/numpy/__init__.py" in <module>
180. from . import add_newdocs
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/numpy/add_newdocs.py" in <module>
13. from numpy.lib import add_newdoc
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/numpy/lib/__init__.py" in <module>
8. from .type_check import *
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/numpy/lib/type_check.py" in <module>
11. import numpy.core.numeric as _nx
File "/home/ubuntu/site/envsite/lib/python3.4/site-packages/numpy/core/__init__.py" in <module>
14. from . import multiarray
Exception Type: ImportError at /
Exception Value: cannot import name multiarray
drwxrwxr-x 5 ubuntu ubuntu 4096 Oct 3 22:31 site
Answer: I can possibly spot three issues that may affect your setup.
Your virtualenv is placed inside the project itself; in my experience this
sometimes leads to conflicts. Virtualenvs and the project should always be
separated. May I suggest a structure such as:
    /home/ubuntu/site        - django project files
    /home/ubuntu/virtualenv/ - for the virtualenv
The second issue is that you have placed the site and the virtualenv inside a
user's home directory. Though you have set the top-level directory permissions
correctly, it is still quite likely that subfolders and files inside
`/home/ubuntu/site` or `/home/ubuntu/virtualenv/` are inaccessible to the
apache user. This is very likely to happen if you made additional changes after
the initial permissions were set.
The third issue is that you may have a corrupted numpy installation; please
uninstall it and install it again, making sure to use pip3 instead of pip. To
be more specific: `pip3 install numpy`.
Finally, one detail worth checking: the traceback mixes frames from
`/usr/lib/python2.7/importlib` with packages from a Python 3.4 virtualenv,
which suggests mod_wsgi is running under Python 2.7. A compiled extension like
numpy built for 3.4 fails with exactly this `cannot import name multiarray`
error when imported by the wrong interpreter, so verify which Python your
mod_wsgi was built against.
|
Command line arguments not being passed in sbatch
Question: I am trying to submit a job using the SLURM job scheduler and am finding that
when I use the `--export=VAR=VALUE` syntax then some of my variables are not
being passed (often the variable in the first instance of `export`). My
understanding is that I need to specify `--export=...` for each variable, e.g.
sbatch --export=build=true --export=param=p100_256 run.py
My script "run.py" looks like this:
#! /usr/bin/env python
import os,fnmatch
print(os.environ["SLURM_JOB_NAME"])
print(os.environ["SLURM_JOB_ID"])
print(fnmatch.filter(os.environ.keys(),"b*"))
print(fnmatch.filter(os.environ.keys(),"p*"))
I'd prefer to submit a python script as all of my existing scripts (used
previously with PBS) are already in python and I don't want to have to rewrite
them in shell scripts. My problem is best demonstrated through a short
example.
Firstly,
> sbatch --export=build=true --export=param=p100_256 run.py
> Submitted batch job 2249581
produces a log file with the following:
run.py
2249581
[]
['param']
If I reverse the order of the `export` flags for 'build' and 'param',
> sbatch --export=param=true --export=build=p100_256 run.py
> Submitted batch job 2249613
then the log file now looks like,
run.py
2249613
['build']
[]
which would suggest that only the final instance of the `export` flag is being
passed. If I add in a third instance of `export`,
> sbatch --export=param=1 --export=build=p100_256 --export=build_again=hello run.py
> Submitted batch job 2249674
then the log file returns,
run.py
2249674
['build_again']
[]
So does anybody know why only the final instance of `export` is being passed?
Have I got the syntax incorrect? Do I need to specify an additional flag?
Thanks!
Answer: Yes, it looks like I had the syntax incorrect. I missed in the documentation
that multiple variables should be comma-separated and specified with a single
`export` flag, e.g.
> sbatch --export=build=true,param=p100_256 run.py
So each occurrence of the `export` flag simply replaces the previous one, which
is why only the last variable was being passed.
|
How can I write a C function that takes either an int or a float?
Question: I want to create a function in C that extends Python that can take inputs of
either float or int type. So basically, I want `f(5)` and `f(5.5)` to be
acceptable inputs.
I don't think I can use `if (!PyArg_ParseTuple(args, "i", &value))` because it
takes only int or only float.
How can I make my function allow inputs that are either ints or floats?
I'm wondering if I should just take the input and put it into a PyObject and
somehow take the type of the PyObject - is that the right approach?
Answer: If you declare a C function to accept floats, the compiler won't complain if
you hand it an int. For instance, this program produces the answer 2.000000:
#include <stdio.h>
float f(float x) {
return x+1;
}
int main() {
int i=1;
printf ("%f", f(i));
}
A python module version, iorf.c:
#include <Python.h>
static PyObject *IorFError;
float f(float x) {
return x+1;
}
static PyObject *
fwrap(PyObject *self, PyObject *args) {
float in=0.0;
if (!PyArg_ParseTuple(args, "f", &in))
return NULL;
return Py_BuildValue("f", f(in));
}
static PyMethodDef IorFMethods[] = {
{"fup", fwrap, METH_VARARGS,
"Arg + 1"},
{NULL, NULL, 0, NULL} /* Sentinel */
};
PyMODINIT_FUNC
initiorf(void)
{
PyObject *m;
m = Py_InitModule("iorf", IorFMethods);
if (m == NULL)
return;
IorFError = PyErr_NewException("iorf.error", NULL, NULL);
Py_INCREF(IorFError);
PyModule_AddObject(m, "error", IorFError);
}
The setup.py:
from distutils.core import setup, Extension
module1 = Extension('iorf',
sources = ['iorf.c'])
setup (name = 'iorf',
version = '0.1',
description = 'This is a test package',
ext_modules = [module1])
An example:
03:21 $ python
Python 2.7.10 (default, Jul 30 2016, 18:31:42)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import iorf
>>> print iorf.fup(2)
3.0
>>> print iorf.fup(2.5)
3.5
|
Checking if string contains valid Python code
Question: I am writing a simple web interpreter for vk.com. I look for messages,
check whether they are valid Python code, and then I want to execute that code
and return any `stdout` output to the sender. I have implemented everything but
the code checker.
import ast
def is_valid(code):
try:
ast.parse(code)
except SyntaxError:
print('Input isnt code.')
return False
print('Code is ok.')
return True
`is_valid()` always returns `True` regardless of what comes in. I'm really
confused...
Answer: Keep in mind, the difference between a runtime error and a parser error is
significant in your case and example. The statement:
test
is _valid_ code as far as the parser is concerned. Even though this statement
will throw a `NameError` when the Python VM executes it, the parser cannot know
that the name was never assigned a value; that is only discovered at runtime,
which is why it is a runtime error and not a syntax error.
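A quick demonstration of the distinction, using the same `is_valid()` as above:

    import ast

    def is_valid(code):
        try:
            ast.parse(code)
        except SyntaxError:
            return False
        return True

    print(is_valid('test'))      # True  -- parses fine; NameError only at runtime
    print(is_valid('test ='))    # False -- genuine syntax error
    print(is_valid('1 +'))       # False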
|
Sum of difference of squares between each combination of rows of 17,000 by 300 matrix
Question: Ok, so I have a matrix with 17000 rows (examples) and 300 columns (features).
I want to compute basically the euclidian distance between each possible
combination of rows, so the sum of the squared differences for each possible
pair of rows. Obviously it's a lot and iPython, while not completely crashing
my laptop, says "(busy)" for a while and then I can't run anything anymore and
it certain seems to have given up, even though I can move my mouse and
everything.
Is there any way to make this work? Here's the function I wrote. I used numpy
everywhere I could. What I'm doing is storing the differences in a difference
matrix for each possible combination. I'm aware that the lower diagonal part
of the matrix = the upper diagonal, but that would only save 1/2 the
computation time (better than nothing, but not a game changer, I think).
**EDIT**: I just tried using `scipy.spatial.distance.pdist` but it's been
running for a good minute now with no end in sight. Is there a better way? I
should also mention that I have NaN values in there...but that's not a problem
for numpy apparently.
features = np.array(dataframe)
distances = np.zeros((17000, 17000))
def sum_diff():
for i in range(17000):
for j in range(17000):
diff = np.array(features[i] - features[j])
diff = np.square(diff)
sumsquares = np.sum(diff)
distances[i][j] = sumsquares
Answer: You could always divide your computation time by 2, noticing that d(i, i) = 0
and d(i, j) = d(j, i).
But have you had a look at `sklearn.metrics.pairwise.pairwise_distances()` (in
v0.18, see [the doc
here](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html))?
You would use it as:
from sklearn.metrics import pairwise
import numpy as np
a = np.array([[0, 0, 0], [1, 1, 1], [3, 3, 3]])
pairwise.pairwise_distances(a)
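If you would rather stay in plain numpy, the full matrix of squared distances can also be computed without any Python-level loop, using the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y. A sketch (shown with a smaller stand-in array; note that at 17000 x 17000 the result alone occupies about 2.3 GB as float64, so memory is the real constraint):

    import numpy as np

    features = np.random.rand(1000, 300)  # stand-in for the real 17000 x 300 data
    sq = np.einsum('ij,ij->i', features, features)   # row-wise squared norms
    distances = sq[:, None] + sq[None, :] - 2 * np.dot(features, features.T)
    np.maximum(distances, 0, out=distances)          # clip tiny negatives from rounding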
|